On Jan 29, 2008 7:27 PM, Colin Burnett <<a href="mailto:cmlburnett@gmail.com">cmlburnett@gmail.com</a>> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="Ih2E3d">On Jan 29, 2008 6:20 PM, Chris K. <<a href="mailto:lister@kulish.com">lister@kulish.com</a>> wrote:<br>> Colin,<br>><br>> My primary basis for that statement was a dual 1G P3 server with 512M of<br>
> RAM and 1024M of swap monitoring 2 physical devices (about 10-12<br>> services). The poor thing began swap usage in about 15 minutes and had<br>> finally used everything (physical + swap) in about 8 hours.<br>
><br>> Keep in mind the Zenoss instances and MySQL DB were on the same server.<br><br></div>That said, if it churns through 1/2 G in 15 minutes then it's<br>horribly, horribly, horribly bloated for **what little I think it's<br>
doing** (Wirth's law) or it's got an insane memory leak. (By insane I<br>mean never-ever frees memory. Ever.) So I am thoroughly confused as to how<br>"pinging" 2 machines and "opening sockets" to a dozen ports requires<br>
that much CPU time AND that much memory.<br><br>I was hoping for more of an explanation of what would take up a gig<br>of RAM to monitor services.</blockquote></div><br clear="all">Maybe the resources are needed to ensure low latency (i.e. keeping the working data set and application entirely in RAM and disk cache).<br>
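For scale, the kind of "open a socket to a port" service check discussed above can be sketched in a few lines. This is a hypothetical illustration, not Zenoss's actual implementation; the function name and parameters are made up for the example. The point is that the per-check state is tiny, which is why the thread finds the reported memory usage surprising.

```python
import socket

def check_tcp(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds, else False.

    Each call holds one socket and a few bytes of state; checking a
    dozen ports on two hosts needs on the order of kilobytes, not
    hundreds of megabytes.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A monitor built on this would loop over (host, port) pairs on a schedule; any sustained memory growth would have to come from bookkeeping around the checks (history, event records, caches), not the checks themselves.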
-- <br>Matthew Nuzum<br>newz2000 on freenode