On Mon, 12 Dec 2011, Eric J Esslinger wrote:
> I'm not looking to monitor a massive infrastructure: 3 web sites, 2 mail
> servers (pop, imap, submission port, https webmail), 4 dns servers
> (including lookups to ensure they're not listening but not
> resolving), one inbound mx. A few network points to ping to ensure
> connectivity throughout my system. Scheduled notification windows (for
> example, during work hours I don't want my phone pinged unless
> everything is going offline. Off hours I do. Secondary notifications to
> other users if the problem persists, or in the event of many triggers.
> That sort of thing). Sensitivity settings (if web server 1 shows down
> for 5 min, that's not a big deal. Another server that doesn't respond
> to repeated queries within 1 minute is a big deal). A weekly summary of
> issues would be nice (especially the "well, it was down for a short
> bit, but we didn't notify as per settings" cases). I don't have a lot
> of money to throw at this.
Hi Eric. The feature set you are describing should be in any monitoring
system worthy of the name. I've used Nagios to good effect for the
better part of the last 12 years or so. Before that I used Big Brother,
which sucked in various ways.
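For what it's worth, the requirements you list map fairly directly onto standard Nagios configuration objects: timeperiods for notification windows, retry settings for sensitivity, and escalations for bringing in other people. A rough sketch follows; the object names, hosts, contacts, and exact thresholds are invented for illustration, so treat this as a starting point rather than a drop-in config:

```
# Notification window: this contact is only paged outside working hours.
define timeperiod {
    timeperiod_name  off-hours
    alias            Outside working hours
    monday           00:00-09:00,17:00-24:00
    tuesday          00:00-09:00,17:00-24:00
    wednesday        00:00-09:00,17:00-24:00
    thursday         00:00-09:00,17:00-24:00
    friday           00:00-09:00,17:00-24:00
    saturday         00:00-24:00
    sunday           00:00-24:00
}

define contact {
    contact_name                  eric-pager
    email                         eric@example.com
    service_notification_period   off-hours
    host_notification_period      off-hours
    service_notification_options  w,c,r
    host_notification_options     d,r
    service_notification_commands notify-service-by-email
    host_notification_commands    notify-host-by-email
}

# Sensitivity: recheck a failing web server every minute and only
# notify after 5 consecutive failures (roughly 5 minutes of downtime).
define service {
    use                  generic-service
    host_name            webserver1
    service_description  HTTP
    check_command        check_http
    max_check_attempts   5
    retry_interval       1
}

# Escalation: if the problem persists past the third notification,
# start notifying a second group as well.
define serviceescalation {
    host_name             webserver1
    service_description   HTTP
    first_notification    3
    last_notification     0
    notification_interval 30
    contact_groups        backup-admins
}
```

Nagios's availability reports cover the weekly-summary requirement, including outages that were too short to trigger a notification.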
I did an evaluation of a wide variety of FOSS monitoring systems 2-3
years ago and Nagios won at the time (again). Generally I found the
alternatives had problems that I considered to be quite serious (such as
being overly complicated, or doing checks so frequently that they loaded
the systems they were supposed to be monitoring).
I'm currently trialing Icinga, a fork of Nagios.
Puppet can be set up to manage Nagios/Icinga config, which cuts down on
the manual configuration work. Nagios/Icinga can be hooked up to
Collectd to provide performance monitoring as well as alert monitoring.
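The usual Puppet pattern for this is exported resources: every monitored node exports a `nagios_host`/`nagios_service` definition describing itself, and the monitoring server collects the lot, so adding a node to Puppet automatically adds it to monitoring. A sketch, assuming stored configs are enabled; the class names and the SSH check are invented examples:

```
# Included on every monitored node: export a host entry and a basic check.
class monitored_node {
  @@nagios_host { $::fqdn:
    ensure  => present,
    address => $::ipaddress,
    use     => 'generic-host',
  }

  @@nagios_service { "check_ssh_${::fqdn}":
    ensure              => present,
    host_name           => $::fqdn,
    service_description => 'SSH',
    check_command       => 'check_ssh',
    use                 => 'generic-service',
  }
}

# Included only on the Nagios/Icinga server: collect everything exported.
class nagios_server {
  Nagios_host    <<| |>>
  Nagios_service <<| |>>
}
```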
One concern about external monitoring services is the level of access
they need to have into your network to adequately monitor it. My
recommendation is to do a proper risk assessment of the available
options.
> We DO have detailed internal monitoring of our systems, but sometimes it
> is not entirely useful, due to the fact that there are a few 'single
> points of failure' within our network/notification system, not to
> mention that if the monitor itself goes offline it's not exactly going
> to be able to tell me about it. (And that happened once, right before
> the server decided to stop receiving mail.)
There are a couple of ways to deal with this. Some monitoring
applications can fail over to a standby server if the primary fails, but
this isn't even really necessary. You will arguably gain higher
reliability by running multiple _independent_ monitors and having them
monitor each other. I have often used this approach.
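Cross-monitoring needs no special support in Nagios: each monitor simply treats its peer as one more host to check. On monitor A you might have something like the following (host names and the address are invented; a mirror-image stanza goes on monitor B):

```
define host {
    use        generic-host
    host_name  monitor-b
    address    192.0.2.20
}

# Check that the peer's monitoring daemon is actually answering, not
# just that the box responds to ping. Here we probe the NRPE agent
# port (5666) as a stand-in for "the monitoring stack is alive".
define service {
    use                  generic-service
    host_name            monitor-b
    service_description  Monitoring daemon
    check_command        check_tcp!5666
}
```

Because the two monitors share no hardware or software state, a failure of either one is caught by the other, which is precisely the property a hot-standby pair can struggle to guarantee.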
The principal aim here is to guarantee that you are alerted to any
single failure (of a production service, a system, or a monitor).
Multiple simultaneous failures could still produce a blackspot. It is
possible to design a system that will discover multiple simultaneous
failures, but that takes more effort and resources.
Sometimes I wonder if the people developing certain systems have
operational experience at all. A system designed to fail over on
certain conditions may fail to fail over, ah, so to speak.
Email: robert () timetraveller org Linux counter ID #16440
IRC: Solver (OFTC & Freenode)
Director, Software in the Public Interest (http://spi-inc.org/)
Free & Open Source: The revolution that quietly changed the world
"One ought not to believe anything, save that which can be proven by
nature and the force of reason" -- Frederick II (26 December 1194 –
13 December 1250)