

I want to know if there is an efficient way to monitor a process's resource consumption (CPU, memory, network bandwidth) in Linux. I want to write a daemon in C++ that does this monitoring for some given PIDs. From what I know, the classic solution is to periodically read the information from /proc, but that doesn't seem like the most efficient way: it involves many system calls. For example, to monitor the memory usage of 50 processes every second, I have to open, read, and close 50 files under /proc (that means 150 system calls) every second, not to mention the parsing involved when reading these files. Another problem is network bandwidth consumption, which cannot easily be computed for each process I want to monitor. The solution adopted by NetHogs involves pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, then for each packet the local port is determined and searched in /proc to find the corresponding process. Do you know of more efficient alternatives to these methods, or of any libraries that deal with these problems?
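For reference, here is a minimal sketch of the /proc polling approach described above. The PID list is illustrative and error handling is omitted; the point is to make the per-sample open/read/close cost visible:

```cpp
// Sketch of the classic /proc polling approach: sample the resident set
// size of each watched PID once per second. The PID list is illustrative.
// Note the cost the question complains about: one open/read/close per PID
// per sample.
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<int> pids = {1, getpid()};         // PIDs to monitor (example)
    const long page_kb = sysconf(_SC_PAGESIZE) / 1024;

    for (;;) {
        for (int pid : pids) {
            char path[64];
            std::snprintf(path, sizeof path, "/proc/%d/statm", pid);
            std::FILE *f = std::fopen(path, "r");
            if (!f) continue;                      // process may have exited
            long size = 0, resident = 0;           // statm fields are in pages
            if (std::fscanf(f, "%ld %ld", &size, &resident) == 2)
                std::printf("pid %d: rss %ld kB\n", pid, resident * page_kb);
            std::fclose(f);
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
```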
Take a look at taskstats (usr/src/linux/Documentation/accounting/taskstats.txt). Taskstats is a netlink-based interface for sending per-task and per-process statistics from the kernel to userspace. It was designed for the following benefits: efficiently provide statistics during the lifetime of a task and on its exit; a unified interface for multiple accounting subsystems; extensibility for use by future accounting patches. This interface lets you monitor CPU, memory, and I/O usage by processes of your choosing, and you only need to set up and receive messages on a single socket. It does not differentiate (for example) disk I/O versus network I/O. If that's important to you, you might go with an LD_PRELOAD interception library that tracks socket operations; assuming that you can control the startup of the programs you wish to observe and that they won't do trickery behind your back, of course.
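As a rough sketch of what querying taskstats looks like, here is a one-shot request for a single PID, assuming libnl-3 for the generic-netlink plumbing (the kernel tree's Documentation/accounting/getdelays.c is the canonical raw-socket example). Error handling is omitted, and querying other users' PIDs may require elevated privileges:

```cpp
// Query taskstats for one PID over generic netlink, using libnl-3.
// Build (assumed): g++ taskstats_query.cpp \
//     $(pkg-config --cflags --libs libnl-genl-3.0)
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <linux/taskstats.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// Called for each valid reply: unpack the nested attributes and print
// a few fields of struct taskstats.
static int handle_reply(struct nl_msg *msg, void *) {
    struct nlattr *attrs[TASKSTATS_TYPE_MAX + 1] = {};
    genlmsg_parse(nlmsg_hdr(msg), 0, attrs, TASKSTATS_TYPE_MAX, nullptr);

    struct nlattr *aggr = attrs[TASKSTATS_TYPE_AGGR_PID];
    if (!aggr)
        return NL_SKIP;

    struct nlattr *nested[TASKSTATS_TYPE_MAX + 1] = {};
    nla_parse_nested(nested, TASKSTATS_TYPE_MAX, aggr, nullptr);
    if (struct nlattr *st = nested[TASKSTATS_TYPE_STATS]) {
        auto *ts = static_cast<struct taskstats *>(nla_data(st));
        std::printf("pid=%u cpu_run_real_total=%llu ns read=%llu B write=%llu B\n",
                    ts->ac_pid,
                    (unsigned long long)ts->cpu_run_real_total,
                    (unsigned long long)ts->read_bytes,
                    (unsigned long long)ts->write_bytes);
    }
    return NL_OK;
}

int main(int argc, char **argv) {
    pid_t pid = argc > 1 ? std::atoi(argv[1]) : getpid();

    struct nl_sock *sock = nl_socket_alloc();
    genl_connect(sock);                                  // NETLINK_GENERIC socket
    int family = genl_ctrl_resolve(sock, TASKSTATS_GENL_NAME);
    nl_socket_modify_cb(sock, NL_CB_VALID, NL_CB_CUSTOM, handle_reply, nullptr);

    // One request message: TASKSTATS_CMD_GET for the chosen PID.
    struct nl_msg *req = nlmsg_alloc();
    genlmsg_put(req, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
                TASKSTATS_CMD_GET, TASKSTATS_VERSION);
    nla_put_u32(req, TASKSTATS_CMD_ATTR_PID, pid);
    nl_send_auto(sock, req);
    nlmsg_free(req);

    nl_recvmsgs_default(sock);                           // dispatches handle_reply
    nl_socket_free(sock);
    return 0;
}
```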

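And here is a minimal sketch of the LD_PRELOAD idea: a shared library that interposes on send() and recv() and counts the bytes flowing through them. The name "libnetspy" and the counters are made up for illustration; a real tracker would also wrap connect(), sendto(), sendmsg(), and read()/write() on socket descriptors:

```cpp
// Sketch of an LD_PRELOAD interception library counting bytes moved
// through send()/recv().
// Build: g++ -shared -fPIC -o libnetspy.so netspy.cpp -ldl
// Run:   LD_PRELOAD=./libnetspy.so ./program_to_observe
#ifndef _GNU_SOURCE
#define _GNU_SOURCE              // for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <atomic>
#include <cstdio>

static std::atomic<unsigned long long> bytes_sent{0};
static std::atomic<unsigned long long> bytes_received{0};

extern "C" ssize_t send(int fd, const void *buf, size_t len, int flags) {
    // Resolve the real send() once; C++11 magic statics make this thread-safe.
    static auto real_send =
        reinterpret_cast<ssize_t (*)(int, const void *, size_t, int)>(
            dlsym(RTLD_NEXT, "send"));
    ssize_t n = real_send(fd, buf, len, flags);
    if (n > 0) bytes_sent += static_cast<unsigned long long>(n);
    return n;
}

extern "C" ssize_t recv(int fd, void *buf, size_t len, int flags) {
    static auto real_recv =
        reinterpret_cast<ssize_t (*)(int, void *, size_t, int)>(
            dlsym(RTLD_NEXT, "recv"));
    ssize_t n = real_recv(fd, buf, len, flags);
    if (n > 0) bytes_received += static_cast<unsigned long long>(n);
    return n;
}

// Report totals when the observed process exits (a monitoring daemon would
// instead stream these to a collector, e.g. over a UNIX socket).
__attribute__((destructor)) static void report() {
    std::fprintf(stderr, "[netspy] sent=%llu B received=%llu B\n",
                 bytes_sent.load(), bytes_received.load());
}
```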
Another web-based system monitor for Linux, Netdata, is an incredible tool. It is easily the most granular of all the tools on the list, automatically pulling in information on hardware usage across the machine, as well as per-core CPU usage graphs and network packet tracing separated by IPv4 vs. IPv6.

I like munin: pretty much with just installation (munin-node on each host, the munin 'master' on the collecting and graphing server) and pointing it at the hosts, I got full detail on hardware sensors, CPU, disks, memory, interrupts, and lots more. It has a web interface for viewing, but not, as far as I know, for configuring.
