As in: when I watch YouTube tutorials, I often see YouTubers with a small widget on their desktop giving them an overview of their RAM usage, security status, etc. What apps do you all use to track this?
I currently use the classic “Huh, seems slow; I’ll check basic things like disk usage and process CPU/RAM usage, then do a reboot to fix it for now” approach.
Windows Server? :)
This is me. Can’t hurt to just do a reboot
The fastest way? Probably netdata
This. If you have more servers, you can also connect them all to a single UI where you can see all the info at once, via Netdata Cloud.
Just set this up yesterday. I used a parent node and then have all my vms point to that. Took like an hour to figure it out
Hey, did you use the cloud functionality or not? I’m trying to go fully local with the parent-child setup, but so far I haven’t been able to.
The parent is still visible to the cloud portal. My understanding is that the data all resides locally, but when you log in to their cloud portal, it connects to the parent to display the information. I’m still playing with it to confirm. My parent node shows all the child nodes on the local interface, but the cloud shows them all too.
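For anyone trying the same thing, the parent-child streaming boils down to two `stream.conf` edits. The hostname and API key below are placeholders (generate your own with `uuidgen`), not anyone’s actual setup:

```ini
# Child: /etc/netdata/stream.conf -- ship metrics to the parent
[stream]
    enabled = yes
    destination = parent.lan:19999
    api key = 11111111-2222-3333-4444-555555555555

# Parent: /etc/netdata/stream.conf -- accept children using this key
[11111111-2222-3333-4444-555555555555]
    enabled = yes
```

After restarting Netdata on both sides, the children should appear under the parent’s local dashboard.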
I don’t know if I’ll keep running this. The child nodes are already seeing increased write delays since I installed the agents on them.
agreed … BY FAR the fastest. Easiest learning curve as well
I’ll look into this too. Thank you.
I know it needs a fix when my dad complains that he can’t watch TV and the rolling door doesn’t open in the morning.
Checkmk (Raw, the free version). Some setup aspects are a bit annoying (it wants to monitor every last ZFS dataset, and it takes too long to ‘ignore’ them one by one), but it does alert me to things that could cause issues, like the boot partition being almost full. I run it in a Docker container on my (primarily) file server.
I use this as well! Works well and has built-in intelligence for thresholds.
Netdata. I’ve been meaning to look into Grafana, but it always seemed way too overcomplicated and heavy for my purposes. Maybe one day, though…
I thought the same thing, but it’s actually not bad; there are some pre-built dashboards you can import for common metrics from Linux, Windows, firewalls, etc.
netdata is much better though (IMHO)
Alerts are much more important than fancy dashboards. You won’t be staring at your dashboard 24/7 and you probably won’t be staring at it when bad things happen.
Creating your alert set is not easy. Ideally, every problem you encounter should be preceded by a corresponding alert, and no alert should be a false positive (i.e., require no action). So if you either hit a problem without being alerted by your monitoring, or get an alert that requires no action, you should sit down and think carefully about what should change in your alerts.
As for tools, I recommend Prometheus + Grafana. No need for a separate Alertmanager, as many guides recommend; recent versions of Grafana have excellent built-in alerting. Don’t use the ready-made dashboards; start from scratch, since you need to understand PromQL to set everything up efficiently. Start with a simple dashboard (and alerts!) just for generic server health (node_exporter), then add exporters for your specific services, network devices (SNMP), remote hosts (blackbox), SSL certs, etc. Then write your own exporters for whatever you haven’t found :)
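To give a taste of PromQL, a typical node_exporter alert condition (the threshold and label filters here are just examples) could be a disk-space expression like:

```promql
# Fires when any real filesystem is more than 90% full
(1 - node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
   / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"}) * 100 > 90
```

The same expression works as a Grafana alert rule query or on a dashboard panel, which is part of why learning PromQL pays off.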
I was looking at Loki + Grafana. Is Prometheus a replacement for Loki in this setup, and is it preferred?
No, they serve different purposes. Loki is for logs, Prometheus is for metrics. Grafana helps to visualize data from both.
What about InfluxDB? I hear that mentioned around Grafana a lot.
InfluxDB is just storage. If you have a service that saves metrics to InfluxDB (IIRC, Proxmox can do that), Grafana can read them from there. Grafana can aggregate data from many sources: Prometheus, Loki, InfluxDB, even queries to arbitrary JSON APIs, etc.
Thank you for this. I think I need a deeper understanding of Prometheus. I’ll look into it. You are awesome
Good luck, if you get into it, you’ll be unable to stop. Perfecting your monitoring system is a kind of mania :)
One more piece of advice, for another kind of monitoring. When you are installing or configuring something on your server, it’s handy to be able to monitor its resource usage in real time. That’s why I use MobaXterm as my terminal program. It has many drawbacks, and competitors such as XShell, RoyalTS, or Tabby look better in many ways… but it has one killer feature: it shows a status bar with the current server load (CPU, RAM, disk usage, traffic) right below your SSH session, so you don’t have to switch to another window to see the effect of your actions. Saved me a lot of potential headaches.
One thing about using Prometheus alerting is that it’s one less link in the chain that can break, and you can also keep your alerting configs in source control. So it’s a little less “click-ops,” but easier to reproduce if you need to rebuild it at a later date.
When you have several Prometheus instances (HA or in different datacenters), setting up separate Alertmanagers for each of them is a good idea. But as OP is only beginning their journey into monitoring, I’d guess they will be setting up a single server with both Prometheus and Grafana on it. In this scenario a separate Alertmanager doesn’t add reliability, just complexity.
As for source control, you can write a simple script using the Grafana API to export alert rules (and dashboards as well) and push them to Git. Not ideal, sure, but it works.
Anyway, it’s never too late to go further and add Alertmanager, Loki, Mimir, and whatever else. But to flatten the learning curve, I’d recommend starting with Grafana alerts, which are much more user-friendly.
Alerts are much more important than fancy dashboards.
It depends. If you have to install a lot of stuff or manage a lot of things, a dashboard is a good idea; but if you mainly do maintenance and want something reliable, then yes, you should have alerts. For example, I don’t have much installed and don’t really care about reliability, so I do everything in the terminal. I use Arch, btw.
When you’ve got a lot of variables, especially in a distributed system, that importance leans the other way. Visualization and analytics are practically required to debug and tune large systems.
I just check the Proxmox dashboard every now and then. Honestly, if everything is working, I’m not too worried about exact RAM levels at any given moment.
Uptime Kuma and Grafana. Uptime Kuma to monitor whether a service is up and running, and Grafana to monitor the host: CPU, RAM, SSD usage, etc.
Thank you for this. I appreciate the support.
Same here. I also have some autoscaling mechanisms set up in Docker Swarm to scale certain services when the load is high.
I get ahead of it by buying extra.
Need 16 GB of RAM and 8 cores? Well, let me add 64 GB and a 12-core CPU to my cart.
Hasn’t failed me yet.
Nobody mentioned htop 🤔
Bashtop is pretty, but not scalable.
htop is a selfhosted service?
Btop
Influx/Telegraf/Grafana stack. I have all three on one server, and then I put just Telegraf on the others to send data into Influx. Works great for monitoring things like usage. You can also bring in sysstat.
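If it helps, the per-host Telegraf side is only a few lines of `telegraf.conf`. The URL and database name below are placeholders for your own setup:

```toml
# Send basic host metrics to InfluxDB (v1 output plugin)
[[outputs.influxdb]]
  urls = ["http://monitor.lan:8086"]
  database = "telegraf"

[[inputs.cpu]]
  percpu = true
  totalcpu = true
[[inputs.mem]]
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs"]
```

Drop that on each host, point Grafana at the InfluxDB data source, and you have CPU/RAM/disk graphs.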
I have some custom apps as well where each time they run I record the execution time and peak memory in a database. This lets me go back over time and see where something improved or got worse. I can get a time stamp and go look at gitea commits to see what I was messing with.
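A minimal sketch of that idea in Python, assuming SQLite as the database (the table and column names are made up; note `ru_maxrss` is KiB on Linux but bytes on macOS, and the `resource` module is Unix-only):

```python
# Record wall time and peak RSS for each run of a job in SQLite,
# so you can graph regressions against commits later.
import sqlite3
import time
import resource

def record_run(db_path, app_name, work):
    """Run `work()`, then store its duration and peak memory."""
    start = time.perf_counter()
    work()                                    # the actual job
    elapsed = time.perf_counter() - start
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KiB on Linux
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS runs "
                "(ts REAL, app TEXT, seconds REAL, peak_kb INTEGER)")
    con.execute("INSERT INTO runs VALUES (?, ?, ?, ?)",
                (time.time(), app_name, elapsed, peak_kb))
    con.commit()
    con.close()
    return elapsed, peak_kb
```

The timestamp column is what lets you line a slow run up against the Gitea commit log afterwards.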
I don’t check it all the time like a maniac, but I have a Glances Docker container running on my main server.
Glances is really nice. I’ve been using btop more recently though.
I use Zabbix. Runs fine in a relatively small VM. Easy to write plugins.
If one of my users ever complained about anything I would possibly look into it, otherwise it all works so I don’t waste life energy on that.
I use sar for historical data, my own scripts running under cron on the hosts for specific things I’m interested in keeping an eye on, and my own scripts under cron on my monitoring machines to alert me when something’s wrong. I don’t use a dashboard.
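A cron-driven check can be as small as a script like this (the threshold and path are illustrative, not what the poster actually runs); since cron mails any stdout to the account owner by default, the print itself serves as the alert:

```python
# disk_alert.py -- run from cron, e.g.:
#   */10 * * * * python3 /opt/checks/disk_alert.py
import shutil

THRESHOLD_PCT = 90  # alert when the filesystem is fuller than this

def disk_usage_pct(path="/"):
    """Return how full the filesystem holding `path` is, in percent."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

if __name__ == "__main__":
    pct = disk_usage_pct("/")
    if pct > THRESHOLD_PCT:
        print(f"WARNING: / is {pct:.1f}% full")  # any output triggers a cron mail
```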