This article explains how to fix node-level graphs in Proxmox VE that display the date 1969-12-31 and show no data after a VM migration.
Symptoms
- Node-level graphs (CPU Usage, Server Load, Memory Usage, Network Traffic) display the date 1969-12-31 and show no data
- VM graphs work correctly
- Graphs render correctly when accessing the affected node's own web UI directly, but fail when viewed from other cluster nodes
- Running `pvesh get nodes/<node>/rrddata --timeframe hour` from a remote node returns:

  ```
  RRD error: start (2137035316) should be less than end (1774687139)
  ```

Cause
During VM migrations (especially mass migrations via PDM), the source node's RRD files can be written with corrupted future timestamps.
When Proxmox VE was upgraded from version 8 to 9, the RRD format changed. The upgrade process preserved old-format files with a `.old` suffix under `/var/lib/rrdcached/db/pve2-node/`. Proxmox reads the `last_update` timestamp from these old files to determine the query start time. If the old file contains a future timestamp (e.g., year 2037), the start time exceeds the end time, causing the RRD error.
The corrupted files exist on remote cluster nodes, not on the affected node itself, because each node maintains its own copy of RRD data for other cluster members.
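The arithmetic behind the error message can be checked directly. A minimal sketch, assuming GNU `date` (standard on Proxmox VE hosts):

```shell
# Decode the two epoch timestamps from the error message.
start=2137035316   # last_update read from the stale .old file
end=1774687139     # the query's end time ("now")
date -u -d "@$start"   # resolves to a date in 2037
date -u -d "@$end"     # resolves to a date in 2026
# RRD rejects the query because start > end.
```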
Diagnosis
- Confirm the error from a remote cluster node:

  ```
  pvesh get nodes/<affected-node>/rrddata --timeframe hour
  ```

- Verify that graphs work when accessing the affected node's web UI directly (e.g., `https://node-hostname:8006`).
- Identify the corrupted file on the remote nodes:

  ```
  ls -la /var/lib/rrdcached/db/pve2-node/<affected-node>*
  ```

- Verify the timestamp is in the future:

  ```
  perl -e 'print scalar localtime(2137035316), "\n"'
  ```

  Output: `Sat Sep 19 23:55:16 2037`
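The checks above can be combined into a small sweep. This is a sketch, not an official tool: it assumes `rrdtool` is installed, the helper name is made up, and the default directory follows the layout described in this article:

```shell
# find_future_rrd [DIR] : flag *.old RRD files whose last_update lies in
# the future. Hypothetical helper; requires rrdtool on the node.
find_future_rrd() {
    dir="${1:-/var/lib/rrdcached/db/pve2-node}"
    now=$(date +%s)
    for f in "$dir"/*.old; do
        [ -e "$f" ] || continue                       # no .old files at all
        last=$(rrdtool last "$f" 2>/dev/null) || continue
        if [ "$last" -gt "$now" ]; then
            echo "corrupted: $f (last_update $last -> $(date -u -d "@$last"))"
        fi
    done
}

# Usage on each remote node:
# find_future_rrd /var/lib/rrdcached/db/pve2-node
```

Any file this prints is a candidate for the removal step in the Solution section.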
Solution
Delete the corrupted old-format RRD file on each remote cluster node that displays the error:

```
rm /var/lib/rrdcached/db/pve2-node/<affected-node>.old
```

Replace `<affected-node>` with the actual node name.
This must be done on every node where the graphs are broken. No service restart is required; the fix takes effect immediately.
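For clusters with many nodes, the removal can be scripted over SSH. A minimal sketch; the function name, the node names, root SSH access, and the `DRY_RUN` preview switch are all assumptions, not part of Proxmox tooling:

```shell
# purge_old_rrd AFFECTED REMOTE... : delete the stale old-format file for
# AFFECTED on each REMOTE node. Set DRY_RUN=1 to print the commands
# instead of executing them. Hypothetical helper.
purge_old_rrd() {
    affected="$1"; shift
    for remote in "$@"; do
        cmd="rm -v /var/lib/rrdcached/db/pve2-node/${affected}.old"
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "ssh root@$remote $cmd"   # preview only
        else
            ssh "root@$remote" "$cmd"
        fi
    done
}

# Preview first (node names are placeholders):
# DRY_RUN=1 purge_old_rrd pve1 pve2 pve3
```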
What does NOT fix the issue
The following actions will not resolve this problem:
- Deleting the new-format RRD file (`/var/lib/rrdcached/db/pve-node-9.0/<node>`) on the affected node
- Restarting `rrdcached` on the affected node
- Restarting `pveproxy` or `pvestatd` on any node
- Deleting VM RRD directories under `pve-vm-9.0/`
These actions address the wrong files. The issue is specifically in the old-format `.old` files on remote nodes.
Additional notes
- The `pve2-node/*.old` files are remnants from the PVE 8 to PVE 9 upgrade. They are kept for historical graph continuity but can be safely removed if corrupted.
- If storage graphs are also affected, check for corresponding files under `/var/lib/rrdcached/db/pve2-storage/`.
- Related log entries like `PullMetric.pm: Use of uninitialized value` indicate that pveproxy cannot collect metrics, often during or after migrations when RRD data is unavailable.
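To inventory leftover old-format files across both the node and storage trees, a quick sweep like the following can help. A sketch assuming GNU `find`; the helper name is hypothetical:

```shell
# list_old_rrd DIR... : print *.old files under each DIR, newest first.
# Hypothetical helper; requires GNU find for -printf.
list_old_rrd() {
    for dir in "$@"; do
        [ -d "$dir" ] || continue
        find "$dir" -name '*.old' -printf '%T@ %p\n' | sort -rn
    done
}

# Usage on a PVE node (paths per this article):
# list_old_rrd /var/lib/rrdcached/db/pve2-node /var/lib/rrdcached/db/pve2-storage
```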