
Full Dedicated Server Backup Using Proxmox

  • Published February 24, 2026
  • (Updated February 24, 2026)

If you’ve ever asked “How do I back up my dedicated server?” you’ve probably noticed something frustrating: plenty of “backup solutions” exist, but most are really file backups—great for recovering a website folder or a database dump, not so great when the entire server needs to come back fast after a disk failure, corruption, or a bad update.

That’s the core issue with dedicated servers: they’re physical machines, and physical machines fail in physical ways. When they do, you don’t just want your files—you want your whole working server back, ideally without rebuilding everything by hand.

The good news: there is a practical path to full recovery that doesn’t require enterprise storage arrays or exotic tooling. It hinges on one simple idea—make your server portable by adding a thin virtualization layer. 

Why “file-level backup” isn’t a full dedicated server backup

File-level backup answers: “Can I restore my data?”
A full dedicated server backup answers: “Can I restore my server, the way it actually ran?”

The difference matters because a working server is more than files:

  • boot behavior (EFI/BIOS details, bootloader)
  • partition layout and system volumes
  • installed packages and services
  • configuration sprawl (firewall rules, cron jobs, system tuning, secrets, certificates)
  • application versions and dependencies

This is why many “true” bare-metal tools explicitly talk about capturing system-level components. For example, when you back up the entire computer image on Linux, the backup includes OS system data like the system partition, partition table, and bootloader—exactly the parts you’re missing when “the server won’t boot.” 

So when someone says, “I have backups,” the real question is: Backups of what? Files? Or the recoverable system?

BaCloud offers reliable dedicated servers ideally suited for Proxmox Backup Server (PBS) deployments. Our cost-efficient server line provides flexible, configurable storage options, allowing you to build the exact backup capacity your infrastructure requires. Deploy secure, scalable, and high-performance backup servers with BaCloud — optimized for virtualization and full server protection.
Get a dedicated server for PBS

The simplest “hardware-free” solution: Proxmox plus Proxmox Backup Server

Here’s the approach, step by step:

  1. Install Proxmox VE on your dedicated server (this becomes your stable “platform”).
  2. Create one VM that contains your real workload (your “server in a box”).
  3. Back up that VM to another inexpensive dedicated server running Proxmox Backup Server (PBS).
  4. If disaster hits, restore the VM onto any new Proxmox host and boot.

This is powerful because it flips the problem:

  • Instead of trying to back up “hardware + OS + everything,”
  • you back up a portable virtual machine.

Why backing up a VM is effectively a full server backup

Proxmox’s documentation is very direct: Proxmox VE backups are always full backups that include the VM/container configuration and all data. 

PBS then makes those full backups efficient. Its documentation explains that data is stored as chunks: when a new snapshot is created, the client can detect chunks that already exist on the server and send only their checksums — so the upload is incremental, while each snapshot still references all of its chunks and therefore remains a full backup for restore purposes.

That “incremental uploads, full restore points” combination is why this approach scales: you get frequent restore points without re-uploading the entire server every time. 
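The chunk-store idea behind this can be sketched in a few lines of Python. This is a simplified illustration of content-addressed deduplication, not PBS's actual implementation (PBS uses its own chunk sizes, on-disk formats, and protocols):

```python
import hashlib

CHUNK_SIZE = 4  # toy chunk size for the example; real chunks are far larger


def chunks(data: bytes):
    """Split data into fixed-size chunks."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]


class ChunkStore:
    """Content-addressed store: each unique chunk is kept once, keyed by digest."""

    def __init__(self):
        self.store = {}

    def backup(self, data: bytes):
        uploaded = 0
        manifest = []  # the snapshot: an ordered list of chunk digests
        for chunk in chunks(data):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.store:  # client sends only unknown chunks...
                self.store[digest] = chunk
                uploaded += 1
            manifest.append(digest)       # ...but the snapshot references every chunk
        return manifest, uploaded

    def restore(self, manifest):
        """Any snapshot restores fully from its own manifest alone."""
        return b"".join(self.store[d] for d in manifest)


store = ChunkStore()
snap1, sent1 = store.backup(b"AAAABBBBCCCC")  # first backup: all 3 chunks sent
snap2, sent2 = store.backup(b"AAAABBBBDDDD")  # second backup: only 1 new chunk sent
assert (sent1, sent2) == (3, 1)
assert store.restore(snap2) == b"AAAABBBBDDDD"  # still a full restore point
```

The second backup uploads only the changed chunk, yet its manifest can reconstruct the entire image — which is exactly the "incremental upload, full restore point" property.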

Backup modes in Proxmox, explained simply

Not every workload tolerates the same kind of backup. Proxmox has backup “modes” that trade off downtime vs. consistency:

  • Snapshot mode: designed to minimize downtime; Proxmox describes “snapshot-like semantics” and notes this approach doesn’t require underlying storage snapshots to exist. 
  • Stop mode: stops the VM to back it up (more disruption, but simplest consistency story). 
  • Suspend mode: described as a compatibility mode; it suspends the VM before using snapshot mode, and Proxmox notes it can cause longer downtime. 

For readers who don’t want to overthink it: snapshot mode is often the first thing people try because it reduces downtime, and stop mode is the “belt and suspenders” option when absolute consistency is more important than a brief interruption. 

Why the restore is fast and feels “less scary”

Traditional bare-metal recovery often looks like: reinstall OS → rebuild disk layout → restore config → restore data → troubleshoot mismatches.

With the Proxmox + PBS approach, your recovery becomes closer to: provision new hardware → install Proxmox → restore VM → boot.

This isn’t marketing—it follows directly from Proxmox’s documented behavior that a backup includes the VM configuration and all data (so the VM can be recreated), and PBS’s snapshot design that each snapshot is a full restore point. 

What to expect when migrating or replacing hardware

The biggest practical advantage of the VM approach is the one you highlighted: the workload is no longer tied to the original dedicated server hardware.

In a dedicated server world, hardware changes are unavoidable: providers replace nodes, storage controllers differ, NIC naming differs, disks fail. By moving “the server” into a VM, you reduce your recovery dependency on the exact original physical setup. The VM’s “shape” (virtual disks + VM configuration) becomes the stable unit.

PBS also supports secure transport and encryption options that make it safer to store backups off-host, which matters if your backup server is in another rack, another datacenter, or another provider. The PBS documentation describes that client-server communication uses TLS and that backup data can be encrypted client-side before sending; it also states that backups can be encrypted using AES‑256‑GCM. 

Resource sizing without overcomplicating it

A blog post should be honest here: virtualization has overhead. You need to reserve some capacity for the platform.

Proxmox minimum memory baseline

Proxmox’s own requirements guidance states: minimum 2 GB RAM for the OS and Proxmox VE services, plus memory for guests. 

That’s a baseline; many real servers reserve more than that so the host stays responsive during backups, storage spike events, and system tasks. 

The ZFS “10% ARC” note that many people miss

If you use ZFS on the Proxmox host, memory behavior matters. Proxmox documentation notes that ZFS uses a large ARC by default, and that for new installations starting with Proxmox VE 8.1, the ARC usage limit is set to 10% of installed physical memory, clamped to a maximum (commonly cited as 16 GiB). 

This is one reason “reserve ~10% RAM for the platform” keeps showing up in real-world guidance—it aligns with how the platform and storage caching behave in practice. 
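The documented default can be expressed as a one-line formula (assuming the commonly cited 16 GiB cap):

```python
GIB = 1024 ** 3


def default_zfs_arc_limit(installed_ram_bytes: int) -> int:
    """Default ZFS ARC usage limit on new Proxmox VE 8.1+ installations:
    10% of installed physical memory, clamped to a 16 GiB maximum."""
    return min(installed_ram_bytes // 10, 16 * GIB)


# A 64 GiB host gets a ~6.4 GiB ARC limit; a 256 GiB host is capped at 16 GiB.
assert default_zfs_arc_limit(64 * GIB) == 64 * GIB // 10
assert default_zfs_arc_limit(256 * GIB) == 16 * GIB
```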

The BaCloud “2 CPU cores + 10% RAM” rule-of-thumb

BaCloud recommends reserving 2 CPU cores and about 10% of RAM for Proxmox virtualization overhead. This is provider guidance rather than a formally published standard, so treat it as a practical rule of thumb.

In practice:

  • If you’re running “one big production VM” on a Proxmox host, leave clear headroom for the host itself.
  • BaCloud’s customer guidance is commonly summarized as: reserve ~2 CPU cores and ~10% of RAM for the host layer.
  • This aligns directionally with Proxmox’s own guidance that the host needs dedicated resources (at least 2 GB RAM minimum for its services), and with common ZFS ARC memory behavior. 
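As a quick sizing sketch using that rule of thumb (the exact reservation is a judgment call, not a published formula):

```python
def guest_budget(total_cores: int, total_ram_gb: float):
    """Resources left for the production VM after reserving host headroom:
    ~2 CPU cores, plus the larger of 10% of RAM or Proxmox's 2 GB minimum."""
    host_ram_gb = max(0.10 * total_ram_gb, 2.0)
    return total_cores - 2, total_ram_gb - host_ram_gb


# On a 16-core / 128 GB server, the VM gets 14 cores and ~115 GB of RAM.
cores, ram_gb = guest_budget(16, 128)
assert cores == 14
assert abs(ram_gb - 115.2) < 1e-9
```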

Other ways to do full dedicated-server backups

Virtualization is not the only option. It’s simply a very practical one when your goal is “restore fast on new hardware.”

Here are other full-system approaches, in plain terms:

  • Bare-metal imaging tools (commercial) — good at: straight “whole server image” recovery, often with vendor support. Tradeoff: licensing cost, recovery media/testing discipline.
  • Bare-metal imaging tools (open source) — good at: powerful recovery with low cost. Tradeoff: more hands-on setup and testing.
  • Filesystem snapshot replication — good at: efficient, fast replication of state. Tradeoff: replication can copy mistakes/deletions too; needs isolation to behave like backup.
  • Real-time block replication — good at: high-availability continuity. Tradeoff: typically not a backup by itself (no clean point-in-time history).

Commercial bare-metal imaging

Acronis describes bare-metal restore as a way to recover entire systems to the same or dissimilar hardware, and notes that you can migrate physical-to-virtual (P2V) as part of recovery workflows. 

This can be a good fit if you want a direct “image of the box” approach without changing your architecture. 

Open-source “image the machine” options

Clonezilla describes itself as a partition/disk imaging and cloning program that helps with “system deployment” and “bare metal backup and recovery.” 

Relax-and-Recover (ReaR) describes itself as a Linux bare metal disaster recovery solution and emphasizes a “setup-and-forget” mindset. Its user guide also frames it as a modular framework with workflows for common DR situations. 

These are often great for teams that want control and transparency, and are willing to test and document recovery procedures. 

Snapshot replication (ZFS / Btrfs)

If your server is built on copy-on-write filesystems, snapshot replication can be extremely efficient:

  • OpenZFS send/receive supports “full” and “incremental” send streams and is designed to replicate snapshots between pools (even across locations). 
  • Btrfs send/receive supports incremental mode using a “parent” snapshot, reducing how much has to be transferred. 

This is powerful, but remember: replication is easy to automate—and automation can replicate accidents, too. (The mitigation is isolation, retention policies, and restore testing.) 

Real-time mirroring (HA) is not the same as backup

Tools like DRBD are excellent when your priority is uptime. SUSE describes DRBD as “networked RAID 1,” mirroring data in real time with continuous replication. 

But continuous mirroring is not automatically “backup history.” If bad data is written, it can be mirrored too. That’s why serious resilience guidance emphasizes protected backups and restore testing—not only live replication. 
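The distinction can be shown with a toy model — plain Python dictionaries standing in for a live volume, its real-time mirror, and one earlier snapshot:

```python
# A live volume, a continuous mirror of it, and one point-in-time snapshot.
live = {"config": "v1", "data": "important"}
mirror = dict(live)    # real-time replication target (DRBD-style)
snapshot = dict(live)  # point-in-time backup taken earlier

del live["data"]       # the accident: a file is deleted
mirror = dict(live)    # replication faithfully mirrors the mistake

assert "data" not in mirror             # the mirror lost the file too
assert snapshot["data"] == "important"  # the snapshot still has it
```

The mirror is perfect at staying identical to the live system — which is precisely why it cannot, by itself, undo a mistake.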

The “last resort” classic: dd

The Linux dd utility copies an input file/device to an output file/device, with conversions and block sizes—useful for raw imaging, but easy to misuse and typically not the most user-friendly primary backup strategy. 
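What dd does can be illustrated with a minimal block-copy loop in Python (a sketch of the idea only — it has none of dd's conversion features or safeguards):

```python
def block_copy(src_path: str, dst_path: str, bs: int = 4 * 1024 * 1024) -> int:
    """Copy src to dst in fixed-size blocks, like `dd if=... of=... bs=4M`.
    Returns the number of bytes copied."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(bs)
            if not block:  # end of input
                break
            dst.write(block)
            copied += len(block)
    return copied
```

Against a raw device you would point `src_path` at something like /dev/sda (root required) — and that is exactly where dd's misuse risk comes from: swap the input and output paths and you overwrite the disk you meant to save.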

Conclusion

A dedicated server is hard to back up “as a box” because the box includes everything: boot, disks, OS, services, and data. File-level backups are valuable, but they often leave you rebuilding too much during a crisis.

The virtualization approach—Proxmox on the dedicated server, one VM as the real workload, and PBS on another dedicated server—solves the core problem by making the server portable. Proxmox documents that its VM backups are always full (config + data), and PBS documents the chunk-based design that enables incremental uploads while every snapshot remains a full restore point. 

If you want a blog-worthy rule to end on, it’s this:

Backups aren’t real until you’ve restored them.

That isn’t just a best practice cliché—NIST emphasizes that backups should be maintained and tested, and that at least one copy should be stored offline or otherwise protected from attacker access. 
And CISA provides a clear, memorable guideline (3‑2‑1): keep three copies, on two different media types, with one offsite. 
