Proxmox VE (Virtual Environment) is an open-source platform for enterprise virtualization that integrates KVM for virtual machines and LXC for containers, offering a web interface and CLI tools for managing VMs and containers, high availability, live migration, and more.
Nodes are individual servers running Proxmox VE. These can be combined into clusters for centralized management and resource pooling.
Local storage refers to disks that are physically attached to the server. Proxmox VE can use local storage for VM and container disks, ISO images, and backup files.
Shared storage is accessible by all nodes within a Proxmox VE cluster. Types of shared storage include NFS, iSCSI, and Ceph. Shared storage is crucial for features like live migration and high availability.
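Shared storage can also be attached from the CLI with pvesm. A minimal sketch adding an NFS export (the storage ID, server address, and export path below are placeholders for your own environment):
pvesm add nfs shared-nfs --server 192.168.1.10 --export /export/proxmox --content images,backup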
iSCSI multipath ensures continuous availability and high performance of storage in Proxmox VE by using multiple paths to the iSCSI targets. This setup improves redundancy and load balancing by providing more than one physical path between the Proxmox server and the iSCSI storage, mitigating the risk of a single point of failure.
Install Multipath Tools: Ensure the multipath-tools package is installed on your Proxmox VE node.
apt-get update
apt-get install multipath-tools
Identify WWIDs: Find the WWID of each iSCSI device you want to use with multipath. This can be done with the scsi_id command:
/lib/udev/scsi_id -g -u -d /dev/sda
Configure Multipath: Edit the multipath configuration file /etc/multipath.conf to suit your storage setup, for example blacklisting all devices by default and allowing only specific WWIDs:
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600144f028f88a0000005037a95d0001"
    wwid "3600144f028f88a0000005037a95d0002"
}
multipaths {
    multipath {
        wwid "3600144f028f88a0000005037a95d0001"
        alias mpath0
    }
    multipath {
        wwid "3600144f028f88a0000005037a95d0002"
        alias mpath1
    }
}
Add WWIDs to Multipath: For each device, add its WWID to the multipath setup using:
multipath -a <wwid>
This ensures that multipath will manage these devices.
Restart Multipath Service: Apply the changes by restarting the multipath service.
systemctl restart multipath-tools.service
Verify Multipath Setup: Confirm that the multipath setup is working as expected by listing the multipath devices.
multipath -ll
Discover iSCSI Targets: Use the iscsiadm tool to discover available iSCSI targets on your storage network.
iscsiadm -m discovery -t sendtargets -p [TARGET_IP]
Log into the iSCSI Target: Establish a session with the discovered iSCSI target.
iscsiadm -m node -T [TARGET_IQN] -p [TARGET_IP] --login
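To have the session re-established automatically at boot, you can update the node record (same placeholders as above):
iscsiadm -m node -T [TARGET_IQN] -p [TARGET_IP] --op update -n node.startup -v automatic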
Configure Storage in Proxmox VE: After logging into the iSCSI target, Proxmox VE should recognize the new iSCSI paths. You can now add the iSCSI storage through the Proxmox web interface or CLI, ensuring that you select the option to use multipath.
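As a CLI sketch, one common pattern is to create an LVM volume group on top of the multipath device and register it as LVM storage; the volume group and storage names below, and the mpath0 alias from the earlier configuration, are illustrative:
pvcreate /dev/mapper/mpath0    # initialize the multipath device for LVM
vgcreate vg_iscsi /dev/mapper/mpath0    # create a volume group on it
pvesm add lvm iscsi-lvm --vgname vg_iscsi --content images    # register the VG as Proxmox storage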
A 64-bit processor, at least 2 GB of RAM, 16 GB of disk space, and a network interface card are required. Hardware virtualization support (Intel VT-x or AMD-V) is recommended, and is required to run KVM virtual machines.
Proxmox VE can be installed from an ISO image or on an existing Debian system.
Burn the ISO to a CD/DVD or create a bootable USB drive, then boot from it. Follow the on-screen instructions to complete the installation, including disk partitioning, setting up an administrator password, and configuring network settings.
To install Proxmox VE on Debian:
Add the Proxmox VE repository:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" | sudo tee /etc/apt/sources.list.d/pve-install-repo.list
Add the Proxmox VE repository GPG key:
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O- | sudo apt-key add -
Update your package lists:
sudo apt update
Install Proxmox VE:
sudo apt install proxmox-ve
This method integrates Proxmox VE into an existing Debian setup.
Network settings can be configured through the web interface or by editing /etc/network/interfaces.
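A minimal /etc/network/interfaces sketch for a typical single-NIC bridge setup (the NIC name eno1 and the addresses are assumptions; adjust to your environment):
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0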
To create a cluster, access the Proxmox VE web interface on the first node, navigate to the "Cluster" section, and follow the prompts to create a new cluster. Additional nodes can be added to the cluster by selecting "Add Node" from the same section and following the instructions.
Access the CLI via SSH or by using the console directly from a Proxmox VE node.
qm list    # list all VMs on the node
qm start <vmid>    # start a VM
qm stop <vmid>    # stop a VM
qm create <vmid> --name <name> --memory <memory> --net0 virtio,bridge=vmbr0    # create a VM with a VirtIO NIC on bridge vmbr0
pct list    # list all containers on the node
pct start <vmid>    # start a container
pct stop <vmid>    # stop a container
pct create <vmid> local:vztmpl/template.tar.gz --hostname <name> --memory <memory>    # create a container from a template
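The template referenced by pct create must first be downloaded to a storage; a short sketch using pveam (the template file name is an example and varies by release):
pveam update    # refresh the template index
pveam available --section system    # list downloadable system templates
pveam download local debian-12-standard_12.2-1_amd64.tar.zst    # fetch a template into the local storage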
pvecm create <clustername>    # create a new cluster
pvecm add <existing cluster node IP>    # join this node to an existing cluster
pvecm nodes    # list the nodes in the cluster
To create a VM through the Proxmox web interface, fill in the necessary information such as VM ID, name, OS type, disk size, and network configuration, then start the VM and access its console.
Prepare the Configuration:
Create the VM: Use the qm create command with the VM ID and basic options like name and memory:
qm create 100 --name "example-vm" --memory 2048 --net0 virtio,bridge=vmbr0
Attach Installation Media:
Attach an ISO as a CD-ROM and add a virtual disk:
qm set 100 --ide2 local:iso/YOUR_ISO_FILE.iso,media=cdrom
qm set 100 --virtio0 local-lvm:32
Configure Boot Options:
qm set 100 --boot c --bootdisk virtio0
Start the VM:
qm start 100
Via Serial Console
Enable Serial Console for the VM:
qm set 100 --serial0 socket
Access the VM's console from the CLI:
qm terminal 100
This command opens a terminal connection to the VM, allowing interaction with the VM's console directly from the host's command line.
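Note that a Linux guest only produces output on the serial console if its kernel is directed there; one common approach, applied inside the guest rather than on the host (an assumption about a GRUB-based guest), is:
# in the guest's /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
# then regenerate the GRUB config and reboot the guest
update-grub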
Via SPICE
From Proxmox VE Web Interface: selecting the SPICE option from a running VM's Console menu downloads a connection file that opens in a SPICE client such as Remote Viewer.
Using Remote Viewer directly:
remote-viewer spice://<PROXMOX_HOST>:<PORT>?tls-port=<TLS_PORT>
Replace <PROXMOX_HOST>, <PORT>, and <TLS_PORT> with your actual Proxmox server's address and the ports specified in the Proxmox VM's SPICE configuration.
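Note that SPICE is only offered for a VM whose display is set to a SPICE-capable adapter; a minimal sketch for VM 100:
qm set 100 --vga qxl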
Creating and managing LXC containers is also done through the Proxmox web interface. Containers are lightweight and efficient compared to VMs. Click the "Create CT" button, provide the container details, and start the container. Containers share the kernel with the host, making them faster and more resource-efficient.
Proxmox VE includes a built-in backup tool that can be configured to take scheduled backups of VMs and containers. Set up backups through the web interface by specifying the backup target, schedule, and which VMs/containers to include.
Alternatively, use the CLI command vzdump.
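For example, to back up VM 100 to the local storage using a live snapshot (the storage ID, mode, and compression below are illustrative choices):
vzdump 100 --storage local --mode snapshot --compress zstd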
Restoring a VM or container from a backup is straightforward. Navigate to the "Backup" section of the web interface, select the desired backup file, and choose "Restore". The process will overwrite the existing VM or container configuration and data with the backup contents.
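Restores can also be performed from the CLI: qmrestore handles VM backups and pct restore handles container backups. A sketch with illustrative archive names:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2024_01_01-00_00_00.tar.zst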
Utilize built-in Linux tools and Proxmox-specific commands for monitoring system performance and resource usage, such as top, htop, and Proxmox VE's pvesh command for querying Proxmox resources.
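For instance, pvesh exposes the Proxmox REST API on the command line; two illustrative queries (the node name is a placeholder):
pvesh get /cluster/resources    # cluster-wide list of VMs, containers, storage, and nodes
pvesh get /nodes/<node>/status    # CPU, memory, and uptime for one node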