Openstack vs. Proxmox: Which Virtualization Solution is Right for Me?

https://cloudificationgmbh.blogspot.com/2025/03/openstack-vs-proxmox-which.html

Many organizations today are looking for cost-effective, scalable cloud solutions (and for alternatives that let them migrate away from VMware). Two players – OpenStack and Proxmox – have emerged as leading open-source virtualization platforms, each with more than a decade of development behind it. Both offer similar features, but they cater to different use cases and scales. In this article we compare OpenStack vs. Proxmox to help you choose the best solution for your business case. Let’s take a closer look.

First things first, what are OpenStack and Proxmox?

What is OpenStack?

OpenStack is an open-source cloud computing platform designed to manage large pools of compute, storage, and networking resources. It is a collection of open-source software modules that integrate together to orchestrate and manage cloud infrastructure resources such as VMs, Volumes, Load-Balancers, Routers and more. It is a great option for enterprises of all sizes looking to build private, hybrid, or public cloud infrastructure. In fact some of the well-known public clouds are running OpenStack – for example OVH, Open Telekom Cloud and Rackspace to name a few.
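To make this more tangible, here is roughly what provisioning resources looks like with the standard OpenStack CLI; the flavor, image and network names below are illustrative placeholders, not taken from the article:

$ openstack server create --flavor m1.small --image ubuntu-22.04 --network private-net demo-vm
$ openstack volume create --size 50 demo-volume
$ openstack server add volume demo-vm demo-volume

Each command talks to a different service behind the scenes (Nova for the VM, Cinder for the volume), which is exactly the modular design described below.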

OpenStack’s key features include scalability, support for multi-region cloud deployments and strong multi-tenancy. It also offers native integration with Kubernetes (KaaS), advanced SDN networking features through Neutron, and flexible storage options (Ceph, Swift, NFS, and more than 30 vendor integrations such as NetApp, Pure, HP, Dell, Huawei, etc.), making it a versatile, fully featured cloud solution suitable for all kinds of scales and industries.

Check our previous post to learn more about OpenStack.

 

What is Proxmox?

Proxmox Virtual Environment (PVE) is an open-source virtualization management platform developed by Proxmox Server Solutions GmbH. It supports two virtualization technologies, namely KVM (Kernel-based Virtual Machine which is also used in OpenStack) for full virtualization and LXC (Linux Containers) for lightweight container-based virtualization. It has recently gained popularity thanks to its simplicity and ease of use.
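As a quick illustration of the two virtualization types, Proxmox ships the qm CLI for KVM guests and pct for LXC containers; the IDs, bridge and container template below are assumptions made for the sake of the example:

$ qm create 100 --name demo-vm --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
$ pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --hostname demo-ct --memory 512

The same actions are also available from the web GUI, which is where Proxmox’s reputation for ease of use comes from.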

Key features of Proxmox include a user-friendly web-based GUI, high availability support, integrated backup, multiple storage options (ZFS, Ceph, LVM, and NFS) and lightweight container virtualization, making it a great fit for small businesses and homelabs.

Key differences between OpenStack and Proxmox

Architecture
OpenStack: Modular (microservice-based) and highly scalable, with separate components for compute (Nova), storage (Cinder, Swift), networking (Neutron), identity (Keystone), etc. The main hypervisor is KVM, with optional support for ESXi, Xen and LXC.
Proxmox: Integrated stack with built-in KVM for VMs and LXC support for containers.

User Interface
OpenStack: Horizon or Skyline dashboards (web-based) and CLI. Third-party commercial UI and billing solutions exist (HostBill, OSIE, Fleio).
Proxmox: Simple, user-friendly web-based GUI and CLI.

API
OpenStack: Extensive REST API for automation, orchestration and integration with third-party tools such as Terraform (OpenTofu), Ansible and backup solutions.
Proxmox: RESTful API that lets users programmatically manage their virtualization environment, with endpoints for virtual machines, containers, storage and networking.

Deployment Complexity
OpenStack: High. Requires knowledge of the architecture and of its many components and services.
Proxmox: Moderate. Easier to set up, with a web-based management interface.

Base OS Layer
OpenStack: Can be deployed on any Linux operating system or in containers.
Proxmox: Based on Debian with its own kernel; no other distributions are supported.

Scalability
OpenStack: Highly scalable, designed for large-scale infrastructure with multi-region support. Regions can grow to thousands of hypervisors.
Proxmox: Limited scalability compared to OpenStack; ideal for small clusters of 3-30 nodes.

Storage
OpenStack: Supports Object, Block and Share types with a variety of backends (Ceph, Swift, LVM, NFS, NetApp, Pure, HP, etc.) and additional features including access control.
Proxmox: Supports Block and Share types via ZFS, Ceph, LVM, and NFS. Very limited support for storage vendor solutions.

Networking
OpenStack: Advanced SDN networking with Neutron; supports security groups, complex routing and multi-tenancy (ML2, OVS/OVN), VLAN, VxLAN and GRE tunnels, VPNaaS and L2GW. Load balancing with Octavia.
Proxmox: Basic SDN networking (VLAN, DHCP, FRR, VxLAN).

Multi-Tenancy
OpenStack: Yes, built-in support with Keystone Domains, Projects and Sub-projects and fully configurable RBAC (see the example after this table).
Proxmox: Very limited; requires third-party solutions such as multiportal.io.

High Availability
OpenStack: Requires third-party tools or advanced configuration (e.g. HAProxy, Pacemaker, Keepalived).
Proxmox: Built-in high availability and clustering features.

Licensing and Costs
OpenStack: Open source, with optional but often required enterprise support or custom development.
Proxmox: Open source, with optional subscriptions to receive official packages and updates.

Use Cases
OpenStack: IaaS and PaaS for public and private clouds, enterprise-grade multi-tenancy, and large-scale automation. Lots of customization options with drivers and integrations.
Proxmox: Small-business virtualization, homelabs, and test and dev environments where multi-tenancy is not required. Limited customization options.
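To make the multi-tenancy difference concrete, this is roughly how tenant separation is expressed with the OpenStack CLI; the domain, project and user names are made up for illustration:

$ openstack domain create customer-a
$ openstack project create --domain customer-a team-1
$ openstack user create --domain customer-a --password-prompt alice
$ openstack role add --project team-1 --user alice member

Quotas, networks and volumes can then be scoped per project, something Proxmox’s pools and permission paths only approximate.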

The Business Model behind OpenStack and Proxmox

Both OpenStack and Proxmox are offered as open-source software. However, their business models and target audiences differ quite a lot. Let’s look at each in more detail.

OpenStack

OpenStack follows a community-driven approach. It is governed by the OpenInfra Foundation (previously known as the OpenStack Foundation), a non-profit organization dedicated to promoting open-source cloud technologies and OpenStack specifically. The foundation itself is financed by its supporting members, with different levels of membership and contribution depending on the size of the organization.

OpenStack is licensed under the Apache 2.0 license, ensuring broad usage rights and flexibility for both end-users and companies that want to offer public or private cloud solutions with OpenStack.

OpenStack is backed by a broad ecosystem of contributors, including major cloud providers and enterprise IT vendors, fostering an open development community. The source code of OpenStack is released simultaneously to all users, ensuring equal access and transparency in development. With over 2,000 active contributors, both individuals and organizations, and regular elections for the board of directors and Project Technical Leads (PTLs), OpenStack can be considered truly open-source software.

Proxmox

Proxmox, on the other hand, is backed by Proxmox Server Solutions GmbH, a for-profit company based in Austria. It operates under the GNU Affero General Public License (AGPL) v3, which, unlike OpenStack’s Apache 2.0 license, requires users who modify and distribute the software to share their changes.

While Proxmox Virtual Environment (VE) is free and open-source, Proxmox follows a freemium model: users can run a free, community-supported version, while enterprise users can purchase paid support subscriptions that provide access to stable repositories, security updates and professional assistance. In other words, enterprise features, stability and support are gated behind paid subscriptions.

While both organizations actively promote open source, there is a potential licensing risk with Proxmox given the for-profit company behind it. In recent years we have seen a number of companies alter the licensing terms of their open-source solutions to charge money for any commercial usage. Notable examples are HashiCorp with Terraform, Elastic with Elasticsearch and Red Hat with CentOS. Such a risk is less likely with OpenStack, which is governed by a non-profit, community-driven foundation.

If you are concerned about the possibility of licensing changes, you should consider the implications of picking Proxmox over OpenStack for your organization.

OpenStack vs. Proxmox: Which One is for Me?

Choose OpenStack if…

…you need enterprise-grade multi-tenancy, with separate domains, projects and fine-grained access management.
…you plan to scale beyond a few clusters, up to multi-region infrastructure with thousands of hypervisors.
…you need LBaaS, KaaS or other PaaS features from the platform.
…you need broad storage and networking vendor integrations and customization options.

Choose Proxmox if…

…you need an easy-to-use virtualization platform with easy setup.
…your infrastructure is limited to a single cluster or a few small clusters.
…your team/organisation is limited to a single tenant without the need for advanced permission separation or access management.
…you want built-in VM backup, high availability, and snapshot management out of the box.
…you do not need LBaaS, KaaS or other PaaS features from the platform.

Can Proxmox replace OpenStack?

Not entirely.

While Proxmox is a great virtualization tool, uses similar technologies under the hood (KVM, Ceph, OVS, like OpenStack) and can serve as a simple VMware alternative, it lacks the advanced SDN and LBaaS features, Kubernetes orchestration, and extensive storage driver support found in OpenStack. OpenStack’s breadth comes from its numerous integrations and drivers, supporting all major vendors. The OpenStack upstream development process with Zuul CI allows third-party vendors to integrate their own tests, ensuring full driver compatibility as changes land. With Proxmox the vendor and feature options are much more limited, which is why enterprises tend to pick OpenStack over Proxmox.

OpenStack’s complexity comes with its benefits: OpenStack provides rich

Configuring Ceph pg_autoscale with Rook for OpenStack Deployments: A Guide to Balanced Data Distribution

https://cloudificationgmbh.blogspot.com/2025/02/configuring-ceph-pgautoscale-with-rook.html

At Cloudification, we deploy private clouds based on OpenStack, leveraging Rook-Ceph as a highly available storage solution. During the installation process, one of the recurring issues we faced was properly configuring the Ceph cluster to ensure balanced data distribution across OSDs (Object Storage Daemons).

The Problem: PG Imbalance Alerts

Right after a fresh installation, we started receiving PGImbalance alerts from Prometheus, indicating poorly distributed data across hosts. PG stands for Placement Group, an abstraction below the storage pool: each individual object in the cluster is assigned to a PG. Since the number of objects in a cluster can run into the hundreds of millions, PGs allow Ceph to operate and rebalance without having to address each object individually. Let’s have a look at the Ceph Placement Groups in the cluster:

$ ceph pg dump
...
OSD_STAT USED AVAIL USED_RAW TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
23 33 GiB 1.7 TiB 33 GiB 1.7 TiB [0,2,5,7,8,10,16,18,19,22] 4 0
4 113 MiB 1.7 TiB 113 MiB 1.7 TiB [0,1,2,3,5,6,8,9,11,12,14,15,16,17,20,23] 2 1
1 49 GiB 1.7 TiB 49 GiB 1.7 TiB [0,2,5,6,9,10,12,13,15,16,17,18,21,22] 26 19
19 23 GiB 1.7 TiB 23 GiB 1.7 TiB [1,2,3,5,10,16,18,20,21,22] 15 17
22 19 GiB 1.7 TiB 19 GiB 1.7 TiB [4,5,6,11,15,17,19,20,21,23] 11 0
21 226 GiB 1.5 TiB 226 GiB 1.7 TiB [1,3,9,10,13,16,17,18,20,22] 108 17
20 117 MiB 1.7 TiB 117 MiB 1.7 TiB [0,4,7,12,14,17,18,19,21,22] 5 0
18 258 GiB 1.5 TiB 258 GiB 1.7 TiB [1,5,8,10,11,14,16,17,19,21,22,23] 122 19
17 34 GiB 1.7 TiB 34 GiB 1.7 TiB [0,1,2,3,5,6,8,9,11,12,13,15,16,18,20,21,22,23] 6 4
16 33 GiB 1.7 TiB 33 GiB 1.7 TiB [0,5,7,8,11,12,13,15,17,20] 23 2
15 109 MiB 1.7 TiB 109 MiB 1.7 TiB [2,10,12,14,16,18,19,21,22,23] 4 0
0 109 MiB 1.7 TiB 109 MiB 1.7 TiB [1,2,7,8,12,13,14,17,20,23] 5 1
13 111 MiB 1.7 TiB 111 MiB 1.7 TiB [0,1,2,3,8,9,12,14,15,17,19,21] 7 2
2 116 MiB 1.7 TiB 116 MiB 1.7 TiB [1,3,8,11,15,17,18,19,20,22] 3 0
3 33 GiB 1.7 TiB 33 GiB 1.7 TiB [2,4,5,7,8,9,10,11,16,23] 12 0
5 52 GiB 1.7 TiB 52 GiB 1.7 TiB [1,4,6,11,12,13,14,16,17,18,19,20,21,22,23] 16 2
6 23 GiB 1.7 TiB 23 GiB 1.7 TiB [4,5,7,9,10,11,15,19,20,22] 4 2
7 793 MiB 1.7 TiB 793 MiB 1.7 TiB [0,1,3,4,6,8,10,12,13,14,15,16,18,19,21,23] 4 20
8 34 GiB 1.7 TiB 34 GiB 1.7 TiB [0,5,7,9,12,13,14,18,20,22] 5 2
9 60 GiB 1.7 TiB 60 GiB 1.7 TiB [0,1,3,8,10,12,13,16,17,21] 5 2
10 216 GiB 1.5 TiB 216 GiB 1.7 TiB [1,3,4,5,6,7,9,11,12,14,15,16,18,19,21,22] 101 18
11 101 MiB 1.7 TiB 101 MiB 1.7 TiB [1,2,5,10,12,16,18,19,22,23] 4 1
12 54 GiB 1.7 TiB 54 GiB 1.7 TiB [0,1,3,5,6,7,8,9,10,11,13,14,18,20,21] 16 34
14 25 GiB 1.7 TiB 25 GiB 1.7 TiB [4,5,6,7,10,12,13,15,19,20,22] 5 2
sum 1.1 TiB 41 TiB 1.1 TiB 42 TiB

Let’s check how many PGs are configured for pools:

bash-5.1$ for pool in $(ceph osd lspools | awk '{print $2}') ; do echo "pool: $pool - pg_num: $(ceph osd pool get $pool pg_num)" ; done

pool: .rgw.root - pg_num: pg_num: 1
pool: replicapool - pg_num: pg_num: 1
pool: .mgr - pg_num: pg_num: 1
pool: rgw-data-pool - pg_num: pg_num: 1
pool: s3-store.rgw.log - pg_num: pg_num: 1
pool: s3-store.rgw.control - pg_num: pg_num: 1
pool: s3-store.rgw.buckets.index - pg_num: pg_num: 1
pool: s3-store.rgw.otp - pg_num: pg_num: 1
pool: s3-store.rgw.buckets.non-ec - pg_num: pg_num: 1
pool: s3-store.rgw.meta - pg_num: pg_num: 1
pool: rgw-meta-pool - pg_num: pg_num: 1
pool: s3-store.rgw.buckets.data - pg_num: pg_num: 1
pool: cephfs-metadata - pg_num: pg_num: 1
pool: cephfs-data0 - pg_num: pg_num: 1
pool: cinder.volumes.hdd - pg_num: pg_num: 1
pool: cinder.backups - pg_num: pg_num: 1
pool: glance.images - pg_num: pg_num: 1
pool: nova.ephemeral - pg_num: pg_num: 1

This directly correlates with imbalanced OSD utilization, as Ceph was only creating 1 Placement Group per pool, leading to inefficient data distribution.
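Another quick way to see the same skew (not shown in the original post) is the per-OSD summary, where the PGS column mirrors the PG_SUM values from the dump above:

$ ceph osd df tree    # %USE and PGS per OSD make the imbalance easy to spot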

To diagnose the issue, we used the rados df command to identify the pools consuming the most space and adjusted their pg_num accordingly. What you need to calculate this number is described in the Ceph placement groups documentation.
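As a rough sketch of that calculation (a rule of thumb from the Ceph documentation, not a value taken from this cluster): target on the order of 100 PGs per OSD, so the total across all pools is approximately (number of OSDs × 100) / replication factor, rounded to a nearby power of two, and then split between pools according to their expected share of data. For the 24-OSD, 3-way replicated cluster above:

$ echo $(( 24 * 100 / 3 ))    # ≈ 800 PGs in total across all pools, typically rounded to 512 or 1024

Pools expected to hold most of the data (here the Cinder volumes pool) get the largest share, which is why it receives 256 PGs below while the others get 16.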

If we manually reconfigure the current number of PGs for several pools, for example Cinder, Nova, Glance and CephFS:

$ ceph osd pool set cinder.volumes.nvme pg_num 256
$ ceph osd pool set nova.ephemeral pg_num 16
$ ceph osd pool set glance.images pg_num 16
$ ceph osd pool set cephfs-data0 pg_num 16

This triggers rebalancing, resulting in more balanced usage and the resolution of the alert:

bash-5.1$ ceph -s
  cluster:
    id:     a6ab9446-2c0d-42f4-b009-514e989fd4a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum b,d,f (age 3d)
    mgr: b(active, since 3d), standbys: a
    mds: 1/1 daemons up, 1 hot standby
    osd: 24 osds: 24 up (since 3d), 24 in (since 3d)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   17 pools, 331 pgs
    objects: 101.81k objects, 371 GiB
    usage:   1.2 TiB used, 41 TiB / 42 TiB avail
    pgs:     331 active+clean

  io:
    client: 7.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s rd, 166 op/s wr

...

OSD_STAT USED AVAIL USED_RAW TOTAL HB_PEERS PG_SUM PRIMARY_PG_SUM
23 68 GiB 1.7 TiB 68 GiB 1.7 TiB [0,1,2,3,4,5,6,10,11,12,13,14,16,17,18,19,22] 37 12
4 33 GiB 1.7 TiB 33 GiB 1.7 TiB [0,1,2,3,5,6,7,8,9,10,11,12,13,14,15,16,17,20,22,23] 34 13
1 37 GiB 1.7 TiB 37 GiB 1.7 TiB [0,2,3,5,6,7,9,10,11,12,13,14,15,16,17,18,20,21,22] 42 13
19 39 GiB 1.7 TiB 39 GiB 1.7 TiB [0,2,3,6,7,9,10,11,12,13,15,16,17,18,20,22,23] 41 12
22 36 GiB 1.7 TiB 36 GiB 1.7 TiB [0,1,2,3,4,5,6,7,8,9,10,11,12,15,16,19,21,23] 36 11
21 62 GiB 1.7 TiB 62 GiB 1.7 TiB [0,1,2,3,5,6,8,9,10,13,14,15,16,17,18,19,20,22] 37 9
20 35 GiB 1.7 TiB 35 GiB 1.7 TiB [0,1,4,6,7,8,10,12,14,15,16,17,18,19,21] 39 10
18 67 GiB 1.7 TiB 67 GiB 1.7 TiB [1,2,5,7,8,9,10,11,13,14,16,17,19,20,21,22,23] 37 12
17 65 GiB 1.7 TiB 65 GiB 1.7 TiB [0,1,2,3,4,5,6,8,9,11,12,13,15,16,18,19,20,21,22,23] 34 14
16 35 GiB 1.7 TiB 35 GiB 1.7 TiB [0,1,4,5,7,8,9,10,11,12,13,15,17,18,19,20,21,22,23] 39 13
15 39 GiB 1.7 TiB 39 GiB 1.7 TiB [1,2,6,10,12,13,14,16,18,19,21,23] 41 5
0 34 GiB 1.7 TiB 34 GiB 1.7 TiB [1,2,4,5,7,8,9,10,11,12,13,14,15,16,17,19,20,21,22,23] 37 13
13 31 GiB 1.7 TiB 31 GiB 1.7 TiB [0,1,2,3,4,5,6,7,8,9,12,14,15,16,17,18,19,20,21,22,23] 36 16
2 33 GiB 1.7 TiB 33 GiB 1.7 TiB [0,1,3,6,8,11,13,14,15,16,17,18,19,20,21,22] 34 11
3 33 GiB 1.7 TiB 33 GiB 1.7 TiB [2,4,5,7,8,9,10,12,13,15,16,17,19,21,22,23] 33 12
5 64 GiB 1.7 TiB 64 GiB 1.7 TiB [0,1,3,4,6,8,10,11,12,13,14,15,16,17,18,19,20,21,22,23] 37 9
6 54 GiB 1.7 TiB 54 GiB 1.7 TiB [1,4,5,7,8,9,10,11,12,13,14,15,16,19,20,21,22,23] 32 9
7 38 GiB 1.7 TiB 38 GiB 1.7 TiB [0,1,3,4,6,8,10,11,12,13,14,15,16,17,18,19,20,22,23] 39 11
8 65 GiB 1.7 TiB 65 GiB 1.7 TiB [0,3,5,6,7,9,10,12,13,14,15,17,18,20,22] 33 14
9 95 GiB 1.7 TiB 95 GiB 1.7 TiB [0,1,3,6,8,10,11,12,13,14,15,16,17,18,19,20,21,23] 36 11
10 62 GiB 1.7 TiB 62 GiB 1.7 TiB [0,3,4,5,6,7,8,9,11,14,15,16,17,18,19,20,21,22,23] 36 14
11 35 GiB 1.7 TiB 35 GiB 1.7 TiB [0,1,2,3,5,8,9,10,12,14,15,16,18,19,20,22,23] 37 14
12 58 GiB 1.7 TiB 58 GiB 1.7 TiB [0,1,3,4,5,6,7,8,9,11,13,14,15,17,18,19,20,21,23] 35 13
14 56 GiB 1.7 TiB 56 GiB 1.7 TiB [1,2,4,5,6,7,8,9,10,12,13,15,18,19,20,21,22,23] 34 15
sum 1.1 TiB 41 TiB 1.1 TiB 42 TiB

Why did this happen?

By default, Ceph might not create the optimal number of PGs for each pool, resulting in data skew and uneven utilization of storage devices. Manually setting the pg_num for each pool is not a sustainable solution, as data volume is expected to grow over time.

That means a more automatic approach is needed, which is where Ceph’s pg_autoscaler comes in.
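A minimal sketch of that automated approach, assuming the pool names from this cluster: check what the autoscaler would recommend, then enable it per pool or as the default for new pools:

$ ceph osd pool autoscale-status
$ ceph osd pool set cinder.volumes.hdd pg_autoscale_mode on
$ ceph config set global osd_pool_default_pg_autoscale_mode on

With Rook, the same setting can usually be carried in the pool CRDs (e.g. the CephBlockPool parameters), so it survives reconciliation instead of being applied by hand.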
