NFVI & Telco Cloud Vendors
Section 7.7 introduced NFVI as the infrastructure layer beneath MANO — the compute, storage, and networking platform that VNFs and CNFs actually run on. This section goes deeper: a comprehensive, vendor-neutral assessment of the NFVI and telco cloud vendor landscape — who builds these platforms, what each offers, and how to choose between them.
Two Generations of NFVI
The telco NFVI landscape is defined by a generational shift. Gen 1 NFVI virtualised network functions as VMs on OpenStack or VMware. Gen 2 containerises them as CNFs on Kubernetes. Most operators today run both generations simultaneously — and will do so for 5-10 years.
Gen 1 vs Gen 2 NFVI
| Dimension | Gen 1: VM-Based NFVI | Gen 2: Cloud-Native CaaS |
|---|---|---|
| Era | 2015-2020 deployments | 2020+ deployments |
| Platform | OpenStack or VMware vSphere | Kubernetes (OpenShift, Rancher, upstream K8s) |
| Workload Type | VNFs (Virtual Network Functions) | CNFs (Cloud-Native Network Functions) |
| Networking | OVS, SR-IOV, DPDK on VMs | Multus, SR-IOV CNI, DPDK in pods, eBPF |
| Lifecycle Mgmt | ETSI MANO (SOL003/SOL005) | Helm charts, Operators, GitOps, Nephio (emerging) |
| Scaling Model | Vertical (bigger VMs) + horizontal | Horizontal-first, auto-scaling native |
| Upgrade Pattern | Blue-green VM replacement | Rolling updates, canary deployments |
| Status | Production (majority of running NFs) | Growing (5G core, vRAN, edge-first) |
Why Two Generations Coexist
The transition from Gen 1 (VM-based) to Gen 2 (cloud-native) NFVI takes 5-10 years because it depends on NF vendors re-architecting their products, not just operators upgrading infrastructure.
Typical Operator NFVI Stack
Three-Layer NFVI Stack
MANO / Orchestration Layer
MANO (NFVO + VNFM): ONAP, OSM, Ericsson Orchestrator, or vendor-specific MANO. Manages VNF/CNF lifecycle: instantiate, scale, heal, terminate. Calls the VIM/CaaS layer below via SOL003/SOL005 or the Kubernetes API.
VIM / CaaS Layer
VIM / CaaS: OpenStack (Nova, Neutron, Cinder) for VMs or Kubernetes (API server, kubelet, CNI) for containers. This resource management layer allocates compute, network, and storage to workloads — and it is where NFVI platform vendors compete.
Bare Metal / Hardware Layer
Bare Metal Infrastructure: Physical servers (HPE ProLiant, Dell PowerEdge, Supermicro), NICs (Intel, Mellanox/NVIDIA), switches (spine-leaf fabric), and storage. Hardware vendors provide the foundation but are increasingly commoditised — the differentiation is in the platform layer above.
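To make the MANO-to-VIM interaction concrete, the sketch below shows the two request bodies an NFVO typically sends over the ETSI SOL003 VNF lifecycle interface: one to create the VNF instance resource, one to instantiate it. Field names follow SOL003; the descriptor ID, instance name, and flavour values are hypothetical placeholders.

```python
# Minimal sketch of SOL003-style VNF lifecycle request bodies.
# Paths and field names follow ETSI GS NFV-SOL 003; the specific
# vnfdId, instance name, and flavourId values are invented examples.

def create_vnf_body(vnfd_id: str, name: str) -> dict:
    """Body for POST {apiRoot}/vnflcm/v1/vnf_instances (creates the identifier)."""
    return {"vnfdId": vnfd_id, "vnfInstanceName": name}

def instantiate_body(flavour_id: str) -> dict:
    """Body for POST .../vnf_instances/{vnfInstanceId}/instantiate."""
    return {"flavourId": flavour_id}

# Two-step flow: first create the VNF instance resource, then instantiate it.
create = create_vnf_body("vnfd-5gc-amf-001", "amf-prod-1")
instantiate = instantiate_body("default")
```

In a Gen 2 stack the same lifecycle intent is expressed instead as a Helm release or Kubernetes custom resource applied via GitOps, which is why the two generations need different orchestration tooling.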
Infrastructure Platform Vendors
These are the vendors that provide the VIM/CaaS platform layer — the OpenStack distributions, Kubernetes distributions, and integrated cloud platforms that operators deploy NFVI on. This is the most competitive and consequential layer of the NFVI stack.
Red Hat: RHOSP and OpenShift
- Strengths: Only vendor with production-grade offerings in both Gen 1 (RHOSP) and Gen 2 (OpenShift) — single vendor for the transition
- Strengths: OpenShift is the most widely deployed Kubernetes platform in telco — certified by all major NF vendors (Ericsson, Nokia, Samsung)
- Strengths: Telco-specific OpenShift features: real-time kernel, PTP (Precision Time Protocol), SR-IOV, DPDK, huge pages, NUMA-aware scheduling
- Strengths: Strong open-source heritage — upstream contributions to OpenStack, Kubernetes, and OKD
- Strengths: Comprehensive lifecycle management and long-term support (4+ year ELS cycles)
- Limitations: RHOSP is in maintenance mode — Red Hat is strategically shifting to OpenShift, so Gen 1 investment is declining
- Limitations: OpenShift subscription costs are significant at telco scale (hundreds to thousands of nodes)
- Limitations: OpenShift adds complexity over upstream Kubernetes — operators must learn Red Hat-specific tooling (OperatorHub, MachineConfig, etc.)
- Limitations: IBM ownership creates strategic uncertainty for some operators
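The telco-specific features listed above (huge pages, SR-IOV, NUMA-aware scheduling) surface as ordinary Kubernetes resource requests on the pod. The sketch below expresses such a manifest as a Python dict; the image name and the SR-IOV resource name `openshift.io/sriov_nic` are hypothetical placeholders, since SR-IOV resource names are configured per cluster.

```python
# Sketch of a CNF pod spec exercising the telco features above, written as a
# Python dict equivalent to the YAML manifest. Image and SR-IOV resource
# names are placeholders, not real cluster values.

def cnf_pod_spec() -> dict:
    resources = {
        "cpu": "4",                     # whole cores enable CPU pinning under
        "memory": "8Gi",                # the static CPU manager policy
        "hugepages-1Gi": "4Gi",         # pre-allocated huge pages for DPDK
        "openshift.io/sriov_nic": "1",  # one SR-IOV virtual function
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "upf-example"},
        "spec": {
            "containers": [{
                "name": "upf",
                "image": "registry.example.com/upf:1.0",  # hypothetical image
                # requests == limits gives the Guaranteed QoS class, a
                # prerequisite for NUMA-aware placement via the topology manager
                "resources": {"requests": resources, "limits": resources},
            }],
        },
    }
```

The key design point is that requests must equal limits: only Guaranteed-QoS pods are eligible for exclusive CPUs and NUMA-aligned device allocation.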
Cisco NFVIS: Edge and uCPE
NFVIS runs a lightweight Linux-based hypervisor on Cisco edge hardware, hosting VNFs like virtual routers, firewalls, SD-WAN edges, and WAN optimisers at customer premises or branch sites. It is managed by Cisco DNA Center, vManage (SD-WAN), or NSO. Think of it as "VMware for branch boxes" — a way to consolidate multiple physical appliances into a single Cisco platform running multiple VNFs.
- Strengths: Purpose-built for edge/branch VNF hosting on Cisco hardware — simple, tested, stable
- Strengths: Integrates with Cisco SD-WAN (vManage), DNA Center, and NSO for lifecycle management
- Strengths: Consolidates multiple branch appliances (router, firewall, WAN optimiser) into a single platform
- Strengths: Zero-touch provisioning for remote branch deployments
- Limitations: Cisco hardware only — not a general-purpose NFVI for any server
- Limitations: Limited to edge/branch use cases — not suitable for central data centre or core network VNFs
- Limitations: Small VNF capacity (typically 2-4 VNFs per box) — not scalable like OpenStack/K8s
- Limitations: Vendor lock-in to Cisco hardware and management ecosystem
- Limitations: VNFs hosted on NFVIS must be Cisco-certified — limited third-party VNF support
Telco-Specific Cloud Vendors
The major Network Equipment Manufacturers (NEMs) — Ericsson, Nokia, and Samsung — each offer their own NFVI/cloud platforms, optimised for hosting their own network functions. These are not general-purpose NFVI — they are tightly integrated, pre-validated environments designed to simplify deployment of that vendor's NFs.
Ericsson: CCD and CEE
- Strengths: Pre-validated with Ericsson NFs — reduces integration and certification effort
- Strengths: CCD provides Kubernetes optimised for Ericsson 5G Core CNFs with telco-grade reliability
- Strengths: Single-vendor support model: NF + cloud platform from one vendor
- Strengths: Automated lifecycle management integrated with Ericsson Orchestrator
- Limitations: Optimised for Ericsson NFs — hosting non-Ericsson VNFs/CNFs is technically possible but not the primary use case
- Limitations: Creates deep vendor lock-in: Ericsson cloud + Ericsson NFs + Ericsson orchestration
- Limitations: Smaller community than Red Hat OpenShift or upstream Kubernetes — operational knowledge is Ericsson-specific
- Limitations: CEE (OpenStack) is legacy — Ericsson is steering customers toward CCD (Kubernetes)
Hyperscalers Entering Telco
The major public cloud providers — AWS, Microsoft Azure, and Google Cloud — are all actively pursuing the telco NFVI market with edge-focused, operator-targeted offerings. These are not traditional NFVI — they bring the hyperscaler model (managed services, consumption billing, global reach) to telco infrastructure.
AWS: Wavelength, Outposts, and EKS
- Strengths: Massive ecosystem of managed services (compute, storage, AI/ML, analytics) available alongside NFVI workloads
- Strengths: AWS Wavelength provides ultra-low-latency edge compute within operator networks
- Strengths: Proven at scale with Dish/EchoStar 5G core deployment (first cloud-native 5G network on public cloud)
- Strengths: EKS Anywhere enables consistent Kubernetes across on-prem, edge, and cloud
- Limitations: Operator loses control of infrastructure — AWS manages the platform
- Limitations: Data sovereignty and regulatory concerns for core network functions on public cloud
- Limitations: Consumption-based pricing can be unpredictable at telco scale
- Limitations: NF vendor certification on AWS is still maturing compared to on-prem NFVI
NFVI Vendor Comparison Matrix
NFVI Vendors — Capability Comparison
| Vendor / Platform | Category | Gen 1 (VM) | Gen 2 (K8s) | License Model | Edge Support | Multi-Vendor NF Support |
|---|---|---|---|---|---|---|
| Red Hat (RHOSP + OpenShift) | General-Purpose | Yes (RHOSP) | Yes (OpenShift) | Subscription | Yes (SNO, MicroShift) | Strong (broadest K8s certifications) |
| VMware / Broadcom | General-Purpose | Yes (vSphere) — dominant | Partial (Tanzu) | Subscription (bundled) | Limited | Strong (broadest VM certifications) |
| Wind River (StarlingX) | Edge-Optimised | Yes (integrated OpenStack) | Yes (integrated K8s) | Open source + Commercial | Excellent (purpose-built) | Moderate |
| Canonical | General-Purpose | Yes (Charmed OpenStack) | Yes (Charmed K8s) | Open source + Commercial | Yes (MicroK8s) | Growing |
| SUSE / Rancher | General-Purpose (K8s focus) | Partial (Harvester HCI) | Yes (Rancher K8s) | Open source + Commercial | Yes (K3s, Rancher) | Growing |
| HPE / Dell | Hardware + Platform | Via partner platform | Via partner platform | Hardware + optional platform | Yes (edge servers) | N/A (platform-dependent) |
| Cisco NFVIS | Edge Only | Yes (lightweight hypervisor) | No | Commercial (Cisco HW only) | Edge-only (uCPE) | Limited (Cisco-certified VNFs) |
| Ericsson (CCD/CEE) | NEM Telco Cloud | Yes (CEE) | Yes (CCD) | Bundled with NF contracts | Via CCD | Ericsson NFs primarily |
| Nokia (CBIS/NCS) | NEM Telco Cloud | Yes (CBIS) | Yes (NCS) | Bundled with NF contracts | Via NCS | Nokia NFs primarily |
| Samsung (SCP) | NEM Telco Cloud | Limited | Yes (cloud-native-first) | Bundled with NF contracts | Yes | Samsung NFs primarily |
| AWS (Wavelength, Outposts) | Hyperscaler | Via EC2 | Yes (EKS) | Consumption-based | Yes (Wavelength) | Growing certifications |
| Microsoft Azure (Operator Nexus) | Hyperscaler | Via Azure VMs | Yes (AKS) | Consumption-based | Yes (Azure Stack Edge) | Growing + owns NFs |
| Google Cloud (GDC Edge) | Hyperscaler | Minimal | Yes (GKE/Anthos) | Consumption-based | Yes (GDC Edge) | Growing certifications |
NFVI Decision Guide by Operator Profile
| Operator Profile | Recommended NFVI Approach | Reasoning |
|---|---|---|
| Tier 1, multi-vendor NFs, strategic flexibility | Red Hat OpenShift (Gen 2) + RHOSP or VMware (Gen 1 legacy) | Broadest NF certifications, dual-stack capability, managed migration path from VMs to containers |
| Single-NEM operator (Ericsson-heavy) | Ericsson CCD/CEE for core NFs + OpenShift for shared workloads | Pre-validated Ericsson stack for core, general-purpose NFVI for multi-vendor or non-NF workloads |
| Edge-first / distributed (O-RAN, MEC) | Wind River Studio (StarlingX) or OpenShift SNO | Purpose-built for edge: low footprint, dual-stack, distributed management |
| Cost-sensitive / mid-tier operator | Canonical (Charmed OpenStack/K8s) or SUSE (Rancher) | Lower subscription costs, open-source foundation, good enough for smaller NF portfolios |
| VMware migration in progress | Red Hat OpenShift (target) + VMware vSphere (legacy, time-limited) | Migrate NFs to OpenShift as vendors re-certify; maintain VMware for remaining VNFs until migration complete |
| Cloud-first / greenfield 5G | Hyperscaler (AWS/Azure) or NEM cloud (Ericsson CCD) | Greenfield operators can bet on cloud-native-only; no Gen 1 legacy to maintain |
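The decision guide above can be encoded as a simple lookup, useful as a starting point for an internal platform-selection checklist. The profile keys below are shorthand labels invented here for illustration; the recommendations mirror the table.

```python
# Sketch encoding the decision-guide table as a lookup. The profile keys are
# shorthand labels invented for this example, not standard terminology.
RECOMMENDATIONS = {
    "tier1_multivendor":   "Red Hat OpenShift (Gen 2) + RHOSP or VMware (Gen 1 legacy)",
    "single_nem_ericsson": "Ericsson CCD/CEE for core NFs + OpenShift for shared workloads",
    "edge_first":          "Wind River Studio (StarlingX) or OpenShift SNO",
    "cost_sensitive":      "Canonical (Charmed OpenStack/K8s) or SUSE (Rancher)",
    "vmware_migration":    "Red Hat OpenShift (target) + VMware vSphere (legacy, time-limited)",
    "greenfield_5g":       "Hyperscaler (AWS/Azure) or NEM cloud (Ericsson CCD)",
}

def recommend(profile: str) -> str:
    # Unknown profiles fall back to the section's core message: there is no
    # single best platform without assessing the operator's own constraints.
    return RECOMMENDATIONS.get(
        profile,
        "No single best platform: assess NF portfolio, edge vs core needs, "
        "and lock-in tolerance",
    )
```

In practice such a lookup is only a first filter; the certification chain (NF, platform, hardware) discussed in the takeaways usually narrows the options further.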
Section 7.8 Key Takeaways
- NFVI is defined by a generational shift: Gen 1 (OpenStack/VMware VMs) coexists with Gen 2 (Kubernetes containers) for 5-10 years — strategy must cover both
- Red Hat is the strongest dual-stack vendor with RHOSP (Gen 1) and OpenShift (Gen 2) — broadest NF vendor certifications in the Kubernetes era
- VMware/Broadcom remains the largest Gen 1 installed base but the Broadcom acquisition has driven aggressive migration planning industry-wide
- Wind River (StarlingX) is purpose-built for distributed edge deployments — the go-to for far-edge, O-RAN, and MEC where full OpenStack/K8s is over-engineered
- NEM clouds (Ericsson CCD, Nokia NCS, Samsung SCP) reduce integration risk for their own NFs but create deep vendor lock-in — most Tier 1 operators use both NEM clouds and general-purpose NFVI
- Cisco NFVIS is an edge/uCPE platform, not a core telco cloud — do not conflate it with data centre NFVI
- Hyperscalers (AWS, Azure, Google) are entering telco NFVI with managed services, edge compute, and (in Microsoft's case) their own NFs — evaluate with the same lock-in scrutiny as traditional vendors
- The certification chain (NF → platform → hardware) is the primary source of practical lock-in — changing any layer requires re-validation across the stack
- There is no single "best" NFVI platform — the right choice depends on operator profile, existing NF portfolio, edge vs core requirements, and strategic flexibility goals