Operating Without a Centralised Catalog
What "No Centralised Catalog" Really Means
When we say a telco operates "without a centralised catalog," we do not mean it has no product list. Every operator has something they call a catalog — a spreadsheet, a CRM picklist, a billing table, a PDF price book. The problem is that none of these are authoritative, and none of them control downstream behaviour.
In a legacy environment without true catalog-driven architecture, the "catalog" is typically fragmented across multiple systems, each holding its own partial, often contradictory, definition of what the operator sells and delivers. CRM knows about products one way. Billing knows about them another. OSS may not know about them at all — relying instead on manually configured templates, spreadsheets, or tribal knowledge.
The absence of a centralised, modelled catalog means:
- Multiple product definitions exist across CRM, billing, provisioning, and OSS — with no single version of truth
- No unified Product → Service → Resource decomposition — each system interprets the product differently
- Tight coupling between systems and workflows — changing a product requires coordinated changes in 3–7 systems
- Business logic embedded in people, not platforms — fulfilment depends on who is on shift, not what the catalog says
- No lifecycle governance — products cannot be cleanly versioned, deprecated, or retired across all systems simultaneously
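To make the contrast concrete, the decomposition that a centralised catalog would own can be sketched in a few dataclasses. This is a hypothetical, heavily simplified model; the entity and instance names (ProductSpec, CFS, RFS, "Fibre 200", "GPON-Port-Profile-200M") are invented for illustration and do not reflect any specific vendor or TMF schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a single authoritative decomposition chain:
# Product -> CFS -> RFS -> Resource. All names are illustrative.

@dataclass
class ResourceSpec:
    name: str                      # e.g. a port speed profile

@dataclass
class RFS:                         # Resource-Facing Service
    name: str
    resources: list[ResourceSpec] = field(default_factory=list)

@dataclass
class CFS:                         # Customer-Facing Service
    name: str
    rfs: list[RFS] = field(default_factory=list)

@dataclass
class ProductSpec:
    name: str
    cfs: list[CFS] = field(default_factory=list)

    def decompose(self) -> list[str]:
        """Walk the chain top-down: the one traversal every system shares."""
        out = []
        for c in self.cfs:
            for r in c.rfs:
                for res in r.resources:
                    out.append(f"{self.name} -> {c.name} -> {r.name} -> {res.name}")
        return out

fibre200 = ProductSpec(
    "Fibre 200",
    cfs=[CFS("CFS:Broadband-Access",
             rfs=[RFS("RFS:GPON-Internet",
                      resources=[ResourceSpec("GPON-Port-Profile-200M")])])],
)
print(fibre200.decompose())
```

In a legacy environment, each of CRM, billing, and provisioning holds its own partial, mutually inconsistent version of this structure; the point of a centralised catalog is that this traversal exists exactly once.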
Operational Symptoms of Legacy Environments
The following table maps the concrete operational symptoms that emerge when a telco lacks a centralised, catalog-driven architecture. These are not hypothetical — they are the daily reality of most legacy environments.
Operational Symptoms Without a Centralised Catalog
| Area | What Happens | Why | Downstream Impact |
|---|---|---|---|
| Product Definition | Products are defined differently in CRM, billing, and provisioning systems. Spreadsheets fill the gaps. | No single authoritative catalog exists. Each system maintains its own product model independently. | Sales sells configurations that cannot be fulfilled. Billing charges for structures that do not match what was provisioned. |
| Order Capture | Order validation is incomplete or absent. Invalid orders reach fulfilment teams. | Without catalog-driven validation rules, order forms rely on agent training and manual checks. | Order fallout increases. Rework cycles consume fulfilment capacity. Customer experience degrades at first contact. |
| Order Fulfilment | Fulfilment is manual or semi-automated with hard-coded workflows per product. | No Product → CFS → RFS decomposition exists. Fulfilment teams interpret orders using experience and documentation. | Fulfilment time is unpredictable. New products require new workflows. Errors propagate silently to activation. |
| Service Activation | Activation is script-driven or manual, with no standard decomposition from service to resource. | No Service Catalog or Resource Catalog defines the CFS → RFS → Resource chain. Engineers provision from memory or runbooks. | Activation errors are common. Rollback is manual. No standard way to verify what was actually provisioned. |
| Subscription Lifecycle | No single system tracks what each customer has, in what state, at what version. | Product Inventory does not exist as a coherent entity. Subscriptions are implied by billing records or CRM flags. | Modifications, suspensions, and terminations drift out of sync. Orphaned and ghost services accumulate. |
| Billing & Charging | Billing is based on its own product model, which may not reflect what was actually provisioned or activated. | Billing system was configured independently of provisioning. No shared catalog ensures consistency. | Revenue leakage from unbilled services. Overbilling from stale records. Disputes increase over time. |
| Change Management | Product or service changes require coordinated updates across 3–7 systems, often manually. | No catalog propagation mechanism exists. Each system must be updated individually. | Changes are slow, risky, and expensive. Partial rollouts create version inconsistencies between systems. |
| Assurance & SLA Tracking | Trouble tickets cannot be correlated to specific services, resources, or SLA commitments. | No Service Inventory maps CFS instances to customers. No Resource Inventory maps resources to services. | Root cause analysis is slow. SLA breach detection is reactive. Impact analysis during outages is guesswork. |
| Reporting & Analytics | Reports are manually assembled from multiple system exports. No consistent entity model across data sources. | Each system uses different identifiers, product names, and data structures. No common data lineage exists. | Management decisions are based on stale or inconsistent data. Regulatory reporting requires heroic manual effort. |
Subscription Lifecycle Blindness
In a catalog-driven architecture, Product Inventory is the authoritative runtime record of what each customer has subscribed to, in what state, at what version. It is the BSS System of Record for subscription lifecycle. Every modification, suspension, upgrade, downgrade, and termination is tracked as a state transition against a catalog-defined product structure.
In legacy environments, this does not exist. Instead, "what the customer has" is implied rather than explicit. It is reconstructed from billing line items, CRM flags, provisioning tickets, and — in many cases — the customer's own memory of what they were told they were getting.
Why Legacy Systems Lose Track
- No explicit subscription entity — CRM tracks accounts and contacts, billing tracks charges, but no system owns "this customer has Product X in state Y since date Z"
- Changes are side-effects, not lifecycle events — a product upgrade is implemented as "cancel old billing line, add new billing line" rather than a modelled state transition
- Provisioning and billing are disconnected — what was activated on the network may not match what billing thinks the customer has
- Time creates drift — even if systems are aligned at order time, modifications, migrations, and manual fixes cause progressive divergence
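The bullets above can be inverted into what an explicit subscription entity looks like: one record that owns product, state, version, and history, where an upgrade is a modelled lifecycle event rather than a cancel-and-re-add side effect. This is a minimal sketch under invented state names and transition rules, not a normative lifecycle model.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative transition rules; a real model would be catalog-defined.
ALLOWED = {
    ("pending", "active"), ("active", "suspended"),
    ("suspended", "active"), ("active", "terminated"),
}

@dataclass
class Subscription:
    customer_id: str
    product: str                   # catalog-defined product name
    version: int
    state: str = "pending"
    history: list[tuple[date, str]] = field(default_factory=list)

    def transition(self, new_state: str, when: date) -> None:
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append((when, new_state))

    def upgrade(self, new_product: str, when: date) -> None:
        """An upgrade is a lifecycle event on ONE entity, not
        'cancel old billing line, add new billing line'."""
        self.product = new_product
        self.version += 1
        self.history.append((when, f"upgraded to {new_product}"))

sub = Subscription("CUST-42", "Fibre 100", version=1)
sub.transition("active", date(2024, 1, 10))
sub.upgrade("Fibre 200", date(2024, 6, 1))
print(sub.product, sub.state, sub.version)
```

With such an entity, "this customer has Product X in state Y since date Z" is a lookup, not a reconstruction from billing line items and CRM flags.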
The Concrete Consequences
Subscription lifecycle blindness produces four recurring patterns that consume disproportionate operational effort:
Subscription Lifecycle Failure Patterns
| Pattern | Description | Operational Cost |
|---|---|---|
| Orphaned Services | Services remain active on the network after the customer has cancelled or churned. No system triggers deactivation because no system authoritatively tracks the subscription state. | Wasted network resources. Security risk from unmanaged active services. Inflated capacity reports. |
| Active Services, No Billing | A service is provisioned and active, but billing was never initiated — or was stopped during a failed modification. The customer receives the service for free. | Direct revenue leakage. Undetectable without manual network-to-billing reconciliation. |
| Billing Without Active Service | The customer is charged for a service that was never provisioned, was deactivated, or failed during activation. Billing continues because no system confirms service state. | Customer disputes. Regulatory risk. Credit and refund processing costs. Reputational damage. |
| Permanent Manual Reconciliation | Operations teams run periodic manual reconciliation between billing, provisioning, and CRM to detect and correct inconsistencies. This becomes a permanent operating model. | Ongoing FTE cost. Reconciliation is always retrospective — problems are found weeks or months after they occur. |
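The first three failure patterns in the table reduce to set differences between system extracts. The sketch below assumes, optimistically, that the systems share clean service identifiers; in practice that assumption rarely holds, which is why manual reconciliation is so expensive. All identifiers are invented.

```python
# Invented extracts from three systems, keyed by a shared service ID.
network_active = {"SVC-001", "SVC-002", "SVC-003"}   # active on the network
billing_active = {"SVC-002", "SVC-003", "SVC-004"}   # currently being charged
crm_active     = {"SVC-002", "SVC-004"}              # what CRM thinks exists

# Active service, no billing: direct revenue leakage
unbilled = network_active - billing_active
# Billing without active service: disputes, refunds, regulatory risk
phantom_charges = billing_active - network_active
# Orphaned services: active on the network, unknown to CRM
orphaned = network_active - crm_active

print(sorted(unbilled))
print(sorted(phantom_charges))
print(sorted(orphaned))
```

Note that this logic is trivial once the data is comparable; the "permanent manual reconciliation" pattern exists because legacy systems do not share identifiers or entity models, not because the comparison itself is hard.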
Scenario: A Customer Upgrade That Drifts
Consider a straightforward scenario: a residential customer calls to upgrade from a 100 Mbps broadband plan to a 200 Mbps plan.
Legacy Upgrade Walkthrough
Agent Takes the Request
System: CRM. The agent updates the customer's plan field from "Fibre 100" to "Fibre 200". CRM has no decomposition logic — it only stores a label.
Manual Fulfilment Ticket
System: Ticketing / Email. The agent creates a fulfilment ticket (email or ticketing system) requesting the speed change. The ticket contains free-text instructions because there is no structured order model.
Provisioning Team Interprets
System: Network Management (manual). A provisioning engineer reads the ticket, logs into the access network management system, and manually changes the speed profile on the customer's port. The engineer uses a runbook — not a catalog-driven decomposition.
Billing Update — Maybe
System: Billing (if notified). The billing team is supposed to receive a notification to change the recurring charge. In practice, this depends on the fulfilment ticket being routed correctly. If the ticket closes without billing notification, the customer continues to be billed at the old rate.
Result: Partial Update
Outcome: Inconsistent state. CRM says "Fibre 200." The network is provisioned at 200 Mbps. Billing still charges for "Fibre 100." No system detected the inconsistency because no system owns the end-to-end subscription state.
Billing Leakage & Revenue Risk
Billing accuracy in a telco is not a billing system problem — it is a service lifecycle integrity problem. The billing system can only charge correctly if it knows, with certainty, what each customer has, when it was activated, and what state it is in. When this information is unreliable, revenue leakage follows.
In a catalog-driven architecture, billing reads from Product Inventory (the SoR for subscription state) and applies pricing rules defined in the Product Catalog. The chain is deterministic: catalog defines price, order creates subscription, subscription drives billing. In a legacy environment, this chain is broken at multiple points.
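The deterministic chain can be sketched as an event-driven hookup: billing subscribes to subscription lifecycle events and prices them from the catalog, rather than waiting for a manually routed ticket. The price table, event function, and customer ID below are all invented for illustration.

```python
# Hypothetical Product Catalog pricing and billing state.
CATALOG_PRICES = {"Fibre 100": 39.0, "Fibre 200": 49.0}
billing_charges: dict[str, float] = {}

def on_subscription_event(customer_id: str, product: str) -> None:
    """Billing reacts to inventory lifecycle events; it never guesses.
    The catalog is the single source of the price."""
    billing_charges[customer_id] = CATALOG_PRICES[product]

on_subscription_event("CUST-42", "Fibre 100")   # activation
on_subscription_event("CUST-42", "Fibre 200")   # upgrade: billing follows automatically
print(billing_charges["CUST-42"])
```

In the upgrade scenario above, this is exactly the step that legacy environments leave to a ticket being routed correctly; here it cannot be skipped, because the lifecycle event and the billing update are the same transaction.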
Cause → Effect Chains
Billing Leakage: Causes and Effects
| Cause | Mechanism | Effect | Detection Difficulty |
|---|---|---|---|
| Delayed billing start | Service is activated but billing initiation depends on a separate manual step or notification that is delayed or missed. | Customer receives service for free during the gap. If the gap is never closed, the revenue is permanently lost. | High — requires cross-referencing activation timestamps against billing start dates across systems. |
| Missed recurring charges | A product modification (upgrade, add-on, location change) updates provisioning but fails to update billing. | The customer is billed at the old rate while receiving the new service. The delta is unrecoverable revenue. | Medium — detectable during reconciliation, but only if reconciliation covers the specific product type. |
| Inconsistent usage association | Usage records (CDRs, data usage, event records) cannot be reliably mapped to the correct subscription or service instance. | Usage is either unbilled, billed to the wrong customer, or billed against the wrong rate plan. | Very high — requires correlation of network-layer usage data with commercial subscription data that may not share common identifiers. |
| Stale billing records after termination | A customer terminates, but billing continues because no authoritative signal confirms termination across all charge items. | Customer disputes the charge. Refund and credit processing costs exceed the revenue. Regulatory risk in some jurisdictions. | Low — but typically detected only when the customer complains, not proactively. |
| Back-billing attempts | Leakage is detected retrospectively and the operator attempts to back-bill the customer for the unbilled period. | Customer dissatisfaction. Regulatory constraints on back-billing periods. Collection costs may exceed recovered revenue. | N/A — this is the attempted remedy, not the cause. It introduces its own costs and risks. |
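The "delayed billing start" row notes that detection requires cross-referencing activation timestamps against billing start dates. That cross-reference is simple arithmetic once both dates exist in comparable form, as this sketch with invented records shows; the difficulty in legacy environments is obtaining the two dates for the same subscription at all.

```python
from datetime import date

# Invented per-customer records from two systems.
activations   = {"CUST-42": date(2024, 3, 1), "CUST-43": date(2024, 3, 5)}
billing_start = {"CUST-42": date(2024, 3, 1), "CUST-43": date(2024, 4, 20)}

def billing_gaps(threshold_days: int = 3) -> dict[str, int]:
    """Days of unbilled service per customer, above a tolerance threshold.
    A customer with no billing record at all is flagged with -1."""
    gaps = {}
    for cust, activated in activations.items():
        started = billing_start.get(cust)
        if started is None:
            gaps[cust] = -1
            continue
        delta = (started - activated).days
        if delta > threshold_days:
            gaps[cust] = delta
    return gaps

print(billing_gaps())
```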
Lack of End-to-End Data Lineage
In a catalog-driven architecture, every entity is traceable through the decomposition chain: Product Offering → Product Specification → CFS → RFS → Resource. This chain is not just a design pattern — it is the operational foundation for impact analysis, root cause investigation, and change management.
In a legacy environment, this lineage does not exist. Products exist in CRM. Services exist (implicitly) in provisioning systems. Resources exist in network inventory. But there is no modelled relationship between them. Each layer is an island.
Questions That Cannot Be Answered
Without end-to-end data lineage, the following questions — all of which are routine operational requirements — become unanswerable or require days of manual investigation:
- "Which services are currently supporting this customer's broadband product?"
- "Which resources (ports, VLANs, IP addresses) are allocated to this customer?"
- "If this OLT fails, which customers and which products are affected?"
- "Which customers are still on the retired 'Legacy Fibre 50' product?"
- "What is the blast radius of decommissioning this network element?"
- "Which services need to be modified if we change the underlying transport technology?"
Operational Consequences
With vs Without Data Lineage
| Capability | Without Lineage (Legacy) | With Catalog-Driven Lineage |
|---|---|---|
| Impact analysis | Manual investigation across 3+ systems. Takes hours to days. Results are approximate. | Automated traversal of Product → Service → Resource chain. Results in seconds. Accurate and complete. |
| Root cause analysis | Engineers correlate alarms, tickets, and logs manually. Cause-to-customer mapping is guesswork. | Resource alarm → RFS → CFS → Product → Customer. Automated impact scope within minutes. |
| Change management | Change requests require manual audit of all affected systems. Risk assessment is based on experience, not data. | Catalog-driven impact analysis identifies every affected product, service, and resource before the change is approved. |
| Transformation & migration | Data migration is guesswork. Legacy data structures have no standard mapping to target architecture. | Catalog model provides the canonical entity structure. Migration maps legacy data to catalog-defined entities. |
| Audit & compliance | Regulatory audits require manual assembly of evidence from multiple systems. Consistency cannot be guaranteed. | Lineage provides auditable, traversable relationships from customer contract through to network resource. |
Orchestration Gaps Without COM / SOM / ROM
The three-layer order management model — Commercial Order Management (COM), Service Order Management (SOM), and Resource Order Management (ROM) — exists because a commercial order and a network activation are fundamentally different things. COM handles what the customer wants. SOM decomposes that into service-level work items. ROM translates those into resource-level actions. Each layer has its own logic, its own validation, and its own lifecycle.
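The separation of concerns can be compressed into three functions, each validating and transforming only its own layer. This is a deliberately tiny sketch; the function names, payload shapes, and the final network action are invented, and a real implementation would orchestrate asynchronous, stateful orders at each layer.

```python
def com(order: dict) -> dict:
    """Commercial layer: validates what the customer wants against the offer catalog."""
    assert order["offer"] in {"Fibre 100", "Fibre 200"}, "unknown offer"
    return {"cfs": "CFS:Broadband-Access", "bandwidth": order["offer"].split()[1]}

def som(service_order: dict) -> dict:
    """Service layer: decomposes the CFS into resource-facing work."""
    return {"rfs": "RFS:GPON-Internet", "profile": f"{service_order['bandwidth']}M"}

def rom(resource_order: dict) -> str:
    """Resource layer: translates the RFS order into a concrete network action."""
    return f"set-port-profile {resource_order['profile']}"

action = rom(som(com({"offer": "Fibre 200", "customer": "CUST-42"})))
print(action)
```

The value of the layering is visible even at this scale: commercial validation, service decomposition, and resource translation can each change independently, and the CFS and RFS logic is reusable across every product that includes them.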
When this separation does not exist, order logic collapses into whichever system is nearest to hand — most commonly the CRM — with severe operational consequences.
Where Order Logic Ends Up
CRM becomes the de facto order management system. Sales agents configure orders, trigger fulfilment, and track progress — all within CRM. The CRM was never designed for this. It has no concept of service decomposition, resource allocation, or orchestration sequencing.
- CRM workflows become brittle and product-specific — each new product requires new workflow development
- No separation between commercial validation and technical feasibility
- CRM becomes the bottleneck for all order types — including technical changes that have no commercial component
- Vendor lock-in deepens because all orchestration logic is embedded in a single platform's proprietary workflow engine
The Cost of No Orchestration Layers
Consequences of Missing COM/SOM/ROM Separation
| Consequence | Description |
|---|---|
| Inconsistent order outcomes | The same product ordered through different channels (web, call centre, retail) may be fulfilled differently because there is no shared orchestration logic. |
| No reusable decomposition logic | Each product's fulfilment logic is built from scratch. Shared service components (e.g., a CFS:VoIP used in multiple products) are re-implemented per product rather than reused. |
| Vendor lock-in via custom code | Orchestration logic embedded in a specific platform's workflow engine cannot be migrated without re-implementation. The operator is locked into the vendor, not by contract, but by accumulated custom code. |
| High cost per order | Manual steps, rework, and fallout handling inflate the cost of each order. As product complexity grows, cost per order increases rather than decreases. |
| No order lifecycle visibility | Without COM/SOM/ROM, there is no standard way to query order status across layers. "Where is my order?" requires manual investigation across systems. |
Human-Centric Operations: The Hidden Cost
Legacy environments that lack catalog-driven architecture inevitably become human-centric rather than system-centric. The gap between what systems can do and what the business needs is filled by people — experienced engineers, long-tenured operations staff, and subject-matter experts who carry the institutional knowledge that should be encoded in catalogs and orchestration rules.
This works — until it does not. Human-centric operations are fragile, unscalable, and invisible to management reporting.
What Human-Centric Operations Look Like
- Tribal knowledge — "Only Sarah knows how to provision the enterprise VPN product because she wrote the original runbook five years ago"
- Key-person dependencies — specific individuals are the de facto System of Record for how products decompose into network actions
- Manual checklists — fulfilment teams follow Word documents or wiki pages that serve as the unofficial service catalog
- Email-driven fulfilment — orders move between teams via email threads, with no system tracking, no SLA enforcement, and no audit trail
- Spreadsheet reconciliation — monthly or quarterly reconciliation between billing, CRM, and network inventory is performed manually in spreadsheets by dedicated staff
The Risks
Risks of Human-Centric Operations
| Risk | Description | Business Impact |
|---|---|---|
| Knowledge loss | When experienced staff leave, retire, or change roles, their knowledge leaves with them. Runbooks and wikis are incomplete or outdated. | Fulfilment quality drops. Error rates increase. Recovery takes months as new staff learn through trial and error. |
| Scaling limits | Throughput is constrained by available headcount, not system capacity. Each new product or market adds operational complexity that requires more people. | Growth requires linear headcount increases. Margins shrink as the business scales. |
| Operational burnout | Experienced operations staff carry unsustainable workloads because they are the only ones who can handle complex orders or resolve escalations. | Staff turnover increases in the most critical roles. Replacement is difficult because the knowledge is undocumented. |
| Fragile delivery | Service delivery depends on the correct sequence of manual steps executed by the right people. Any disruption — staff illness, reorg, process change — causes delivery failures. | SLA breaches during staff transitions. Inconsistent customer experience. High operational risk during peak periods. |
Why Legacy Environments Struggle to Transform
The challenges described in this section are not just operational problems — they are structural barriers to transformation. Modern telco capabilities — cloud-native infrastructure, NFV/CNF, MANO orchestration, API-first integration, zero-touch automation — all assume that a modelled, catalog-driven foundation exists. Without it, transformation efforts fail not because the new technology does not work, but because there is nothing coherent to connect it to.
Why Modern Capabilities Fail Without Catalogs
Transformation Barriers Without Catalog Foundation
| Capability | What It Requires | Why It Fails in Legacy |
|---|---|---|
| Cloud-native / NFV / CNF | VNF/CNF lifecycle management requires modelled resource specifications, Day-0/1/2 configuration templates, and catalog-driven instantiation. | Legacy environments have no Resource Catalog. VNFs are deployed manually using custom scripts. There is no standard way to model what a VNF needs. |
| MANO / NFVO orchestration | MANO requires standardised VNF Descriptors (VNFDs) and Network Service Descriptors (NSDs) — which are, in essence, resource catalog entries. | Without a Resource Catalog, MANO has nothing to orchestrate against. Descriptors are created ad-hoc and are not linked to the service layer. |
| Zero-touch provisioning | Automated provisioning requires complete, machine-readable decomposition rules: Product → CFS → RFS → Resource → Configuration. | Legacy decomposition is in runbooks, wikis, and people's heads. Automation requires re-engineering the entire decomposition chain before it can begin. |
| API-first integration | TMF Open APIs assume standardised entity models (Product, Service, Resource) with consistent identifiers and lifecycle states. | Legacy systems use proprietary data models with inconsistent identifiers. Exposing TMF APIs requires a translation layer that may not have reliable source data. |
| Data migration | Migration to a target architecture requires mapping legacy data to catalog-defined entity structures. | Without a catalog in the source environment, there is no canonical entity model. Migration becomes a data archaeology exercise — reverse-engineering what products, services, and resources actually exist from fragmented system data. |
The problem is structural, not a matter of tooling. No amount of new technology can compensate for the absence of a modelled, catalog-driven foundation. Transformation programmes that skip catalog modelling do not fail because the technology was wrong — they fail because there was nothing coherent to build on.
This is why "big bang" transformation programmes — which attempt to replace legacy systems with modern platforms in a single programme — frequently collapse. The new platform assumes catalog-driven data structures that the legacy environment cannot provide. Data migration stalls. Integration points have no reliable source of truth. The programme descends into an extended data reconciliation exercise that consumes budget and timeline.
Summary: Why Centralised, Catalog-Driven Architecture Is Foundational
Every problem described in this section — subscription blindness, billing leakage, data lineage gaps, orchestration failures, human-centric fragility, and transformation barriers — traces back to the same root cause: the absence of a centralised, modelled catalog that controls the Product → Service → Resource chain end-to-end.
This is not an academic argument for architectural elegance. It is a practical statement about operational control:
- Centralised catalogs are not optional — they are the foundation that every other system depends on for consistent behaviour
- COM / SOM / ROM is not over-engineering — it is the minimum separation required for repeatable, scalable order orchestration
- Orchestration depends on modelling — you cannot automate what you have not modelled, and you cannot model what you have not catalogued
- Data lineage is the foundation of control — without traceable relationships from product to resource, every operational decision is based on incomplete information
Before vs After: Legacy vs Catalog-Driven
Legacy Environment vs Catalog-Driven Architecture
| Dimension | Legacy (No Centralised Catalog) | Catalog-Driven Architecture |
|---|---|---|
| Product definition | Duplicated across CRM, billing, provisioning. No single truth. | Single authoritative Product Catalog. All systems reference the same model. |
| Order decomposition | Manual or hard-coded per product. Changes require development. | Catalog-driven decomposition: Product → CFS → RFS → Resource. New products decompose automatically. |
| Fulfilment | Human-driven. Dependent on tribal knowledge and manual checklists. | System-driven orchestration via COM → SOM → ROM. Repeatable and auditable. |
| Subscription visibility | Implied from billing records and CRM flags. No authoritative lifecycle. | Explicit Product Inventory with modelled lifecycle states and version tracking. |
| Billing accuracy | Depends on manual reconciliation. Leakage is persistent. | Billing reads from Product Inventory. Lifecycle events trigger billing updates automatically. |
| Impact analysis | Manual investigation across disconnected systems. Takes hours to days. | Automated traversal of Product → Service → Resource chain. Results in seconds. |
| Change management | High-risk, multi-system coordination. Partial rollouts common. | Catalog versioning with controlled propagation. Changes validated before deployment. |
| Time to market | Weeks to months. Each product requires cross-system development. | Days to weeks. New products are catalog configuration, not code. |
| Scalability | Linear with headcount. More products = more people. | Catalog-driven. More products = more catalog entries, same operational model. |
| Transformation readiness | No foundation for automation, NFV, MANO, or API-first integration. | Catalog model provides the entity structure, migration mapping, and integration contracts. |
Core Principles
- A centralised, catalog-driven architecture is not a luxury — it is the structural prerequisite for operational control, billing accuracy, and transformation capability
- The absence of a catalog is not a technology gap — it is a modelling gap that no amount of tooling can compensate for
- COM/SOM/ROM separation exists because commercial, service, and resource concerns are fundamentally different — collapsing them creates fragile, unscalable systems
- Data lineage from Product through Service to Resource is the foundation for impact analysis, root cause investigation, and change management
- Human-centric operations are a symptom, not a strategy — they indicate that institutional knowledge has not been encoded into the platform
- Transformation programmes that skip catalog modelling do not fail because the technology was wrong — they fail because there was nothing coherent to build on
- The catalog model should be established first, before replacing operational systems — it provides the target data model, migration mapping, and integration contract