Fulcrum Framework
2.1 Fulcrum Network
2.1.1 Purpose
A distributed network within the federation is necessary to ensure better service delivery and increase system reliability. The federation network will not be used to provide public-facing services; rather, it will connect the CSPs that share resources. Its core purpose is to enable continuous, automatic, and reliable replication of database and storage systems. Later on, it may also prove appropriate to use these interconnections to connect load balancing systems to Proxmox clusters, even those distributed internationally.
2.1.2 Requirements and considerations
Over time, CSPs managing significant amounts of traffic will need to interconnect with each other to ensure high service performance.
We plan to install a network node at each such CSP, preferably virtual, to manage the activation of interconnections in a manner appropriate to the required capacity and complexity.
The traffic passing through these nodes will be predominantly private traffic that must be conveyed point-to-point. The network mesh, which must be self-provisioning, will primarily operate at L2. However, this should not exclude the possibility of elevating private connections to L3 to allow efficient communication between database and storage clusters as well as K8s clusters. Specifically, as the network expands, K8s clusters might begin to announce their own private IP addresses via BGP to other clusters, requiring a dedicated network layer.
2.1.3 Architectural overview
The following diagram summarizes all the functional components of the system, the physical topology, the logical topology, and the addressing plan:
2.1.4 Addressing Plan
2.1.4.1 Private
The entire private addressing plan for Fulcrum is based on the IPv4 10.0.0.0/8 network and Autonomous System 64512. Each CSP (Cloud Service Provider) is assigned a private subnet of the type 10.0.Y.0/24, which the CSP uses to advertise itself to the rest of the platform. This /24 subnet may be mapped via 1:1 NAT to the subnet the CSP has chosen for its own Proxmox cluster, based on its internal addressing needs.
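As a concrete illustration of the 1:1 NAT mapping, the sketch below uses invented values (CSP number Y = 7 and an internal 192.168.50.0/24 cluster subnet) to show how each federation address corresponds to exactly one internal address, with the host part preserved:

```python
import ipaddress

# Hypothetical example: CSP number Y = 7 is assigned 10.0.7.0/24 on the
# federation side and runs its Proxmox cluster on 192.168.50.0/24.
FEDERATION_SUBNET = ipaddress.ip_network("10.0.7.0/24")
INTERNAL_SUBNET = ipaddress.ip_network("192.168.50.0/24")

def to_internal(federation_ip: str) -> ipaddress.IPv4Address:
    """Translate a federation address to its internal counterpart (1:1 NAT).

    The host part is preserved: 10.0.7.42 <-> 192.168.50.42.
    """
    ip = ipaddress.ip_address(federation_ip)
    if ip not in FEDERATION_SUBNET:
        raise ValueError(f"{ip} is not in {FEDERATION_SUBNET}")
    offset = int(ip) - int(FEDERATION_SUBNET.network_address)
    return INTERNAL_SUBNET.network_address + offset

print(to_internal("10.0.7.42"))  # -> 192.168.50.42
```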
The subnet used by the CEM for the cluster hosting its core infrastructure is always of the type 10.0.Y.0/24.
The subnet used for Fulcrum’s backbone (loopbacks, point-to-point links, etc.) is 10.255.0.0/16, segmented appropriately for various needs. Each CSP is assigned a /32 address for the loopback interface of its local device, and a /30 subnet is allocated for each point-to-point network between CSP and CEM.
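The exact segmentation of 10.255.0.0/16 is left to operational needs. Purely as an illustration, the following sketch assumes one hypothetical /24 carved out for loopbacks and another for point-to-point links:

```python
import ipaddress

# Hypothetical segmentation of the 10.255.0.0/16 backbone range.
LOOPBACK_RANGE = ipaddress.ip_network("10.255.0.0/24")  # one /32 per CSP device
P2P_RANGE = ipaddress.ip_network("10.255.1.0/24")       # one /30 per CSP-CEM link

def loopback_for_csp(n: int) -> ipaddress.IPv4Network:
    """The n-th CSP gets the n-th /32 of the loopback range."""
    return list(LOOPBACK_RANGE.subnets(new_prefix=32))[n]

def p2p_for_link(n: int) -> ipaddress.IPv4Network:
    """The n-th CSP-CEM link gets the n-th /30 (two usable addresses)."""
    return list(P2P_RANGE.subnets(new_prefix=30))[n]

print(loopback_for_csp(7))  # -> 10.255.0.7/32
print(p2p_for_link(7))      # -> 10.255.1.28/30
```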
2.1.4.2 Public
Each CSP reserves a block of public IPv4 addresses, part of its own address space, to allow public reachability of VMs instantiated on its cluster. These VMs are therefore directly exposed to the Internet by the CSP: they are protected by the Proxmox firewall and cannot access Fulcrum’s internal network.
2.1.5 Network Devices
The network components used in Fulcrum are essentially routers. We distinguish between physical and virtual routers depending on their topological role, and we identify different roles among them based on the type of traffic they route.
A fundamental prerequisite for the proper functioning of the entire infrastructure is the presence of dedicated point-to-point networks for connectivity between CSPs and the CEM: these may be Layer 2 (L2) connections, provided by IXs and ISPs using various L2VPN technologies, or VPN tunnels if the CSPs are not connected to Fulcrum partner IXs or ISPs. In the latter case, physical routers must have a public IP address to terminate the VPN tunnels.
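For the tunnel case, here is a minimal sketch of what the CSP side of such a point-to-point link could look like, assuming WireGuard (the document does not name a VPN technology) and invented keys and addresses:

```python
# WireGuard is an assumption (the document only says "VPN tunnels"); keys,
# the endpoint, and the /30 are invented. The CSP end takes one address of
# a /30 point-to-point subnet allocated as described in 2.1.4.1.
CSP_SIDE_WG_CONF = """\
[Interface]
Address = 10.255.1.30/30
ListenPort = 51820
PrivateKey = <csp-private-key>

[Peer]
# CEM end; the Endpoint must be a public IP, hence the requirement above.
PublicKey = <cem-public-key>
Endpoint = 192.0.2.1:51820
AllowedIPs = 10.0.0.0/8
"""
print(CSP_SIDE_WG_CONF)
```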
2.1.5.1 Virtual routers
Virtual routers, also known as BGP agents, are instantiated within each CSP's cluster as VMs and are responsible for routing traffic to and from the Fulcrum federated network. They are responsible for announcing the local network assigned to the CSP (10.0.Y.0/24) via iBGP to the route reflectors/route forwarders in the network, thus enabling reachability of the local cluster through the federated network. They are also responsible for performing 1:1 NAT if needed.
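Since the mesh is self-provisioning, this configuration lends itself to template-based generation. The sketch below assumes FRR as the routing daemon (the document does not name one) and uses invented loopback and route reflector addresses; AS 64512 and the announced 10.0.Y.0/24 come from the addressing plan in 2.1.4.1:

```python
# All concrete values are invented; only AS 64512 and the 10.0.Y.0/24
# prefix are taken from the addressing plan.
FRR_TEMPLATE = """\
router bgp 64512
 bgp router-id {loopback}
 neighbor {rr1} remote-as 64512
 neighbor {rr1} update-source {loopback}
 neighbor {rr2} remote-as 64512
 neighbor {rr2} update-source {loopback}
 !
 address-family ipv4 unicast
  network 10.0.{csp_id}.0/24
 exit-address-family
"""

def render_agent_config(csp_id: int, loopback: str, rr1: str, rr2: str) -> str:
    """Render the BGP agent's iBGP configuration for one CSP."""
    return FRR_TEMPLATE.format(csp_id=csp_id, loopback=loopback, rr1=rr1, rr2=rr2)

print(render_agent_config(7, "10.255.0.7", "10.255.0.1", "10.255.0.2"))
```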
2.1.5.2 Physical routers
Physical routers are used within Fulcrum's backbone network and are located at topologically significant sites, based on the location of participating CSPs, IX and ISP points of presence, and the highest available technical efficiency. They are interconnected through a highly reliable backbone network with Nx10G capacity over diversified and/or redundant paths. They exchange Loopback prefixes via an IGP (e.g., OSPF).
Physical routers serve as both route reflectors and route forwarders: the former are responsible for receiving and announcing routes via BGP without physically routing traffic, while the latter also route traffic. Route reflectors act as aggregators and route concentrators and are particularly effective in large networks with many devices, where a full mesh of all routers would create scalability issues.
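On the reflector side, the counterpart sketch (same FRR assumption and invented addresses as above) marks each agent session as a route-reflector client, so routes learned from one agent are re-advertised to the others without a full iBGP mesh:

```python
def render_rr_neighbors(rr_loopback: str, agent_loopbacks: list[str]) -> str:
    """Render the reflector-side stanza: one route-reflector client per agent."""
    stanzas = [
        f" neighbor {lb} remote-as 64512\n"
        f" neighbor {lb} update-source {rr_loopback}\n"
        f" address-family ipv4 unicast\n"
        f"  neighbor {lb} route-reflector-client\n"
        f" exit-address-family"
        for lb in agent_loopbacks
    ]
    return "router bgp 64512\n" + "\n".join(stanzas)

print(render_rr_neighbors("10.255.0.1", ["10.255.0.7", "10.255.0.8"]))
```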
2.2 Application Exposure
Applications will be exposed through a geographic load balancing system that maintains high reliability and allows secure, controlled publication management. The figure below shows the building blocks used to perform this operation. CSPs can implement the same functionality provided that they meet the public bandwidth capacity requirements, which will be evaluated by the federation, and that they are interconnected with the Fulcrum network described above.
The roles of the components are explained below.
2.2.1 Geographic DNS
Purpose
Geographic DNS servers manage application names and ensure that applications, once distributed on federated resources, can be reached independently of the individual public addresses of the clusters.
Technology used and macro functioning
The system designed is based on PowerDNS, which, through LUA scripts, allows responses to be managed according to the geographical location of the client's public IP of origin.
Multiple DNS nodes are planned, distributed geographically in a master-slave architecture.
These DNS nodes will be fed with data managed by the Fulcrum portal, made available by the ICX Federation. The federation itself will be responsible for providing and managing this service on its own resources, possibly coordinating the human and hardware resources of the CEMs.
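As an illustration of how the portal could feed these nodes, the sketch below uses the PowerDNS authoritative HTTP API to publish a LUA record built on PowerDNS's pickclosest() function, which answers with whichever listed address is closest to the querying client. The endpoint, API key, zone, and addresses are all invented:

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:8081/api/v1/servers/localhost/zones/apps.example.org."
API_KEY = "changeme"  # invented; set in the pdns webserver configuration

def publish_geo_record(fqdn: str, addresses: list[str]) -> None:
    """PATCH a LUA A record so clients are steered to the nearest balancer."""
    addr_list = ", ".join(f"'{a}'" for a in addresses)
    body = {"rrsets": [{
        "name": fqdn,
        "type": "LUA",
        "ttl": 60,
        "changetype": "REPLACE",
        "records": [{"content": f"A \"pickclosest({{{addr_list}}})\"",
                     "disabled": False}],
    }]}
    req = urllib.request.Request(
        API_URL, data=json.dumps(body).encode(), method="PATCH",
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Two invented addresses standing in for the public entry points of two
# geographic load balancers at different CSPs:
# publish_geo_record("shop.apps.example.org.", ["198.51.100.10", "203.0.113.10"])
```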
2.2.2 Geographic load balancers
Purpose
The purpose of geographic load balancers is to collect client requests by providing the public IP addresses used for application publication through the DNS discussed above, thus decoupling applications from cluster addresses. In addition, geographic load balancers introduce a layer of high reliability, guaranteed by the possibility of distributing an application simultaneously across K8s clusters in different locations and at different CSPs.
Technology used and macro functioning
The technology used is a reverse proxy. The tested system is based on NGINX, which offers straightforward configuration management as well as fine-grained control over parameters.
The reverse proxies will also be provided and managed by the ICX in collaboration with the CEMs to ensure a uniform structure, and will be configured automatically by the Fulcrum portal.
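A minimal sketch of what the portal's automatic configuration step could produce, with invented application names, FQDNs, and cluster endpoints (TLS termination omitted for brevity):

```python
# All names and addresses are invented; the endpoints stand for the public
# entry points of two K8s clusters (e.g., NodePorts) at different CSPs.
NGINX_TEMPLATE = """\
upstream {app} {{
{servers}
}}

server {{
    listen 80;
    server_name {fqdn};

    location / {{
        proxy_pass http://{app};
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }}
}}
"""

def render_proxy_config(app: str, fqdn: str, cluster_endpoints: list[str]) -> str:
    """Render one upstream per application, one server line per cluster."""
    servers = "\n".join(f"    server {ep};" for ep in cluster_endpoints)
    return NGINX_TEMPLATE.format(app=app, fqdn=fqdn, servers=servers)

print(render_proxy_config(
    "shop", "shop.apps.example.org",
    ["198.51.100.20:30080", "203.0.113.20:30080"],
))
```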
At this layer, it will also be possible to implement Content Delivery Networks to speed up applications.
2.2.3 Content Delivery Network (CDN)
Purpose
The purpose of the CDN is to speed up delivery, especially of static content, reducing the traffic carried over the CSPs' public connections.
Technology used and macro functioning
The operation involves automatically distinguishing the static from the dynamic content of applications, allowing the cache located at the balancing layer to be populated.
So far, tests have been conducted with various products, but a definitive system has not yet been identified. The research focuses on a system whose configuration can be easily automated.
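Purely to illustrate the static/dynamic split described above, and without anticipating the product choice, here is a sketch of a classification rule; the extension list and header handling are assumptions:

```python
# Static assets (identified by extension here; real systems also honour
# Cache-Control headers) are eligible for the cache at the balancing layer,
# while everything else is proxied straight to the cluster.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".svg", ".woff2", ".ico"}

def is_cacheable(path: str, cache_control: str = "") -> bool:
    """Decide whether a response may be served from the edge cache."""
    if "no-store" in cache_control or "private" in cache_control:
        return False
    return any(path.lower().endswith(ext) for ext in STATIC_EXTENSIONS)

print(is_cacheable("/assets/logo.png"))       # True  -> cache at the edge
print(is_cacheable("/api/cart", "no-store"))  # False -> always dynamic
```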
3.0 Marketplace
The Fulcrum Marketplace provides a centralized point for the distribution and management of cloud services.
3.1 Identity and Access Management (IAM)
Purpose
To ensure secure, role-based access to the Fulcrum ecosystem and its services.
Key Features
• Centralized identity management across the Fulcrum platform
• Role-based access control (RBAC) for administrators, users, and external systems
• Integration with third-party identity providers via standards such as OAuth2, OpenID Connect, and SAML
• Support for multi-tenant access policies, allowing CSPs to isolate resources while federating access under Fulcrum governance
• Token-based authentication for API consumers (see the sketch after this list)
• Session management and audit logging of all access events
• User onboarding, password recovery, and policy enforcement mechanisms (e.g., MFA, password expiration)
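As an illustration of the token-based authentication feature above, here is a minimal sketch of bearer-token validation using the PyJWT library; the issuer, audience, and key handling are hypothetical placeholders, not Fulcrum's actual IAM configuration:

```python
import jwt  # PyJWT: pip install pyjwt[crypto]

ISSUER = "https://iam.fulcrum.example/"  # hypothetical OIDC issuer
AUDIENCE = "fulcrum-marketplace-api"     # hypothetical API audience

def validate_bearer_token(token: str, public_key: str) -> dict:
    """Verify signature, issuer, audience and expiry; return the claims."""
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=AUDIENCE,
    )

# claims = validate_bearer_token(bearer_token, idp_public_key)
# claims.get("role") could then feed the RBAC checks listed above.
```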
3.2 Billing
Purpose
To manage usage-based and subscription-based billing across multiple CSPs within the Fulcrum Federation.
Key Features
• Unified billing engine supporting multiple billing models (pay-as-you-go, reserved instances, subscriptions)
• Usage metering at the service level, based on data collected from agents and metrics (see the sketch after this list)
• Per-CSP revenue reporting and usage breakdown
• Invoice generation and automated payment integrations (Stripe, PayPal, bank transfers)
• Quota and budget management for end-users and organizations
• Alerts and notifications for billing anomalies or quota overruns
• Tax compliance and internationalization support
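As an illustration of the usage-metering feature above, here is a minimal sketch of pay-as-you-go aggregation; the record shape, units, and rates are all invented:

```python
from collections import defaultdict

# Hypothetical rate card: price per unit of each metered quantity.
RATE_CARD = {"vcpu_hours": 0.02, "gb_ram_hours": 0.005, "gb_storage_days": 0.001}

def price_usage(samples: list[dict]) -> dict[str, float]:
    """Aggregate usage samples reported by the agents into per-user totals."""
    totals: defaultdict[str, float] = defaultdict(float)
    for s in samples:
        totals[s["user"]] += s["quantity"] * RATE_CARD[s["metric"]]
    return dict(totals)

print(price_usage([
    {"user": "acme", "metric": "vcpu_hours", "quantity": 240},    # 4.80
    {"user": "acme", "metric": "gb_ram_hours", "quantity": 960},  # 4.80
]))  # -> {'acme': 9.6}
```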
3.3 User Interface / User Experience (UI/UX)
Purpose
To provide an intuitive and accessible interface for interacting with Fulcrum’s capabilities.
Key Features
• Responsive web portal for service discovery, provisioning, and monitoring
• Marketplace browsing with tag-based filtering (e.g., GDPR, reliability, region, Gaia-X compliance)
• Step-by-step wizards for deploying services without requiring technical expertise
• Visual dashboards with real-time metrics, job statuses, and usage overviews
• Personalized views for different user roles (end users, administrators, CSPs)
• Accessibility and localization features for international users
• Seamless integration with third-party marketplaces and portals via API and embeddable widgets