The ADRENALINE testbed® is designed and developed by the CTTC Optical Networks and Systems Department for experimental research on high-performance, large-scale intelligent optical transport networks. ADRENALINE is an SDN/NFV packet/optical transport network and edge/core cloud platform for end-to-end 5G and IoT services, deployed with open-source software and commercial off-the-shelf (COTS) hardware. As depicted in Fig. 1, the ADRENALINE testbed lab trial encompasses multiple interrelated yet independent components and prototypes that offer end-to-end services, interconnecting users and applications across a wide range of heterogeneous network and cloud technologies for the development and testing of 5G and IoT services under conditions close to production systems. From a global perspective, ADRENALINE involves the following technologies/capabilities:

i) A fixed/flexi-grid DWDM core network with white-box ROADM/OXC nodes and software-defined optical transmission (SDOT) technologies to deploy sliceable bandwidth-variable transceivers (S-BVTs) and programmable optical systems (EOS platform).

ii) A packet transport network for the edge (access) and metro segments, providing traffic aggregation and QoS-aware switching of Ethernet flows, with alien-wavelength transport towards the optical core network.

iii) A distributed core and edge cloud platform for the deployment of virtual network functions (VNFs) and virtual application functions (VAFs). The core cloud infrastructure is composed of a core-DC with high-performance computing (HPC) servers and an intra-DC packet network with alien-wavelength transport to the optical core network. The edge cloud infrastructure is composed of micro-DCs in the edge nodes and small-DCs in the central offices (COs).

iv) An SDN/NFV control and orchestration system to provide global orchestration of the multi-layer (packet/optical) network resources and the distributed cloud infrastructure resources, as well as end-to-end services (e.g., service function chaining of VNFs and VAFs) with multi-tenancy support (i.e., network slicing).

v) Interconnection with other CTTC testbed facilities providing the wireless HetNet and backhaul (EXTREME Testbed® and LENA LTE-EPC protocol stack emulator) and wireless sensor networks (IoTWorld Testbed®).

Figure 1. The CTTC ADRENALINE Testbed for end-to-end 5G and IoT services

  • Optical core network

The optical core network includes a photonic mesh network with four nodes (two ROADMs and two OXCs) and five bidirectional DWDM amplified optical links of up to 150 km (610 km of G.652 and G.655 optical fiber deployed in total). The S-BVT is provided by the EOS platform, which enables the implementation of flexible, programmable (SDN-enabled) optical transmission systems based on modular transceiver architectures. In particular, an S-BVT, which can be seen as a set of virtual transceivers transmitting multiple flows at variable data rate/reach, can be set up. Alternative building blocks, optoelectronic front-ends and several programmable adaptive digital signal processing (DSP) modules are available, yielding different transmission schemes. In the platform, key advanced functionalities can be enabled and tested to fulfill the dynamic requirements and flexibility challenges of future optical networks. This includes spectral manipulation and rate/distance adaptability for optimal spectrum/resource usage, as each transceiver module is capable of generating a multi-format, variable rate/distance data flow (slice). Moreover, SDN-enabled multi-flow generation and routing/switching in the network are also supported.

The optical core network is based on the SDN paradigm. The photonic mesh network nodes (i.e., ROADMs and OXCs) are controlled with an active stateful PCE (AS-PCE) on top of a distributed GMPLS control plane for path computation and provisioning. The AS-PCE acts as the single interfacing element for the T-SDN orchestrator, ultimately delegating the dynamic lightpath provisioning and establishment of connections (termed Label Switched Paths, LSPs) to the underlying GMPLS control plane. By means of SDN agents, the S-BVT can be programmed and (re)configured to adaptively transmit over the suitable optical network path. In addition, advanced (self-)performance monitoring is available on demand in the platform. The SDN agent's purpose is to map high-level operations coming from the T-SDN orchestrator into low-level, hardware-dependent operations. This involves defining an information and data model for the S-BVT and agreeing on a so-called SDN controller southbound interface (SBI) – with the corresponding message formats and encodings – towards the S-BVT agent(s).
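
For illustration, a minimal Python sketch of such an agent-side mapping is given below; the parameter names and value catalogues are hypothetical, since the actual S-BVT information model and SBI encoding are implementation-specific (only the flexi-grid frequency rule follows ITU-T G.694.1).

    # Minimal sketch of an S-BVT SDN agent (hypothetical names/values).
    # It maps an abstract configuration received from the T-SDN orchestrator
    # over the SBI into low-level, hardware-dependent settings.

    # Hypothetical catalogue of modulation formats supported by the BVT modules
    MOD_FORMATS = {"BPSK": 1, "QPSK": 2, "16QAM": 4}

    def map_sbi_request(request):
        """Translate an abstract SBI request into per-module hardware settings."""
        bits_per_symbol = MOD_FORMATS[request["modulation"]]
        # Symbol rate needed to carry the requested flow rate
        # (simplified: FEC/DSP overhead is not accounted for in this sketch)
        symbol_rate_gbaud = request["rate_gbps"] / bits_per_symbol
        return {
            "laser": {
                # ITU-T G.694.1 flexi-grid: frequency = 193.1 THz + n * 6.25 GHz
                "frequency_thz": 193.1 + request["n"] * 0.00625,
                "power_dbm": request.get("tx_power_dbm", 0.0),
            },
            "dac": {"symbol_rate_gbaud": symbol_rate_gbaud},
            "dsp": {"format": request["modulation"], "fec": request.get("fec", "HD-FEC")},
        }

    if __name__ == "__main__":
        # Example: one 25 Gb/s QPSK flow on flexi-grid slot n = 40
        sbi_request = {"flow_id": 1, "rate_gbps": 25, "modulation": "QPSK", "n": 40}
        print(map_sbi_request(sbi_request))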

  • Edge and metro packet transport network

The packet transport network leverages the statistical multiplexing nature of cost-effective OpenFlow switches deployed on COTS hardware using Open vSwitch (OVS) technology. There are a total of ten OpenFlow switches distributed across the edge (access) and metro (aggregation) network segments. The edge packet transport network is composed of four edge nodes, providing connectivity to 5G base stations and IoT access gateways (offered by the CTTC LENA emulator and the EXTREME and IoTWorld testbeds), and two OpenFlow switches located in the COs. The edge nodes are lightweight industrialized servers based on the Intel Next Unit of Computing (NUC), since they have to fit in cell-site or street cabinets. The metro packet transport network is composed of four OpenFlow switches. The two nodes connected to the optical core network are packet switches based on OVS but equipped with a 10 Gb/s XFP tunable transponder used as alien wavelengths. The edge and metro segments are controlled by two OpenDaylight (ODL) SDN controllers using OpenFlow.
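
As an example of how flows can be programmed in this segment, the following sketch pushes a simple forwarding rule to one of the OVS-based switches through the OpenDaylight RESTCONF northbound API from Python; the controller address, credentials and node/flow identifiers are hypothetical, and the exact flow data model varies across ODL releases.

    # Minimal sketch: install an OpenFlow rule through the OpenDaylight RESTCONF
    # API (hypothetical controller address and node/flow identifiers; the flow
    # data model shown here is simplified and release-dependent).
    import requests

    ODL = "http://odl-controller:8181"           # hypothetical controller address
    NODE, TABLE, FLOW_ID = "openflow:1", 0, "1"  # hypothetical identifiers

    flow = {
        "flow-node-inventory:flow": [{
            "id": FLOW_ID,
            "table_id": TABLE,
            "priority": 100,
            "match": {"in-port": "1"},                       # traffic entering port 1
            "instructions": {"instruction": [{
                "order": 0,
                "apply-actions": {"action": [{
                    "order": 0,
                    "output-action": {"output-node-connector": "2"}  # forward to port 2
                }]}
            }]}
        }]
    }

    url = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
           f"node/{NODE}/table/{TABLE}/flow/{FLOW_ID}")
    resp = requests.put(url, json=flow, auth=("admin", "admin"))
    print(resp.status_code)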

  • Distributed edge and core cloud platform

The distributed core and edge cloud platform is composed of one core-DC, two small-DCs, and four micro-DCs, leveraging virtual machine (VM) and container-based technologies to offer the appropriate compute resources depending on the network location. Specifically, VM-centric host virtualization, widely studied in the scope of large data centres, is used for the core-DC and small-DCs, while container-based technology, less secure but more lightweight, is used for the micro-DCs. The core-DC is composed of three compute nodes (HPC servers with a hypervisor to deploy and run VMs), and each small-DC has one compute node. The four micro-DCs are integrated in the edge nodes, together with the OpenFlow switch. The intra-DC packet network of the core-DC is composed of four OpenFlow switches, also deployed on COTS hardware with OVS technology. Two of the four OpenFlow switches are equipped with a 10 Gb/s XFP tunable transponder connecting to the optical core network as alien wavelengths. The four OpenFlow switches are controlled by an SDN controller running ODL, responsible for the intra-DC network connectivity. The distributed cloud computing platform is controlled using three OpenStack controller nodes (Havana release): one for controlling the four compute nodes (computing, image and networking services) of the core-DC, another for the two compute nodes of the small-DCs, and the last for the four compute nodes of the micro-DCs.

SDN/NFV CONTROL AND ORCHESTRATION SYSTEM

The SDN/NFV control and orchestration system is composed of a transport SDN orchestrator, a cloud orchestrator, an NFV orchestrator and VNF managers, and a global service orchestrator.

  • Cloud Orchestrator

On top of the multiple DC controllers we deploy a cloud orchestrator that enables the deployment of general cloud services (e.g., for VAFs) across the distributed DC infrastructure resources (micro, small, core) for multiple tenants. Specifically, the cloud orchestrator handles the creation/migration/deletion of VMs/containers (computing service), the storage of disk images (image service), and the management of the VM/container network interfaces (networking service) on the required DCs for each tenant. In a scenario with multiple OpenStack controllers, the OpenStack API can be used both as the southbound interface (SBI) of the cloud orchestrator and as its northbound interface (NBI) towards the tenants. We refer to this recursive hierarchical architecture as OpenStack cascading.
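
A minimal sketch of this cascading pattern is given below, assuming one Nova (compute) endpoint per DC tier and a pre-obtained authentication token; the endpoint addresses and identifiers are hypothetical placeholders.

    # Minimal sketch of OpenStack "cascading": the cloud orchestrator keeps one
    # Nova (compute) endpoint per DC tier and forwards tenant requests to the
    # OpenStack controller managing the selected DC. Endpoints and the token
    # are hypothetical; Keystone authentication is assumed to be done already.
    import requests

    NOVA_ENDPOINTS = {   # hypothetical per-tier OpenStack controllers
        "core":  "http://openstack-core:8774/v2/<tenant-id>",
        "small": "http://openstack-small:8774/v2/<tenant-id>",
        "micro": "http://openstack-micro:8774/v2/<tenant-id>",
    }

    def boot_instance(tier, token, name, image_ref, flavor_ref):
        """Forward a 'create server' request to the controller of a DC tier."""
        body = {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
        resp = requests.post(f"{NOVA_ENDPOINTS[tier]}/servers",
                             json=body, headers={"X-Auth-Token": token})
        resp.raise_for_status()
        return resp.json()["server"]["id"]

    # Example: deploy a VAF (e.g. IoT analytics) in the core-DC
    # vm_id = boot_instance("core", token, "iot-analytics", image_uuid, flavor_id)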

  • Transport SDN orchestrator

The transport SDN orchestrator (T-SDNO) acts as a unified transport network operating system (or controller of controllers) that allows the control (e.g., end-to-end transport service provisioning), at a higher abstraction level, of heterogeneous network technologies, regardless of the specific control plane technology employed in each domain, through the use of a common Transport API. The Transport API abstracts a set of control plane functions used by an SDN controller, allowing the T-SDNO to interact uniformly with heterogeneous control domains, and paves the way towards the required integration with wireless networks. This abstraction enables network virtualization, that is, partitioning the physical infrastructure and dynamically creating, modifying or deleting multiple coexisting virtual tenant networks (VTNs), independently of the underlying transport technology and network protocols. The T-SDNO is also responsible for presenting to the tenants an abstracted topology of each VTN (i.e., network discovery) and for enabling the control of the virtual network resources allocated to each VTN as if they were real resources, through the Transport API. The conceived T-SDNO architecture is based on the Application-based Network Operations (ABNO) framework. It interfaces with the AS-PCE of the photonic mesh network and the SDN-enabled S-BVT, as well as with the three packet SDN controllers of the edge, metro and intra-DC packet networks.
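
The following sketch illustrates the shape of a connectivity-service request that the T-SDNO could consume over such a Transport API; the field names follow the spirit of the ONF Transport API but are simplified here and are not tied to a specific version of the model.

    # Illustrative Transport API style connectivity-service request, as the
    # T-SDNO could receive it over its NBI and map onto the underlying control
    # domains. Field names are simplified/hypothetical.
    import json

    connectivity_service = {
        "connectivity-service": {
            "uuid": "cs-0001",                      # hypothetical identifier
            "end-point": [
                {"service-interface-point": "sip-edge-node-1"},
                {"service-interface-point": "sip-core-dc-gw"},
            ],
            "requested-capacity": {"total-size": {"value": 10, "unit": "GBPS"}},
            "layer-protocol-name": "DSR",           # e.g. a 10 Gb/s Ethernet flow
        }
    }

    print(json.dumps(connectivity_service, indent=2))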

  • NFV orchestrator & VNF Managers

The NFV Infrastructure (NFVI) is composed of multiple NFV Infrastructure Points of Presence (NFVI-PoPs). An NFVI-PoP is a set of computing, storage and network resources that provides processing, storage and connectivity to VNFs through the virtualization layer (e.g., hypervisor); it is deployed in the compute nodes (i.e., the servers in the DCs). The ETSI NFV Management and Orchestration (MANO) architectural framework identifies three functional blocks: the Virtualized Infrastructure Manager (VIM), the NFV Orchestrator (NFVO) and the VNF Manager (VNFM). The VIM is responsible for controlling and managing the NFVI virtualized compute, storage and networking resources (e.g., an OpenStack controller). The VNFM is responsible for the lifecycle management (i.e., creation, configuration, and removal) of VNF instances running on top of virtual machines or containers. Finally, the NFVO has two main responsibilities: the orchestration of NFVI resources across multiple VIMs (resource orchestration), and the lifecycle management of network services (network service orchestration). Network service orchestration coordinates groups of VNF instances that jointly realize a more complex function (e.g., service function chaining), including the joint instantiation and configuration of the VNFs and the required connections between different VNFs within the NFVI-PoPs. In our implementation, the interconnection of NFVI-PoPs is managed by the T-SDNO. Typical implementations of the NFV MANO NFVO and VNFMs are the Open Platform for NFV (OPNFV) and Open Source MANO (OSM).
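
As an illustration of what the NFVO consumes, the sketch below encodes a simplified descriptor for a two-VNF chain; the field names are purely illustrative and do not follow a specific OSM or OPNFV schema.

    # Simplified network service descriptor for a two-VNF chain, in the spirit
    # of ETSI NFV MANO descriptors (field names illustrative, not a specific
    # OSM/OPNFV schema). The NFVO would hand the VNF instantiation to the
    # VNFMs/VIMs and the inter-PoP connectivity to the T-SDNO.
    network_service = {
        "ns-id": "iot-gw-service",
        "vnfd": [
            {"id": "vnf-nat-fw", "nfvi-pop": "edge-pop-1", "vdu": {"vcpu": 1, "ram_mb": 512}},
            {"id": "vnf-dpi",    "nfvi-pop": "core-pop",   "vdu": {"vcpu": 4, "ram_mb": 4096}},
        ],
        # Forwarding graph: traffic enters at the service endpoint, traverses
        # the NAT/firewall at the edge, then the DPI function in the core DC.
        "vnffg": ["endpoint", "vnf-nat-fw", "vnf-dpi"],
    }

    print(network_service["ns-id"], "->", " -> ".join(network_service["vnffg"]))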

  • Global Service Orchestrator

The Global Service Orchestrator (GSO) is deployed on top of the T-SDN orchestrator, the cloud orchestrator, and the NFV orchestrator. It is responsible for providing global orchestration of end-to-end services by decomposing a global service into cloud services, network services, and NFV services, and forwarding these service requests to the Cloud Orchestrator, the T-SDN orchestrator and the NFV Orchestrator, respectively.

On the one hand, the GSO can dynamically provide service function chaining by coordinating the instantiation and configuration of groups of cloud services (i.e., virtual machine/container instances) and NFV services (i.e., VNFs), and the connectivity services between them and the service end-points. For example, the GSO can request from the Cloud orchestrator the provisioning of a virtual machine in the core-DC for the deployment of a VAF (e.g., IoT analytics), from the NFV orchestrator a VNF (e.g., a NAT/firewall) in the edge NFVI-PoP, and from the T-SDN orchestrator the required connections between the service end-point, the VNF and the virtual machine in a given order (forwarding graph), in order to achieve the desired overall end-to-end functionality or service.
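
A minimal sketch of this decomposition logic is given below; the orchestrator client objects and their methods are hypothetical placeholders rather than actual interfaces.

    # Minimal sketch of the GSO decomposition for the example above: a cloud
    # service (VAF in the core-DC), an NFV service (NAT/firewall VNF at the
    # edge), and the connectivity between endpoint, VNF and VM. The orchestrator
    # clients and their methods are hypothetical placeholders.
    def provision_e2e_service(gso_request, cloud_orch, nfv_orch, tsdn_orch):
        # 1) Cloud service: VM/container for the VAF in the selected DC
        vm_id = cloud_orch.create_instance(dc=gso_request["vaf_dc"],
                                           image=gso_request["vaf_image"])
        # 2) NFV service: VNF instantiated in the edge NFVI-PoP
        vnf_id = nfv_orch.instantiate_vnf(pop=gso_request["vnf_pop"],
                                          vnfd=gso_request["vnfd"])
        # 3) Network service: connectivity along the forwarding graph
        #    endpoint -> VNF -> VM
        for src, dst in [(gso_request["endpoint"], vnf_id), (vnf_id, vm_id)]:
            tsdn_orch.create_connectivity(src, dst,
                                          bandwidth_gbps=gso_request["bw_gbps"])
        return {"vm": vm_id, "vnf": vnf_id}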

On the other hand, the GSO is responsible for the dynamic lifecycle management (provisioning, modification and deletion) of network slices. Each network slice is composed of virtual resources (VTN, computing and storage, and VNFs) that coexist in parallel, in isolation, for different tenants (e.g., vertical industries, virtual operators), in order to deliver the tenant-specific requirements (e.g., security, latency, resiliency, bandwidth). The GSO provides per-tenant programmability of the network slices and exposes an abstracted view of each slice's virtual resources to its tenant.

EXPERIMENTAL PLATFORM FOR OPTICAL OFDM SYSTEMS

Within the ADRENALINE testbed, an experimental platform for optical orthogonal frequency division multiplexing (OFDM) systems (EOS) is available. In this platform, a sliceable bandwidth/bitrate variable transceiver (S-BVT) composed of an array of BVT modules can be implemented, as shown in Fig. 2. In particular, the S-BVT can encompass different setups, including optical and optoelectronic systems and subsystems, for multicarrier modulation (either OFDM or discrete multitone, DMT) based on offline processing. Several adaptive and programmable digital signal processing (DSP) modules are available, in Matlab and Python software, for adaptive loading, modulation based on alternative transforms (e.g., complex or real-valued fast Fourier transform, FFT, and fast Hartley transform, FHT), digital mixing, channel signal-to-noise ratio (SNR) estimation, equalization, self-performance monitoring and impairment compensation.

Either an arbitrary waveform generator (up to 24 GSa/s and 9.6 GHz) or a high-speed digital-to-analog converter (DAC, up to 64 GSa/s and 13 GHz) can be used to convert the digital signals into electrical analog ones. The optical modulation can be implemented using broadband interferometric (Mach-Zehnder) modulators up to 40 GHz, based on different technologies (LiNbO3, GaAs), with the corresponding linear drivers and S/C/L-band tunable laser sources with picometer resolution. Alternatively, phase or I/Q modulators are available to support different optical implementations, enabling on-demand S-BVT reconfiguration thanks to its inherent modularity. Hence, multiple flows can be generated and later aggregated/distributed using bandwidth-variable wavelength selective switches (BV-WSSs). The platform includes programmable WSSs based on liquid crystal on silicon (LCoS) technology, whose bandwidth occupation, central optical carrier frequency and power/attenuation per port can be adaptively tuned. Specifically, the platform includes 1:1 and 1:4 tunable optical filters, with variable bandwidth from 10 GHz to 5 THz, and four 1:9 flexi-grid WSS modules, with channel configuration from 12.5 to 500 GHz. In addition, 100 GHz/50 GHz arrayed waveguide gratings are also available for filtering purposes.

On the receiver side, several options based on direct or coherent detection are available, with bandwidth up to 50 GHz. In particular, simple PIN photodiodes and a coherent receiver front-end featuring phase and polarization diversity are included. For analog-to-digital conversion (ADC), a real-time digital phosphor oscilloscope (up to 100 GSa/s and 20 GHz bandwidth) is used. Furthermore, C-band EDFAs, an optical spectrum analyser, and CD/PMD emulation and analysis instruments are also available.

The EOS platform offers a high degree of flexibility, including spectral manipulation, superchannel and multi-band generation, (dense) wavelength division multiplexing (WDM), and polarization division multiplexing (PDM), and can be extended to enable space division multiplexing (SDM). Moreover, the platform can be dynamically configured and adaptively programmed by an SDN controller. In particular, multiple parameters and features of the transmission system and subsystems can be suitably adapted via SDN agents, specifically developed and integrated in the platform. Monitoring capability is also included, envisioning advanced acquisition techniques (indicated as an optical performance monitoring – OPM – module in Fig. 2). This enables suitable network resource allocation and management, coping with signal degradation. Thanks to the S-BVT programmable elements, multiple advanced features/functionalities, such as slice-ability and adaptability, can be enabled, offering multi-rate, multi-format, multi-reach and multi-flow transmission with sub- and super-wavelength granularity (as specified above). Finally, the platform enables the adoption of integrated devices (embedded in a single substrate) to implement the transceiver modules, for a significant reduction in cost, energy consumption and equipment footprint. A maximum data rate of 60 Gb/s per flow/slice and polarization state is achieved in back-to-back (B2B) configuration. After a 2-hop path of 185 km in the ADRENALINE testbed, the maximum data rate achieved by a single flow is 34 Gb/s.
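
As an illustration of the kind of offline DSP module used in the platform, the following Python/NumPy sketch builds a single real-valued DMT symbol with a toy per-subcarrier loading profile; the parameters are arbitrary and do not reproduce the actual EOS transceiver settings.

    # Illustrative offline-DSP sketch (NumPy): build one real-valued DMT symbol
    # with a simple per-subcarrier loading profile. Parameters are arbitrary.
    import numpy as np

    N = 64                                   # (I)FFT size
    rng = np.random.default_rng(0)

    def qam_symbols(bits_per_symbol, n):
        """Draw n random square-QAM symbols with the given bits/symbol (2 or 4 here)."""
        m = 2 ** (bits_per_symbol // 2)      # levels per dimension
        levels = 2 * np.arange(m) - (m - 1)  # e.g. [-1, 1] or [-3, -1, 1, 3]
        i = rng.choice(levels, n)
        q = rng.choice(levels, n)
        return (i + 1j * q) / np.sqrt((levels ** 2).mean() * 2)

    # Toy loading profile: 16-QAM on the low-frequency subcarriers, QPSK on the rest
    loading = np.array([4] * (N // 4 - 1) + [2] * (N // 4))
    data = np.concatenate([qam_symbols(b, 1) for b in loading])

    # Hermitian symmetry so the IFFT output is real-valued (DMT / direct detection)
    spectrum = np.zeros(N, dtype=complex)
    spectrum[1:N // 2] = data
    spectrum[N // 2 + 1:] = np.conj(data[::-1])
    dmt_symbol = np.fft.ifft(spectrum).real

    # Cyclic prefix against dispersion-induced inter-symbol interference
    cp = N // 8
    tx_waveform = np.concatenate([dmt_symbol[-cp:], dmt_symbol])
    print(tx_waveform.shape, np.isclose(np.fft.ifft(spectrum).imag, 0).all())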

Figure 2. EOS experimental platform.
