Research and Implementation of Edge Gateway in Satellite OBS Networks

This paper presents an edge gateway for satellite Optical Burst Switching (OBS) networks, in which DDR3 memory serves as the main switching storage resource. The design improves storage capacity, switching throughput, and quality of service. The whole edge gateway is written in Verilog HDL, simulated with ModelSim SE 10.6d, and implemented on a Xilinx xc7vx690t-2ffg1927i FPGA. With a 64-bit data path and a 200 MHz system clock, the peak throughput of the edge gateway reaches 10 Gbps.


Introduction
In recent years, with the continuous development of satellite communication technology and the growing demand for high-efficiency, large-capacity, high-rate networks, new requirements have been placed on the processing capability of network nodes [1]. The large volume of satellite electrical switching equipment seriously increases the payload burden on satellites, and the electronic bottleneck of electrical switching makes it difficult to improve the capacity and throughput of satellite switching [2]. Compared with electrical switching, optical switching offers large capacity, high confidentiality, and lightweight equipment, so it has become the key technology for improving the transmission capacity and throughput of satellite switching.
Traditional optical switching can be divided into optical circuit switching (OCS) [3], optical packet switching (OPS) [4], and optical burst switching (OBS) [5]. Among them, optical burst switching combines the coarse granularity of optical circuit switching with the fine granularity of optical packet switching. It not only transmits burst packets transparently, but also benefits from the efficiency of its resource-reservation protocol. Its requirements on optical logic devices are low, and its switching node structure is relatively simple. Transmission from the source node to the destination node needs no O-E-O conversion, which resolves the electronic bottleneck of traditional switching technology and makes OBS well suited to satellite optical switching networks. This article focuses on the key circuit design of optical burst switching applied to the edge nodes of a satellite switching network. Figure 1 shows the architecture of the OBS satellite network, which is mainly composed of OBS edge gateways, OBS core switches, laser links, and other networks. This paper mainly addresses the OBS edge gateway.

Design of OBS edge gateway
The OBS edge gateway mainly consists of a physical access layer, a MAC conversion layer, a table look-up module, a control module, a threshold management module, a queue manager, a scheduling module, a wavelength division multiplexer, and a laser transceiver. The ingress of the edge node classifies and arranges incoming data to form the Burst Control Packet (BCP) and the Burst Data Packet (BDP). The egress disassembles the assembled data back into the original data and returns it to the access network. Core node routers mainly perform data forwarding. Data arriving at the same edge node is assembled into a BDP and a corresponding BCP; the BCP is sent to the core node to reserve resources, and after a certain time the edge node sends the BDP directly over the reserved all-optical channel, with no need for optical-electrical conversion [5].
Data passing through the physical layer and the MAC layer is converted into data frames. Routing information is obtained through the table look-up module, and the length of each data frame is calculated at the same time. The control information is stored in the BCP FIFO, while the BDP is stored in DDR3 off-chip memory. The scheduler then sends the BCP when the threshold signal arrives, and sends the BDP after a certain offset time.
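As a minimal illustration of the table look-up step, the following Python sketch maps a frame's destination port and priority to a logical queue ID. The design described later uses 8 output ports with 8 priorities each, giving 64 logical queues; the bit-concatenation mapping shown here is an assumed layout, not taken from the paper.

```python
# Hypothetical model of the table look-up step: map a frame's destination
# port (0-7) and QoS priority (0-7) to one of 64 logical queue IDs.
# The assumed 6-bit ID layout is {port[2:0], priority[2:0]}.

def queue_id(dest_port: int, priority: int) -> int:
    assert 0 <= dest_port < 8, "8 output ports assumed"
    assert 0 <= priority < 8, "8 priority levels assumed"
    return (dest_port << 3) | priority

# Example: frames for port 5 at priority 2 share queue ID 42.
```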
In an optical burst switching network, the most basic switching unit is a burst, which consists of many IP packets with the same destination address and the same QoS level. The edge node aggregates data from different interfaces sharing a destination address and QoS level into units much larger than an IP packet, which are then sent to the queue manager cache. Once the threshold requirement is reached, the scheduler sends a corresponding control packet to the core node to reserve network resources; at the core node, the control packet reserves resources through optical-electrical conversion and electrical processing. Figure 2 shows the overall structure of the OBS edge gateway [6].
Firstly, an IP packet from outside enters the 10G Ethernet system IP core [7] and is converted into 64-bit-wide data. The control module then adds routing information to the data through the table look-up module and stores it in the BDP FIFO. At the same time, the key information of the IP packet (packet length, source address, destination address, etc.) is extracted, and a unique identification number is generated and stored in the BCP FIFO. The DDR3 manager then sends the data to DDR3 storage [8]. Meanwhile, the threshold controller monitors the length and lifetime of each burst; if the threshold requirement is reached, the threshold controller generates a signal, and the assembled burst's identification number and its DDR3 address are sent to the scheduling module. The data burst is then sent to the data FIFO to await instructions from the scheduling module. The scheduling module generates a BCP from the key information and sends it to the Wavelength Division Multiplexing (WDM) circuit in preparation for transmission. After the BCP is sent, the scheduling module dynamically calculates the offset time and sends the BDP once the offset time has elapsed.
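The threshold controller's length-or-lifetime trigger described above can be modeled in a few lines of Python. This is a software sketch only; the threshold values and class name are illustrative assumptions, not figures from the design.

```python
# Software model of the threshold controller: a burst is released either
# when its accumulated length reaches LEN_THRESH bytes or when its oldest
# packet has waited longer than AGE_THRESH seconds. Both values are
# assumed for illustration.
LEN_THRESH = 16384   # assumed length threshold (bytes)
AGE_THRESH = 0.001   # assumed lifetime threshold (seconds)

class BurstAssembler:
    def __init__(self):
        self.length = 0
        self.first_arrival = None
        self.packets = []

    def add_packet(self, data: bytes, now: float) -> bool:
        """Append an IP packet; return True when the burst should be sent."""
        if self.first_arrival is None:
            self.first_arrival = now
        self.packets.append(data)
        self.length += len(data)
        return (self.length >= LEN_THRESH or
                now - self.first_arrival >= AGE_THRESH)

    def release(self) -> bytes:
        """Emit the assembled BDP and reset the assembly state."""
        burst = b"".join(self.packets)
        self.__init__()
        return burst
```

A timer-based release guarantees bounded assembly delay for lightly loaded queues, while the length trigger caps burst size under heavy load.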

Core circuit: queue manager design
Considering that an optical burst switching network first sends a BCP to the core node to reserve resources and then sends the BDP, and taking advantage of the double-data-rate sampling of DDR3 SDRAM, this design uses DDR3 SDRAM as shared storage for multiple input ports to meet the requirements of higher storage capacity, faster read/write speed, and reduced hardware resources.
The whole structure includes a Queue Controller (QC), a free-pointer FIFO, a cell information RAM, a queue information RAM, and DDR3 off-chip storage. The queue information RAM has a depth of 64, corresponding to 8 output ports with 8 priorities each. The table look-up circuit generates a corresponding ID number from the output port and priority of each frame, and there are 64 logical queues corresponding to these ID numbers. Figure 3 shows the queue manager structure.
For example, suppose the forwarding circuit generates a 64-byte cell. Firstly, a free pointer is allocated by the free-pointer FIFO. In the cell information RAM, a 1-bit head tag, a 1-bit tail tag, and a 30-bit DDR3 storage address are recorded for this cell. When the next cell with the same ID number arrives, its head tag, tail tag, and DDR3 storage address are recorded in the same way, and the next-hop pointer in the previous cell's information is updated. When a complete frame has been received, the head and tail cell addresses and the length of the frame are updated in the queue information RAM. Finally, the scheduling module decides when to send the BCP and BDP according to the lifetime and length of the frame recorded in the queue information RAM.
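The linked-list bookkeeping above can be sketched as a small Python model: a free-pointer FIFO, a per-cell next-pointer table standing in for the cell information RAM, and a head/tail/length table standing in for the queue information RAM. The class name, cell count, and dictionary layout are assumptions for illustration; the hardware stores tags and 30-bit DDR3 addresses that this model omits.

```python
from collections import deque

NUM_CELLS = 1024   # assumed number of cells managed in DDR3

class QueueManager:
    """Software model of the linked-list queue manager: one free-pointer
    FIFO, one cell-information table (next pointer per cell), and one
    queue-information table (head, tail, length per logical queue)."""

    def __init__(self):
        self.free_ptrs = deque(range(NUM_CELLS))   # free-pointer FIFO
        self.next_ptr = [None] * NUM_CELLS         # cell information RAM
        self.queues = {}                           # queue information RAM

    def enqueue(self, qid: int) -> int:
        ptr = self.free_ptrs.popleft()             # allocate a free pointer
        self.next_ptr[ptr] = None                  # new cell is the tail
        q = self.queues.get(qid)
        if q is None:                              # first cell: head == tail
            self.queues[qid] = {"head": ptr, "tail": ptr, "len": 1}
        else:
            self.next_ptr[q["tail"]] = ptr         # link the previous tail
            q["tail"] = ptr
            q["len"] += 1
        return ptr

    def dequeue_burst(self, qid: int) -> list:
        """Walk one queue's linked list, recycling pointers as we go."""
        q = self.queues.pop(qid)
        ptrs, ptr = [], q["head"]
        while ptr is not None:
            ptrs.append(ptr)
            self.free_ptrs.append(ptr)             # return pointer to FIFO
            ptr = self.next_ptr[ptr]
        return ptrs
```

Because all 64 logical queues draw cells from one shared pool, memory is allocated on demand rather than statically partitioned per queue, which is the source of the resource savings claimed below.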
The entire circuit uses only one FIFO, two RAMs, and one DDR3 device to realize scheduling of 64 logical queues, which greatly improves resource utilization and switching capacity.

Main simulation results
Firstly, the control module checks whether FQ_busy is 1. If FQ_busy is 1, the control module sends FQ_rd (a read request) to the free-pointer FIFO; FQ_ptr_fifo_depth (the available depth of the pointer FIFO) is then decremented by 1, and a free pointer is retrieved for the received cell. Secondly, the DDR3 manager sends a DDR3 write request signal, preparing to store a unit of data at the address given by the free pointer. After that, i_cell_fifo_rd is set to 1, and 128-bit data is read four times in a row through i_cell_fifo_dout to form a 64-byte cell, which is written to the DDR3 address pointed to by the free pointer. The DDR3 address is 12 bits wide: its upper 10 bits are the free pointer and its lower 2 bits are the count of the four 128-bit writes, which reduces the free-pointer bit width. Figure 4 shows the main simulation result of the core control module. When the threshold requirement is reached, BCP_READY is set to 1 and the BCP information is extracted. After a certain time, the control module reads data from DDR3 memory and passes it to the data transmission module. Figure 5 shows the simulated waveform of the DDR3 write and read process.
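The 12-bit address composition described above can be made concrete with a short Python sketch: each 64-byte cell occupies four 128-bit beats, addressed by concatenating the 10-bit free pointer with a 2-bit beat counter. Function names are illustrative.

```python
# Model of the DDR3 addressing scheme: each 64-byte cell is written as
# four 128-bit beats, so the 12-bit DDR3 address is
# {free_pointer[9:0], beat_count[1:0]}.

def ddr3_addr(free_ptr: int, beat: int) -> int:
    assert 0 <= free_ptr < 1024, "10-bit free pointer"
    assert 0 <= beat < 4, "2-bit beat counter"
    return (free_ptr << 2) | beat

def cell_addrs(free_ptr: int) -> list:
    """The four consecutive DDR3 addresses holding one 64-byte cell."""
    return [ddr3_addr(free_ptr, b) for b in range(4)]

# Example: the cell at free pointer 3 occupies addresses 12 through 15.
```

Keeping only the 10-bit pointer in the free-pointer FIFO, and deriving the low bits from the write counter, is what lets the design shrink the pointer width.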

Conclusion
An edge gateway for satellite optical burst switching has been presented in this paper, and the function of each key module has been introduced. The DDR3-based queue manager, described in detail, is the core module of the design. With the linked-list and shared-storage scheme, the multiple logical queues in this system can be managed independently and system resources can be reduced accordingly. The whole design was simulated with ModelSim SE 10.6d and implemented on a Xilinx xc7vx690t-2ffg1927i FPGA. The next step is to test every internal module; the delay and jitter will be further improved to meet the needs of satellite network communication.