FCoE retains the reliability and performance of native Fibre Channel. Quality of Service (QoS) is used to guarantee the required bandwidth for the storage traffic and to ensure that normal data traffic does not hog too much of the link. One of the main benefits of FCoE is the savings in network infrastructure. With redundant native Fibre Channel storage and Ethernet data networks, each host needs 4 adapters and 4 cables, connected to 4 switches.
With FCoE, we run the data and storage traffic through shared switches and shared ports on our hosts. Now we just have 2 adapters, 2 cables, and 2 switches, so the required infrastructure is cut in half. We save on hardware costs, and we also need less rack space, less power, and less cooling, which adds further savings. Aside from saving money in both the short and long term, FCoE also lets organizations take advantage of the strengths of both Ethernet and Fibre Channel in a single network.
FCoE combines the speed, reliability, and scalability of Ethernet with the increased security, efficiency, and faster backup and recovery capabilities of Fibre Channel.
The data traffic requires a MAC address, and the way Ethernet data traffic and FCP storage traffic work is totally different, so how can we support them both on the same physical interface? The answer is encapsulation: FCoE wraps each native Fibre Channel frame inside its own Ethernet frame, so both kinds of traffic can share the converged link. In the diagram below we have a single server, Server 1.
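As a rough sketch of that encapsulation idea (simplified: a real FCoE frame also carries a version field, SOF/EOF delimiters, and padding), the Python snippet below wraps an opaque FC frame in an Ethernet header using the FCoE EtherType 0x8906; the MAC addresses and frame contents are made-up placeholders.

```python
import struct

FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE data frames

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame (simplified sketch).

    A real FCoE encapsulation also adds a version field, SOF/EOF delimiters,
    and padding; they are omitted here to keep the core idea visible.
    """
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Example with made-up addresses; the 0e:fc:00 prefix mimics a fabric-provided MAC.
frame = encapsulate_fc_frame(
    dst_mac=bytes.fromhex("0efc00010203"),   # hypothetical FCoE forwarder (FCF) MAC
    src_mac=bytes.fromhex("0efc00a1b2c3"),   # hypothetical CNA-assigned MAC
    fc_frame=b"\x00" * 36,                   # placeholder for an encapsulated FC frame
)
print(len(frame), "bytes on the wire (before the Ethernet FCS)")
```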
To see how we arrived at technologies like this, let's jump back in time for a moment. When the hard drive was developed and became popular, different connection types evolved, and there were various convenience and reliability issues with all of these. But in most cases, this involved one computer system; the storage space and the data stored on it were basically locked to the computer they were physically connected to. The downsides to this arrangement are the difficulty of accessing the data from elsewhere, in addition to the frequent waste of disk space.
The invention of methods of sharing this storage between computer systems helped solve these challenges. Networked Storage enables the allocation of disk space in blocks to multiple computers, making the use of expensive disk space much more efficient.
It can also enable, for example, a cluster of database servers to have access to the same data simultaneously. Here, we are focusing on "block" storage: chunks of disk space that appear to the computer's operating system to be the same as a locally attached hard drive.
Let's jump back in time again to the days before Networked Storage. Of the drive connection types, SCSI is the grandparent of them all. Most of these were proprietary, i.e., tied to a single vendor, but SCSI was published as an open standard. This enabled multiple manufacturers to develop products that would work together, instead of buyers having to "lock in" with one vendor, and this in turn fueled its growth in popularity.
A typical SCSI (usually pronounced "skuz-ee") subsystem implementation involved one or more controllers (circuit boards with cable connectors), one or more copper-wire ribbon cables with multiple hard drive connectors, and a set of hard drives. Some of the primary features of SCSI are the relatively simple command set used to control the hard drives and the number of devices that can be attached to a single controller (16 or more).
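As a small, hedged illustration of that command set (a sketch, not code from any particular driver), the snippet below builds the 10-byte Command Descriptor Block (CDB) for a SCSI READ(10) that reads 8 blocks starting at logical block address 2048. Regardless of the transport, an initiator ultimately ships small fixed-format CDBs like this to the drive.

```python
import struct

READ_10 = 0x28  # SCSI READ(10) opcode

def build_read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a 10-byte READ(10) Command Descriptor Block.

    Byte 0: opcode, bytes 2-5: logical block address (big-endian),
    bytes 7-8: transfer length in blocks; the other bytes stay zero.
    """
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, num_blocks, 0)

# Read 8 blocks starting at LBA 2048.
cdb = build_read10_cdb(lba=2048, num_blocks=8)
print(cdb.hex())  # 28 00 00000800 00 0008 00
```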
One drawback was the limited cable length supported, but this was rarely an issue with hard drives inside a single computer system. Fibre Channel (FC) came next. As the name suggests, it was designed around the relatively new multi-mode fiber optic cabling as the physical transport, at least in longer-distance scenarios, to overcome some of the limitations of the SCSI physical layer.
FC can actually run on copper cables over short distances, but the distance limitations on fiber optics are much more generous. Because the SCSI protocol was very popular and robust, it was implemented as an "upper layer protocol" riding on top of FC as the transmission protocol. In other words, FC handles the physical connectivity, encoding, and transmission of data, such as SCSI commands, from one endpoint to another.
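To illustrate that layering (a simplified sketch, not a complete implementation of FC framing), the snippet below fills in a minimal 24-byte FC frame header whose TYPE byte (0x08) marks the payload as an encapsulated SCSI/FCP command; the port IDs and exchange IDs are placeholder values.

```python
import struct

FCP_TYPE = 0x08  # FC-4 TYPE value for SCSI-FCP payloads

def build_fc_header(s_id: int, d_id: int, seq_cnt: int = 0) -> bytes:
    """Build a minimal 24-byte Fibre Channel frame header (illustrative).

    Only the fields needed to show the layering are filled in: the 24-bit
    destination and source port IDs, and the TYPE byte that marks the
    payload as an encapsulated SCSI (FCP) command.
    """
    r_ctl = 0x06                              # unsolicited command, illustrative
    word0 = (r_ctl << 24) | (d_id & 0xFFFFFF)
    word1 = s_id & 0xFFFFFF                   # CS_CTL byte left at zero
    word2 = FCP_TYPE << 24                    # F_CTL bits left at zero
    word3 = seq_cnt & 0xFFFF                  # SEQ_ID/DF_CTL zero, SEQ_CNT
    word4 = (0xFFFF << 16) | 0xFFFF           # OX_ID / RX_ID placeholders
    word5 = 0                                 # Parameter field
    return struct.pack(">6I", word0, word1, word2, word3, word4, word5)

header = build_fc_header(s_id=0x010203, d_id=0x0A0B0C)
print(len(header), header.hex())              # 24 bytes
```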
FC is actually capable of transmitting other kinds of data, but we are focused on SCSI for block storage here. An FC network inherently provides lossless delivery of raw block data, as well as in-order delivery of frames, both of which improve reliability and efficiency. Fibre Channel does, however, require specially designed hardware.
Servers need to have a "Host Bus Adapter" (HBA) port or card, and the storage devices also need to have an FC interface, which is more often referred to as a front-end port in this case. A typical scenario consists of at least two dedicated FC switches with multiple physical ports, with each switch representing a physically separate "Fabric", and each server or storage device having at least one connection to each Fabric.
This provides the added features of redundancy and the potential for improved performance. If a switch, HBA, or fiber optic cable fails, the connection to the storage is not lost.
In the case of a USB drive, what is transferred is a simple data block; a SAN, in turn, contains data blocks of different volumes distributed across hard drives. Because the network is designed for heavily loaded storage devices, it uses a strong cyclic redundancy check (CRC), a hash-like function used to produce a checksum in order to detect errors in data. Fibre Channel is also more isolated than TCP/IP-based networks, which minimizes security issues, the impact of malware, and human errors.
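To make the checksum idea concrete, here is a small sketch using Python's built-in CRC-32 (the same general family of 32-bit check as Ethernet and FC frame CRCs, though real FC hardware computes it in silicon as frames pass through): flipping a single bit in the protected block changes the checksum, so the receiver can detect the corruption and request a retransmission.

```python
import zlib

block = bytes(range(256)) * 16               # a 4 KiB block of data to protect
checksum = zlib.crc32(block)                 # CRC-32 computed before transmission

corrupted = bytearray(block)
corrupted[100] ^= 0x01                       # simulate a single flipped bit in transit

print(f"sender CRC:   {checksum:#010x}")
print(f"receiver CRC: {zlib.crc32(bytes(corrupted)):#010x}")
print("match:", zlib.crc32(bytes(corrupted)) == checksum)   # False: error detected
```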
To avoid purchasing special Fibre Channel hardware, you can opt for iSCSI, which allows the same block-level storage access but runs over conventional Ethernet networks. For heavier workloads, you can additionally purchase hardware-accelerated network adapters that offload iSCSI processing from the host server or client. To implement a heavily loaded storage network, you should deploy a dedicated 10 Gbps Ethernet network (either optical or copper) with hardware-accelerated adapters and network switches that support larger data frames (jumbo frames).
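As a rough back-of-the-envelope sketch of why larger (jumbo) frames matter on a storage network, the snippet below counts the Ethernet frames and per-frame framing overhead needed to move a 1 MiB block at a standard 1500-byte MTU versus a 9000-byte jumbo MTU; the 18-byte overhead figure assumes an untagged Ethernet header plus FCS and ignores preamble, inter-frame gap, and the TCP/IP and iSCSI headers that would also ride in each frame.

```python
import math

PER_FRAME_OVERHEAD = 18          # Ethernet header + FCS, roughly; ignores preamble/IFG

def frames_and_overhead(payload_bytes: int, mtu: int) -> tuple[int, int]:
    """Return (frame count, total framing overhead) to move a payload at a given MTU."""
    frames = math.ceil(payload_bytes / mtu)
    return frames, frames * PER_FRAME_OVERHEAD

block = 1 * 1024 * 1024          # a 1 MiB block of storage data
for mtu in (1500, 9000):
    frames, overhead = frames_and_overhead(block, mtu)
    print(f"MTU {mtu}: {frames} frames, {overhead} bytes of framing overhead")
```

Fewer, larger frames also mean fewer interrupts and less per-frame processing on the host, which is the other half of the benefit.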