
How to identify the technology used in a local network. Main characteristics of the technologies. Choosing a home network architecture

The rapid development of local networks, now embodied in the 10 Gigabit Ethernet standard and the IEEE 802.11a/b wireless networking technologies, is attracting more and more attention. Ethernet has become the de facto standard for cable networks. And although Ethernet in its classical form has not been seen for a long time, the ideas originally laid down in the IEEE 802.3 protocol found their logical continuation in both Fast Ethernet and Gigabit Ethernet. For the sake of historical justice, we note that technologies such as Token Ring, ARCNET, 100VG-AnyLAN, FDDI and AppleTalk also deserve attention. Well then, let's restore historical justice and recall the technologies of bygone days.

There is no need to dwell on the rapid progress of the semiconductor industry over the last decade. Network equipment shared the fate of the entire industry: avalanche-like growth in production, high speeds and minimal prices. In 1995, which is considered a turning point in the history of the Internet, about 50 million new Ethernet ports were sold - a good start for a market dominance that became overwhelming over the next five years.

Such a price level is unattainable for specialized telecommunications equipment. The complexity of a device plays little role here; it is rather a question of volume. Now this seems quite natural, but ten years ago the unconditional dominance of Ethernet was far from obvious (in industrial networks, for example, there is still no clear leader).

However, only by comparison with other methods of building networks can one identify the advantages (or disadvantages) of today's leader.

Basic methods of access to the transmission medium

The physical principles underlying the operation of the equipment are not overly complex. By the method of gaining access to the transmission medium, they can be divided into two classes: deterministic and non-deterministic.

With deterministic access methods, the transmission medium is distributed between nodes by a special control mechanism that guarantees each node's data will be transmitted within a certain period of time.

The most common (but far from the only) deterministic access methods are polling and token passing. The polling method is of little use in local networks, but is widely used in industry for process control.

The token-passing method, on the contrary, is convenient for transferring data between computers. Its principle of operation is to pass a service message - a token - around a network with a ring logical topology.

Receiving the token grants a device the right to access the shared medium. The workstation's choice in this case is limited to two options: in any case it must send the token on to the next device in line, either after delivering its data to the recipient (if it has any) or immediately (if it has nothing to transmit). While data is in transit the token is absent from the network, the remaining stations cannot transmit, and collisions are impossible in principle. To handle errors that may cause the token to be lost, there is a mechanism for its regeneration.
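The discipline is easy to picture in code. Below is a minimal Python sketch of token passing on a logical ring; the station names, ring order and queued frames are invented purely for illustration.

```python
# A minimal sketch of token-passing access on a logical ring.
# Station names and queued frames are illustrative, not from any standard.
from collections import deque

stations = {
    "A": deque(["frame-A1"]),            # A has one frame queued
    "B": deque(),                        # B has nothing to send
    "C": deque(["frame-C1", "frame-C2"]),
}
ring_order = ["A", "B", "C"]

def circulate_token(rounds: int) -> None:
    """Pass the token around the ring; the holder may transmit at most
    one queued frame, then must forward the token to the next station."""
    for r in range(rounds):
        for name in ring_order:
            queue = stations[name]
            if queue:
                print(f"round {r}: {name} holds token, transmits {queue.popleft()}")
            else:
                print(f"round {r}: {name} holds token, nothing to send")
            # Either way the token moves on; since only the token holder
            # ever transmits, collisions are impossible by construction.

circulate_token(rounds=2)
```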

Random access methods are called non-deterministic. They involve competition between all network nodes for the right to transmit. Simultaneous transmission attempts by several nodes are possible, resulting in collisions.

The most common method of this type is CSMA/CD (carrier-sense multiple access with collision detection). Before transmitting data, a device listens to the network to make sure no one else is using it. If the transmission medium is busy at that moment, the adapter delays transmission; if not, it begins to transmit.

When two adapters, having detected a free line, start transmitting simultaneously, a collision occurs. Once it is detected, both transmissions are interrupted, and the devices repeat them after a random delay (after first "listening" to the channel again, of course, to see whether it is busy). To receive information, a device must examine all packets on the network to determine whether it is the destination.
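Ethernet resolves the retry delay with truncated binary exponential backoff: after the n-th collision in a row, a station waits a random number of slot times drawn from a doubling range. A minimal Python sketch of that rule; the two-station scenario is invented for illustration.

```python
# Truncated binary exponential backoff, the retry rule used by Ethernet
# adapters after a collision.
import random

SLOT_TIME_US = 51.2  # slot time of 10 Mbit/s Ethernet: 512 bit times

def backoff_delay_us(collisions: int) -> float:
    """After the n-th consecutive collision, wait a random number of
    slot times drawn from [0, 2**min(n, 10) - 1]."""
    k = min(collisions, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME_US

# Two stations that just collided pick independent delays, which makes
# a repeat collision on the retry progressively less likely.
for station, collisions in [("A", 1), ("B", 1)]:
    print(f"{station}: waits {backoff_delay_us(collisions):.1f} us before retrying")
```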

From the history of Ethernet

If we began our look at LANs with any other technology, we would miss the real importance that Ethernet currently has in this area. Whether due to prevailing circumstances or to technical advantages, it has no competition today, holding about 95% of the market.

Ethernet's birthday is May 22, 1973. It was on this day that Robert Metcalfe and David Boggs published a description of the experimental network they had built at the Xerox research center. It was based on thick coaxial cable and provided a data transfer rate of 2.94 Mbit/s. The new technology was named Ethernet ("ether network") in honor of the University of Hawaii's ALOHA radio network, which used a similar mechanism for sharing the transmission medium (the radio ether).

By the end of the 1970s, Ethernet had a solid theoretical basis. In February 1980, Xerox, together with DEC and Intel, submitted the development to the IEEE, and three years later it was approved as the 802.3 standard.

Ethernet's non-deterministic method of gaining access to the transmission medium is carrier sense multiple access with collision detection (CSMA/CD). Simply put, devices share the transmission medium chaotically, at random. The algorithm can resolve the stations' competition for the medium far from fairly, which in turn can cause long access delays, especially under heavy load. In extreme cases, the transmission speed can drop to zero.

Because of this disorganized approach, it was long believed (and still is) that Ethernet cannot provide quality data transmission. It was predicted that it would be replaced first by Token Ring, then by ATM, but in reality everything happened the other way around.

The fact that Ethernet still dominates the market is due to the great changes it has undergone during its twenty-year existence. The full-duplex "gigabit" we now see in entry-level networks bears little resemblance to the founder of the family, 10Base-5. At the same time, ever since the introduction of 10Base-T, compatibility has been maintained both at the level of device interaction and at the level of cable infrastructure.

Development from simple to complex, growth along with user needs - this is the key to the technology's incredible success. Judge for yourself:

  • March 1981 - 3Com introduces an Ethernet transceiver;
  • September 1982 - the first network adapter for a personal computer is created;
  • 1983 - the IEEE 802.3 specification appears, defining the bus topologies 10Base-5 ("thick" Ethernet) and 10Base-2 ("thin" Ethernet). Transfer speed: 10 Mbit/s. The maximum distance between network nodes is set at 2.5 km;
  • 1985 - the second version of the specification, IEEE 802.3 (Ethernet II), is released, with minor changes to the packet header structure. Rigid identification of Ethernet devices (MAC addresses) is established, and an address registry is created in which any manufacturer can register a unique range (it currently costs only $1,250);
  • September 1990 - the IEEE approves 10Base-T (twisted pair) technology with a physical star topology and hubs. The logical CSMA/CD operation is unchanged. The standard is based on developments by SynOptics Communications under the general name LattisNet;
  • 1990 - Kalpana (later quickly purchased, together with the CPW16 switch it developed, by the future giant Cisco) offers switching technology based on abandoning shared communication lines between all nodes of a segment;
  • 1992 - switches come into use. Using the address information contained in the packet (the MAC address), a switch organizes independent virtual channels between pairs of nodes. Switching effectively transforms the non-deterministic Ethernet model (with its contention for bandwidth) into an address-based data delivery system, without the user noticing;
  • 1993 - the IEEE 802.3x specification brings full duplex and flow control to 10Base-T; IEEE 802.1p adds multicast addressing and an 8-level priority system. Fast Ethernet is proposed;
  • June 1995 - Fast Ethernet, the IEEE 802.3u (100Base-T) standard, is introduced.

Here our short history can end: Ethernet has taken on quite modern shapes. The development of the technology, of course, has not stopped - we will talk about that a little later.

Undeservedly forgotten ARCNET

Attached Resource Computing Network (ARCNET) is a network architecture developed by Datapoint in the mid-70s. ARCNET was never adopted as an IEEE standard, but partially complies with IEEE 802.4 as a token-passing network (a logical ring). A data packet can be any size from 1 to 507 bytes.

Of all local networks, ARCNET has the most extensive topology capabilities: ring, common bus, star and tree can all be used in the same network. In addition, very long segments (up to several kilometers) are possible. The transmission medium is equally flexible: coaxial and fiber optic cables, as well as twisted pair, are all suitable.

This inexpensive standard was prevented from dominating the market by its low speed - only 2.5 Mbit/s. When Datapoint developed ARCNET PLUS with transfer speeds of up to 20 Mbit/s in the early 90s, its time had already passed: Fast Ethernet did not leave ARCNET the slightest chance of widespread use.

Nevertheless, in favor of the great (but never realized) potential of this technology, note that in some industries (usually process control systems) these networks still exist. Deterministic access, auto-configuration capabilities and negotiation of exchange rates in the range from 120 Kbit/s to 10 Mbit/s make ARCNET simply irreplaceable in difficult real-world production conditions.

In addition, ARCNET provides an ability essential for control systems: the maximum access time to any device on the network, under any load, can be determined exactly with a simple formula: T = (T_DP + T_OB × N_b) × N_D, where T_DP and T_OB are the transmission times of a data packet and of one byte, respectively (depending on the selected transmission speed), N_b is the number of data bytes, and N_D is the number of devices on the network.
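The formula is easy to turn into a small calculator. In the Python sketch below, the per-byte time corresponds to the classic 2.5 Mbit/s rate, while the per-packet overhead, packet size and device count are assumptions chosen only to show the shape of the computation.

```python
# Worst-case ARCNET access time, T = (T_DP + T_OB * N_b) * N_D:
# in the worst case every device sends one full packet before the
# token comes back around.
def arcnet_max_access_time(t_packet: float, t_byte: float,
                           n_bytes: int, n_devices: int) -> float:
    return (t_packet + t_byte * n_bytes) * n_devices

# At 2.5 Mbit/s one byte takes 8 / 2.5e6 = 3.2 us on the wire; assume
# 1 ms of per-packet overhead, maximal 507-byte packets and 20 devices:
t = arcnet_max_access_time(t_packet=1e-3, t_byte=3.2e-6,
                           n_bytes=507, n_devices=20)
print(f"worst-case access time: {t * 1000:.1f} ms")   # ~52.4 ms
```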

Token Ring is a classic example of token passing

Token Ring is another technology that dates back to the 70s. This development by the blue giant IBM, which became the basis of the IEEE 802.5 standard, had a greater chance of success than many other local networks. Token Ring is a classic token-passing network. Its logical topology (and, in the first versions, its physical topology) is a ring. More modern modifications are built on twisted pair in a star topology and, with some reservations, are compatible with Ethernet.

The original transmission speed described in IEEE 802.5 was 4 Mbit/s, but a later 16 Mbit/s implementation exists. Because of its more orderly (deterministic) method of accessing the medium, Token Ring was often promoted in its early years as a superior replacement for Ethernet.

Despite the existence of a priority access scheme (priorities were assigned to each station individually), it proved impossible to provide a constant bit rate (Constant Bit Rate, CBR) for a very simple reason: applications that could take advantage of such schemes did not exist then. And even today there are not many more of them.

Given this circumstance, all that could be guaranteed was that performance would decrease equally for all stations in the network. To win the competition this was not enough, and now it is almost impossible to find a working Token Ring network.

FDDI - the first local network on fiber optics

Fiber Distributed Data Interface (FDDI) technology was developed in 1980 by an ANSI committee. It was the first computer network to use only fiber optic cable as the transmission medium. The reasons that prompted manufacturers to create FDDI were the insufficient speed (no more than 10 Mbit/s) and reliability (no redundancy schemes) of the local networks of the time. In addition, it was the first (and not very successful) attempt to bring data networks to the "transport" level, in competition with SDH.

The FDDI standard stipulates data transmission over a double ring of fiber optic cable at 100 Mbit/s, which yields a reliable (redundant) and fast channel. The distances are quite significant - up to 100 km around the ring. Logically, the network's operation is based on token passing.

In addition, a well-developed traffic prioritization scheme was provided. Workstations were divided into two types: synchronous (having a guaranteed constant bandwidth) and asynchronous. The latter, in turn, shared the transmission medium using an eight-level priority system.

Incompatibility with SDH networks did not allow FDDI to occupy any significant niche in the field of transport networks; today it has been practically replaced there by ATM. And its high cost left FDDI no chance in the fight with Ethernet for the local niche. Attempts to switch to cheaper copper cable did not help the standard either: CDDI, based on the principles of FDDI but using twisted pair as the transmission medium, never became popular and survives only in textbooks.

Developed by AT&T and HP - 100VG-AnyLAN

This technology, like FDDI, can be classified as a second-generation local network. It was created in the early 90s by the joint efforts of AT&T and HP as an alternative to Fast Ethernet. In the summer of 1995, almost simultaneously with its competitor, it received the status of an IEEE standard, 802.12. 100VG-AnyLAN had a good chance of winning thanks to its versatility, its determinism, and its greater compatibility than Ethernet with existing cable plant (Category 3 twisted pair).

The Quartet Coding scheme, using a redundant 5B/6B code, allowed the use of 4-pair Category 3 twisted pair, which at the time was almost more widespread than the modern Category 5. The transition period, in fact, barely affected Russia, where, because communication systems were built later, Category 5 cabling was laid everywhere.

In addition to using legacy wiring, each 100VG-AnyLAN hub can be configured to support either 802.3 (Ethernet) or 802.5 (Token Ring) frames. The Demand Priority media access method defines a simple two-level priority system: high for multimedia applications and low for everything else.

It must be said, this was a serious bid for success. It was let down by high cost, a consequence of greater complexity and, to a large extent, of the technology being closed to replication by third-party manufacturers. Add to this the lack - already familiar from Token Ring - of real applications able to take advantage of the priority system. As a result, 100Base-T managed to seize industry leadership permanently and definitively.

The innovative technical ideas found application a little later, first in 100Base-T2 (IEEE 802.3y) and then in "gigabit" Ethernet, 1000Base-T.

AppleTalk and LocalTalk

AppleTalk is a protocol stack proposed by Apple in the early 80s. Initially, the AppleTalk protocols ran over network hardware collectively called LocalTalk (adapters built into Apple computers).

The network topology was a common bus or tree, its maximum length was 300 m, and the transmission speed was 230.4 Kbit/s. The transmission medium was shielded twisted pair. A LocalTalk segment could connect up to 32 nodes.

Low bandwidth quickly necessitated adapters for faster network media: EtherTalk, TokenTalk and FDDITalk for Ethernet, Token Ring and FDDI networks, respectively. Thus AppleTalk took the route of universality at the link level and can adapt to any physical implementation of the network.

Like most other Apple products, these networks live within the “Apple” world and have virtually no overlap with PCs.

UltraNet - network for supercomputers

Another type of network virtually unknown in Russia is UltraNet. It was actively used with supercomputer-class computing systems and mainframes, but is currently being actively displaced by Gigabit Ethernet.

UltraNet uses a star topology and can provide information exchange between devices at up to 1 Gbit/s. The network is characterized by a very complex physical implementation and very high prices, comparable to those of supercomputers. UltraNet is managed by PCs connected to a central hub. The network may also include bridges and routers for connecting to networks built with Ethernet or Token Ring technologies.

Coaxial cable and optical fiber (for distances up to 30 km) can be used as the transmission medium.

Industrial and specialized networks

It should be noted that data networks are used not only for communication between computers or for telephony; there is also a fairly large niche of industrial and specialized devices. For example, CANBUS technology, created to replace thick and expensive wiring harnesses in cars with one common bus, is quite popular. This network does not offer a large choice of physical connections, segment length is limited, and the transmission speed is low (up to 1 Mbit/s). Nevertheless, CANBUS successfully combines the quality and low implementation cost needed for small and medium-scale automation. Similar systems include ModBus, PROFIBUS and FieldBus.

Today, the interests of CAN controller developers are gradually shifting towards home automation.

ATM as a universal data transmission technology

It is no accident that the description of the ATM standard is placed at the end of the article. This is perhaps one of the last - and unsuccessful - attempts to give battle to Ethernet on its own field. The two technologies are complete opposites in the history of their creation, the course of their implementation and their ideology. If Ethernet rose "from the bottom up, from the specific to the general", increasing speed and quality in step with users' needs, ATM developed quite differently.

In the mid-1980s, the American National Standards Institute (ANSI) and the International Telegraph and Telephone Consultative Committee (CCITT) began developing the ATM (Asynchronous Transfer Mode) standards as a set of recommendations for the B-ISDN (Broadband Integrated Services Digital Network). Only in 1991 did the efforts of academic science culminate in the creation of the ATM Forum, which still determines the development of the technology. The first major project built on this technology, in 1994, was the backbone of the famous NSFNET network, which had previously used T3 channels.

The essence of ATM is very simple: mix all types of traffic (voice, video, data), compress them, and transmit them over a single communication channel. As noted above, this is achieved not through technical breakthroughs but through numerous compromises. In some ways it resembles the numerical solution of differential equations: continuous data is divided into intervals small enough for switching operations to be performed on them.

Naturally, this approach greatly complicated the already difficult task of the developers and manufacturers of real equipment, and delayed implementation for unacceptably long by market standards.

Several factors influence the size of the minimum portion of data (the cell, in ATM terminology). On the one hand, increasing the size reduces the speed requirements for the cell processor-switch and increases channel utilization. On the other hand, the smaller the cell, the shorter the wait for access to the channel.

Indeed, while one cell is being transmitted, the next (even the highest-priority one) waits. Powerful mathematics and the machinery of queues and priorities can slightly smooth out the effect, but not eliminate the cause. After quite a lot of experimentation, in 1989 the cell size was set at 53 bytes (5 bytes of header and 48 bytes of data). Obviously, the appropriate size differs for different speeds: if 53 bytes suits speeds from 25 to 155 Mbit/s, then for a gigabit 500 bytes would do no worse, and for 10 gigabits, 5000 bytes. But in that case the compatibility problem becomes insoluble. This reasoning is by no means academic: it was the limit on switching speed that set the technical ceiling for raising ATM speeds beyond 622 Mbit/s and sharply increased the cost at lower speeds.
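The trade-off is easy to see numerically. A minimal sketch, using only the cell size and link speeds quoted above:

```python
# Serialization delay of a single cell: the longest a freshly arrived
# high-priority cell must wait behind a cell already on the wire.
def cell_delay_us(cell_bytes: int, link_mbps: float) -> float:
    # 1 Mbit/s carries 1 bit per microsecond
    return cell_bytes * 8 / link_mbps

for speed in (25, 155, 622):
    print(f"53-byte cell at {speed:4} Mbit/s: {cell_delay_us(53, speed):6.2f} us")
# Larger cells would ease the load on the switch at high speeds, but at
# 25 Mbit/s a 500-byte cell would already block the link for 160 us.
```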

ATM's second compromise is connection orientation. Before a transmission session, a sender-receiver virtual channel is established at the link layer, which cannot be used by other stations - whereas in traditional statistical multiplexing technologies no connection is established and packets are simply sent to the specified address. To set up the channel, the port number and a connection identifier, present in the header of every cell, are entered into the switching table. The switch then processes incoming cells based on the connection IDs in their headers. On top of this mechanism it is possible to regulate, for each connection, the throughput, delay and maximum data loss - that is, to ensure a certain quality of service.
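In code, the forwarding step is nothing more than a table lookup keyed by input port and connection identifier. A toy sketch, with invented port numbers and identifier values:

```python
# A toy ATM-style switch: each cell carries a connection identifier,
# and the switch rewrites it and forwards the cell by table lookup.
switching_table = {
    # (input port, incoming connection ID) -> (output port, outgoing ID)
    (1, 42): (3, 17),
    (2, 42): (3, 99),  # the same ID on another port is a different connection
}

def switch_cell(in_port: int, conn_id: int, payload: bytes):
    out_port, out_id = switching_table[(in_port, conn_id)]
    # The 48-byte payload passes through untouched, so the per-cell work
    # is one constant-time lookup - the basis for per-connection QoS.
    return out_port, out_id, payload

print(switch_cell(1, 42, b"\x00" * 48))  # -> (3, 17, b'\x00...')
```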

All these properties, plus good compatibility with the SDH hierarchy, allowed ATM to become the standard for backbone data networks relatively quickly. But the full realization of the technology's capabilities ran into big problems. As has happened more than once, local networks and client applications did not support ATM functions, and without that, a powerful technology with great potential turned out to be just an unnecessary conversion between the worlds of IP (essentially Ethernet) and SDH. This was a very unfortunate situation, which the ATM community tried to correct. Unfortunately, there were strategic miscalculations. Despite all the advantages of fiber optics over copper cabling, the high cost of interface cards and switch ports made 155 Mbit/s ATM prohibitively expensive for this market segment.

Having attempted to define low-speed solutions for desktop systems, the ATM Forum became embroiled in a destructive debate over which speed and connection type to target. Manufacturers split into two camps: supporters of copper cable at 25.6 Mbit/s and supporters of optical cable at 51.82 Mbit/s. After a series of high-profile conflicts (the speed initially chosen was 51.82 Mbit/s), the ATM Forum proclaimed 25 Mbit/s the standard. But precious time had been lost forever. In the market, ATM now faced not "classic" Ethernet with its shared transmission medium, but Fast Ethernet and switched 10Base-T (with switched 100Base-T expected soon). High prices, few manufacturers, the need for more highly qualified service, driver problems and so on only made the situation worse. Hopes of penetrating the corporate network segment collapsed, and ATM's rather weak intermediate position solidified for a time. That is its position in the industry today.

ComputerPress 10'2002

In local networks, the main role in organizing the interaction of nodes belongs to the link-layer protocol, which is oriented to a very specific LAN topology. Thus, the most popular protocol of this level, Ethernet, is designed for the common-bus topology, where all network nodes are connected in parallel to a bus they share, while the Token Ring protocol is designed for the ring topology. Simple cable connection structures between the network's PCs are used, and, to simplify and reduce the cost of hardware and software, the cables are shared by all PCs in time-sharing mode. Such simple solutions, characteristic of the developers of the first LANs in the second half of the 1970s, had negative consequences along with positive ones, chief among them limits on performance and reliability.

Since a LAN with the simplest topology (common bus, ring, star) has only one path for transmitting information - a monochannel - the network's performance is limited by the capacity of that path, and its reliability by the path's reliability. Therefore, as the scope of local networks developed and expanded, these restrictions were gradually lifted with the help of special communication devices (bridges, switches, routers). The basic LAN configurations (bus, ring) turned into elementary building blocks from which more complex local network structures, with parallel and backup paths between nodes, are formed.

Within the basic structures, however, the same Ethernet and Token Ring protocols continue to operate. The integration of these structures (segments) into a common, more complex local network is carried out with additional equipment, and the interaction of PCs across such a network relies on other protocols.

In the development of local networks, in addition to those noted, other trends have emerged:

  • abandonment of shared data transmission media and a transition to active switches, to which the network's PCs are connected by individual communication lines;
  • the emergence of a new mode of operation when switches are used - full duplex (in the basic structures of local networks, PCs operate in half-duplex mode, since the station's network adapter at any given moment either transmits its own data or receives someone else's, but never does both at once). Today every LAN technology is adapted to work in both half-duplex and full-duplex modes.

The standardization of LAN protocols has been carried out by Committee 802, organized in 1980 within the IEEE. The standards of the IEEE 802.x family cover only the two lower layers of the OSI model - the physical and data link layers. It is these layers that reflect the specifics of local networks; the higher layers, starting with the network layer, share common features across networks of any class.

In local networks, the link layer is divided into two sublayers:

  • logical link control (LLC - Logical Link Control);
  • media access control (MAC - Media Access Control).

The MAC and LLC sublayer protocols are mutually independent: each MAC sublayer protocol can work with any LLC sublayer protocol, and vice versa.

The MAC sublayer provides sharing of the common transmission medium, while the LLC sublayer organizes the transfer of frames with different levels of transport service quality. Modern LANs use several MAC sublayer protocols, implementing different algorithms for accessing the shared medium and defining the specifics of the Ethernet, Fast Ethernet, Gigabit Ethernet, Token Ring, FDDI and 100VG-AnyLAN technologies.

The LLC protocol. For a LAN, this protocol ensures the necessary quality of transport service. It occupies a position between the network protocols and the MAC sublayer protocols. Under the LLC protocol, frames are transmitted either by the datagram method or via procedures that establish a connection between the interacting network stations and recover corrupted frames by retransmission.

Ethernet technology (the 802.3 standard). This is the most common local network standard; most LANs currently operate using this protocol. There are several variants and modifications of Ethernet technology, making up a whole family of technologies. The best known of them are the 10-megabit IEEE 802.3 version and the new high-speed Fast Ethernet and Gigabit Ethernet technologies. All these variants and modifications differ in the type of physical transmission medium.

All flavors of the Ethernet standard use the same method of accessing the transmission medium: the CSMA/CD random access method. It is used exclusively in networks with a common logical bus, which operates in multiple-access mode and is used to transfer data between any two network nodes. This access method is probabilistic in nature: the probability of obtaining the medium depends on the network's congestion. When the network is heavily loaded, the intensity of collisions increases and its useful throughput drops sharply.

Usable network bandwidth is the rate of transfer of the user data carried in the frame's data field. It is always less than the nominal bit rate of the Ethernet protocol because of frame overhead, interframe gaps and waiting for access to the medium. The network utilization coefficient, in the absence of collisions and access waiting, has a maximum value of 0.96.
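To see where such a coefficient comes from, here is a minimal sketch using the standard 10 Mbit/s framing figures (18 bytes of MAC header and checksum, 8 bytes of preamble, a 12-byte-time interframe gap); the exact coefficient depends on what is counted as overhead.

```python
# Share of the wire time carrying user data, per frame size.
HEADER_FCS = 18   # addresses, type/length field and checksum, bytes
PREAMBLE = 8      # preamble plus start-of-frame delimiter, bytes
IFG = 12          # interframe gap, expressed in byte times

def utilization(data_bytes: int) -> float:
    return data_bytes / (data_bytes + HEADER_FCS + PREAMBLE + IFG)

print(f"max frame (1500 B of data): {utilization(1500):.3f}")  # ~0.975
print(f"min frame (46 B of data):   {utilization(46):.3f}")    # ~0.548
# Service headers inside the data field push the usable share lower
# still, toward the 0.96 ceiling quoted above.
```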

Ethernet technology supports four different frame types with a common address format. Frame type recognition is automatic.

All Ethernet standards have the following characteristics and limitations:

  • nominal throughput - 10 Mbit/s;
  • the maximum number of PCs in the network is 1024;
  • the maximum distance between nodes in the network is 2500 m;
  • the maximum number of coaxial network segments is 5;
  • maximum segment length - from 100 m (for 10Base-T) to 2000 m (for 10Base-F);
  • the maximum number of repeaters between any network stations is 4.

Token Ring technology (the 802.5 standard). It uses a shared transmission medium consisting of cable segments connecting all the network's PCs into a ring. Deterministic access is applied to the ring (a common shared resource), based on passing the right to use the ring from station to station in a certain order; the right is conveyed by a token. The token access method guarantees each PC access to the ring within the token rotation time. A priority system of token ownership is used, from 0 (lowest priority) to 7 (highest). The priority of the current frame is determined by the sending station itself, which may seize the ring if no frames of higher priority are circulating in it.

Token Ring networks use shielded and unshielded twisted pair and fiber optic cable as the physical transmission medium. They operate at two bit rates, 4 and 16 Mbit/s, and within one ring all PCs must operate at the same speed. The maximum ring length is 4 km, and the maximum number of PCs in a ring is 260. The limits on ring length are related to the token's rotation time: if there are 260 stations in the ring and each station may hold the token for 10 ms, the token completes a full rotation and returns to the active monitor in 2.6 s. When transmitting a long message divided, say, into 50 frames, the message will reach the recipient, in the best case (when only the sending PC is active), after about 130 s, which is not always acceptable to users.
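The arithmetic behind these figures, spelled out with the assumptions of the paragraph above (260 stations, a 10 ms hold time, a 50-frame message, one frame per token visit):

```python
# Token rotation time and delivery time for a long message.
stations = 260
hold_time_s = 0.010                    # token holding time per station
rotation_s = stations * hold_time_s
print(f"full token rotation: {rotation_s:.1f} s")       # 2.6 s

frames = 50                            # message split into 50 frames
# The sender places one frame per token visit, so the last frame
# goes out after about 50 full rotations:
print(f"50-frame message delivered in ~{frames * rotation_s:.0f} s")  # ~130 s
```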

The maximum frame size is not defined in the 802.5 standard; it is usually taken to be 4 KB for 4 Mbit/s networks and 16 KB for 16 Mbit/s networks.

16 Mbit/s networks also use a more efficient ring access algorithm, early token release (ETR): a station passes the access token to the next station immediately after finishing transmitting the last bit of its frame, without waiting for its frame and the busy token to travel around the whole ring. In that case frames from several stations travel the ring simultaneously, which significantly improves utilization of the ring's capacity. Of course, even then, at any given moment only the PC currently holding the access token can place a new frame onto the ring; the other stations merely relay other stations' frames.

Token Ring technology (developed back in 1984 by IBM) is significantly more complex than Ethernet. It includes fault-tolerance mechanisms: thanks to the ring's feedback, one of the stations, the active monitor, continuously checks for the presence of the token and monitors the circulation times of the token and data frames; errors detected in the network are corrected automatically - a lost token, for example, can be regenerated. If the active monitor fails, a new active monitor is elected and the ring initialization procedure is repeated.

The Token Ring standard originally provided for building the network's connections through hubs called MAUs (Multistation Access Units). A hub can be passive (it interconnects its ports so that the PCs attached to them form a ring, and bypasses a port whose computer is switched off) or active (it also regenerates the signal and is therefore sometimes called a repeater).

Token Ring networks have a star-ring topology: PCs connect to hubs in a star, while the hubs themselves are chained through special Ring In (RI) and Ring Out (RO) ports to form a backbone physical ring. A Token Ring network can also be built from several rings separated by bridges that route frames to the recipient (each frame carries a field with its route through the rings).

Recently, through IBM's efforts, Token Ring technology has received a new development: a new variant, High-Speed Token Ring (HSTR), supports bit rates of 100 and 155 Mbit/s while preserving the main features of the 16 Mbit/s Token Ring technology.

FDDI technology. This was the first LAN technology to use fiber optic cable for data transmission. It appeared in 1988; its official name is Fiber Distributed Data Interface (FDDI). Nowadays, in addition to fiber, unshielded twisted pair is also used as the physical medium.

FDDI technology is designed for backbone connections between networks, for attaching high-performance servers, and for corporate and metropolitan networks. It therefore provides high transmission speed (100 Mbit/s), fault tolerance at the protocol level, and long distances between network nodes. All this affected the cost of attachment: the technology proved too expensive for connecting client computers.

There is significant continuity between Token Ring and FDDI: the main ideas of Token Ring were adopted, improved and further developed in FDDI technology.


Network technology is a minimal set of standard protocols and the software and hardware implementing them, sufficient to build a computer network. Network technologies are also called core technologies. Currently there are a huge number of networks at various levels of standardization, but such well-known technologies as Ethernet, Token Ring and ARCNET have become widespread.

To ensure coordinated operation in data networks, various data communication protocols are used - sets of rules and technical procedures that the transmitting and receiving parties must follow for consistent data exchange, and that allow computers, when networked, to communicate with each other.

There are many protocols. And although they all participate in the implementation of communication, each protocol has different goals, performs different tasks, and has its own advantages and limitations.

Protocols operate at different levels of the OSI/ISO open systems interconnection model. The functions of a protocol are determined by the layer at which it operates. Multiple protocols can work together; this is the so-called stack, or suite, of protocols.

Just as network functions are distributed across all layers of the OSI model, protocols operate together at different layers of the protocol stack. The layers in the protocol stack correspond to the layers of the OSI model. Taken together, the protocols provide a complete description of the functions and capabilities of the stack.

Data transmission over a network, from a technical point of view, must consist of successive steps, each of which has its own procedures or protocol. Thus, a strict sequence in performing certain actions is maintained.

In addition, all these steps must be performed in the same sequence on each network computer. On the sending computer, actions are performed in a top-down direction, and on the receiving computer, from bottom to top.

In accordance with the protocol, the sending computer: breaks the data into small blocks, called packets, that the protocol can work with; adds address information to the packets so the receiving computer can determine that the data is intended for it; and prepares the data for transmission through the network adapter card and on into the network cable.

The receiving computer, in accordance with the protocol, performs the same actions in reverse order: it receives the data packets from the network cable; passes them into the computer through the network adapter card; removes from each packet the service information added by the sending computer; copies the data from the packets into a buffer to combine them into the original block; and hands that block to the application in the format the application uses.
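A minimal sketch of this packetize-and-reassemble cycle; the three-byte header layout (destination, source, sequence number) is invented purely for illustration.

```python
# Splitting data into addressed, numbered packets and restoring it.
def packetize(data: bytes, size: int, src: int, dst: int) -> list[bytes]:
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # header: destination address, source address, sequence number
    return [bytes([dst, src, seq]) + chunk for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[bytes], my_addr: int) -> bytes:
    accepted = [p for p in packets if p[0] == my_addr]  # addressed to us?
    accepted.sort(key=lambda p: p[2])                   # restore the order
    return b"".join(p[3:] for p in accepted)            # strip the headers

pkts = packetize(b"hello, network!", size=4, src=1, dst=2)
assert reassemble(pkts, my_addr=2) == b"hello, network!"
```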

Both the sending computer and the receiving computer need to perform each action in the same way so that the data received over the network matches the data sent.

If, for example, two protocols break data into packets and add information (packet sequencing, timing, and error checking) differently, then a computer running one of those protocols will not be able to communicate successfully with a computer running the other protocol.

Until the mid-80s, most local networks were isolated. They served individual companies and were rarely combined into large systems. However, when local networks reached a high level of development and the volume of information transmitted by them increased, they became components of large networks. Data transmitted from one local network to another along one of several possible routes is said to be routed. Protocols that support data transfer between networks over multiple routes are called routed protocols.

Among the many protocols, the most common are the following:

  • IPX/SPX and NWLink;

  • the OSI protocol suite.

At the moment, Ethernet is the most common technology in local networks. More than 10 million local networks and more than 100 million computers with network cards supporting this technology operate on its basis. There are several subtypes of Ethernet depending on the speed and the types of cable used.

One of the founders of this technology is Xerox, which developed and built the experimental Ethernet Network in 1975. Most of the principles implemented in that network are still in use today.

Gradually the technology improved to meet growing user demands. This led it to expand to transmission media such as optical fiber and unshielded twisted pair.

The impetus for adopting these cable systems was the rapid growth in the number of local networks in various organizations, as well as the low performance of local networks built on coaxial cable. At the same time a need arose for convenient and cost-effective management and maintenance of these networks, which the legacy cabling could no longer provide.

Basic principles of Ethernet operation. All computers on the network are connected to a common cable, called the common bus. The cable is the transmission medium and can be used by any computer on the network to receive or transmit information.

Ethernet networks use a packet method of data transfer. The sending computer selects the data to be sent and converts it into short packets (sometimes called frames) containing the sender's and recipient's addresses. Each packet is supplemented with service information: a preamble (marking the start of the packet) and a checksum, needed to verify that the packet was transmitted over the network correctly.
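A sketch of such framing: real Ethernet protects the frame with a CRC-32 checksum, for which Python's zlib.crc32 stands in here; the addresses and payload are invented.

```python
# Building a frame with a preamble and a verifiable checksum.
import struct
import zlib

PREAMBLE = b"\x55" * 7 + b"\xd5"   # alternating bits, then start delimiter

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    body = dst + src + payload
    fcs = struct.pack(">I", zlib.crc32(body))     # frame check sequence
    return PREAMBLE + body + fcs

def frame_ok(frame: bytes) -> bool:
    body, fcs = frame[len(PREAMBLE):-4], frame[-4:]
    return struct.pack(">I", zlib.crc32(body)) == fcs

f = build_frame(b"\x02" * 6, b"\x01" * 6, b"data")
print(frame_ok(f))                 # True; a corrupted frame fails the check
```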

Before sending a packet, the sending computer checks the cable for the presence of a carrier frequency at which transmission takes place. If no such carrier is observed, it begins transmitting the packet onto the network.

The packet will be received by the network cards of all computers connected to that network segment. The cards check the packet's destination address. If the destination address does not match the computer's own address, the packet is discarded without processing; if it matches, the computer accepts and processes the packet, stripping all service data from it and passing the needed information "up" the layers of the OSI model, all the way to the application layer.

After transmitting a packet, the computer waits out a short pause of 9.6 µs, after which it repeats the transmission algorithm for the next packet until all the data has been transported. The pause is needed so that no single computer can physically monopolize the network while transmitting a large amount of information. During this technological pause, the channel can be used by any other computer on the network.

If two computers check the channel simultaneously and both attempt to send packets over the common cable, a collision occurs: the contents of the two frames clash on the cable, which severely distorts the transmitted data.

After a collision is detected, each transmitting computer must stop transmitting for a short, random interval of time.

An important condition for correct network operation is that all computers recognize a collision simultaneously. If a transmitting computer fails to detect the collision and concludes that its packet was transmitted correctly, the packet will simply be lost: heavily distorted, it will be rejected by the receiving computer (checksum mismatch).

The lost or corrupted information will most likely be retransmitted by an upper-layer protocol that establishes connections and identifies its messages. Note, however, that the retransmission will occur after a rather long interval (tens of seconds), significantly reducing the effective throughput of the network. That is why timely collision recognition is so important for stable network operation.

All Ethernet parameters are chosen so that collisions are always clearly identified. That is why the minimum length of the frame data field is 46 bytes (72 bytes, or 576 bits, counting service information). The length of the cable system is calculated so that, during the time a minimum-length frame is being transmitted, a collision signal can reach the most remote computer on the network. Hence, at 10 Mbit/s, the maximum distance between any two network elements cannot exceed 2500 m. The higher the data rate, the shorter the maximum network length (it decreases proportionally): with Fast Ethernet the maximum distance is limited to 250 m, and with Gigabit Ethernet to 25 m.
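The same reasoning as arithmetic, using only the frame and rate figures quoted above:

```python
# How long a minimum-size frame stays on the wire at each speed. The
# collision signal must return within this window, so the maximum
# network diameter shrinks in proportion to the rate.
MIN_FRAME_BITS = 576   # 72 bytes including service information

for rate_mbps, name, diameter_m in [(10, "Ethernet", 2500),
                                    (100, "Fast Ethernet", 250),
                                    (1000, "Gigabit Ethernet", 25)]:
    tx_time_us = MIN_FRAME_BITS / rate_mbps   # Mbit/s = bit per microsecond
    print(f"{name}: frame lasts {tx_time_us:6.2f} us -> diameter {diameter_m} m")
```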

Thus, the probability of successfully obtaining the shared medium depends directly on the network load.

Constantly growing demands on network throughput led to the development of Ethernet technology with transmission speeds above 10 Mbit/s. In 1992 the Fast Ethernet specification appeared, supporting information transport at 100 Mbit/s (it was approved as IEEE 802.3u in 1995). Most of Ethernet's operating principles remained unchanged.

Some changes affected the cable system: coaxial cable could not provide an information transfer rate of 100 Mbit/s, so in Fast Ethernet it is replaced by unshielded twisted pair and fiber optic cable.

There are three types of Fast Ethernet:

The 100Base-TX standard uses two cable pairs, UTP or STP. One pair is needed for transmission, the other for reception. Two cable standards meet these requirements: Category 5 EIA/TIA-568 UTP and IBM's Type 1 STP. 100Base-TX offers full-duplex operation when working with network servers, and it uses only two of the four pairs of an eight-core cable, leaving the other two pairs free for future expansion of the network's functionality (for example, a telephone network can be organized on their basis).

The 100Base-T4 standard allows the use of Category 3 as well as Category 5 cable. This is because 100Base-T4 uses all four pairs of the eight-core cable: one for transmission, one for reception, and the remaining two for either transmission or reception. Accordingly, both reception and transmission can each proceed over three pairs at once. Spreading the total 100 Mbit/s bandwidth over three pairs lets 100Base-T4 lower the signal frequency, so lower-quality cable suffices for normal operation. Category 3 and Category 5 UTP cables can be used for 100Base-T4 networks, as can Category 5 UTP and Type 1 STP.

The 100Base-FX standard uses multimode optical fiber with a 62.5-micron core and 125-micron cladding for data transmission. It is intended for backbones connecting Fast Ethernet repeaters within the same building. The main advantages of optical cable carried over to 100Base-FX: immunity to electromagnetic noise, an increased level of information security, and longer distances between network devices.

For a long time, the FireWire high-speed serial interface (also known as IEEE 1394) was used primarily for streaming video processing - which is, in general, what it was originally designed for. However, its throughput of 400 Mbit/s, high even by today's standards, made it quite effective for modern high-speed peripheral devices, as well as for organizing small high-speed networks.

Thanks to WDM driver support, the FireWire interface has been supported by operating systems since Windows 98 Second Edition. Built-in support, however, first appeared in Windows Millennium, and is now present in Windows 2000 and Windows XP. All these operating systems except Windows 98SE also support hot network installation. If a FireWire controller is present in the system, Windows automatically installs a virtual network adapter, with the ability to directly access and modify standard network settings.

By default, a FireWire network supports the TCP/IP protocol, which is quite sufficient for most modern networking tasks - for example, the Internet Connection Sharing function built into Microsoft's operating systems.

FireWire provides a significant speed advantage over a standard 100BaseT Ethernet network. But that is not its main advantage. More important is the ease of creating such a network, accessible even to a user of modest skill. Its versatility and low cost are also worth noting.

The main disadvantage of a FireWire network is the limited cable length: under the specification, for operation at 400 Mbit/s the cable must not exceed 4.5 meters. Various kinds of repeaters are used to work around this problem.

A few years ago, a new Ethernet standard was developed - Gigabit Ethernet. It is not yet widely deployed. Gigabit Ethernet uses optical channels and shielded twisted pair as its transport medium. Such a medium raises data transfer rates tenfold, a necessary condition for video conferencing or for running complex programs that handle large volumes of information.

This technology follows the same principles as the earlier Ethernet standards. In addition, a network based on shielded twisted pair can be migrated to Gigabit Ethernet by replacing the network cards and network equipment in use. 1000Base-X comprises three physical interfaces, whose parameters and characteristics are given below:

The 1000Base-SX interface defines lasers with a permissible wavelength in the range of 770-860 nm and a transmitter power between -10 and 0 dBm, with an ON/OFF (signal present/signal absent) ratio of at least 9 dB. The receiver sensitivity is -17 dBm, and its saturation point is 0 dBm.

The 1000Base-LX interface defines lasers with a permissible wavelength in the range of 1270-1355 nm and a transmitter power between -13.5 and -3 dBm, with an ON/OFF (signal present/signal absent) ratio of at least 9 dB. The receiver sensitivity is -19 dBm, and its saturation point is -3 dBm.

1000Base-CX uses shielded twisted pair copper cable and is designed for transporting data over short distances. (The related copper standard 1000Base-T transmits over all four pairs of the cable, at 250 Mbit/s per pair.) Gigabit Ethernet is the fastest local network technology currently available, and soon enough most networks will likely be built on it.

Wi-Fi is a wireless communication technology; the name stands for Wireless Fidelity. It is designed for access over short distances at fairly high speeds. There are three modifications of the standard - IEEE 802.11a, b and g - which differ in data transfer speed and in the distance over which they can transmit: the maximum operating speeds are 54, 11 and 54 Mbit/s, respectively, with a transmission distance of about 100 meters. The technology is convenient in that connecting computers into a network requires little effort and avoids the inconveniences of laying cables. Today the service can be used in cafes, airports, parks, and so on.

USB network. Intended mainly for laptop users, because a network card for a laptop can be quite expensive. The convenience is that the network can be created without network cards or hubs; it is versatile, and any computer can be connected. The data transfer speed is 5-7 Mbit/s.

Local network over 220 V electrical wiring. Electrical networks cannot be compared with local and global networks, but there is an electrical outlet in every apartment, in every room. You can string tens of meters of cable around the house, connecting all the computers, printers and other network devices - but then each computer becomes a fixed "workplace", permanently tied to its room: moving it means moving the network cable. You can install an IEEE 802.11b wireless network at home, but there may be problems with signal penetration through walls and ceilings, and besides, it is extra radiation, of which modern life already has enough. There is another way - to use the existing electrical wiring and wall sockets. All you need for this is the appropriate adapters. The network connection speed over electrical wiring is 14 Mbit/s, with a range of approximately 500 meters. Bear in mind, though, that the distribution network is three-phase, and a house receives one phase and a neutral, with the phases loaded evenly. So if one user is connected to one phase and a second user to another, such a system will not work.

A comparative analysis of local area network technologies is presented in Appendix B.

Basic technologies of local networks

To simplify and reduce the cost of hardware and software, local networks most often use monochannels, shared by all computers on the network in time-sharing mode (hence their second name, shared channels). A classic example of a monochannel is the channel of a bus-topology network. Ring topologies and radial topologies with a passive center also use monochannels since, although each network node adjoins its own segment, the segments of neighboring nodes cannot be accessed at an arbitrary moment: they are used only as a whole, together with the entire shared channel, by all computers on the network according to a specific algorithm. Moreover, at any given time a monochannel belongs to only one computer. This approach simplifies the logic of network operation, since there is no need to protect nodes from being flooded with packets from many stations deciding to transmit simultaneously; in global networks, very complex algorithms are needed for such control.

But having only one data transmission channel shared by all subscribers limits system throughput. Therefore, modern networks use communication devices (bridges, routers) that divide a shared network into subnetworks (segments) that can operate autonomously, exchanging data with one another as needed. At the same time, the control protocols in the LAN remain the same as those used in undivided networks.

The protocols of the two lower layers of the OSI model have received the greatest development in local networks. Moreover, in networks using a monochannel, the link-layer protocols are divided into two sublayers:

· the logical link control sublayer - LLC (Logical Link Control);

· the media access control sublayer - MAC (Media Access Control).

The logical link control sublayer is the same for most protocols, including the IEEE 802.x family, which contains the main LAN protocols. (The main LAN protocols include IEEE 802.2, the LLC protocol, and the MAC access protocols: IEEE 802.3/Ethernet - these two are almost identical - IEEE 802.4 Token Bus, IEEE 802.5 Token Ring, and others.)

The currently most widespread technology, Ethernet (the number of networks using it exceeds 5 million, with more than 50 million computers in those networks), was created in the late 70s and in its original version used coaxial cable as the communication line. Later, many modifications designed for other media were developed. Ethernet and IEEE 802.3 are similar in many ways; the latter supports not only the common-bus topology but also the star topology. Ethernet uses a random access method (the contention method), and its popularity is due to its reliable, simple and inexpensive technology.

IEEE 802.5/Token Ring technology supports ring (main) and radial (additional) network topologies, with access to the monochannel by token passing (also called the deterministic token method). This technology is significantly more expensive and more complex to implement than Ethernet, but it is also quite common.

ARCNET technology (Attached Resource Computer Network) is a relatively inexpensive, simple and reliable technology used only in networks of personal computers. It supports a variety of communication lines, including coaxial cable, twisted pair and fiber optic cable. The topologies it serves are radial and bus, with access to the monochannel by token passing.

FDDI technology (Fiber Distributed Data Interface) is largely based on Token Ring technology but is oriented to fiber optic communication lines (unshielded twisted pair can also be used) and provides data transmission around a ring up to 100 km long, with a maximum of 500 nodes, at 100 Mbit/s. A deterministic token access method without prioritization is used. Because of its high cost, the technology is implemented mainly in backbone channels and large networks.


