The OSI model and the TCP/IP stack

"A common language is a first step towards communication across cultural boundaries."
- Ethan Zuckerman

In communication, it is critical that both parties share a common language and semantics for the communication to be effective. In human communication, this shared understanding is a common language; in computer networking, it is a protocol. As discussed in the previous section, with the advent of computer networking, many vendors came out with their own proprietary protocols for computers to talk to each other. This led to interoperability issues between computer systems, and networking was limited to devices from the same vendor. You can't get a person who knows only Chinese to effectively communicate with a person who knows only Russian!

International standardization bodies were working to evolve an open, common framework that could be used by all devices that needed to communicate with each other. These efforts led to the development of the Basic Reference Model for Open Systems Interconnection, commonly known as the OSI reference model. It was jointly developed by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT, abbreviated from the French Comité Consultatif International Téléphonique et Télégraphique), which later became the ITU-T.

We will broadly describe the OSI model in the next section, and then dive deeper into the TCP/IP model, which will help clarify some of the concepts that might appear vague in the OSI discussion, since the OSI model is only a reference model and does not standardize specific interfaces or protocols.

OSI had two major components as defined in the ISO/IEC 7498-1 standard:

  • An abstract model of networking, called the Basic Reference Model or seven-layer model
  • A set of specific protocols defined by other specifications within ISO

Basic OSI reference model

The communicating entities perform a variety of different functions during the communication process. These functions include creating a message, formatting the message, adding information that helps detect errors during transmission, sending the data on the physical medium, and so on.

The OSI reference model defines a layered model for interconnecting systems, with seven layers. The layered approach allows the model to group similar functions within a single layer, and provides standard interfaces allowing the various layers to talk to each other.

Figure 1 shows the seven layers of the OSI model. It is important to note that the reference model defines only the functions of each layer and its interfaces with the adjoining layers. The OSI model neither standardizes the interfaces between the various layers within a system (these were subsequently standardized by other protocol specifications) nor delves into the internals of each layer, that is, how its functions are implemented.

The OSI model describes the communication flow between two entities as follows:

  • The layers have a strict peering relationship: a layer at a particular level communicates with its peer layer on other nodes through a peering protocol. For example, data generated at layer 3 of one node is received by layer 3 at the other node, with which it has a peering relationship.
  • The peering relationship can be between two adjacent devices, or across multiple hops. For example, because the intermediate node in Figure 1 has only layers 1 through 3, the peering relationship at layer 7 is between layer 7 at the transmitting node and layer 7 at the receiving node, which are not directly connected but are multiple hops apart.
  • The data to be transmitted is composed at the application layer of the transmitting node and will be received at the application layer of the receiving node.
  • The data will flow down the OSI-layered hierarchy from layer 7 to layer 1 at the transmitting node, traverse the intermediate network, and flow up the layered hierarchy from layer 1 to layer 7 at the receiving node. This implies that within a node, the data can be handed over by a layer to its adjacent layer only. Each layer will perform its designated functions and then pass on the processed data to the next layer:
Figure 1: The OSI reference model

The high-level functions of each layer are described as follows:

Layer 1 - The physical layer

The primary function of this layer is to place the bit stream on the physical medium by converting it into electrical/optical impulses or radio signals. This layer provides the physical connection to the underlying medium and also provides the hardware means to activate, maintain, and de-activate physical connections between data link entities. This includes sequencing of the bit stream, identifying channels on the underlying medium, and optionally multiplexing. This should not be confused with the actual medium itself.

Some of the protocols that have a layer 1 component are Ethernet, G.703, FDDI, V.35, RJ45, RS232, SDH, DWDM, OTN, and so on.

Layer 2 - The data link layer

The data link layer acts as the driver of the physical layer and controls its functioning. The data link layer sends data to the physical layer at the transmitting node and receives data from the physical layer at the receiving node. It also provides detection and correction of errors that might have occurred during transmission/reception on the physical medium, and defines the process for flow control between the two nodes to avoid buffer overruns on either side of the data link connection. In Ethernet, this can be done using PAUSE frames, and it should not be confused with flow control at the higher layers.

Some of the protocols that operate at the data link layer are LAPB, 802.3 Ethernet, 802.11 Wi-Fi, 802.15.4 ZigBee, X.25, the Point-to-Point Protocol (PPP), HDLC, SLIP, ATM, Frame Relay, and so on.

Layer 3 - The network layer

The basic service of the network layer is to provide the transparent transfer of datagrams between the transport layers at the two nodes. This layer is also responsible for finding the right intermediate nodes that might be required to send data to the destination node, if the destination node is not on the same network as the source node. This layer also breaks datagrams down into smaller fragments if the underlying data link layer cannot handle the size of the datagram offered to the network layer for transport.

A fundamental concept in the OSI stack is that data should be passed to a higher layer at the receiving node exactly as it was handed over to the lower layers by the transmitting peer. As an example, the TCP layer passes TCP segments to the IP layer, and the IP layer might use the services of the lower layers in ways that lead to packets being fragmented on the way to the destination; but when the IP layer passes the data to the TCP layer at the receiving node, the data should be in the form of the TCP segments that were handed down to the IP layer at the transmitting end. To ensure this transparent transfer of datagrams to the TCP layer at the receiving node, the network layer at the receiving node reassembles all the fragments of a single datagram before handing it over to the transport layer.

The OSI model describes both connection-oriented and connectionless modes of the OSI network layer.

Connection-oriented and connectionless modes describe the readiness of the communicating nodes before the actual data transfer between the two nodes takes place. In the connection-oriented mode, a connection is established between the source and the destination, and a path is defined along the network through which the actual data transfer happens. A telephone call is a typical example of this mode, where you cannot talk until a connection has been established between the calling number and the called number.

In the connectionless mode of data transfer, the transmitting node just sends the data on the network without first establishing a connection, verifying whether the receiving end is ready to accept data, or even checking whether the receiving node is up. In this mode, there is no connection or path established between the source and the destination, and data generally flows in a hop-by-hop manner, with a decision being taken on the best path towards the destination at every hop. Since data is sent without any validation of the receiving node's status, there is no acknowledgement of data in a connectionless mode of data transfer. This is unlike the connection-oriented mode, where the path is defined the moment a connection is established, all data flows along that path, and the data transfer is acknowledged between the two communicating nodes.

Since data packets in a connection-oriented mode follow a fixed path to the destination, the packets arrive in the same sequence at the receiver in which they were transmitted. On the other hand, packets in the case of a connectionless network might reach the receiver out of sequence if the packets are routed on different links on the network, as decisions are taken at every hop.

The OSI standard defined the network layer to provide both modes. In practice, however, most network layer services were implemented in the connectionless mode at layer 3, and the connection-oriented aspects were left to layer 4. We will discuss this further during our discussion on TCP/IP.
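As a concrete illustration of the two modes, here is a minimal sketch using Python's standard socket library; the address 192.0.2.10 and port 5000 are placeholders, not real services. A connection-oriented (TCP) transfer must establish a connection before any data flows, whereas a connectionless (UDP) transfer simply sends the datagram:

import socket

# Connection-oriented (TCP): a connection must be established before data can
# flow. connect() triggers the TCP three-way handshake and raises an error if
# the peer is unreachable or not listening on the port.
try:
    with socket.create_connection(("192.0.2.10", 5000), timeout=3) as tcp_sock:
        tcp_sock.sendall(b"hello over a connection")
except OSError as exc:
    print(f"TCP connection could not be established: {exc}")

# Connectionless (UDP): no handshake and no delivery guarantee. sendto()
# succeeds locally even if nothing is listening at the destination.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"hello without a connection", ("192.0.2.10", 5000))
udp_sock.close()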

Some of the protocols that operate at the network layer are AppleTalk, DDP, IP, IPX, CLNP, IS-IS, and so on.

Layer 4 - The transport layer

The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host via one or more networks. This layer has end-to-end significance and provides a connectionless or connection-oriented service to the session layer. This layer is responsible for connection establishment, management, and release.

The transport layer controls the reliability of a given connection through end-to-end flow control, segmentation/reassembly, and error control. This layer also provides the multiplexing function of carrying multiple data connections over a single network layer.

Some protocols operating at the transport layer are TCP, UDP, SCTP, NBF, and so on.

Layer 5 - The session layer

The primary purpose of the session layer is to coordinate and synchronize the dialog between the presentation layers at the two endpoints and to manage their data exchange. This layer establishes, manages, and terminates the connections, conversations, and dialogues between the applications at each end.

Some of the protocols operating at the session layer are sockets, NetBIOS, SAP, SOCKS, RPC, and so on.

Layer 6 - The presentation layer

The presentation layer provides a common representation of the data transferred between application entities, and provides independence from differences in data representation/syntax. This layer is also sometimes referred to as the syntax layer. The presentation layer works to transform data into the form that the application layer can accept. This layer is also responsible for encryption and decryption for the application data.

Some examples of protocols at the presentation layer are MIME, ASCII, GIF, JPEG, MPEG, MIDI, SSL, and so on.

Layer 7 - The application layer

The application layer is the topmost layer of the OSI model, and has no upper-layer protocols. Software applications that need to communicate with other systems interact directly with the OSI application layer. This layer is not to be confused with the application software itself, which is the program the user runs; for example, HTTP is an application layer protocol, while Google Chrome is a software application.

The application layer provides services directly to user applications. It enables the users and software applications to access the network and provides user interfaces and support for services such as email, remote file access and transfer, shared database management, and other types of distributed information services.

Some examples of application layer protocols are HTTP, SMTP, SNMP, FTP, DNS, LDAP, Telnet, and so on.
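To make the distinction between an application layer protocol and application software concrete, here is a minimal sketch (Python standard library only; example.com is used as a placeholder host) that speaks HTTP directly over a TCP socket, which is what a browser does on the user's behalf:

import socket

# Open a TCP connection to a web server and send a bare HTTP/1.1 request.
# The HTTP request/response exchange is the application layer protocol; the
# program issuing it (a browser, curl, or this script) is the application software.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, e.g. HTTP/1.1 200 OK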

The TCP/IP model

The Advanced Research Projects Agency Network (ARPANET), which was initially funded by the US Department of Defense (DoD), was an early packet-switching network and the first network to implement the TCP/IP protocol suite. ARPANET was the test bed of the TCP/IP protocol suite, which resulted in the TCP/IP model, also known as the DoD model.

The TCP/IP model is a simplified version of the OSI model and has only four broad layers instead of the seven layers of the OSI model. Figure 2 shows the comparison between the two models. As can be seen from the following figure, the top three layers of the OSI model have been combined into a single application layer, and the physical and data link layers have been combined into a network access layer:

Figure 2: Comparing the OSI model with TCP/IP model

Some of the major differences between the two models are as follows:

  • The functions of the application layer in the TCP/IP model include the functions of the application, presentation, and session layers of the OSI model
  • The OSI session layer functions of graceful close and end-to-end connection setup, management, and release are taken over by the TCP/IP transport layer (Transmission Control Protocol)
  • The network access layer combines the functions of the OSI data link and physical layers
  • The network layer in the OSI model can be connection-oriented or connectionless, while the Internet Protocol (IP) is a connectionless protocol
  • The transport layer in the OSI model is connection-oriented, whereas different protocols at the transport layer in the TCP/IP model provide different types of services; for example, TCP provides a connection-oriented service, while UDP provides a connectionless service

Let's explore what happens when data moves from one layer to another in the TCP/IP model, taking Figure 3 as an example. When data is given to a software application, for example, a web browser, the browser sends this data to the application layer, which adds an HTTP header to the data. This is known as application data. This application data is then passed on to the TCP layer, which adds a TCP header to it, creating a TCP segment. This segment is then passed on to the network layer (IP layer), where the IP header is added to the segment, creating an IP packet or IP datagram. This IP packet is then encapsulated by the data link layer, which adds a data link header and trailer, creating a frame. This frame is then transmitted onto the transmission medium as a bit stream in the form of electrical/optical/radio signals, depending upon the physical media used for communication:

Figure 3: Data flow across the TCP/IP layers
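The following sketch illustrates this layering in code. It is a minimal example that assumes the third-party Scapy library is installed; the IP addresses and ports are placeholders from the documentation ranges, and the packet is only built and inspected, not sent:

from scapy.all import Ether, IP, TCP, Raw

# Application data (an HTTP request) handed down the stack.
http_payload = Raw(load=b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")

# Each '/' wraps the payload in the next lower layer's header, mirroring
# Figure 3: application data -> TCP segment -> IP packet -> Ethernet frame.
segment = TCP(sport=49152, dport=80) / http_payload
packet = IP(src="192.0.2.1", dst="198.51.100.1") / segment
frame = Ether() / packet

frame.show()              # prints the headers layer by layer
print(len(bytes(frame)))  # total length of the resulting frame in bytes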

A simplified stack showing some protocols in the TCP/IP stack is shown in the following figure:

Figure 4: Common protocols in the TCP/IP stack

Let's delve deeper into the TCP/IP model by looking at the TCP/IP headers in some more detail.

Internet Protocol (IP)

The Internet Protocol (IP), as it is commonly known, was developed by Bob Kahn and Vinton Cerf, and operates at layer 3 (the network layer) of the OSI model. The primary function of IP is to transfer datagrams from source to destination and provide a network transport service. As noted in the preceding section, IP as defined in the TCP/IP model operates in a connectionless mode, and hence is sometimes referred to as a "send and pray" protocol: there is no acknowledgement or guarantee that the IP datagrams sent by the source have been received by the destination. That function is left to the upper layers of the protocol stack.

Figure 5 shows the structure and fields of an IPv4 header. The IPv4 header is defined in the IETF standard RFC 791. The header is prepended by the network layer to the TCP/UDP segments handed down to it. The length of the header is always a multiple of 4 bytes. The header consists of multiple fields that are outlined in the following figure.

The length of each field of the IPv4 header in bits is shown in Figure 5 in parentheses after the name of the field:

Figure 5: IPv4 packet format

We will now describe the fields briefly; a small Python sketch that parses these fields follows the list:

  • Version (4): This is a 4-bit field and indicates the IP version being used. The version for the header depicted in Figure 5 is version 4. There is a newer version of IP, called IP version 6 or IPv6, which has a different header format and is discussed later.
  • Header Length: This is again a 4-bit field, and encodes the length of the IP header in 4-byte words. This means that if the IPv4 header has no options, the header would be 20 bytes long, and hence would consist of five 4-byte words. Hence, the value of the header length field in the IP header would be 5. This field cannot have a value less than 5 as the fields in the first 20 bytes of the IPv4 header are mandatory.
  • DSCP: Differentiated Services Code Point (DSCP) is a 6-bit field in the IPv4 header and is used to encode the Quality of Service (QoS) required by the IP datagram on the network. This field defines whether the packet is treated as a priority packet on the network, or may be discarded if there is congestion on the network. This field was not in the original RFC for IP, but was added later by RFC 2474 to support differentiated models for QoS on IP networks. We will discuss this in detail in the chapter on QoS implementation.
  • ECN: Explicit Congestion Notification (ECN) is a 2-bit field defined by RFC 2481; the latest standard for this at the time of writing is RFC 3168. This field is used to explicitly notify the end hosts if the intermediate devices have encountered congestion, so that the end devices can slow down the traffic being sent on the network by lowering the TCP window. This helps manage congestion on the network even before the intermediate devices start to drop packets due to queue overruns.
  • Total Length: This is a 16-bit field that encodes the total length of the IP datagram in bytes. The total length of the IP datagram is the length of the payload (for example, a TCP segment) plus the length of the IP header. Since this is a 16-bit field, the total length of a single IP datagram can be up to 65,535 bytes (2^16 - 1). The most commonly used length for an IP datagram on the network is 1,500 bytes. We will delve deeper into the impact of IP datagram size in the later chapters while discussing the impact on the WAN.
  • Identification (ID): This 16-bit value uniquely identifies an IP datagram for a given source address, destination address, and protocol, such that it does not repeat within the maximum datagram lifetime, which is set to 2 minutes by the TCP specification (RFC 793). RFC 6864 has made some changes to the original field's semantics that are relevant only at high data rates and in networks where fragmentation occurs. These issues will be discussed in the later chapters.
  • Flags: There are three flags in the IPv4 header, as shown in Figure 6. Each flag is one bit in length. The flags are used when the IP layer needs to send a datagram of a length that cannot be handled by the underlying data link layer. In this case, the intermediate nodes can fragment the datagram into smaller ones, which are reassembled by the IP layer at the receiving node before being passed on to the TCP layer. The flags control this fragmentation behavior:
Figure 6: Flags in IPv4 header
    • MBZ: This stands for Must Be Zero (MBZ); this bit is always sent as 0 on the network.
    • DF: This stands for the Don't Fragment (DF) bit, which, if set to 1, means that this packet should not be fragmented by the intermediate nodes. If an intermediate node needs to fragment such a packet, it discards the packet instead and sends an error message to the transmitting node using the Internet Control Message Protocol (ICMP).
    • MF: This stands for the More Fragments (MF) bit, which, if set to 1, signifies that this is a fragmented packet and there are more fragments of the original datagram. The last fragment and an unfragmented packet have the MF bit set to 0.
  • Fragment Offset: This 13-bit field is used only by fragmented packets to denote where in the original datagram the fragment belongs. The first fragment has an offset of 0, and each subsequent fragment carries its position within the original datagram, measured in units of 8 bytes.
  • Time To Live (TTL): This 8-bit field denotes the maximum number of intermediate nodes that can process the packet at the IP layer. Each intermediate node decrements the value by 1 to ensure that the IP packet does not get caught in an infinite routing loop, endlessly going back and forth between nodes. When the field reaches zero, the node discards the packet and sends an error message to the source of the datagram as an ICMP message.
  • Protocol: This 8-bit field denotes which upper layer protocol is encapsulated in the IP packet. Since the IP layer multiplexes multiple upper layer protocols, for example, UDP, TCP, OSPF, ICMP, IGMP, and so on, this field acts as a demultiplexing identifier that tells the receiving node which upper layer the payload should be handed to. The values for this field were originally defined in RFC 1700, which is now obsolete and has been replaced by an online registry. Some of the common values for the protocol field are shown in the following figure:
Figure 7: Some common IP protocol numbers
  • Header Checksum: This 16-bit field is used for checking the integrity of the received IP header. The value is calculated using an algorithm covering all the fields in the header (assuming this field to be zero for the purpose of the calculation). The checksum is calculated and stored in the header when the IP datagram is sent from source to destination; at the destination, the checksum is calculated again and verified against the checksum present in the header. If the values match, the header was not corrupted; otherwise, it is assumed that the datagram was received corrupted.
  • Source IP address and Destination IP address: These 32-bit fields contain the source and destination IP addresses respectively. Since the length of an IPv4 address is 32 bits, this field length was set to 32 bits. With the introduction of IPv6, which has a 128-bit address, this cannot fit in this format, and there is a different format for an IPv6 header.
  • Options: This optional, variable-length field carries IP options such as Strict Source Routing, Loose Source Routing, and Record Route, which are used for troubleshooting and by certain other protocols.
  • Padding: This is a field that is used to pad the IP header to make the IPv4 header length a multiple of 4 bytes, as the definition of the Header Length field mandates that the IPv4 header length is a multiple of 4 bytes.
  • Data: This variable-length field contains the actual payload that is encapsulated at the IP layer, and consists of the data passed down by the upper layer transport protocols to the IP layer. The upper layer protocols attach their own headers as the data traverses down the protocol stack, as we saw in Figure 3: Data flow across the TCP/IP layers.
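The following minimal sketch decodes these fields from the fixed 20-byte portion of an IPv4 header using only Python's standard library. The sample header bytes are hand-crafted for illustration (the addresses come from the documentation ranges, and the checksum is left at zero):

import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte part of an IPv4 header (options are ignored)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_length": (ver_ihl & 0x0F) * 4,        # IHL is in 4-byte words
        "dscp": tos >> 2,
        "ecn": tos & 0x03,
        "total_length": total_len,
        "identification": ident,
        "df": (flags_frag >> 14) & 0x1,                # Don't Fragment flag
        "mf": (flags_frag >> 13) & 0x1,                # More Fragments flag
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # offset in bytes
        "ttl": ttl,
        "protocol": proto,                             # e.g. 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

# A hand-crafted sample: version 4, IHL 5, DF set, TTL 64, protocol 6 (TCP),
# total length 40 bytes, 192.0.2.1 -> 198.51.100.1, checksum left as zero.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0x00, 40, 1, 0x4000,
                     64, 6, 0, bytes([192, 0, 2, 1]), bytes([198, 51, 100, 1]))
print(parse_ipv4_header(sample))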

Transmission Control Protocol (TCP)

"The single biggest problem with communication is the illusion that it has taken place."
- George Bernard Shaw

As discussed in the previous section, IP provides a connectionless service. There is no acknowledgement mechanism in the IP layer, and IP packets are routed at every hop from the source to the destination. Hence, it is possible that some packets sent by the transmitting node are lost on the network due to errors, or are discarded by the intermediate devices due to congestion on the network. In the absence of a feedback mechanism, the receiving node will never receive the lost packets.

Further, if there are multiple paths on the network to reach the destination from the source, it is possible that packets will take different paths to reach the destination, depending upon the routing topology at a given time. This implies that packets can reach the receiving node out of sequence with respect to the sequence in which they were transmitted.

The TCP layer ensures that whatever was transmitted is correctly received. The purpose of the TCP layer is to ensure that the receiving host application layer sees a continuous stream of data as was transmitted by the transmitting node as though the two were connected through a direct wire. Since TCP provides that service to the application layer using the underlying services of the IP layer, TCP is called a connection-oriented protocol.

A typical TCP segment is shown in Figure 8, where the different fields of the TCP header are shown along with their lengths in bits in parentheses. A brief description of each field follows; a small Python sketch that parses these fields appears after the list:

Figure 8: Transmission Control Protocol (TCP) segment structure
  • Source Port/Destination Port: As discussed in the earlier sections, the transport layer provides the multiplexing function of carrying various data connections over a single network layer. The source port and destination port fields are 16-bit identifiers that are used to distinguish the upper layer applications. Some of the common TCP port numbers are shown in the following figure:
Figure 9: Common TCP Port Numbers
  • Sequence Number: This 32-bit field numbers the first byte of the payload data in this TCP segment relative to the overall byte stream being transmitted as part of the TCP session.
  • Acknowledgement Number: This 32-bit field is part of the feedback mechanism to the sender and is used to acknowledge how many bytes of the stream have been received successfully and in sequence. The acknowledgement number identifies the next byte that the receiving node expects on this TCP session.
  • Data Offset: This 4-bit field is used to convey how far from the start of the TCP header the actual message starts. Hence, this value indicates the length of the TCP header in multiples of 32-bit words. The minimum value of this field is 5.
  • Reserved: These bits are not used and are reserved for future use.
  • Control flags: Nine bits in the TCP header are reserved for control flags, giving nine 1-bit flags, as shown in Figure 10. Although these flags are carried in the order shown, we describe them in a different order for ease of understanding:
Figure 10: TCP control Flags
    • SYN: This 1-bit flag is used to initiate a TCP connection during the three-way handshake process.
    • FIN: This 1-bit flag is used to signify that there is no more data to be sent on this TCP connection, and can be used to terminate the TCP session.
    • RST: This 1-bit flag is used to reset (abort) the connection and helps maintain synchronization of the TCP session between the two hosts.
    • PSH: Push (PSH) is a 1-bit flag that tells the TCP receiver not to wait for the buffer to be full, but to deliver the data gathered so far to the upper layers.
    • ACK: This 1-bit flag is used to signify that the Acknowledgement Number field in the header is significant.
    • URG: Urgent (URG) is also a 1-bit flag; when set, it signifies that this segment contains urgent data and that the Urgent Pointer defines the location of that urgent data.
    • ECE: The ECN Echo (ECE) flag (1 bit) signals that the host is capable of using Explicit Congestion Notification, as described in the ECN field of the IP header. This flag is not part of the original TCP specification; it was added by RFC 3168.
    • CWR: This is also a 1-bit flag added by RFC 3168. The Congestion Window Reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set.
    • NS: This 1-bit flag is defined by the experimental RFC 3540, with the primary intention of allowing the sender to verify the correct behavior of the ECN receiver.
  • Window Size: This 16-bit field indicates the number of data octets, beginning with the one indicated in the acknowledgement field, which the sender of this segment is willing to accept. This is used to prevent buffer overruns at the receiving node.
  • Checksum: This 16-bit field is used for checking the integrity of the received TCP segment.
  • Urgent Pointer: The urgent pointer field is often set to zero and ignored, but in conjunction with the URG control flag, it can be used as a data offset to identify a subset of a message that requires priority processing.
  • Options: These are used to carry additional TCP options, such as the Maximum Segment Size (MSS) that the sender of the segment is willing to accept.
  • Padding: This field is used to pad the TCP header to make the header length a multiple of 4 bytes, as the definition of the Data Offset field mandates that the TCP header length be a multiple of 4 bytes.
  • Data: This is the data that is being carried in the TCP segment and includes the application layer headers.
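The following minimal sketch decodes the fixed 20-byte portion of a TCP header with Python's standard library; the sample bytes represent a hypothetical SYN segment and are crafted purely for illustration:

import struct

# Flag names ordered from bit 0 upwards (FIN is the least significant bit).
TCP_FLAGS = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR", "NS"]

def parse_tcp_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte part of a TCP header (options are ignored)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    data_offset = (offset_flags >> 12) & 0xF      # header length in 32-bit words
    flag_bits = offset_flags & 0x01FF             # NS plus the eight control flags
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "sequence": seq,
        "ack_number": ack,
        "header_length": data_offset * 4,         # in bytes
        "flags": [name for i, name in enumerate(TCP_FLAGS) if (flag_bits >> i) & 1],
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# A sample SYN segment from ephemeral port 49152 to port 80 (HTTP):
# data offset 5 (20-byte header), SYN flag set, window 64240.
sample = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0,
                     (5 << 12) | 0x0002, 64240, 0, 0)
print(parse_tcp_header(sample))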

Most of the traffic that we see on the internet today is TCP traffic. TCP ensures that application data is delivered from the source to the destination in the sequence in which it was transmitted, thus providing a connection-oriented service to the application. To this end, TCP uses acknowledgement and congestion control mechanisms that rely on the various header fields described earlier. At a very high level, if segments arrive at the receiving TCP layer out of sequence, the TCP layer buffers them and waits for the missing segments, asking the source to resend the data if required. This buffering, and the need to re-sequence segments, consumes processing resources and also adds delay at the receiver.

We live in a world where data/information is time sensitive, and loses value if delivered late. Consider seeing the previous day's newspaper at your doorstep one morning. Similarly, there are certain types of traffic that lose their value if delayed; this is usually voice and video traffic encapsulated in IP. Such traffic is time sensitive, and there is no point in providing acknowledgements and adding to the delay. Hence, this type of traffic is carried in the User Datagram Protocol (UDP), which is a connectionless protocol and does not use any retransmission mechanism. We will explore this more during our discussions on designing and implementing QoS.

User Datagram Protocol (UDP)

UDP is a protocol that provides a connectionless service to the application, and delivers data to the application layer as it is received, without worrying about lost parts of the application data stream or parts received out of order. A UDP packet is shown in Figure 11:

Figure 11: UDP packet structure

Since UDP provides fewer services than TCP, the packet has fewer fields and is much simpler. A UDP datagram can be of any length that can be encapsulated in an IP packet, and has a fixed 8-byte header. The different fields in the UDP packet are discussed as follows; a small sketch that builds a UDP header appears after the list:

  • Source Port/Destination Port: Like TCP, UDP also serves multiple applications and hence provides the multiplexing function to cater to the multiple applications that might want to use the services of the UDP layer. The source port and destination port fields are 16-bit identifiers that are used to distinguish the upper layer applications. Some of the common UDP port numbers are shown in the following figure:
Figure 12: Common UDP port numbers
  • Length: This 16-bit field represents the total size of the UDP datagram in bytes, including both the header and the data. The values range from a minimum of 8 bytes (the required header size) to a theoretical maximum of 65,535 bytes.
  • Checksum: Similar to TCP, this 16-bit field is used for checking the integrity of the received UDP datagram.
  • Data: This is the data that is being carried in the UDP packet and includes the application layer headers.
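As a minimal sketch of how simple the UDP header is, the following Python snippet (standard library only; the port numbers and payload are arbitrary examples) packs and unpacks the four 16-bit fields. The checksum is left at zero here for brevity, although in practice it is computed over a pseudo-header and the data:

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build the fixed 8-byte UDP header; checksum left at 0 (optional in IPv4)."""
    length = 8 + len(payload)             # header plus data, in bytes
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

payload = b"hello"
header = build_udp_header(5353, 53, payload)
src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length, checksum)         # 5353 53 13 0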

IP version 6

IPv6 is the newest version of the IP protocol. The current version, IPv4, has a limited number of IP addresses (2^32 addresses), and there was a need to connect many more hosts. IPv6 therefore uses a 128-bit address field compared to the 32-bit address field in IPv4, allowing 2^128 unique IP addresses. IPv6 also provides some new features and does away with some features of the IPv4 packet, such as in-network fragmentation and the header checksum. Figure 13 shows the IPv6 header and its various fields:

Figure 13: IPv6 header
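As a quick illustration of the larger address space, the following snippet uses Python's standard ipaddress module with an address from the IPv6 documentation prefix:

import ipaddress

# IPv6 addresses are 128 bits long, written as eight 16-bit hexadecimal groups.
addr = ipaddress.ip_address("2001:db8::1")   # documentation prefix example
print(addr.version)                          # 6
print(addr.exploded)                         # 2001:0db8:0000:0000:0000:0000:0000:0001

# Address space sizes: 2^32 for IPv4 versus 2^128 for IPv6.
print(2 ** 32)                               # 4294967296
print(2 ** 128)                              # 340282366920938463463374607431768211456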

We will not go into the details of IPv6 in this book, but will cover it as and when required during the discussion on design and implementation.