Network Traffic Management

Published on: April 1, 2003
Last Updated: April 1, 2003

What organization today isn’t involved with intranets – Web-based applications that deliver critical information to employees – or extranets, which provide electronic links with customers and suppliers? 

Enabled by an array of mature underlying technologies, including Web servers, browsers, and TCP/IP networks, intranet and extranet applications can deliver compelling business benefits.

Furthering the trend, there is currently much discussion about the uses and benefits of new technologies such as streaming video, XML (Extensible Markup Language), Web-based document imaging, Internet telephony and the like.

We are also hearing a lot about the potential cost reductions and service improvements that result from electronically connecting to customers and suppliers.

There is much less discussion, however, about the impact these functional changes are and will continue to have on network infrastructure and performance.

This article looks at the challenges involved in assuring that the networks underlying intranet and extranet applications are up to the task.

Rising Tide

Early intranets tended to focus on self-service applications such as human resources benefits administration.

But although these systems can offer striking returns on investment, they are tactical rather than strategic: there is no direct correlation between their performance and an organization’s ability to conduct its business.

By contrast, emerging examples of intranet-based applications are increasingly mission-critical.

They include, for example, providing access to data warehouses for real-time marketing support; supporting customer service representatives; presenting video information to financial traders to improve investment decisions; and sharing medical images for collaborating on complex diagnoses. In these cases, business success is jeopardized if adequate performance isn’t assured.

In the case of extranets, networks become electronic highways for passing information back and forth between an organization’s stakeholders: buyers, sellers, suppliers and partners.

The exchanged information can include orders, inquiries, payments, service and support, designs, plans and any other type of information.

Regardless of whether the exchange involves simple text or compound documents incorporating images, audio and video streams, the relationship between stakeholders can be enhanced by connections that provide adequate performance – or undermined by inadequate performance.

In fact, many of the intranet and extranet applications now being planned will generate very large amounts of network traffic.

Technologies such as imaging, conferencing and streaming video provide users with unprecedented information content but also place an unprecedented burden on our networks.

Furthermore, the amount of network traffic generated by these applications is non-deterministic, influenced by outside factors such as sales volume or financial-market activity.

Even after compression with the most sophisticated techniques, a typical document image consumes on the order of 50 kB, a minute of video several megabytes.

The result? High data volumes threaten to swamp our networks just when robust information access is most critical.
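
To put those figures in perspective, consider a rough back-of-the-envelope estimate. Every workload number in the sketch below (user counts, retrieval rates, video streams) is a hypothetical assumption chosen only to illustrate the scale involved:

```python
# Back-of-the-envelope load estimate. Every workload figure here is a
# hypothetical assumption used only to illustrate the scale of the problem.

KILOBYTE = 1024
MEGABYTE = 1024 * KILOBYTE

image_size_bits = 50 * KILOBYTE * 8          # ~50 kB per compressed document image
video_bits_per_sec = 4 * MEGABYTE * 8 / 60   # assume ~4 MB per minute of compressed video

users = 200                        # assumed number of active intranet users
images_per_user_per_sec = 1 / 30   # assume one image retrieval every 30 seconds per user
concurrent_video_streams = 10      # assumed simultaneous video viewers

image_load = users * images_per_user_per_sec * image_size_bits
video_load = concurrent_video_streams * video_bits_per_sec

print(f"Image traffic: {image_load / 1e6:.1f} Mbit/s")
print(f"Video traffic: {video_load / 1e6:.1f} Mbit/s")
print(f"Total:         {(image_load + video_load) / 1e6:.1f} Mbit/s "
      f"(vs. 1.544 Mbit/s for a T1 link)")
```

Even with these modest assumptions, the aggregate load comes to several times the capacity of a T1 WAN link.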

Right Of Way

Traditionally, the TCP/IP networks we use as infrastructure for our intranets and extranets make no distinction between different types of traffic.

“A packet is a packet,” and all traffic gets equal, “best-effort” priority, whether it’s an e-mail about the department picnic, an order from a customer or a video from the CEO about a major event (and we won’t even discuss the possibility of personal traffic on the network).

As we grow more dependent on intranets and extranets to conduct business, we need to ensure that we can impose reasonable priorities on the flow of network traffic.

Inevitably, some traffic will be more important than other traffic, and as long as we are in a situation with finite network capacity, we will need the ability to allocate that capacity in accordance with the priorities of the organization.

In an ideal world, we could add network capacity quickly and cheaply as we needed it.

But while that may someday be feasible given emerging telecommunications standards, today we operate in a world of limited bandwidth, network capacity and budget.

There is also typically a considerable delay between the time additional network capacity is ordered and the time it becomes available.

So how do we make sure the most critical information gets delivered in time? Bandwidth management.

We need to shift from providing a passive network that treats all traffic equally to an intelligent network that imposes management rules on the traffic it carries – in short, a network that can deliver Quality of Service (QoS).

All traffic needs to get delivered to its destination. In times of heavy load, however, traffic marked ‘critical’ should get priority over other traffic.

Prioritizing requires the ability to identify classes of traffic and users, as well as technology to impose priorities and rules on the traffic. These requirements are not met by the Internet’s traditional egalitarian protocols.

Unlike the Internet, however, intranets and (to a lesser degree) extranets afford a measure of control over the network infrastructure.

A number of mechanisms are emerging that make QoS for an intranet or extranet both feasible and practical.

The two basic approaches for controlling traffic flow are the Resource Reservation Protocol (RSVP) and IP Precedence.

The techniques for implementing these approaches include rate control (or traffic shaping) and queuing.

These mechanisms make rules-based decisions about how traffic should travel across the network.

This in turn allows us to define types of traffic that correspond to a particular class of service (for example, “all traffic from business partners”) and the policies associated with that class of service (“traffic from business partners gets highest priority through the network”).
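
To make the idea concrete, the sketch below shows what such rules-based classification might look like in principle. The class names, match rules and priority values are invented for the illustration; real products express these policies in their own configuration interfaces:

```python
# Minimal sketch of rules-based traffic classification. The class names,
# match rules and priority values are invented for illustration only.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src: str        # source IP address
    dst_port: int   # destination TCP/UDP port

PARTNER_NETWORKS = [ip_network("192.0.2.0/24")]   # assumed business-partner address range

# (class name, match rule, priority) -- lower numbers mean higher priority.
POLICIES = [
    ("business-partners",
     lambda p: any(ip_address(p.src) in net for net in PARTNER_NETWORKS), 0),
    ("interactive-web", lambda p: p.dst_port in (80, 443), 1),
    ("bulk-transfer",   lambda p: p.dst_port in (20, 21),  2),
    ("default",         lambda p: True,                    3),
]

def classify(packet: Packet) -> tuple[str, int]:
    """Return the class name and priority of the first matching rule."""
    for name, matches, priority in POLICIES:
        if matches(packet):
            return name, priority
    return "default", 3

print(classify(Packet(src="192.0.2.17", dst_port=8080)))   # ('business-partners', 0)
print(classify(Packet(src="203.0.113.5", dst_port=80)))    # ('interactive-web', 1)
```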

Booking First Class

RSVP is an IP-based signaling protocol that allows bandwidth to be reserved at each router along the path between a sender and one or more designated receivers.

The sender advertises its traffic by sending a path message toward the receivers; each receiver then requests the actual reservation with a reservation message sent back along the same route, and every router on that route sets aside the requested bandwidth.

The reservation is held as “soft state”: it must be refreshed periodically and is released when the session is completed.

RSVP was originally intended to let delay-sensitive applications, such as voice and video, reserve the bandwidth needed to avoid intolerable delays.

It does little for conventional TCP-based applications such as HTTP or FTP transfers, and it provides no mechanism for central control or policy enforcement.

It also works on a “first-come, first-served” basis, which means mission-critical applications can still be crowded out if lower-priority traffic reserved the bandwidth first.

Additionally, although RSVP works to minimize delay by reserving bandwidth, it falls back to best-effort delivery whenever the requested bandwidth is unavailable.
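
The toy model below sketches this reservation behavior and its first-come, first-served weakness. Routers are reduced to simple bandwidth counters, and the capacities and request sizes are invented for the example; it is a conceptual sketch, not an implementation of the RSVP protocol:

```python
# Toy model of RSVP-style admission: each router along a path holds a fixed
# amount of bandwidth and grants reservations first-come, first-served.
# Conceptual sketch only -- not an implementation of the RSVP protocol.

class Router:
    def __init__(self, name: str, capacity_kbps: int):
        self.name = name
        self.available = capacity_kbps

    def reserve(self, kbps: int) -> bool:
        """Grant the reservation only if enough bandwidth remains."""
        if kbps <= self.available:
            self.available -= kbps
            return True
        return False

def reserve_path(path: list[Router], kbps: int) -> bool:
    """Receiver-driven request: every router on the path must admit the flow."""
    return all(r.reserve(kbps) for r in path)
    # In real RSVP, a failed request would also tear down partial reservations.

path = [Router("edge", 1500), Router("core", 1500)]   # assumed 1.5 Mbit/s links

# A low-priority flow that asks first gets the bandwidth...
print(reserve_path(path, 1000))   # True  (e.g. a bulk video replay)
# ...and a mission-critical flow that asks later is refused.
print(reserve_path(path, 800))    # False (falls back to best-effort delivery)
```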

IP Precedence, another QoS approach, avoids many of RSVP’s inherent problems. With IP Precedence, each packet traveling across the network carries a priority value in the precedence bits of its IP header’s Type of Service field.

Network devices, such as routers, simply ship high-priority packets ahead of lower-priority ones.

Network managers can define several classes of service and define network policies for congestion handling and bandwidth allocation for each class.

For example, traffic from customers or business partners could be assigned the highest priority while other types of traffic receive lower priority. IP Precedence allows for considerable flexibility in precedence assignment.
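
At the sending host, the precedence value is simply a few bits in each outgoing packet’s IP header. As a minimal sketch, assuming a platform that exposes the IP_TOS socket option, an application could mark its own traffic like this; the precedence level and destination are arbitrary examples:

```python
# Mark outgoing traffic with an IP Precedence value by setting the Type of
# Service byte on the socket. Requires a platform that exposes IP_TOS
# (e.g. Linux); the precedence level and destination are arbitrary examples.

import socket

# IP Precedence occupies the top three bits of the ToS byte.
PRECEDENCE_FLASH = 3          # one of the eight precedence levels (0-7)
tos_value = PRECEDENCE_FLASH << 5

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_value)

# Any traffic sent on this socket now carries the elevated precedence, and
# precedence-aware routers can queue it ahead of best-effort packets.
# sock.connect(("partner.example.com", 443))   # hypothetical destination
```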

Once either RSVP or IP Precedence has been employed to “set the ground rules” for traffic flow, either rate control or queuing mechanisms enforce them.

Rate control mechanisms essentially regulate the TCP/IP communication flow between the sender and the receiver of information according to those ground rules.

In normal TCP communication, the receiver returns acknowledgment packets to the sender; each acknowledgment advertises how much additional data the sender may put on the network (the TCP window size).

Rate control mechanisms intercept these acknowledgments in transit and adjust the advertised window size (and, in some implementations, the pacing of the acknowledgments themselves) to control the rate at which the sender transmits.

Thus, high-priority traffic is allowed to flow at a higher rate while low-priority traffic is throttled back.

Rate control mechanisms limit congestion and offer a significant side benefit: they smooth out the typically “bursty” nature of TCP traffic.

One product that can be used to implement rate control on an intranet or extranet is PacketShaper from Packeteer, Inc.
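
The leverage a rate controller gains from adjusting the window follows from TCP’s basic throughput relationship: a sender can have at most one window of data in flight per round trip, so throughput is bounded by roughly the window size divided by the round-trip time. The window sizes and round-trip time below are illustrative assumptions:

```python
# Illustrate how shrinking the advertised TCP window throttles a flow.
# Throughput is bounded by roughly (window size) / (round-trip time);
# the window sizes and RTT below are illustrative assumptions.

RTT_SECONDS = 0.05   # assume a 50 ms round trip across the WAN

def max_throughput_kbps(window_bytes: int, rtt: float = RTT_SECONDS) -> float:
    """Upper bound on throughput for a given advertised window."""
    return window_bytes * 8 / rtt / 1000

# A rate controller rewrites the advertised window in intercepted ACKs:
print(f"64 kB window: {max_throughput_kbps(64 * 1024):7.0f} kbit/s")   # ~10486 kbit/s
print(f" 8 kB window: {max_throughput_kbps(8 * 1024):7.0f} kbit/s")    # ~1311 kbit/s
print(f" 2 kB window: {max_throughput_kbps(2 * 1024):7.0f} kbit/s")    # ~328 kbit/s
```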

Queuing mechanisms traditionally come into play in routers, and queuing-based QoS is supported by all major router vendors.

Using one of a variety of queuing algorithms based on IP Precedence, RSVP reservations or some other identifier, these mechanisms assign traffic of different priorities to different queues.

The router allocates separate buffers for each queue, and sends high-priority traffic ahead of low-priority traffic.

Additional features help ensure that packets are not dropped when queues fill, that lower-priority traffic is not kept waiting indefinitely while higher-priority traffic is forwarded, and that unused bandwidth is reallocated among queues to maximize overall traffic flow.
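
As a minimal sketch of those ideas, assume a simple two-level scheme with a per-queue buffer limit and an anti-starvation counter; the limits and class layout are invented for the example, and real routers use considerably more sophisticated algorithms:

```python
# Minimal sketch of priority queuing with bounded buffers and a simple
# anti-starvation rule. Queue sizes and the starvation limit are invented
# for illustration; real routers use more sophisticated algorithms.

from collections import deque

class PriorityScheduler:
    def __init__(self, buffer_limit: int = 64, starvation_limit: int = 4):
        self.queues = {0: deque(), 1: deque()}   # 0 = high priority, 1 = low
        self.buffer_limit = buffer_limit
        self.starvation_limit = starvation_limit
        self._high_in_a_row = 0

    def enqueue(self, priority: int, packet: str) -> bool:
        """Buffer the packet unless its queue is full (then it is dropped)."""
        queue = self.queues[priority]
        if len(queue) >= self.buffer_limit:
            return False                          # tail drop on a full queue
        queue.append(packet)
        return True

    def dequeue(self) -> str | None:
        """Send high-priority first, but let low-priority through periodically."""
        high, low = self.queues[0], self.queues[1]
        if high and (not low or self._high_in_a_row < self.starvation_limit):
            self._high_in_a_row += 1
            return high.popleft()
        self._high_in_a_row = 0
        return low.popleft() if low else (high.popleft() if high else None)

sched = PriorityScheduler()
sched.enqueue(1, "bulk-transfer-1")
sched.enqueue(0, "partner-order")
print(sched.dequeue())   # 'partner-order' -- high priority is sent first
print(sched.dequeue())   # 'bulk-transfer-1' -- low priority is not starved
```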

Summary

Intranets and extranets are rapidly becoming essential components of business success, with businesses constantly finding innovative uses for their network infrastructures.

These new network uses are generating huge increases in network traffic. Network capacity is a limited resource, which in some cases is already being stretched to its limits.

Evolving business needs require that we start managing network traffic according to business priorities. Technologies already exist and others are emerging to make TCP/IP traffic management a reality.

By taking these solutions into account while growing its network, an organization can cope with limited bandwidth while expanding the business uses of its data. 
