More and more companies are realizing that networking PCs and workstations can save a great deal of time and money. Many companies with extensive local area network (LAN) and wide area network (WAN) facilities employ full-time network managers. But at businesses just starting to network, a member of the company’s information technology or engineering staff may be appointed the part-time network manager and troubleshooter.
If you are that individual, there are two basic types of test equipment you will find indispensable: a traffic generator and a protocol analyzer, often combined in a single instrument.
The first network you are likely to encounter will be a single LAN, or a group of locally interconnected LANs. The traffic generator must create control sequences and data packets matching your facilities, usually a token ring, an Ethernet, a fiber distributed data interface (FDDI) or a combination of these networks.
The traffic generator should create an adequate amount of traffic so devices within the network can be tested under normal and stress conditions. Also, the instrument should contain custom hardware to perform stimulus/response testing; that is, the capability to transmit test packets and simultaneously receive them. This is particularly important when testing the performance and interoperability of the individual devices in the LAN.
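As a software illustration only (the article's point is that this work belongs in custom front-end hardware running at line rate), the sketch below mimics stimulus/response testing with a local UDP echo responder standing in for the device under test: one thread echoes packets back while the main routine transmits numbered test packets, receives the responses, and tallies loss and round-trip time. The port number and packet count are arbitrary assumptions.

```python
# Minimal stimulus/response sketch: send numbered UDP test packets to a local
# echo responder and measure loss and round-trip time. A real traffic generator
# does this in hardware at line rate; this only illustrates the idea.
import socket
import struct
import threading
import time

PORT = 9999          # assumed free local port
NUM_PACKETS = 100    # arbitrary test-packet count


def echo_responder(ready):
    """Stand-in for the device under test: echo every packet back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", PORT))
    ready.set()
    for _ in range(NUM_PACKETS):
        data, addr = srv.recvfrom(2048)
        srv.sendto(data, addr)
    srv.close()


def run_test():
    ready = threading.Event()
    threading.Thread(target=echo_responder, args=(ready,), daemon=True).start()
    ready.wait()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.settimeout(1.0)
    received, rtts = 0, []
    for seq in range(NUM_PACKETS):
        payload = struct.pack("!Id", seq, time.monotonic())  # stimulus: seq + timestamp
        sender.sendto(payload, ("127.0.0.1", PORT))
        try:
            data, _ = sender.recvfrom(2048)                  # response
        except socket.timeout:
            continue
        echoed_seq, sent_at = struct.unpack("!Id", data)
        if echoed_seq == seq:
            received += 1
            rtts.append(time.monotonic() - sent_at)
    loss = 100.0 * (NUM_PACKETS - received) / NUM_PACKETS
    avg_rtt = sum(rtts) / len(rtts) * 1e6 if rtts else 0.0
    print(f"loss {loss:.1f}%  avg RTT {avg_rtt:.0f} us")


if __name__ == "__main__":
    run_test()
```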
Generating traffic is relatively straightforward, but interpreting the network's response to normal or simulated traffic is not always easy. That is where protocol analyzers come into play.
Troubleshooting Aids
A protocol analyzer is the key tool for verifying proper operation of equipment in a company-wide network. Protocol analyzers generally provide two basic types of measurements: statistics and decodes. Intelligent preconfigured measurement sequences are often included to verify operability and to troubleshoot networks.
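As a rough illustration of the difference between the two measurement types, the sketch below decodes the fixed 14-byte Ethernet II header of captured frames (a decode) and accumulates simple counts over the same capture (statistics). The sample frames are fabricated for the example.

```python
# Sketch of the two basic measurement types: a decode (field-by-field
# interpretation of one frame) and statistics (aggregate counts over many).
import struct
from collections import Counter

ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}  # small subset


def decode_ethernet(frame: bytes) -> dict:
    """Decode the 14-byte Ethernet II header: dst MAC, src MAC, EtherType."""
    dst, src, etype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return {"dst": fmt(dst), "src": fmt(src),
            "type": ETHERTYPES.get(etype, hex(etype)), "len": len(frame)}


def frame_statistics(frames) -> dict:
    """Aggregate statistics over a batch of captured frames."""
    sizes = [len(f) for f in frames]
    types = Counter(decode_ethernet(f)["type"] for f in frames)
    return {"frames": len(frames), "bytes": sum(sizes),
            "avg_size": sum(sizes) / len(frames), "by_type": dict(types)}


# Two fabricated frames, just enough to exercise the code.
frames = [
    bytes.fromhex("ffffffffffff" "0200aabbccdd" "0806") + bytes(46),   # ARP broadcast
    bytes.fromhex("0200aabbccdd" "020011223344" "0800") + bytes(200),  # IPv4 unicast
]
print(decode_ethernet(frames[0]))   # decode of a single frame
print(frame_statistics(frames))     # statistics over the capture
```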
Today’s corporate networks typically consist of multiple LANs (of one or more technology) connected by WANs (of one or more technology). To be an effective troubleshooting tool, a protocol analyzer must be able to connect at any point in these complex, heterogeneous networks. Why? Because problems can occur at any point along the way.
For example, consider a corporate network with multiple sites. Some sites are running Ethernet, including switched Ethernet hubs. Other sites are running 4- and 16-Mb/s token ring with FDDI backbones. In addition to multiple technologies, these sites are running multiple protocols, such as TCP/IP, Banyan, Novell, DECnet and OSI.
The sites are also connected via many technologies: leased 56-kb/s and T1 lines, ATM and frame relay. Routers, bridges and switching hubs are usually involved as interconnect devices (Figure 1). With so many potential fault points in such a complex network, and so many different technologies, protocols, operating systems and devices to test, it is critical to have equipment that can troubleshoot the network and verify proper operation from any point.
As a result, a protocol analyzer must contain all applicable measurement capabilities, complete decodes and statistics for every protocol encountered, and intelligent troubleshooting tools to assess the operation of any network component. Only with complete insight into every node can you pinpoint and resolve network problems.
Additional features include a node-list generation facility with friendly node-name discovery, a powerful PC platform, a migration path to add testing capabilities, and custom front-end hardware so future protocols and types of traffic can be captured regardless of network configuration and utilization.
Managing the Network
To manage routine operations, a protocol analyzer must baseline the network during normal operation. Also, you must be able to set reporting thresholds on key network parameters, such as utilization levels and error occurrences.
The capability to set a wide range of sample periods is also important. For instance, a 1-minute sample period is good for a 24-hour baseline, but if the network experiences problems, 1-second samples are a must.
Baselining essentially captures a variety of statistics on a periodic basis during normal network operations, then observes changes to these parameters, usually by graphing the results and comparing them with previous characterizations. When critical parameters, such as collisions on an Ethernet, are occurring more often on a particular segment, the network manager can proactively investigate the situation, troubleshoot a particular device, and possibly resegment the network or apply other standard preventive measures.
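A minimal software sketch of that baselining loop is shown below. It assumes a Linux host, reads the byte counters in /proc/net/dev at a configurable sample period, and compares each interval's throughput with the running baseline; the interface name and the "three standard deviations" flag rule are assumptions made for the example.

```python
# Baselining sketch: sample an interface's byte counter at a fixed period,
# build a running baseline (mean / standard deviation of per-interval load),
# and flag intervals that deviate sharply from it. Linux /proc/net/dev assumed.
import statistics
import time

INTERFACE = "eth0"       # assumed interface name
SAMPLE_PERIOD = 60.0     # seconds; drop to 1 s when chasing a live problem


def rx_bytes(iface: str) -> int:
    """Read the received-bytes counter for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                return int(line.split(":", 1)[1].split()[0])
    raise ValueError(f"interface {iface!r} not found")


def baseline_loop():
    history = []                      # bytes/s observed in past intervals
    prev = rx_bytes(INTERFACE)
    while True:
        time.sleep(SAMPLE_PERIOD)
        cur = rx_bytes(INTERFACE)
        rate = (cur - prev) / SAMPLE_PERIOD
        prev = cur
        if len(history) >= 10:        # need some history before judging
            mean = statistics.mean(history)
            dev = statistics.pstdev(history)
            if rate > mean + 3 * dev:             # assumed flag rule
                print(f"unusual load: {rate:.0f} B/s vs baseline {mean:.0f} B/s")
        history.append(rate)
        print(f"{time.strftime('%H:%M:%S')}  {rate:.0f} B/s")


if __name__ == "__main__":
    baseline_loop()
```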
Setting thresholds on key parameters can warn you when a value has exceeded normal limits. If an analyzer can “freeze the scenario” whenever a statistical measurement threshold is exceeded, you most likely will be able to trace the cause of the exceeded threshold and be in a better position to take action and prevent a problem from occurring on the network.
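One way to picture that "freeze the scenario" behavior in software is a rolling buffer of the most recent samples (or frames) that simply stops being overwritten the moment a threshold trips, so the events leading up to the violation are preserved for inspection. The sketch below uses an assumed utilization threshold and a made-up sample trace.

```python
# "Freeze the scenario" sketch: a rolling buffer of recent samples stops
# being overwritten as soon as a threshold is exceeded, preserving the
# lead-up to the event for later inspection. Threshold value is assumed.
from collections import deque

BUFFER_SIZE = 512            # how many recent samples to keep
UTIL_THRESHOLD = 0.40        # assumed alarm level: 40 % utilization


class FreezeBuffer:
    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_SIZE)
        self.frozen = False

    def record(self, sample: float) -> None:
        if self.frozen:
            return                       # scenario is frozen; keep the evidence
        self.buffer.append(sample)
        if sample > UTIL_THRESHOLD:
            self.frozen = True
            print(f"threshold exceeded ({sample:.0%}); buffer frozen "
                  f"with {len(self.buffer)} preceding samples")


# Usage with a fabricated utilization trace: steady load, then a burst.
fb = FreezeBuffer()
for util in [0.08, 0.10, 0.09, 0.12, 0.11, 0.55, 0.60]:
    fb.record(util)
```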
If a crisis does occur, you need a tool that can provide expert system analysis to help pinpoint the source of the problem quickly. With the proliferation of network operating systems and network protocols (including routing protocols), there are few experts who know all the ins and outs of each network environment. That’s why having an intelligent tool with expert system technology included in the protocol analyzer can be a lifesaver.
Observations and Advice
Interoperability should be verified at the highest layers of the network stack. In practice, however, no one currently performs these tests because diverse multivendor equipment makes such broad verification practically impossible. To complicate the situation, interconnects do not always interoperate well, even with existing tried-and-true protocols.
Don’t panic if you install new devices and they don’t work. In this case, sequential monitoring and analyzing of each layer of the stack should be performed, progressing from the lowest layer to the highest, until the source of the problem is identified.
Managing routine LAN operation should be done by exception, with equipment performing continuous monitoring of key segments and personnel being alerted only when preset thresholds are exceeded. Baseline the network statistically and use this data to set the alarm thresholds.
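As a small, assumed example of turning baseline data into alarm settings, the snippet below derives a per-parameter threshold from recorded baseline samples; the 99th-percentile rule and the sample values are fabricated for illustration.

```python
# Sketch of deriving alarm thresholds from baseline data: for each key
# parameter, take the 99th percentile of the baseline samples as the alarm
# level (the percentile choice is an assumption; pick what fits the network).

def percentile(samples, pct):
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(pct / 100.0 * (len(ordered) - 1))))
    return ordered[idx]

# Fabricated baseline samples gathered during normal operation.
baseline = {
    "utilization_pct": [8, 11, 9, 14, 10, 13, 12, 9, 15, 11],
    "collisions_per_min": [0, 1, 0, 2, 1, 0, 3, 1, 0, 2],
    "crc_errors_per_min": [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
}

thresholds = {name: percentile(vals, 99) for name, vals in baseline.items()}
print(thresholds)   # feed these into the analyzer's alarm configuration
```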
Only a few test facilities that replicate actual networks exist. Try to emulate the protocol mix, the packet size distribution and the host population. These three factors can have a critical impact on the performance of interconnects. For instance, router forwarding rates can be protocol dependent, and bridge performance varies with the number of table entries.
When it comes to equipment, the options are many. For LAN troubleshooting of token ring, Ethernet, FDDI or ATM, consider portable, lightweight laptop-like dedicated instruments. For comprehensive LAN/WAN management, traffic generators and protocol analyzer systems with extensive expansion potential are good choices.
To gain better insight into what is happening at distant locations, remote monitoring facilities are also available. This equipment can report traffic conditions and equipment status back to the network manager via the network or, during outages, via a modem. The RMON (remote monitoring) MIB, a standard for communicating with remote monitoring equipment that is part of the commonly used Simple Network Management Protocol (SNMP), provides a common framework for reporting status and traffic, and many companies are using it.
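As a rough sketch of what polling a remote RMON probe looks like, the code below reads a few counters from the RMON statistics group (the etherStats table defined in RFC 1757) and reports them back on a fixed period. Here snmp_get() is a hypothetical placeholder standing in for a real SNMP library such as pysnmp or Net-SNMP, and the probe address, community string and interface instance are assumptions for the example.

```python
# Sketch of remote monitoring via the RMON MIB over SNMP. The OIDs below are
# columns of the RMON etherStats table (RFC 1757); snmp_get() is a
# hypothetical helper standing in for a real SNMP library, and the probe
# address, community string, and instance index 1 are assumptions.
import time

PROBE = "192.0.2.50"          # assumed address of the remote RMON probe
COMMUNITY = "public"          # assumed SNMP community string

# RMON statistics group: etherStatsEntry columns, instance 1.
ETHER_STATS = {
    "octets":      "1.3.6.1.2.1.16.1.1.1.4.1",   # etherStatsOctets
    "packets":     "1.3.6.1.2.1.16.1.1.1.5.1",   # etherStatsPkts
    "broadcasts":  "1.3.6.1.2.1.16.1.1.1.6.1",   # etherStatsBroadcastPkts
    "crc_errors":  "1.3.6.1.2.1.16.1.1.1.8.1",   # etherStatsCRCAlignErrors
}


def snmp_get(host: str, community: str, oid: str) -> int:
    """Hypothetical placeholder: issue an SNMP GET and return the counter value."""
    raise NotImplementedError("wire this to an SNMP library, e.g. pysnmp")


def poll_probe(period: float = 60.0) -> None:
    """Poll the remote probe each period and report the counters back."""
    while True:
        readings = {name: snmp_get(PROBE, COMMUNITY, oid)
                    for name, oid in ETHER_STATS.items()}
        print(time.strftime("%H:%M:%S"), readings)
        time.sleep(period)
```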
About the Author
John Mandico is a Technical Support Engineer at Hewlett-Packard. He has been with the company for five years in a variety of positions in product engineering and technical support. Mr. Mandico has a B.S. degree in mathematics from The University of Notre Dame, a master’s degree in computer science from Georgia Institute of Technology, and an M.B.A. degree from the University of Maryland. Hewlett-Packard Co., 5070 Centennial Blvd., Colorado Springs, CO 80919, (719) 531-4770.
Copyright 1995 Nelson Publishing Inc.
July 1995