Electronic Design

Take Advantage Of Fast Ethernet PHY Testing

Here Are Five Important Tests That Can Help You Get A Handle On The Performance Of Physical-Layer Devices.

Today's increase in computing power, coupled with the ability to share applications and data, has driven the modern networking infrastructure to new levels of speed and sophistication. Having emerged as the leader in desktop networking, Fast Ethernet is able to bring much needed bandwidth to users, while maintaining the integrity of Ethernet.

Adopting physical-layer conventions from the fiber distributed data interface (FDDI), Fast Ethernet has leveraged existing technology into the framework of a carrier-sense multiple-access/collision-detection (CSMA/CD) network, preserving users' and administrators' knowledge of network operation. Coexisting with installed network devices, Fast Ethernet has become both legacy-compatible and capable of providing future migration to 100-Mbit/s connections as users upgrade their interconnect systems. Consequently, it's brought about a slew of testing and interoperability concerns.

The IEEE has provided the standards by which the physical-layer and physical-medium devices must perform. Mixing this with interoperability testing of existing devices provides a robust environment for determining the overall fitness of the system. Properly testing for forward migration, while maintaining backwards interoperability, shows that the network is evolving as a tool that maintains data integrity without compromise.

In addition to the IEEE standards, the University of New Hampshire Interoperability Lab, Durham, N.H., has formed a Fast Ethernet consortium. Members may have their products tested in a series of both IEEE and interoperability scenarios—testing that has become a necessity for any company developing Fast Ethernet systems. Let's review the basics of Fast Ethernet testing. Moving from methodologies to real examples will provide an understanding of the challenges faced in meeting IEEE conformance.

The logical place to begin physical-layer testing is with the individual system, i.e., the network host or adapter. Testing is then done in a contained environment. Typical vehicles for testing Fast Ethernet physical-layer (PHY) and transceiver devices are medium attachment units (MAUs) or network interface cards (NICs). The MAU is useful, because network test equipment frequently comes ready to test devices via a media-independent interface (MII) connection. Board-level testing should allow access to the silicon for measurement, as well as to the magnetic devices and RJ-45 connector. At the board level, other non-connection related issues may be addressed, such as power consumption, footprint, and external component count.

The basic equipment required for performing tests on Fast Ethernet physical layers includes an MII-based test platform, such as the Netcom Systems' X-1000 or Smartbits system. Testing also demands accurate voltage and current generators, as well as an oscilloscope, multimeter, and various types of cables and terminations.

The mechanism through which these multispeed devices configure the appropriate speed is auto-negotiation. In this process, the part signals the far-end station using bursts of 10-Mbit/s-style fast link pulses (FLPs), then configures to a predetermined speed option or the highest common denominator. Auto-negotiation also lets devices distinguish between FLPs and normal link pulses (NLPs), so 10-Mbit/s data is never confused with the auto-negotiation process.

In addition, the part can sense "idles" in 100-Mbit/s systems. It therefore does not miss a non-auto-negotiation-capable 100-Mbit/s-only device on the other end. The end result is that a 10-Mbit/100-Mbit system with auto-negotiation should be capable of communicating with any other 10-Mbit/100-Mbit Ethernet device.
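The highest-common-denominator resolution can be sketched in a few lines. This is purely an illustrative model, not production code; the ability names and the priority ordering are my shorthand for the usual 802.3u priority table (full duplex preferred over half, 100 Mbits/s over 10 Mbits/s).

```python
# Hypothetical sketch of auto-negotiation priority resolution: each station
# advertises a set of abilities in its FLP bursts, and both sides settle on
# the highest common denominator.

# Technologies ordered from highest to lowest priority.
PRIORITY = ["100BASE-TX-FD", "100BASE-TX", "10BASE-T-FD", "10BASE-T"]

def resolve(local, remote):
    """Return the highest-priority ability both stations advertise,
    or None if there is no common mode (no link is established)."""
    common = set(local) & set(remote)
    for tech in PRIORITY:
        if tech in common:
            return tech
    return None

# A 10/100 NIC meeting a 10-Mbit-only hub settles on 10BASE-T.
print(resolve(["100BASE-TX-FD", "100BASE-TX", "10BASE-T"], ["10BASE-T"]))
```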

Robust physical-layer devices need to be able to operate with non-standard-compliant, as well as compliant, auto-negotiation devices. Early implementers of non-standard 10-Mbit/100-Mbit devices used a technique known as speed sensing to configure to the appropriate speed. With that technique, the host device first sends out 10-Mbit/s data and then 100-Mbit/s data alternately. That way, it can see how the far end responds. If available—many speed-sensing systems are still in place—testing should include the use of these devices. They definitely provide an added dimension of test data.

Over time, it's become commonplace to perform certain tests on Fast Ethernet devices. Few people, however, understand the purpose of these tests or their relevance to the network. An example is the cable-length test.

It's normal to run PHY devices over up to 130 m of cable. After all, the IEEE specification calls for 90 m of cable plus a 10-m patch, so 130 m guarantees margin beyond that. And first-generation Fast Ethernet devices worked up to this distance, so people have been conditioned to test to that limit.

The EIA/TIA specification requires that all PHYs be able to receive a compliant signal through 90 m of CAT-5 cable, plus an additional 10 m of CAT-5 patch cable through at least four jacks in the setup. That test alone does not ensure that the part will function correctly at other distances or under temperature variation.

Every physical-layer evaluation should include tests from 0 m through at least 130 m of CAT-5 cable in 10-m increments. Some devices have deficiencies in the midrange cables. Unfortunately, these distances are common in many smaller offices. At any distance, the result should be correct auto-negotiation, a link, and an extremely low bit error rate. By understanding the test better, as well as what it exercises in the design, you will see it as a more powerful tool—not just a checklist requirement.

More often than not, there's some combination of improper and incomplete testing on these devices. Any situation in which a signal is somehow deficient has a negative effect on network performance. In the best case, that part of the network segment isn't capable of talking with the rest of the network, but doesn't interfere with its operation. If the situation is that non-standard format signals are sent across the network, other problems arise.

Say the entity is a node connected to a workgroup-level repeater. Then, the entire workgroup sees the "garbage" the node is transmitting. Most likely, this garbage is propagated up to a more intelligent repeater or switch capable of partitioning the node. Or, if the repeaters are configured in a stack arrangement without a management unit, quite a few workgroups would be exposed to the bad data. The net result would be a loss of bandwidth.

Looking into the network from a system perspective, there are other limitations that must be tested to ensure proper operation. A transceiver sending out a compromised signal, whether from poor return loss, an out-of-spec differential output voltage (VOD), a transmit rise-/fall-time violation, or excessive jitter, places the burden of receiving that signal on the far end.

Remember, a sound physical-layer/transceiver device should be able to work with all other devices currently on the network—IEEE compliant or not. This is where the interoperability testing mentioned previously plays a large part. If the host system is a multiport device such as a repeater, it places itself in a position to "repeat" bad information, as well as non-standard information.

Every designer should have a basic battery of tests that can be run on PHY devices prior to beginning a design. That way, he or she can judge the fitness of the part. The tests below should help to construct this test suite.

Test 1: Return Loss. Transmit return loss is measured with a network analyzer capturing the reflections from the transmit drivers of the PHY (Fig. 1). The measured reflection combines the contributions of the front-end transformer and the physical-layer device. That reflection adds to the attenuated signal, crosstalk, flat-channel loss, and other interferences to produce a highly distorted input at the far-end receiving physical-layer device.

Per ANSI specification X3.263 TP-PMD 9.1.5 Return Loss, the following transmit return-loss characteristics must be met:

  • Greater than 16 dB from 2 to 30 MHz
  • Greater than [16 - 20log10(f/30 MHz)] dB from 30 to 60 MHz
  • Greater than 10 dB from 60 to 80 MHz.
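As a quick sanity check on measured data, the mask above can be coded directly. This is a sketch of the three limits exactly as written; frequencies are in MHz, limits in dB, and the function name is mine.

```python
import math

def return_loss_limit(f_mhz):
    """Minimum acceptable transmit return loss (dB) at f_mhz, per the TP-PMD mask."""
    if 2 <= f_mhz <= 30:
        return 16.0
    if 30 < f_mhz <= 60:
        return 16.0 - 20.0 * math.log10(f_mhz / 30.0)
    if 60 < f_mhz <= 80:
        return 10.0
    raise ValueError("frequency outside the 2-80 MHz mask")

print(return_loss_limit(10))  # flat 16-dB floor in the low band
print(return_loss_limit(45))  # sloped region between 30 and 60 MHz
```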

The network analyzer's reflection port should be connected to a short BNC cable. That cable is then connected to a 50- to 100-Ω balun, which should be modified to have an RJ45 interface. The balun is then connected to a short CAT-5 cable.

Use this setup to calibrate the network analyzer. Set the analyzer to sweep reflections from 2 to 80 MHz. A power supply and temperature chamber can be added to push the test conditions to worst case. The 85-Ω and 115-Ω return-loss results may be calculated mathematically.
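The 85-Ω and 115-Ω calculation can be sketched as follows. The approach shown, converting the reflection measured against the 100-Ω reference into an input impedance and recomputing the return loss against the other references, is one reasonable way to do it; the helper names are mine.

```python
import math

def gamma_to_z(gamma, z0):
    """Input impedance implied by reflection coefficient gamma at reference z0."""
    return z0 * (1 + gamma) / (1 - gamma)

def return_loss(z, z0):
    """Return loss in dB of impedance z against reference z0."""
    gamma = (z - z0) / (z + z0)
    return -20.0 * math.log10(abs(gamma))

# Example: a purely resistive 85-ohm input seen against the 100-ohm reference.
print(round(return_loss(85.0, 100.0), 2))
```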

Test 2: Rise Time/Fall Time. Measuring the rise and fall times of the transitions is vital. The standard, ANSI specification X3.263 TP-PMD 9.1.6 Rise/Fall Times [Symmetry], calls for parts with 3- to 5-ns rise/fall times. A sharp rise time will help with cable distances and give you a margin if you need to slow it down to help with emissions. The difference between all measured rise/fall times must be less than 0.5 ns. A part that betters this symmetry specification will most likely help with EMI compliance.

Connect the two oscilloscope inputs to the BNC cables (Fig. 2). Then, join the cables to the transmit pins of the RJ45 going into the device under test (DUT). Remember to terminate the oscilloscope probe with 50-Ω impedances.

The DUT should be set to transmit scrambled MLT-3 idle signals. Now, measure the 10% to 90% rise and fall times of the differential signal. All measured times should be between 3 and 5 ns. Rise/fall symmetry should be less than 0.45 ns. If the rise and fall of the differential signals are added together, they should equal a constant voltage. Any delta from that constant becomes a common-mode spike. In the EMI realm, that's a source of noise that will make it hard for the designer to be FCC-compliant. The more symmetrical the rise and fall time, the smaller the common-mode spike.
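A toy model shows why the symmetry matters. Here the two legs are idealized as linear ramps (my own simplification, not from the article); with equal rise and fall times their sum stays flat, and any asymmetry appears as a common-mode spike.

```python
def ramp(t, t_edge, rising, lo=0.0, hi=1.0):
    """Voltage at time t (ns) of a linear edge with transition time t_edge (ns)."""
    frac = min(max(t / t_edge, 0.0), 1.0)
    return lo + (hi - lo) * (frac if rising else 1.0 - frac)

def common_mode_spike(t_rise, t_fall, steps=200):
    """Peak deviation of (rising + falling) from its settled value of 1.0."""
    t_max = max(t_rise, t_fall)
    settled = 1.0  # lo + hi: where the summed legs sit after both edges finish
    worst = 0.0
    for i in range(steps + 1):
        t = t_max * i / steps
        s = ramp(t, t_rise, True) + ramp(t, t_fall, False)
        worst = max(worst, abs(s - settled))
    return worst

print(common_mode_spike(4.0, 4.0))        # symmetric edges: no spike
print(common_mode_spike(3.0, 5.0) > 0.0)  # asymmetric edges do spike
```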

Test 3: BER Testing For 10/100 Mbits/s. BER testing is one of the main tests for both 10-Mbit and 100-Mbit systems. It verifies packet reception under the environmental conditions of the test. That's the basis for the cable-length test employed by most people (Fig. 3).

BER testing should be performed at both 10-Mbit and 100-Mbit rates. During this test, the DUTs should be subjected to worst-case environments, including -20 to +90°C and supply voltages 10% above and below nominal levels.

An MII cable allows the DUT to be placed in a temperature chamber. Two DUTs should always be used. Using only one with a loopback cable doesn't take ppm variations of the clock into account. The clock ppm difference of the two DUTs should be at worst 100 ppm. Test every distance between 0 and 130 m of cable. Many physical-layer devices have holes in their performance. In some of them, it's not surprising to see an area of several meters—40 to 45 m, for example—where the part does not function at all.

The part also may be exercised over both voltage and temperature by means of a power supply and temperature chamber. Over temperature, the cable's attenuation changes, so a part that works one day may fail the next at the same cable length. Although the specification calls for a total of 90 m plus a 10-m patch, variations in cable, jacks, and connectors over temperature call for extra margin.

When checking error rates, use the largest legal frame size (1514 bytes) and the minimum interframe gap (9.6 µs for 10 Mbits/s, 960 ns for 100 Mbits/s). This keeps data on the wire for the longest fraction of time. Also, use both incrementing and F0F0 byte patterns; the F0F0 pattern causes the worst-case transitions on the part's MII bus.

Test 4: Power-Supply Sensitivity. Through this test, the designer can see what may happen to the physical-layer device when it's placed in a more complicated/noisy environment than the MII MAU. When included with power-supply and temperature variation, it also may be used in combination with the previously mentioned tests (Fig. 4).

Most physical-layer-device MII MAUs have power split into separate VCC planes. A signal generator coupled with a slew-rate-controlled buffer can inject noise into a system. All frequencies should be injected into each and every isolated plane. Some physical-layer devices need a clean or regulated power supply to function properly. Test cable length and BER with a noisy system. It's likely that cable length will decrease as noise to the system increases.

Test 5: Receive Equalization/Jitter. This one's more complicated than the previous tests, but very useful in exercising the physical-layer device in worst-case scenarios. The key to most of these devices is their ability to properly receive and decode a signal, however distorted it may become, from the network. This test is becoming more widespread as users start to understand the importance of exercising the receive equalizer, as well as the fact that for interoperability, it must be able to meet the worst-case TP-PMD specification (Fig. 5).

The test starts with ideal binary data, which is then modified to reflect a chosen set of parameters. For a worst-case scenario, those parameters would include ppm drift, rise/fall time, jitter, cable loss/phase delay, transformer effects, crosstalk, etc. The end result is a waveform representing a packet that has been distorted by all of the above.

A tool like Mathcad is typically used to convert the data. The modified data is sampled and converted to a series of analog voltages, forming an analog waveform. This waveform is loaded into the analog waveform generator and sent to the DUT via BNC cables connected to the receive differential signals. The ability of the DUT to properly receive the packet is then checked with a network tester, in this case a SmartBits-type unit. An external power supply exercises the DUT over different voltage ranges.
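The encoding step can be sketched without Mathcad. Below is a toy Python version: bits are MLT-3 encoded (each 1 steps the line through the cycle 0, +1, 0, -1; a 0 holds the level), and a one-pole low-pass stands in for cable roll-off. A real test waveform would also fold in jitter, ppm drift, crosstalk, and transformer effects.

```python
def mlt3_encode(bits):
    """MLT-3: a 1 moves to the next level in the 0,+1,0,-1 cycle; a 0 holds."""
    cycle = [0, 1, 0, -1]
    idx, out = 0, []
    for b in bits:
        if b:
            idx = (idx + 1) % 4
        out.append(cycle[idx])
    return out

def lowpass(samples, alpha=0.5):
    """One-pole IIR low-pass as a crude stand-in for cable attenuation."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

levels = mlt3_encode([1, 1, 0, 1, 1, 0, 0, 1])
print(levels)
print([round(v, 3) for v in lowpass(levels)])
```

Note that MLT-3 never jumps directly between +1 and -1, which concentrates signal energy at lower frequencies and eases EMI.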

When evaluating a 10/100 physical layer, the first set of tests done on the part should include a subset of TP-PMD testing. This will allow you to test, in part, the transmitter of the device. First, get a look at the transmit waveforms of the device while it's transmitting 100-Mbit idles. Optimally, all measurements should be done at the RJ-45. With a BNC-to-RJ-45 cable, this is easily done. If using simple oscilloscope probes, the next best option is probing the transmit output pins on the cable side.

Take care to terminate each probe with a 50-Ω impedance. The differential voltage amplitude should be 2 V, within 5%, per IEEE specifications. A low jitter profile from the PHY transmitter will help a marginal far-end PHY receive your signal. Focusing on an edge of the differential signal, look at the eye opening several microseconds after the trigger point. Measure the jitter and divide by two to determine the part's jitter. TP-PMD requires a jitter of less than 1.4 ns.

Other system-level tests can be run by routing the signal through intermediate devices, such as repeaters and switches. Multiple nodes can connect through a hub. A generic view of the system performance can be gathered by monitoring each node's ability to transmit and receive with that hub. Then, interoperability can be tested by attaching nodes from various vendors to the hub and alternating cable distances, etc. This type of system-level testing can be quite extensive and time-consuming. Unfortunately, it's usually not done thoroughly until user problems arise.

While 10-Mbit and 100-Mbit physical-layer and transceiver testing can be exhaustive, IEEE standards and independent test labs like that of the University of New Hampshire can provide much-needed help. The results are stable, interoperable networks that operate with the same degree of reliability as existing 10-Mbit/s environments. And they provide for future operation at 100 Mbits/s.

By applying the above tests, the designer can quickly evaluate the performance of a physical-layer device, as well as understand how much work will be required to implement that solution in a system-level design. Having passed the above tests, the Enable Semiconductor 5VSingle, 3VSingle, and 3VCardbus devices (now available from Lucent Technologies' Microelectronics group) each reflect successful IEEE and UNH testing. They may be used as a benchmark for testing similar physical-layer devices.

The tests outlined in this article serve as a beginning to understanding and testing physical-layer devices. Once that basis is understood, the designer can appropriately grasp, in conjunction with other IEEE and interoperability testing, the limits of the physical-layer device and design networking systems.


  1. IEEE 802.3u 100BASE-T Fast Ethernet Standard: http://standards.ieee.org/catalog/IEEE802.3.html.
  2. University of New Hampshire Interoperability Lab: UNH provides system-level testing for Fast Ethernet Consortium members. Its test suite is growing and currently includes a solid section on auto-negotiation. The lab also maintains a variety of vendors' products for interoperability testing.