Overcoming USB 3.1 compliance testing challenges

By Randy White, product manager, Tektronix Inc.   

SuperSpeed USB 3.1 is poised to take over as the newest version of this popular interface. This update boosts data transfer rates to 10Gb/s, compared with 5Gb/s for USB 3.0. Along with the higher speeds, additional changes such as the use of 128b/132b encoding and more complex equalization increase overall design complexity and introduce new PHY validation and debug challenges. Other recent updates include the release of the updated USB Power Delivery 2.0 specification and the new reversible USB Type-C connector.

In this article, we will first outline the challenges associated with bumping data rates to 10Gb/s and then provide an overview of transmitter and receiver pre-compliance testing. With the proper planning and execution, these challenges can be managed and overcome.

In contrast to more enterprise-oriented specs such as PCI Express Gen3 and the upcoming Gen4, the number one consideration for USB 3.1 is cost. USB is a consumer-grade interface, and as such it has to be cheap, yet still provide reliable operation and keep up with application demands (e.g., faster solid-state drives with transfer speeds of 600MB/s and beyond). These considerations inevitably result in a number of tradeoffs to keep costs down, such as the move to a 1 meter maximum cable length at 10Gb/s data rates (compared to the 3 meters allowed previously).

Charting the changes

The comparison chart below shows the differences between USB 3.1 and the previous generation from a PHY validation perspective. With the release of the 3.1 specification, USB 3.0 technically no longer exists. Instead, the 5Gb/s data rate is now referred to as USB 3.1 Gen 1. Regardless of what it's called, backward compatibility is still a hard-and-fast requirement.

(*See Figure 1)

While the most important difference is the higher data rate, anyone familiar with PCI Express will recognize that the encoding for Gen 2 has changed to something closer to PCIe (128b/130b). All previous USB PHYs use 8b/10b encoding, which has been around a long time and is used by many serial standards. But now we have a new state machine that speaks another language, 128b/132b. Why the difference? The biggest driver was faster data rates under real-world conditions. The use of four bits in the header results in better reliability and better error handling. For example, if there is a single-bit error in the header, it can actually be corrected without any need to re-transmit the data. Not only are we transmitting twice as many bits compared to USB 3.0, but we're making better use of those bits, with better error handling and less overhead (20% for 8b/10b vs. 3% for 128b/132b).
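To put numbers on that overhead claim, here is a quick back-of-the-envelope calculation (plain Python, nothing USB-specific) comparing payload bandwidth under the two encoding schemes:

```python
# Effective payload bandwidth after encoding overhead:
# Gen 1 (8b/10b) vs. Gen 2 (128b/132b).

def effective_gbps(line_rate_gbps, payload_bits, block_bits):
    """Line rate scaled by the encoding's payload-to-block ratio."""
    return line_rate_gbps * payload_bits / block_bits

gen1 = effective_gbps(5.0, 8, 10)       # 8b/10b: 20% overhead
gen2 = effective_gbps(10.0, 128, 132)   # 128b/132b: ~3% overhead

print(f"Gen 1 payload: {gen1:.2f} Gb/s")           # 4.00 Gb/s
print(f"Gen 2 payload: {gen2:.2f} Gb/s")           # 9.70 Gb/s
print(f"Effective speedup: {gen2 / gen1:.2f}x")    # ~2.42x, not just 2x
```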

In addition to the encoding, there are also changes to the compliance patterns, although these are not as difficult to implement as the new encoding. Data is scrambled using a higher-order scrambler (the same polynomial used for PCIe). This scrambled pattern is used for measuring jitter and eye height as well as for receiver testing. The jitter measurements also require a separate clock pattern in addition to the data pattern. When a Gen 2 device enters transmitter compliance mode, it initially transmits the Gen 1 CP0 pattern. To get the CP9 scrambled data pattern and the CP10 clock pattern, you can use the standard pattern toggle method (ping.LFPS) to step through each pattern.
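As an illustration of the scrambling step, the sketch below implements a 23-bit LFSR built on the polynomial used by PCIe Gen3, G(X) = X^23 + X^21 + X^16 + X^8 + X^5 + X^2 + 1. Treat it as a structural sketch only: the spec-defined seed values, bit ordering and scrambler-advance rules are omitted.

```python
# Structural sketch of a 23-bit scrambler LFSR using the PCIe Gen3
# polynomial G(X) = X^23 + X^21 + X^16 + X^8 + X^5 + X^2 + 1.
# Seed, bit ordering and advance rules per the USB 3.1 spec are omitted.

def lfsr_bits(nbits, seed=0x7FFFFF):
    """Generate nbits of scrambler output from a 23-bit Fibonacci LFSR."""
    state = seed & 0x7FFFFF
    out = []
    for _ in range(nbits):
        # Taps at exponents 23, 21, 16, 8, 5, 2 -> state bits 22, 20, 15, 7, 4, 1
        fb = ((state >> 22) ^ (state >> 20) ^ (state >> 15) ^
              (state >> 7) ^ (state >> 4) ^ (state >> 1)) & 1
        out.append(state >> 22)               # MSB shifts out as the output bit
        state = ((state << 1) | fb) & 0x7FFFFF
    return out

def scramble(data_bits, seed=0x7FFFFF):
    """XOR the data with the LFSR stream."""
    return [d ^ s for d, s in zip(data_bits, lfsr_bits(len(data_bits), seed))]
```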

Of course, with the higher data rate more loss is to be expected, which is one of the reasons for the change in channel length. While the target reference channel loss for Gen 1 was under -20dB at 2.5GHz, the loss is now targeted at about -23dB at 5GHz. Much of that signal loss is recoverable, and the spec gains some relief by requiring a more complex receiver equalizer model and by limiting the expected cable length to 1 meter, which is equivalent to about -6dB at 5GHz. The target channel length certainly could have stayed at 3 meters, but that would likely have added design complexity as well as higher power requirements.
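A rough loss-budget check, using only the figures quoted above (all values illustrative, not normative):

```python
# Rough Gen 2 loss budget at the 5 GHz Nyquist frequency, using the
# figures quoted above. Illustrative only, not normative values.

reference_channel_db = -23.0   # target total reference channel loss
cable_1m_db          = -6.0    # roughly what 1 m of cable costs at 5 GHz

pcb_budget_db = reference_channel_db - cable_1m_db
print(f"Left for host + device PCB and connectors: {pcb_budget_db:.0f} dB")

# Had the 3 m cable length been kept, the cable alone would consume about
# 3 * -6 = -18 dB, leaving only ~ -5 dB for everything else -- one way to
# see why the maximum cable length dropped to 1 m at 10 Gb/s.
print(f"Hypothetical 3 m cable alone: {3 * cable_1m_db:.0f} dB")
```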

One other change worth discussing is transmitter equalization. For Gen 1, Tx EQ was required and implemented using simple de-emphasis. Gen 2 instead uses a 3-tap model as the reference for Tx EQ and includes a normative Tx EQ setting. This was done to ensure enough margin to support the long 23dB channel, where much of the (now higher) deterministic jitter can be compensated for.
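The sketch below shows the general shape of a 3-tap Tx FIR (pre-cursor and post-cursor taps around a main cursor, one tap per UI). The tap values are invented for illustration; the normative coefficients come from the USB 3.1 specification.

```python
import numpy as np

# 3-tap Tx FIR sketch: pre-cursor, main cursor, post-cursor, one tap per UI.
# Tap values are illustrative; the normative setting is defined in the spec.
c_pre, c_main, c_post = -0.1, 0.7, -0.2

def tx_fir(symbols):
    """Apply the 3-tap FIR across consecutive +/-1 symbols."""
    return np.convolve(symbols, [c_pre, c_main, c_post], mode="same")

# A long run of identical bits settles to the de-emphasized level, while an
# isolated transition gets the full boosted swing:
settled = abs(c_main) - abs(c_pre) - abs(c_post)   # 0.4
peak    = abs(c_main) + abs(c_pre) + abs(c_post)   # 1.0
print(f"Equivalent de-emphasis: {20 * np.log10(settled / peak):.1f} dB")
```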

A word about channel length

Everyone knows that signal integrity suffers at the end of a long cable, which is why you always check the eye diagram to make sure the signal can be read. But with Gen 2, you'll want to look at short channel scenarios too. What about a short channel would cause a Gen 2 system not to work? As data rates increase, equalization becomes the secret sauce that makes everything work, and equalization is very sensitive both to a signal that overdrives the receiver input and to the signal-to-noise ratio. These factors make the short channel a must-test scenario at 10Gb/s. It is also required at 5Gb/s, but I have never seen anyone fail the short channel case at 5Gb/s. At 10Gb/s, we'll see.

Transmitter testing overview

The required transmitter tests for USB 3.1 Gen 2 are shown in the chart below. There are three main groups: clocking with SSC; traditional PHY measurements such as jitter, voltage and eye mask; and LFPS timing and voltage swing measurements. The tests unique to 10Gb/s are highlighted in red. While there are not many new measurements, the Gen 2 measurements have predictably tighter limits.

(*See Figure 2)

Transmitter testing starts with connecting to the device under test using an approved set of fixtures and phase-matched cables. Before taking measurements, you'll want to make sure your equipment is calibrated and your channels are de-skewed, since these are differential waveforms. We recommend the use of real-time oscilloscopes with a minimum bandwidth of 20GHz, although more or less may be appropriate depending on your design and other factors. For jitter analysis you will need to capture both a clock pattern (CP10) and a data pattern (CP9). Both of these waveforms are convolved with a software channel, and the resulting closed eye is then opened up with the reference receiver.
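The channel-embedding step is, at its core, a convolution. A minimal sketch follows, using a toy impulse response; real compliance software derives the channel from the spec's reference model, and the capture name is hypothetical:

```python
import numpy as np

# Channel embedding sketch: convolve the waveform captured at the fixture
# with the impulse response of the long reference channel, producing the
# closed eye the reference receiver must reopen. The impulse response here
# is a toy single-pole stand-in, not the spec's reference channel.

fs = 80e9                          # example scope sample rate
t  = np.arange(0, 2e-9, 1 / fs)
h  = np.exp(-t / 100e-12)          # toy low-pass impulse response
h /= h.sum()                       # normalize DC gain to 1

def embed_channel(waveform, h):
    return np.convolve(waveform, h)[: len(waveform)]

# usage (hypothetical capture): closed_eye = embed_channel(cp9_capture, h)
```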

In a typical physical setup, as shown below, it may seem strange that we are doing far-end measurements with the scope connected directly to the fixture output. In this case, the scope's software embeds the long test channel into the waveform and then makes the measurements on the post-processed waveforms. The receiver input is connected to a signal generator that transmits a bursted signal to change the compliance pattern to whatever is needed for a given test. If the host is sending a CP9 data pattern and the current test needs a CP10 clock pattern, you can have the generator send two bursts of a 20 MHz signal to the host, which tells it to switch patterns.
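Generating those "two bursts of a 20 MHz signal" is straightforward; here is a sketch with placeholder burst and gap timing (the real timing comes from the spec or your generator's compliance setup):

```python
import numpy as np

# Pattern-toggle stimulus sketch: two bursts of a 20 MHz tone (an LFPS-style
# ping-ping) aimed at the DUT receiver to advance the compliance pattern.
# Burst and gap durations are placeholders, not spec values.

fs        = 2.5e9                  # example generator sample rate
burst_dur = 1e-6                   # placeholder burst length
gap_dur   = 1e-6                   # placeholder gap between bursts

t     = np.arange(0, burst_dur, 1 / fs)
burst = np.sin(2 * np.pi * 20e6 * t)
gap   = np.zeros(int(gap_dur * fs))

ping_ping = np.concatenate([burst, gap, burst])   # "switch pattern" stimulus
```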

(*See Figure 3)

The software receiver used to embed the channel is also used to open up the eye. The Gen 1 reference receiver used a passive model of poles and zeros. The Gen 2 reference receiver now uses an AC gain parameter with an active model, which should improve eye margin. This, in addition to a 1-tap decision feedback equalizer, is required to overcome the worst-case channel used for compliance testing.
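A rough sketch of those two stages, a CTLE with an AC-gain knob followed by a 1-tap DFE, is shown below; the pole/zero placements, gain and tap weight are invented for illustration and are not the spec's reference values.

```python
import numpy as np
from scipy import signal

# Reference-receiver sketch: a CTLE (one zero, two poles, AC gain knob)
# followed by a 1-tap DFE. All pole/zero/gain/tap values here are made up
# for illustration; the normative model is defined in the USB 3.1 spec.

def ctle(waveform, fs, f_zero=1e9, f_pole1=5e9, f_pole2=10e9, ac_gain_db=6.0):
    g = 10 ** (ac_gain_db / 20)
    wz, wp1, wp2 = (2 * np.pi * f for f in (f_zero, f_pole1, f_pole2))
    num = [g / wz, g]                               # g * (1 + s/wz)
    den = np.polymul([1 / wp1, 1], [1 / wp2, 1])    # (1 + s/wp1)(1 + s/wp2)
    b, a = signal.bilinear(num, den, fs)            # map to discrete time
    return signal.lfilter(b, a, waveform)

def dfe_1tap(ui_samples, tap=0.2):
    """Subtract the ISI contribution of the previous decision (1 tap)."""
    out, prev = np.empty_like(ui_samples), 0.0
    for i, s in enumerate(ui_samples):
        corrected = s - tap * prev
        prev = 1.0 if corrected > 0 else -1.0
        out[i] = corrected
    return out
```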

Receiver testing overview

The role of receiver testing is to validate that the receiver is capable of properly recovering a worst-case signal. Beyond validating functionality, a jitter tolerance test is required for certification. Receiver testing requires a transmitter source to generate an impaired pattern that can be sent to the receiver. The receiver is then put into a mode that echoes or loops back the signal. The loopback method must be far-end retimed, thereby testing both the clock data recovery and the signal conditioning blocks such as equalization. The loopback signal is then compared to the actual generated signal to make sure there are no errors. Idle symbols like SKPs are not compared during the BER test.
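The comparison logic is conceptually simple. Below is a toy sketch that skips SKP symbols when counting bit errors; real SKP ordered sets are multi-symbol, a detail simplified away here:

```python
# Toy BER comparison that ignores SKP idle symbols, which a retimed
# loopback path may legally insert or delete. Real SKP ordered sets are
# multi-symbol; that detail is simplified away here.

SKP = object()   # stand-in token for an idle SKP symbol

def count_bit_errors(sent, received):
    tx = (s for s in sent if s is not SKP)
    rx = (r for r in received if r is not SKP)
    return sum(bin(s ^ r).count("1") for s, r in zip(tx, rx))

sent     = [0xAA, 0x55, 0xFF]
received = [0xAA, SKP, 0x55, 0xFD]       # one SKP inserted, one bit flipped
print(count_bit_errors(sent, received))  # -> 1
```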

One of the most important parts of receiver testing is link training, which ensures the receiver equalizer is tuned properly for a given channel. Gen 2 adds three new sub-states in the polling state machine to allow a port to identify itself as 10Gb/s capable and to synchronize with another link partner. To test a receiver, you need to enter loopback mode after full link training. One of the challenging parts of link training with test equipment is ensuring proper stimulus and response for loopback entry. The loopback process, which includes a full link handshake, allows not only for PHY synchronization but also for proper equalizer adaptation. In fact, the USB 3.1 specification was recently updated to double the length of the TSEQ training pattern, giving a receiver more time to test many coefficient settings.

Looking at the jitter tolerance parameters required for Gen 1 and Gen 2 in the chart below, a few points stand out. The corner frequency has moved from 4.9 to 7.5 MHz. Rj is much lower than for Gen 1: the original USB 3.1 spec set Rj at about 1.3 ps rms, but this has since been lowered to 1 ps to accommodate higher Dj in the channel. Finally, a calibrated receiver setup for Gen 1 used 3 dB de-emphasis, but Gen 2 requires a 3-tap emphasis setup.
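To relate those rms numbers to the eye, random jitter is conventionally converted to peak-to-peak at a target BER via Rj_pp = 2 x Q(BER) x Rj_rms, with Q of roughly 7.03 at a BER of 1e-12:

```python
# Converting the Rj spec values from rms to peak-to-peak at BER 1e-12,
# using the conventional Rj_pp = 2 * Q * Rj_rms with Q ~= 7.03.

Q_1E_12 = 7.03
UI_PS   = 100.0    # one UI at 10 Gb/s is 100 ps

for label, rj_rms in [("original 3.1 spec", 1.3), ("current spec", 1.0)]:
    rj_pp = 2 * Q_1E_12 * rj_rms
    print(f"Rj ({label}): {rj_rms} ps rms -> {rj_pp:.1f} ps pk-pk, "
          f"{100 * rj_pp / UI_PS:.1f}% of a UI")
```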

(*See Figure 4)

Just because receiver testing is a go/no-go test, don't be fooled by its apparent simplicity. One of the more time-consuming parts of testing is calibration, which in this case involves calibrating the generator to the proper minimum swing along with the prescribed amount of jitter. Then you need to calibrate all the stresses together, with the long channel included. Because of the symmetrical nature of the Type-C connector, the setup is the same for host and device calibration and testing (more on this in a follow-up article).

Calibration is much easier and more accurate with software automation. There are three parts to this setup. First is the stress generator: the BERT, which serves as both the source and the error detector for the BER test. Second, a real-time oscilloscope is used to capture the clock and data patterns. Finally, compliance software is used to verify each part of the stress recipe.
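In practice, all three pieces hang off a control PC. Here is a minimal connectivity sketch using PyVISA; the resource addresses are placeholders, and instrument-specific SCPI commands are left out since they vary by vendor:

```python
import pyvisa

# Minimal connectivity check for the three-part setup: a BERT (stress source
# and error detector) and a real-time scope, driven from compliance software
# on a control PC. Addresses are placeholders; vendor SCPI is omitted.

rm    = pyvisa.ResourceManager()
bert  = rm.open_resource("TCPIP0::192.168.0.10::inst0::INSTR")  # placeholder
scope = rm.open_resource("TCPIP0::192.168.0.11::inst0::INSTR")  # placeholder

for inst in (bert, scope):
    print(inst.query("*IDN?"))   # generic identification query
```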

This may sound strange, but if you have the time, I actually recommend going through the calibration procedure manually at least once. While this can be painfully slow, it will help you see how important each step in the process is. It also allows you to trust the automation, because you'll know what to expect.

With higher speeds, a new connector and much improved power delivery capability, the excitement around USB 3.1 is already starting to grow. There is much work to be done of course to create a robust ecosystem, but USB 3.1 is already shaping up to be one of the most important versions of USB yet.

# # #

Randy White is a product manager at Tektronix Inc., where he has worked for the last 10 years with a focus on high-speed serial measurement solutions for industry standards. He is actively involved in various standards bodies, including T10, SATA-IO and USB-IF. He holds a BSEE from Oregon State University in Corvallis, OR.
