All-Format Analog Front Ends Handle Video And PC Formats

May 13, 2002
As designers conquer a myriad of video-interface standards, users will reap PC/TV benefits at home.

Much-hyped PC/TV convergence is becoming a reality in many display products. Consider an LCD PC monitor with a TV input, or a big-screen TV coupled to your home PC for Web surfing. Also, projectors for business presentations are shrinking to the point where they can easily be connected to your DVD player in a temporary home-entertainment setup. Although the simultaneous support of video (TV) and PC inputs might be obvious to the end user, the hardware and software designers must contend with a multitude of video-interface standards and the instruments upholding them.

Historically, PC and TV formats developed with little in common. Video targeted over-the-air transmission. Consequently, it's a bandwidth-limited signal that has black and white (luma) and color (chroma) information frequency-multiplexed into one composite-video signal using interlaced video scanning. Standards evolved separately in Europe and the U.S. for composite video: PAL and SECAM in Europe, NTSC in the U.S.

However, graphics signals developed for a point-to-point connection between a PC and a monitor aren't bandwidth limited. They require separate red, green, and blue (RGB) components, with video-synchronization signals carried on dedicated lines. While there's a common framework for PC graphics signals with no regional differences, several industry-standard formats exist, including legacy IBM and MAC graphics formats.

Enter digital TV (DTV), which introduces component, not composite, video transmission using the YCbCr color-space representation (Y = luma; Cb and Cr are chroma difference signals), instead of RGB. DTV also unlinks the video and transmission formats; the same transmission system can be implemented for several formats—actually up to 18 variations for terrestrial DTV. Some of these are the common 1080I and 720P high-definition (HDTV), 480I standard-definition (SDTV), and 480P enhanced-definition (EDTV) formats.

Creating A Video/Graphics Front End: Today, designers are building all-format video front ends using several available ICs: an analog-input video multiplexer, a (digital) video-decoder IC, and a high-speed triple-graphics ADC analog front end (AFE). These front ends handle the following signals (Fig. 1):

  • CV: composite video (PAL/NTSC/SECAM) that combines luma (Y) and subcarrier-modulated chroma (C) in one signal. The color subcarrier for NTSC is at 3.58 MHz, while for PAL it's at 4.43 MHz.
  • Y and C: the luma and modulated chroma components of S-Video. This signal carries Y and C on separate signals. Thus, CV = Y + C.
  • YCbCr or YUV: component video with luma and both color-difference signals (red minus luma and blue minus luma) on three separate signals. So, C of S-Video = subcarrier-modulated CbCr. U and V are scaled components of Cb and Cr, respectively, prior to subcarrier modulation.

Presently, real-world video interfacing remains very much analog. All TV-based equipment requires analog video inputs. Recent market research suggests that more than 80% of today's flat-panel PC desktop monitors have analog interfaces.

Any display application that requires both video and graphics inputs needs a "dual" front end. The separate AFE is necessary because the front-end ADCs in the video decoder IC can't handle the high sampling clocks for enhanced and high-definition formats, or for PC graphics (see the table).

An independent sync-separator IC is needed to operate with formats that carry video synchronization embedded in the Y component, such as all component DTV formats, and some graphics formats. Another complexity is that sync-separator ICs are traditionally developed for SDTV sync formats, not for HDTV. PC formats, on the other hand, carry their video syncs on separate Hsync and Vsync signals. Once all formats are digitized, a video back end takes care of both scaling and video de-interlacing, such as in a flat-panel display, or video compression in video-storage applications.

Design Issues With Discrete ICs: The AFE will digitize either graphics to RGB or component video to YCbCr (because the component analog video interface uses YCbCr). But all displays require RGB-style signals for the internal display element, whether it's an LCD panel, a CRT, or a digital micromirror device for DLP-based projection equipment. Because the AFE's clamping circuit must be configured differently for RGB graphics than for YCbCr component video, the digital back end must detect the color space used by the source and initialize the video clamp accordingly. If necessary, conversion to RGB is performed via a matrixing operation.
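
To illustrate the matrixing step, the sketch below converts one YCbCr pixel to RGB. It is a minimal sketch assuming 8-bit, full-range samples and BT.601-style coefficients; real back ends use standard-specific, fixed-point versions of the same 3-by-3 matrix.

```python
# Minimal sketch: YCbCr -> RGB matrixing (assumes 8-bit full-range samples,
# BT.601-style coefficients). Hardware implementations use fixed-point math.

def ycbcr_to_rgb(y, cb, cr):
    """Convert one 8-bit YCbCr pixel to 8-bit RGB."""
    cb -= 128                      # color-difference channels are centered at mid-scale
    cr -= 128
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb

    def clip(v):                   # the display element expects 0..255
        return max(0, min(255, int(round(v))))

    return clip(r), clip(g), clip(b)

print(ycbcr_to_rgb(128, 128, 128))   # mid-gray maps to (128, 128, 128)
```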

Second, PC monitors have a linear transfer characteristic, while traditional CRT-tube TVs need to be overdriven in the lower-amplitude (black) regions. Their I/O relationship is approximated by a power function: light output = (voltage input)^γ, where γ represents the nonlinear light-output-versus-voltage-input relationship of the picture tube. Because of a tube's low light output in dark areas, its gamma is greater than 1 (usually about 2.5). Thus, low-intensity areas of the picture (near black) are compressed, and high-intensity areas (near white) are expanded.

To compensate for this, the video signal is emphasized prior to transmission using an inverse gamma curve. This process is known as gamma correction. The designer potentially faces the need to "de-gamma" the video signal when the display has a linear transfer characteristic (γ = 1), as in most flat-panel displays, rather than the traditional nonlinear characteristic of CRT displays.
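
As a rough illustration of the relationship, the sketch below applies the power law described above. The gamma value of 2.5 and the normalized floating-point signals are simplifying assumptions; real displays typically implement the correction with per-channel lookup tables.

```python
# Sketch of gamma handling (assumed gamma = 2.5, normalized 0..1 signals).
# A CRT's light output ~ (drive voltage)^gamma, so the source is pre-corrected
# with the inverse curve; a linear flat panel must undo that pre-correction.

GAMMA = 2.5

def gamma_correct(v):
    """Pre-correction applied at the source (inverse gamma curve)."""
    return v ** (1.0 / GAMMA)

def de_gamma(v):
    """Undo the pre-correction for a linear (gamma = 1) display."""
    return v ** GAMMA

v = 0.25                        # a dark pixel, normalized
tx = gamma_correct(v)           # what is actually transmitted (~0.574)
print(round(de_gamma(tx), 3))   # a linear panel recovers ~0.25
```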

Finally, there's the issue of video-input synchronization and format detection. Today's PC graphics signals don't carry their own identification, so the AFE needs to detect which graphics format goes to the display, then properly initialize the front end to the correct sampling frequency and phase. Graphics signals are stair-stepped and not bandwidth-limited. So, in addition to controlling the sampling frequency, the clock phase must be set to avoid sampling during pixel transitions.

Today's AFEs demand external detection of the incoming format (based on video sync frequencies and pixel analysis) and provide only low-level access to the phase-locked loop's (PLL) frequency and phase controls. They don't offer an integrated "auto-lock" algorithm.
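
In practice, the host typically measures the sync frequencies, looks the result up in a mode table, and programs the PLL feedback divider so the sampling clock yields the right number of samples per line. The sketch below outlines that flow; the mode entries are standard VESA-style timings, but the 2% tolerance, the function names, and the overall structure are illustrative assumptions, not any vendor's auto-lock algorithm.

```python
# Sketch of host-side mode detection and PLL setup (illustrative structure only).
# Each mode entry: (name, hsync_kHz, vsync_Hz, total_pixels_per_line).
MODES = [
    ("VGA 640x480 @ 60 Hz",  31.469, 59.94, 800),
    ("SVGA 800x600 @ 60 Hz", 37.879, 60.32, 1056),
    ("XGA 1024x768 @ 60 Hz", 48.363, 60.00, 1344),
]

def detect_mode(measured_h_khz, measured_v_hz, tol=0.02):
    """Match measured sync rates against the mode table (2% tolerance assumed)."""
    for name, h, v, total in MODES:
        if abs(measured_h_khz - h) / h < tol and abs(measured_v_hz - v) / v < tol:
            return name, total
    return None, None

name, total_pixels = detect_mode(48.4, 60.0)
if name:
    # The AFE's PLL multiplies the Hsync rate by the line length to get the pixel clock.
    pixel_clock_mhz = 48.4e3 * total_pixels / 1e6
    print(name, "-> PLL divider", total_pixels, "=> %.2f MHz" % pixel_clock_mhz)
```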

Further aggravating this video detection and synchronization issue is the fact that, as noted, the video sync can be carried in several ways: Hsync/Vsync for most graphics formats, embedded composite sync for video, or even composite sync on a dedicated line. Thus, the system needs to detect the synchronization method employed by the video/graphics source. This depends on the video format, which is yet to be determined!
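
A host routine might resolve this with a simple prioritized check of which sync lines show activity, as in the sketch below. The activity flags would come from basic hardware detectors; the function and flag names are hypothetical.

```python
# Sketch: deciding where the sync actually is, before the format itself is known.
# The activity flags are assumed to come from simple hardware sync detectors.

def select_sync_source(hsync_active, vsync_active, csync_active, sync_on_y):
    if hsync_active and vsync_active:
        return "separate H/V sync (typical PC graphics)"
    if csync_active:
        return "composite sync on a dedicated line"
    if sync_on_y:
        return "sync embedded in Y/green (component DTV, some graphics)"
    return "no sync detected"

print(select_sync_source(False, False, False, True))
```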

This matter of auto-format detection, including auto-sync and phase control, is probably the most important system-level problem in an analog video front-end design, because it has a direct impact on performance and user-perceived quality. Everybody expects a more expensive digital display to sync up to an incoming format change as easily and as quickly as the old CRT multisync monitor did.

Note that no mode detection is necessary in analog CRTs. As long as the CRT monitor's deflection circuitry can track the horizontal line rate, the monitor will sync. Plus, because it's an analog display, the video doesn't have to be scaled to any inherent monitor resolution.

Beyond the three issues listed above, other secondary concerns exist at the system level. Among them are auto power-save and wakeup for PC graphics, video-input standard detection in the decoder, and manual or automatically detected switching of the active video input.

Some of these issues cause tedious interactions between the analog video/graphics front and back ends. With today's ICs, all of these system aspects require the intervention of a host controller, and significant design effort from the hardware and software designer. The back end will have to perform front-end related tasks like color-space conversion and gamma correction. The system CPU will run the auto-lock algorithm for sampling frequency and phase control of the front end.

Good video picture quality should be achieved with both standard and nonstandard video sources. Picture quality encompasses visible impairments (blurring and artifacts), plus the speed and stability of horizontal, vertical, and color lock. Nonstandard sources include weak and noisy signals, as well as signals from VCRs, DVD players, and video game consoles.

Weak signals result from the reception of a weak RF signal from a distant transmitter. VCRs may produce a signal whose time base is unstable and dependent on the mode of operation: normal play or trick mode (pause, fast forward, and rewind). The video line frequency may vary by up to 5% from its nominal value, and before the vertical-sync interval, the VCR head switch can produce horizontal-sync jumps of up to 15 µs. During trick modes, the number of lines per frame may also vary by ±5%, and the vertical sync may change shape. All of these effects present real challenges for a digital video decoder.

Furthermore, copy-protection methods, such as the popular Macrovision technique (Macrovision Corp., Santa Clara, Calif.), can be added to the video signal, presenting more problems with horizontal and color lock, and automatic-gain adjustment.

The decoder's front end clamps the input video to a reference voltage, performs automatic gain control (over a ±6-dB range) and offset adjustment, then completes the analog-to-digital conversion. But the actual video decoding occurs in the digital domain. Proper synchronization must be achieved and maintained even with a compressed or stretched horizontal sync.
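
As a simple illustration of the gain-control step, the sketch below computes the gain needed to restore a weak composite input to nominal amplitude, limited to the ±6-dB range mentioned above. The 286-mV nominal NTSC sync amplitude is a standard figure, but the measurement reference and control loop are device-specific assumptions here.

```python
# Sketch: deriving the front-end AGC gain from the measured sync amplitude.
# Nominal sync-tip-to-blanking amplitude for NTSC composite video is ~286 mV;
# the computed gain is clamped to the +/-6 dB range mentioned above.
import math

NOMINAL_SYNC_MV = 286.0

def agc_gain_db(measured_sync_mv):
    gain_db = 20.0 * math.log10(NOMINAL_SYNC_MV / measured_sync_mv)
    return max(-6.0, min(6.0, gain_db))

print(round(agc_gain_db(200.0), 2))   # weak input -> ~+3.1 dB of gain
```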

AC nonlinearities in the front end cause distinctive spurious frequencies that can show up as beats in the luma or chroma output of the decoder, while DC nonlinearities can appear as unnatural quantizing effects in ramp-like video inputs. Cross talk between channels, and between the analog and digital domains, can produce artifacts in the displayed image too.

For standard video, a low pixel-clock jitter is a key requirement for a clean, clear, crisp picture. Line-locked video decoders use a horizontal PLL to lock the sampling or pixel clock to the input source's horizontal sync. A large time constant and a damping factor of less than 1.0 result in low jitter and maximum noise immunity. For VCRs, a smaller time constant and larger damping factor supply a faster response to the sync jump, but more clock jitter.
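
The trade-off can be shown with a toy per-line loop model. The sketch below is illustrative only: the PI loop structure, gains, and noise figures are assumptions, not any decoder's actual implementation. The low-bandwidth "TV" setting shows less tracking jitter but recovers slowly from a 15-µs head-switch jump, while the wider "VCR" setting does the opposite.

```python
# Illustrative model of the line-locked PLL trade-off: a per-line PI loop whose
# bandwidth 'bw' (rad/line) sets the time constant and 'zeta' the damping factor.
import random
import statistics

def run_loop(bw, zeta, lines=500, jump_us=15.0, noise_us=0.1):
    kp, ki = 2.0 * zeta * bw, bw * bw   # PI gains from bandwidth and damping
    phase = freq = 0.0                  # loop's estimate of the sync position (us)
    history = []
    random.seed(0)
    for n in range(lines):
        target = jump_us if n >= 250 else 0.0            # VCR head-switch jump
        err = target + random.gauss(0.0, noise_us) - phase
        freq += ki * err                                 # integral path
        phase += freq + kp * err                         # proportional path
        history.append(phase - target)
    jitter = statistics.pstdev(history[100:250])         # tracking noise before jump
    recovery = abs(history[270])                         # residual error 20 lines later
    return round(jitter, 3), round(recovery, 2)

print("TV  setting (jitter, recovery us):", run_loop(bw=0.02, zeta=0.7))
print("VCR setting (jitter, recovery us):", run_loop(bw=0.2,  zeta=1.5))
```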

Color demodulation quality also depends on clock jitter effects. The input's color burst is a very stable frequency reference. For a VCR it's generated from a crystal oscillator. Only a stable local-color oscillator frequency ensures proper color demodulation. If the variation in the local color oscillator becomes too large, flashing color and horizontal color stripes may appear in the picture.

Nonstandard video signals weaken horizontal, vertical, and color lock. Introducing noise distorts the sync and color-burst waveforms and makes it difficult to detect and lock to them. Even if lock is achieved, the noise induces more clock jitter, reducing lock quality. Artifacts and color streaking may appear in the picture. A metric is the signal-to-noise ratio at which lock is lost (−10 dB is a good target).

The decoder's "comb" filter separates luma and chroma from the composite-video input. Two- and three-dimensional filters are common. Two-dimensional comb filters average pixels across lines and adapt to luma and chroma contours in the input. Three-dimensional comb filters average pixels across frames and must adapt to motion.

Incorrect luma/chroma separation brings undesirable artifacts to the picture. Luma frequencies within the bandwidth of chroma are demodulated and appear as false color in the picture. Test patterns, like vertical black and white lines, plus concentric circles, help in evaluating the comb filter's cross-color suppression.

Chroma that is output in the luma channel produces crawling dot artifacts in the picture. Test patterns containing horizontal-color bar boundaries are used to evaluate the suppression of such cross-luma by the comb filter. Noise and clock jitter also affect the performance of the comb filter.

Figure 2 illustrates the false color and artifacts that occur in an NTSC test pattern when the comb filter is switched off. The effect of false color (cross color) is clearly visible in the area of the high-frequency vertical lines. Because this high-frequency luma lies around the color subcarrier frequency, it's erroneously decoded as chroma when only a simple notch filter (also called a chroma trap) separates the two.

Figure 3 shows the result of a digital comb filter on TI's TVP5145 video decoder. There's much better separation of luma and chroma (no false color), along with reduced false luma that would come from color information incorrectly demodulated as luma.

To simplify the design of a universal video/graphics input circuit, an "all-in-one" video decoder/AFE would integrate several of the discrete ICs in Figure 1. The video decoder module for NTSC/PAL/SECAM composite and S-video would be combined with the triple ADC channels for component video and PC graphics.

Such a universal front end needs to handle the color spaces of both component-video and graphics inputs (YCbCr and RGB) and convert between them to provide a consistent output interface to the next device. Furthermore, an embedded-controller function implements many system features, like format detection, reliable sync separation, and timing generation.

An example of such a universal front end is the TVP5200 "all-format" decoder. On this device, the exact functionality of an embedded RISC CPU can be configured at power-up via microcode downloaded through a host-port interface. With 165-MHz high-speed ADC channels, the device supports all graphics/video formats in the table, up to UXGA at 60 Hz. A separate 10-bit ADC channel performs high-quality, composite-video decoding of standard and nonstandard video signals, including Macrovision encoded video.

The authors want to thank Li Zhang for his assistance in providing the captured video images.

SAMPLING SPEED FOR GRAPHICS/VIDEO SIGNALS
Video format                          Sampling speed (MHz)
SDTV (1X/2X oversampled)              13.5 / 27
PC VGA 640 by 480 (60/75/85 Hz)       25.175 / 31.5 / 36.0
EDTV (480P) (1X/2X oversampled)       27 / 54
PC SVGA 800 by 600 (60/75/85 Hz)      40.0 / 49.5 / 56.25
HDTV (720P/1080I)                     74.25
PC XGA 1024 by 768 (60/75/85 Hz)      65.0 / 78.75 / 94.5
PC SXGA 1280 by 1024 (60/75/85 Hz)    108.0 / 135.0 / 157.5
PC UXGA 1600 by 1200 (60/75/85 Hz)    162.0 / 202.5 / 229.5
