Edge-triggered interrupts are useful in microcontroller (MCU) applications for processing asynchronous events like switch closures, level transitions, and pulses. However, low-cost MCUs offer limited on-chip resources to handle such interrupts.
Yet designers can “trick” an unused on-chip UART into interrupting the MCU to detect edges in the input waveform. This concept avoids the need to upgrade to a more costly MCU when all the conventional on-chip resources for edge-triggered interrupts have been exhausted but the application needs to process one more edge-driven input.
In this scheme, the Rx pin of the asynchronous receiver acts as the edge-driven interrupt pin. When the Rx pin receives a high-to-low transition (that is, a low-going edge in the input signal), the UART interprets this as the arrival of a start bit.
The baud-rate clock continues sampling the input, and the UART registers a byte value of 0x00 with its "break error" bit set. The error arises because the stop bit is missing from the input waveform: the input is still at logic zero when the UART expects a stop bit.
The Interrupt Service Routine detects this "zero byte with break error" and signals the application program that an edge has been detected in the input waveform. The baud rate should be high enough to minimize edge-detection latency, but low enough to filter out, or debounce, spurious noise disturbances.
This way, if a spurious noise glitch interrupts the controller, the input returns to logic high before the baud-rate clock finishes reading a byte (Fig. 1). The Interrupt Service Routine then sees a non-zero byte with no error, which the application routine (Fig. 2) can recognize as a spurious glitch and ignore.
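The decision logic of such an Interrupt Service Routine can be sketched in C. The register names below are hypothetical stand-ins, modeled as plain variables so the sketch is self-contained; on a real MCU they would be the device's memory-mapped UART data and status registers, and the ISR would be registered with the vendor's interrupt vector mechanism.

```c
#include <stdint.h>

/* Hypothetical UART registers, modeled as variables for illustration.
 * On real hardware these are device-specific memory-mapped registers. */
volatile uint8_t uart_rx_data;     /* last received byte */
volatile uint8_t uart_break_error; /* set when the stop bit was missing */

volatile uint8_t edge_detected;    /* flag polled by the application */

/* UART receive ISR: a 0x00 byte together with a break error means the
 * RX line stayed low for a full character time, i.e., a genuine falling
 * edge on the input. A non-zero byte without the error means the line
 * went back high mid-character: a spurious glitch, which is ignored. */
void uart_rx_isr(void)
{
    if (uart_rx_data == 0x00 && uart_break_error) {
        edge_detected = 1;     /* real falling edge */
    }
    /* otherwise: glitch, drop it */
    uart_break_error = 0;      /* clear status for the next character */
}
```

The application then polls or consumes `edge_detected` in its main loop, clearing it once the event has been handled.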
To detect a positive rising edge, simply add a transistor inverter to the input to create the desired falling edge.
This IFD does show a novel way of using the UART inside a microcontroller, i.e., to generate an external interrupt. However, the value of the idea is very limited, and it seems to be a solution in search of a problem.
Many microcontrollers allow any pin to be used as a source of external interrupts, so designers are likely to run out of I/O pins before running out of interrupt inputs. (Pin functions are often multiplexed, so the UART RX pin will probably double as a general-purpose I/O pin anyway.) Also, a micro sophisticated enough to have a built-in UART is bound to have plenty of pins that can generate an interrupt. Finally, an interrupt generated at the end of a received byte on a serial port will have considerable latency measured from the start of the event, that is, from when the start bit occurs.
For example, at a baud rate of 112 kbits/s with an N, 8, 1 setting, 10 bit times must elapse before the interrupt is generated, i.e., a latency of 89 µs. By contrast, if a microcontroller is running fast (at a megahertz clock rate), a conventional pin interrupt's latency is often only a few clock cycles, since the processor need only complete its current instruction before branching to the interrupt routine, i.e., a latency of a few microseconds.
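The latency arithmetic above can be captured in a small helper; the function name is illustrative, not from the original design.

```c
/* Worst-case edge-detection latency, in microseconds, for one UART
 * character of `bits` bit times (start + data + stop) at the given
 * baud rate. For N,8,1 framing, bits = 10. */
double uart_latency_us(double baud, int bits)
{
    return (double)bits / baud * 1e6;
}
```

At 112 kbits/s with 10 bit times this evaluates to roughly 89 µs, matching the figure quoted in the text.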
On the other hand, if the micro is running slow, it can't generate such a fast baud rate, so the interrupt latency will be correspondingly high.
So if you’re really stuck with a primitive microcontroller that doesn’t have many external interrupt pins, but it happens to have a free UART (and a corresponding free RX pin for this UART), and you have really run out of pins that can generate an interrupt, and if short interrupt latency isn’t really an issue, then this Idea for Design does provide a good way of generating an interrupt.