OEMs and battery pirates are constantly in a quality-control tug of war. So how do companies ward off deviants and dodge the lemons?
Battery-management ICs fall into three basic categories: chargers, gas gauges, and authentication chips. Chargers control how a charging current is safely applied to a battery pack. Gas gauges tell the system how much charge is stored in the battery at any given time. Authentication chips indicate whether the installed battery is approved or not.
Although charging is the most fundamental technology, designing a charging circuit isn't nearly as exciting as foiling battery counterfeiters, or determining precisely how much time is left before a user's application goes to a black screen, so we will take up those three battery functions in reverse order.
Suppose you're selling laptops or cell phones and your products develop a reputation for bursting into flames and burning uncontrollably. If you say that the victims of those events were using pirated batteries, particularly if those pirated batteries were superficially indistinguishable from the real thing, your reputation will surely sink. You want your system to be able to distinguish carefully manufactured and test-approved batteries from something hijacked from recycling bins and prettied up with counterfeit labels.
Battery-authentication schemes range from simply customizing the battery's form factor to a hierarchy of challenge-and-response schemes. Making the battery packs in different shapes isn't much more secure than using no authentication at all. If the end-product is built in any great volume, there's considerable incentive for pirates to copy the battery package. If the end-product volume is low, it probably uses a generic battery that can also be profitably copied by pirates.
The most basic form of electronic authentication is one in which a controller in the end product sends a command to read identification data from the battery (Fig. 1). Such data typically includes a product family code, a unique identification number, and a CRC value. The response to an interrogation is compared with data accessible by the controller in the end product. If the response from the battery doesn't match, action is taken.
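The read-ID exchange can be sketched in a few lines. This is a hypothetical illustration only: the frame layout (one-byte family code, 48-bit serial number, CRC-8) and the function names are invented, not any particular vendor's protocol.

```python
# Hypothetical sketch of the basic "send me your ID" check.
# Frame layout (family code, 48-bit serial, CRC-8) is illustrative.

def crc8(data: bytes, poly: int = 0x31, init: int = 0x00) -> int:
    """Bitwise CRC-8 over the ID fields."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def battery_id_valid(frame: bytes, approved_family: int) -> bool:
    family, serial, crc = frame[0], frame[1:7], frame[7]
    if crc8(frame[:7]) != crc:          # corrupted or malformed frame
        return False
    return family == approved_family    # only the family code is checked here

# Example: build a well-formed frame and check it
frame = bytes([0x26]) + bytes(6)
frame += bytes([crc8(frame)])
print(battery_id_valid(frame, 0x26))    # True: family matches, CRC intact
```

Note that nothing here is secret, which is exactly the weakness discussed below: a pirate who can read the frame can replay it verbatim.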
Incidentally, this action may not be rejection of the bogus battery. In some end applications, such as power tools, lots of counterfeit batteries can infiltrate the community, and the power-tool OEM doesn't want a reputation for making fussy chargers. In this environment, it may make sense to just have the charger top off the counterfeit battery at a safe (i.e., very slow) rate, ensuring that the battery, and others from the same source, hit the recycle bin after a few slow-charge experiences.
In these basic schemes (in fact, in all schemes), the host controller generally communicates with the chip in the battery through a dedicated general-purpose I/O. This leads to the Achilles heel of simple "send me your ID data" approaches. It's too easy for pirates to read battery IDs and copy them in pirate batteries.
A more robust alternative involves the host controller sending random sequences of bits as a challenge to the battery pack. In this case, both the controller and the battery chip contain a secret key for an algorithm that operates on the random sequence.
When the battery returns a response, the controller compares it with the transformed sequence that it generated. If there's a match, the battery is authentic. Because every challenge is different, the scheme is fairly secure, until a pirate gets his hands on the key, either by applying cryptographic analysis to a large number of responses or by simple bribery.
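The challenge-response flow looks roughly like this sketch. The keyed transform here (a SHA-256 hash over key-plus-challenge) is a stand-in for whatever shared secret algorithm the controller and battery chip actually implement; the key value and function names are invented.

```python
# Sketch of a basic challenge-response exchange. The keyed transform
# (SHA-256 over key || challenge) is a stand-in for the real algorithm.
import hashlib
import os

SECRET_KEY = b"\x13\x37" * 8   # provisioned in both host and battery (illustrative)

def transform(key: bytes, challenge: bytes) -> bytes:
    return hashlib.sha256(key + challenge).digest()

def battery_respond(challenge: bytes) -> bytes:
    return transform(SECRET_KEY, challenge)      # runs inside the battery pack

def host_authenticate() -> bool:
    challenge = os.urandom(16)                   # fresh each time, so replay fails
    expected = transform(SECRET_KEY, challenge)  # host computes its own copy
    return battery_respond(challenge) == expected
```

Because the challenge is random and never reused, simply recording and replaying a previous response gets a counterfeit pack nowhere; only a chip holding the key can answer correctly.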
Taking challenge-response up to a higher security level involves cyclic redundancy checking (CRC). This approach combines a challenge and a secret ID, processed through a CRC algorithm with a random polynomial and seed value.
Typically, a 16-bit CRC response is generated from a 16-bit CRC seed, a 96-bit device ID, and a 32-bit random challenge. The CRC polynomial, CRC seed, and 96-bit ID are different in every battery. They're stored as encrypted text in public memory and as plain text in private memory.
In a challenge, the controller in the end product reads the encrypted device ID, along with the polynomial and seed values from the battery authentication chip's public memory. It decrypts those values using its secret key and then generates a 32-bit random challenge, which is transmitted to the battery chip. In turn, a plain-text version of the polynomial coefficients, seed, and device ID, along with the 32-bit random challenge from the host, are used to calculate the authentication CRC value, which is then returned.
Subsequently, the controller in the end product uses the polynomial coefficients, seed, and device ID that it decrypted, along with the 32-bit random challenge that it sent to the battery, to calculate the authentication CRC value. Then, it compares its results with what the battery returned.
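A minimal sketch of that CRC calculation follows. The polynomial, seed, and device-ID values are made up for illustration, and the encryption/decryption of the public copies is omitted; only the shared CRC computation that both sides perform is shown.

```python
# Sketch of the CRC-based scheme: a 16-bit CRC computed from a
# per-battery polynomial and seed, a 96-bit device ID, and the host's
# 32-bit random challenge. All constants are invented for illustration.
import os

def crc16(data: bytes, poly: int, seed: int) -> int:
    crc = seed
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Per-battery secrets: plain text in the chip's private memory; the host
# recovers them by decrypting the encrypted copies in public memory.
POLY, SEED = 0x8005, 0xBEEF
DEVICE_ID = bytes(range(12))                 # 96-bit device ID

def battery_response(challenge: bytes) -> int:
    return crc16(DEVICE_ID + challenge, POLY, SEED)

challenge = os.urandom(4)                    # 32-bit random challenge
host_value = crc16(DEVICE_ID + challenge, POLY, SEED)
print(host_value == battery_response(challenge))   # True: battery is accepted
```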
Rather than giving a lesson in bomb-making, let's just say that this approach can also be beaten. The highest-level (and most expensive) approach presently preferred employs the SHA-1/HMAC secure hash algorithm. This algorithm is used on the Internet to authenticate transactions on VPNs and for digital certificates.
It works in a fashion similar to the CRC scheme, but with a different algorithm. With a SHA, the controller in the end product reads the battery's 128-bit encrypted device ID from the battery's public memory. It then decrypts those values using its secret key, generates a 160-bit random challenge, and sends it to the authentication chip on the battery. That chip uses the plain-text version of its ID (stored in its private memory) and the 160-bit random challenge to calculate an authentication "digest" value: the condensed representation of the message produced by the hash algorithm.
If the message was altered, it's virtually certain that the algorithm will produce a different digest (with a 160-bit digest, the chance of an accidental match is about 1 in 2^160). As in the case of the CRC, the controller in the end product compares the digest it produced with the digest returned by the battery and acts accordingly.
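The SHA-1/HMAC exchange maps directly onto standard library primitives, as in this sketch. The key and challenge sizes follow the text; the framing, key-provisioning, and function names are illustrative assumptions.

```python
# Sketch of the SHA-1/HMAC exchange. Key and challenge sizes follow the
# article (128-bit secret ID, 160-bit challenge and digest); framing is
# illustrative.
import hashlib
import hmac
import os

DEVICE_ID = os.urandom(16)          # 128-bit secret ID, plain text in private memory

def battery_digest(challenge: bytes) -> bytes:
    # HMAC-SHA-1 over the challenge, keyed with the ID -> 160-bit digest
    return hmac.new(DEVICE_ID, challenge, hashlib.sha1).digest()

def host_check() -> bool:
    challenge = os.urandom(20)      # 160-bit random challenge
    expected = hmac.new(DEVICE_ID, challenge, hashlib.sha1).digest()
    return hmac.compare_digest(expected, battery_digest(challenge))
```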
Older laptops and handhelds had primitive battery-life gauges that provided only a rough approximation of time left before the system shut down. Next-generation systems will be able to tell users exactly how different kinds of applications (watching movies, listening to music, phone calls, and so on) will affect time remaining on the battery.
Gas gauging starts with coulomb counting. Current into and out of a battery is measured with an analog-to-digital converter (ADC) across a sense resistor, and the value in an accumulator is incremented and decremented accordingly.
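In software terms, the accumulator loop is trivial, as in this minimal sketch (the one-second sample period and 2000-mAh capacity are assumptions for illustration):

```python
# Minimal coulomb counter: integrate the ADC's sense-resistor current
# samples into a charge accumulator. Sample period and capacity are
# illustrative values.
class CoulombCounter:
    def __init__(self, capacity_mah: float, period_s: float = 1.0):
        self.capacity_mah = capacity_mah
        self.period_s = period_s
        self.accum_mah = capacity_mah        # assume the pack starts full

    def sample(self, current_ma: float):
        """current_ma > 0 is charge, < 0 is discharge."""
        self.accum_mah += current_ma * self.period_s / 3600.0
        self.accum_mah = min(max(self.accum_mah, 0.0), self.capacity_mah)

    def state_of_charge(self) -> float:
        return self.accum_mah / self.capacity_mah

gauge = CoulombCounter(capacity_mah=2000.0)
for _ in range(3600):                        # one hour at a 500-mA drain
    gauge.sample(-500.0)
print(round(gauge.state_of_charge(), 2))     # 0.75: 500 mAh drawn from 2000 mAh
```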
One could say this is effective but crude, except it isn't terribly effective. The reason is that guard-banding for the inaccuracies of simple coulomb counting gives end users less accurate battery-life predictions than they could have had with more effective gas gauging. That's not good because longer perceived battery life is a key selling point for cell phones and other handheld gadgets.
The limitation of simple coulomb counting is that it ignores the environment's effects on the battery. For example, lithium-ion (Li-ion) cell capacity varies with temperature and discharge rate (Fig. 2). The plot shows a particular cell's charge capacity in milliamp-hours as temperature and discharge rate are varied. The "Full" line on the chart is the point at which the cell is considered fully charged. The "High Current Empty" line is the point at which the cell is considered fully discharged by a 1-C rate at each temperature.
Charging rates are defined in terms of a parameter called C, which is the same as the ampere-hour capacity rating of the battery. The "Low Current Empty" line was plotted for a discharge rate of 0.2 C. The capacity of the cell at a given rate and temperature is the difference from the "Full" line to the corresponding "Empty" line.
Problems arise when the cell is charged and discharged at different temperatures and different rates. Hence, more sophisticated coulomb counters account for cell temperature and charge/discharge rates in their algorithms.
Aging is another challenge. The older the Li-ion cell, the less able it is to store charge. Empirical data shows that aging affects the "Full" characteristic only. The "Empty" line remains unchanged. Thus, keeping track of the number of charge/discharge cycles adds additional complexity to gas-gauging algorithms. The smarter algorithms work by comparing the values in the coulomb counter's register with pre-stored standard "Empty" and "Full" values for that cell type. (For a detailed explanation, see Maxim's application note at www.maxim-ic.com/appnotes.cfm/appnote_number/131.)
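One way to picture the comparison against pre-stored "Full" and "Empty" values is the sketch below: interpolate the stored characteristics for the present temperature, then scale the coulomb count against that usable window. The table entries are invented for illustration, not measured cell data.

```python
# Sketch of temperature-aware gauging: interpolate stored "Full" and
# "Empty" lines for the present temperature, then report state of
# charge over that usable window. Table values are invented.
import bisect

TEMPS_C   = [-20, 0, 25, 45]
FULL_MAH  = [1700, 1850, 2000, 2010]   # "Full" line vs. temperature
EMPTY_MAH = [ 400,  250,  100,   80]   # "High Current Empty" (1-C) line

def interp(x, xs, ys):
    i = max(1, min(bisect.bisect_left(xs, x), len(xs) - 1))
    x0, x1, y0, y1 = xs[i-1], xs[i], ys[i-1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def usable_soc(accum_mah: float, temp_c: float) -> float:
    full = interp(temp_c, TEMPS_C, FULL_MAH)
    empty = interp(temp_c, TEMPS_C, EMPTY_MAH)
    soc = (accum_mah - empty) / (full - empty)
    return min(max(soc, 0.0), 1.0)

print(round(usable_soc(1050.0, 25.0), 2))   # 0.5: midway through the usable window
```

Aging would shift only the FULL_MAH table downward as cycles accumulate, which is why cycle counting enters the algorithm.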
Recently, Texas Instruments introduced a new level of gas-gauging sophistication that it calls impedance tracking. To simplify the concept a little, another way to look at the phenomenon in Figure 2 is to consider the discharge curves of the battery in terms of voltage versus time for different temperatures (Fig. 3).
Texas Instruments' engineers noted that "The key variable in discharge capacity variation is the internal impedance of the battery cells, which shifts the discharge curve by the IR drop." TI's impedance-based fuel-gauge chips incorporate the measured impedance of the battery's cells in their capacity prediction algorithms, measuring and storing the battery pack's resistance as a function of state-of-charge in real time. These real-time resistance profiles, along with stored tables of battery open-circuit voltage versus state-of-charge, are used to predict the battery pack's discharge curve under any conditions of system-use and temperature.
In practice, the TI algorithms use coulomb counting when the system is on and open-circuit voltage measurement when the system is off or in sleep mode to adjust remaining state-of-charge (RSOC) as appropriate. This provides extremely realistic predictions of remaining battery life.
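The core idea can be sketched as follows: model the terminal voltage as the open-circuit voltage at a given state of charge minus the I·R drop from the tracked cell resistance, then find where the discharge curve crosses the cutoff for any load. The OCV table and resistance are invented single-cell Li-ion values, not TI's actual profiles.

```python
# Rough sketch of the impedance-tracking idea: terminal voltage is
# modelled as OCV(SOC) minus the I*R drop, so the gauge can predict
# where discharge hits the cutoff for any load. Values are invented.
SOC_PTS = [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
OCV_V   = [3.0, 3.5, 3.6, 3.7, 3.8, 4.0, 4.2]   # open-circuit voltage table
R_OHMS  = 0.15                                   # tracked cell resistance

def ocv(soc: float) -> float:
    for i in range(1, len(SOC_PTS)):
        if soc <= SOC_PTS[i]:
            f = (soc - SOC_PTS[i-1]) / (SOC_PTS[i] - SOC_PTS[i-1])
            return OCV_V[i-1] + f * (OCV_V[i] - OCV_V[i-1])
    return OCV_V[-1]

def empty_soc(load_a: float, cutoff_v: float = 3.0) -> float:
    """SOC at which OCV - I*R first drops to the cutoff for this load."""
    soc = 1.0
    while soc > 0 and ocv(soc) - load_a * R_OHMS > cutoff_v:
        soc -= 0.01
    return max(soc, 0.0)

# A heavier load hits the cutoff earlier, shrinking usable capacity:
print(empty_soc(0.2) < empty_soc(2.0))   # True
```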
One key side benefit of impedance tracking's real-time updating of actual state of charge is that it allows the battery gas gauge to reside on the system board, rather than on the battery pack. That means one gas-gauge chip per system, rather than one per battery pack, can be used to account for end-user battery swaps.
According to TI, if the battery pack remains in the system, an impedance-tracking gas-gauge chip uses the created cell profile as the basis for its fuel gauging. When the system's battery is removed and re-inserted, or replaced, the chip's arbitration algorithm compares the measured characteristics of the inserted battery pack with the default profiles and the previously created cell profiles, and chooses the profile that best matches the characteristics of the battery pack.
Battery-charger chips range from autonomous drop-in ICs to highly programmable devices. With so many contemporary chargers hiding their algorithms on-chip, it's hard to find resources for explaining what actually goes on in battery charging. Happily, a Maxim Integrated Products applications note (www.maxim-ic.com/appnotes.cfm/an_pk/680) from 2000 explains basic battery charging using the state diagram in Figure 4. Here's an edited and condensed version of what Maxim had to say:
The state machine starts even before the battery is connected, with the charger initializing itself and performing a self-test, including checking whether a battery is present at its output. The point of the test is to catch events in which the charging process has been interrupted, perhaps by a user unplugging the charger before charging is finished.
Actual charging begins with cell qualification, a state in which the charger detects the installation of the battery and determines whether it can be charged. The charger may look for a voltage on its charging terminals or look for the external jumper or thermistor that's present in some battery packs. If an authentication scheme is used, the charger will determine whether the battery is an approved type. The next state, qualification, determines whether the cell is functional. The charger checks for opens, shorts, and (sometimes) temperature.
After qualification, the next state for some batteries, notably nickel-cadmiums (NiCds), is preconditioning, essentially discharging the battery output down to a level of 1 V over the course of several hours. This is done to help prevent the NiCd's notorious (and misnamed) "memory effect." It's not necessary for more modern battery chemistries.
What happens in the fast-charge state depends on battery chemistry. Broadly speaking, for fast-charging NiCd and nickel-metal-hydride (NiMH) batteries, the charger applies a constant current while monitoring battery voltage and other variables to determine when to terminate the charge. The most common fast-charge rate is C/2, so a totally discharged battery with a 2-Ah rating would be fast-charged at 1 A for approximately two hours.
Of course, without preconditioning, a battery may be connected to a charger at any state of charge. Therefore, the charger must be able to avoid overcharging. When a constant charging current is applied in NiCd and NiMH batteries, the cell voltage rises slowly and eventually peaks. NiMH charging should stop when dV/dt hits zero. NiCd charging should stop when dV/dt inflects downward. Chargers that use faster charging rates than C/2 monitor temperature as well as voltage and terminate fast charging based on the rate of increase in cell temperature.
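The two slope-based termination rules can be sketched like this. The sample values, window length, and function names are illustrative assumptions; real chargers filter the ADC samples and apply chemistry-specific thresholds rather than a bare sign test.

```python
# Sketch of voltage-slope termination for nickel chemistries: terminate
# NiMH at zero dV/dt (the voltage plateau) and NiCd only once dV/dt
# turns negative (the droop after the peak). Thresholds are illustrative.
def should_terminate(voltages_mv, chemistry: str, window: int = 3) -> bool:
    """voltages_mv: periodic pack-voltage samples taken during fast charge."""
    if len(voltages_mv) < window + 1:
        return False
    dv = voltages_mv[-1] - voltages_mv[-1 - window]   # slope over the window
    if chemistry == "NiMH":
        return dv <= 0          # stop at the plateau (zero dV/dt)
    if chemistry == "NiCd":
        return dv < 0           # stop only after the voltage turns down
    raise ValueError("unsupported chemistry")

rising = [1400, 1420, 1435, 1445, 1450, 1450, 1450, 1450]   # NiMH plateau
print(should_terminate(rising, "NiMH"))   # True: the slope has flattened
```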
Li-ion battery chargers need more precise control of charging voltage than nickel-chemistry chargers do, and their maximum charging rate is set by current limiting. They also add a top-off charging state after the battery reaches its nominal float voltage.
Different battery chemistries possess different self-discharge rates, which affect the trickle-charge state. Li-ion batteries self-discharge very slowly, and chargers designed for them rarely include trickle-charging. The Maxim note observes that "NiCds, however, can usually accept a C/16 trickle charge indefinitely. For NiMH cells, a safe continuous current is usually around C/50, but trickle charging for NiMH cells is not universally recommended."
As for those autonomous chargers (a market in which Maxim is a fierce competitor), designers may find their task considerably simplified from what it was when the app note was written. To play fair, I'll pick an example from Linear Technology's line card, the LTC4075 Li-ion charger, introduced last spring.
Figures 5a and 5b show a block diagram of the charger and the charging curves it produces. With this charger, programming is achieved via three package pins. The pins labeled IUSB and IDC use resistors to program the currents for USB and wall adapter voltage sources. A resistor on ITERM programs termination current threshold. The other pins provide status indications or accept input power.
This is not to say that all chargers must have this degree of autonomy. For a contrast, take a look at Summit Microelectronics' SMB137 (see "Where Are We?" p. 50).