Electronic Design
Conferences Highlight Smart Grid Opportunities

Design opportunities will emerge as North America, Europe, and Asia implement their smart grids. In searching for these opportunities, it’s helpful to dig deeply into the papers and talks at technical symposia to gain a deeper understanding of the challenges that have already been uncovered.

The term “smart grid” refers to various efforts around the world to create national and trans-national electric-power networks consistent with 21st century technology. Goals are different in different places, though.

China has a blank slate and is building from the ground up, more or less. In contrast, the United States has a mixed infrastructure that dates back to the days of Edison and Steinmetz in urban areas, to Franklin Roosevelt’s administration in rural areas, and to relatively recent times in the suburban fringe. Despite good intentions and the best piecemeal efforts, much of it is old and brittle (see “Hey, The Lights Went Out!”).

Beyond blackout vulnerability, the demand for generating capacity keeps growing. But a variety of well-intentioned special interests makes it next to impossible to site new power plants or run transmission lines.

Canada, with a population concentrated along the 49th parallel, is influenced by the U.S. Yet it has more relatively untapped natural resources that, for better or worse, might be traded in raw form or as electricity with its southern partner.

The European Union is in a similar situation to the U.S., but its post-WWII physical plant is newer. And even though they’re “greener” than the U.S., Europeans are more comfortable with nuclear power, thanks mostly to the French, who are selling surplus power to Italy and the United Kingdom.

Thus, the U.S. has two main goals. First, it needs to make the electricity supply more blackout-proof. Second, it intends to deal with the difficulty of siting new power plants and transmission lines through “load-leveling,” which means making the demands on the power-generation and distribution system more constant across the 24 hours of the day.

In practice, load leveling would mean more distributed generation, which could extend down to the fine-grain level of storing energy in electric car batteries at off-peak hours and recovering it at times of higher demand. However, it would more practically mean encouraging new industrial solar, wind, and fuel-cell capacity, with battery storage, to be used to augment fossil-fueled power when necessary.

This enhances the prospect of “microgrids.” In concept, these isolatable, small-scale power-generation systems can interface with the larger grid, but maintain themselves in the event of external failures.

Many smart grid initiatives take things down to the level of individual citizens, who will be incentivized to manage their demand by dynamically adjusted electricity rates. Those rates may be downloaded to their electric meters via RF or via power-line communications (PLC), a slow frequency-shift keying (FSK) technology whose low-frequency carriers can pass through power transformers.


At homes and businesses alike, the meter (or some other kind of repeater) would communicate the pricing information to “smart” appliances or thermostats, allowing them to incorporate dynamically changing electricity rates in their control loops.
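In code, the control loop of such a price-aware appliance could look something like the minimal Python sketch below. The rate threshold, setpoints, deadband, and setback value are invented for illustration; they are not drawn from any published meter or thermostat specification.

```python
# Hypothetical price-responsive heating thermostat. When the downloaded
# electricity rate exceeds a threshold, the effective setpoint is lowered,
# deferring demand to cheaper hours. All numbers are illustrative.

def thermostat_command(temp_f, rate_cents_kwh, setpoint_f=70.0,
                       deadband_f=2.0, high_rate_cents=25.0, setback_f=4.0):
    """Return 'HEAT_ON', 'HEAT_OFF', or 'HOLD' for a heating thermostat
    that folds a dynamically priced rate into its decision."""
    # Lower the target temperature during expensive hours
    effective = setpoint_f - (setback_f if rate_cents_kwh > high_rate_cents else 0.0)
    if temp_f < effective - deadband_f:
        return "HEAT_ON"
    if temp_f > effective + deadband_f:
        return "HEAT_OFF"
    return "HOLD"
```

At 65°F and a cheap 10¢/kWh rate the heat turns on; at the same temperature but 30¢/kWh the lowered setpoint keeps it off.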

In North America, cents per kilowatt-hour of electricity would be determined, at least in part, by the free market, based on Enron-style trading on the New York and Chicago mercantile exchanges. Energy traders would base their decisions on fluctuations in demand and availability reported in real time by the generators of electricity and the country’s transmission and distribution (“T and D”) organizations.

Interestingly enough, local power companies are generating the electricity less and less these days. These organizations have been divesting themselves of actual generation facilities.

These changes won’t happen overnight, and there will be lots of tweaking along the way. The National Institute of Standards and Technology (NIST) and the IEEE are involved in the U.S. to ensure interoperability. Beyond interoperability, liability is another key standards driver, since the first line of defense for power-company lawyers is strict adherence to industry standards.

For more information, the seminal document laying out the rationales for the Smart Grid remains “NIST Special Publication 1108, NIST Framework and Roadmap for Smart Grid Interoperability Standards,” available at www.nist.gov/public_affairs/releases/upload/smartgrid_interoperability_final.pdf. It runs 145 pages and identifies stakeholders, problems, and tasks.

In January 2010, the IEEE Power and Energy Society (PES) and Communications Society held a conference addressing the Smart Grid at NIST in Gaithersburg, Md. The event’s proceedings are available (for a fee for non-members) at ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=5619950.

During his talk at the conference’s plenary session, G. Larry Clark, a principal engineer at Alabama Power Co. (APC) and a nationally recognized expert in distribution automation, raised a practical question about load leveling that will take considerable monitoring and statistical analysis over the next decade or two as the Smart Grid is implemented.

As long as there was a diurnal cycle in customer demand, APC’s accumulated data demonstrated that substation transformers could be safely operated at peak loads greater than their nameplate ratings if their time operating at that level were limited and there was plenty of time where lower load levels allowed the transformers to cool down.
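Clark’s point can be illustrated with a toy first-order thermal model of the kind that underlies transformer loading guides (IEEE C57.91 uses a more elaborate version): the hot-spot temperature rise approaches a load-dependent ultimate value exponentially, so short excursions above nameplate are survivable only if cool-down intervals follow. Every parameter below is illustrative, not APC data.

```python
import math

def hot_spot_rise(load_pu, hours, initial_rise_c=30.0,
                  rated_rise_c=80.0, tau_h=3.0, exponent=1.6):
    """Toy first-order estimate of transformer hot-spot temperature rise
    (deg C above ambient) after `hours` at a constant per-unit load.

    The ultimate rise scales as load_pu**exponent; the actual rise
    approaches it exponentially with thermal time constant tau_h.
    """
    ultimate = rated_rise_c * load_pu ** exponent
    return ultimate + (initial_rise_c - ultimate) * math.exp(-hours / tau_h)
```

Running it shows why diurnal cycling matters: a 1.2-per-unit overload starting from a cool 30°C rise stays well below its roughly 107°C ultimate value for the first couple of hours, but creeps toward it if the overload persists.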

Clark pointed out that nobody can tell where the level will ultimately settle down in load leveling or precisely how “level” the leveled load will be. An equipment policy that was conservative under the status quo might lead to premature failures in the future. The only thing to do in the meantime would be to accumulate data and extrapolate trends.

It turns out that making accurate and precise measurements on multi-phase transmission lines requires some specialized silicon. That’s because making instantaneous measurements on any single phase is relatively simple, but making measurements across multiple phases turns out to be much more subtle.

Continue on next page

This challenge wasn’t really solved until 1988, when Arun Phadke and James Thorp invented the phasor measurement unit (PMU) at Virginia Tech. A PMU synchronizes real-time phasor measurements across sections of the grid by means of GPS-derived time signals.

Charles Steinmetz first applied phasors to ac power, but it took 99 years and the implementation of a time-based global navigation system before a commercial product for measuring them simultaneously could be offered, according to Macrodyne.

Measuring phasors at a single point in a three-phase transmission system presents interesting challenges, right down to the silicon level. The analog-to-digital converter (ADC) used for the task needs precision and accuracy, says Martin Mason of Maxim Integrated Products. Also, he says, all of the voltage and current measurements must be made simultaneously. Multiplexing ADCs are out of the question.
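Stripped of the GPS time-tagging and the simultaneous-sampling hardware, the numerical core of a phasor measurement is a single-bin DFT over one cycle of waveform samples. A minimal Python sketch of that core:

```python
import cmath
import math

def estimate_phasor(samples, samples_per_cycle):
    """Estimate the fundamental-frequency phasor from exactly one cycle
    of waveform samples using a single-bin DFT.

    Returns (rms_magnitude, angle_radians) — the textbook heart of what
    a PMU computes, minus the GPS-synchronized time tag.
    """
    n = samples_per_cycle
    # Correlate the samples against the fundamental-frequency exponential
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    phasor = acc * math.sqrt(2) / n  # scale to RMS phasor convention
    return abs(phasor), cmath.phase(phasor)
```

For a 10-V-peak cosine sampled 32 times per cycle, this returns the expected 7.07-V RMS magnitude at zero angle; shifting the waveform shifts the reported angle by the same amount.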

Among other things, Mason is responsible for Maxim’s power-line monitoring chips (Fig. 1), including the MAX11046, an eight-channel, simultaneous-sampling, 16-bit ADC that samples at up to 250 ksamples/s per channel. Also, see Analog Devices’ six-channel ADE7764 for similar applications.

Following the IEEE Power and Energy Society’s January conference at NIST, the October NIST/Communications Society conference further demonstrated that not every challenge facing the Smart Grid will be solved in silicon. Solutions to information-theory conundrums are largely based on mathematics, and there was a lot of that at the conference.

One paper showed how sometimes mathematical problems can be attacked by appropriate scaling as well as how the path to rigorous solutions can be elucidated by fairly low-budget experimentation.

To whatever extent the distributed-generation aspect of the Smart Grid relies on electric-vehicle batteries, the widespread use of electric cars depends on the ease with which depleted batteries can be recharged or replaced. Questions then arise about what ratio of vehicles to charging stations is optimum, how to rank customer priorities at charging stations (or how to direct customers to less busy charging stations that are still within range), and so forth.

Clark Hochgraf, Rahul Tripathi, and Spencer Herzberg of the Rochester Institute of Technology (RIT) implemented a practical experiment based on simple SMS text messaging and GPS locating. Stated that baldly, it sounds trivial. But the rigor of the analysis and the cleverness of the investigation won the authors a spot among the 1002 papers at the conference.

The object was to create a low-cost system for managing and monitoring an actual PEV’s (plug-in electric vehicle) charging circuit and helping it optimize the vehicle’s route with an eye to keeping in range of charging stations. The hardware the team developed involved a GSM modem with a built-in GPS that also included a Python interpreter that could run custom applications.

Code running on the interpreter made it possible to parse coded SMS messages and send commands to the charging circuit. SMS messages from the module also made it possible to obtain the PEV’s GPS location, charging state, and battery level on command.
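The paper’s exact message protocol isn’t reproduced here, but a coded-SMS parser of the kind described might look like the Python sketch below. The “KEY:VALUE;KEY:VALUE” format and the command names are assumptions for illustration, not the RIT team’s actual scheme.

```python
# Hypothetical parser and dispatcher for coded SMS charging commands.
# Message format and command names are invented for illustration.

def parse_sms(text):
    """Parse a coded SMS like 'SET_CHARGE:ON;RATE_KW:3.3' into a dict."""
    fields = {}
    for part in text.strip().split(";"):
        if ":" in part:
            key, value = part.split(":", 1)
            fields[key.strip().upper()] = value.strip()
    return fields

def dispatch(fields, charger):
    """Apply parsed fields to a charger object exposing enable(on) and
    set_rate_kw(kw) methods."""
    if "SET_CHARGE" in fields:
        charger.enable(fields["SET_CHARGE"] == "ON")
    if "RATE_KW" in fields:
        charger.set_rate_kw(float(fields["RATE_KW"]))
```

A keyword-value format like this keeps each command well inside a single 160-character SMS, which is what makes the approach so cheap to deploy.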


This basic capability allowed the team to build a virtual infrastructure in which the electric utility could control the charging of multiple PEVs. It also anticipated a scenario in which the utility would charge a customer a lower rate per kilowatt-hour in return for the ability to regulate charging, with override possibilities.

Furthermore, the team built user interfaces around these fundamental abilities to display information about the PEV’s location (Fig. 2), charge state, and battery level. A database accessible through the modem housed information about the location and status of charging stations reachable by the PEV, given its battery’s current state of charge. This data is used in the routing algorithm.
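The reachability filter at the heart of such a routing algorithm can be sketched in a few lines of Python. The station record layout and the flat energy-per-kilometer range model below are our assumptions; the RIT paper’s database schema is not public.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reachable_stations(veh_lat, veh_lon, soc_kwh, kwh_per_km, stations):
    """Filter (name, lat, lon) station tuples down to those within the
    PEV's remaining range, sorted nearest first. The flat kwh_per_km
    consumption model is a deliberate simplification."""
    range_km = soc_kwh / kwh_per_km
    out = []
    for name, lat, lon in stations:
        d = haversine_km(veh_lat, veh_lon, lat, lon)
        if d <= range_km:
            out.append((name, d))
    return sorted(out, key=lambda pair: pair[1])
```

With 5 kWh remaining and a 0.2 kWh/km consumption figure, only stations within 25 km survive the filter, which is the list the driver-facing display would draw from.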

By having all of this information, the utility can monitor and control demand so it doesn’t exceed available capacity. For drivers, knowledge (and a good graphical presentation of it) is the key to choosing a route and a time to stop for a recharge before the actual state of charge becomes critical.

Still, is it realistic to expect a system based on texting and GPS to scale to what’s needed for the national Smart Grid? As with APC’s concerns with transformer ratings and load leveling, it’s necessary to remember that the evolution of the Smart Grid is going to take decades and must proceed incrementally.

As for the short term, the RIT team offers some important points in its paper. “Smart Grid communication between the charger and the utility allows the ISO/RTO to reduce peak demand while still serving the electric vehicle charging load, albeit with a delayed completion time for full charging,” the paper says. “Controlling when the vehicle is charged, or at what rate it is charged (kW), may allow for introduction of electric vehicles without requiring additional generation capacity.”

Going further, the team believes that PEVs are unlikely to achieve market penetrations beyond the low single digits until sometime after 2025.

Within that context, the RIT demonstration system supports the Society of Automotive Engineers (SAE) recommended practices for Communication between Plug-in Vehicles and the Utility Grid (SAE J2847/1) and the IEEE’s Guide For Monitoring, Information Exchange, and Control of Distributed Resources Interconnected with Electric Power Systems (IEEE 1547.3) in their present form. Both are still in the hands of their respective working groups.

Other papers related to electric vehicles at the October conference were presented by IBM’s Zurich Research Laboratory (“Architecture and Communication of an Electrical Vehicle Virtual Power Plant”), Accenture (“Assessment Framework of Plug-In Electric Vehicle Strategies”), Siemens and the University of Passau (“Interconnections and Communications of Electric Vehicles and the Smart Grid”), and Los Alamos National Laboratory (“Locating PHEV Exchange Stations in V2G”).

Somewhat along the lines of the RIT paper, Konstantin Turitsyn, et al., of Los Alamos National Laboratory presented “Robust Broadcast-Communication Control of Electric Vehicle Charging,” during the “Control and Communication” session.

These scientists were less focused on two-way communications than the RIT team, concentrating on control algorithms based on randomized EV charging start times and simple one-way broadcast communication, allowing for a time delay between communication events. Using arguments from queuing theory and statistical analysis, they sought to maximize the utilization of excess distribution circuit capacity without causing a circuit overload.
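The randomized-start idea can be sanity-checked with a small Monte-Carlo sketch: give each vehicle a uniformly random start time inside an overnight window and measure the resulting peak aggregate demand. The power level, window, and charge duration below are illustrative assumptions, not figures from the Los Alamos paper.

```python
import random

def simulate_peak_kw(n_vehicles, charge_kw=3.3, window_h=8.0,
                     charge_h=2.0, steps=480, seed=0):
    """Monte-Carlo sketch of randomized EV charging start times.

    Each vehicle picks a uniform random start so its charge finishes
    inside the window; returns the peak aggregate demand in kW seen
    over the window. All parameters are illustrative.
    """
    rng = random.Random(seed)
    starts = [rng.uniform(0.0, window_h - charge_h) for _ in range(n_vehicles)]
    dt = window_h / steps
    peak = 0.0
    for i in range(steps):
        t = i * dt
        # Sum the draw of every vehicle currently charging at time t
        load = sum(charge_kw for s in starts if s <= t < s + charge_h)
        peak = max(peak, load)
    return peak
```

For 100 vehicles the simulated peak comes in well under the 330 kW that simultaneous charging would demand, which is the whole point of the randomization.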


One example of unexpected design opportunities at the October conference relates to ways to obscure and encrypt usage data before smart meters transmit it to the electric company. The scheme involves “load signatures,” big batteries, chargers, and inverters, and its history includes a campaign against smart meters by anxious Dutch householders.

Dutch citizens had revolted against smart meters, but not for the same reasons the meters face opposition in the U.S., such as surprisingly high bills and interference with baby monitors. According to a paper titled “Privacy for Smart Meters: Towards Undetectable Appliance Load Signatures,” presented by Georgios Kalogridis of Toshiba’s Telecommunications Research Laboratory in the U.K., who spoke in the session on “False Data Injection and Privacy,” the problem in Holland lay in the fine granularity of meter readings.

Typically, ordinary meters are read once per billing cycle. But smart meters designed to monitor demand in real time are read often enough to reveal a daily pattern of electricity use. Burglars who gain access to those patterns could identify homes that are likely targets for felonious visits when the owner is away—homes where the oven doesn’t warm up until after 4 p.m., for example.

At the crudest level, Toshiba proposed adding a substantial battery and associated inverter electronics to each home to smooth out the fine-grain detail in those readings.

Of course, the solution couldn’t be as simple as that. The purpose of fine-grain sampling of electricity use is to provide a more or less real-time picture of demand so operators can decide when to bring generating capacity online or shut it down, when distribution routes can be adjusted, when pricing can be reset, and so forth. If that information is muddied, what’s the point?

As the Toshiba paper makes plain, the battery scheme simply makes it easier to secure the data while still delivering the information it represents in a form that can only be used to manage capacity, routing, and other functions. The first step, obviously, would be to amalgamate the fine-grain information from all the homes in a neighborhood before sending it to the electricity supplier over the backhaul.

However, that leaves a big back door open for hackers. To close it, the Toshiba researchers propose a device called a load-signature moderator (LSM) that works with appliances, the battery (standalone or in a vehicle), and the charging station/grid-tied inverter. Devices that work with the LSM can be large or small.

“An example would be a kettle drawing 2 kW of power when switched on; the power router could be configured so that 1 kW is supplied from a solar panel, 0.5 kW from a battery, and 0.5 kW from the mains electricity supply,” the paper says.

“The main role of LSM is to ‘detect a privacy threat’ and respond by ‘configuring power routing.’ The LSM may detect a threat after it identifies a power consumption event (within the home),” the paper continues. “This could be a power trigger generated by a consumer or supplier, such as a change in power consumption (e.g. appliance switch-on/off event).”

The paper describes multiple ways to accomplish this. For example, one “idea is to resist (to the degree possible) against power load changes, i.e. to maintain a constant metered load as such. The algorithm will force the battery to either discharge or recharge when the required load is either larger or smaller (respectively) than the previously metered load. The power and duration of battery charging/discharging is configured to equal the power differences, unless battery bounds are reached.” After that description, the paper goes into a more detailed cryptographic analysis.
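The quoted algorithm reduces to a clipping rule: each interval, the battery covers the difference between actual demand and the previously metered load, limited by its power rating and its state of charge. A Python sketch of one such step follows; the variable names and limits are ours, not the paper’s.

```python
def battery_response(metered_prev_kw, demand_kw, soc_kwh, dt_h,
                     capacity_kwh, max_kw):
    """One step of a constant-metered-load battery controller.

    The battery discharges (batt_kw > 0) or charges (batt_kw < 0) to
    offset the change in demand, clipped to its power rating and to the
    energy available or headroom remaining in the interval dt_h.
    Returns (batt_kw, new_soc_kwh, metered_kw).
    """
    want_kw = demand_kw - metered_prev_kw        # difference battery should cover
    batt_kw = max(-max_kw, min(max_kw, want_kw))  # clip to power rating
    if batt_kw > 0:
        batt_kw = min(batt_kw, soc_kwh / dt_h)                    # energy available
    else:
        batt_kw = max(batt_kw, -(capacity_kwh - soc_kwh) / dt_h)  # headroom to absorb
    new_soc = soc_kwh - batt_kw * dt_h
    metered_kw = demand_kw - batt_kw             # what the smart meter sees
    return batt_kw, new_soc, metered_kw
```

When an appliance switches on and demand jumps from 1 kW to 3 kW, a healthy battery supplies the extra 2 kW and the meter keeps reading 1 kW; once the battery runs into its power or energy bounds, the load change leaks through to the meter, exactly as the paper concedes.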
