Among other reasons, conferences such as the NIST/IEEE PES meeting in January are held so companies, the government, and academia can share concerns about the subtleties and downsides associated with activities that are generally perceived to be good, or at least benign.
At the plenary session of the Smart Grid conference in Gaithersburg, Md., G. Larry Clark, a principal engineer at Alabama Power Co. (APC) and a nationally recognized expert in distribution automation, pointed out an interesting conundrum with a graph of the average load on the APC distribution system over the course of 24 hours.
The graph was roughly sinusoidal, peaking around midday, with a smaller peak corresponding to dinner and TV's prime time, and with a nadir after midnight. One goal of the Smart Grid is to flatten that demand curve, allowing APC to spend less on imported electricity and to run its dirtier plants less often—surely a good thing.
But, Clark said, something else happens when the curve flattens. After years of study, APC has learned that it can safely run some of its substation transformers above their nameplate ratings for a portion of peak periods, letting them cool off-peak, without experiencing premature failures. The result has been considerable savings in capital equipment costs.
But what happens if Smart Grid load-leveling is successful and the demand curve flattens out? If the transformers are loaded heavily around the clock, will there still be an off-peak window in which to cool? The loading assumptions will need to be reexamined, new data collected, and extrapolations made. Fortunately, the Smart Grid will not be implemented overnight, so there is time for that. Also fortunately, engineers like Clark are already asking the "what-ifs."
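The trade-off Clark describes can be illustrated with a toy thermal model. The sketch below is not APC's method; it is a minimal first-order top-oil temperature model in the spirit of IEEE C57.91-style loading guides, with every parameter (ambient temperature, rated rise, time constant, cooling exponent, and the two load profiles) an illustrative assumption. It compares a "peaky" day with a brief overload and a long cool-off against a flattened day at a steady high load.

```python
import math

# All values below are illustrative assumptions, not utility data.
AMBIENT_C = 30.0      # assumed ambient temperature, degC
RATED_RISE_C = 55.0   # assumed top-oil rise at rated (1.0 pu) load, degC
TAU_H = 3.0           # assumed oil thermal time constant, hours
EXPONENT = 0.8        # assumed cooling exponent (self-cooled style)
DT_H = 0.25           # simulation step, hours

def simulate(load_pu_profile, start_rise_c=0.0):
    """Step a first-order thermal model through a per-unit load profile.

    At each step, the oil rise relaxes exponentially toward the
    steady-state rise implied by the current load.
    """
    rise = start_rise_c
    temps = []
    for load in load_pu_profile:
        target = RATED_RISE_C * load ** (2 * EXPONENT)
        rise += (target - rise) * (1 - math.exp(-DT_H / TAU_H))
        temps.append(AMBIENT_C + rise)
    return temps

steps = int(24 / DT_H)
# Peaky day: 1.2 pu overload roughly 11:00-14:00, light load otherwise.
peaky = [1.2 if 44 <= i <= 56 else 0.6 for i in range(steps)]
# Flattened day: a constant 0.9 pu with no off-peak cooling window.
flat = [0.9] * steps

peaky_temps = simulate(peaky)
flat_temps = simulate(flat)
```

With these assumed numbers, the peaky profile briefly runs hotter than the flat one, but it ends the day far cooler, while the flattened profile never gets its cool-off window. That never-cooling steady state is exactly the condition APC's derating data would need to be re-validated against.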