Grid unlock: simple fixes, better modelling are key to reliable energy supply
Two presentations at a recent conference point to a more efficient, less expensive power supply. Richard A Lovett reports.
Everyone who’s ever been inconvenienced by a tree limb falling across a power line or a drunk driver colliding with a telephone pole knows how easily the electrical power distribution system we all depend on can temporarily crash. Even something as mundane as a squirrel crawling into a utility company transformer can black out entire neighbourhoods for hours.
But those are just nuisances.
Several times a year, says Adilson Motter, a physicist at Northwestern University, Illinois, US, parts of the world are hit by power failures that are more than inconveniences.
In North America, the largest on record came in August 2003, when 55 million people in Canada and the northeastern US lost power for two days, with an estimated economic impact of between US$10 billion and US$100 billion.
And that’s by no means the world record. In 1999, 97 million people lost power in Brazil – including, according to reports, 60,000 who were trapped on the Rio de Janeiro subway. And in 2012, a whopping 620 million experienced lights-out in northern India.
After the 2003 North American outage, then-President George W. Bush vowed to fix the US power grid so that such an event would never happen again.
At a recent annual meeting of the American Physical Society, Motter and colleagues presented findings explaining why the President’s good intentions were doomed to failure.
A practical solution couldn’t be found, the researchers explained, because power grids are complex systems. In the US, for example, there are more than 100,000 interlinked transmission lines, with millions of individual components.
“And it’s a fact of life that no matter how well you design these components, some will fail,” Motter says.
What needs to be addressed, he adds, isn’t so much the individual components but the “cascading failures” in which a breakdown in one location takes down equipment in another, the effects spreading like rapidly falling dominoes until a significant fraction of the entire system has collapsed.
To address this, his team created a model of the entire North American network, which they can use to simulate hundreds of thousands of local failures in order to determine which ones, under which conditions, might trigger such cascades.
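The domino effect at the heart of such simulations can be captured in a toy sketch (illustrative only, with made-up lines and capacities; not the Northwestern team's actual model): when a line fails, its load is shared among neighbouring lines, and any line pushed past its capacity fails in turn.

```python
def simulate_cascade(loads, capacity, neighbours, initial_failure):
    """Toy cascading-failure model: a failed line's load is split
    among its surviving neighbours; overloaded lines fail in turn."""
    failed = {initial_failure}
    frontier = [initial_failure]
    loads = dict(loads)  # work on a copy
    while frontier:
        line = frontier.pop()
        live = [n for n in neighbours[line] if n not in failed]
        if not live:
            continue
        share = loads[line] / len(live)
        loads[line] = 0.0
        for n in live:
            loads[n] += share
            if loads[n] > capacity[n]:
                failed.add(n)
                frontier.append(n)
    return failed

# A three-line toy grid: line B has the least spare capacity.
loads = {"A": 0.9, "B": 0.5, "C": 0.5}
capacity = {"A": 1.0, "B": 0.8, "C": 1.0}
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
failed = simulate_cascade(loads, capacity, neighbours, "A")
# Losing line A overloads B, whose load then overloads C.
```

Give every line a comfortable margin of capacity, though, and the same initial failure stays local, which is the distinction the model is built to find.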
Most of the system, it turned out, isn’t particularly vulnerable.
But in some areas, a localised failure can trigger larger cascades, especially if several faults occur in close proximity, as might happen if the region is hit by extreme weather, such as a hurricane, tornado, or massive rainstorm.
The value of the model is that it can be used to show which of the 100,000-plus power lines are most vulnerable to cascading failures. That would allow resources to be devoted to upgrading them, so that even under such stresses, they wouldn’t fail.
But that’s just a start, because the upgrades themselves alter the system, shifting its weaknesses. “Other lines become vulnerable,” Motter says.
To cure that, it’s necessary to re-run the model to account for each new generation of fixes, fine-tuning the process until all vulnerabilities have been ferreted out.
“We call this ‘failure-based allocation of resources’,” Motter says.
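That fine-tuning loop can also be sketched in miniature (again with an assumed toy model, not the researchers' real code): simulate every single-line failure, upgrade whatever tripped in a cascade, and repeat until no single failure spreads.

```python
def cascade(loads, capacity, neighbours, start):
    """Toy overload propagation: a failed line's load is split among
    its surviving neighbours; anything pushed past capacity fails."""
    failed, frontier, loads = {start}, [start], dict(loads)
    while frontier:
        line = frontier.pop()
        live = [n for n in neighbours[line] if n not in failed]
        if not live:
            continue
        share = loads[line] / len(live)
        loads[line] = 0.0
        for n in live:
            loads[n] += share
            if loads[n] > capacity[n]:
                failed.add(n)
                frontier.append(n)
    return failed

def harden(loads, capacity, neighbours, boost=1.5):
    """Boost the capacity of lines that trip in cascades, re-running
    the simulation after each round of fixes until none remain."""
    capacity = dict(capacity)
    while True:
        tripped = set()
        for start in capacity:
            tripped |= cascade(loads, capacity, neighbours, start) - {start}
        if not tripped:          # no single failure cascades any more
            return capacity
        for line in tripped:
            capacity[line] *= boost

loads = {"A": 0.9, "B": 0.5, "C": 0.5}
capacity = {"A": 1.0, "B": 0.8, "C": 1.0}
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
hardened = harden(loads, capacity, neighbours)
```

After hardening, any single line can still fail, but in this toy grid its failure no longer takes anything else down with it.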
The challenge of doing all of this, he adds, is increased by the move toward renewable energy, which causes power generation to be more intermittent, driven by the vagaries of wind and weather.
“You now have a new layer of disturbances that is not easy to predict,” he notes.
That said, he is careful to state that this is a challenge for upgrading the power grid, not a failure of green power.
“For every problem, there is a solution,” he says.
For example, the traditional approach to power generation is to adjust energy production to meet demand. But when you have fluctuating production, it’s possible to reverse the paradigm and work to adjust consumption — something that could be done by creating economic incentives such as real-time power pricing.
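One hypothetical way such incentives could work, in a toy model with assumed numbers: price each hour of the day according to how tight supply is, then let the flexible share of demand migrate toward the cheap hours.

```python
# Sketch of the reversed paradigm (all numbers are illustrative
# assumptions): prices rise when demand outstrips supply, and the
# flexible share of demand shifts toward cheaper hours.

def realtime_prices(supply, demand, base=0.10, sensitivity=0.5):
    """Price each hour; the price rises when demand exceeds supply."""
    return [base * (1 + sensitivity * max(0.0, d - s) / s)
            for s, d in zip(supply, demand)]

def shift_flexible_demand(demand, prices, flexible_fraction=0.2):
    """Reallocate the flexible share of demand inversely to price."""
    firm = [d * (1 - flexible_fraction) for d in demand]
    pool = sum(demand) - sum(firm)        # total flexible load
    weights = [1.0 / p for p in prices]
    total = sum(weights)
    return [f + pool * w / total for f, w in zip(firm, weights)]

supply = [100.0, 60.0, 100.0]   # renewable output dips in hour 1
demand = [80.0, 90.0, 80.0]
prices = realtime_prices(supply, demand)
shifted = shift_flexible_demand(demand, prices)
```

Total consumption is unchanged; it has simply moved away from the hour when the wind died down, easing the strain on generation.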
In another presentation at the same meeting, Amory Lovins, cofounder of the Rocky Mountain Institute, in Colorado, US, suggested that another way to relieve stress on the grid might be by reducing power demand via “radical energy efficiency by design”.
Not that he was directly addressing Motter’s concerns. Rather, he was looking at ways to reduce carbon emissions by reducing overall energy usage, of which, he says, an estimated 85% could be avoided with known, practical methods.
For example, pipes and air ducts are more efficient – with vastly less energy-sapping friction – if they are “fat, short, and straight”, rather than “skinny, long, and crooked”.
Pipefitters tend to go for the latter, he says, “but they’re not paying for the pumping equipment or the electric bill. If all pipes and ducts had 80 to 90% less friction, they could save half the world’s coal-fired electricity.”
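Lovins' rule of thumb follows from basic pipe physics: in the Darcy–Weisbach relation, friction losses grow with a pipe's length and, for a fixed flow rate, with the inverse fifth power of its diameter. A back-of-envelope comparison (illustrative numbers, friction factor held constant for simplicity) shows what a "skinny, long" pipe costs over a "fat, short" one:

```python
import math

def pumping_power(flow, length, diameter, friction=0.02, density=1000.0):
    """Hydraulic power (W) lost to friction when pushing a volumetric
    flow (m^3/s) of water through a pipe, via Darcy-Weisbach."""
    area = math.pi * diameter**2 / 4
    velocity = flow / area
    pressure_drop = friction * (length / diameter) * density * velocity**2 / 2
    return flow * pressure_drop

# Same flow of water, two pipe layouts (assumed example dimensions):
fat_short = pumping_power(flow=0.01, length=5.0, diameter=0.10)
skinny_long = pumping_power(flow=0.01, length=15.0, diameter=0.05)
```

For these numbers the skinny, long pipe demands 96 times the pumping power: tripling the length and halving the diameter multiplies the losses by 3 × 2⁵.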
And that’s just one of many simple ways, he explains, in which we can save far more energy than we realise, simply by improved planning.
“This is a design method, not a [new] technology,” he says.
“Across most energy uses in all sectors of the economy, basic physics principles offer astonishing opportunities to make the world richer, fairer, cooler, cleaner, healthier, and safer.”
It’s an approach, he argues, that can greatly shrink not only carbon emissions, but also a country’s entire “energy appetite” – which, by implication, would also reduce the power-grid stresses addressed by Motter.