Advanced SMRs: No Fuss, (Almost) No Muss
December 27, 2024
The potentially catastrophic condition a nuclear reactor can encounter is overheating that leads to a meltdown of the core. Conventional reactors need active human or automatic control intervention to prevent this, and that intervention can go wrong, as it did in the Three Mile Island accident. Small modular (nuclear) reactors (SMRs) are designed to share the trait of passive cooling: in an upset condition, the reactor cools down automatically, without intervention. SMR designs achieve this control differently, but all fall in the class of intrinsically safe, to use terminology from another discipline. This is the no fuss part.
The muss, which is harder to deal with, entails the acquisition and use of fissile nuclei (nuclei that can sustain a fission reaction), and then the disposition of the spent fuel. Civilian reactors use natural uranium enriched in fissile U-235 to as much as 20%. At concentrations greater than that, a bomb could theoretically be constructed. The most common variant, the pressurized water reactor (PWR), uses 3 to 6% enrichment. Sourcing enriched uranium is another issue: Russia currently supplies over 35% of the world's enriched uranium. The US invented the technology but imports most of its requirement.
In all PWRs and most other reactors, nearly 90% of the energy is still left unused in the spent fuel (fuel in which the active element is reduced to impractical concentrations), in the form of radioactive reaction products. Recycling could recover that value, but France is the only country doing so. The US prohibited it until a few decades ago, for fear that the plutonium produced could fall into the wrong hands. Geological storage is considered the preferred disposal method but runs into local opposition at the proposed sites, although an underground site in Finland is ready and open for business.
One class of reactors that defers the disposal problem, potentially for decades, is the breeder reactor. The concept is to convert a stable nucleus such as natural uranium (U-238) or relatively abundant thorium (Th-232) to fissile Pu-239 or U-233, respectively. The principal allure, beyond the low frequency of disposal, is that essentially all the mineral is utilized, without expensive enrichment. In both cases, the fuel being transported is more benign, in not being fissile. One variant uses spent fuel as the raw material for fission: the reactor is the recycling means.
At a recent CERAWeek event, Bill Gates drew attention to TerraPower, an SMR company that he founded. For the Natrium (Latin for sodium) offering, which combines the original TerraPower traveling wave reactor (TWR) technology with that of GE Hitachi, the coolant is liquid sodium (they are working on another concept, which will not be discussed here). Using molten metal as a coolant may appear strange, but the technical advantage is the high heat capacity. The efficacy of this means was proven as long ago as 1986, when all pumps, and the power, were shut down in the sodium-cooled Experimental Breeder Reactor-II at Idaho National Laboratory. Convection in the molten metal carried away the heat, and the reactor shut itself down safely within minutes. That reactor operated for 30 years, so that aspect of the technology is well proven. TerraPower’s 345 MWe Natrium reactor, which broke ground in Wyoming earlier in 2024, is not technically a breeder reactor, although it utilizes fast neutrons; this is helped by the coolant being Na, which slows neutrons down less than does water (the coolant in PWRs). Natrium uses uranium enriched to up to 19% as fuel.
Natrium has two additional distinguishing features. The thermal storage medium is a nitrate molten salt, a technology proven in applications such as solar thermal power, where providing power when the sun is not shining is an important attribute. For an SMR, the utility would be in pairing with intermittent renewables to fill the gaps. The business model appears to be to deliver firm power at a rated capacity of 345 MWe and use the storage feature to deliver as much as 500 MWe for over 5 hours. In general, the unit could be load following, meaning that it delivers in sync with the demand at any given time.
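For a sense of scale, here is a back-of-envelope on what that storage feature implies, in a minimal Python sketch. The 5.5-hour boost duration is an assumption consistent with the "over 5 hours" above; TerraPower's actual salt inventory is not public here.

```python
# Rough sizing of Natrium's thermal-storage boost, using the figures above.
rated_mwe = 345          # firm output
boost_mwe = 500          # peak output while discharging storage
boost_hours = 5.5        # assumed duration of the boost

storage_mwhe = (boost_mwe - rated_mwe) * boost_hours
print(f"Implied deliverable storage: {storage_mwhe:.0f} MWh(e)")
# -> roughly 850 MWh of electricity, held as heat in the nitrate molten salt
```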
The most distinctive feature of the Natrium design is that the nuclear portion and all else, including power generation, are physically separated on different “islands”. This is feasible in part because the design transfers heat from the molten sodium by non-contact means to the molten salt, which is then radiation free when pumped to the power generation island. The separation of nuclear and non-nuclear construction ought to result in reduced erection (and demobilization) time and cost. Sodium-cooled reactors are also inherently less costly because they operate at near-ambient pressure, and the reactor walls can be thinner than they would be for an equivalent PWR.
The separation of the power production from the reactor ought also to lend itself to the reactor being placed underground, less susceptible to mischief. This is especially feasible because fuel replacement ought not to be required for decades. This last is the (almost) no muss feature. Disclaimer: to my knowledge, TerraPower has not indicated that it will use an underground installation.
The “almost” qualifier in the “no muss” is in part because, while the fuel is benign for transport, the neutrons for reacting the U-238 are most easily created using some U-235. Think of it as the pilot light for a burner. Natrium uses uranium enriched to 16-19% U-235. However, as expected for a fast reactor, more of the charge is burnt; Natrium reportedly produces 72% less waste. The point stands that, their other attributes notwithstanding, SMRs do produce spent fuel for disposal, although less frequently in some concepts, especially breeders. That is the other reason for the “almost” qualifier.
As in all breeders, no matter what the starting fuel is, additional fuel could in principle be depleted uranium. This is the uranium left over after removal of the U-235, and it is very weakly radioactive. Nearly a ton of it was used in each of the old Boeing 747s as counterweights in the back-up stabilization systems. It was also used (probably still is) in anti-tank munitions, because the pyrophoricity of uranium causes a friction-induced fire inside the tank cabin after penetration. Apologies for the ghastly imagery, but war is hell.
Advanced SMRs could play an important role in decarbonization of the grid. My personal favorites are those that use thorium as fuel, such as the ThorCon variant being launched in Indonesia. Thorium is safe to transport, relatively abundant in countries such as India, and the fission products contain essentially no plutonium, thus avoiding the risk of nuclear weapon proliferation.
As with most targets of value, we must follow the principle of “all of the above”*.
Vikram Rao, December 26, 2024
*All together now, from All Together Now, by The Beatles (1969), written by Lennon-McCartney
Drill Baby Drill, Drill Hot Rocks
December 5, 2024
“Drill baby drill” is being bandied around, especially post-election, reflecting the views of the president-elect. Thing is, though, baby’s already been drilling up a storm. World oil consumption was at an all-time high in 2023, breaking the 100 million barrels per day (MMbpd) barrier. And the International Energy Agency (IEA) projects further demand growth, to about 106 MMbpd by 2028. The IEA also projects the US as the largest contributor to the supply, provided the sanctions on Russia and Iran continue.
Courtesy of the International Energy Agency
To execute the stated intent to stimulate US production, all that the new White House needs to do is not mess with the sanctions. For ideological reasons it may be tempted to open the Arctic National Wildlife Refuge to leases. But none of the majors will come, and not even the larger independents; there are easier pickings in shale oil and in wondrous new opportunities such as Guyana. Is it still a party if nobody comes?
Note in the figure above that the IEA projection has roughly the same slope as the pre-pandemic period, with a bit of a dip in the out years ascribed to electric vehicles. And if that were not enough, world coal consumption hit a historic annual high of 8.7 billion tonnes in 2023, even as Britain, where coal-fired power began, closed its last coal-fired plant this year. The largest increases were in Indonesia, India and China, in that order. Let me underline that: both oil and coal hit all-time highs in usage last year. So much for the great energy transition.
So, what gives? China and India, the two countries with the greatest uptick in coal usage, need energy for economic uplift, and for now that means coal, since they are net importers of oil and gas. Consider, though, that the same countries are numbers 1 and 3 in rate of adoption of solar energy. What this means is that solar and wind cannot scale fast enough to keep up with the demand. Making matters worse is the ever-increasing demand created by data centers.
One reason for not keeping up with demand is the land area required. Numbers vary by conditions, especially for wind, but solar energy needs about 5 acres per MW, while wind on flat land typically needs about 30 acres per MW. Compare that to a coal generating plant, which needs 0.7 acres per MW (without carbon capture). Wind also tends to be far from populated areas, so transmission lines are needed, and much wind energy is curtailed because those are not readily constructed. To add to the complication, both solar and wind plants have low capacity factors, under 40%: nameplate capacity is not achieved continuously, and augmentation is needed with batteries or other storage means. Finally, governments would like the communities with retired coal plants to benefit from the replacements. This is hard at many levels, not least the availability of land, because the area required is many times that occupied by the coal plant being replaced. All this holds back scale.
Geothermal Energy. Two types of firm (high capacity factor), carbon-free energy fit the bill in terms of land area: geothermal energy and small modular reactors. Here we will discuss just the former, which involves drilling wells into hot rock, pumping water in and recovering the hot fluid to drive turbines. Fervo Energy, in my opinion the leading enhanced geothermal systems (EGS) company (disclosure: I advise Fervo, and anything disclosed here is public information or my conjecture), has been approved for a 2 GW plant in Utah with a surface footprint of 633 acres. This calculates to about 0.3 acres per MW. The footprint of Sage Geosystems is similar. Sage also has an innovative variant that takes advantage of the poroelasticity of rock, and which could provide load-following backup storage for intermittency in solar and wind, thus enabling scale in a different way.
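To put the footprint claims above on one footing, divide each land requirement by a capacity factor to get acres per *average* delivered MW. A rough sketch; the footprints are from the text, but the capacity factors are my illustrative round numbers, not figures from the post:

```python
# Acres needed per MW of average (not nameplate) output:
# footprint divided by assumed capacity factor.
sources = {
    # name: (acres per nameplate MW, assumed capacity factor)
    "solar":      (5.0,  0.25),
    "wind":       (30.0, 0.35),
    "coal":       (0.7,  0.85),
    "geothermal": (633 / 2000, 0.90),  # Fervo: 633 acres for 2 GW
}
for name, (acres_per_mw, cf) in sources.items():
    print(f"{name:>10}: {acres_per_mw / cf:6.1f} acres per average MW")
# -> roughly 20 for solar and 86 for wind, versus under 1 for coal
#    and about 0.35 for the geothermal example
```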
Aside from the favorable footprint of Fervo emplacements (incidentally, the underground footprint is significant, because each of the over 300 wells is about a mile long), the technology is highly scalable for the following reasons. All unit operations are performed by oilfield personnel with no additional training, who are therefore readily available. Certainly, the technology is underpinned by unique modeling (developed in large part in the Stanford PhD thesis of a founder), but the key is that when oil and gas production eventually diminishes, the same personnel can be used here. In fact, an oil and gas company could hold geothermal assets in addition to its oil and gas ones, and simply mix and match personnel as dictated by demand.
The shale oil and gas industry found that when multiple wells were operated on “pads”, cost per well came down significantly. Those learnings would apply directly to EGS. Accordingly, I would expect EGS systems at scale to deliver carbon-free power, 24/7/365, at very favorable costs.
Governments and investors ought to take note that EGS variants are possibly the fastest means for economically displacing coal, and eventually oil. In the case of the latter, even that displacement does not eliminate jobs.
As the title revealed, the refrain now changes a bit to: Drill baby drill, drill hot rocks*.
Vikram Rao
* Lookin’ for some hot stuff, baby, in Hot Stuff by Donna Summer, 1979, Casablanca Records
AI Will Delay the Greening of Industry
October 31, 2024
Artificial Intelligence (AI), and its most recent avatar, generative AI, holds promise for industrial efficiency. Few will doubt that premise. How much, how soon, may well be debated. But not whether. In the midst of the euphoria, especially the exploding market cap of Nvidia, the computational linchpin, lurks an uncomfortable truth. Well, maybe not truth, but certainly a firmly supportable view: this development will delay the decarbonization of industries, especially clean energy alternatives such as hydrogen, and the so-called hard to abate commodities, steel and cement.
The basic argument is simple. Generative AI (Gen AI) is a power hog. The same query uses nearly 10 times the energy with Gen AI as with a conventional search. This presumption is premised upon an estimate made on an early ChatGPT model, wherein a query used 2.9 watt-hours versus 0.3 watt-hours on a Google search. The usage gets worse when images and video are involved. However, these numbers will improve, for both categories. The evidence: just over a decade ago, data centers were the concern in energy usage, and dire predictions were made regarding swamping of the electricity grid. Power consumption in data centers was 194 terawatt-hours (TWh) in 2010. In 2018 it was 205 TWh, a mere 6% increase, despite compute instances increasing by 550% (Masanet et al., 2020¹). The improvement was in both computing efficiency and power management.
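Those two data points imply a dramatic drop in energy per unit of compute, which is worth making explicit. A quick check of the arithmetic:

```python
# How much efficiency must have improved for a 550% rise in compute
# instances to cost only 6% more energy (Masanet et al., 2020).
energy_ratio  = 205 / 194   # 2018 vs 2010 consumption, TWh
compute_ratio = 1 + 5.50    # "increasing by 550%" -> 6.5x the instances

energy_per_instance = energy_ratio / compute_ratio
print(f"Energy per compute instance: {energy_per_instance:.2f}x of 2010")
# -> about 0.16x, i.e. an ~84% drop in energy per unit of compute
```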
More of that will certainly occur. Nvidia, the foremost chipmaker for these applications, claims dramatic reductions in energy use in forthcoming products. The US Department of Energy is encouraging the use of low-grade heat recovered from cooling the data centers. A point of clarification on terminology: the cloud has similar functionality to a data center. The difference is that data centers are often physically linked to an enterprise, whereas the cloud is in a remote location serving all comers. We use the terms interchangeably here. Low-grade heat is loosely defined as heat at temperatures too low for conventional power generation. However, it may be suitable for processes such as desalination with membrane distillation, and the regeneration of solvents used in direct air capture of carbon dioxide.
Impact on Decarbonizing Industry
The obvious positive impact will be on balancing the grid. The principal carbon-free sources of electricity are solar and wind. Each of these is highly variable in output, with capacity factors (the fraction of time the capital spends generating revenue) less than 30% and 40%, respectively. The gaps need filling, each gap filler with its own vagaries. AI will undoubtedly be highly influential in optimizing the usage from all sources.
The obverse side of that coin is the increasing demand for electricity by the data centers supporting AI of all flavors. The Virginia-based consulting company ICF predicts usage increasing by 9% annually from the present to 2028. Many data center owners have announced plans for all energy used to be carbon-free by 2030. Carbon-free electricity capacity additions are primarily in solar and wind, and each of these requires temporal gap filling. Longer duration gaps (over 10 hours) are dominantly filled by natural gas generators. A major effort is needed to enable the scaling of carbon-free gap fillers, the most viable of which are innovative storage systems (including hydrogen), advanced geothermal systems and small modular reactors (SMRs).
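Nine percent a year compounds quickly. Assuming the projection spans 2024 through 2028 (my reading of "the present to 2028"):

```python
# Compounding ICF's projected 9% annual growth over four years.
growth = 1.09 ** 4
print(f"Cumulative increase by 2028: {(growth - 1) * 100:.0f}%")  # ~41%
```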
The big players in cloud computing have recognized this. Google is enabling scaling by purchasing power from the leading geothermal player Fervo Energy and is doing the same with Hermes SMRs made by Kairos Power. An interesting twist on the latter is that the Hermes SMR is an advanced reactor of the class known as pebble bed reactors, using molten salt cooling (as opposed to the water in conventional commercial reactors). It uses a unique fuel contained in spheres known as pebbles; the reaction products are retained in the pebble by a hard coating. This is not the place to discuss the pros and cons of the TRISO fuel used, except to note that it utilizes high-assay low-enriched uranium (HALEU), much of which is currently imported from Russia. Google explicitly underlines that part of the motivation is to encourage scaling SMRs. This is exactly what is needed², especially for SMRs, whose promise of lower cost electricity is largely premised upon economies of mass production replacing economies of scale of the plant.
Microsoft has taken the unusual (in my view) step of contracting to take the full production from a planned recommissioning of the only functional reactor at the Three Mile Island (conventional) nuclear facility. In my view, conventional reactors are passé and the future is in SMRs. The most recent conventional ones commissioned are two in Georgia; the original budget more than doubled, and the plants were delayed by 7 years. Par for the course for nuclear plants. Microsoft is certainly aware of the importance of SMRs, in part because its founder Bill Gates is backing TerraPower, using an advanced-design breeder reactor with liquid sodium as the coolant and molten salt for storage. The “breeder” feature³ involves creation of fissile Pu-239 by neutrons from an initial core of enriched uranium colliding with U-238 in the surrounding depleted-uranium containment. The reactions are self-sustaining, requiring no additional enriched uranium, and the operation can be designed to run for over 50 years without intervention. Accordingly, it could be underground. The design has not yet been permitted by the NRC but holds exceptional promise because the fuel is essentially depleted uranium (ore from which fissile U has been extracted) and the issue of disposal of spent fuel is deferred for decades.
Impact on Green Hydrogen, Steel and Cement
Hydrogen has a lot of traction as a storage medium, alternative fuel, and feedstock for green ammonia. One of the principal sources of green hydrogen is electrolysis of water. But it is green only if the electricity used is carbon-free.
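How much the grid matters is easy to quantify. A minimal sketch, assuming a typical real-world electrolyzer requirement of about 50 kWh per kg of H2 (the theoretical minimum is about 39 kWh) and illustrative grid carbon intensities:

```python
# Carbon intensity of electrolytic hydrogen as a function of the grid.
kwh_per_kg_h2 = 50.0   # assumed electrolyzer energy requirement

for grid, kg_co2_per_kwh in [("fossil-heavy grid", 0.5),
                             ("US-average-like grid", 0.4),
                             ("carbon-free", 0.0)]:
    kg_co2_per_kg_h2 = kwh_per_kg_h2 * kg_co2_per_kwh
    print(f"{grid:>20}: {kg_co2_per_kg_h2:4.0f} kg CO2 per kg H2")
# On an average grid today (~20 kg/kg) the result is worse than the
# ~9 kg/kg from steam methane reforming; hence "green only if the
# electricity is carbon-free".
```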
Similarly, steel and cement are seeking to go green because collectively they represent about 18% of CO2 emissions. Cement is produced by calcining limestone, and each tonne of cement produced causes about a tonne of CO2 emissions, nearly half of that from the fuel used. Electrically heated kilns, using carbon-free electricity, are proposed. Similarly, each tonne of steel produced causes emission of nearly 2 tonnes of CO2. A leading means for reducing these emissions is the use of hydrogen, instead of coke, to reduce the iron oxide to iron. Again, the hydrogen would need to be green, and only high-grade iron ore is suitable, which is in short supply worldwide. A recent innovation drawing considerable investor interest is electrolytic iron production, which can use low-grade ore. But for the steel to be carbon-free, the electricity used must be as well.
The world is increasingly electrifying. It runs the gamut from electrification of transportation to cryptocurrency to decarbonization of industrial processes. All of these either require, or aspire to have, carbon-free sources of electricity. Now AI and its Gen AI variant are adding a heavy and increasing demand. Many of these uses share a common trait: the need for electricity 24/7/365. In recognition of the temporal variability of solar and wind sources, the big players are opting for firm carbon-free sources such as geothermal and SMRs. That is the good news, because they will enable scaling of a nascent industry. The not so good news for all the rest is that these folks have deep pockets and are tying up supply with contracts. It remains to be seen how a startup in green ammonia or steel will compete for carbon-free electricity.
AI could well push innovation in industrial decarbonization to non-electrolytic processes.
Vikram Rao
October 31, 2024
¹ Masanet, E. et al. (2020). Recalibrating global data center energy-use estimates. Science, 367(6481), 984–986.
²,³ Pages 53 and 12 of https://www.rti.org/rti-press-publication/carbon-free-power
TRADER JOE BIDEN
May 26, 2024
President Joe Biden is in the oil trading game. To date he has bought low and sold high, an enviable record. He has used the Strategic Petroleum Reserve (SPR) as a tool for stabilizing the oil price continually, not just in a supply crisis. His nuanced policy on minimizing Russian oil sale profits has not caused a supply-disruption-led oil price rise. In the last two years, transport fuel prices have been stable in the US, and domestic oil production has been high. Some folks think he is perpetuating fossil fuels in achieving the latter. Not so: shale oil wells are notoriously short lived. Other folks think he is taking an inspired gamble with our energy security. He is not. Abundant, accessible shale oil is our security. And conventional oil traders must be haters; riding volatility waves is their skill.
When President Biden authorized withdrawal of 180 million barrels (MM bbls) from the SPR in 2022, there were howls of anguish from many sides: the SPR was a reserve for emergencies, not the sitting president’s piggy bank; he was placing the country at risk; and so on. At the time I wrote a blog supporting the drawdown, which entailed 1 MM bbl per day of withdrawal for 180 days. My support was premised on the argument that the SPR was no longer necessary at the design level of 714 MM bbls. When it was conceived in 1973 (and executed in 1975), we were importing 6.2 MM bbl per day. In 2022 we were a net exporter by a small margin. But the story is better than that: we import heavily discounted heavy oil and export full-price light crude. Again, buying low, selling high.

The chart shows the SPR levels over the years. Note the plummet in 2022. In February 2024 the level was 361 MM bbls. This is ample, in part because much of domestic production is shale oil, and new wells can be brought on stream within a few weeks. Shale oil is, in effect, our strategic reserve. One argument against that assertion is that many of the operators are small independent producers, who are averse to taking risk on future pricing and may need inducements.
Biden’s Gambit
Enter President Biden into the quandary. He needs gasoline prices to remain affordable. But he also needs the shale oil drillers to keep at it for the nation to continue to enjoy North American self-sufficiency in oil (domestic production plus a friendly and inter-dependent partner, Canada). Gas is a horse of a different color. The US has gone from an importer of liquefied natural gas (LNG) to the largest exporter in the world in just 15 years. American LNG is key to reduced European reliance on Russian gas. How this is reconciled against renewable energy thrusts is a topic unto itself for another time.
He ordered the SPR release described above. The average price of the oil in the reserve was around USD 28 per bbl. He sold it at an average price of USD 95. All SPR oil is not the same quality, and depending on which tranches were sold, the selling price could have been less for any given lot. On average, not a shabby profit. Then in July last year, when the price was USD 67, he refilled the SPR some (see the small blip upward on the chart). In so doing he fulfilled a commitment made to drillers back in 2022 that he would buy back if the price dropped to the USD 67-72 range. Such purchases would, of course, have some impact on raising prices. The mere intent, taken together with the fact that the SPR had capacity to add 350 MM bbls, would give the market a measure of stability, a goal shared by OPEC, albeit at levels believed to be in the mid-80s.
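Using the round numbers above, the notional gain on the 2022 drawdown is easy to compute. This is purely illustrative; actual proceeds varied by tranche, and officially reported profit figures are calculated differently:

```python
# Back-of-envelope on the 2022 sale, using the round numbers above.
barrels_sold = 180e6
avg_cost, avg_sale = 28.0, 95.0   # USD per bbl, historical cost vs sale

gross_gain = barrels_sold * (avg_sale - avg_cost)
print(f"Notional gain on the drawdown: ${gross_gain / 1e9:.1f} billion")
# -> about $12 billion between the sale price and the historical cost
```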
The purchase in July 2023 was for about 35% of the amount he sold in 2022. The reported profit was USD 582 MM. According to Treasury, the 2022 sale caused a drop in gasoline price of USD 0.40 per gallon. In an election year. And the mid-term election went more blue than expected. Political motivations aside, the tactical use of the SPR to stabilize gasoline prices and at the same time keep the domestic industry vibrant is a valid weapon in any president’s arsenal. As noted earlier, an SPR at a third of the originally intended level is now adequate as a strategic reserve. Any fill above that level could be discretionary.
Biden’s gambit went a step further. Prices were declining in October 2023. Biden unveiled a standing offer to buy oil for the SPR at a price of USD 79, for up to 3 MM bbls a month, no matter the market price at the time. For the producer this was a hedge against lower prices. While in world consumption terms this was the proverbial drop in the bucket (uhh, barrel), the inducement worked: investment is reported to have tripled in the period following the offer.
Russian Oil
The Russian invasion of Ukraine prompted actions intended to reduce Russian income while not causing a rise in the world oil price. A combination of sanctions and price caps has certainly achieved the second goal. Russia was forced to sell oil through secondary channels, and India became a large buyer, initially at heavily discounted prices. India then refined the oil and sold into all markets, including the US and its allies. Blind eyes got turned. At first; now there are additional sanctions. As I noted before, US policy was nuanced. But world prices remained stable and US production thrived.
Trader Joe Biden has shown how deft buying and selling of oil can utilize the SPR to achieve national objectives while making a profit*, and in so doing not relinquish the strategic objective of a reserve against extraordinary supply shocks. Future presidents will take note.
* You’ve got to know when to hold ‘em, know when to fold ‘em, in The Gambler, by Kenny Rogers (1978), written by Don Schlitz.
Vikram Rao
May 26, 2024
Wolfpack Déjà vu?
April 1, 2024
The last time the NC State men’s basketball team slipped into the NCAA tournament, they made the selectors look like clairvoyants. The selection was roundly criticized. Until the Pack cut down the nets on April 4, 1983. Are we set for a Wolfpack déjà vu?
This time there was no argument about being included: winning the ACC tournament gave them the slot by rule. A strong belief was that, but for this, the team would not have been selected. This belief was bolstered by the paltry 11th seeding given to the team. Even that other time, they received a 6th seeding. Lowly, but not bottom of the drawer.
The ACC certainly was underappreciated this time around. A perennial powerhouse, it got only 4 selections plus the mandatory one for NC State. Yet 4 reached the Sweet 16. So, the ACC tournament victor could have been presumed to have some chops. Indeed, that has proven to be the case, reaching the Final Four even from a lowly 11 seed position. They beat Duke to get there. A different pairing, and who knows, the ACC might have had two in the Final Four, as was the case two years ago.
In that 1983 tournament, Houston and Louisville were considered to be the class of the affair. The luck of the draw had them meet in the semifinal, which was widely considered the “true final”. The matchup exceeded all expectations for entertainment. Both teams had high-flying forwards, and much of the match was seemingly played above the rim. Houston won that game, led by Clyde Drexler and Akeem (later Hakeem) Olajuwon. Clyde had been a lightly recruited local Houstonian, and Akeem was relatively new to the sport, from Nigeria. The title was seen as a mere formality against the surprising NC State Wolfpack.
Jim Valvano, the coach of the Wolfpack, had one key thing going for him: he could slow down the high flying by holding onto the ball. These were the days before the shot clock. Almost as boring as baseball. Or cricket before it wised up and invented the limited-overs format, resulting in the sport catapulting into the number two spot in revenue behind the NFL. Baseball is still fiddling with little tweaks.
Back to the championship game. These are my personal memories from watching the game and may be wanting in some respect. Houston had scored 94 points in the semifinal game; the moribund pace of this one had thrown them off their game. The score was tied at 52 with a few seconds left. The Pack had the ball, guarded by Reid Gettys of Houston. A tall guard, he was making the shot difficult. But, even with a foul to give, he did not foul intentionally prior to the shot, and the player got off a desperation heave. Under the basket, in perfect rebounding position in front of the rim, was Olajuwon; the Pack’s Lorenzo Charles was forced to occupy a less choice spot. The shot missed everything and fell into the hands of Lorenzo Charles. He gratefully dunked the ball, and that was all she wrote. This was also the last time in the NCAA tournament that the winner did not produce the Most Outstanding Player. That award went to Olajuwon.
I was rooting for my hometown team, Houston. So, dredging up these memories is not bereft of angst. But today, as a resident of the Triangle, I am rooting for the Pack. And that earlier game serves to remind us that conventional wisdom does not always prevail. And who knows, maybe lightning will strike again*.
Vikram Rao
*Lightning is striking again, in Lightnin’ Strikes by Lou Christie (1966), written by Lou Christie and Twyla Herbert.
How Well Can Electricity Replace Fossil Fuel?
March 24, 2024
The UNC Cleantech Summit last week had a panel on this topic, on which I served. Here is one take on this rapidly accelerating trend in the decarbonization of industrial processes. But first some fundamentals.
Processes use fossil fuel in four different ways. The most common use is to produce heat to enable a process. An example in the hard to abate cement/concrete industry is the calcination of limestone in a rotary kiln, together with other oxides such as silica, to produce clinker. The clinker is blended with crushed rock, known as aggregate, and acts as a binder for the aggregate particles. The clinker blend usually comprises about 15% of the concrete. The clinker is often combined with other cementitious materials, such as fly ash from coal-fired power plants and blast furnace slag, the primary purpose being to reduce the amount of clinker used. This purpose in turn is driven in part by the desire to reduce the carbon footprint, and in part because these other materials ought to be cheaper and, but for uses such as this, would be treated as waste. States such as North Carolina have had to deal with environmental crises such as storm-induced overflow of fly ash “lagoons”.
The rotary kiln calcination process causes about 0.9 tonne of CO2 per tonne of clinker produced. About half of this is from the fossil fuel combusted to produce the roughly 1,500°C temperatures required; the rest is from the thermal decomposition of CaCO3 to CaO and CO2. One remedy is to produce the heat with electric heating, but this addresses only half the problem. Another electricity-based approach, one that was presented at the cited conference, is that of Sublime Systems, which electrochemically decomposes silicate minerals to produce Ca(OH)2, which may be used in place of Portland cement.
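The split between fuel CO2 and process CO2 can be checked from the chemistry alone. A minimal sketch, assuming a typical Portland clinker CaO content of about 65% (an assumption, not a figure from the post):

```python
# Process CO2 from calcination alone: CaCO3 -> CaO + CO2.
M_CAO, M_CO2 = 56.08, 44.01   # molar masses, g/mol
cao_fraction = 0.65           # assumed CaO content of clinker

co2_per_t_clinker = cao_fraction * (M_CO2 / M_CAO)
print(f"Process CO2: {co2_per_t_clinker:.2f} t per t clinker")
# -> ~0.51 t, consistent with "about half" of the 0.9 t total,
#    the rest coming from the fuel fired to reach ~1,500 C
```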
Fossil fuel is also used as a reactant in the process. An example is the use of coke in an iron blast furnace. The carbon is oxidized to CO, which then reacts with the iron oxide in the ore to produce metallic iron and CO2. The molten iron, containing about 4% C, is sent to a basic oxygen furnace, where it is lanced with oxygen. The combustion of the contained carbon produces heat and serves to reduce the carbon to the amount desired in steel, usually less than 0.3% for mild steel. This too produces CO2. Overall, steel has a footprint of about 2 tonnes of CO2 per tonne of steel. One approach to electrification of the process is that of Boston Metal, which electrochemically dissociates iron oxide to molten iron.
Fossil fuel may also be used as the raw material for a process. Hydrogen today is produced dominantly by steam methane reforming, in which methane is reacted with steam to produce hydrogen and CO2. Roughly 9 kg of CO2 is emitted per kg of hydrogen produced; this is classified as gray hydrogen. If the CO2 is captured and stored, the color turns blue. Hydrogen produced by electrolysis of water is termed green if the electricity is carbon-free. Yet another electricity-based process, in late-stage development, uses microwave pyrolysis of methane to produce hydrogen and solid carbon.
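The 9 kg figure has a stoichiometric floor that is easy to derive. The overall reforming reaction (reformer plus water-gas shift) is CH4 + 2 H2O → CO2 + 4 H2:

```python
# Why ~9 kg CO2 per kg H2 for steam methane reforming.
M_CO2, M_H2 = 44.01, 2.016    # molar masses, g/mol

stoich = M_CO2 / (4 * M_H2)
print(f"Stoichiometric floor: {stoich:.1f} kg CO2 per kg H2")
# -> ~5.5 kg/kg from the chemistry alone; firing fuel to heat the
#    reformer pushes the real-plant figure toward the cited ~9 kg/kg
```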
Finally, we have the use of fossil fuel to drive an engine. This covers the gamut from internal combustion engines for road vehicles to aviation. Electric vehicles are the best example of fossil fuel replacement. A variant is the use of hydrogen in fuel cells to produce electricity on board electric vehicles. A further possible use is in aviation as engine fuel, although biofuel-derived jet fuel is the more likely workhorse; electric-drive planes will be limited in size and scope.
Availability of Green Electricity
The electricity substitutions discussed above are carbon mitigating only if the electricity is carbon-free, or substantially so. Carbon-free grids are still at least a decade away, more likely two. This is largely because solar and wind are the new sheriffs in energy town. They are the low-cost source of energy in many jurisdictions, clean or otherwise. But they have monthly average capacity factors well below 25% and 40%, respectively. Grids want them for the low cost and the renewable feature but must fill the gaps with other sources. The principal longer-duration gap filler today, and for the next decade at least, is natural gas. Also, the last 40% of grid decarbonization is expensive. Remedies are available, in the form of geothermal, small modular reactors and innovative storage means, but they will be a while getting to scale.
To make matters worse, according to a recent story in the NY Times, electricity demand is expected to increase steeply over the next decade after being essentially flat over the last one. This is presumably due to the explosion in data centers, most recently compounded by generative AI, which is extremely compute intensive (read: power hog). Adding to the demand is the up-and-down phenomenon of bitcoin, also compute intensive. And, of course, electric mobility.
This is not to say that electricity substitution of fossil fuels is impractical*. It is to say that individual operations will find it difficult to get 24/7/365 clean electricity, and that carbon-free grids need policy support to accelerate the gap fillers. At the least, this ought to come in the form of drastic reductions in permitting times.
Technology is necessary, but not sufficient.
*Do you believe in magic? From Do You Believe in Magic, by The Lovin’ Spoonful (1965), written by John Sebastian.
Vikram Rao
DIRECT AIR CAPTURE IS A SUBSIDY PLAY
November 12, 2023
A recent story in the New York Times describes what the author refers to as the first commercial plant in the US that captures CO2 from the air. This is the class of operations known as Direct Air Capture (DAC). Costs are not disclosed, except for noting that similar technologies elsewhere cost up to USD 600 per tonne of CO2 removed, and that this company intends to bring costs down to USD 100 per tonne. Corporations are buying carbon credits from the DAC operator and offsetting them against their own generation. The federal 45Q legislation will also pay a subsidy of as much as USD 85 per tonne, the amount depending on the permanence of the sequestration.
Clearly, therefore, declaring victory in the US in this atmospheric carbon mitigation sector involves some sort of subsidy, from a private or public source. In Europe, cap and trade legislation sets a penalty for emission, the avoidance of which would pay for the capture. That price has ranged from € 80 to 100 per tonne in 2023. In both cases, some sort of government legislation is defraying the cost of capture, and the business is not inherently profitable.
In contrast, many other carbon mitigation means are profitable in their own right. This certainly includes renewable electricity production by solar and wind, which are the lowest cost providers, at a profit, in many jurisdictions. Electric vehicles, with zero tailpipe emissions, are also profitable. These two examples fall in the category of process change to minimize CO2 emissions. Changing the process is the most effective approach, and usually involves using a fuel other than hydrocarbons (yes, we consider sunlight and wind as “fuels” in context). But displacing incumbents is hard, in part because the capital being replaced may be far from amortized. This is the future facing the other two major emitters after transportation: steel and cement.
The two principal approaches to preventing emissions are changing the process or capturing CO2 in the output. The latter is referred to as point source capture and, while still in commercialization infancy, could capture CO2 at source for under USD 40 per tonne. That figure could be expected to go down further, to about USD 30. Once captured, the CO2 must be stored or put to beneficial use. The highest value beneficial use in practice today is tertiary recovery of oil: oil left behind after conventional recovery is swept with CO2, which mixes with the oil, reducing viscosity and increasing mobility. Much of the gas is recovered at the surface and reused. Many efforts are also under way to mix CO2 with concrete. The most promising may be the mineralization of igneous rocks to form a stable carbonate, for use or disposal.
Carbon Capture
Capture is accomplished in one of two ways. In the most common process, a liquid such as an amine absorbs the CO2, and the amine is regenerated by releasing the CO2 in concentrated form, which is then disposed of. The other class of processes uses a solid to adsorb the CO2, which is later released by a pressure or temperature change, and the solid is reused. Once again, the CO2 ends up in concentrated form. Most DAC processes, including the one cited in the linked story, are variants of the solid adsorbent method.
In either method, the process is more efficient if the CO2 concentration is higher. Accordingly, capture costs are roughly inversely related to the concentration. The cost per tonne captured will be lower for cement kilns, with CO2 concentrations north of 25%, than for iron blast furnaces, with concentrations in the mid-teens. Costlier yet is capture from natural gas power plants, with concentrations in the mid-single digits. And then we have DAC: the concentration of CO2 in air is 0.04% (and rising!). There you have it; no wonder USD 100 per tonne is seen as a win.
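The dilution penalty has a thermodynamic floor worth sketching. The minimum work to pull CO2 out of a stream at mole fraction x scales roughly as RT·ln(1/x) per mole captured; real processes sit far above this floor, but the trend with dilution is the same. An illustrative calculation, with concentrations taken from the paragraph above:

```python
# Idealized minimum separation work versus CO2 concentration.
from math import log

R, T = 8.314, 298.0   # J/(mol K), K
streams = {"cement kiln": 0.25, "blast furnace": 0.14,
           "gas power plant": 0.05, "ambient air": 0.0004}

for name, x in streams.items():
    kj_per_mol = R * T * log(1 / x) / 1000
    print(f"{name:>16}: {kj_per_mol:5.1f} kJ/mol CO2")
# Air is ~6x the kiln on this floor alone; practical costs diverge
# much further, which is why DAC is the costliest capture.
```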
Why DAC?
Why go after the costliest capture? Why not just double down on point source capture, low-carbon electricity production and electric transportation? Many argue that capturing from the air gives a free pass for business as usual. But that argument could also be made for point source capture, albeit not as strongly.
The answer lies in how far behind we are in carbon mitigation. Point source capture will certainly involve retiring assets, some of which may not be fully amortized, and acceleration of retirements may require policy support (read: subsidies). Electric vehicles are being subsidized now. The key point is that even with an all-of-the-above approach to emission prevention, we may fall short of reasonable 2050 atmospheric CO2 targets unless we do something about the CO2 already in the air.
So, yes, we need DAC. But it is indeed a subsidy play, and subsidies come and go, subject to the whims of the folks in power (look no further than the history of subsidies related to wind power)*.
*Drove my Chevy to the levee, but the levee was dry, from American Pie, written and performed by Don McLean (1971)
Vikram Rao
November 12, 2023
Climate Change Increases Wildfire Severity
August 8, 2023
We now have the smoking gun
Eight of the ten largest wildfires ever in California occurred after 2017. The August Complex Fire in 2020 burnt over a million acres. At the time, the next largest had been the Mendocino Complex in 2018, which burned 459,000 acres. Then the Dixie Fire in 2021 was nearly as large as the record holder. Most, though not all, scientists attribute this to climate change, but causality has been hard to establish. Then came the recent paper in the Proceedings of the National Academy of Sciences (PNAS), which is as close to a smoking gun as one could have when dealing with multiple variables. More on that later, but first some basics.
The severity of a fire season is often judged by the acreage burned, not the number of fires. This makes sense because that is the metric connected to the impact on society. The difference stands out when fire activity is examined over the last few decades. Shown in Figure 1 are US Environmental Protection Agency (EPA) wildfire figures for both metrics over three decades.

Courtesy US EPA 2019 (Wildland Fire Research Framework 2019-2022)
Figure 1. Time series of numbers of fires and area burned in the period 1988 – 2018.
The number of fires shows no perceptible trend, although one can see a general progression downwards in the last decade. The acreage burned, on the other hand, has seen a gradual slope up, even though a regression line would show significant uncertainty due to the severe annual swings. Interestingly, the 2020 figure (not shown) is not that different from 2017’s, even though five of the 10 largest fires ever were in 2020. A fire is considered large when the area burned is greater than 400 hectares (a hectare is roughly 2.47 acres), and acreage burned is dominated by the larger fires. Obviously, the impact on society is proportional to the area burned and the duration of the fires. One study showed that durations of large fire (>400 hectare) burns averaged 6 days in the period 1973-82 and over 50 days in 2003-12.
In attempting to unravel the distinction between the patterns in numbers and extents of fires, one statistic stands out. In recent years, ninety-five percent of fires in California and the Mediterranean region have been caused by human activity (Syphard and Keeley, 2015). These activities include campfires, arson, equipment (such as chainsaws and mowers), vehicles, falling power lines, and controlled burns gone, well, uncontrolled. While human behavior can be influenced (the Smokey Bear program, for example), not surprising is the fact that the trend in the number of ignitions has been generally flat. Among natural ignition events, lightning plays an interesting part. Depending on the region, it ordinarily clocks in as the 4th to 8th most frequent contributor to ignition. The 2020 fire season changed all that: Cal Fire, a state agency, reports that 4 of the 7 largest fires ever were triggered by lightning.
Effect of Climate Change
The progressively increasing severity of wildfires cannot really be in doubt; the data, such as in the figure above, speak for themselves. And, yes, the globe is warming incrementally. Also not seriously questioned is the observation that instances of severe weather have increased, ranging from droughts to floods, as have phenomena such as El Niño and La Niña, which are correlated with severity of weather. The western United States is a reasonable proxy for the relationship between climate change and wildfire severity. The states are arid and depend upon winter precipitation, primarily snowfall, for year-round water. A warming climate predicts reduced snowfall and earlier snowmelt, and the latter statistic has been shown to be strongly correlated with areas burned by wildfires. Other observations, such as climate-change-mediated drought conditions leading to more flammable matter (fuel), have added to the body of belief that climate change was increasing the severity of wildfires. Yet anything approaching causality had been elusive.
Then came the PNAS paper mentioned above.
Courtesy PNAS, Excerpted from Turco M. et al. (2023)
Figure 2. Time series of summer wildfire burned area (BA) and spring-through-summer monthly average maximum near-surface air temperature (TSmax) for the period 1971–2021.
The authors used data from California to model area burned in wildfires (BA) against various parameters. Plotted in Figure 2 is burned area (on a log scale) against TSmax, the monthly mean of daily maximum near-surface air temperatures. The temperatures cover April through October, while the BA covers a smaller summer subset, May through September. The temperature data are the open circles and black lines; the BA data are the filled circles and red lines.
The most striking finding is that the single parameter of surface temperature correlated extremely well with area burned*. If there is one parameter indelibly linked to climate change, it is global warming, characterized by a rise in ambient temperatures. The model yields a correlation of 0.84 between the two parameters, with a P-value (a measure of statistical significance) < 0.01. Statisticians will find that strong. The rest of us can see that it passes the eyeball test: without exception, in the years in which the black circles rise or drop, so do the red ones. Not all equally strikingly, but follow they do. The data from the first and last three years of the study are extraordinary in this regard. The authors also found that normalizing for precipitation did not make much difference.
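For readers who want the mechanics, the headline statistic is an ordinary Pearson correlation between TSmax and the logarithm of burned area. A sketch with made-up stand-in numbers, not the PNAS data:

```python
# Illustration of the statistic: Pearson r between summer maximum
# temperature and log10 of burned area. The values below are invented
# stand-ins purely to show the computation.
import numpy as np

tsmax = np.array([28.1, 28.4, 27.9, 29.0, 29.6, 30.2, 30.9])   # deg C
burned_acres = np.array([2e5, 3e5, 1.5e5, 6e5, 9e5, 1.8e6, 3.2e6])

r = np.corrcoef(tsmax, np.log10(burned_acres))[0, 1]
print(f"Pearson r (TSmax vs log10 BA): {r:.2f}")
# Turco et al. report r = 0.84 (P < 0.01) on the real 1971-2021 record.
```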
In the end, there is only one measure that will actively address severity of wildfires: slowing down the inexorable march of ever higher near-surface temperatures. Much is happening in that space. More is needed.
Vikram Rao
August 7, 2023
* Everybody look, what’s going down, in For What It’s Worth, by Buffalo Springfield (1966), written by Stephen Stills
References
Turco, M. et al. (2023). Anthropogenic climate change impacts exacerbate summer forest fires in California. Proceedings of the National Academy of Sciences, 120, e2213815120. https://doi.org/10.1073/pnas.2213815120
Syphard, A.D., Keeley, J.E. (2015). Location, timing and extent of wildfire vary by cause of ignition. International Journal of Wildland Fire, 24, 37–47. doi:10.1071/WF14024
How Green Can Steel Get?
March 25, 2023
Steel is considered a “hard to abate” commodity because the production process uses a lot of fossil fuel and alternative processing methods are not readily available. The first step in production is reducing iron ore (an oxide) to metallic iron. This is a continuous process performed in a vertical shaft furnace known as a blast furnace, and the reducing agent is a form of processed coal known as coke. This is the primary culprit behind the high carbon footprint of steel, estimated at about 2.2 tonnes CO2e per tonne of steel. By comparison, that other hard to abate structural commodity, cement, has a footprint of about 1 tonne CO2e per tonne of cement.
The molten iron, containing a few percent carbon, is transferred from the blast furnace directly to a basic oxygen furnace, where much of the carbon is oxidized to produce steel, which requires the carbon content to be a fraction of a percent. This is known as primary steel. Steel produced from remelting scrap iron and steel is known as secondary steel; it has a very small carbon footprint but is in relatively short supply.
A recent report from the Rocky Mountain Institute (RMI) provides a review of alternate ironmaking with lower carbon footprint. Their figure is reproduced below.
Courtesy RMI
They highlight the direct reduced iron (DRI) process as the primary means to greener steel. This process has a vertical shaft variant which uses synthesis gas (syngas), a mixture of CO and H2, as the reducing agent instead of coke. The operating temperatures are also less than half those in blast furnaces. The result is a reduction of associated carbon to 0.8 tonne CO2e per tonne of steel, after the iron is converted to steel in an electric arc furnace (labeled EAF in the figure).
The report advocates a recent variant that substitutes H2 for the syngas, piloted by the Swedish entity Hybrit. This is straightforward because the CO in syngas can be reacted with steam in what is known as the water-gas shift reaction to produce H2 and CO2, which, if sequestered, makes the hydrogen carbon-free (although saddled with the color blue, rather than green). Alternatively, green hydrogen could be produced by electrolyzing water with carbon-free electricity. The report advocates this approach, and further estimates that if green electricity is used in the EAF as well, the carbon emissions associated with a tonne of steel drop to 0.1 tonne (see figure).
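The hydrogen requirement can be bounded from the reduction chemistry. A minimal sketch, assuming hematite (Fe2O3) ore and the roughly 50 kWh per kg that real-world electrolyzers need; both are illustrative assumptions, not figures from the report:

```python
# Hydrogen demand for direct reduction: Fe2O3 + 3 H2 -> 2 Fe + 3 H2O.
M_FE, M_H2 = 55.85, 2.016   # molar masses, g/mol

kg_h2_per_t_fe = (3 * M_H2) / (2 * M_FE) * 1000
print(f"Stoichiometric H2: {kg_h2_per_t_fe:.0f} kg per tonne of iron")
# -> ~54 kg/t before process losses; at ~50 kWh per kg of electrolytic
#    H2 (an assumed typical figure), that is ~2.7 MWh of electricity
#    per tonne of iron for the reduction step alone
```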
So, there you appear to have it. Switch to the DRI/EAF process and use green electricity for the EAF and to produce hydrogen as the reducing agent. The RMI report notes that the steel industry has vertically integrated to ensure supply of relatively scarce coking coal. It advocates that the new process do the same with respect to green electricity supply. This may well be necessary because grids will not be carbon-free for a long time (see https://www.rti.org/rti-press-publication/carbon-free-power). Captive supply will also have a lot of competition. But it could be done, certainly over time.
But there is a fly in that ointment: the DRI process can only use high-grade iron ore, with over 64% iron, preferably over 67%. Those who don’t care why should skip the rest of this paragraph. In a blast furnace, the mineral impurities such as silica and alumina are removed by combining with oxides of Ca and Mg to form a molten phase known as slag. This floats on top of the molten iron, and both are removed continuously. In the DRI process, the temperatures are too low for slag formation. Consequently, only very small proportions of mineral impurities are tolerated; these small amounts are slagged in the EAF. Low impurities equate to high-grade iron ore. Hence the requirement.
Such high-grade ore is in very short supply, with most of the known reserves in Brazil and Australia. The DRI process has been commercial for decades, but only about 7% of the steel supply comes from this source. The shortage of supply (and of world reserves, for that matter) and the higher cost of the high-grade ore are contributory factors.
Before we get into my opinion on the way forward, two other avenues to green(er) steel bear mention. A story in The Economist describes a way to clean up the blast furnace process: the CO2 emitted is broken down into CO and oxygen using perovskites (essentially known science), the CO is used as the reducing agent in the furnace (instead of coke), and the oxygen is used in the steelmaking. There are practical issues in replacing the structural function of the coke, but the allure is that it modifies existing capital equipment. A complete departure from the blast furnace is electrolytic steel; the clever bit in a recent embodiment is the inert anodes. But the electricity must be carbon-free for the steel to qualify as green, and the process uses a lot of it: 4 MWh per tonne of steel. Scaling to anywhere close to the world usage of 2 billion tonnes per year means needing a high fraction of all the power produced, leave alone the clean power. And as we noted earlier, carbon-free grids are not in the immediate future.
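That "high fraction" is worth quantifying. A rough check, assuming world generation of about 30,000 TWh per year (a round number of mine, not from the RMI report):

```python
# Scale check on electrolytic steel, using the figures in the text.
mwh_per_tonne = 4.0
world_steel_tonnes = 2e9
world_generation_twh = 30_000   # assumed annual global generation

demand_twh = mwh_per_tonne * world_steel_tonnes / 1e6
print(f"Electrolytic steel at full scale: {demand_twh:,.0f} TWh/yr,"
      f" or {demand_twh / world_generation_twh:.0%} of all generation")
# -> 8,000 TWh/yr, roughly a quarter of today's global electricity
```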
Where does that leave us? My favorite is DRI/EAF with hydrogen, especially if we are not too choosy about the color of the hydrogen; blue will do till green is feasible at scale. It is a tweak to an accepted process with essentially the same work force, so it is more easily acceptable. And that can be important for a staid industry such as iron and steel. The high-grade ore is the main hurdle to scale. Magnetite is the highest-grade variety, and it could be actively prospected. There will not be enough; we need another source.
One such source is ultramafic rocks such as olivine, which are among the most abundant minerals on earth and close to the surface. These are mixed silicates of iron and magnesium (in the main). Early-stage research offers the promise of extracting the Fe portion, and as luck (and thermodynamics) would have it, the Fe will be in a valence state that makes the oxide magnetite.
Separately, the CO2 in blast furnace emissions can be captured and stored for under USD 50 per tonne of CO2 with technology available today, well below the carbon penalty in Europe. Partial use of hydrogen as a coke substitute would be minimally intrusive.
The two approaches above could handle the bulk of the decarbonization. They could be supplemented by electrolytic steel where captive carbon-free electricity could be arranged.
And don’t forget that Kermit the Frog said*, “It’s not easy being green”.
Vikram Rao
March 25, 2023
*Bein’ Green, Song by Kermit the Frog (Jim Henson), 1970, written by Joe Raposo
Peat Bogs: Nature’s Best Carbon Capture Systems
March 13, 2023
Direct air capture of CO2 (DAC) is all the vogue in carbon capture, with considerable innovation occurring. Nature tried its hand at innovating in the passive DAC space a while back. The public is very familiar with the role of forests; to a lesser degree, the role of oceans as carbon sinks is also known.
But it may surprise many that there is a form of vegetation that does a far better job than trees. Five times better per square meter, in places. These are the plants in peat bogs, which capture CO2 and transfer it over time to the organic layer below, resulting in the material we know as peat. Peat may be classified as a very early form of coal, with as little as 40% carbon. Were it to be subjected to higher temperatures and pressures by burial under sediment, it would eventually convert to lignite, and thence to bituminous and finally anthracite coal. The last clocks in at over 90% carbon and looks like a shiny black rock. Countries short of other hydrocarbons, such as Estonia, have combusted peat for electricity. In other places, regular folks have retrieved buckets of peat from the bogs and burned them for fuel.
Peat bogs comprise only 3% of the land surface of the earth but account for 30% of the land-based stored carbon, which is double that stored by all forests combined. For comparison, forests cover about 31% of the land. A recent paper compares carbon storage by trees and peat in boreal forested peatland (peatland that also has partial or complete tree canopy). The authors estimate that the organic storage is higher in the peat layers than in the trees and subsoil (11.0–12.6 kg m⁻² versus 2.8–5.7 kg m⁻²) over a “short” period of 200 years.
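The per-area contrast implied by those headline shares can be made explicit. A quick calculation using only the numbers in this paragraph:

```python
# Per-area implication: peatlands on ~3% of the land hold double the
# carbon of forests on ~31% of the land.
peat_share, forest_share = 0.03, 0.31
peat_to_forest_stock = 2.0   # "double that stored by all forests"

per_area_ratio = peat_to_forest_stock * (forest_share / peat_share)
print(f"Peat stores ~{per_area_ratio:.0f}x more carbon per unit area")
# -> ~21x on standing stocks; the "five times better" figure in the
#    opening refers to the rate of ongoing capture, a different metric
```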
And yet, saving rainforests gets all the ink. Save the peat bog does not have the same ring. But it should. Admittedly, on imagery alone, a bog finds it hard to compete against a rainforest. Cuddly koala bears versus fanged Tasmanian devils (mind you, as any Aussie knows, real-life koalas are not to be messed with either, and in a further nod to excellent promotion, they are not even bears; they are marsupials, as are kangaroos). And the comparison is not that straightforward, because forests provide other benefits over bogs. In any case, the global warming situation is so dire that this is not an either/or proposition. The purpose of this discussion is twofold: one is to draw attention to peat bogs as at least as important as forests for preservation and expansion, including use in carbon offset programs; the other is to delve into the science of why peat moss is more effective than other plant matter in capturing and storing carbon.
Sphagnum mosses are the dominant species in peat bogs. They are specially adapted to thrive in low pH (acidic), anaerobic, nutrient-poor waterlogged environments. The bog microbiome (a mix of microbes) plays a critical role in the fate of the Sphagnum. The microbiome is dominated by bacteria but also has fungi. The microbes are highly specific to the Sphagnum, indicating plant-microbe co-evolution. This specificity is believed to increase the carbon fixation efficiency and to help adaptation to changing climatic conditions. An Oak Ridge National Laboratory investigation showed that heat-tolerant microbes transferred heat tolerance to the Sphagnum.
A key feature of the low pH and anaerobic environment is that when the mosses die, they sink into the bog and do not decompose, thus retaining the carbon for incorporation into the peat layer, while new moss grows above. This unique ecosystem carries on fixing carbon from the atmosphere in a manner far more effective than any other natural means. Yet, possibly through a failure to recognize the value, or through a desire to repurpose the land for commercial interests, many of the peatlands have been drained. In the state of North Carolina, nearly 70% of peatlands were drained, according to a Nature Conservancy report (Afield, Spring 2023), a reading of which was the impetus for this discussion. Drained peatlands cease to be carbon absorbers and become emitters. In the more spectacular instances, fires lit by lightning strikes have burned and smoldered for up to a year, spewing as much as 270 tons of CO2 per day. A year’s duration is not surprising, because the fire can go underground, where the fuel is plentiful.
Reforestation is a laudable goal. As is the support of the many ongoing investigations targeting passive and active capture of CO2 from the air. But, equally, restoring peatlands and protecting existing ones ought to be a priority. Nature has already provided an efficient CO2 sponge. We must feed it*. Adopt a bog.
Vikram Rao
* Sat on a fence, but it don’t work, from Under Pressure, by Queen and David Bowie (1981), written by Roger Taylor, Freddie Mercury, David Bowie, John Deacon and Brian May.