October 22, 2020
The future of oil has been debated for as long as I can remember. When I was an engineering undergraduate in the early sixties, we were taught that the world would run out of oil in 30 years. Such predictions continued, with the concept of Peak Oil oft discussed. But, with the recognition of immense heavy oil reserves, and more recently with the emergence of shale oil, the discussion has shifted to the demand side.
For nearly a century all crystal ball gazing centered on sufficiency of a strategic commodity. Over the last decade or so, oil is well on its way to turning into salt*. Lest you conjure alchemical imagery, I hasten to explain that oil is merely going the way of salt. Salt used to be a strategic commodity. Canning, and later refrigeration turned it into a useful commodity, no longer strategic. This was about the time that the era of oil began, with the discovery of Spindletop and the resultant decimation of the price of oil. The era was abetted by the demand created by mass production of economical cars by Ford, which incidentally killed the auto industry of the time: electric cars. More on the revenge later.
But the demise of oil will be preceded by a protracted hospice stay. Folks will predict X% electric cars by year Y. But that will be for new vehicles. Legacy vehicles will stay on the road a long time, especially in countries like India, a major developing market for automobiles. The electric starter was first installed in a Cadillac in 1911. I was still hand cranking our venerable Morris 8 sedan in India (with difficulty; I was 6) in 1950. On the other side of the coin, India is more amenable to conversions to electric drive, in part due to low labor cost and in part due to a way of life that wrings every drop of value out of a capital asset.
The future of oil is now being discussed relative to demand, not so much supply. Peak oil discussions are replaced by peak consumption ones. Shale oil put paid to the supply issue. Even before Covid-19 destroyed demand, there was a groundswell of movement toward alternatives to oil as a transportation fuel. This was driven by climate change concerns, but also to a degree by emissions such as NOx and particulate matter. But projections of future demand depend on the tint of the glasses worn. The Organization of Petroleum Exporting Countries (OPEC) is predicting a return to pre-Covid levels of consumption by late next year. Somewhat surprisingly, the US Energy Information Administration is singing the same tune, as are some oil majors such as ExxonMobil.
Most surprisingly, however, BP is very bearish. Their projections, while scenario based, have them planning a 40% reduction in oil output by 2030. This is to be combined with a big uptick in renewable electricity production. Shares rose on the announcement. But BP has been contrarian before, along the same lines. Over a dozen years ago they announced a pronounced shift away from oil, declaring that BP stood for Beyond Petroleum. That did not go well. Particularly unhelpful to their reputation for operating in difficult environments was the oil spill associated with the massive Macondo blowout.
The future of oil is not the future of natural gas. Together they share the term petroleum, although it is imprecisely used in the parlance to stand simply for oil. They were both formed in the same way, with natural gas being the most thermally mature state of the original organisms. But in usage they are different. Oil is mostly about transport fuel and natural gas is mostly about fuel for electricity generation and the manufacture of petrochemicals, especially plastics.
The pandemic decimated demand for transportation fuel but had much smaller effects on electricity and less again on plastics. In the post-pandemic world, natural gas will endure for a long time, while oil will be displaced steadily by liquids from natural gas and biogas, and ultimately by electricity. This, of course, excludes aircraft, which will need jet fuel for the foreseeable future. Biomass-derived jet fuel will be a consideration, but not likely a big factor.
Electric vehicle batteries costing USD 100 per kWh will be the tipping point, and we are close. At that level, an electric vehicle with modest range will cost about the same overall as a conventional one. The cost of the battery and electric motors will be offset by the removal of the IC engine, gear box, transmission, exhaust system and the like. For a compact car, each 100 miles of range will add about USD 2500 to 3000 to the capital cost. Maintenance costs will plummet and the fuel cost per mile will be significantly less than with gasoline or diesel. To top it off, the flat torque profile typical of electric motors, with full torque available from zero rpm, enables brisk acceleration from a stop. A progressive shift is inevitable. The revenge of the electric car.
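The arithmetic behind that tipping point can be sketched quickly. The sketch below assumes (my figure, not established in the text) that a compact EV consumes roughly 25 to 30 kWh per 100 miles; that consumption is what makes each 100 miles of range cost about USD 2,500 to 3,000 at USD 100 per kWh.

```python
# Back-of-envelope battery pack cost at the USD 100/kWh tipping point.
# Assumption (mine): a compact EV uses ~25-30 kWh per 100 miles.

def pack_cost_usd(range_miles, kwh_per_100mi=27.5, usd_per_kwh=100):
    """Battery pack cost for a given range at a given pack price."""
    return range_miles / 100 * kwh_per_100mi * usd_per_kwh

for rng in (100, 200, 300):
    print(f"{rng} mi range -> ~USD {pack_cost_usd(rng):,.0f}")
```

At 27.5 kWh per 100 miles this lands squarely in the USD 2,500 to 3,000 band per 100 miles quoted above.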
The only debatable issue is the rate of change. And this is where the opacity appears in the future of oil. The main sticky bits are perceptions of the range required (and the willingness to pay for more) and charging infrastructure. The latter could be influenced by business model innovation, such as battery swapping rather than owning. But oil is here to stay for decades. Therefore, improvements in efficiency, to reduce emissions per mile, are paramount. The industry appears to understand that. When the US administration announced a drastic relaxation of the 2025 mileage standards, four major companies voluntarily agreed to a standard close to the old one. I suspect this was in part because they had already worked out the techno-economics to get there, and certainly the consumer would like the better mileage. It could also be that they had projections of electric vehicle sales that allowed fleet averages to be met. A compact electric vehicle has a gasoline-equivalent mileage of about 120. Quite an offset with even a modest fleet fraction.
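How much of an offset a modest electric fraction provides can be illustrated with a harmonic mean, which is how fleet fuel economy is properly averaged (over gallons consumed, not over vehicles). The 38 mpg conventional figure and the ~120 mpg-equivalent EV figure are from the text; the EV fractions are my illustrative assumptions.

```python
# Fleet-average mpg for a mix of EVs and conventional vehicles,
# averaged the CAFE way: harmonically, over gallons not vehicles.
# 38 mpg and 120 mpg-e come from the text; fractions are illustrative.

def fleet_mpg(ev_fraction, ev_mpge=120.0, ice_mpg=38.0):
    """Harmonic-mean fleet average fuel economy."""
    return 1.0 / (ev_fraction / ev_mpge + (1.0 - ev_fraction) / ice_mpg)

for f in (0.0, 0.1, 0.2, 0.3):
    print(f"{f:.0%} EVs -> fleet average {fleet_mpg(f):.1f} mpg")
```

Roughly 20 to 30% EVs moves a 38 mpg fleet well toward the 51 mpg California target, even with no change to the conventional vehicles.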
The oil barrel has sprung a leak. But it is likely a slow one.
October 22, 2020
*Turning Oil into Salt, Anne Korin and Gal Luft, 2009, Booksurge Publishing
October 8, 2020
Several readers of my wildfire blog suggested that I had given lightning strikes short shrift as a cause of forest fires. In fact, lightning as a cause is not even mentioned because it is not in the top four causes, at least through 2015, as reported in the paper I used, which is cited at the end of this piece. The authors plot the proportion of the number of fires and the area burned by cause of ignition in (a) the Santa Monica Mountains and (b) San Diego County. I am not reproducing the plot here for copyright reasons, but interested folks can go to the link in the citation (Syphard and Keeley 2015) below.
Most such studies concentrate more on the area burned, than numbers of ignition, presumably because that is the net effect on the public and on the environment. Lightning as a cause is present in…
September 21, 2020
California is ablaze. So are Oregon and Washington. The tally to date is 5 million acres burned, about halfway through the fire season, and well on its way to record territory. Putting that in perspective, the east coast of Australia, devastated similarly earlier this year in the Southern Hemisphere summer, closed the season with 46 million acres burned.
The statistic of greatest concern is that the intensity and scale of the fires are getting worse. Over the last thirty years, the number of fires annually has shown no discernible trend; it certainly has not gone up. But the acreage burned has, decisively. Both patterns are evident in the figure below. Five of the ten largest fires ever in California are currently active. The largest of these, the August Complex, is already at 839,000 acres and still going. The next largest ever was the Mendocino Complex in 2018, at 459,000 acres. Labeling any of this chance, or poor forestry management, evokes imagery of the proverbial ostrich, and the placement of its head.
The average hectares (a hectare is roughly 2.47 acres) burned has nearly doubled over this three-decade period. Nine of the ten largest fires have occurred since the year 2000. Note that this does not include the ongoing five, which certainly would be in that group, making it 14 of the 15 since 2000. Although a regression line would have uncertainty due to big annual swings, an eyeball estimate indicates a strong upward slope. If this is a predictor of the future, that future is indeed bleak and warrants a study of causes.
The recent EPA report, from which the figure was reproduced, ascribes the pattern of increased fire acreage to higher temperatures, drought, early snow melts and historically high fuel loading (which is the fire prone vegetation, including underbrush). We will examine these separately, although they may not be disconnected. But first, a comment on the pattern of numbers of fires being essentially flat. Ignition events determine numbers of fires. In California, the principal ones are arson, campfires, power lines and equipment. The equipment category comprises items such as power saws, mowers, and other operated machinery. Human behavior, absent intervention, can be expected to be constant. So, the flat profile on numbers of fires is to be expected. Interestingly, the incidences are seasonal, even, counter-intuitively, arson.
Climate change is implicated in many of the causes of increasing severity over the years. While the term has many interpretations, one generally accepted aspect is temperature rise in the atmosphere and in the oceans. The debate is not whether this happens, but how fast it does. Also generally accepted (to the extent any climate change causality is generally accepted) is that oceanic temperature rise causes increased severity in the El Niño phenomenon in the Pacific Ocean, which is responsible for catastrophic droughts. These are accompanied by drenching rains in other parts of the world in the same year. Both disturbances are extreme deviations from the norm, with resultant impact on vegetation and the way of life.
Atmospheric temperature rise can also be expected to change the proportions of rain and snow in precipitation. Lighter snowfall can be a result, as can early snow melts. Both are on the EPA list noted above.
California, being generally arid, gets most of its water supply from melting snow. While less snow in a given year is certainly a drought indicator, the phenomenon most studied is the timing of the snow melt. Data from the four decades commencing in 1973 conclusively demonstrated that burn acreage was strongly correlated with the earliness of the snow melt (Westerling 2016). Decadal comparisons show that the fire seasons in 2003-2012 averaged 40 more days than the seasons in 1973-1982. Fires in the later decade were more severe. Large fires, defined as covering greater than 400 hectares, burned for 6 days on average in the early decade and for more than 50 days in 2003-2012.
Power lines deserve special mention. Falling power lines were blamed for several fires in 2017 and 2018. The utility has accepted blame and is in bankruptcy. Trees falling on power lines snapped the poles. The tree roots, finding uncertain purchase due to drought conditions, were no match for the Santa Ana winds or any other storm sourced shoves. Those same drought conditions caused the underbrush to be dry. Power lines are usually not insulated. Sparking wires on dry underbrush and the rest is incendiary history. A poster child for distributed power.
The wildfire future is indeed bleak. Retarding climate change is necessary. But it may not be sufficient in the shorter term. We need a reincarnation of Smokey Bear to change human behavior and minimize ignition events.
Westerling, A. L. (2016) ‘Increasing western US forest wildfire activity: sensitivity to changes in the timing of spring’, Philosophical Transactions of the Royal Society B: Biological Sciences, 371: 20150178. http://dx.doi.org/10.1098/rstb.2015.0178
Vikram Rao September 21, 2020
August 27, 2020
A recent story in the NY Times describes four auto makers making climate action policy. OK, so that is a bit of a stretch, but not by much. BMW, Ford, Honda, and Volkswagen cemented a binding agreement with California to limit tailpipe emissions. Volvo has also agreed to join the other four. This deal was struck despite the Trump administration's roll back of the Obama era regulation, which had called for fleet averages of 54 miles per gallon (mpg) by 2025. The current target is 38 mpg, and the roll back regulation calls for a mere 2 mpg increase by 2025. The new California deal is for 51 mpg by 2026. The auto makers could have settled for the low target and beaten it handily. But, instead, they chose to take the high road.
This is not the behavior one ordinarily expects from industry. But these are not ordinary times. Recently, BP announced its intention to reduce oil production by 40% by 2030, while planning to produce 50 GW of renewable electricity by the same date. Not much detail was given, and yet the share price rose in response. Green energy resonates with investors, it seems.
Much the same happened when the Trump administration rolled back yet another Obama era environmental protection scheme, the regulations ensuring the curbing of fugitive methane emissions. The roll back was opposed by Shell, ExxonMobil, and BP. Methane leakage occurs at various stages of the natural gas production and distribution system. A multi-year study led by Prof. Allen at the University of Texas at Austin, sponsored in part by EDF, identified the sources of leakage, and industry has taken steps to ameliorate them. This has been aided by the fact that most of the actions are cost neutral. Natural gas that leaks is natural gas that cannot be sold, so the motivation is not just environmental. Much progress has already been made, especially by the larger oil companies. A reversal of the regulation has no benefit for them. In fact, their opposition to the roll back is in part because they want to represent natural gas as a clean fuel, and allowing leakage undercuts that message.
In inking a deal with California, those five auto companies did more than merely push back. They inserted themselves into a federal versus states’ rights dispute. And this was an “in your face” move in response to the roll back, which was intended to “produce far less expensive cars for the consumer, while at the same time making the cars substantially SAFER.” The administration’s move appears to have been designed to be a win for both the consumer and the producer. The only loser was the environment. However, the auto companies must believe that the net cost of ownership will be lower with the new targets. The modest capital cost increase would be offset by the lower cost of operating a fuel-efficient vehicle. Direct fuel injection, combined with temperature measurement and multiple injection capability, can dramatically increase the compression ratio. Increased mileage combined with more muscle; a winning combination that will sell.
I suspect, however, that the bet they are placing is not only on the type of improvement I note above. This is a vote of confidence in electric vehicles, as was BP’s in the move away from oil production to generating renewable electricity. If a substantial portion of the new fleet comprises electrical vehicles, it takes some of the pressure off on improving the current fleet. An electric family sedan can be expected to have an effective “miles per gallon” between 110 and 120. But 2026 is not that far away, and possibly too soon for a major shift to all-electric vehicles. Then again, maybe not. I found the shift of the target date from 2025 to 2026 interesting, indicating a level of granularity that presages a firm plan. Hybrid vehicles already deliver the 51 mpg target. Direct injection combined with elements such as turbocharging are proven mileage enhancers.
One of the drivers for industrial responses to the Trump administration's roll backs of Obama era environmentally themed rules has been a desire for stability in regulation. Change potentially every four years is disruptive. Logic dictates that this desire will apply primarily to rules affecting capital investment with long cycles. Certainly, automobile design changes and oil exploitation of the Arctic National Wildlife Refuge fit that bill. Methane emissions somewhat less so, but still involving changes in equipment and procedures. Warning labels on machinery probably lie at the other end of the spectrum.
California represents a solid 30% of the automotive sales market. Thirteen other states are committed to California regulations and add to that percentage. Unless President Trump succeeds in taking away states' rights in this regard, that pretty much means the entire country will be using these vehicles, no matter the regulations in the individual states. This has happened before. Devices such as television sets consume power while on standby. A scant 15 years ago, that could have been as much as 20 watts. Based on research conducted at the Lawrence Berkeley National Laboratory, it was concluded that full functionality could be achieved with as little as 1 watt, at virtually no extra cost; you just had to design the device more smartly. California mandated that in 2006 (I think it was 3 watts at the start) and the rest of the country followed. At that time, a full 10% of all national electricity consumption was this sort of wasteful passive use. Since then, we have taken other actions to reduce passive consumption.
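The scale of the standby saving is easy to estimate. The 20 watt and 1 watt figures are from the text; the hours on standby and the device count below are illustrative assumptions of mine, just to show the order of magnitude.

```python
# Rough scale of the standby-power saving from the 1-watt rule.
# Assumptions (mine): ~20 hours/day on standby, 100 million devices.

HOURS_PER_YEAR = 20 * 365  # standby hours per device per year

def annual_kwh_saved(watts_before=20, watts_after=1, hours=HOURS_PER_YEAR):
    """kWh saved per device per year by cutting standby draw."""
    return (watts_before - watts_after) * hours / 1000.0

per_device = annual_kwh_saved()            # ~139 kWh per device per year
national_twh = per_device * 100e6 / 1e9    # ~14 TWh/yr for 100M devices
print(f"~{per_device:.0f} kWh/device/year, ~{national_twh:.0f} TWh nationally")
```

Even for a single device class, the total is comparable to the annual output of a few large power plants, which is why a design rule this cheap was worth mandating.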
It began with the 1 watt rule, unilaterally laid down by California. Perhaps that bell will get rung again in tailpipe emissions.
August 28, 2020
August 20, 2020
Oil drilling leases will soon be available in the Arctic, according to a story in the New York Times. The Arctic National Wildlife Refuge (ANWR), a land-based portion of the Arctic, is cited. But the Arctic is cold, both figuratively and literally. When he took office in 2017, President Trump announced a roll back of a "permanent" ban on Arctic drilling that President Obama instituted as he was leaving the White House. I opined then that the roll back would have no net effect because interest from oil companies would be minimal. I also wrote at the time that President Obama's action was largely symbolic, and not material.
The principal reason for these conclusions is that the price of oil has been low since 2015, when US shale oil became the determinant of the world oil price and the ability of the Organization of Petroleum Exporting Countries (OPEC) to prop up prices was deeply undercut. Highs of USD 120 per barrel became highs of USD 70. The Covid-19 pandemic has decimated the ranks of shale oil companies, but it has also caused demand, and price, to plummet to historic lows. Accordingly, the crystal ball of future oil prices is murky. Murky crystal balls equate to uncertainty, which, added to the environmental risks, further equates to higher discount rates. Making matters worse on the investment side, any Alaska play has a long-term payout. First oil is likely a decade after the lease purchase. This involves forecasting the price of oil into the second half of the century.
All the indications are that oil demand will fall significantly by 2040, largely through electric vehicle adoption. Certainly, the super-major oil company BP's beliefs in this regard have translated into plans for a major replacement of oil revenue with revenue from renewable electricity. They recently announced that by 2030 their oil production will be reduced by 40%, concurrent with major investment in renewables, resulting in 50 GW of electricity production. That production is up there with good-sized electric utilities. This decision also comes at a time when the dividend has been halved and properties divested to raise cash. It also is coincident with the divestiture of their pioneering Alaska North Slope holdings to privately held Hilcorp, during which transaction they sweetened the pot with a loan to ensure closure of the deal. This does not sound like a company that will invest in a US Arctic lease. I do not see any oil company headquartered in Europe doing it either.
Hydrogen is an important industrial commodity even without counting its possible use as an electric vehicle fuel. US refineries purchase 2 billion cubic feet per day of hydrogen (in addition to using another 0.5 billion cubic feet produced internally). Virtually all of it is produced from natural gas. As we discussed in these pages earlier, hydrogen produced using surplus electricity during low demand periods is one of the most promising solutions to the problem of intermittency of renewable electricity. Oil companies like BP, doubling down on renewables, are unlikely to miss this point. Also, if conversion to ammonia is more appropriate for storage and transport, who is better positioned than an integrated major oil company? In its announcement, BP makes a vague reference to hydrogen. No mention is made of geothermal electricity, but it is highly unlikely they are not watching that space.
Returning to the issue of the success of a lease sale in the ANWR, one of the primary challenges is the paucity of high-quality seismic data. These are subsurface images acquired by individual oil companies in proprietary shoots, or by seismic operators shooting speculatively and then selling subscriptions to the data in "libraries". The acquisition and interpretation of the data is the edge employed by oil companies in placing winning bids without overpaying. Less data means more uncertainty. My take on the situation is that there will be few bids, due to competing directions for capital spending, the uncertainty in the price of oil, the environmental risks, and the delays likely due to litigation (case in point: the litigation-based delays in the Keystone XL oil pipeline construction). Whatever bids do materialize are likely to be lowballed. In that case, the revenue from the sale will be underwhelming. This assumes, of course, that the administration goes ahead with plans to auction the tracts. More than likely this is just another tempest in the Alaskan teapot.
August 20, 2020
August 9, 2020
This discussion is about fixing, as in solving, but it is also, and mostly, about fixing as in rendering immobile. The impact of the greenhouse gas CO2 can be mitigated either by producing less or by capture and storage. A recent paper in the journal Nature triggered this piece. It discusses the feasibility of fixing CO2 in the form of a stable carbonate or bicarbonate by reacting atmospheric CO2 with minerals in the volcanic rock basalt, one of the most ubiquitous rocks on earth. Crushed basalt is to be distributed on farmland. The bicarbonate fraction is water soluble and run offs take it to the ocean, where the alkalinity mitigates ocean acidification. The reaction products are also a desirable soil amendment. This paper is mostly not about the technology. It studies scalability and the associated economics. The authors estimate the process can be accomplished at a cost ranging from USD 80 to 180 per tonne of CO2. Putting that in perspective, the current US regulation has a 45Q Federal Tax Credit of USD 50 per tonne sequestered in this fashion. This lasts for another 12 years. While no business ought to be built on the promise of subsidies, the length of time allows cost reduction to occur. At USD 80, the lower end of the range noted by the authors, the cost is in an acceptable range.
The use of basalt to fix CO2 is part of the genre referred to as mineralization of CO2. Divalent species, principally Ca and Mg, are present in rocks. In low pH conditions they react with CO2 to produce a carbonate (or bicarbonate). Olivine, another common mineral, often found in association with basalt, is a solid solution of Mg2SiO4 and Fe2SiO4. For the Mg end-member, the reaction products are MgCO3 and SiO2. For CO2 sequestration purposes this may be accomplished in situ or ex situ. The term sequestration most properly includes both capture and storage, but is often used for just the second step, and that is how we will use it here.
A promising approach for in situ storage of CO2 is injection into oceanic basalt deposits. Basalt is formed when magma from a volcanic eruption cools rapidly. When it cools slowly, it produces rocks such as granite, with large crystals and high hardness, more suitable for structural applications. Basalt, on the other hand, is fine grained and weathers easily. This is good for reactivity. In oceanic deposits this is even more the case when rapid cooling in water results in "pillows", which partially disintegrate to become permeable. They are often overlaid with later placements of magma sheets. These impermeable layers act as barriers to injected CO2 escaping, affording time for mineralization. The mineralization is further accelerated if the injected CO2 is in the supercritical state (achieved above 31 °C and 1070 psi). All fluids in this state have properties of both gas and liquid. Here the supercritical CO2 permeates the rock as if it were a gas and reacts with the minerals as if it were a liquid.
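A minimal check of that supercritical condition, using the critical point quoted above. The hydrostatic gradient used for the depth example (roughly 1.45 psi per metre of water column) is my assumption, to show why deep oceanic injection readily meets the pressure requirement.

```python
# Is injected CO2 supercritical? Critical point per the text:
# ~31 C and ~1070 psi. Depth example assumes (my figure) seawater
# adds roughly 1.45 psi of hydrostatic pressure per metre.

CO2_CRITICAL_T_C = 31.0
CO2_CRITICAL_P_PSI = 1070.0

def is_supercritical(temp_c, pressure_psi):
    """True when both temperature and pressure exceed the critical point."""
    return temp_c > CO2_CRITICAL_T_C and pressure_psi > CO2_CRITICAL_P_PSI

# Hydrostatic pressure alone exceeds 1070 psi below roughly 740 m of water:
depth_m = CO2_CRITICAL_P_PSI / 1.45
print(f"~{depth_m:.0f} m of water column reaches the critical pressure")
print(is_supercritical(35, 1500))
```

The temperature condition is the harder one at cold ocean bottoms, which is one reason injection targets warm, buried basalt rather than the seafloor surface.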
Ex situ fixing of CO2 follows the same chemistry as in situ, at least in that the product is a carbonate. The raw material can be tailored to the need if cost permits. The CO2 capture cost is the same in either case. However, an ex situ process has many advantages over an in situ one. The reaction kinetics can be improved using standard process engineering methods such as fluidized beds. Catalysis could also be employed. The products could also be expected to have value, such as in substituting for concrete ingredients. But, as in the case of fly ash from coal combustion, also a simple additive to concrete, the realization of that value can be elusive. Niche uses can be found, but monetization on the massive scale required to make a dent in climate change will require concerted effort.
The cost of production will still dominate the economics and the largest component of that is the acquisition of CO2 from the industrial combustion process or air. Air capture is a relatively recent endeavor and targets production cost of USD 100 per tonne CO2, at which point it becomes extremely interesting. The principal allure of this method is that it can be practiced anywhere. If located near a “sink”, the utilization spot, transport costs and logistics are eliminated. This underlines a key aspect of ex situ sequestration, the availability and cost of CO2 in the form needed.
The original premise for this discussion, mineralization of CO2 from the air, skips the CO2 acquisition constraint. But the focus shifts to the procurement of massive quantities of rock and crushing it into small particles. Two pieces of good news. One is that basalt is possibly the most abundant rock on earth, although a lot of it is at ocean bottoms. The other is that basalt crushes relatively easily, especially if weathered (contrasted with its country cousin granite). But the elephant in that room is that procurement still involves open pit mining, anathema to environmental groups. In recognition of this, the authors of the cited Nature paper encourage a study of the availability of tailings from mining operations as basalt substitutes for oxides of divalent ions. They opine that there are vast hoards of such tailings from mining operations over the years. They also suggest the use of Ca-rich slags from iron making. These are oxides of Ca and Si in the main, with some oxides of Al. Lest this idea be extrapolated to slags from other smelting operations, a caution: the slags from some processes could contain heavy metals and other undesirables such as sulfur. On the plus side of that ledger, the processing of certain nickel ores entails a beneficiation step that results in a fine-grained discard rich in Mg silicates, which ought to be very reactive with atmospheric CO2.
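To get a sense of the rock tonnages involved, the stoichiometric ceiling on CO2 fixed per tonne of Mg silicate can be computed. The sketch below uses forsterite (Mg2SiO4, the Mg end-member of olivine) and the idealized reaction Mg2SiO4 + 2 CO2 → 2 MgCO3 + SiO2; real tailings and weathered basalt fix less, so this is an upper bound, not a field prediction.

```python
# Stoichiometric ceiling on CO2 fixed per tonne of Mg silicate,
# using forsterite (Mg2SiO4): Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2.
# Idealized upper bound; real rocks and tailings fix less.

M_MG, M_SI, M_O, M_C = 24.305, 28.086, 15.999, 12.011  # g/mol

m_forsterite = 2 * M_MG + M_SI + 4 * M_O   # ~140.7 g/mol
m_co2 = M_C + 2 * M_O                      # ~44.0 g/mol

tonnes_co2_per_tonne_rock = 2 * m_co2 / m_forsterite
print(f"~{tonnes_co2_per_tonne_rock:.2f} t CO2 per t forsterite, at most")
```

At roughly 0.6 tonnes of CO2 per tonne of rock, fixing a gigatonne of CO2 would require mining, crushing and spreading well over a gigatonne of silicate, which is why the paper's interest in already-crushed mine tailings matters.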
While the use of industrial waste for sequestering CO2 is technically accurate, acquisition and use of alkaline earth rich oxides will have hurdles of location, ownership, and acceptability to farmers, to name just a few. I am also reminded of the fact that when “waste” products with no or negative value create value for someone else, the price will often be revised, upwards. But the method in the cited paper certainly is a useful addition to the arsenal of measures to mitigate global warming, provided field operations verify the predictions on rates of reaction. This battle will only be won with many different arrows in the quiver.
August 9, 2020
August 2, 2020
The two principal sources of renewable energy share a serious shortcoming. As has been discussed in these pages over the years, wind and solar do not generate electricity when the wind does not blow and the sun does not shine. Germany gets 40% of its power from renewable sources. But on certain days that percentage has jumped to 75%, and on other days it has plummeted to 15%. The (literally) rainy days had electricity augmented from a variety of sources, including batteries. But the days of surplus sometimes required idling of the generation capacity.
Great advances have been made in lowering the cost per unit in both wind and solar. But the need to level the load has never been more important because those very advances have increased the footprint. Some have rushed to use natural gas generators to fill the intermittency gap. This has caused consternation, with some positing the notion that renewables perpetuate fossil fuels because of this dependency. This concern ignores the fact that storage is being investigated at many levels.
Electrochemical storage is the only reasonable option for devices that are carried or move. In many cases, the options are even more limited to light weight batteries. But stationary applications have other options. One that has been in use, where feasible, is pumped water storage. Excess electricity is used to pump water to a high storage site, such as at a dam. When needed, it flows back down to generate electricity. Danish windmills utilize Norwegian hydroelectric sites for this purpose.
The flavor of the day is hydrogen. Excess electricity is used to electrolyze water, producing hydrogen and benign oxygen. The hydrogen may be stored on location and used to power turbines to produce electricity when needed. In this it serves a purpose similar to that of natural gas in backup generators. As in the case of natural gas, the relatively low duty cycle stretches the payback period of the capital equipment. Efforts are under way to reduce capital and operating costs. In the former area, expensive platinum electrodes are being replaced with base metals carrying novel coatings. Operating efficiency improvements are also being targeted. By its very nature, the method is conducive to small scale distribution. Electrolysis to produce hydrogen may be here to stay.
Produced hydrogen could find applications other than for generating electricity. An interesting variant has been piloted for over a year in Cappelle-la-Grande, a town in northern France, by the energy firm Engie, where the hydrogen is blended into existing natural gas pipelines. Hydrogen is a very small molecule and initially there were concerns regarding leakage. But a 25% blend was found to be retained and did not materially corrode the pipes. Furthermore, household burners were found to operate efficiently with that mix. In fact, the mix produced a cleaner burn. Most European countries permit the blend. Some are considering repurposing natural gas lines to exclusively distribute hydrogen.
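What a 25% (by volume) hydrogen blend does to the energy delivered per cubic metre, and to CO2 per unit of energy, can be estimated from round-number heating values. The heating values below are my assumptions (roughly 10.8 MJ/m3 for hydrogen and 35.8 MJ/m3 for methane, lower heating values), not figures from the pilot.

```python
# Effect of a hydrogen blend on pipeline gas energy content.
# Volumetric lower heating values are round-number assumptions:
# H2 ~10.8 MJ/m3, CH4 ~35.8 MJ/m3 at standard conditions.

LHV_H2, LHV_CH4 = 10.8, 35.8  # MJ per standard cubic metre

def blend_lhv(h2_vol_frac):
    """Volumetric heating value of an H2/CH4 blend."""
    return h2_vol_frac * LHV_H2 + (1 - h2_vol_frac) * LHV_CH4

lhv25 = blend_lhv(0.25)
print(f"25% H2 blend: {lhv25:.1f} MJ/m3, "
      f"{1 - lhv25 / LHV_CH4:.0%} less energy per m3")

# Combustion CO2 per unit energy (only the methane fraction emits carbon):
co2_cut = 1 - (0.75 / lhv25) / (1 / LHV_CH4)
print(f"CO2 per MJ delivered drops ~{co2_cut:.0%}")
```

The asymmetry is worth noting: a 25% blend by volume cuts the energy per cubic metre by about a sixth but cuts CO2 per unit of energy by only about a tenth, because hydrogen carries much less energy per cubic metre than methane.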
Hydrogen is an important reagent used in all refineries. Hydrogenation of edible oils is another application. But the workhorse application for this source may well be the admixture into natural gas lines for domestic and industrial use. Because of the low volumetric energy density of hydrogen, storage of hydrogen in the form of ammonia is also being considered. The liquid is easily stored and transported under conditions similar to those for propane. The conversion to ammonia, using nitrogen from air, is straightforward. Utilization can be directly as a fuel in an internal combustion engine, or by catalytic dissociation back to hydrogen for use in that form in a fuel cell for an electric vehicle or any other purpose. Research is under way for improvements in this space, including ammonia production at lower temperatures.
Pipeline transport of hydrogen is feasible but expensive, especially for small volumes. Ammonia, on the other hand, can be transported in pipelines at a cost of about USD 0.20 per kg hydrogen per 1000 miles. This is less than 5% of the expected cost to produce renewable hydrogen at solar and wind installations. The US currently has nearly 3000 miles of ammonia pipelines. Ammonia is a leading candidate for renewable hydrogen storage and distribution.
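A quick sanity check on those numbers: the implied production-cost floor below is inferred from the "less than 5%" figure, not stated directly in the text.

```python
# Sanity check on the pipeline-cost claim. The transport cost is from the
# text; the minimum production cost is inferred from the "<5%" figure.

transport_cost = 0.20       # USD per kg of hydrogen per 1000 miles, via ammonia
transport_share_max = 0.05  # "less than 5%" of renewable hydrogen production cost

# For USD 0.20 to be under 5% of the production cost, production must exceed:
implied_min_production = transport_cost / transport_share_max
print(implied_min_production)  # 4.0 -> implied floor, USD per kg of hydrogen
```

In other words, the quoted transport figure presumes renewable hydrogen costing upwards of USD 4 per kg at the solar or wind installation, which makes the pipeline leg a small fraction of the delivered cost.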
The main takeaway from this discussion is that renewable energy requires storage, and that storage in fluid form is likely to lead the way. An alternative to using the stored fluid to generate electricity is to use it for a different purpose. That way of monetizing electricity from periods of excess supply would, however, require the supply troughs to be filled from another grid source. Hydrogen and ammonia will be important players in the renewable energy world. Alas, silver bullets went out with the Lone Ranger.
August 2, 2020
June 22, 2020 § 3 Comments
In a New York Times story, Taylor Branch, a historian of the civil rights era, is quoted as saying: “A movement is different from a demonstration. It’s not automatic – it’s the opposite of automatic that a demonstration in the street is going to lead to a movement that engages enough people, and has a clear enough goal that it has a chance to become institutionalized, like the Voting Rights Act”.
He was discussing a social movement that challenged the orthodoxy. Demonstrations, even ones with a coherent and unified message were unlikely to persuade a majority. But, once the message took hold, it would be the new orthodoxy.
This has striking similarities with the concept of disruptive technology, a term coined by Clayton Christensen exactly a quarter century ago. Considerable detail is to be found in his very readable book The Innovator’s Dilemma, but all you really need to know is in the (free!) 1995 Harvard Business Review paper by Bower and Christensen. A disruptive technology is one that is initially rejected by industry as a poor fit, or as too unreliable and costly. After success in niche applications, the appeal broadens. Eventually, it becomes the norm and usually completely displaces what preceded it. It is then the new orthodoxy in that technical space. Hence the term “disruptive”.
I lived through the development of one such technology in the oil and gas space. My company, Sperry Sun, had been a leader in developing the technology of horizontal drilling and the enabling technology of measurement while drilling. Early horizontal wells cost 2.7 times as much as conventional wells of the same length. But they multiplied production in the Austin Chalk by intersecting oil-bearing vertical fractures. That business segment put up with the teething pains for the value created. A U.S. Department of Energy survey showed that within a few years horizontal wells cost just 17% more and delivered 2 to 7 times the production. Eventually, it was the key enabler for the development of heavy oil in Canada and for the shale oil and gas boom in the US. A Shell Oil Company executive once confided in me that in the early going one needed permission to plan a horizontal well, but by the late 1990s, one needed permission not to use a horizontal well. That pretty much defines a disruptive technology: looked at askance at first, becoming the norm afterwards.
Being a techy, I may be ill qualified to opine on the matters that follow. But I propose to do so just the same! Caveat emptor! Ideas that upset and transform societal norms appear to have underpinnings similar to those of disruptive technologies. Such ideas initially appeal to just a minority of the populace. The appeal may be broader, but only a minority may be undaunted by the enormity of the task of gaining wide acceptance. In the civil rights era, it took decades for that to happen. Today, communication technology, and especially its social media variant, has changed the game. This may in part explain the rapid spread of the movement for reform in policing. Or it could be that the incendiary pile had grown with a succession of events and just needed the spark.
The descriptor Defund the Police is unfortunately worded if broad acceptance is the objective. A better one would possibly be Reform Policing. My take on what it means, or at least what I think it ought to mean, is for policing to emulate medicine in addressing both the symptom and the cause. Eliminating police departments is as absurd as outlawing doctors. But more emphasis ought to be placed on addressing the underlying causes for crime. This certainly already happens to different degrees in many jurisdictions but is clearly not the norm. Funding that ordinarily would simply go to enforcement, ought to be diverted in part to ameliorating the causes of crime.
The other layer, that of codifying behavior by individual police-persons to be more humane, while still protecting themselves, is certainly needed as well. Leadership is coming from many quarters, including police officers. Houston’s police chief Art Acevedo was recently quoted as saying, “It’s not about dominating, it’s about winning hearts and minds,” in a clear reference to one of President Trump’s comments on the subject.
Disruptive technology is one of the best-known terms in the lexicon of innovation. Here, disruption does not carry the pejorative connotation it has in plain English. Nor should it in the term Disruptive Ideas.
June 22, 2020
June 9, 2020 § Leave a comment
When was the last time you saw a headline such as this? Probably never. While it indulges in a modicum of hyperbole, the headline, as you will see, is not too much of a reach. The environment in question is household air pollution (HAP), the direct cause of an estimated 3.6 million deaths annually. Substantial radiative forcing is also expected from the elemental carbon emissions, with the HAP source estimated to provide 20% of the loading worldwide, and a much higher proportion in Asia. Radiative forcing has a direct impact on climate change.
Three billion people use biomass as a cooking fuel, almost all in low- and middle-income countries (LMICs). The biomass is largely whatever wood is convenient but can also be animal waste (dung). In countries such as Burkina Faso in Sub-Saharan Africa, 95% of households use biomass for cooking. Improving cookstoves has been a pursuit for decades. While improving the efficiency of the stove does reduce the emissions per cooking episode, the overall reduction is not sufficient for a significantly favorable mortality outcome. For a clue as to why, consider that the PM2.5 concentration can be as high as 600 μg/m3, compared to the WHO guideline of 35 μg/m3 and the US standard of 12 μg/m3. Some of the “improved” stoves reduce emissions by 60%. That does not cut it. To make matters worse, the exposure/impact curve is supralinear, meaning it is steep at very low exposures but flattens at high ones. This means that the gains are relatively small for exposure reductions from high to moderate levels.
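A rough sketch of the arithmetic may help. The exposure numbers are from the text; the square-root response curve below is purely an illustrative stand-in for a supralinear exposure-response shape, not the curve from the cited research:

```python
# Back-of-the-envelope sketch of why a 60% stove-emission cut falls short.
# Exposure numbers are from the text; the square-root curve is only an
# illustrative stand-in for a supralinear exposure-response shape.
import math

TRADITIONAL = 600.0   # PM2.5, ug/m3, worst-case biomass cooking
WHO_GUIDELINE = 35.0  # ug/m3
IMPROVED = TRADITIONAL * (1 - 0.60)  # "improved" stove: 60% reduction

print(IMPROVED)  # 240.0 -> still roughly 7x the WHO guideline

def relative_impact(exposure):
    # Illustrative supralinear curve: steep at low exposure, flat at high.
    return math.sqrt(exposure)

# The big 360-unit cut from 600 to 240 buys less health benefit than the
# smaller 205-unit cut from 240 down to the WHO guideline of 35:
high_to_moderate = relative_impact(600.0) - relative_impact(240.0)
moderate_to_low = relative_impact(240.0) - relative_impact(35.0)
print(high_to_moderate < moderate_to_low)  # True
```

Under any flattening curve of this general shape, trimming exposure at the high end moves the health needle far less than getting all the way down near the guideline, which is why fuel substitution rather than stove improvement became the focus.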
This relatively recent realization (a 2014 publication) has led to interventions involving complete substitution of the biomass fuel. The leading candidates are alcohols, liquefied petroleum gas (LPG) and electricity. Electricity is not a good idea. Target villages lack electricity for the basic necessities of lighting, fans and cell phone charging, with the occasional refrigerator. Diverting what little is available to cook stoves, when other alternatives exist, is a bad idea. Ethanol is too expensive and in many countries the production would compete with a food use. LPG has become the favored substitute in many countries.
LPG is a mixture of propane, butane, and some larger molecules. It is derived from oil and gas production. It is delivered in pressurized cylinders, typically holding 14.2 kg fuel. The particulate emissions from an LPG stove have been observed to be close to the WHO guideline of 35 μg/m3, compared to up to 600 μg/m3 with traditional fuel and stoves. Many countries have doubled down on this fuel substitution. India has programs for distribution of stoves and substantial subsidies on the fuel.
But the health benefits from this substitution had not been quantified in randomized control trials until recently. Many such trials are ongoing. The largest is the USD 30 million Household Air Pollution Intervention Network (HAPIN) trial in 3200 households in India, Rwanda, Guatemala, and Peru. In a recent advance in the state of the art of PM2.5 monitoring, a small wearable device, RTI’s Enhanced Children’s MicroPEM™, is utilized. It tracks the personal exposure of pregnant women, other adult women, and children under 1 year of age. This cohort is the most affected by HAP from cookstoves. Carbon monoxide is also measured in the cooking area. Studies have shown that the total PM exposure captured by the wearable monitor is usually less than would be measured in the ambience of the home. In any case, the monitor comes closest to determining what the person actually breathes and will likely become the standard of practice in trials. The filters in the monitors are archived, and the collected PM may be used in in vitro studies to assess the toxicity of the particles.
Even if the health benefits of LPG substitution for biomass are established, issues remain. Almost all the affected LMICs are net importers of LPG. The price is pegged to the marginal kg, which is basically the world price. Propane pricing may be used as a proxy for LPG. Natural gas liquids, including propane, generally track the oil price. Short-term volatility in the price of oil has become a way of life since about 2014. In the US, propane was priced as low as USD 3.63 per million BTU in January 2016, while a scant two years prior it was at USD 15. These fluctuations will be very hard on the poor in villages, even if the respective governments act to ameliorate them with subsidies. The practice of “stacking”, which comprises switching back to wood, at least in part, will vitiate the gains.
LPG, undeniably carrying the label of a fossil fuel, may well be the means for improving air quality for the poorest and for addressing the single biggest public health problem in the world today. Labels can be deceiving.
May 25, 2020 § Leave a comment
London Underground railway platforms have warnings to “mind the gap”. In those cases, the meaning is literal: curvature of the platform often creates a variable gap between the concrete and the first step on the train. In any Presidential election year news and views are sometimes difficult to separate. For this discussion I am ignoring the obvious disinformation promulgated by conspiracy theorists and the like. The gap we are minding is more along the lines of spin.
Spin has always been a permissible technique in society. It consists of facts presented in a light favorable to a point of view or cause. This year the main issue that will get spun is the economy. Had the pandemic not intruded, the argument would have been about the causes of the Dow at 29,000: was the present administration handed a vibrant economy on a plate by Obama, or was it Trumpian wizardry? Well, Covid-19 took care of that. Now, it will be about how the pandemic was managed to keep deaths to a minimum, while also minimizing damage to the economy caused by the shutdowns. On one end of the spectrum you have Australia and New Zealand, which took early, decisive action on distancing and now have remarkably low mortality. On the other end is Sweden, which conducted a massive experiment by relying almost solely on herd immunity. Every state in the Union is relaxing distancing differently. While only time will tell, the spin business is in high gear.
In the reporting of improvement in the economy, mind the gap in how percentages are used. Mind the denominator. Consider a commodity, say oil at USD 100 a barrel. A 50% drop (as happened in late 2015) would take it down to USD 50. Then a 50% gain at that point would take it to USD 75, still USD 25 short of where it started. Each a 50% change, but the denominator intrudes. A recent headline stated that air travel had surged 123% in just the last month. The comparison was with a period that had seen a drop of 96%. The 123% gain brought it up to a figure that was still 91% short. This is not to say that the increase was not welcome and noticeable; it is just that some reporting leaves that detail unsaid.
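The denominator effect is easy to verify; the numbers below are the ones discussed above:

```python
# Percentage changes are not symmetric, because the denominator shifts
# with each change. Values are the ones discussed in the text.

def apply_pct(value, pct):
    """Apply a percentage change (e.g. -50 for a 50% drop, +50 for a gain)."""
    return value * (1 + pct / 100)

# Oil: a 50% drop followed by a 50% gain does not get back to par.
oil = apply_pct(100, -50)        # USD 50
oil = apply_pct(oil, +50)        # USD 75, still USD 25 short
print(oil)                        # 75.0

# Air travel: a 96% drop followed by a 123% "surge".
travel = apply_pct(100, -96)     # index falls to 4
travel = apply_pct(travel, +123) # index recovers to ~8.9, still ~91% short
print(round(travel, 2))           # 8.92
```

The headline percentage is accurate in each case; it is the unstated baseline that does the misleading.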
An interesting variant on dicey comparisons is a story in the NY Times on the price of oil. It reports that the price was USD 31.82 on May 18, 2020. Then it goes on to say, “That may seem like a minor miracle given that the price is more than $60 above where it was about a month ago”. While not inaccurate, the fact is that the drop to negative USD 37 a month earlier was anomalous, lasting a single day due to an oddity in trader behavior. The comment was in the context of prices at which oil production could be profitable. However, producers never saw the negative price, just the traders. The truer baseline at that time was the price a day later, about USD 17. Still a huge jump to USD 31.82, but not USD 60, and not in the miracle range, minor or otherwise.
Currently, possibly the biggest gap is in matters relating to ameliorating or avoiding Covid-19. Investigative papers are placed online before they have been peer reviewed. The intent is to get them out for use by other investigators, but an eager press does not always underline that fact. Vaccines get a spotlight. While understandable, in view of the promise of such things, the “we are nearly there” feeling underlies many of the stories. They even move markets, as did the recent success of Moderna’s vaccine in a very limited trial. Dueling well-intentioned experts add to the gap, the minding of which proves daunting for the general public.
Now, this last is for the many of us who are in baseball withdrawal. In the parlance, throwing a curve is not the same as putting spin. In baseball, a curve may have some spin, but so can a fast ball. And fielders must be ever vigilant in minding the gap. Else, a single could turn into a double or worse. Now I return you to regular programming.
May 23, 2020