November 18, 2015 § 1 Comment

The VW cheating episode has put the spotlight on whether NOx control is economically viable. If the answer is no for a portion of the market, then the diesel market will indeed be limited.


Rudolf Diesel

Rudolf Diesel invented his engine in the 1890s. The high compression, combined with the intrinsically higher energy content of diesel fuel, afforded these engines a mileage advantage of about 30% over gasoline engines of like size. But they suffer from emitting small particulate matter (PM) and NOx, both implicated in lung and heart disease. PM is relatively easy to capture in filters and is generally kept low by the more complete combustion afforded by running a “lean” mixture. We previously discussed the Lean NOx Trap (LNT) method employed by VW. It works, but at the expense of engine performance. Small car engines (2 L types, for example) can ill afford the loss of torque. More particularly, it erodes the fuel economy advantage of diesels. These cars are bought largely for the fuel economy, so impairment in that area is a potential showstopper.

Before concluding that small cars are off limits for diesel, let us study the commercially available alternative to the LNT. Selective Catalytic Reduction (SCR) involves injecting a urea/water mixture into the exhaust stream. It vaporizes and breaks down into ammonia and carbon dioxide. The ammonia, in the presence of oxygen, reacts with the NOx on a special catalyst to produce innocuous nitrogen and water. This works, and that is the good news. The not-so-good news is that in small cars the equipment required is a tight fit. It needs a urea tank, a heater, a pump and a dosing system. These fit fine in trucks and larger cars. Furthermore, it is reported that VW decided to go the way they did because SCR carried a net cost increase of $50 (presumably over the LNT alternative). Yes, $50. Consider now their mind-boggling losses due to the failure of the gambit. There is also, of course, the nuisance to the driver of periodically refilling the urea tank, but that can be done at the gas station.
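
For the chemically inclined, the simplified chemistry (the real exhaust stream also involves NO2, but this captures the essence) is the breakdown of urea followed by the “standard” SCR reaction:

$$\mathrm{CO(NH_2)_2 + H_2O \rightarrow 2\,NH_3 + CO_2}$$

$$\mathrm{4\,NH_3 + 4\,NO + O_2 \rightarrow 4\,N_2 + 6\,H_2O}$$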

More than a parlor game is the question of who knew what at VW and how high up the decision went. The CEO at the time, Martin Winterkorn, was an engineer by training (yes, engineers; not all CEOs are Harvard MBAs, at least not in Germany) and famously a detail guy. But more to the point, consider that the original decision to go with the LNT must have recognized the performance loss during the NOx adsorbent regeneration step. Someone very high up would have asked about the duty cycle and hence the fraction of time spent in the poor-fuel-economy, lowered-torque mode. An engineering leader would know to ask that question. So, I am afraid it does not look good for blaming minions on this one. Few things matter more to the leader of such a company than performance and mileage, especially one bent on world leadership in cars sold.

An interesting alternative was suggested by Dan Cohn at MIT nearly a decade ago. If methanol is injected directly into the cylinder, the latent heat of evaporation cools the chamber down. Engines these days are routinely monitored for chamber temperature. The best time for such an injection would be when the chamber gets very hot. The cooling effect would allow even greater compression ratios than are current. In Cohn’s modeling and prototype testing, a relatively small engine delivers the power of a larger one. But here we would be looking primarily for the cooling effect, not higher compression and more power, although that could be in play. Lower combustion temperatures result in less NOx production. Simple as that. The lower NOx volumes could then be captured more simply.

Cohn’s idea requires significant modifications to the engine, although less so now than when he introduced it. Multiple injection schemes are common now. Mazda uses one in the SkyActiv gasoline engine and obtains good performance from the evaporative cooling of just the gasoline (they run a compression ratio of 13 on regular gasoline). Methanol would be even better because its latent heat of evaporation is nearly three times that of gasoline. It would require a separate tank and so forth, but it could be done.
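
To see why the latent heat matters, here is a back-of-envelope estimate of charge cooling. It is a sketch with assumed round-number property values and an invented 20 mg injection, not a combustion model:

# Back-of-envelope charge cooling estimate; property values are approximate.
CP_AIR = 1.005  # kJ/(kg*K), specific heat of air
LATENT_HEAT = {"gasoline": 350.0, "ethanol": 900.0, "methanol": 1100.0}  # kJ/kg, rough

def charge_cooling_K(fuel, fuel_mass_g, trapped_air_g):
    """Upper-bound temperature drop if the fuel evaporates entirely at the
    expense of the trapped air's sensible heat."""
    heat_absorbed_kj = LATENT_HEAT[fuel] * fuel_mass_g / 1000.0
    return heat_absorbed_kj / (CP_AIR * trapped_air_g / 1000.0)

for fuel in ("gasoline", "ethanol", "methanol"):
    # Example: 20 mg of fuel evaporating into ~500 mg of trapped air per cylinder
    print(fuel, round(charge_cooling_K(fuel, 0.020, 0.5), 1), "K")

With these rough numbers, methanol cools the charge roughly three times as much as the same mass of gasoline, which is the whole point.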

What then is the future of diesel? Perhaps small cars cannot carry the cost burden of emissions control. Some company, though, should take a crack at Cohn’s idea or some variant thereof. And this conundrum would never have surfaced but for a professor at the relatively obscure West Virginia University. A Goliath of the auto industry was taken down by such a David. Which is just too bad; it should never have happened. The world has German engineering to thank for a lot. This blemish reflects not on German engineering excellence but on the avarice of a few in charge.

Vikram Rao


November 16, 2015 § 2 Comments

NOx (oxides of nitrogen) are a Front of the Box pollutant. The effects are short term and on health, in contrast to CO2, whose effects are longer term, on targets such as severe weather and drought, leaving some room for doubt about causality. Consequently, NOx emissions from devices such as automobiles could be expected to be a public concern. Yet much of the attention from the VW emissions cheating episode is directed at the behavior and not the attendant pollution. In fact the reporting has shown that much of the industry has cheated in one way or another. In Europe the emissions testing is done by the companies with no regulatory oversight. The use of non-standard vehicles during the tests is a common practice to which everybody turns a blind eye. For mileage testing, cars are routinely stripped of drag-inducing components such as wing mirrors. The real-world kilometers per liter are in the vicinity of 35% worse than in these tests.

idling car

VW managed to do something that was shocking even against this backdrop of routine avoidance of emissions regulations. Interestingly, evidence is piling up to indicate that the “defeat devices” they used (more on these below) may not have explicitly violated any European strictures. No such doubt exists in the US. The issues I will address below are (1) what technology for NOx reduction they employed, and (2) how it was circumvented, and why.

When a diesel engine is run “lean”, it performs best, especially with respect to fuel economy. This condition is defined as air somewhat in excess of the stoichiometric amount required to combust the fuel. Less unburned fuel is also good for emissions. However, the excess air causes more production of oxides of nitrogen, NOx. This NOx must be reduced in the exhaust gas stream.
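
For readers who want a number attached to “lean”: the air-to-fuel ratio is commonly expressed relative to its stoichiometric value as

$$\lambda = \frac{(A/F)_{\mathrm{actual}}}{(A/F)_{\mathrm{stoich}}}$$

where λ > 1 means lean. Diesels typically operate well above 1, especially at light load.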

VW used a technology known as the Lean NOx Trap (LNT). There are two steps (there is a preliminary step which we will skip here for simplicity). In the first, NOx is captured on a coating that adsorbs it. Adsorption is a surface phenomenon that is easily reversed. When the coating is deemed full, the second step kicks in. This involves removing the NOx to regenerate the coating’s activity. This is the key step that got VW into trouble. The NOx is reduced to nitrogen (with CO2 and water as byproducts) on a special catalyst by reacting it with a mixture of hydrocarbons, hydrogen and CO. This mixture is created by switching the engine to a “rich” burn mode, away from the lean. The reductant is fuel from the cylinders that is only partially combusted. Not surprisingly, during that time engine performance drops, for reasons noted above. The gas mileage drops, as does the torque.
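
A highly simplified sketch of the LNT duty cycle, with invented thresholds for illustration only (a real engine controller uses calibrated models and sensors), might look like this:

# Simplified Lean NOx Trap duty cycle; thresholds are invented for illustration.
TRAP_CAPACITY = 1.0       # arbitrary units of stored NOx
REGEN_THRESHOLD = 0.9     # regenerate when the trap is ~90% full
REGEN_DURATION_S = 5.0    # seconds of rich operation per regeneration

def lnt_step(trap_level, regen_timer, nox_in_rate, dt):
    """Advance the trap state by one time step; return new state and mode."""
    if regen_timer > 0.0:
        # Rich mode: partially combusted fuel reduces the stored NOx,
        # at the cost of fuel economy and torque.
        return max(0.0, trap_level - 0.2 * dt), max(0.0, regen_timer - dt), "rich (regenerating)"
    trap_level += nox_in_rate * dt            # lean mode: NOx accumulates on the trap
    if trap_level >= REGEN_THRESHOLD * TRAP_CAPACITY:
        regen_timer = REGEN_DURATION_S        # schedule a regeneration
    return trap_level, regen_timer, "lean (storing NOx)"

# Example: starting from an empty trap, in lean mode with some NOx inflow
print(lnt_step(0.0, 0.0, nox_in_rate=0.05, dt=1.0))

The fraction of time spent in the rich branch is what erodes the mileage and torque.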

VW was attempting to penetrate the US market with diesels. This was part of an overall goal of being the top seller worldwide. The US consumer had been resistant to diesels compared to the Europeans. Also, the NOx regulations in the US were stricter. In the US the regulators do spot checks. It appears that the decision was made to “defeat” the emissions control during normal road operation. This was achieved through a reasonably sophisticated algorithm which detected that the vehicle was in a test mode. When in this mode the engine was allowed to run rich for the period needed to perform the function of the LNT. But importantly, in normal driving the vehicle ran lean all the time, giving the needed performance in miles per gallon and torque. In other words it was peppy (high engine torque), got high mileage, and appeared great on emissions. Keep in mind that all diesels are better on mileage than gasoline engines. In part this is because the fuel has about 10% more energy content, and in part because diesel engines run at much higher compression ratios. But for decades they have had a reputation for being smoky and smelly. This is no longer the case. Particulate filters take away the smoky aspect. The only remaining concern had been the NOx emissions. VW claimed to have met those limits while delivering a superior driving experience. There is no dispute that they cheated. The key point is not so much the cheating on the testing, but that buyers expecting to get a low-emissions car were not getting one.
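
Public reporting describes the test-detection logic only in general terms; the sketch below is purely my illustration of the kind of heuristic involved (the signals and thresholds are invented, not VW’s):

# Illustrative test-cycle detection heuristic -- not VW's actual logic.
# On a dynamometer the drive wheels turn while the steering wheel does not,
# and the speed follows a published test schedule very closely.
def looks_like_dyno_test(speed_kph, steering_angle_deg, expected_trace_kph):
    """Return True if the current driving pattern resembles a certification test."""
    steering_fixed = abs(steering_angle_deg) < 1.0
    moving = speed_kph > 5.0
    on_trace = abs(speed_kph - expected_trace_kph) < 3.0
    return steering_fixed and moving and on_trace

print(looks_like_dyno_test(31.0, 0.0, 30.0))  # True: on the trace, wheel not turning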

What were the alternatives to LNT available to them, and why did they choose not to use those? Are the US rules too stringent to be met by small, low-cost cars? Is diesel simply not viable for these cars? Will electric cars and hybrids be advantaged? These are all topics for the next post.

Vikram Rao



June 2, 2015 § 2 Comments

A century or so ago Tesla and Westinghouse beat Edison in the war over electricity transmission, and AC became our way of life. In an odd modern twist, the most famous electric car is named after Tesla, but it runs on DC from its battery. Most electronics run on DC, but AC continues as the transmission medium, dooming us to the ubiquitous “brick” converting to DC for our phone chargers, computers and so on. The DC worm is turning. In some measure this is due to the fact that the output of solar panels is DC, as is that of back-up batteries. Organizations such as the EMerge Alliance are making some inroads in commercial buildings with a proposed 24 V wiring standard. But curiously, the lead for the resurgence of DC usage in homes may well come from India.

AC DC image (with apologies to the Australian rock group)

Power shortages are a way of life in most developing nations. Consumers who can afford it have back-up devices, which are inherently inefficient. The rest simply do without, for several hours at a time, often every day. Most governments respond with more power plants, which in many countries are coal-fired, with attendant effects on public health and climate change. The Indian Institute of Technology, Madras (IITM), has initiated the Uninterrupted DC (UDC) program. This is an innovative scheme to provide continuous power even during intervals of shortage. It is accomplished through some changes in the grid system at the sub-station level, combined with households using energy-efficient DC devices. Widespread acceptance of this concept will require some equipment to be redesigned. But many common devices such as computers and cell phone chargers, as well as energy-efficient LED lights, already operate on DC. DC-powered fans are also available. Large-scale adoption will improve the consumer experience through uninterrupted service and reduced costs, and will have a net positive impact on the environment.

India is poised for rapid economic growth. This growth brings with it an increased requirement for electric power at the industrial and consumer levels. Chronic power shortages, especially at peak intervals, have to be managed. Industrial consumers rely on diesel-powered back-up power, which has its own issues with particulate matter emissions. Private consumers have two choices. Those who can afford it install inverters at home, which charge batteries for use during the outage; the rest do without. AC power is converted to DC for storage and then reverted to AC for running devices. Each of these steps has an associated loss. Furthermore, when the power comes back on, each of these systems charges up for the next time, creating a surge on the grid.

The UDC system is targeted at providing limited service continuously while at the same time reducing overall energy consumption. In essence this is an aspect of Demand Side Management. It fits with the overall direction from the International Energy Agency that any reasonable carbon emission targets in 2050 can only be met by using 50% less energy. India and China are routinely cited as major contributors to atmospheric carbon, due in part to reliance on coal for power. Programs such as UDC could lead the way to mitigating the environmental impact of coal for power. Uninterrupted DC (UDC) technology is so named by its inventors to emphasize that it delivers a useful quantity of power in uninterrupted (24×7) mode, and in DC form, incentivizing the use of efficient DC appliances. Devices powered by DC can be 50% or more efficient than their AC counterparts. The use of such devices, and the systems that enable them, is central to the UDC concept. In low- to moderate-income households the critical devices for continuous operation are lights, fans and either cell phone chargers or LED televisions. A home that typically uses 1 kW of AC peak power could get by on 100 W of DC with somewhat reduced functionality.
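
To make the 100 W figure concrete, here is an illustrative budget; the wattages are assumed round numbers for typical efficient DC appliances, not measurements from the UDC pilots:

# Illustrative 100 W DC household budget; wattages are assumed round numbers.
loads_w = {
    "LED lights (4 x 7 W)": 28,
    "BLDC ceiling fans (2 x 30 W)": 60,
    "cell phone charger": 5,
}
print(sum(loads_w.values()), "W of the 100 W brownout allocation")  # 93 W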

The UDPM is a new device installed at the location of the existing meter and is the heart of the UDC system. It incorporates the existing AC meter and adds the capability to split the incoming power into a 48 V DC line and a conventional 230 V AC line. The house is rewired to accommodate a few low-voltage lines to run the low-voltage devices. In a peak demand period the sub-station will send 10% of the normal electricity to each home instead of turning it off, as is the current practice. The UDPM at the home will utilize this solely for the 48 V service. During the period of the brownout, the sub-station steps the voltage down to 4.2 kV from the normal 11 kV. The UDPM detects this voltage drop, cuts the AC output, and limits the 48 V DC output to, say, 100 W. This robust signaling is another innovative feature of the system. Importantly, during normal operation both home circuits are in use, but the DC output is always limited to the brownout level of 100 W. This allows the low-power DC devices to be used all the time and not solely during brownouts. The consequent lowering of the power bill is a positive for the homeowner, and the continuous use incentivizes the manufacturer.
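
A stylized sketch of the UDPM decision logic, as I read the description above; the names and the detection threshold are mine, not IITM’s:

# Stylized UDPM behavior -- an illustration based on the description above.
NORMAL_FEEDER_KV = 11.0
DC_LIMIT_W = 100.0   # the 48 V DC line is always capped at the brownout level

def udpm_outputs(feeder_voltage_kv):
    """Decide the state of the home-side AC and DC outputs."""
    brownout = feeder_voltage_kv < 0.7 * NORMAL_FEEDER_KV  # catches the 11 -> 4.2 kV step-down
    return {
        "ac_230v_enabled": not brownout,  # conventional AC circuit cut during brownout
        "dc_48v_limit_w": DC_LIMIT_W,     # 48 V service available at all times
    }

print(udpm_outputs(11.0))  # normal: AC on, DC capped at 100 W
print(udpm_outputs(4.2))   # brownout: AC off, DC still delivers up to 100 W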

Fit with Solar Energy: While the initial focus of UDC is moderate-income homeowners, the middle and upper-middle class segments could also be addressed through the addition of solar energy. This source is DC power to begin with and is artificially converted to AC for conventional appliances. That conversion can still be allowed, while a significant portion is used directly in DC mode. Typical solar panel outputs are 12 V, and four in series add up to 48 V. Perhaps this is why IITM chose that particular voltage, not to mention that 48 V has long been the standard DC voltage for telecom equipment worldwide. 12 V is also the output of standard lead-acid storage batteries. Ultimately one could expect even compressors for refrigerators to go the DC route. Air conditioning would be next, but for the drier parts of India evaporative air coolers using water work quite well, and their components are amenable to DC.

Conclusions: UDC is an elegant addition to the Demand Side Management arsenal. It falls largely in the category of technology solutions, although a small element of behavioral change exists. Utilities will undoubtedly welcome this development. Since the changes have to be made at the sub-station level, the conversion could be staged community by community. IITM reports that in the pilots, demand has already spread by word of mouth. An innovative business model may be necessary to pay for the modifications in the homes. Widespread use of this technology is certain to reduce the overall national burden on the power sector. Countries could justifiably claim advances in GHG mitigation.

Vikram Rao


April 30, 2015 § Leave a comment

A recent issue of the Economist points out that the nature of innovation has changed over the last hundred and fifty years. The piece is based on a recent paper in the Journal of the Royal Society Interface, which relies upon data from the US Patent Office. The primary conclusion is that in the early years of patenting, new classes of invention appeared, whereas in the modern day inventions rely on combinations of existing classes. William Shockley’s transistor is cited as creating a new class, while Edison is seen as merely combining the classes represented by heated filaments, electrical supply and a vacuum.
At first blush, Edison’s light bulb being relegated to a mere “combinatorial” invention is surprising, in part because many consider it the quintessential invention; the “aha” moment is, after all, signified in cartoons by a light bulb. So what gives? Basically, the authors believe that inventions creating new classes are due greater honor because they are likely seminal in generating new areas of endeavor. They imply that inventor superstars are those that create new industries, and cite names like Goodyear, Morse and Stephenson as the essential catalysts for the Industrial Revolution. The transistor certainly fits that mold.

Economist article on innovation

The figure shows the growth in total patents issued and how many are new codes (sub-classes) as opposed to combinations of classes. Note that the vertical axis is on a logarithmic scale. Whether an invention falls into a given class (say, batteries) or a sub-class (say, a type of solar energy device), both real examples, is determined by the patent office. So, to some degree, this point is a bit academic in that it relies on a judgment by these folks. Consequently, the creation of a new class need not be the harbinger of great industrial activity. It could simply mean that the examiners could not find a place to slot the invention.
Possibly the greatest invention in the commercial practice of biology is the Polymerase Chain Reaction (PCR), which earned the 1993 Nobel Prize. Some have described it in terms of biology having two epochs: before and after PCR. It allows the amplification of DNA sequences. This giant invention by Kary Mullis (incidentally a North Carolina native) nevertheless built on the work of others, and in particular that of 1968 Nobel Laureate H. Gobind Khorana (in fact DuPont challenged the patent’s validity, in a losing cause). It was an aha moment not unlike that of Watson and Crick in visualizing a double helix in the X-ray diffraction images of DNA produced by Rosalind Franklin (who might have shared the 1962 Nobel had she not died of ovarian cancer in 1958). I seriously doubt it created a new class or sub-class. Yet it transformed a field of endeavor. I think we can conclude that a truly new class of invention may well be seminal, but that this quality can also be achieved in combinatorial fashion.
On the matter of combinations, the patent office is very prescriptive about what constitutes invention. The granting of a patent requires two hurdles to be crossed: novelty and non-obviousness. One may build on the work of others, but these two tests must be met. It is on the second item that things get subjective. The invention must not be obvious to one of “ordinary skill in the art”. IP lawyers make a living splitting that hair. In the last few years the Supreme Court has raised the bar on this test. It has also raised it on what is known as enablement: the claimed invention must be described in sufficient detail that a person of ordinary skill can replicate it. Gone are the days of “paper patents”.
This discussion would not be complete without noting that remarkable innovations may not be inventions in the legal sense. Innovative business models to capitalize on inventions are cases in point.
Vikram Rao


April 23, 2015 § 1 Comment

The price of oil is going to look like saw teeth for some time to come. For simplicity I will stick to using Brent, the benchmark price for most of the world (WTI being the US benchmark). As I have opined before, if the US lifts the ban on the export of our oil, the WTI price will rise to Brent levels. These two benchmarks were in lockstep for years and then began diverging in 2011, when shale oil seriously hit the market. While on the face of it simply a correlative point, I believe it is causal. When condensate exports began to be allowed in 2014, the spread narrowed. I believe that when crude exports are permitted the spread will disappear altogether.

Brent crude price chart 2015

The graph shows Brent pricing up to late February 2015. Of interest is the fact that while the original drop was massive, nearly halving the price, the recent excursion is 25% off a new floor. True demand alteration is hardly ever that sudden. This is more likely a result of a real or perceived change in supply. Around that time Libya, which had fairly suddenly come on stream with 700,000 barrels per day (bpd) in late 2014, dropped to 200,000 bpd following sabotage and ISIS-sourced violence.
Going forward, the reason for price excursions will be real changes in shale oil production, together with speculative beliefs in this regard. I have asserted in previous posts that the US has unwittingly become the swing producer, meaning that when it sneezes world oil catches a cold. The Saudis used to have this status, together with OPEC’s determination of oil supply. Recently Boone Pickens shared a stage with former EPA head Carol Browner and ex-Secretary of Energy Steve Chu, discussing the environmental safety of shale oil and gas production; no doubt the debate was entertaining. On that occasion Pickens stated to the press that the US, not the Saudis, was responsible for the oil price crash. While this is not exactly news, at least to readers of my posts, I cannot recall a causal link being suggested by anyone so vested with expertise. Most of the press coverage has been on why the Saudis did it, rather than whether they did it. Damaging US shale oil production, hurting the economy of Iran, and weakening Syria’s Assad (the last through impoverishing his financier, Russia) were the principal theories advanced. Assuming the validity of Pickens’ assertion, one can conclude that if US production brought the price of oil down, then a reduction in that production would send it back up. One theory of Saudi motivation would thus be supported.
Were the US production in question from conventional resources such as offshore development, one would not expect discontinuities. Conventional production has long latencies: it takes many years to get going, and it is not economically viable to turn it off and on. Shale oil, on the contrary, is relatively easy in this regard. Producing wells can be “shut in” with relative ease, especially gas wells. Since these wells tend to decline rapidly in production, mere maintenance of rates in any given area requires drilling new ones. Simply not drilling new wells has the net effect of reducing US production, which in turn results in a rise in the price of oil. When the price is high enough, operators will begin drilling again. A new well can go on stream as soon as ten days after commencement. That period is even shorter for the more than 3,000 wells reportedly in the “fracklog” bucket, a backlog of wells that have been drilled but await the final fracture stimulation step. Speculators are aware of this. They will drive the price up when storage levels drop and the price has found a bottom of sorts. This cycle, of price increase, then new production depressing the price, followed by a reduction in drilling and production, will repeat. The visual effect on a graph such as the one above is that of a saw tooth pattern.
Predicting the price of oil at any time is an exercise in futility. But my best guess at this time, based on continued weakness in China’s GDP growth, is that Brent pricing will fluctuate in the range of $45 to $60 in the saw tooth pattern mentioned above. Whether OPEC can or will intercede in any way to affect this is not known. But it is unlikely that they will curtail production to raise prices. All that would achieve is more US shale oil production. I think the saw teeth are here for a while.
Vikram Rao

We are on line again

March 31, 2015 § Leave a comment

Not my usual blog. Just letting folks know that the site had been down since March 16 due to a technical glitch. Even now you may get a “certificate warning”. Switch to Firefox or Bing or Explorer from whichever one gives the warning. Don’t ask why; I barely get by understanding these gremlins. Going to Firefox from Explorer did it for me. Sorry for the snafu.



February 24, 2015 § 4 Comments

My 2012 post High Octane has consistently had very high readership to this day. That merits a revisit. It is also a fitting topic on the heels of my last post regarding alternatives to petroleum-based fuels being hurt by low oil prices. The price crash did more damage to that cause than just the extended sojourn to the depths: it raised a specter that has always been in the psyche of oil old-timers, that the price can crash at any time, as it has in the past. In the recent past the dogma had shifted to expecting volatility only north of about $90 per barrel. This was based in large measure on OPEC providing a floor, and on the juggernaut represented by the growing economies of China and India keeping demand pumped up. This last was bolstered by the well-known relationship between per capita GDP and car ownership.

car ownership revised

Then the economic growth rates of China and India faltered. Furthermore, China started making a concerted push to use coal-derived methanol as a gasoline substitute. India is experimenting with ultra-small cars such as the Tata Nano (70 mpg). Indian Prime Minister Modi recently lifted the restraints on genetically modified (GM) oil seeds. Rapeseed oil (a variant goes by the more palatable name canola) is expected to be an early beneficiary. Canola oil, ordinarily used for cooking, can be processed very simply into biodiesel with a process known as transesterification. In fact it is so simple that a garage operation would be quite economical. Also to be noted is that India consumes nearly three times as much diesel as it does gasoline, so oil seed conversion is advantaged.
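
In simplified form, the reaction, run with methanol and a base catalyst such as lye, is:

$$\mathrm{triglyceride} + 3\,\mathrm{CH_3OH} \rightarrow 3\,\mathrm{methyl\ esters\ (biodiesel)} + \mathrm{glycerol}$$

The glycerol byproduct settles out, which is part of what makes a garage-scale operation feasible.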

But my favorite is jatropha, which grows readily in India, much of East Asia, and Florida for that matter. As I mentioned in a post two years ago, the time is right, and even more so now than when I wrote that piece. Jatropha created a lot of excitement in India and other places a decade ago because it was not a food crop and was drought-resistant. The problem was that wild-type jatropha was too variable in yield and other economically important parameters. Now, with the plummeting costs of DNA sequencing, high-throughput screening and associated data analytics, a GM jatropha with great qualities need not be far away.

In some ways the foregoing discussion is something of a distraction from the premise of the original High Octane. There I suggested that ethanol, the legislatively favored displacer of gasoline, was not being properly utilized. Today Congress is seriously considering revising the flawed Renewable Fuel Standard. The principal flaw is the insistence on cellulosic ethanol, which has proved economically intractable. In today’s gasoline pricing scenario it is even more so. Technology simply has not kept up with congressional wishes and is unlikely to do so.

The biggest problem, however, is not that at all. It is the fundamental problem of trying to fit a round peg into a square hole. The two most viable gasoline substitutes, ethanol and methanol, will deliver 33% and 50% fewer miles to the gallon, respectively, in today’s conventional engines. These engines have been optimized for gasoline for a hundred years, which is why they have compression ratios of around 9. Higher compression ratios extract more energy per gallon but cannot be tolerated by 87-octane gasoline. However, ethanol and methanol have octane ratings of 113 and 117, respectively. A high-compression engine will operate effectively with these fuel blends and give back much of the intrinsic energy penalty.
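
The thermodynamics behind that statement, for an idealized Otto cycle at least, is

$$\eta = 1 - \frac{1}{r^{\gamma-1}}$$

where r is the compression ratio and γ ≈ 1.4 for air. Going from r = 9 to r = 13 lifts the ideal efficiency from roughly 58% to 64%. Real engines capture only part of that gain, but the direction is the point.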

This is essentially a repeat of what I said in the last post. Now I have more ammunition for enabling the substitutes. Both ethanol and methanol have one more very useful attribute that allows even higher compression ratios: they have high latent heats of evaporation. When they are injected into the cylinder, the evaporative cooling reduces the charge temperature. This is key, because at high compression the problem is a temperature rise causing premature ignition of the fuel, also known as knocking. This cooling effect will enable very high compression ratios.

Now to the final point: is it asking too much of the automotive industry to modify engines for higher compression? First of all, race cars have high-compression engines. But a mass-produced example appeared just a couple of years ago. Mazda introduced the SkyActiv engine, which operates at a compression ratio of 13 on regular gasoline. The key step appears to be dual injection of the gasoline, the second injection coming in response to temperature sensing and presumably producing evaporative cooling. This car is rated at about 35% higher highway mileage than its regular counterpart. One of the technology advances along the way has been to measure cylinder temperature and react to it. So they can do it if they want to.

Now consider the following facts. The cooling from injecting ethanol would be about 2.6 times that from gasoline. A blend would be somewhere in between, and Mazda is likely getting a bit of that benefit from the 10% ethanol in most gasoline. And here is the kicker: with methanol that number is 3.7 times. So even a 20% blend ought to give a heck of a boost. Higher blends are completely feasible, and China is piloting these, albeit in conventional engines. And methanol from inexpensive natural gas is more affordable than ethanol. Aside from the higher efficiencies, a cooler-running engine produces less NOx. Also, a high-compression engine delivers more torque. Such vehicles would be fuel efficient in the extreme, use less petroleum, have vastly reduced tailpipe emissions compared to all but electric vehicles, and drive like muscle cars. They should move off the lot.
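
A quick calculation using the multipliers quoted above; this is a simple mass-weighted sketch that ignores the blends’ differing stoichiometry:

# Rough cooling boost of alcohol-gasoline blends, relative to straight gasoline.
# Uses the multipliers quoted above (ethanol ~2.6x, methanol ~3.7x); simple
# mass-weighted mixing, so treat the results as indicative only.
COOLING_VS_GASOLINE = {"gasoline": 1.0, "ethanol": 2.6, "methanol": 3.7}

def blend_cooling(alcohol, fraction):
    """Relative evaporative cooling of a blend versus straight gasoline."""
    return fraction * COOLING_VS_GASOLINE[alcohol] + (1.0 - fraction)

print(round(blend_cooling("ethanol", 0.10), 2))   # E10 -> about 1.16x
print(round(blend_cooling("methanol", 0.20), 2))  # M20 -> about 1.54x

Even the modest M20 blend cools the charge roughly half again as much as gasoline alone, which is the “heck of a boost” referred to above.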

Vikram Rao

