HOW ELECTRIC ARE CARS GOING TO GET

January 27, 2016 § 1 Comment

Electric vehicles are at an interesting inflection point. Car makers are finally getting serious about clearing the main hurdle: battery cost. When the Nissan Leaf first emerged, and for that matter also the Chevy Volt hybrid, lithium cells cost over $450 per kWh (kilowatt hour). As a rule of thumb, each mile driven uses 0.25 kWh. A hundred mile range would therefore require 25 kWh in principle. But it is impractical to drain the battery down to zero, and a usable fraction of 80 or 85% is more realistic. In other words, a 100 mile range likely needs a battery pack of about 30 kWh.
Many of us have posited that cell cost had to drop below $200, preferably to $150, per kWh for any sort of widespread use. At $150 per kWh, a 30 kWh battery would cost about $5,500, accounting also for the ancillary costs of the pack beyond that of the cells. That is a reasonable fraction of a selling price of $25,000, a useful target for a five-seat economy car. An all-electric car has no internal combustion engine, no transmission and possibly no differential (if four motors are used), all of which reduces cost. But a 100 mile range may not sell broadly (witness the muted enthusiasm for the current Nissan Leaf). At 200 miles, we are talking about a battery pack costing $11,000. That probably takes the pre-rebate price up to $36,000. Is that too pricey for most?
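As a back-of-the-envelope check of that arithmetic, here is a sketch in Python (the 85% usable fraction and the 25% pack overhead over bare cell cost are my assumptions, chosen to match the figures above):

    KWH_PER_MILE = 0.25        # rule-of-thumb energy use per mile driven
    USABLE_FRACTION = 0.85     # assumed: the pack is never fully drained
    PACK_OVERHEAD = 1.25       # assumed ancillary pack cost over bare cells

    def pack_size_and_cost(range_miles, cell_cost_per_kwh=150):
        pack_kwh = range_miles * KWH_PER_MILE / USABLE_FRACTION
        return round(pack_kwh), round(pack_kwh * cell_cost_per_kwh * PACK_OVERHEAD)

    print(pack_size_and_cost(100))   # (29, 5515): about 30 kWh and $5,500
    print(pack_size_and_cost(200))   # (59, 11029): about $11,000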
A Prius type of hybrid has many of the good features of EVs: regenerative braking, an engine that stops when stationary, and electric drive for starts and low speeds, where IC engines are least efficient, to name the principal ones. Combined, these typically add 40% or so to the gas mileage in city driving. I mention city driving for two reasons: one, it shows off hybrids the most, and two, all-electrics such as the Leaf are impractical for distance driving at this time. Such hybrids cost $2,000 to $4,000 more than the base model. A 200 mile range all-electric eventually ought to cost about $6,000 more (after realizing the gains on lower cost mechanicals).
Tesla is making things interesting. Their luxury Model S is priced not much more than regular luxury models. The 60 kWh battery is about to be replaced with a 70 kWh pack. They flirted with a 40 kWh pack, but it never really left the blocks because of perceived customer reaction. The effect shows up in buying behavior, as seen in the 2015 sales statistics for large luxury cars. It seems that the same luxury for about the same price, with zero tailpipe emissions, makes for an easy decision.

[Figure: 2015 large luxury car sales figures. Source: https://image-store.slidesharecdn.com/f483070b-66bc-461e-9871-895a630ccd93-original.jpeg]

The buying habits of this cohort may not comport with those of economy car buyers, so a $36K (before rebates) crossover may not get the same reaction. GM is betting on the forthcoming Bolt (a great name, by the way, reflective of the fast start possible with electric drive). Priced at $37,500, it will have a 200 mile range (which, with a 60 kWh battery, is consistent with our computation above and so is believable) and seat five. It will have plenty of pep: 200 HP (150 kW) and 206 foot-pounds of torque. With the heavy batteries at the bottom of the cabin compartment, the center of gravity is central and low, so in addition to being peppy it ought to handle well. GM can do this because they claim to be getting the batteries for $145 per kWh and, much as Tesla has claimed, they expect that to drop to $100 by 2020. These prices ought to translate to Nissan as well, so expect a bigger battery Leaf model.
Low gasoline prices, likely to persist for a couple of years, affect some of the decisions. But it comes down to this: a hybrid five-seat vehicle can deliver 45 mpg in the city. An all-electric will give the equivalent of about 105 to 110 mpg (computed on the basis of a gallon of gasoline containing 34 kWh of energy). It will cost more, but maintenance will be much less, and so on. And there is the environmental benefit. Provided the big guns go forward with their intent, the consumer will have choice.
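The 105 to 110 mpg figure is consistent with assuming roughly 80% wall-to-wheels charging efficiency, as this sketch shows (the efficiency number is my assumption; the rest follows from the figures above):

    KWH_PER_GALLON = 34          # energy content of a gallon of gasoline
    KWH_PER_MILE = 0.25          # consumption measured at the battery
    CHARGING_EFFICIENCY = 0.80   # assumed wall-to-battery efficiency

    wall_kwh_per_mile = KWH_PER_MILE / CHARGING_EFFICIENCY
    print(round(KWH_PER_GALLON / wall_kwh_per_mile))   # ~109 mpg equivalent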
Vikram Rao

THE SCROOGE STRATEGY

December 30, 2015 § 2 Comments

Okay, so this one will be different.  Not even a passing reference to energy.  It is, however, in tune with the Christmas season.

[Image: Scrooge]

The final performance of the narration of A Christmas Carol was a delight. Staged at St. Matthew's Episcopal Church in Hillsborough, NC, it was the last in a fourteen-year series. Stars Allan Gurganus and Michael Malone, authors in their day jobs, pulled off an event far exceeding the expectations raised by the just praise received in past years. The church was packed. Victorian-garbed usher Stephen Burke implored the crowd into cozier quarters in the pews to accommodate the last few. The short and sweet carols were just right for setting the mood.
The best line was from Gurganus (Scrooge) when he said "Thank you" to the audience. They were properly appreciative of his funny and obviously extemporaneous line about ceasing intercourse with apparitions because of a resolve for abstinence, or some such mildly salacious aside. It occurred to me that a Scrooge with a sense of humor may have even more depth to him. Consequently, outlined below is a different take on the whole deal.
Being rich was not enough. Scrooge wished to be firmly implanted in the minds of folks for generations. He arrived at a strategy: cultivate an enduring image of miserliness and selfishness. This was hard, because he really was a good-humored sort, as revealed by Mr. Gurganus in a weak moment. He had to suffer privation in diet and living conditions: gruel for dinner, minimal heat and so on. Personally I think he could merely have been mean to others and not to himself. But he was a perfectionist, and who am I to judge what clearly worked. Then, at the absolutely right time, would come a miraculous reversal, exuding generosity and empathy. What a triumph! Brilliant, but needing flawless execution with no leaks to the press.
Performance evaluations the world over are centered on expectations. Typical ones read "meets expectations", "exceeds expectations" and so on. The brilliance of the Scrooge Strategy, as it ought aptly to be named in the business schools, was to build up years of expectations to the point where they were entrenched. These days we call it building a brand. The Coca-Cola Company survived the incredibly disastrous New Coke adventure due to the strength of the brand. Even that would not have been nearly enough had they not brought back the old formula; naming it Classic Coke reminded folks why they liked the brand in the first place. A brilliant ladder out of a deep hole.
The Strategy requires staying power. Ordinarily another member of the family could cause the resolve to waver; oftentimes this is the spouse. Scrooge's strategic plan handled that pretty early on. His sweetheart (the wonderful voice of Jane Holding in the reading) accuses him of worshiping an idol of gold, and the relationship withers. Had she held on, it would have tarnished the relationship, and the idol for that matter, and quite possibly severely vitiated the grand plan.
The Scrooge Strategy has several key elements. One we have already noted: set up firm expectations. A second is timing. Christmas is the time when his parsimony and lack of good cheer would be most noticeable. This, then, was the perfect time to strike with a miraculous change of heart. But miracles are hard to come by, unless of course one is a saint or angling to be one. Even those guys are required only to have performed a single one; nobody expects continual production. In any case, a saint he was not, nor did he know any; his carefully developed brand would preclude moving in those circles. He was forced to rely on the occult. The public is inclined to believe in the persuasive powers of apparitions. A deft stroke was to have not one but three, allowing, nay inviting, but never claiming, the three wise men comparison.
The goal of the Strategy was to be memorialized for centuries, to be performed in Hillsborough 175 years later. Simply giving away lots of money will not get that done. Recently Mark Zuckerberg committed to giving away 99% of his Facebook-derived wealth; a reading in Hillsborough in the year 2290 commemorating the event is not likely. Besides, you will note that the Strategy called for the gifting only of an (admittedly very large) bird, a handful of banknotes, half a crown to the boy and a pay raise to the accountant that had been guiltily withheld for years in pursuit of the brand. The associated frailty of Tiny Tim had been a severe test, but one has to be resolute, else it all falls apart. In such moments of self-doubt he would reassure himself with a muttered "It is a far, far better thing that I do"*. A bloke called Charlie Dickens overheard him once and asked if he could use it. "Whatever", said the momentarily distracted Scrooge. In moments of reflection he sometimes went off script, resulting in unintended generosity. Charlie never acknowledged these in his book, so the brand remained untarnished by generosity, however fleeting.
The Hillsborough reading on December 18, 2015 was billed as the final performance of the amazing troupe. Please, sirs, we want some more*.

*Credits, respectively, to Mr. Sidney Carton (1859) and Master Oliver Twist (1838), as spoken through Mr. Charles Dickens.

Vikram Rao

OIL PLUMBS NEW DEPTHS

December 14, 2015 § 1 Comment

Crude oil prices reached $36 per barrel this week. I had opined in an April post that oil prices would fluctuate in a sawtooth pattern. That has come to pass after a fashion, but not quite in the way I thought it would. First the facts, as shown in the figure below.

[Figure: Oil prices through the end of 2015]

There is an oscillation. But it is modest and not driven by the assumptions of my model. Those had been premised on two key factors. One was that OPEC would cease to be deterministic on price and that normal supply and demand conditions would be in play. That has happened. My other view was that when prices dropped sufficiently, demand would pick up, in turn driving more shale drilling. Months after that kicked in, the new production would dampen the price, and so on.

Two major macro events have conspired to vitiate the theory, at least for now. China is practically in a recession, at least as compared to its explosive growth of the previous decade. The drop in consumption, both real and perceived, is limiting oil demand. India, while not in the same state per se, has simply not delivered on the growth promised by Prime Minister Modi. This is in part because his party does not control the upper house (somewhat like the Senate in the US) and in part because his mandate was severely tested by his party's huge loss in the populous state of Bihar. Business-friendly changes will be slow to come. On balance, the two countries expected to produce increased demand are not showing up.

The other factor has been the so-called fracklog. This is the inventory of wells that have been drilled but not yet fractured. The impetus for this approach was in part that it deferred about two thirds of the cost until prices improved. The other reason to do it this way is to perform like tasks, in this case the drilling and casing of the wells, all together. This improves efficiency in the logistics of materials supply and the like. Offshore platforms routinely operate in this way, and a variant is known as batch drilling, wherein even the drilling portion is done in batches (a single well is not drilled from top to bottom before the next is begun).

In the case of shale oil, the next step, the fracturing, simply has not occurred for a number of wells awaiting better prices. That count is believed to be around 5000 wells; it was a scant 1200 or so early in the year. Assuming initial production from each well in the vicinity of 500 barrels per day (bpd), the effect would be a potential 2.5 MM bpd if unleashed all at once. That is logistically impossible, even though each well could begin producing within a week of equipment arriving. But even an additional couple of hundred thousand bpd would move the price needle down measurably. Possibly speculators are concerned that cash-strapped owners will do just that at some point. This bearish thinking may be a factor in the price staying down.
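The arithmetic behind those numbers is simple (initial production rates vary widely by play; 500 bpd is just the round number used above):

    FRACKLOG_WELLS = 5000          # drilled-but-uncompleted wells
    INITIAL_BPD_PER_WELL = 500     # assumed initial production per well

    print(FRACKLOG_WELLS * INITIAL_BPD_PER_WELL)   # 2,500,000 bpd potential
    # Completing even ~400 wells adds ~200,000 bpd, enough to move the price
    print(400 * INITIAL_BPD_PER_WELL)              # 200,000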

Another curiosity as of today (December 14, 2015) is that WTI is almost at price parity with Brent. This is unprecedented going back at least four years; the spread had been about 10% until recently. It all began when shale oil really took off in volume and export restrictions limited its market. The figure below shows the trends.

[Figure: Brent and WTI crude oil prices]

 

My hypothesis is that speculators are assuming the export restrictions will be lifted. There has been a lot of press suggesting Congressional action is imminent. Mind you, the horse trading needed to achieve such legislation is of the type that often stalls near the finish line. Nevertheless, that is the only argument that makes any sense of the disappearing spread.

At this point I feel that the sawtooth behavior is still likely, but at lower numbers, until true demand creation and some destruction of the fracklog. Some smaller oil companies will fail, but their properties will be snapped up by the better-heeled independents; the majors will not participate much in this. The majors, in turn, will eschew ultra-high cost developments such as the Arctic, which is all to the good. Their forays to date have been unproductive, and in my opinion the environmental risk is not worth the reward.

Vikram Rao

 

WILL NOX CONCERNS LIMIT DIESEL MARKET?

November 18, 2015 § 1 Comment

The VW cheating episode has put the spotlight on whether NOx control is economically viable. If the answer is No for a portion of the market then the diesel market will indeed be limited.


Rudolf Diesel

Rudolf Diesel invented his engine in the 1890s, not long after the gasoline engine came into being. The high compression, combined with the intrinsically higher energy content of diesel fuel, afforded these engines a mileage advantage of about 30% over gasoline engines of like size. But they suffer from emitting small particulate matter (PM) and NOx, both implicated in lung and heart disease. The PM is relatively easily captured in filters and is in any case kept low by the more complete combustion afforded by running a "lean" mixture. We previously discussed the Lean NOx Trap (LNT) method employed by VW. It works, but at the expense of engine performance. Small car engines (2L types, for example) can ill afford the loss of torque. More particularly, it erodes the fuel economy advantage of diesels. These cars are bought largely for their fuel economy, so impairment in that area is a potential show stopper.

Before concluding that small cars are off limits for diesel, let us examine the commercially available alternative to the LNT. Selective Catalytic Reduction (SCR) involves injecting a urea/water mixture into the exhaust stream. It vaporizes and breaks down into ammonia and carbon dioxide. The ammonia, in the presence of oxygen, reacts with the NOx on a special catalyst to produce innocuous nitrogen and water. This works, and that is the good news. The not so good news is that in small cars the required equipment is a tight fit: a urea tank, a heater, a pump and a dosing system. These fit fine in trucks and larger cars. Furthermore, it is reported that VW decided to go the way they did because the SCR carried a net cost increase of $50 (presumably over the LNT alternative). Yes, $50. Consider now their mind-boggling losses from the failure of the gambit. There is also, of course, the nuisance to the driver of periodically refilling the urea tank, but that can be done at the gas station.
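For the chemically inclined, the net reactions are roughly as follows (the second is the so-called standard SCR reaction for NO; NO2 follows a similar path):

    (NH2)2CO + H2O  ->  2 NH3 + CO2          (urea hydrolysis in the hot exhaust)
    4 NH3 + 4 NO + O2  ->  4 N2 + 6 H2O      (NOx reduction on the catalyst)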

More than just a parlor game is the question of who knew what at VW and how high the decision went. The CEO at the time, Martin Winterkorn, was an engineer by training (yes, not all CEOs are Harvard MBAs, at least not in Germany) and famously a detail guy. More to the point, the original decision to go with the LNT has to have recognized the performance loss during the NOx adsorbent regeneration step. Someone very high up would have asked about the duty cycle, and hence the fraction of time spent in the poor fuel economy, lowered torque mode. An engineering leader would know to ask that question. So, I am afraid it does not look good for blaming minions on this one. Few things matter more to the leader of such a company than performance and mileage, especially one bent on world leadership in cars sold.

An interesting alternative was suggested by Dan Cohn at MIT nearly a decade ago. If methanol is injected directly into the cylinder, its latent heat of evaporation cools the chamber. Engines these days are routinely monitored for chamber temperature, and the best time for such an injection would be when it gets very hot. The cooling effect would allow even higher compression ratios than current practice. In Cohn's modeling and prototype testing, a relatively small engine delivers the power of a larger one. But here we would be looking primarily for the cooling effect, not higher compression and more power, although that could be in play. Lower combustion temperatures result in less NOx production; simple as that. The lower volumes could then be captured more simply.

Cohn's idea requires significant modifications to the engine, although fewer now than when he introduced it. Multiple injection schemes are common now. Mazda uses one in the SkyActiv gasoline engine and obtains good performance from the evaporative cooling of just the gasoline (they run a compression ratio of 13 on regular gasoline). Methanol would be even better, because its latent heat of evaporation is nearly three times that of gasoline. It would require a separate tank and so forth, but it could be done.
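An idealized calculation illustrates why. Assume the fuel fully evaporates into a stoichiometric air charge and all the latent heat comes out of the air; real-world cooling is smaller, and the property values below are round textbook numbers:

    CP_AIR = 1.005   # kJ/(kg K), specific heat of air

    def charge_cooling_k(latent_heat_kj_per_kg, stoich_air_fuel_ratio):
        # Temperature drop of the air charge from evaporating the fuel
        return latent_heat_kj_per_kg / (stoich_air_fuel_ratio * CP_AIR)

    print(round(charge_cooling_k(350, 14.7)))    # gasoline: ~24 K
    print(round(charge_cooling_k(1100, 6.4)))    # methanol: ~171 K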

What then is the future of diesel? Perhaps small cars cannot carry the cost burden of emissions control. Some company, though, should take a crack at Cohn's idea or some variant thereof. And this conundrum would never have surfaced but for a professor at the relatively obscure West Virginia University; a Goliath of the auto industry was taken down by such a David. Which is just too bad; it should never have happened. The world has German engineering to thank for a lot. This blemish reflects not on German engineering excellence but on the avarice of a few in charge.

Vikram Rao

DEFEATING NOX: THE VW SAGA

November 16, 2015 § 2 Comments

NOx (oxides of nitrogen) is a front-of-the-box pollutant. Its effects are short term and on health, in contrast to CO2, whose effects are longer term, on targets such as severe weather and drought, leaving some room for doubt on causality. Consequently, NOx emissions from devices such as automobiles could be expected to be a public concern. Yet much of the attention from the VW emissions cheating episode is directed at the behavior and not the attendant pollution. In fact, reporting has shown that much of the industry has cheated in one way or another. In Europe, emissions testing is done by the companies themselves, with no regulatory oversight. The use of non-standard vehicles during the tests is a common practice to which everybody turns a blind eye; for mileage testing, cars are routinely stripped of wind drag components such as wing mirrors. Real-world kilometers per liter are in the vicinity of 35% worse than in these tests.

[Image: idling car]

VW managed to do something shocking even against this backdrop of routine avoidance of emissions regulations. Interestingly, evidence is piling up to indicate that the "defeat devices" they used (more on them below) may not have explicitly violated any European strictures. No such doubt exists in the US. The issues I will address below are: 1. what technology for NOx reduction did they employ, and 2. how was it circumvented, and why?

When a diesel engine is run "lean", it performs best, especially with respect to fuel economy. This condition is defined as air somewhat in excess of the stoichiometric amount required to combust the fuel. Less unburnt fuel is also good for emissions. However, the excess air causes more production of oxides of nitrogen, NOx, which must then be reduced in the exhaust gas stream.

VW used a technology known as the Lean NOx Trap (LNT). There are two steps (plus a preliminary step which we will skip here for simplicity). In the first, NOx is captured on a coating that adsorbs it. Adsorption is a surface phenomenon that is easily reversed. When the coating is considered full, the second step kicks in: removing the NOx to regenerate the coating's activity. This is the key step that got VW in trouble. The NOx is reduced to nitrogen and CO2 on a special catalyst by reacting it with a mixture of hydrocarbons, hydrogen and CO. That mixture is created by switching the engine from lean to a "rich" burn mode; the reactant is fuel from the cylinders that is only partially combusted. Not surprisingly, during that time engine performance drops, for the reasons noted above: the gas mileage falls, as does the torque.
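A toy model makes the duty-cycle penalty concrete (the timings and the rich-mode fuel penalty below are illustrative assumptions, not VW figures):

    LEAN_SECONDS = 60        # assumed time to fill the NOx trap in lean mode
    RICH_SECONDS = 5         # assumed time to purge it in rich mode
    RICH_FUEL_PENALTY = 0.5  # assumed extra fuel burned while running rich

    rich_fraction = RICH_SECONDS / (LEAN_SECONDS + RICH_SECONDS)
    print(f"{rich_fraction:.1%} of time in rich mode")             # 7.7%
    print(f"{rich_fraction * RICH_FUEL_PENALTY:.1%} mileage hit")  # 3.8%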

VW was attempting to penetrate the US market with diesels, part of an overall goal of being the top seller worldwide. The US consumer had been recalcitrant compared to the Europeans. Also, the NOx regulations in the US were stricter, and US regulators do spot checks. It appears a decision was made to "defeat" the device during normal road operation. This was achieved through a reasonably sophisticated algorithm which detected that the vehicle was in a test mode. When in this mode, the engine was allowed to run rich for the period needed to perform the LNT function. But importantly, in normal driving the vehicle ran lean all the time, giving the desired performance in miles per gallon and torque. In other words, it was peppy (high engine torque) with high mileage and appeared great on emissions. Keep in mind that all diesels are better on mileage than gasoline engines, in part because the fuel has about 10% more energy content and in part because diesel engines run at much higher compression ratios. But for decades they had a reputation for being smoky and smelly. This is no longer the case: particulate filters take away the smoke. The only remaining concern had been the NOx emissions. VW claimed to have met those while delivering a superior driving experience. There is no dispute that they cheated. The key point is not so much the cheating on the testing, but that buyers expecting a low emissions car were not getting one.
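Public reporting suggests the detection keyed on signals such as steering input, speed profile and run duration matching the official test cycle. In spirit, the logic amounts to something like this hypothetical sketch (the signal names and thresholds are invented for illustration):

    def looks_like_dyno(steering_angle_deg, wheel_speed_kmh, elapsed_s):
        # Wheels turning while the steering wheel sits dead still for minutes:
        # characteristic of a dynamometer test, not a road
        return wheel_speed_kmh > 0 and abs(steering_angle_deg) < 1 and elapsed_s > 120

    def emissions_mode(steering_angle_deg, wheel_speed_kmh, elapsed_s):
        if looks_like_dyno(steering_angle_deg, wheel_speed_kmh, elapsed_s):
            return "compliant"   # run the LNT regeneration cycle as designed
        return "lean"            # full mileage and torque; NOx control defeated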

What were the alternatives to LNT available to them, and why did they choose not to use those? Are the US rules too stringent to be met by small low cost cars? Is diesel simply not viable for these cars? Will electric cars and hybrids be advantaged? These are all topics for the next post.

Vikram Rao

 

EDISON SMILES

June 2, 2015 § 2 Comments

A century or so ago, Tesla and Westinghouse beat Edison in the war over electricity transmission, and AC became our way of life. In an odd modern twist, the first and most famous electric car is named after Tesla but runs on DC. Most electronics run on DC, yet AC continues as the transmission medium, dooming us to the ubiquitous "brick" converting to DC for our phone chargers, computers and so on. The DC worm is turning. In some measure this is due to the fact that the output of solar panels is DC, as is that of backup batteries. Organizations such as the EMerge Alliance are making some inroads in commercial buildings with a proposed 24 V wiring standard. But curiously, the lead for the resurgence of DC usage in homes may well come from India.

[Image: AC/DC, with apologies to the Australian rock group]

Power shortages are a way of life in most developing nations. Consumers who can afford them have backup devices, which are inherently inefficient. The rest simply do without, often for several hours at a time each day. Most governments respond with more power plants, which in many countries are coal fired, with attendant effects on public health and climate change. The Indian Institute of Technology, Madras (IITM) has initiated the Uninterrupted DC (UDC) program, an innovative scheme to provide continuous power even during intervals of shortage. This is accomplished through changes in the grid at the sub-station level, combined with households using energy-efficient DC devices. Widespread acceptance of the concept will require some equipment to be redesigned. But many common devices, such as computers and cell phone chargers, as well as energy-efficient LED lights, already operate on DC, and DC-powered fans are already available. Large-scale adoption will improve the consumer experience through uninterrupted service and reduced costs, and have a net positive impact on the environment.

India is poised for rapid economic growth, which brings with it increased requirements for electric power at the industrial and consumer levels. Chronic power shortages, especially at peak intervals, have to be managed. Industrial consumers rely on diesel-powered backup, which has its own issues with particulate matter emissions. Private consumers have two choices. Those who can afford it install inverters in their homes, which charge batteries for use during the outage: AC power is converted to DC for storage and then reverted to AC for running devices, with a loss at each step. Furthermore, when the power comes back on, each of these systems charges up for the next time, creating a surge on the grid.

The UDC system is targeted at providing limited service continuously, while at the same time reducing overall energy consumption. In essence this is an aspect of Demand Side Management. It fits with the International Energy Agency's overall conclusion that any reasonable carbon emission targets for 2050 can only be met by using 50% less energy. India and China are routinely cited as major contributors to atmospheric carbon, due in part to their reliance on coal for power; programs such as UDC could lead the way in mitigating that environmental impact. Uninterrupted DC (UDC) is so named by its inventors to emphasize that it delivers a useful quantity of power in uninterrupted (24×7) mode, and in DC form, incentivizing the use of efficient DC appliances. Devices powered by DC can be 50% or more efficient than their AC counterparts, and their use, along with the systems that enable them, is central to the UDC concept. In low to moderate income households the critical devices for continuous operation are lights, fans and either cell phone chargers or LED televisions. A home that typically uses 1 kW of AC peak power could get by on 100 W of DC with somewhat reduced functionality.
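The inverter round trip is lossy in a way direct DC use is not. A rough sketch (the per-stage efficiencies are assumptions for illustration):

    RECTIFIER_EFF = 0.90   # AC-to-DC charging stage, assumed
    BATTERY_EFF = 0.90     # battery charge/discharge, assumed
    INVERTER_EFF = 0.90    # DC-to-AC output stage, assumed

    via_inverter = RECTIFIER_EFF * BATTERY_EFF * INVERTER_EFF
    direct_dc = RECTIFIER_EFF * BATTERY_EFF        # skip the inverter stage
    print(f"{via_inverter:.0%} delivered via AC backup")    # 73%
    print(f"{direct_dc:.0%} if DC feeds DC devices")        # 81%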

The UDPM is a new device installed where the existing meter sits, and it is the heart of the UDC system. It incorporates the existing AC meter and adds the capability to split the incoming power into a 48 V DC line and a conventional 230 V AC line. The house is rewired to accommodate a few low voltage lines to run the low voltage devices. In a peak demand period, the sub-station sends 10% of the normal electricity to each home instead of turning it off, as is the current practice, and the UDPM at the home utilizes it solely for the 48 V service. During the brownout the sub-station steps the supply down to 4.2 kV from the normal 11 kV. The UDPM detects this voltage drop, cuts the AC output, and limits the 48 V DC output to, say, 100 W. This robust signaling is another innovative feature of the system. Importantly, during normal operation both home circuits are in use, but the DC output is always limited to the brownout level of 100 W. This allows the low power DC devices to be used all the time, not solely during brownouts. The consequent lowering of the power bill is a positive for the homeowner, and the continuous use incentivizes the manufacturers.
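In pseudocode, the UDPM's switching behavior amounts to something like this (a sketch of the description above; the detection threshold and names are mine):

    NORMAL_KV = 11.0       # normal sub-station feed
    BROWNOUT_KV = 4.2      # stepped-down feed doubles as the brownout signal
    DC_CAP_WATTS = 100     # DC line is always capped at the brownout level

    def udpm_outputs(line_kv):
        brownout = line_kv < (NORMAL_KV + BROWNOUT_KV) / 2   # assumed threshold
        ac_enabled = not brownout
        return ac_enabled, DC_CAP_WATTS

    print(udpm_outputs(11.0))   # (True, 100): both circuits live, DC capped
    print(udpm_outputs(4.2))    # (False, 100): AC cut, 100 W DC continues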

Fit with Solar Energy: While the initial focus of UDC is homeowners of reasonably moderate income, the middle and upper-middle class segments could also be addressed through the addition of solar energy. Solar output is DC to begin with and is artificially converted to AC for conventional appliances. That can still be allowed while a significant portion is used in DC mode. Typical solar panel outputs are 12 V, and four together add up to 48 V. Perhaps this is why IITM chose that particular voltage, not to mention that 48 V has been the standard DC voltage for telecom equipment worldwide; 12 V is also the output of standard lead-acid storage batteries. Ultimately one could expect even compressors for refrigerators to go the DC route. Air conditioning would be next, though for the drier parts of India evaporative air coolers using water function quite well, and those components are DC-amenable.

Conclusions: UDC is an elegant addition to the Demand Side Management arsenal. It falls mainly in the category of technology solutions, although a small element of behavioral change exists. Utilities will undoubtedly welcome this development. Since the changes have to be made at the sub-station level, the conversion could be staged community by community. IITM reports that in pilots, demand has already spread by word of mouth. An innovative business model may be necessary to pay for the modifications in the homes. Widespread use of this technology is certain to reduce the overall national burden on the power sector, and countries could justifiably claim advances in GHG mitigation.

Vikram Rao

PATTERNS IN INNOVATION

April 30, 2015 § Leave a comment

A recent issue of the Economist points out that the nature of innovation has changed over the last hundred and fifty years. The piece is based on a recent paper in the Journal of the Royal Society Interface, which relies upon data from the US Patent Office. The primary conclusion is that in the early years of patenting, inventions created new classes, whereas modern inventions rely on combinations of existing classes. William Shockley's transistor is cited as creating a new class, while Edison's light bulb is seen as merely combining the classes represented by heated filaments, electrical supply and a vacuum.
At first blush, the relegation of Edison's light bulb to a mere "combinatorial" invention is surprising, in part because many consider it the quintessential invention; the Aha moment signaled by a light bulb cartoon is in the annals. So what gives? Basically, the authors believe that inventions creating new classes are due greater honor because they are likely seminal in generating new areas of endeavor. They imply that inventor superstars are those who create new industries, and cite names like Goodyear, Morse and Stephenson as the essential catalysts of the Industrial Revolution. The transistor certainly fits that mold.

[Figure: from the Economist article on innovation]

The figure shows the growth in total patents issued and how many represent new codes (sub-classes) as opposed to combinations of classes. Note that the vertical axis is on a logarithmic scale. Whether an invention falls into a given class (say, batteries) or a sub-class (say, solar energy type), both real examples, is determined by the patent office. So, to some degree, the point is a bit academic, in that it relies on a judgment call by these folks. Consequently, the formation of a new class need not be the harbinger of great industrial activity; it could simply mean they could not find a place to slot the invention.
Possibly the greatest invention in the commercial practice of biology is the Polymerase Chain Reaction (PCR), which won the 1993 Nobel Prize. Some have described biology as having two epochs: before and after PCR. It allows the amplification of DNA sequences. This giant invention by Kary Mullis (incidentally a North Carolina native) nevertheless built on the work of others, in particular that of 1968 Nobel Laureate Har Gobind Khorana (in fact DuPont challenged the patent's validity, in a losing cause). It was an aha moment not unlike that of Watson and Crick visualizing a double helix in the X-ray diffraction images of DNA produced by Rosalind Franklin (who might have shared the 1962 Nobel had she not died of ovarian cancer in 1958). I seriously doubt it created a new class or sub-class, yet it transformed a field of endeavor. I think we can conclude that a truly new class of invention may well be seminal, but that this quality can also be achieved in combinatorial fashion.
On the matter of combinations, the patent office is very prescriptive about what constitutes invention. The granting of a patent requires two hurdles to be crossed: novelty and non-obviousness. One may build on the work of others, but these two tests must be met. It is on the second that things get subjective: the invention must not be obvious to one of "ordinary skill in the art", and IP lawyers make a living splitting that hair. In the last few years the Supreme Court has raised the bar on this test. It has also raised the bar on what is known as enablement: the claimed invention must be described in sufficient detail that a person of ordinary skill can replicate it. Gone are the days of "paper patents".
This discussion would not be complete without noting that remarkable innovations may not be inventions in the legal sense. Innovative business models to capitalize on inventions are cases in point.
Vikram Rao
