WHY ARE GASOLINE PRICES HIGH AND VARIABLE?

March 27, 2012 § 1 Comment


Obama and the Pump. The Economist, Web. 17 Mar. 2012.

The Economist recently had a piece stating that the President could lose the election over high gasoline prices.  Apparently a large majority of the populace believes that the President has the power to fix it, even if he did not cause it.  About all he really can do is release oil from the Strategic Petroleum Reserve, and this ought to have an effect for a while.  The last couple of times it made money for the Treasury.

The President did not help himself with the non-decision on the Keystone XL pipeline.  The primary purpose of this line is to bring heavy Canadian crude down to refineries in the US.  But a key segment would transport oil from the booming Bakken play, primarily in North Dakota, to Cushing, Oklahoma.  Another portion then takes it down to the refineries in Texas and Louisiana.  Absent this last bit, there is a glut of light sweet oil in Cushing.  Incidentally, refiners love light oil unless they have expensive equipment called cokers to process heavy crude.  They love it because the starting oil has properties closer to those of gasoline, and so it is cheaper to refine.  Because of the difficulty in refining it, heavy oil always sells at a discount, and there is more profit in that if you already have the right process equipment.  So the folks with that capability really don’t want the light sweet oil; it costs more and idles some capacity they already paid for.  In this paradoxical situation the “better” oil is less desirable.  This is an overarching theme in refining: the available oil has to fit the blend suited to a given refinery.  We will face this when we reduce imports.  They are not all the same.  Since the new domestic sources are light sweet crude, the first imports targeted for reduction should be those from Saudi Arabia or Nigeria, both sources of light oil.  Of course, politics could intervene.

The oil stuck in the northern reaches is largely carried by truck and train, an expensive proposition limited by capacity.  This has created a bonanza for refiners in the Midwest.  Since this fluid is effectively stranded, it is not getting the world price, nor even the Gulf of Mexico price.  But the refined product is getting a world price, minus the transport cost of course.  This means the refiners there are getting a low cost feedstock and full market price for the gasoline.  This has created an interesting anomaly that you might have read about: the US is now a major exporter of gasoline while at the same time importing nearly half of its crude oil requirement.  The folks in Wyoming do not pay less for their gasoline just because their crude is priced low.  So, locally cheap crude is not responsible for variability in gasoline price.  More on the real reasons below.  Also, one cannot help but wonder whether the exports are keeping domestic supply low enough to keep prices up.

It appears that no special permission is required to export refined product.  On the other hand, light sweet oil from the Eagle Ford field is currently experiencing a challenging situation.  The nearby refineries are designed for a heavier mix, and so this good stuff is not moving at the world price for light oil.  The producers would be better off exporting it, and being in South Texas they are well positioned to do so.  In fact, with Mexico’s Cantarell field in heavy decline, that country is looking for light oil to import even as it exports the heavy stuff to us.  But, as I understand it, crude export requires Executive approval.  It is interesting that gasoline export does not, or that such approval was granted earlier.

Professor James Hamilton of UC San Diego has an interesting blog on energy related topics.  His latest one deals with this issue of extreme variability in gasoline prices at the pump.  Hamilton is regarded as one of the foremost resource economists.  We have referred to him before in connection with his correlations of recessions with spikes in the price of oil.

He places the blame for the variability squarely on state taxes on the fuel.  He shows a map of the US charting the tax in each state, and it is instructive.  Those of us driving up Interstate 85 from the south have noted the considerable increase in gasoline price upon crossing the state line from South Carolina.  That is all about taxes, as it is for cigarettes for those inclined to inhale.  California additionally has special requirements on the fuel, which further increase the cost.  The price of crude local to a refinery will also be a factor.  But as discussed earlier, the export market diminishes the chances of truly lower gasoline prices near cheap production.  Also, the local price of crude oil is low only until the pipeline infrastructure is in place.

One last point: increased domestic production will not drive down gasoline prices.  We cannot drill our way to lower prices at the pump.  Oil is sold on the world market, and except for issues of access and transportation cost, there truly is a world price.  If the US increases production, OPEC could reduce theirs and leave the price unaffected.  Similarly, gasoline has a world price.  Nobody is suggesting it, but I suspect limiting gasoline export could push prices down domestically.  Ultimately oil prices will come down when transport fuel substitutes are a significant force and in true competition with oil derived gasoline.  Then gasoline prices will also come down to a new equilibrium with alternatives.

Vikram Rao

So, Where Did All This Gas Come From Suddenly?

November 13, 2011 § Leave a comment

Few will dispute that shale gas has changed the very makeup of the petroleum industry.  At every twist and turn new resource estimates appear, each vastly greater than the previous.  The estimate in 2008 exceeded the one from 2006 by 38%.  As with all resource estimates, be they for rare earth metals or gas, disputes abound.  But through all the murk emerges the inescapable fact: there certainly is a lot of the stuff.  How could this suddenly be so?  The last such momentous fossil fuel find in North America was the discovery of Alaskan oil.  But a discovery out in the nether regions is understandable.  In this case we were asked to believe that all this was happening literally in our backyard.

To appreciate what happened we first need to understand how oil and gas are formed and recovered.  Millions of years ago marine organisms perished in layers of sediment comprising largely silt and clay.  Over time additional layers were deposited, and the organic matter from the animals and vegetation was subjected to heat and pressure.  This converted the matter into immature oil known as kerogen.  Further burial continued the transformation to oil, and the most mature final form would be methane.  By and large the only real difference between oil and gas is the size of the molecule.  Methane is the smallest, with just one carbon atom.  One of the lightest oil components, gasoline, averages about eight carbon atoms.  Diesel averages about twelve.  Although we refer to them as oil and gas, chemically they are part of a continuum.  So, it is easy to understand that they could come from a single source.

The key word is source.  The rock in which the oil or gas originally formed is known as source rock.  The figure shows a schematic representation of the location of one such source rock.  This is almost always shale, which we told you was some mixture of silt and clay and sometimes some carbonates.  Conventionally, the fluid in this rock will migrate to a more porous body.

This is depicted as the sandstone shown, which is predominantly silica, an oxide of silicon.  It may also be a carbonate, predominantly calcium carbonate.  These two minerals host just about every conventional reservoir fluid in the world.  The fluid (and by the way, gas is a fluid, although not a liquid) migrates “updip” as shown to the upper right.  This is because the hydrocarbon is less dense than the water-saturated rock and essentially floats up, not unlike oily sheens on your cup of coffee.*  This migration continues until stopped by a layer of rock through which fluid does not easily permeate.  This is known as a seal or, more colloquially, a cap rock.  Ironically this is most often a shale, not unlike the rock where the fluid originated.  The trapped fluid is then tapped for production.

The trap is often a dome as shown in the upper left.  It can also be a fault.  This is when earth movements cause a portion of the formation to break away and either rise or fall relative to the mating part it just separated from.  In some instances a porous fluid filled rock will now butt up against an impermeable one, and a seal is formed laterally.

Source: Wikipedia

In the schematic shown the yellow zone would be the sandstone, and the updip fluid shown in red now finds itself abutting an impermeable zone shown in green.

In the early days of prospecting they looked for surface topography indicative of a dome type trap below.  These days sound waves reflected back produce excellent images of the subsurface.

Unconventional Gas:  We have described how conventional gas, and oil for that matter, are found and produced.  The current flurry of activity in shale gas is concerned with going directly to the source.  This was previously considered impractical, primarily because the rock has very poor permeability, which is the ease with which fluid will flow through the rock.  The permeability of shale is about a million times worse than that of conventional gas reservoir rock.  In fact, as we observed earlier, shale acts as a seal for conventional reservoirs.  The breakthrough was the use of hydraulic fracturing.  Water is pumped at high pressure, causing a system of fractures.  These are then propped open with a ceramic material; without this, the sheer weight of the thousands of feet of rock above would close the cracks.  The propped fractures now comprise a network of artificially induced permeability, allowing the gas to be produced.  This is akin to the pillars and beams used in underground mines.
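To put that permeability contrast in concrete terms, Darcy’s law says flow rate scales linearly with permeability.  A minimal sketch; the permeability, area, pressure drop and viscosity values below are illustrative assumptions, not field data:

```python
# Darcy's law for single-phase linear flow: Q = k * A * dP / (mu * L).
# All numbers below are illustrative assumptions, not field data.
MILLIDARCY_M2 = 9.869e-16  # 1 millidarcy expressed in square meters

def darcy_flow(k_md, area_m2, dp_pa, mu_pa_s, length_m):
    """Volumetric flow rate (m^3/s) through a rock slab; permeability in md."""
    return (k_md * MILLIDARCY_M2) * area_m2 * dp_pa / (mu_pa_s * length_m)

# Conventional reservoir rock vs. shale, a millionfold apart in permeability
conventional = darcy_flow(k_md=100.0, area_m2=1.0, dp_pa=1e6, mu_pa_s=2e-5, length_m=10.0)
shale = darcy_flow(k_md=100.0e-6, area_m2=1.0, dp_pa=1e6, mu_pa_s=2e-5, length_m=10.0)
print(f"flow ratio: {conventional / shale:,.0f}")
```

Same rock geometry, same pressure drive: the millionfold drop in permeability becomes a millionfold drop in flow, which is why fracturing has to manufacture the permeability artificially.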

That gas can be extracted from source rock is now well established.  But some still doubt the magnitude of the estimated resource.  Here is why one would expect this resource to be plentiful.  For a conventional reservoir to form, two conditions had to coincide: first, a proximal porous and permeable rock, and second, a trap mechanism.  So it is easy to believe that more source rock lacked these conditions than had them.  In other words, the probability of source rock without a release mechanism was greater than with one.  This is why it is reasonable to conjecture that the total resource still trapped in source rock is greater than the resource that escaped into permeable trapped rock.  Further adding to the potential is that this is fresh territory, relatively unexploited.  Decades of exploitation have denuded conventional reserves, while the source rock remains relatively untapped.

A word on the nomenclature of resource estimation.  A resource estimate indicates the quantity of estimated hydrocarbon accumulation, whether economically recoverable or not.  A subset of that is a reserves estimate.  Reserves are the portion of the resource that one could recover economically and bring to market.  Typically in a new play one would expect reserves to keep getting revised upwards.  This is because every new well put on production increases the certainty of the extent and quality of the reservoir, and the reserves can confidently be increased.  In reading the popular literature it would be well to keep the distinctions in mind; they are often confused.

*Darker roasts produce more oil.  One way to minimize oily sheen is to brew with cold water; this also results in a “sweeter” coffee.  It is analogous to “sun tea”.

Beyond Ethanol

November 4, 2010 § Leave a comment

A recent article in the Economist describes an important new direction for biofuels, namely the pursuit of drop-in fuels. These are synthesized hydrocarbons and can be used directly in any proportion for engines running on gasoline, diesel or jet fuel. The last two cannot be served by ethanol.

As we have discussed in the past, ethanol has about 33% less energy than the same volume of gasoline. This calorific penalty decreases as we move to higher alcohols. The article discusses ongoing work on the production of butanol. It has 4 carbons compared to 2 in ethanol, so it packs more calories. It is much closer to gasoline in calorific content, and less corrosive and water absorbent than ethanol, and so a better substitute.
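The calorific penalty is easy to tabulate. A small sketch using approximate round-number energy densities per liter (my assumed figures, not from the article):

```python
# Approximate volumetric energy densities in MJ/L (assumed round figures).
energy_mj_per_l = {"gasoline": 32.0, "ethanol": 21.3, "butanol": 29.2}

gasoline = energy_mj_per_l["gasoline"]
for fuel, e in energy_mj_per_l.items():
    penalty = (1.0 - e / gasoline) * 100.0
    print(f"{fuel}: {e:.1f} MJ/L ({penalty:.0f}% below gasoline)")
```

With these figures ethanol comes out roughly a third below gasoline, while butanol gives up only a single-digit percentage, which is the whole appeal of moving up the alcohol chain.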

Most of the story is directed to the production of alkanes from sugar. These are straight chain compounds with the formula CnH2n+2. Conventional oil derived fuels have this formula as well. The number n is about 7 to 9 for gasoline and about 12 to 16 for diesel and a bit higher for jet fuel. So, alkanes with the right number are for all practical purposes direct drop-ins for these conventional fuels.
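The formula CnH2n+2 maps carbon counts directly to molecular formulas and weights. A simple sketch using standard atomic masses:

```python
# Straight-chain alkane CnH2n+2: formula string and molar mass (g/mol).
def alkane_formula(n):
    carbon = "C" if n == 1 else f"C{n}"
    return f"{carbon}H{2 * n + 2}"

def alkane_molar_mass(n):
    # Atomic masses: C = 12.011, H = 1.008
    return 12.011 * n + 1.008 * (2 * n + 2)

# Methane, a gasoline-range alkane, and a diesel-range alkane
for n in (1, 8, 12):
    print(alkane_formula(n), round(alkane_molar_mass(n), 1))
```

Methane (n = 1), octane (n = 8) and dodecane (n = 12) are thus points on a single continuum, which is why a sugar-to-alkane process can in principle dial in gasoline, diesel or jet range product.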

Herein lies the attraction. Also, being tailored, often through genetic engineering, the composition will be predictably uniform. This is not the case for the input to refineries from a variety of crude oil sources. In fact oil refineries today are forced to be very picky about the mix of crude they will accept. Seed based oils also suffer from this variability.

No small wonder, therefore, that many of the leading players in the drop-in biofuels space are supported by major oil companies. The list includes ExxonMobil, Shell and Total – all heavy hitters.

The reliance on sugar as feedstock is of note. Today, Brazil is the only source of economical sugar for this purpose. Tariffs apply only to ethanol, at least for now. So the long term potential for this feedstock can be debated.

One company is even reported to be using sugar to grow algae for diesel. This is quite a departure from the original allure of algal diesel. It was seen as using sunlight and waste carbon dioxide, a sustainability home run of sorts. Now we see folks going to the dark side of algae, literally. These algae are grown in the dark! The photosynthetic part is transferred to the growing of sugar. So we still have the sunlight and carbon dioxide (from the air in this case) put to use.

An interesting twist is the use of existing ethanol plants by some of these companies. This is a good trend, to deploy assets created by a flawed national policy and subsequently idled by realities.

So, what of corn and cellulose? Both are challenged by the fact that their chemical structure renders them more difficult to convert to alkanes. Of the two, corn, while simpler to react, is the worse, in part because of water usage. Cellulosic materials such as grasses offer the promise of drought resistance. The price of Brazilian sugar over the long haul will be a determinant.

An interesting avenue for biomass in general is pyrolysis, such as that practiced by RTI International under DOE funding. This produces a liquid akin to crude oil, close enough to merit inclusion as a portion of the feed to a refinery. This is the pre-refinery analog to a post-refinery drop-in fuel, requiring no modification to current practices. A chemical plug and play, as it were.

Finally, the Economist story discusses the place of electric cars in this context. They opine that while alcohol will get trampled, drop-in fuels will survive. In the next thirty years, gasoline will continue to be used to a significant degree. So, ethanol will continue to be used as a means of assuring complete burn of the fuel. This use as an oxygenate came about from the outlawing of MTBE, but is only needed at the 6% or so level. Beyond that, ethanol is a liability on many grounds and will probably fade away.

But drop-ins will hang around a lot longer. The Beyond Ethanol story will feature electric cars but drop-ins will get serious second billing.

LNG, Shale Gas and Politics in India

July 24, 2010 § 4 Comments

Basking in a Bangalore breeze, with a mango tree swaying outside the window, I am reminded of a fairly recent article concerning liquefied natural gas (LNG) imports into India.  This story discussed a plan to import LNG from Qatar.  There were a couple of points of note that are grist for this particular posting mill.  First was the contemplated price of about $13 per mmBTU and the second was the mechanism for arriving at that price.

But first some background on Qatari motivation for long term deals such as this.  The abundance of shale gas in the US has essentially taken that country out of the running as a Qatari LNG destination.  Europe continues to be a valid target, but shale gas will likely be a factor there as well.  Russia could well react to domestic shale gas in Poland and elsewhere with price drops.  LNG may face lower prices, but is unlikely to see a US-style collapse.  Relatively close markets such as India shave 50 cents or more off a US delivered price.  So, India could be important.

The truly curious aspect to the story cited is that the landed price is tagged to a Japanese crude oil basket price.  For a few years now there has been a disconnect between oil and gas prices based on calorific value.  Curiously, the more environmentally challenged of the two, oil, is currently priced at roughly three times the gas price on an energy basis.  That is commodity pricing.  The disparity is even greater when one factors in refining costs.  Transportation is something of a wash, although gas is cheaper to move than crude oil or refined products, at least on land.  All of this is singularly premised upon the internal combustion engine being the workhorse of transportation.

Natural gas pricing is regional, largely due to the high cost of ocean transport.  If local gas price is low, it is difficult for LNG to compete, which is why the US will be off limits unless demand takes a huge jump.  Even then the abundance of the shale gas will likely keep the status quo.  Local gas price in India was under $3 per mmBTU until recently.  It is now $4.20, close to current prices in the US.  That is the controlled price paid to domestic producers of gas.  So, to contemplate imported gas at three times the price is the sort of action possible only in settings such as these: government control on commodity pricing.  But pegging the price to an oil market basket, a Japanese one no less, is where logic takes flight.
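The oil-to-gas disconnect is easy to see on an energy-equivalent basis.  A back-of-the-envelope sketch; the crude price below is an assumed figure for illustration (a barrel of crude contains roughly 5.8 mmBTU):

```python
# Energy-equivalent price comparison; the crude price is an assumption.
MMBTU_PER_BARREL = 5.8  # approximate energy content of a barrel of crude

oil_usd_per_bbl = 75.0      # assumed crude price, USD per barrel
gas_usd_per_mmbtu = 4.20    # Indian domestic controlled price cited above

oil_usd_per_mmbtu = oil_usd_per_bbl / MMBTU_PER_BARREL
print(f"oil: ${oil_usd_per_mmbtu:.2f}/mmBTU, gas: ${gas_usd_per_mmbtu:.2f}/mmBTU")
print(f"oil trades at {oil_usd_per_mmbtu / gas_usd_per_mmbtu:.1f}x gas on energy content")
```

At an assumed $75 per barrel, oil works out to roughly $13 per mmBTU against $4.20 gas, the threefold disparity discussed above, which is exactly why pegging a gas contract to an oil basket hands the producer the whole gap.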

Oil prices in coming years are likely to see sustained increases.  Natural gas, on the other hand, will see a moderation in the US due to shale gas.  If shale gas resources are found in other countries, one could expect similar pricing behavior.  So, pegging any natural gas price, LNG or otherwise, to oil prices will result in a windfall for the producer and one that is not justified by supply and demand arguments. 

Consequently, the main problem with the contemplated Qatari deal is not even the current high price.  It is the possibility of up to a doubling in ten years.  At anything close to that the incentive to use natural gas evaporates.  Entire industries will shift offshore.  It will be cheaper to make fertilizer, polypropylene and the like abroad and import the finished product.  This will have a lasting negative impact on domestic jobs and the balance of trade.

An interesting subplot in the Qatari deal is the statement by them that they supplied cheap gas in India’s hour of need a few years ago.  It was landed at $2.53 and has crept up to around $7 more recently based on whatever oil linked formula was used.  The implication is that they should be rewarded now with a better deal.  A fairly high fixed price would fit that scenario while still being unfair to domestic production.  Pegging to oil defies logic and is simply bad business.  The story is now four months old.  Perhaps sanity prevailed.  It nevertheless gave us an opportunity to discuss the underlying fallacies.

MIT Natural Gas Report Glosses Over Environmental Issues

July 1, 2010 § 1 Comment

MIT’s most recent report on energy is on the Future of Natural Gas, following similar reports on coal and nuclear energy.  It is co-edited by Ernest Moniz and Tony Meggs.  The latter recently left BP as CTO.  As reported in Forbes recently, the report emphasizes the role of shale gas in enabling natural gas substitution of coal.  The authors see this as a transitional strategy for a low carbon future.  We agree with that and have expressed similar ideas in the Directors Blog.

However, the report is surprisingly shy about discussing the environmental issues seen as facing shale gas exploitation.  While we believe these are indeed tractable, they merit much more discussion than they were given.  Accordingly we repair some of that omission here.

The most significant issues center on three matters:  fresh water withdrawals, flow back water and collateral issues, and produced water handling and disposal.

Fresh Water Withdrawals and Flow Back Water:   A typical well uses between 3 and 5 million gallons of water.  Industry practice has been to use fresh water as the base for fracturing fluid.  The water that returns to the surface after the fracturing step is known as flow back water.  Shale operations are unique in that only about a quarter to a third of the water returns, the rest staying in the formation.  Also, the flow back water is usually more saline than the injected water.  So, in principle it cannot be re-used.

Handling salinity is the first step to water conservation.  The key is the ability of the fracture water to tolerate some level of chlorides.  Recent research has shown that not only is this possible, but that it can be beneficial.  The chlorides actually stabilize the clay constituents of the shale and improve production, although companion chemicals such as friction reducers need to be modified.  This has two implications for water withdrawals.  One is that after some measure of treatment, the flow back water should be re-usable.  But because not all of it returns, withdrawals for make-up water will be necessary.  This is where the second implication comes in: moderately saline water from another source could be used, since salinity is tolerable.  The most important implication of the foregoing is that flow back water could over time be completely re-used, and discharge then ceases to be an issue.

So, now let us discuss numbers.  In current practice the tolerance for chlorides is likely about 40,000 ppm.  Flow back water with higher salinity will need to be desalinated to some degree, or diluted with fresh water.  In some parts of the country this may be viable.  Another option could well be to use sea water, if that were the water of convenience.  Sea water tends to contain around 30,000 ppm chlorides.  That is already in the range of acceptability, with the possible removal of some minor constituents.  Finally, saline aquifers are a potential source.  These are in great abundance, with variable salinities.  Saline water wells drilled as companions to the gas wells are a likely option in areas where fresh water withdrawals compete with agriculture or other endeavors.  In general, if the shale gas industry can utilize water unsuited to agriculture and human consumption, it will be seen in a completely different light.
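The dilution arithmetic is a simple chloride mass balance.  A sketch; the flow back salinity below is an assumed value for illustration:

```python
# Fresh water needed to dilute flow back brine to a target chloride level.
# Simple conservation of chloride mass; the salinities are assumptions.
def dilution_volume(flowback_gal, flowback_ppm, target_ppm):
    """Gallons of fresh (0 ppm) water to blend in to reach target_ppm."""
    if flowback_ppm <= target_ppm:
        return 0.0  # already within tolerance, no dilution needed
    return flowback_gal * (flowback_ppm / target_ppm - 1.0)

# 1 million gallons of assumed 80,000 ppm flow back, diluted to 40,000 ppm
print(f"{dilution_volume(1_000_000, 80_000, 40_000):,.0f} gallons of fresh water")
```

At twice the tolerance, every gallon of flow back demands a matching gallon of fresh water, which is why raising the chloride tolerance of the fracture fluid pays off directly in reduced withdrawals.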

Produced Water

Water associated with the gas is produced at some stage of the recovery, usually towards the end of hydrocarbon production.  In some cases early production occurs due to infiltration of the fractures into the underlying saline water body often present.  Whether from connate water or the water layers below, produced water will be very saline, in part because of the age of the rock.  Disposal of this water is a major issue, especially in New York and Pennsylvania, and can cost upwards of $10 per barrel, when it is even possible.  Concern regarding illegal discharge is high among residents.

The treatment of produced water represents a significant business opportunity.  Several outfits are developing forward and reverse osmosis schemes for desalination.  Others are working on bacteria eradication, heavy metal removal and the like, using methods such as membrane filtration and ion exchange.  Some of these are already in service on a limited basis.

Produced water offers the promise of being usable for make-up water after some modest treatment.  The salinity may be directly tolerable but the bacteria would need to be removed prior to re-use.  This is because many of these cause the production of hydrogen sulfide downhole, which makes the gas less valuable and causes corrosion in the equipment. 

Contamination of Drinking Water

There have been anecdotal reports of well water contamination by gas, most recently sensationalized by a documentary.  The popular literature ascribes two hypotheses to this phenomenon.  One is the migration of fracturing operation cracks from the reservoir up to the water body.  The other is gas leakage from the well.

Hydraulic fracture cracks will not propagate the significant distances to the aquifers.  Were they inclined to do so, they would heal due to the earth closure stresses.  In terms of distance, the closest fresh water aquifers are about 5000 ft. and 3000 ft. away for the Barnett and the Marcellus, respectively.  So this really is not likely.

Gas leakage from the well is preventable if the well is drilled and completed correctly.  A fundamental feature of regulation has always been to design for isolation of fresh water in all petroleum exploitation, not just in the shale.  Between the produced fluids and the aquifer lie two layers of steel encased in cement.  The cementing operation is designed for preventing fluid migration.  Tests are run to ensure competence of the cement job and remedies are available for shortcomings.  At these shallow depths the operation is extremely straightforward and amenable to regulatory oversight.

See Also: New York Times’ response to the study

Electric Car Drivers may need Training Wheels

May 4, 2009 § 1 Comment

Training wheels are a wonderful invention to aid the tot with two wheel transport anxiety.  More often than not the anxiety resides with the parents, but regardless of source, the wheels get installed.  Now, in purely engineering terms, the extra wheels are pedestrian in design.  Clearly intended for the short term, they are not of particularly robust construction, because not much use is anticipated.  The added cost is modest when compared with that of the bicycle.  Yet, the comfort to the psyche is enormous.  Now, all of this really only applies to the munchkins.  Were you to learn to ride a two wheeler at an advanced age, as was I at age 11, the training wheel option is essentially out.  Even if available, the derision of the cohort group would not be sustainable.  So, what does all of this have to do with electric cars?

Electric cars will come in two flavors: all electric (EV’s) and plug-in hybrid electric (PHEV’s), both with the ability to conveniently plug into wall outlets and both utilizing the energy of braking to charge a battery.  Both will use electricity alone to drive the wheels, so there will be an essential simplicity to the mechanics: no transmission, no gear box, no cam shafts and minimal mechanical maintenance.  The essential difference between the two will be the auxiliary gasoline engine in the hybrid electric, which will charge the batteries if they run down.  The all electric will not have this back up feature, so it will rely solely on batteries for range.  The early entry vehicles will have an electric range of 40 miles for PHEV’s and 80 to 100 miles for EV’s, not counting boutique cars such as the Tesla.  One can reasonably expect the EV numbers to double within a few years, provided advances are made in battery technology to provide more capacity in the same volume.

The car buying public will face a choice.  Since the EV, when mass produced, could be expected to be cheaper to make, despite the bigger battery, the list price will be lower than that of a PHEV, with one manufacturer expected to offer it at a price comparable to the gasoline counterpart.  The PHEV on the other hand, while more expensive, will have the much greater range afforded by the gasoline back up.  The “fuel” costs will be comparable when run on electricity.  The key difference will be a new term that has entered the transport lexicon: Range Anxiety.  We can roughly define this as the fear of running out of juice without a convenient fill up station.  The PHEV Chevy Volt’s electric range of 40 miles is based on studies indicating this as serving commute needs of 75% of Americans.  A full tank of gasoline extends that range another 600 miles.  The initial entry EV’s will have ranges of 80 to 100 miles and charging times of less than half an hour to six hours for a full charge, depending on the sophistication of the charging equipment.  Home charging, at least initially, will be at the higher ends on time.  Early deployment will be in cities that will install some measure of distributed charging infrastructure.  Battery swap business models are in play, wherein charging stations plan to exchange a fully charged battery for a depleted one.
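The spread in charging times follows directly from battery size divided by charger power.  A rough sketch; the pack size and charger ratings below are illustrative assumptions, not any manufacturer’s specifications:

```python
# Charge time in hours = battery energy (kWh) / charger power (kW),
# ignoring charging losses and end-of-charge taper.  All figures assumed.
def charge_hours(battery_kwh, charger_kw):
    return battery_kwh / charger_kw

battery_kwh = 24.0  # assumed EV pack size
for label, kw in [("home circuit (3.3 kW)", 3.3), ("fast charger (50 kW)", 50.0)]:
    print(f"{label}: {charge_hours(battery_kwh, kw):.1f} hours")
```

The same pack that takes most of an evening on a home circuit refills in about half an hour on high-power equipment, which is why the sophistication of the charging infrastructure, not the car, sets the experience.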

In the end, some fraction of the buying public will be afflicted with Range Anxiety.  This is where PHEV’s play the role of training wheels.  With such a vehicle, consumers have the luxury of sorting out their driving habits, their discipline in charging every night, and all other manner of behavior impinging upon their ability to live with the range of an EV, at all times secure in the notion that the gasoline engine can bail them out.  There will also be a segment of the population eschewing this aid to behavior modification, in effect wobbling onto the bike, as yours truly did some decades ago.  A skirmish with a thorny bush sticks, as it were, in the memory.  Thorny situations will undoubtedly lie in wait for first time EV-ers.  And then again, perhaps PHEV’s will always have a place.  Choice is a good thing, in cars, colas and presidential elections.

Can North Carolina be a domestic source for lithium for electric vehicle batteries?

February 14, 2009 § Leave a comment

Making transport fuel fungible with electricity offers options to net importers of oil such as the US.  As a state, North Carolina is in the unenviable position of importing all of its fuel from other states.  While biofuel will undoubtedly play a role in reducing this import, electrifying the fleet offers another avenue.  The primary mission of electric vehicles (EV’s) would be the reduction or elimination of tail pipe emissions, notoriously the most difficult site for carbon dioxide capture, although a secondary one may be to act as a storage medium for the grid.  The FREEDM program, led by NC State University, targets creating all elements of a Smart Grid, which would be a key vehicle in grid optimization.  So, North Carolina is already well placed to take a lead in electrifying the passenger vehicle fleet.

EV’s such as GM’s plug-in hybrid (PHEV), the Volt, scheduled to be marketed in 2010, are intended to be charged from conventional electrical outlets, with a gasoline engine for charging the batteries if needed to go beyond the nominal range, 40 miles in the case of the Volt.  Pure EV’s, running solely on electricity, such as one scheduled by Nissan for limited entry in 2010, are also likely to be part of the equation.  If such vehicles are to become a substantial portion of the passenger vehicle fleet, several economic hurdles will have to be crossed, some possibly needing subsidies.  Principal among these is the expected higher cost of the vehicle (pure EV’s, because of their simplicity of design, will be somewhat lower in cost than PHEV’s), driven largely by the cost of the battery.  Research to reduce cost and increase range is ongoing in this and other countries, and the current administration has announced the intent to significantly fund this endeavor as part of the Stimulus Package.

Batteries: The lithium ion battery is the clear leader in this field and many believe it will continue to be so for the foreseeable future.  Other refinements, such as augmentation with supercapacitors for short bursts of power, are expected to reduce the load on the batteries.  However, current unit costs are high, although high-volume production is not yet in place.  One can expect the costs to come down over time.  A point of note is that while the technology is domestic in many cases, all battery manufacture is currently located in low-labor-cost countries.  However, as in the case of foreign-designed cars, domestic manufacture may become feasible.  Location of such capability in North Carolina would go hand in hand with any decision to make North Carolina a primary launch state for electric vehicles.

Lithium: A more pernicious issue is the sourcing of the critical commodity, lithium.  World reserves are considerable, but the majority of these are in Latin America, including some countries, such as Bolivia, that are not in close alignment with the US.  There is the risk of trading foreign dependency on one commodity for another.  Unlike the battery manufacturing situation, a mineral deposit is fixed in place, as in the case of oil.  North America does have sizeable reserves of lithium ore, in the form of spodumene, a lithium aluminosilicate mineral, but with current technology the processing costs are high when compared to the cost of processing the brine-based deposits in other countries.  The vast majority of spodumene reserves in this country are in North Carolina, in an area northwest of Charlotte.
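Part of why hard-rock processing compares poorly with brine is how little lithium the ore actually contains.  A back-of-envelope check from the spodumene formula, LiAlSi2O6 (standard atomic masses; established mineral chemistry, not a claim from the post):

```python
# Back-of-envelope: lithium mass fraction in spodumene, LiAlSi2O6.
# Atomic masses are standard reference values.
masses = {"Li": 6.94, "Al": 26.98, "Si": 28.09, "O": 16.00}
formula = {"Li": 1, "Al": 1, "Si": 2, "O": 6}

molar_mass = sum(masses[el] * n for el, n in formula.items())
li_fraction = masses["Li"] / molar_mass
print(f"Spodumene molar mass: {molar_mass:.2f} g/mol")
print(f"Lithium content by mass: {li_fraction:.1%}")
```

Under 4 percent lithium by mass, before accounting for ore grade, so a lot of rock must be mined and chemically attacked per unit of lithium recovered, which is the economic hurdle the Call for Action below addresses.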

Call for Action: The spodumene processing technology deemed uneconomic is at least half a century old.  Hints exist in the literature of more innovative methods.  In the national interest, a research program should be instituted to investigate the possibility of economic recovery of lithium from oxide ore.  RTEC has commenced a scoping exercise in this area, currently involving a literature search, but a full-fledged investigation will require state or federal funding.

Flexi-Fuel Fairy Tale

December 11, 2008 § Leave a comment

The Utopian State, known the world over as the US, was in the throes of a dilemma.  Much maligned for not doing enough to limit carbon dioxide emissions, it developed a plan that seemingly in one fell swoop tackled global warming associated with automobile emissions while at the same time reducing oil imports from nations, some of which were deemed unfriendly, at least in the rhetoric of elections.

This solution was known as the 20/10 plan.  The goal, to replace 20 percent of gasoline with ethanol in 10 years, was seen as visionary, if for no other reason than that 20/10 was about as good as one got with vision.  However, even before vast quantities of alcohol had been consumed, a hangover of major proportions was in the making.  Therein lies the tale.

The Utopian State, as befitted its name, was inclined to believe that the public would recognize a really good thing when they saw it.  They especially believed in the maxim: If You Build it, They Will Come, because said maxim was irresistibly derived from the powerful combination of Kevin Costner, the National Sport and mysticism.

So they built it, a complex web of subsidies to farmers, automobile companies and refiners, and tariffs on imported ethanol, all designed to produce domestic ethanol to blend with gasoline, and vehicles that would run on the stuff.  In a nod to perceived consumer preferences, they incentivized the auto companies to make flexi-fuel cars, capable of using regular gasoline and also E85, a blend with 85% ethanol.

They even created demand for these cars by ordering their agencies to use them and mandating the use of the new fuel.  Waivers to the mandate were given generously, no doubt in the Utopian belief that said waivers would not be sought if not merited.  It seems that some of these outfits saw a net increase in gasoline usage (Washington Post: Problems Plague U.S. Flex-Fuel Fleet, Oct. 23, 2008), a result contributing in no small measure to the aforementioned hangover.

At the core of Utopian belief is that folks will “do the right thing.”  So, purchasers of flexi-fuel vehicles were expected to purchase E85, even from filling stations some distance away, ignoring the fuel consumption getting there and back.  Then word filtered through that E85 delivered 28 percent fewer miles per gallon.  In short, it was more expensive to use and harder to find.  They started filling up with regular gasoline because the flexi-fuel vehicle allowed that; filling stations noted the drop in volume and stopped stocking E85.
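The cost side of that consumer arithmetic is simple: with 28 percent fewer miles per gallon, E85 must be proportionally cheaper than gasoline just to break even on cost per mile.  A minimal sketch (the gasoline price is an illustrative assumption, not a figure from the post):

```python
# Sketch: the E85 pump price needed to match gasoline cost per mile,
# given the 28 percent mileage penalty cited in the post.
MPG_PENALTY = 0.28

def breakeven_e85_price(gasoline_price: float) -> float:
    """E85 price per gallon giving the same cost per mile as gasoline."""
    # Fewer miles per gallon means E85 must be proportionally cheaper.
    return gasoline_price * (1 - MPG_PENALTY)

gas = 3.50  # assumed $/gallon, for illustration
print(f"At ${gas:.2f}/gal gasoline, E85 breaks even at "
      f"${breakeven_e85_price(gas):.2f}/gal")
```

E85 rarely sold at that steep a discount, which is why drivers reverted to gasoline once the arithmetic filtered through.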

In time, it became apparent that the federal policy and legislation underestimated, or ignored, the fact that even in the US only market-based policies function.  Into this nightmare scenario stepped Prof. Wunderbahr from a prestigious eastern university, with an engine design that delivered a small car running on E85, delivering fuel economy and the muscle of a larger vehicle.  The design took advantage of the high octane number of ethanol (113 versus 87 for regular gasoline), which allowed effectively higher compression ratios, which in turn improved the efficiency of combustion.  The result was elimination of the gas mileage penalty from using ethanol, increased power for an engine of given size, and retention of the improved emissions associated with ethanol usage.
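The compression-ratio argument can be made concrete with the ideal Otto-cycle efficiency, eta = 1 - r**(1 - gamma).  A sketch comparing two compression ratios (the specific ratios are assumptions for illustration; the octane numbers are from the post, and real engines fall well short of these ideal figures):

```python
# Sketch: ideal Otto-cycle efficiency vs. compression ratio, illustrating
# why ethanol's 113 octane, which tolerates higher compression, helps.
# The two compression ratios below are assumed illustrative values.
GAMMA = 1.4  # heat capacity ratio of air

def otto_efficiency(r: float) -> float:
    """Ideal air-standard Otto-cycle efficiency at compression ratio r."""
    return 1 - r ** (1 - GAMMA)

for r in (9.0, 13.0):  # assumed: typical 87-octane vs. high-octane ratio
    print(f"Compression ratio {r:.0f}: ideal efficiency "
          f"{otto_efficiency(r):.1%}")
```

The several-point efficiency gain from higher compression is what offsets ethanol's lower energy content per gallon in the tale's engine.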

Auto makers vied with each other to retool and produce these cars without any federal incentive, because the public actually wanted them.  Fuel distributors rushed to install E85 pumps and realized that this was simply achieved by eliminating one grade of fuel.  They came to the realization that all vehicles on the road today specify either 87 or 91 octane.  A third grade was not needed, and the third pump was now available, with modification, to dispense E85.  The US government, not wanting to be left out of this, set policies to further these steps.  Ethanol from sources noncompetitive with agriculture became cheaply available.  All was well again.

And then they elected a new President who resolved never again to set policy that was not market-based.  The country united behind him on this and it was never quite the same again.  The country was henceforth known as the United States.
