BEYOND GASLAND

October 8, 2011 § 3 Comments

No shale gas production issue may be more fraught with partisan rhetoric than that of water well contamination.  The award-winning documentary Gasland leveled accusations and energized entire communities.  Industry response was equally summary in its denial.  We need to get beyond all that.  Here is an attempt at clarity.

Well water contamination is very personal and frightening.  Think Erin Brockovich.  Airborne species appear not to get the same reaction; certainly, carbon dioxide in the air barely registers on the average personal anxiety scale.  Consequently, assaults on the quality of well water make for avid reading and activism.  Both sides play fast and loose with the English language, as will be shown.

There are two potential ways in which shale gas operations could contaminate aquifers.  One is through leakage of the chemicals used in fracturing.  These would be liquid contaminants.  The second is the infiltration of aquifers by produced methane.  This is a gaseous contaminant, albeit in the main dissolved in the water.  If present, a portion may be released as a gas, as spectacularly depicted in Gasland.  Natural occurrences such as the Eternal Flame Waterfall in the Shale Creek Preserve in New York demonstrate methane intrusion into a fresh water source.

Natural contamination is either from relatively shallow biogenic methane from decomposing vegetation or from thermogenic gas escaping from deep deposits up along faults and fissures.  The latter is generally due to tectonic activity at some point in time.  The two types of gas have fairly different fingerprints and can often be distinguished on that basis.  Good oil and gas exploitation practitioners will avoid producing in areas with significant vertical leak paths because they vitiate normal sealing mechanisms.
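
For the technically curious, the fingerprinting works roughly as follows: biogenic methane is isotopically light and nearly pure, while thermogenic gas is isotopically heavier and arrives with ethane and propane.  Here is a minimal sketch of that logic; the threshold values are rough textbook ranges chosen for illustration, not figures from any particular study.

    # Illustrative biogenic vs. thermogenic screen.  Thresholds are rough
    # textbook ranges, for illustration only.
    def classify_gas(delta_c13, c1_over_c2c3):
        """delta_c13: carbon isotope ratio of the methane (permil).
        c1_over_c2c3: methane over ethane-plus-propane ("dryness")."""
        if delta_c13 < -60 and c1_over_c2c3 > 1000:
            return "likely biogenic (shallow, microbial)"
        if delta_c13 > -50 and c1_over_c2c3 < 100:
            return "likely thermogenic (deep; possibly production-related)"
        return "mixed or indeterminate -- more analysis needed"

    print(classify_gas(-70, 5000))  # a marsh-gas signature
    print(classify_gas(-42, 20))    # a deep-gas signature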

The distinction between potential liquid and gaseous contamination is important because the hazards are different, as are the remedies and safeguards.  Also, because well water could not naturally contain the liquid contaminants, any presence at all is evidence of a man-made source.  Therefore, simple testing of wells proximal to drilling operations is sufficient, with the only possible complication being some source other than drilling, such as agricultural runoff.  This is easily resolved because of the specificity of the chemicals used for fracturing.

Unfortunately, the two get lumped together in statements by shale gas opponents and also by the genuinely concerned public.  Some see methane intrusion as proof of well leakage as a whole and therefore equate it to chemical contamination as well.  Gasland reports "thousands of toxic chemicals" as the hazard.  In actuality, the mechanisms for possible leakage are quite different.  Methane, as a gas, is much more likely than a liquid to leak out of a badly constructed well.  Also, the mechanism by which methane could leak is well understood, and it is not conducive to leakage of fracturing fluid.  The public cannot be expected to know this, so it is easy to see why the two get lumped together.  To them, a leaky well is a leaky well.  Fortunately, that is not the case.

So, do producing gas wells sometimes leak into fresh water aquifers?  The answer is yes.  In all cases this is because of some combination of not locating cement in the right places and a poor cement job.  Many wells have intervals above the producing zone that are charged with gas, usually small quantities in coal bodies and the like.  If these are not sealed off with cement, some gas will intrude into the well bore.  This will still be contained unless the cement up near the fresh water aquifers has poor integrity.  In that case the gas will leak.  You will notice that nothing in the prior discussion says anything about fracking.  In other words, a badly constructed well is just that, no matter how the gas was released from the formation.

This distinction is lost on many.  The recent notable paper by Robert Jackson and others is the most comprehensive work of its kind to date.  It unequivocally shows no fracture chemical intrusion into water wells.  It also shows gas intrusion in disturbingly many cases, although later studies will need to normalize for possible natural seeps and prior drilling activity.  Yet the title of the paper is Methane contamination of drinking water accompanying gas-well drilling and hydraulic fracturing. (Emphasis added)  The last three words imply a causality that is not proven and in fact is contraindicated by the absence of fracturing chemicals in the water wells.

Industry proponents, on the other hand, make statements such as "hydraulic fracturing has never contaminated ground water".  Lisa Jackson of the EPA recently testified under oath, "I'm not aware of any proven case where the fracking process itself has affected water, although there are investigations ongoing."  In precise terms this may be right, in that fractures have not propagated into ground water.  But take the case of a well associated with fracturing operations that leaks gas but not liquid.  One could argue that the poor construction would simply not have occurred but for the desire to fracture the shale reservoir.  So an opponent would take those very data and say "hydraulically fractured wells contaminate ground water", while the proponent could say "hydraulic fracturing did not contaminate ground water".  Neither would be wrong.  It is the public that is confused by this license taken with the language.

Rhetoric aside, proper stewardship of our resources and the environment is possible.  Some possible measures:

  • Permits given only to oil companies with good track records, thus maximizing the chances of diligence in well construction.
  • Water wells proximal to intended operations (Jackson suggests 2,000 feet, I believe) tested prior to drilling, at the cost of the operator.
  • Logs required to assure cement integrity; at a minimum the Cement Bond Log, which famously was not run on the Macondo well that blew out in the Gulf.
  • Routine testing of the water wells, with a prompt attempt to seal any well found leaking, and a severe penalty attached to the occurrence.

All of this, plus adherence to sound drilling and completion practice, will ensure the sustainable production of a valuable resource.

NATURAL GAS VERSUS COAL: DUELING REPORTS

September 28, 2011 § 1 Comment

Until recently, natural gas was seen indisputably as a cleaner alternative to coal.  Robert Howarth at Cornell University changed all that, at first abortively in 2009, when his study was demonstrably flawed.  His revised report, which now includes the contribution of fugitive methane in coal mining, has been published.  A hailstorm of criticism notwithstanding, some of the issues merit debate, and a more recent study appears to be supportive as well.  In contrast, the report by the Worldwatch Institute, conducted in collaboration with Deutsche Bank, unequivocally concludes that natural gas is superior, while nevertheless recommending attention to fugitive emissions.

So what is the public to make of all of this?  They are right to assume that the science is deterministic, at least in the broad swaths of the argument in question.  There is no dispute that, when combusted, natural gas produces about 50% less carbon dioxide than coal in producing the same amount of electricity.  Where the dueling reports diverge is in the area of fugitive emissions: these are releases of methane during the operations involved in producing and transporting the fuels.  There is also no dispute that methane is about 25 times more potent than carbon dioxide in its global warming proclivity, on the conventional hundred-year basis.

The bulk of the debate surrounding the Howarth work has been about the time scale of the analysis.  Although methane is much more potent, its potency dissipates much faster than that of carbon dioxide.  So one gets a different result when the effects are studied over twenty years than over a hundred years.  The latter has been the accepted standard, but Howarth and others make an argument for using the shorter time span, which turns out to disfavor methane.  Of note is the fact that when carbon sequestration in deep saline aquifers is considered, the yardstick it is held to is well in excess of a hundred years: the sequestered gas has to be guaranteed not to leak over that period.
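
To see concretely how the horizon changes the comparison, here is a minimal sketch.  The emission inputs are round illustrative assumptions (and coal's own fugitive methane is ignored); they are not figures from either report.

    # CO2-equivalent of gas-fired power under two GWP horizons (IPCC AR4 values).
    GWP_100 = 25   # methane vs. CO2 over 100 years
    GWP_20 = 72    # over 20 years

    CO2_GAS = 0.4      # tonnes CO2 per MWh from gas combustion (assumed)
    CO2_COAL = 0.9     # tonnes CO2 per MWh from coal combustion (assumed)
    CH4_LEAK = 0.004   # tonnes CH4 leaked per MWh of gas power (~3% leak, assumed)

    for label, gwp in [("100-year", GWP_100), ("20-year", GWP_20)]:
        gas_total = CO2_GAS + CH4_LEAK * gwp
        print(f"{label}: gas {gas_total:.2f} t CO2e/MWh vs coal {CO2_COAL:.2f}")

On the hundred-year basis the gas advantage is roughly a factor of two; on the twenty-year basis it narrows considerably, which is the crux of the dispute.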

In the case of coal, the emissions comprise methane found in association with the coal.  For centuries this has been a known hazard of coal mining, both from the standpoint of an unbreathable atmosphere for miners and from the possibility of explosions in confined areas of the mines.  In the past, canaries were famously used as indicators of dangerous mine gases.  If they died you got out in a hurry; a sort of go no-go device.  Some of those still awake through this discourse are no doubt skeptical, because you know you can smell a gas leak in your kitchen.  Well, it turns out methane has no odor; an odorant is deliberately introduced for precisely the intended purpose of olfactory detection of leaks.

Natural gas production and distribution can leak in two principal areas.  One is transportation.  The system of pipelines and associated valve assemblies can leak after aging-induced malfunctions, but this can be addressed through maintenance.  The main source of fugitive emissions is the natural gas produced before a pipeline exists to move it, in the early days of the prospect.  Even in areas riddled with pipelines, a spur line to the new rig in question does not exist at the outset; current custom is to not invest in one until the reservoir is proven commercially viable.  The initial gas produced during the discovery process has nowhere to go, and it is often simply released.  Hence the problem.  Now, it could be flared, which is the process of simply burning it at the end of a pipe.  This would dramatically reduce the impact, since the released pollutant would be carbon dioxide, not methane.  But one imagines this approach is often not taken because flaring draws singular attention to the enterprise.

The public may well ask why something useful is not done with the gas.  The answer lies in part in the short duration of the production.  It cannot economically warrant any sort of capture and use.  But if such a technology were to be developed, the potential would be significant.

A final note:  fugitive methane emissions from livestock exceed those from oil and gas operations.  This is because ruminants such as cows produce methane as a normal consequence of their digestive process: they belch methane.  An outbreak of vegetarianism would help the environment!

MAKING A VIRTUE OF BEING LATE

August 12, 2011 § 2 Comments

This statement has the makings of an oxymoron.  In many settings it certainly is one.  There can be no discernible virtue, for example, in being late for your own nuptials.  Being late for one's own funeral, if that could be pulled off, has decided good points.

Source: ACUS.org

Being late is not precisely the same as coming in second.  Hardly anybody remembers that Tom Bourdillon and Charles Evans came within 300 feet of the summit of Everest three days before the second team of Edmund Hillary and Tenzing Norgay got to the top.  Bourdillon and Evans likely did not even make it into Trivial Pursuit.

In the business of innovation there is a body of literature on the value of being first.  "First mover advantage" is firmly in the business lexicon.  But so is the "fast follower" principle.  Indubitably, fast followers can be blocked by patents.  Intel went out in front early and was never materially threatened.  But many businesses have been built on the premise of letting somebody else build the market and make the mistakes.  There is that old adage: the people in the front get shot.

So, what does all of this have to do with energy?  The history of the development of shale gas is instructive.  After the realization that horizontal wells and fracturing enabled gas production from these tight rocks, the early attempts employed methods previously used elsewhere, in particular sugar-based gels as thickening agents in the fracturing fluid.  The sugar residue impaired production.  Newer techniques, in areas such as the Marcellus, use "slick water".  The results have been dramatic, albeit at the expense of higher volumes of water.

All of the foregoing is just plain building on the experience of the past.  This post on the virtue of being late keys on the point that if fate has dealt you a hand that causes you to be late to the party, find ways to make that a positive.  This is the opportunity presented to the areas of the east coast that have not yet materially been swept up by the shale gale.  These include Ohio, West Virginia, Maryland and North Carolina.  These states must institute measures whereby the exploitation of the resource is done in an environmentally sound fashion while still maximizing the realization of economic value for the communities affected.

The important measures required fall in the following categories:

  • Ensuring that the water-related issues are dealt with from the start.  The foremost is the requirement to re-use all the fracturing water, because improper discharge has plagued parts of Pennsylvania.  Fresh water usage must be replaced, over time, by saline water.  This is technically feasible and simply needs execution.  Water wells proximal to intended drilling should be tested prior to drilling and then routinely thereafter, at the operator's cost.  Chemicals used must be publicly disclosed, with very few exceptions, and even in those cases full disclosure must be made to the authorities.  The use of toxic chemicals such as the BTEX family and diesel in the fracturing fluid is technically unnecessary and should be expressly disallowed.
  • The latest technologies to minimize environmental impact should be employed.  These include the use of pad drilling to minimize road traffic and measures to prevent fugitive methane emissions.  Enabling rule-making, such as unitization schemes to allow pad drilling and mandatory sensing for emissions and indications of casing leakage, must be instituted.
  • A significant fraction of royalties collected should be ploughed back into giving relief to the affected communities.  This includes hardening of farm roads unsuited to the heavy vehicles associated with the exploitation, and the water handling infrastructure.
  • The public must be educated on all the issues and opportunities for dialog should be created.  A clearing house of information is needed for affected parties such as potential land leasers and homeowners proximal to production activity.

The Secretary of Energy commissioned a study whose findings have just been published for public comment.  This is a balanced report with a very positive attitude that is in keeping with the position we have been taking:  shale gas is a game changer and it is incumbent on us to enable it responsibly.  Produced in a scant 90 days, the report is necessarily short on some detail.  But the message is clear and there is an air of optimism.  For this it will undoubtedly be pilloried by some interest groups.

HOW VIABLE IS A CARBON TAX?

July 31, 2011 § 1 Comment

Source: Harvest Power

Most discussion regarding a price on carbon emissions avoids the "tax" word.  Conventional wisdom goes that supporting a carbon tax is a poison pill for re-election.  Consequently, Europe resorted to a cap-and-trade scheme in the belief that this was tantamount to the free market setting a price.  But political pragmatism, in the form of holidays for certain industrial sectors, severely diminished the effectiveness of the instrument.

Cap-and-trade does have allure.  It permits businesses to buy and sell carbon credits, thus providing a measure of flexibility.  However, by its very nature, the scheme engenders uncertainty.  In Europe, the price has ranged from about 13 euros ($18) per tonne of carbon dioxide today to a high of 34 euros ($47) in 2006.  This sort of uncertainty discourages investment capital.  Uncertainty equates to a higher discount rate, which increases the transaction price.  This applies in particular to carbon sequestration schemes, all of which are expensive.  Cleaning coal-derived power to natural gas levels will cost around $30 per tonne.  The European price today could not support that, and more to the point, there would be no certainty as to the price years hence.
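
The discount-rate point can be made concrete with a back-of-envelope calculation.  The capital and operating numbers below are purely illustrative assumptions; the point is the sensitivity to the rate, not the absolute figures.

    # How an uncertainty-inflated discount rate raises the carbon price a
    # sequestration project must see to break even.  Inputs are illustrative.
    def breakeven_price(capex_per_tonne_yr, opex_per_tonne, rate, years=20):
        """Carbon price ($/tonne) at which project NPV is zero."""
        # Annuity factor converts up-front capital into an equal annual charge.
        annuity = (1 - (1 + rate) ** -years) / rate
        return capex_per_tonne_yr / annuity + opex_per_tonne

    # Assumed: $250 of capital per tonne of annual capacity, $15/tonne operating.
    for rate in (0.08, 0.15):   # stable-price vs. uncertainty-inflated rate
        print(f"rate {rate:.0%}: breakeven ~${breakeven_price(250, 15, rate):.0f}/tonne")

With these assumptions, the breakeven carbon price climbs from roughly $40 to roughly $55 per tonne as the discount rate rises, which is exactly why a volatile allowance price discourages capital.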

So, over half a decade of European cap-and-trade experience is not encouraging.  The US Congress's flirtation with such a scheme was dead on arrival even in a Democrat-controlled House.  Today, with DC painted red, the prospects are essentially nil.  A tax is even more inconceivable.

BC could soon stand for Beyond Carbon:  A very interesting piece in the Economist describes some early success for a British Columbia experiment in carbon mitigation.  In 2008, the sitting Prime Minister Stephen Harper was re-elected in part because of his stance against a carbon tax.  Pretty conventional political wisdom at work here.  Almost unnoticed at the same time, the province of British Columbia had a premier, Gordon Campbell, who introduced a carbon tax.  At first pilloried by many, it is now seen as a success by all factions.  The two key elements of success appear to be as follows.  First, and likely most important, the entire revenue was ploughed back into tax refunds to individuals and companies.  Second, the initial tax was modest at $10 per tonne of carbon dioxide, with a prescribed annual rise of $5; it will be $30 in 2012.  The predictability goes right to our earlier point: it is capital-investment friendly.

The BC experiment appears to be a success story, although it is still only Year 4.  Per capita fuel consumption is down by 4.5%.  Economic indicators are all positive, albeit in a petroleum-rich province.  Campbell was re-elected a year after the tax was instituted, and his successor kept all the elements of the popular program.  The current tax of $25 is substantially higher than the effective price in Europe and yet has proved acceptable.

So, what makes British Columbia different?  All Canadian politics is skewed to the left, no matter the party.  And yet Harper appeared to have been elected in part on a platform of no carbon tax.  BC is more temperate in climate than the rest of the country, so per capita energy use is likely lower, not unlike California and Oregon, which have had their successes with energy policy.  All this rationalization aside, the trick probably was the re-use of the revenue to benefit the public and industry.  The province of Alberta does something similar by taxing bigger producers and putting the revenue into a fund for improving the environment.  This too is a popular measure, but less far-reaching.

Now Australia takes the plunge:  Australian Prime Minister Julia Gillard just this month announced a tax of A$23 per tonne on the 500 worst polluters, mostly coal mines and steel companies.  She needs Green Party support in Parliament, and this was a factor, as it was for Angela Merkel in Germany on the decision to retreat from nuclear power.  When all the teeth-gnashing is done, some bare facts are illuminating.  The net addition to the price of coal is estimated at A$1.50 per tonne, against the backdrop of a near all-time high of A$300 per tonne feeding a voracious Far East market.  The proverbial drop in the bucket.  High quality Australian iron ore is also at a high, with prices in the range of A$150 per tonne against a cost to produce of A$40.  In general, the Australian economy is on a high; unemployment is low and exports are strong despite an incredibly strong Australian dollar.  Taxing industry in this situation is a lot easier, especially when the resulting revenue is being returned.  In fact, in the next four years the related expenditure will substantially exceed the revenue.  But this economy can afford it.

The key to any acceptance of the Australian scheme is, once again, the manner in which the revenue is used.  Half will go to reimbursing consumers for the resulting increase in electricity cost.  Another 40% will go to aiding the transition of the hardest-hit companies to cleaner technology.  The plan will pass Parliament but may not at first be popular with the public.  But neither was it in the early days in British Columbia.  The plan may endure if the results are right, but Australia's first female prime minister may not survive the next election.  She was elected this time around on a firm platform of no carbon tax, and opponents will be keen to remind the public of that.

Impressions from Plug-In 2011

July 26, 2011 § Leave a comment

Plug-In 2011 was held in Raleigh, NC, the first time ever outside California.  The attendance was just short of last year's in San Jose.  The Public Night drew 1,300 folks, on par with the San Jose event.  So, the experiment may be deemed a success.  Next year: Texas.  They will have big shoes to fill.

On balance, the programming was a good mix of depth and breadth.  The competitive spirit between Nissan and Chevy was something short of collegial.  This was disappointing for a forum such as this.  At this stage, with just two entrants, the task is to get the public enthused about the genre.  The Volt and the Leaf have dramatic differences that allow choice.  Leave it at that.

One speaker made the observation that young girls were particularly enthused about electric vehicles.  This is interesting because cars and trucks used to be the domain of little boys.  When the observation was made from the floor that soon girls would be playing with zinc die cast cars, Chelsea Sexton wryly raised her hand from the podium.  OK, so Chelsea, unmistakably of the female gender, was one such, but we all know that she is unique!  The trend, if verified, holds the promise of another breach in the science and mathematics wall that young women face in today’s society.

Batteries will be the difference:  A prediction was made by a speaker that battery costs could drop to $200 per kWh by 2018.  This is in keeping with our own view.  A scant two years ago this figure was believed to be around $950.  A recent report from the UK offers evidence of Nissan batteries costing about $375.  The report cites GM, Ford and Toyota as all being bearish about numbers below $500.  Admittedly, several factors are in play in this seemingly steep drop to date.  Mass production inevitably lowered the cost.  Further, Nissan, in partnership with battery giant NEC since 2007, could well have made breakthroughs in manufacturing.  This is all to the good, because battery cost and performance will be the key driver of electric vehicle uptake.

Cost aside, the key attributes of batteries in play are range, energy density (size for a given range), speed of charging, and discharge characteristics as linked to ultimate life.  Depending on how these play out, the choice between hybrids such as the Volt and all-electrics such as the Leaf will be affected.  Take speed of charging.  If a 22 kWh top-up could be accomplished in, say, 20 minutes, Range Anxiety gets a dose of Valium.  This is because 22 kWh will give you about 100 miles, and one could envision "refueling" stations in sufficient number to allow longer drives.  Lest this sound like wishful thinking, know that at least two outfits are working on improving the lithium iron phosphate cathode to accept a blindingly fast charge rate.  If this came to pass, the comfort of the gasoline backup in the Volt could be less of a factor.  In fact, the 40-mile electric-only feature begins to look puny in terms of low-cost, emission-free range.
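
As a quick reality check on what a 20-minute top-up implies (the home-circuit figure below is a typical assumption, not conference data):

    # Charger power implied by a 22 kWh top-up in 20 minutes.
    energy_kwh = 22
    minutes = 20
    power_kw = energy_kwh / (minutes / 60)
    print(f"Required charging power: {power_kw:.0f} kW")   # ~66 kW

    # For contrast, a typical 240 V / 30 A home circuit delivers:
    print(f"Home circuit: {240 * 30 / 1000:.1f} kW")       # 7.2 kW

A 66 kW feed is dedicated-station territory, which is why fast charging implies purpose-built "refueling" stations rather than home outlets.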

An improvement in energy density helps both types of vehicles, but likely more the all-electric.  Presently the all-electric is advantaged in weight because it loses a lot of heavy equipment such as the internal combustion engine, transmission, gear box and so forth.  But the larger battery adds significant weight.  Energy density improvements will help, although the temptation may be to give more range for the same weight and volume.

Motors could be the difference:  Though not in the same way as batteries, advances in motors could nevertheless have a material impact.  Shown at the conference were in-wheel motors.  Fitting in any 18-inch wheel, these allow elimination of the differential and could provide very precise four-wheel drive capability.

Most of the motors today use permanent magnets containing the Rare Earth elements Neodymium and Dysprosium.  In fact, the Prius has up to 25 pounds of Rare Earth elements.  Neodymium in particular is a problem because its price has quadrupled in the last year.  Substitute elements are being sought for the magnets.  But an entirely different avenue is to eliminate the need altogether through the use of induction motors.  Nikola Tesla invented the induction motor back in 1888.  For proper functioning it requires precise computer-aided controls, and only recently has it been rendered truly functional and economical.  The Tesla Roadster has one, and several companies, including BMW, are planning to follow.  Aside from non-reliance on Rare Earth elements, having no permanent magnet means no need to cool the unit (permanent magnets do not function well at higher temperatures).  This type of motor may also not need a gear box, as is the case in the Tesla.

Plug-In 2011 exuded optimism.  All data to date, including consumer feedback, appears to justify it.

ELECTRIC VEHICLES USE LESS ENERGY

July 6, 2011 § 4 Comments

The most obvious benefit of electric vehicles (EV’s) is the replacement of imported oil with electricity.  The zero emissions at the tailpipe are another plus, but as is often pointed out, the problem is merely shifted to the power generating plant.  Carbon sequestration is more tractable at such locations than at the vehicle.

A relatively less known fact is that electric vehicles use less energy than conventional cars.  To be specific, EV’s expend fewer units of energy to travel the same distance.  This is important because simply using less energy to receive the same gratification is a powerful arrow in our carbon mitigation quiver.  So, how much less energy do they in fact use?

In a departure from blogs of the past, we will calculate this right here.  We will use the following facts and assumptions:

  • A gallon of gasoline has 116,100 BTU, which equals 34 kWh
  • The average car being replaced delivers 35 miles per gallon
  • For years the dogma has been that EV's use 0.2 kWh per mile.  Nissan claims that the Leaf averages 0.25 kWh per mile.  As in all electric and hybrid cars, stop and go gives better mileage than continuous operation, so the number could be higher for sustained highway driving.  We will use the 0.25 number for this exercise.
  • Refining oil to give gasoline consumes 20% of the energy in the oil
  • Coal-fired plants have an efficiency of 40% (by using coal rather than gas we are being conservative; this figure is that of newer supercritical combustors)
  • Electricity lost in transmission is 8%
  • Energy to get the oil out of the ground roughly washes with the energy spent mining coal.  Had we used the less conservative gas source for electricity, the offset would have been nearly exact

So, energy losses for gasoline prior to its being consumed in the vehicle are 20%.  Total energy used per mile is:  34 kWh in a gallon, divided by 35 miles to the gallon, further divided by 0.8, equals about 1.21 kWh per mile

Energy losses for EV's compound: 40% efficiency at the generating plant, multiplied by the 92% that survives transmission, gives an overall efficiency of about 37%.  Energy used by EV's equals 0.25/0.37, or about 0.68 kWh per mile

The ratio of these two is about 1.8.  In other words, a conventional vehicle uses nearly 80% more energy than an EV for the same purpose.  Is this exactly right?  Probably not, but it is not off by much.  The key takeaway remains that the EV advantage has a facet that is not commonly recognized in quantitative terms.
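
The arithmetic, with the loss factors compounded explicitly (inputs are the assumptions listed above):

    # Well-to-wheels energy per mile, per the assumptions above.
    KWH_PER_GALLON = 34.0       # energy content of gasoline
    MPG = 35.0                  # conventional car being replaced
    REFINING_EFF = 0.80         # 20% of the oil's energy lost in refining

    EV_KWH_PER_MILE = 0.25      # at the wall (Leaf figure)
    PLANT_EFF = 0.40            # supercritical coal plant
    TRANSMISSION = 0.92         # 8% lost in transmission

    gasoline = KWH_PER_GALLON / MPG / REFINING_EFF        # ~1.21 kWh/mile
    ev = EV_KWH_PER_MILE / (PLANT_EFF * TRANSMISSION)     # ~0.68 kWh/mile
    print(f"gasoline {gasoline:.2f}, EV {ev:.2f}, ratio {gasoline / ev:.2f}")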

Implications for other oil replacement means:  The other principal avenues to replacing oil for transportation are natural gas fired vehicles, biofuels and gas-to-liquids derived fuel.  For the latter two, the energy used to produce the fuel is certainly worse than for conventional gasoline or diesel, so the EV advantage holds in this regard.  Natural gas powered vehicles offer some interesting possibilities.  The energy to produce the fuel should be somewhat less than for gasoline.  The intriguing possibility for additional efficiency lies in taking advantage of methane's high octane rating of around 125.  Diesel-type compression ratios (14 to 22) would certainly provide more work per unit volume of gas, although I know of no move to capitalize on this opportunity.  But the sheer thermodynamic (Carnot) inefficiency of combustion engines will doom them to always compare unfavorably with EV's.

The foregoing notwithstanding, oil replacement is too important an objective to not pursue all the alternatives.  For one, the alternatives discussed can be retrofitted to the current fleet.  EV entry will necessarily be slow and conventional vehicles will continue to be built.  We need to provide alternatives to imported oil to power them.

Implications for federal targets on vehicle mileage:  A recent story reports on negotiations between the White House and the auto industry on mileage standards.  The proposed target of 56.2 mpg by 2025 is almost double the average today.  On the face of it, the only real impact of EV's is that of an effectively very high mileage to offset the guzzlers.  I am not clear on how one calculates the mpg for EV's, given that no liquid fuel is involved except for Plug-In Hybrids when not in electric drive mode.  One way to do it would be to use the data shown above.

The Nissan Leaf drives 100 miles on its 24 kWh battery, or 0.24 kWh per mile.  Gasoline has 34 kWh per gallon, so the gasoline equivalent of 24 kWh is 24/34 = 0.705 gallons.  The equivalent mpg for the Leaf is therefore 100/0.705 = 141.8 miles per gallon.
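
As a reusable form of that conversion (the helper name is ours, for illustration):

    # Convert an EV's electricity use into an equivalent mpg.
    KWH_PER_GALLON = 34.0

    def equivalent_mpg(kwh_per_mile):
        return KWH_PER_GALLON / kwh_per_mile

    print(f"Leaf at 0.24 kWh/mile: {equivalent_mpg(0.24):.0f} mpg equivalent")  # ~142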

The mileage target has two purposes:  reduce import of oil and reduce emissions.  The latter is recognized in that EV’s have zero tailpipe emissions, but we need also to take into account that the overall energy used is less.  Since the majority of electricity is produced using fossil fuels, using less per mile driven is a direct reduction in fossil fuel based emissions.   This somehow needs to be recognized in the mileage target debate.

Gallons Per Mile: Its Time May Have Come

July 6, 2011 § 1 Comment

Three years ago Richard Larrick and co-workers at Duke University proposed using gallons per mile instead of miles per gallon (mpg).  In their paper they pointed out that mpg is not linear and hence not intuitive for decision making.  Such a decision arises, for example, when considering replacing one of your two vehicles with a more fuel-efficient one.  Vehicle X is a land cruiser at 10 mpg and Vehicle Y is a compact at 25 mpg.  One would assume that replacing Y with a 40 mpg vehicle would save more than replacing X with a 20 mpg vehicle.  Not so.  Not even close.

Assume that you drive 200 miles a week.  Vehicle X would consume 20 gallons of fuel and the replacement would consume 10 gallons.  At $4 a gallon, the saving would be $40.

Vehicle Y would consume 8 gallons and the replacement would consume 5 gallons.  The saving would be $12.

This underlines the character of mpg: it is not intuitive for decision making.  Gallons-per-100-miles is completely linear.  Using that metric, it is abundantly clear that the Vehicle X replacement results in more savings.
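
The example in gallons-per-100-miles terms, using the numbers above:

    # Weekly savings from each replacement, computed via gallons per 100 miles.
    PRICE = 4.0            # $ per gallon
    MILES_PER_WEEK = 200

    def gal_per_100mi(mpg):
        return 100.0 / mpg

    for name, old, new in [("X (land cruiser)", 10, 20), ("Y (compact)", 25, 40)]:
        saved = gal_per_100mi(old) - gal_per_100mi(new)    # gal per 100 miles
        weekly = saved * MILES_PER_WEEK / 100 * PRICE
        print(f"Vehicle {name}: saves {saved:.1f} gal/100 mi, ${weekly:.0f}/week")

The linearity is plain: Vehicle X's 5 gallons saved per 100 miles dwarfs Vehicle Y's 1.5, which the mpg numbers obscure.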

Three years ago that paper did not catch people’s fancy.  One possible reason is that we have difficulty with a smaller number being more desirable.  Also, gallons per mile would be a fraction; not wanting to deal in fractions forced the use of gallons per 100 miles or some such. The biggest reason was likely inertia and not wanting to replace the familiar.

But now it’s time may have come.  Electric vehicles (EV’s) could force the issue.  The easiest way to compute the “fuel” cost in EV’s is to determine the electricity used per mile and multiply by the cost of electricity.  EV’s are likely to be very similar to each other in the charge used per mile.  This is a huge departure from conventional vehicles, where the fuel efficiency can be dramatically different.  EV differences will primarily be premised upon body weight.  Other than that the variability will largely be in the type of driving: stop and go city driving will be more economical than distance driving, largely due to the regenerative braking systems.  This is of course the opposite of conventional cars and the EPA will have to consider this in their testing.

The capacity of an EV battery is measured in kilowatt hours (kWh).  The Nissan Leaf has a capacity of 24 kWh and uses about 0.25 kWh per mile, so it has a range of roughly 100 miles of average driving on a full charge.  There you have it.  The kWh per mile is your gallons-per-mile analog.  For EV's nothing else makes more sense as a metric, because the consumer knows what she pays for electricity at home and can easily compute the fuel cost.  In fact, the ease of computation is going to be key to driving the right battery charging behavior.  We expect utilities to incent night-time charging by heavily discounting rates at night.  Absent that, significant daytime charging could create serious problems for the grid.

Now, one could argue that EV's should employ kWh per mile while conventional cars remain as before.  Sure, but when responding to federal fuel efficiency mandates, the mixed fleet will need to be assessed as a whole.  In so doing, gallons per 100 miles, taken together with kWh per 100 miles for EV's, will allow direct comparison, because we know that a gallon of gasoline has 34 kWh of energy.  Gallons per 100 miles converts directly to kWh per 100 miles.  Somehow I doubt the populace will go for a wholesale switch to kWh.  But changing from mpg to gallons per 100 miles will allow easy computation of the fuel efficiency of a mixed fleet and direct comparison with EV's for consumer choice.

IS SHALE GAS PRODUCTION PROFITABLE?

July 4, 2011 § Leave a comment

A New York Times piece on June 26, 2011 discusses this proposition and is very bearish on the prospects.  We acknowledge the principal points: some in the industry worry about profitability, especially given the low prices of the last year or two.  We present here a case for optimism.  These are early days in the exploitation of a completely new type of reservoir.  Continuous improvement, as in any industrial endeavor, can be expected, and in the case of shale gas the learning curve is likely to be steep.  In part this is because of the sheer volume of activity.  A well can be drilled and brought on production in as few as twenty-one days.  The setting is almost akin to a factory, which we all know is the type of setting amenable to rapid learning curves.

Production from shale gas wells declines rapidly:  The decline is steep, with a drop of 60% to 80% in the first year. (Conventional reservoirs decline 25% to 40%.)  After year two the decline is gradual.  The mechanism is likely premature closure of the fractures, possibly due to insufficient penetration of proppant into the formation. (Proppant is sand or other ceramic material injected into the hydraulically created fractures to "prop" them open and allow gas to flow; absent this, natural stresses would close the fractures.)  Industry is working on materials and techniques to deliver improved and more sustained flow.  A Rice University originated product sourced from nanomaterial is in the early stages of commercialization.
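
To see why the steep early decline dominates the economics, here is an illustrative decline profile.  The initial rate is normalized and the tail decline is an assumption; only the 60% to 80% first-year range comes from the discussion above.

    # Normalized production under a steep first-year decline and a gentle tail.
    rate = 1.0                 # year-1 production, normalized
    first_year_drop = 0.70     # midpoint of the 60-80% range
    tail_drop = 0.10           # assumed gradual decline thereafter

    total = 0.0
    for year in range(1, 11):
        total += rate
        print(f"year {year}: rate {rate:.2f}, cumulative {total:.2f}")
        rate *= (1 - first_year_drop) if year == 1 else (1 - tail_drop)

Over a third of the ten-year output arrives in year one, so the well must earn its return quickly; this is also why a cheap refracture that restores the initial rate, discussed next, is so attractive.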

Refracturing:  This is where new fractures are initiated in existing well bores, often directly on top of the old ones.  In the few cases where it has already been attempted in the Barnett, the results have been dramatic.  Initial production rates have reached and exceeded the original starting production.  And sometimes they decline at the same rate as before.  This is indicative of the possibility that new rock pores are being accessed.  Research is ongoing, at the University of Texas to name one place, and one could expect results to be variable for some time.  At present, research indicates that the optimal time to refracture is two to three years after initial production.

Somewhat ironically, a shortcoming of the resource, its poor permeability (a measure of the ability of fluids to flow in the rock), may be why this technique works.  Ordinarily, poor permeability means less flow, and hence less production.  Fracturing improves that.  But if the fracture paths are impaired as explained above, the gas does not get fully drained.  It remains available to new fractures, and is for all practical purposes new rock despite being proximal.  From the standpoint of the economics of the prospect, all that matters is that each operation causes enough production to assure a rate of return.  The fast declines are not highly material if this economic threshold is met.  One final point: refracturing costs a fraction of the original well because no new well bore is drilled.  So the newer gas has a cost basis that could be a third or less of that of the initial gas.  That does wonders for prospect economics.

Wet Gas:  There is a passing allusion to this in the NY Times piece, but it deserves serious attention because of its dramatic effect on profitability.  Wet gas is defined as natural gas with a significant component of hydrocarbon species other than methane.  The economic significance lies in the spread between natural gas and oil prices.  Gas, on the basis of energy content, is currently priced at about a fourth of oil.  Decades ago the two were at parity.  Natural gas liquids, the "wet" part of wet gas, are priced in relationship to oil.  Condensate is at or somewhat above the oil price; butane is higher still because it is essentially a drop-in replacement for gasoline.  Propane is at a discount to oil, as is ethane.  Ethane is the least costly, at about half the price of oil.  But all of these are vast improvements over the price of methane.  A typical Marcellus wet gas prices out about 70% over dry gas.  Range Resources reports that at a flat $4 per million British Thermal Units (MMBTU) gas price (incidentally, the average for 2010 was around this figure), their Internal Rate of Return would be 60%.  That is far more profitable than any conventional gas prospect.
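
The "fourth of oil" comparison is easy to check on an energy basis.  The oil price here is an illustrative mid-2011 figure, not one quoted in the post.

    # Oil vs. gas price per unit of energy.
    oil_price = 95.0          # $/barrel (assumed)
    mmbtu_per_barrel = 5.8    # energy content of crude
    gas_price = 4.0           # $/MMBTU

    oil_per_mmbtu = oil_price / mmbtu_per_barrel
    print(f"oil ${oil_per_mmbtu:.1f}/MMBTU vs gas ${gas_price:.1f}/MMBTU "
          f"(ratio {oil_per_mmbtu / gas_price:.1f}x)")   # ~4x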

Marcellus, the largest and most prolific of the North American deposits, has a wet character on its western side.  West Virginia and Ohio, as yet minor producing states, are advantaged in this regard, as is western Pennsylvania.

How things will play out:  Given the facts above, expect the wet gas prospects to be produced first.  Over the next few years the price of methane will rise on demand.  Massive switching from coal-fired electricity to gas will occur, because even without a price on carbon, the all-in cost of electricity from gas is less than from coal at gas prices below $8 per MMBTU.  In a recent publication we present a model predicting gas prices as having a lid at about $8.  This stability will contribute to switching from oil to gas.  The switches will include methane propulsion of vehicles and gas-to-liquids derived diesel and gasoline.  Over time this, plus electric vehicles, will make a significant dent in our $400 billion annual imported oil bill, and hence our balance of payments.  Importantly, gas prices will be less subject to the whims of the weather because heating and cooling will be an ever decreasing component of gas usage.

The demand creation will allow a gradual return to dry gas production.  Some of the earlier plays are profitable at $4 already.  But a rise in the floor price will ensure the supply that will be dictated when the trends described above mature.

And one day the NY Times will have a page one above the fold piece on how shale gas transformed the US economy.  Then I will wake up.

Will Cheap Natural Gas Hurt Renewables?

June 18, 2011 § Leave a comment

A recent story posted by the Worldwatch Institute addresses this issue.  The story in and of itself has nothing new, in that it discusses the various elements in play but offers no new insights.  But it does cause us to mull the issue, which has come up repeatedly at lectures I have given on natural gas related matters.

We have blogged on, and published, the view that shale gas production will keep gas prices low.  This is largely because shale gas wells are on land and shallow by industry standards.  These wells can be in production 30 to 60 days after commencement.  This short lead time effectively keeps a lid on the price: if the three-month strip is seen as going up, new wells can be in production well within three months.  This sort of certitude will also discourage speculative investment in the commodity.  The floor price will be set by the conversion from coal to gas for electricity.  Half of the coal plants not expected to meet the latest EPA standards on mercury and NOx are over 40 years old, and these fully depreciated plants will not be refurbished.  The only options are new coal, nuclear and natural gas.  New coal is disadvantaged on price alone until natural gas reaches $8 per million BTU; today that price is $4.40.  So, with the aforementioned ceiling, coal is not the economic choice.  Nuclear has suffered a blow from the Fukushima Daiichi disaster.  So natural gas will be the fuel of choice.  Eventually the shift to gas will cause the price to rise, but the lid will still be around $8.

Cheap natural gas will also cause a shift from oil to gas wherever possible.  This additional demand will keep the price up in the medium term.  So, let us assume $8 as the stable price.  At this price, electricity will be delivered at a little under 7 cents/kWh.  This is the grid parity price that alternatives will have to meet on a direct economic basis.
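
A rough check of that benchmark.  The heat rate and non-fuel costs below are typical assumptions for a modern combined-cycle plant, not figures from the post.

    # All-in cost of gas-fired electricity at $8/MMBTU.
    gas_price = 8.0        # $/MMBTU
    heat_rate = 7000.0     # BTU per kWh, modern combined cycle (assumed)
    non_fuel = 0.013       # $/kWh capital recovery plus O&M (assumed)

    fuel = gas_price * heat_rate / 1e6              # $/kWh
    print(f"fuel {fuel * 100:.1f} c/kWh, all-in {(fuel + non_fuel) * 100:.1f} c/kWh")

That lands at about 6.9 cents/kWh, consistent with the "a little under 7 cents" figure.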

This benchmark price is lower than the fully loaded price of new nuclear plants, which will be over 10 cents.  Currently, wind delivers at 9 to 16 cents, depending on location.  Offshore wind may be higher yet at this time.  Wind also often suffers from the need to add transmission infrastructure.  This is especially the case for offshore facilities.  There is also the celebrated case of Boone Pickens terminating a major land-based investment due to the absence of concrete plans to add transmission lines.

Strictly from a techno-economic standpoint, wind still has an upside.  Engineered solutions are likely to drop the price from current levels.  But it continues to suffer from diurnality, and so needs to be paired with another source or with storage mechanisms.

Policy Matters:  Without a price on carbon, the carbon-free alternatives of wind, nuclear and solar are seriously disadvantaged.  Taxes are anathema to the current Congress.  Cap-and-trade has not worked particularly well in Europe, in part due to uncertainty, which effectively increased the discount rate on investment.  Also, any cap-and-trade conceived by Congress would undoubtedly have numerous exclusions and grandfathering.  The province of Alberta in Canada has an interesting model: it taxes high-carbon-footprint heavy oil production over a certain volume, and the money is placed in a special fund expressly for the purpose of addressing environmental issues associated with oil and gas.  Such directed use of tax proceeds is more palatable.  Conceivably, the fund could subsidize renewables for a period of time.

Finally, one could resort to the current method of imposing a renewable portfolio standard.  This is in effect a tax on the consuming public, because the renewable energy costs more.  The solar subsidy in Germany is passed on directly to the consumer as well, but that is largely possible due to the considerable influence of the Green Party.  Short of taxing conventional oil and gas, consideration could be given to decreasing its incentives and redirecting those funds.

Conclusion:  Cheap natural gas will place every other source of electricity, including renewables, at a disadvantage for the short to medium term.  Reliance on market forces alone will slow the introduction of renewable energy.  Policy mechanisms are needed to level the playing field, at least from the standpoint of carbon neutrality.  The most equitable method may be a U.S. analog of the approach used in Alberta.  By all accounts, that policy is embraced by the public and industry alike.

Fukushima and Beyond

April 27, 2011 § 1 Comment

Source: Areva

This post is fairly representative of the discussions at the Breakfast Forum held on April 21.

The Fukushima Daiichi disaster is now classed with Chernobyl in magnitude.  It is clear that the single most significant cause of the radiation leakage was the power blackout.  The cascade of events began with the tsunami, which breached the seawall.  Tsunamis are classed by the height of the wave upon shore impact.  This one was judged to be 46 ft, and the breach occurred because the seawall was designed for 19 ft.  Two previous tsunamis, in 1933 and 1896, had reportedly been substantially higher than this one.  The numbers themselves may be somewhat suspect, because these measurements are not standardized and certainly were not a century ago.  Nevertheless, the seawall appears to have been under-designed.

The flooding caused what is known as a station blackout: no power of any sort.  The primary reason appears to be that the backup diesel generators were in the basement and got flooded.  The reactor had in fact been shut down well before the wave hit.  But nuclear reactors continue to generate heat after shutdown, although this progressively decreases.  Cooling water is required, in the absence of which the fuel melts and releases radioactivity.  Due to the extended power failure the core sustained damage.  The resulting reaction of the overheated cladding with water produced free hydrogen.  The systems designed to prevent explosion of this hydrogen did not function, in part due to the lack of power.

The power loss was the single weakest link in the chain.  The significance for the US situation is that spent fuel is stored in water-cooled environments at 80 locations, and these are largely already at full capacity.  This is occasioned by the policy decision not to store at Yucca Mountain.  So, while tsunami-mediated flooding is not likely on the mainland, power loss due to other causes, including sabotage, cannot be ignored.

Policy Implications:  Each new disaster increases the understanding of some aspect of the causes of catastrophic events.  Corrective actions are taken, and the next one likely has a different root cause.  This raises a question about the futility of attempting to prevent Black Swan or Perfect Storm events (take your pick of popular descriptors).  The generally held belief is that these result from combinations of very low probability events, so further decreasing the probability of each is possibly not terribly productive.  But doing nothing in the face of calamity is simply not an option.  Besides, in the case of Macondo, certain measures have emerged which will definitely improve overall safety.  In this one, a clear finding with broad-scale implications is the loss of cooling and its implications for backup power and other safeguards.  The applicability to spent fuel storage is direct and decidedly important.  Were it not for this incident, the public would have seen the spent fuel repository debate as relegated to the cognoscenti.  Now the risks of distributed storage can be communicated to them with simplicity and accuracy.  Maybe, just maybe, this particular ostrich will pull its head out of the sand and we will get a national plan that makes economic and environmental sense, yet is responsive to nuclear proliferation concerns.

The time span between major energy-related incidents was 25 years in this case and 30 years for deepwater oil spills.  One cannot help but wonder whether complacency is a factor.  Certainly in the case of the Macondo spill, industry experts have acknowledged it as one.  One theory advanced in our Breakfast Forum concerned worker training and longevity on the job.  The Navy was cited as an organization that deals with reactors routinely but does so with a workforce that is exceedingly well trained and operates under strict safeguards.  The energy industry tends to be cyclic, and profit motivation is definitely in play, as repeatedly alleged with regard to BP in the Macondo incident.  Regulation, preferably self-imposed by industry, could address the matter.

Of interest is the observation, to date, that the earthquake per se did not damage the reactor.  This despite the fact that, at 9 on the Richter scale, it is one of the largest ever recorded.  California can take some measure of comfort, one assumes.

Consequences for Power Production:  Germany has already reacted by withdrawing the permits to extend the life of about 8 reactors, a reversal of an earlier decision.  Switzerland is stopping issuance of new permits.  Any country that takes such measures is left with few choices, the principal one being natural gas generation.  In the case of Germany, this means increased reliance on Russian gas or LNG imports.  Also, replacing nuclear with gas creates a carbon deficit.  All previous scenarios for carbon mitigation relied heavily on nuclear as a zero carbon source.  To the extent that this is seriously compromised, the low carbon future targets are in even greater jeopardy.

Wind is the leading candidate to fill the carbon gap.  It is closer than solar to achieving parity with conventional sources.  One feature in its favor is that it is highly modular; production can commence quickly even as more capacity is being added, provided the delivery infrastructure is in place.  Aside from the fact that it too is buffeted by environmental and aesthetic opposition, the diurnal aspect appears to limit the total contribution in a given area.  Many believe the cap to be around 20%, but credible studies supporting a particular number are not in evidence.  Also, we do not know the extent to which new storage measures and the smart grid could ameliorate this drawback.

The nuclear option faces an interesting dilemma.  Already burdened by high capital costs and a long lead time to first production, the perceived additional risk is bound to increase the discount rate, making the investment even larger.  Purely from the standpoint of economics, extending the permit life of existing reactors, albeit with improvements, will be the choice.  The irony is that this will perpetuate older, and presumably somewhat less safe, technology.

All energy comes with a price.  It is a question of choice.  You can’t leave it, so best learn to love it.
