MIT Natural Gas Report Glosses Over Environmental Issues

July 1, 2010 § 1 Comment

MIT’s most recent report on energy is on the Future of Natural Gas, following similar reports on coal and nuclear energy.  It is co-edited by Ernest Moniz and Tony Meggs; the latter recently left BP, where he was CTO.  As reported recently in Forbes, the report emphasizes the role of shale gas in enabling the substitution of natural gas for coal.  The authors see this as a transitional strategy for a low carbon future.  We agree, and have expressed similar ideas in the Directors Blog.

However, the report is surprisingly reticent about the environmental issues facing shale gas exploitation.  While we believe these issues are indeed tractable, they merit much more discussion than they were given.  Accordingly, we repair some of that omission here.

The most significant issues center on three matters:  fresh water withdrawals, flow back water and collateral issues, and produced water handling and disposal.

Fresh Water Withdrawals and Flow Back Water:   A typical well uses between 3 and 5 million gallons of water.  Industry practice has been to use fresh water as the base for fracturing fluid.  The water that returns to the surface after the fracturing step is known as flow back water.  Shale operations are unusual in that only about a quarter to a third of the water returns; the rest stays in the formation.  The flow back water is also usually more saline than the injected water, so in principle it cannot be re-used.

Handling salinity is the first step toward water conservation.  The key is the ability of the fracture water to tolerate some level of chlorides.  Recent research has shown that this is not only possible but can be beneficial: the chlorides actually stabilize the clay constituents of the shale and improve production, although companion chemicals such as friction reducers need to be modified.  This has two implications for water withdrawals.  First, after some measure of treatment, the flow back water should be re-usable.  But because not all of it returns, withdrawals for make-up water will still be necessary.  This is where the second implication comes in: moderately saline water from another source could be used, since salinity is tolerable.  The most important consequence is that flow back water could over time be completely re-used, at which point discharge ceases to be an issue.

So, now let us discuss numbers.  In current practice the tolerance for chlorides is likely about 40,000 ppm.  Flow back water with higher salinity will need to be desalinated to some degree, or diluted with fresh water.  In some parts of the country this may be viable.  Another option could well be to use sea water, where that is the water of convenience.  Sea water contains on the order of 30,000 ppm of dissolved salts, mostly chlorides.  That is already in the range of acceptability, with the possible removal of some minor constituents.  Finally, saline aquifers are a potential source.  These are in great abundance, with variable salinities.  Saline water wells drilled as companions to the gas wells are a likely option in areas where fresh water withdrawals compete with agriculture or other endeavors.  In general, if the shale gas industry can utilize water unsuited to agriculture and human consumption, it will be seen in a completely different light.
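To make the dilution arithmetic concrete, here is a minimal sketch of the blending calculation implied above.  The 40,000 ppm tolerance and the roughly 30,000 ppm seawater figure come from this post; the flow back and fresh water salinities are hypothetical placeholders.

```python
# Sketch: blending flow back water with a diluent to meet a chloride
# tolerance for fracture make-up water. The 40,000 ppm tolerance is the
# figure quoted above; all other inputs are illustrative assumptions.

def blend_fraction(source_ppm, diluent_ppm, target_ppm):
    """Fraction of the blend that can be source water (simple mass balance).

    Solves f * source + (1 - f) * diluent = target for f.
    """
    if source_ppm <= target_ppm:
        return 1.0  # already within tolerance, no dilution needed
    if diluent_ppm >= target_ppm:
        return 0.0  # diluent alone exceeds tolerance; must desalinate
    return (target_ppm - diluent_ppm) / (source_ppm - diluent_ppm)

TOLERANCE_PPM = 40_000     # chloride tolerance cited above
flowback_ppm = 90_000      # hypothetical flow back salinity
fresh_ppm = 250            # hypothetical fresh make-up water
seawater_ppm = 30_000      # seawater figure cited above

f = blend_fraction(flowback_ppm, fresh_ppm, TOLERANCE_PPM)
print(f"With fresh make-up water, {f:.0%} of the blend can be flow back")
f = blend_fraction(flowback_ppm, seawater_ppm, TOLERANCE_PPM)
print(f"With seawater make-up, only {f:.0%} can be flow back")
```

The same mass balance works in reverse to size the desalination step: the lower the diluent salinity, the more flow back water each barrel of make-up can carry.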

Produced Water

Water associated with the gas is produced at some stage of the recovery, usually towards the end of hydrocarbon production.  In some cases early water production occurs because the fractures infiltrate the underlying saline water body often present.  Whether from connate water or the water layers below, produced water will be very saline, in part because of the age of the rock.  Disposal of this water is a major issue, especially in New York and Pennsylvania, and can cost upwards of $10 per barrel, when it is possible at all.  Concern among residents regarding illegal discharge is high.

The treatment of produced water represents a significant business opportunity.  Several outfits are developing forward and reverse osmosis schemes for desalination.  Others are working on bacteria eradication, heavy metal removal and the like, using methods such as membrane filtration and ion exchange.  Some of these are already in service on a limited basis.

Produced water offers the promise of being usable as make-up water after modest treatment.  The salinity may be directly tolerable, but the bacteria would need to be removed prior to re-use, because many of them cause the production of hydrogen sulfide downhole, which makes the gas less valuable and corrodes the equipment.

Contamination of Drinking Water

There have been anecdotal reports of well water contamination by gas, most recently sensationalized by a documentary.  The popular literature offers two hypotheses for this phenomenon.  One is the migration of fractures from the reservoir up to the water body.  The other is gas leakage from the well.

Hydraulic fracture cracks will not propagate the significant distances to the aquifers; were they inclined to do so, they would heal under the earth closure stresses.  In terms of distance, the closest fresh water aquifers are about 5000 ft. and 3000 ft. away for the Barnett and the Marcellus, respectively.  So this really is not likely.

Gas leakage from the well is preventable if the well is drilled and completed correctly.  A fundamental feature of regulation has always been to design for isolation of fresh water in all petroleum exploitation, not just in shale.  Between the produced fluids and the aquifer lie two layers of steel encased in cement.  The cementing operation is designed to prevent fluid migration.  Tests are run to ensure the competence of the cement job, and remedies are available for shortcomings.  At these shallow depths the operation is extremely straightforward and amenable to regulatory oversight.

See Also: New York Times’ response to the study

What Really Happened Out There in the Gulf

June 23, 2010 § Leave a comment

On June 21, 2010, coincident with the longest day of the year, the New York Times ran the longest page 1 investigative report I have ever seen there, or in any other prominent newspaper for that matter.  I refer to the story entitled Regulators Failed to Address Risks in Oil Rig Fail-Safe Device, nearly three pages long and entirely devoted to the esoterica surrounding blow-out preventers.  This is good, because prior to this I would not have dared post a piece discussing blow-out preventers, not to mention blind rams.  It is quite well written with respect to the operational detail, but there are minutiae that would leave most readers fatigued.  So, here is the short explanation together with some commentary.

The last line of defense against blow-outs is a system of machinery aptly known as blow-out preventers, or BOP’s.  Multiple other lines must be breached before these come into play.  In keeping with the Times authors, we will not discuss those, except to point out that nobody really wants to resort to the last line.  Some of the reporting has attributed to personnel sentiments to the effect that “that’s why we have the BOP’s” as an explanation for risk taking.  If true, this is unusual.  To use a soccer World Cup analogy (it is the season), full backs who espouse such a belief about their goalkeepers have short careers.


There are three types of BOP’s.  The most benign, and the one also used for pressure testing, is the Annular Preventer.  It is composed of elastomeric elements that can seal around the outside of the pipe, or seal the hole when no pipe is present.  Its action is fully reversible, and it is the preventer with the least deleterious consequences of use.  According to the Times, there were two of these on this rig.  A 60 Minutes segment reported a worker observing chunks of “rubber” several days prior to the accident, which he conjectured to imply failure of the Annular Preventer sealing elements.  Congressional testimony indicates that pressure integrity tests, which test the competence of the completion, showed anomalies that appear to have been discounted by the decision makers.  Those tests could not have been conducted if the Annular Preventers were not sealing.  So at least one of them was likely functioning at the time of the tests, which was not long before the event.

The next line is the Casing Shear Ram.  These are essentially irreversible if there is pipe in the hole.  They are shear devices that can cut through the casing, but they are not designed to seal the flow; they are primarily used to permit emergency disconnect of the vessel.  No real data have been reported on whether the Casing Shear Rams were functional.

Then we have the centerpiece of the Times story, the final line of defense: the Blind Shear Rams (is it not odd that all three words could apply to sightless sheep?  memo to animal activists: the rams are not being killed, they are doing the killing).  These are the most sophisticated of the three types and are designed to cut through the pipe and seal firmly in place; the well pressure is designed to augment the closure mechanism and hold it in place.  The reporters make much of a single point of possible failure in the hydraulic system, and of reports of unreliability.  I assume they did their homework here, but have no other insight.  Very interesting, though, is their observation that this rig had only one of these.  That is surprising for a deep water rig.  Here’s why.  The pipe the ram is designed to cut through is not a continuous cylinder: at intervals of 40 feet, sometimes 30, there are joints, and the blind shear rams cannot cut through a joint.  So if by bad luck a joint is in the ram’s path, the mechanism will not succeed.  This is why a second ram is important, placed no less than about 4 feet from the first, but not so much further that another joint could be encountered.
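The joint argument is easy to check with a quick simulation.  Below is a minimal sketch assuming joints every 40 feet (the spacing related above) and an illustrative joint length of 1.5 feet; the joint length and the uniform pipe position are my assumptions, not rig specifications.

```python
import random

# Monte Carlo sketch of the joint argument above. Joints recur every
# 40 ft (per the text); the 1.5 ft joint length is an assumption.

JOINT_SPACING_FT = 40.0   # distance between successive joints
JOINT_LENGTH_FT = 1.5     # assumed length of the uncuttable joint
RAM_OFFSET_FT = 4.0       # second ram placed 4 ft above the first

def ram_blocked(ram_position, pipe_shift):
    """True if a joint lies across the ram, defeating the cut."""
    local = (ram_position - pipe_shift) % JOINT_SPACING_FT
    return local < JOINT_LENGTH_FT

trials = 1_000_000
one_blocked = both_blocked = 0
for _ in range(trials):
    shift = random.uniform(0.0, JOINT_SPACING_FT)  # random pipe position
    a = ram_blocked(0.0, shift)
    b = ram_blocked(RAM_OFFSET_FT, shift)
    one_blocked += a
    both_blocked += a and b

print(f"Single ram defeated by a joint: {one_blocked / trials:.1%}")
print(f"Both rams defeated:            {both_blocked / trials:.1%}")
```

With joints tens of feet apart and the rams only a few feet apart, a single ram is defeated a few percent of the time, while two offset rams can never both land on a joint, which is the whole point of the second ram.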

The story also noted that gamma ray imaging had shown that at least one side of the blind shear ram had deployed (the other side could not be imaged) but stopped short of cutting.  The evidence shows that at least one of the Annular Preventers was functional at the start, and we know nothing conclusive about the Casing Shear Rams.  Somehow, these lines of defense crumbled.  Unfortunately, the key data indicating the hydraulic and other health of these devices did not survive the explosion; apparently these particular data are not shipped to shore, even though virtually all data related to drilling and completions are streamed there.  So, where do we go from here?

In keeping with past disasters, such as that of the Space Shuttle Challenger, one can expect a careful examination of each failure point and the production of engineered solutions and associated management of human behavior to minimize the probability of each of the events.  The list of suggested remedies should include certain legislation and increased enforcement authority.  Certainly on that list ought to be:

  • Requirement for two or more Blind Shear Rams on every deepwater rig
  • Requirement for an expert level of shore support for all key well control decisions, including involvement of the appropriate federal agency, staffed accordingly.  Through the use of real time support centers covering a number of wells each, the cost to the federal agency need not be high.
  • All key data upon which well control decisions are made should be stored in a Black Box.  Ideally, they are already on shore and stored as part of the expert review process mentioned above.

Finally, measures such as those above will achieve important results, such as avoiding costly near misses, but in the end they likely will not prevent the occasional blow-out, in part because other factors may come into play.  We can, however, be in a state of readiness to dramatically reduce the collateral damage to the environment by minimizing the size of the spill.  We urge a joint industry action to study the best form of defense beyond BOP’s.  This should be a clean-page look at all alternatives and should be led by a non-aligned person.  The industry should then agree collectively to have such a system built and ready for deployment at the shortest possible notice.

The Energy/Water Nexus

June 23, 2010 § Leave a comment

This piece is loosely based upon the RTEC Breakfast Forum of June 15, 2010.

Sustainable energy falls into two buckets.  One comprises all the means to lower the carbon footprint of current energy sources.  This would include clean coal, using natural gas in place of coal to produce electricity, combined cycle approaches to power production, and the like.  The second bucket is renewable energy, the outstanding examples of which are biofuels, wind energy and solar energy.

Each of the foregoing has very different water utilization.  One billion people do not have access to drinking water.  Should efficiency of water utilization be a factor in our choice of alternatives, and not just carbon footprint?  Going further, should water usage be a litmus test in areas where the citizenry suffers a high level of privation?  This was the subject of the RTEC Breakfast Forum on June 15, 2010.

We tend to use fresh water for everything, even when something less could do the job.  This is likely an artifact of water being relatively cheap.  If some of the major users could tolerate less-than-fresh water, water would be freed up for human consumption.  An extremely topical case is shale gas drilling in the US.  Each well uses up to 5 million gallons of water as the main component of fracturing fluid, and only about a third of that returns to the surface.  Currently it cannot be re-used because of contaminants, salt in particular.  Even if it were cleaned up for re-use, the other two thirds would need to be made up from fresh water sources.

Fortunately, industry is taking a hard look at the problem and is moving to modify formulations to be able to tolerate significant salinity.  So, not only would the flow-back water be re-usable, but other saline waters of convenience, such as sea water, come into play.  In an odd twist, it turns out that salinity is actually good for the operation (it stabilizes the clays).  Lemonade from lemons, as it were.

While not particularly applicable to the shale gas play in the eastern United States, a lot of “tight gas” exploitation occurs in the middle of the country, in areas that are severely drought prone.  Here, water for energy competes with water for agriculture.  The ability to tolerate salinity would be huge, because saline aquifers are plentiful.  Supporting technology would be required in areas such as benign biocides: bacteria in these waters are often pernicious, some being sulfate reducing and thus producing hydrogen sulfide in situ when the water is used for fracturing fluid.  But these issues are all tractable if the central hurdle of tolerating some level of salinity is cleared and if innovations in cost effective water treatment are forthcoming.

The key to water treatment is to have a fit-for-purpose output.  Potable water is the most expensive.  An intermediate product could be adequate and meet the economic hurdles.  Today almost all desalination approaches have fresh water as the output.

Agriculture tolerant of brackish water is a new area without significant currency today.  The most obvious example is algae for biofuel production; algae, of course, thrive on salt water (and consume carbon dioxide as another plus).  A class of plants known as halophytes make themselves saltier than the surrounding salt water, causing fresh water to flow into them by osmosis.  Most such crops would likely be grown as biomass for energy production, not food.

Water used in conventional energy production is also highly variable.  The paper by Mulder et al. describes the water efficiency of different energy production methods.  An eye-opener is the significant difference between closed and open loop cycles.  An interesting nuance is the difference between water withdrawal and water use.  For example, if a facility such as a nuclear plant withdraws water from a river and then returns hotter water, the subsequent evaporation downstream is not counted in some measures: the withdrawal number remains low, even though the net usage was higher.
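A toy calculation makes the withdrawal-versus-use distinction concrete.  All of the gallon figures below are illustrative placeholders, not figures from Mulder et al.

```python
# Sketch of the withdrawal-versus-use distinction discussed above.
# A once-through (open loop) plant withdraws a lot but returns most of
# it, with some evaporating downstream; a closed loop plant withdraws
# far less but evaporates most of what it takes.

def consumptive_use(withdrawal, returned, downstream_evaporation):
    """Net water consumed = what never comes back to the source."""
    return withdrawal - returned + downstream_evaporation

# Hypothetical once-through plant: large withdrawal, nearly all
# returned, but the warmer discharge drives extra evaporation.
open_loop = consumptive_use(withdrawal=40_000, returned=39_500,
                            downstream_evaporation=700)

# Hypothetical closed loop plant: small withdrawal, mostly evaporated
# in the cooling tower, little returned.
closed_loop = consumptive_use(withdrawal=1_000, returned=200,
                              downstream_evaporation=0)

print(f"Open loop:   withdraws 40,000, consumes {open_loop}")
print(f"Closed loop: withdraws  1,000, consumes {closed_loop}")
# The withdrawals differ by 40x, yet net consumption is similar,
# which is why the two accounting measures tell different stories.
```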

Using less water is not always productive.  Apparently, in some areas drip irrigation leads to salt build-up around the plant.  Also, drip irrigation returns no water to the aquifer.  But on balance it must still be more effective than spraying, where evaporative losses may not necessarily be returned as convective rainfall.

Drought tolerant biomass is highly touted these days.  Jatropha in India and elsewhere is seen as an important crop for biodiesel production.  An interesting twist is that these plants can tolerate drought, but they grow much faster with more water, and a farmer with water access will draw on it.  So what is needed are clever business models and associated policy drivers to encourage water conservation in the face of a compelling economic driver to use more.  An interesting problem for a behavioral economist.

Afghan Lithium: Much Ado About Perhaps Little

June 15, 2010 § Leave a comment

Afghans should rejoice that people are discussing Afghan lithium, not opium.  But, based solely on the popularly reported data, initially from the NY Times, there is little reason for celebration.

The original Times story was largely about the mineral finds in general.  An Afghan economy strongly dependent on opium should welcome diversification into minerals.  But the subsequent stories emphasized the lithium, including quoting the Pentagon as referring to Afghanistan as the Saudi Arabia of lithium.  Hyperbole has an honored place in selling copy, and often has a basis in fact.  We went looking for it.  Here is what we found.

The bulk of the underlying data are at least three years old; the current release by the Pentagon, including General Petraeus’ use of the word “stunning”, is clearly tactical.  The lithium is found as an ore (a mixture of oxides) as well as in salt or brine deposits.  We were unable to find the relative distribution of the two.  This matters because the cost of extraction from ore is two to three times that from brine, despite the fact that the ore is richer in lithium, up to 7.5%, compared to a fraction of a percent in brine.  That economic fact renders most ores impractical at this time, even if easily accessible, which this one might not be.  For example, the US imports the vast majority of the lithium it uses, despite substantial domestic ore deposits, most of which are in my home state of North Carolina; such domestic production as there is comes from brines.  Lithium from ore is commercially attractive only if there is collateral production of other values, such as potash.  A breakthrough in smelting technology could change all that; none is known to be in the offing.

Lithium salt deposits are either brines (salty solutions) in lakes, or crystalline salt formed by natural evaporation.  These chlorides are relatively easily reacted with soda ash to make lithium carbonate, which is the marketed commodity from which all else is made, including metallic lithium.  The reported lithium content of the Afghan brine is roughly 0.028%.  This is at the lower end of commercial concentrations.  In other words: good, but not great.

Why, then, was lithium singled out from the mineral mix in the story?  It is the key ingredient in batteries for today’s electronic devices, and for electric vehicle batteries for at least the next twenty years.  All-electric vehicles such as the Nissan Leaf will use over 30 kg of lithium carbonate per vehicle (hybrids such as the Prius use a tenth of that).  The vast majority of lithium brine deposits are in South America, with nearly half of that in Bolivia.  There is concern about trading oil dependency for lithium dependency, and the questionable stability of the sources is a factor.  This is why a vast new source is seen as news.
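Combining the two figures quoted in this post, roughly 30 kg of lithium carbonate per EV and roughly 0.028% lithium in the brine, gives a feel for the scale involved.  A back-of-envelope sketch, ignoring recovery losses:

```python
# Back-of-envelope check on the figures quoted above: ~30 kg of lithium
# carbonate per all-electric vehicle, ~0.028% lithium by weight in the
# reported Afghan brine. Molar masses are standard values; recovery
# losses are ignored, so real brine volumes would be larger.

LI2CO3_PER_EV_KG = 30.0
LI_FRACTION_IN_LI2CO3 = (2 * 6.94) / 73.89   # ~18.8% Li by weight
BRINE_LI_FRACTION = 0.00028                   # 0.028% from the report

lithium_per_ev_kg = LI2CO3_PER_EV_KG * LI_FRACTION_IN_LI2CO3
brine_per_ev_tonnes = lithium_per_ev_kg / BRINE_LI_FRACTION / 1000.0

print(f"Lithium metal per EV: {lithium_per_ev_kg:.1f} kg")
print(f"Brine per EV at 100% recovery: {brine_per_ev_tonnes:.0f} tonnes")
```

Roughly 20 tonnes of brine per vehicle, before any processing losses, which is why concentration, not just total reserves, drives the commercial case.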

Based on the data revealed to date, this is much ado about possibly very little.

The Oil Plateau and the Precipice Beyond

June 1, 2010 § Leave a comment

I’m certainly not the first to raise the specter of an oil plateau. This is not the same as Peak Oil, although there are similarities.

The first intimation of the concept came from Christophe de Margerie, the CEO of the France-based Total S.A., who first described the issue in the fall of 2007. Subsequently PFC Energy went public with their research.

de Margerie’s statement made quite a splash. Here was the CEO of one of the top five oil companies in the world saying there is a plateau coming. He put the plateau at 100 million barrels a day; at the time the world was producing about 85 million.

After that, I personally and publicly asked the CEO of a major oil company to comment on de Margerie’s prediction. He acknowledged the plateau was real: “I’m not sure I’m going to subscribe to the 100 number, but there’s a plateau coming.”

Shortly before that I spoke to the head of the French Petroleum Institute (IFP), and they confirmed that their modeling showed the same thing, pegging the plateau at a somewhat lower number.

So here we have substantial people saying there’s a plateau coming and yet nobody acknowledges it publicly. Nobody wants to discuss it. Nobody really wants to act on it.

Causes

Now you’ll ask the reasons for the plateau. First of all there is a technical model that predicts a plateau, courtesy of PFC Energy in DC, but if you want to speak conversationally, the reasons are multifarious.

For example, national oil companies have realized they have a resource they need to husband. International oil companies used to move in and extract oil via Production Sharing Contracts, which created an incentive to get the most oil out as quickly as possible.

There’s a truism in oil and gas production: if you extract the petroleum quickly, the net recovery, that is, the fraction of fluid in the reservoir that is ever recovered, is reduced. When the international oil companies went into these nations, they were drawing as quickly as they could because their contracts ended in X years. That was not in the best interest of the national resource.

Increasingly, the nations have figured that out. Now they are forcing the issue, telling the international oil companies, “We’ll do it ourselves. We don’t need you.” The key point is they want to bleed the oil out in a more measured fashion. Guess what that does to production rates?

Most of the major oil companies, Exxon for example, are therefore forced to seek unconventional sources of oil, such as Canada’s Tar Sands, which are largely heavy oil. Additionally, the Tar Sands may now face a carbon tax.

Then you’ve got Matt Simmons, a highly respected figure in oil and gas investment circles, who says Saudi Arabia will not be able to open the spigots: that they don’t have the oil.

The fact of the matter probably is that the Saudis have the oil, but they now have a different view of it and of how to release it. They have been the leaders in applying technologies to maximize recoveries, and they are not going to be bullied into releasing oil faster just because the world wants a lower price. People thought of Saudi Arabia as the buffer, assuming they would just open the dams, but it does not seem they will. Matt Simmons takes the position that they can’t. Whether they can’t or won’t compensate for shortfalls elsewhere in the world comes to the same thing: they won’t.

Consumption versus Production

The estimated plateau of 95 million barrels a day (I think PFC at this point is talking about 90 to 92 million) comes dangerously close to the 87 million barrels we are supposedly consuming. I say supposedly because I think current consumption has dropped. In this country we decreased consumption from 21 to 16 million barrels a day from one year to the next. The decreased consumption is not going to last: we will become profligate again.

Consumption is the key to determining the impact of the plateau. Where is the point at which consumption and production cross? If in fact the plateau is there, and in fact economic recovery is coming (which it is), and you base your models on consumption and PFC Energy’s estimate of 1.5% annual growth in oil usage, the crossover comes in 2020.
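For the flavor of that arithmetic, here is a bare compounding model using the round numbers quoted here. It omits everything else PFC Energy presumably models (the recession dip, the shape of the recovery), so it will not reproduce their 2020 date exactly; the point is the sensitivity to the assumed plateau level.

```python
import math

# Minimal compounding model of the crossover argument above. The 1.5%
# growth rate, ~87 million bbl/day consumption, and 90-95 million
# bbl/day plateau are the figures quoted in the text; the baseline year
# and the recovery path are simplifications.

def crossover_years(consumption, plateau, growth=0.015):
    """Years until consumption * (1 + growth)^n reaches the plateau."""
    return math.log(plateau / consumption) / math.log(1.0 + growth)

for plateau in (90, 92, 95):
    n = crossover_years(87.0, plateau)
    print(f"Plateau {plateau} mbd: crossover in ~{n:.0f} years")
```

A couple of million barrels a day of plateau, one way or the other, moves the crossover by years, which is why the exact plateau number matters less than the fact that one exists.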

The key factor is the speed of the recovery with respect to automotive use. In the United States at least, oil is about transportation; gas is about power and petrochemicals. The plateau is real and the recovery is real. It is very real in China and India, which never really saw much of a recession. In China and India, what does a newly prosperous person do? They buy a vehicle. They go from a bicycle to a motorcycle to a car. Everything but the bicycle consumes fuel.

There are statistics on per capita automotive usage in these countries versus the so-called advanced countries, and the difference is staggering. All of this says that transport fuel usage is likely to keep increasing, and that if it does, the crossover point between consumption and production comes sooner rather than later (I’m not talking electricity; that is a completely different argument).

If you want to reduce consumption of oil, you have to switch transport fuels. People say very silly things about oil prices and imported oil juxtaposed with wind and solar. There is no meaning there; the only meaning will come years from now, when electric vehicles are a significant fraction of active automobiles.

The plateau is coming, and if consumption continues at the current rate, a crossover is coming too. At the point of the crossover, we are not talking about a spike in prices. We are talking about a sustained price increase. A spike is driven by a shortage at some point in time. This is not a shortage at some point. This is a plateau.

But let me end on a very simple point: do you really want to test the plateau theory? The alternative to testing it is doing something smart, like replacing oil with something more environmentally responsible. Are you going to argue with me about models, or are you going to do something that is right to do anyway? Let’s just do the right thing, especially if it also happens to ameliorate, and in the limit nullify, the plateau problem.

A case for decision science research in energy

March 16, 2010 § Leave a comment

A sustainable low carbon future is seen by most to center on breakthroughs in technology and the associated economics.  Most of the attention has been on carbon sequestration, biofuels, renewable sources of electricity and the like, and a number of states and countries have instituted policies to make some of these happen.  Many also see electrification of transportation as an avenue to zero emission vehicles and to energy security for net oil importing nations.  All of these require people to make choices, in many cases requiring changes in behavior.  Introducers of technology know that the barrier to wide scale adoption is particularly high when it involves substitution of something familiar.  The science of why people make the decisions they do, especially those involving green alternatives, merits further investigation, if for no other reason than that it may guide product and process development into areas with higher rates of adoption.  It will undoubtedly also be effective in informing policy.  An example is solar energy: if the primary driver for adoption is being seen as green, then hiding photovoltaic devices inside shingles would be counterproductive, as would the policy of many neighborhoods disallowing visible solar panels on homes.

The International Energy Agency (IEA) has posited that for any reasonable 2050 target for atmospheric carbon dioxide, nearly 40% of the mitigation has to come from energy efficiency.  Their most recent forecast calls for 57% of carbon mitigation by 2030 to come from energy efficiency (and, interestingly, only 10% from carbon sequestration).  Undoubtedly this will in large measure be accomplished with engineering designs that provide the same utility for less energy.  This has been the case with up to 90% reductions in the standby power of household appliances, achieved through the simple expedient of low energy power supplies and modified circuitry.  Since standby power constitutes 10% or so of all electricity usage in IEA countries, this is a huge gain.  Energy Star and similar efforts have produced further results, although some of these fall into a different bucket: the same utility at a somewhat greater price.  In the case of compact fluorescent bulbs, the initial price is higher but the life cycle cost is lower.  This begins to get into the realm of decision science, because the consumer is required to understand and appreciate life cycle costing.  We are firmly in that realm for cases where the costs are substantially higher, as with hybrid vehicles.  Electric cars will land squarely in the behavioral arena because of range anxiety, roughly defined as the fear of running out of charge.
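Here is a minimal sketch of the life cycle calculation the consumer is being asked to perform, comparing a compact fluorescent with an incandescent bulb.  The prices, wattages, lifetimes and electricity rate are illustrative assumptions of roughly 2010 vintage, not measured data.

```python
# Sketch of the life cycle costing behind the compact fluorescent
# example above. All inputs are illustrative assumptions.

def life_cycle_cost(price, watts, life_hours, hours, rate_per_kwh):
    """Total cost of ownership over `hours` of use, counting
    replacement bulbs and electricity."""
    bulbs_needed = hours / life_hours
    energy_kwh = watts * hours / 1000.0
    return bulbs_needed * price + energy_kwh * rate_per_kwh

HOURS = 8_000          # total hours of light compared
RATE = 0.10            # $/kWh, illustrative

incandescent = life_cycle_cost(price=0.50, watts=60, life_hours=1_000,
                               hours=HOURS, rate_per_kwh=RATE)
cfl = life_cycle_cost(price=3.00, watts=14, life_hours=8_000,
                      hours=HOURS, rate_per_kwh=RATE)

print(f"Incandescent: ${incandescent:.2f}")  # low sticker, high total
print(f"CFL:          ${cfl:.2f}")           # high sticker, low total
```

The asymmetry is the decision science problem in miniature: the cheaper-looking option costs several times more over its life, but only a consumer who does this arithmetic sees it.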

Electrification of transportation is an RTEC priority because we see it as the fastest route to energy security, by making electricity fungible with oil.  Furthermore, the well-to-wheel efficiency of electric cars is about 45% better than that of conventional cars, and the tail pipe emissions are zero, although the burden is shifted to the power producer, where it is more tractable.  Consequently, enabling the public’s acceptance of electric cars is a priority as well.

Addressing range anxiety and other behaviors falls at least in part within decision science.  Some of it can be addressed with technology.  For example, Nissan’s introduction of the Leaf later this year will be accompanied by features such as remote monitoring of the battery’s state of charge and driver notification, including identification of the nearest charging station.  But in most instances, technical advances only take us so far.  When smart electricity meters are installed in homes, there is high variability in how homeowners use the data.  Behavioral studies are needed to guide such programs to the best results.  Non-price interventions that rely on behavioral proclivities, such as conformance to societal norms, can likely be used to advantage.

In its matrix of program thrusts, DOE’s newly formed unit ARPA-E has an element that intersects social science efforts with transportation.  RTEC believes that this could be a fruitful area for RTI/Duke/UNC collaboration.  One possible project would combine conventional survey-based approaches with behavioral economics in addressing the electric car range problem.  At present the discussion rests on guesswork premised upon beliefs regarding consumer preferences when driving conventional cars; statements such as “the consumer expects a range of 300 miles” are rife.  A definitive study of driving distances in the metropolitan areas that are the initial targets of electric vehicle entry could be used to devise behavioral studies, from which interventions, both price-based and not, could be derived.  To aid this, the original study would be broken out by age, income and other relevant demographics.  Finally, the interventions themselves could be tested on a population.

The foregoing notwithstanding, RTEC believes that the greatest gains for society in the realm of sustainable energy are going to come from simply using less.  Consequently, a major focus will be to encourage and assist members in devising social science based research with this goal in mind.

Natural Gas as a transition fuel for Carbon Mitigation

February 11, 2010 § Leave a comment

Synopsis:

Natural gas is increasingly being proposed as a transitional fuel for carbon mitigation, even by NGO’s that in the past were firmly opposed to all fossil fuels.  RTEC has examined the underlying premise and concludes that it is well placed as an organization to play a significant role in informing the policies that will drive the energy sector in this area.  This is in keeping with a key RTEC goal for this year: to be a more visible player in energy.

Why Natural Gas?

The most popular carbon mitigation strategies center on renewable energy sources, foremost among them wind, solar and biofuels, with only the last addressing oil replacement.  This discussion will focus solely on power production.  The majority of power is produced from the combustion of coal, especially so in China and India.  Despite strong support for coal in Washington, and the technical viability of clean coal, a confluence of events suggests a slowdown in coal combustion is likely.  These are discussed below.

  • California has already taken the lead by requiring coal plants to reduce emissions to the levels of natural gas plants, a fifty percent reduction, as opposed to the ninety percent previously seen as a target.  Federal legislation is likely to emulate this in some manner.  This means that gas burning plants require no CO2 sequestration.
  • The lower requirement reduces the cost of sequestration at coal plants.  For post-combustion capture, depending on the technology, the added cost is likely to be in the general vicinity of 3 to 3.5 cents per kWh.  The current cost is about 6 to 6.5 cents per kWh, so the fully loaded cost will be close to 10 cents.
  • The cost of electricity from natural gas can, as a rough rule of thumb, be estimated at one cent per kWh for every dollar per MMBTU of gas price (a sketch of this arithmetic follows this list).  So, at today’s natural gas price of about $4 per MMBTU, the cost is roughly 4.5 cents per kWh; at $10 per MMBTU it would be about 9.5 cents.  In the last two decades the gas spot price has been above $12 for only four non-contiguous months.  If domestic supply holds up from the new shale gas reserves, few expect the price to go beyond $8, certainly not $10.  $10 is the effective breakeven with cleaned-up coal, and with much lower capital investment.  Consequently, purely on economics and environmental compliance, gas plants make a lot of sense.
  • Gas plants are an effective complement to renewable sources, which have diurnal and other variability.
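A minimal sketch of the rule of thumb from the third bullet, with the clean coal comparison from the second.  The post’s own figures (4.5 cents at $4 gas, 9.5 cents at $10 gas) sit within about half a cent of the bare rule, so treat the output as indicative only.

```python
# Rule of thumb quoted above: about one cent per kWh of generation
# cost for every dollar per MMBTU of gas price.

def gas_power_cost_cents_per_kwh(gas_price_per_mmbtu):
    return 1.0 * gas_price_per_mmbtu

# Clean coal comparison from the bullets above: ~6 to 6.5 cents today
# plus ~3 to 3.5 cents for post-combustion capture, i.e. close to 10.
CLEAN_COAL_FULLY_LOADED_CENTS = 10.0

for gas_price in (4, 8, 10, 12):
    cost = gas_power_cost_cents_per_kwh(gas_price)
    gap = CLEAN_COAL_FULLY_LOADED_CENTS - cost
    print(f"Gas at ${gas_price}/MMBTU -> ~{cost:.0f} c/kWh "
          f"({gap:+.0f} c vs cleaned-up coal)")
```

The breakeven at roughly $10 gas falls straight out of the rule, which is why the supply question, whether shale can hold prices below $8, carries the whole argument.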

Why Not Natural Gas?

  • A shift away from coal to natural gas has to meet the critical hurdles of affordable gas and supply assurance.  The UK took this step in the belief that North Sea natural gas would be plentiful; the forecast did not hold up, and the UK is now forced to import, often at high cost.  For the US, reliance on foreign sources of Liquefied Natural Gas (LNG) would present issues, not the least being the high carbon footprint of LNG.  Alaskan gas, while plentiful, has deliverability issues.  So the future of such a shift relies upon the ability to exploit the massive shale gas reserves.  As noted above, if available, the price of gas is likely to be competitive with that of cleaned-up coal.  Also, unlike oil, gas would carry no hidden military costs for assuring foreign supply, since it would be entirely domestic.
  • The bulk of the shale gas potential is in New York and Pennsylvania, states that are substantially unused to petroleum production (despite Pennsylvania being essentially the birthplace of oil in the US).  Public push-back has been substantial, on the grounds of pollution believed to be caused by the fracturing operations essential to production.  Drilling in parts of New York has ceased on this account.  When ExxonMobil purchased XTO for over $30 billion, it considered the threat material enough to make closing of the deal conditional on freedom to operate.  Resolving the looming impasse could be critical to any strategy to replace coal with natural gas for electricity production.

Role for RTEC

  • There does not appear to be any entity that is knowledgeable in the areas of the issues mentioned above and yet non-aligned.  This is the opinion of executives at two petroleum-related companies and two NGO’s with whom we have spoken.  A stated goal for RTEC is to identify compelling energy issues and play a key role in a select few of them.  RTEC members have in-depth understanding of the technology and economics associated with clean coal and natural gas production.
  • In the critical area of the economic viability of producing shale gas in an environmentally acceptable manner, RTEC will enter the debate with insights regarding the validity of public angst and the ability of industry to respond to the issues with merit.  In particular, we have been approached by the Sierra Club to work with them and others to craft legislation in Pennsylvania.  The Sierra Club, Worldwatch and EDF have all realized that their absolute objection to new coal-derived electricity is not reasonable without support for an alternative; consequently, they are backing natural gas as a transitional fuel.  However, they want this to happen against the backdrop of environmentally secure production of shale gas, hence their need for a respected third party to weigh in on the issues.  RTEC expects to source one or two other non-aligned experts to augment its expertise, provided the costs are borne by the Sierra Club or another entity.  The Sierra Club is clear on the point that RTEC does not support their opposition to clean coal and is merely acting as a resource to resolve shale gas issues.
  • If we feel we are making a real difference, we will consider measures to have a cadre of experts on call for consultations with NGO’s and government bodies.  This may require seed funding, especially if a relational database is part of the solution.  Ultimately, this could be a free-standing unit whose span of influence could expand into other areas.


Potential Impact on US Energy

If natural gas fired plants are employed for new capacity, whether for demand growth or for replacement of aging coal facilities (Progress Energy just closed thirteen coal fired plants in North Carolina), that provides breathing room for alternatives.  In particular, it gives time to resolve the issues surrounding clean coal, whether real or perceived.  RTEC continues to hold the view that clean coal is a viable part of the energy mix, especially when one considers the world at large.  Specifically, we expect post-combustion capture and storage to be strongly in play for existing coal fired plants, especially those with many years of depreciation remaining.

Eventually, new base load capacity could go to Integrated Gasification Combined Cycle (IGCC), the long term clean coal solution.  We would also expect that within the next ten years or so the nuclear option will be selected for new base load capacity, and natural gas will begin to be phased out.  The price and availability of gas will determine the rapidity of this decline.  This is where shale gas comes in.  If the known reserves can be accessed, there is reason to expect availability to be high.  Unlike offshore reservoirs, the time horizon between the decision to drill and actual production is relatively short, which is an effective antidote to rising demand driving prices to double digits per MMBTU.  Much of the new shale gas is profitable at $5 per MMBTU.  All of this supports the hypothesis that natural gas prices will stay in single digits.  If they do, gas will remain competitive with clean coal, with lower up front investment, and a shift away from it may not happen until the nuclear power build-up is significant.

In conclusion, if shale gas can be recovered in a fashion acceptable to the public, the reserves could be sufficient to support natural gas as a transitional fuel until cleaner alternatives become viable.  RTEC is positioned to play a key role, possibly a determining role, in the outcome.

Deep Water Completions Urgently Need Innovation

January 25, 2010 § Leave a comment

The cost of completions in deep water has progressively increased to the point where it can represent over sixty percent of the total well cost.  We are already at the point where this is impacting the economic prospectivity of reservoirs.  While this trend is manifest in conventional deep water, it is exacerbated when deep water is combined with deeply buried reservoirs such as the Paleogene, variously referred to as the Lower Tertiary.  The recent exit of Devon from the sector is a signal, even though it was undoubtedly driven by a host of factors.  This is one of the most critical issues facing the industry today, in part because deep water activity has to date been relatively immune to the economic travail faced by the industry.  The rig count for floaters in fact went up in 2009 compared to the prior year, and some forecast that ultra deep (water depths in excess of 7000 feet) rigs will more than double in three years.  The industry can ill afford a hiccup in this bastion of stable growth.  We will enunciate the issues, describe the underlying factors and discuss the viability of innovations to ameliorate the problem.

Sand Management: For conventional deep water prospects this is the single most critical issue.  Deep water sediments are almost always young, typically less than 10 million years old, and therefore relatively poorly compacted.  The majority of the prolific reservoirs are in a class known as turbidites.  The unusual manner in which they were formed caused each layer to have relatively uniform particle sizes, and when particles of like size are packed together, pore communication is good.  As a result these reservoirs have high permeability, often in excess of a Darcy.  However, the associated high production rates strain the sand body, inducing the production of sand because of the low grain-to-grain adhesion typical of young rock.  Dealing with this is the principal component of the high cost of deep water completions.

The uniform approach to handling sand production is to screen it out.  Screens of varying sophistication are used to suit the occasion, but the workhorse method in deep water is a layer of gravel followed by a mesh screen, known as gravel packing.  This has been the standard because, by and large, it performs.  However, it is rig time intensive, and increasing rates for deep water rigs have contributed to the ever increasing cost of the completion.  Also, the need for remediation at some point is almost certain, and for a period prior to that production rates will be impaired.  Another shortcoming is that the testing methodology for determining the need for sand control is imprecise; the resulting uncertainty causes virtually all deep water reservoirs to be gravel packed, a conservative approach that adds to the cost for the sector.  We will discuss this issue in some detail and draw attention to a technique that improves the certainty of the measurement, thus allowing an approach that we refer to as informed aggressiveness.  Finally, we note that current practice responds to the symptom of sand production and ameliorates it by preventing ingress.  We will advocate instead treating the underlying cause of sand production, with the expectation that in so doing we could make do with simple screen devices, reducing complexity and cost.  Additionally, there would be an expectation of extended production before remediation, which, if needed, might itself be accomplished with a lower cost method.

 

Figure 1: New test fixture that measures cohesion directly, using internal pressure to fail the core sample in tension.  The fixture allows cohesion to be measured with different saturating fluids, to observe the saturating fluid’s impact on strength.

Testing for Sanding Propensity:

Cohesion of the sand grains is the property that determines whether or not one could expect sand production, and it has proven elusive to estimate.  Current methods utilize compressive stress/strain measurements on core, using a technique known as Mohr Circle Analysis.  This has two shortcomings.  First, it assumes elastic behavior of the rock, and we know that to be a bad assumption for young deep water rock, which has plastic and visco-plastic tendencies.  Second, in rock mechanics cohesion is defined as the stress required to separate individual sand grains, which is clearly a tensile property; we are therefore using a compressive test to assess a tensile property.  All of this introduces enough uncertainty into the measurement to force the decision to gravel pack wells when it may not be required.  Finally, cohesion can change with fluid saturation, so any completion design should consider the effects of events such as water breakthrough later in the life of the well.  Conventional sand prediction tools do not allow for this, largely because we cannot predict how increased water saturation will affect cohesion in the formation.  All of the foregoing suggests that a new test is needed: one that more precisely assesses sand grain adhesion and allows the effect of fluid saturation to be determined experimentally.

One such technique is shown in Figure 1.  The core is subjected to internal fluid stresses designed to fail the sample in tension.  The test cell allows the core samples to be exposed to downhole pressure conditions.  As pressure is released from the sealed ends of the core sample, the sample is stressed in tension; in this manner, internal pressure generates the tensile force and induces the cohesive failure of the sample.  The fluid properties can be changed to model expected changes in saturation later in the life of the well.
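As an illustration of how such a test might be reduced to data, consider the sketch below.  The working assumption here (an illustration of ours, not the published test procedure) is that with the confining pressure released, the internal pore pressure at failure approximates the tensile (cohesive) strength; the readings themselves are invented placeholders.

```python
# Illustrative data reduction for the tensile test in Figure 1.
# Assumption (ours): with confining pressure released, effective stress
# at internal pore pressure p is roughly -p, so the pore pressure at
# failure approximates tensile strength. Readings are placeholders.

failure_pressure_psi = {
    # saturating fluid -> internal pressure at tensile failure (psi)
    "native brine":   420,
    "oil":            460,
    "high water cut": 310,   # wetter sample, weaker grain adhesion
}

baseline = failure_pressure_psi["native brine"]
for fluid, p_fail in failure_pressure_psi.items():
    change = 100.0 * (p_fail - baseline) / baseline
    print(f"{fluid:>15}: tensile strength ~{p_fail} psi "
          f"({change:+.0f}% vs native brine)")
# A drop like the 'high water cut' row is exactly the late-life water
# breakthrough effect the completion design should anticipate.
```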

Treating the Cause Not the Symptom: As discussed earlier, current methods treat sand production as inevitable and deal with it by treating the symptom: minimizing entry into the producing bore hole with screening methods.  Over time the screens clog and remediation is required, often an expensive side track of the well.  A more elegant approach would be to treat the sand to improve grain-to-grain adhesion without compromising permeability.  This has been attempted for decades by improving the bulk compressive strength to withstand fracture, with limited success, due in part to high chemical loading, impairment of retained permeability and cost.  Only recently has the thrust changed to primarily addressing cohesive strength, with much less emphasis on increasing compressive strength.  Part of the reasoning is that we now believe the primary cause of sand production is not rock fracture per se, but the detachment of individual grains from each other.  The low chemical loading, and the specificity of the resin in gravitating primarily to the grain-to-grain interfaces, leave the pore spaces relatively unaffected, minimally impairing fluid flow characteristics.  Figure 2 shows an electron micrograph demonstrating this effect.

Figure 2: Scanning Electron Microscope image of formation material that has been strengthened using new placement techniques, in which the consolidating materials are selectively placed at the grain contact points, leaving pore spaces open for production.

Importantly, the efficacy of the treatment can be tested simply using the new testing method, and the treatment can be optimized for various anticipated conditions of saturation, drawdown and flow rate.  The foregoing offers the promise of fewer wells being treated for sand control, combined with lower cost completions for those that need it.  Formation strengthening, if successful, will allow for far simpler screens; in the limit, gravel packing could be eliminated.  Simplification is of particular interest in horizontal and multi-lateral wells, both of which have advantages in formation exposure and reduced drawdown for the same production rates.  When the Troll Field oil leg was drilled with Level 5 multi-laterals, the lower drawdown contributed to delaying sand production.  Such wells are very difficult to gravel pack reliably and reproducibly.

Obviously, aggressive measures such as those advocated here require a high degree of certainty.  The testing method is key to selecting the best treatment and assessing likely efficacy.  Also, piloting in cheaper wells, and in the remediation of wells with plugged screens, would be a prudent first step.  We describe this approach as Informed Aggressiveness.  Drilling programs have long used the same philosophy, as for example in handling pore pressure/fracture gradient variability: real time pore pressure measurement and associated modeling allow the more aggressive operator to drill closer to balance, vastly improving rates of penetration and minimizing formation invasion, while largely avoiding kicks and blow outs.

Dealing with Salt: The majority of the important deep water tracts in the world are overlaid by salt diapirs, sheets of salt that can be from a few hundred to a few thousand feet thick.  Where these outcrop on land, they are often mined to produce the table salt labeled rock salt.  In deep water, the sheets present immense difficulties for seismic exploration because they are relatively impervious to sound waves.  Here we concentrate on the effect on drilling and completion.  As these layers extruded millions of years ago, the rock below was often reduced to rubble, presenting a zone of uncertain character as the drill bit exits the salt.  The completion is more directly affected by the nature of the salt itself.  In a sense the salt is still “live”: a hole drilled in it is subject to the mechanical phenomenon known as creep, a sustained, relatively low stress, but one that could buckle the casing.  Accordingly, the casings have to be unusually robust, adding to the cost.

The difficulty of imaging below the salt makes for greater uncertainty in locating the highly productive intervals.  This can lead to well bore tortuosity, with attendant completion difficulties.  The foregoing notwithstanding, the techniques to address these issues are relatively well understood, with technology in active development and deployment.

The Challenge of the Paleogene: Also known as the Lower Tertiary, this represents a new frontier that many believe to be promising.  The primary distinguishing features of these reservoirs, from the standpoint of completions, are their age and deep burial.  The rocks are in excess of 25 million years old, compared to the mid single digits for normal deep water formations.  Deep burial combined with age makes them very tight.  The fracturing required to enable production is a first for deep water, where conventional rock has high permeability, as mentioned earlier.  Hydraulic fracturing at ambient pressures in excess of 15,000 psi, and often greater than 20,000 psi, is a challenge.  Most associated surface equipment is not rated above 15,000 psi, and even that level is hard to come by; pumping equipment is likewise in short supply at these pressures.  Finally, many of these prospects are in ultra deep water.  Industry is in fact addressing this problem, and one solution on offer is an interesting departure from current practice.  Fracture fluids are typically water based, and therefore have a specific gravity close to 1.0.  The innovation is to use a higher gravity fluid, turning the hydrostatic head to advantage as an additive to the pump pressure at the surface.  These fluids, with specific gravities up to 1.49, can allow reductions in surface pressure of 3000 psi and more.  The ability to operate at lower surface pressures has significant advantages in safety and cost.  This would have application on land as well, allowing the use of less costly and more readily available pumps and surface handling equipment for deeper, higher pressure jobs.
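The hydrostatic arithmetic is simple enough to sketch.  The 0.433 psi/ft fresh water gradient is a standard figure and the 1.49 specific gravity is quoted above; the depths are illustrative assumptions.

```python
# Sketch of the heavy fracture fluid arithmetic above. The 0.433 psi/ft
# fresh water gradient is a standard figure; the 1.49 specific gravity
# is from the text; the depths are illustrative.

WATER_GRADIENT_PSI_PER_FT = 0.433

def surface_pressure_saving(sg_heavy, depth_ft, sg_base=1.0):
    """Extra hydrostatic head from a heavier fluid column, which
    directly offsets the required surface pump pressure."""
    return (sg_heavy - sg_base) * WATER_GRADIENT_PSI_PER_FT * depth_ft

for depth in (10_000, 15_000, 20_000):
    saving = surface_pressure_saving(1.49, depth)
    print(f"At {depth:,} ft TVD: ~{saving:,.0f} psi less surface pressure")
```

At around 15,000 ft the extra head alone recovers the 3000 psi cited above, which is the difference between exotic and off-the-shelf surface equipment.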

Intervention: For most wells intervention is essentially unavoidable, and in deep water the high costs are occasioned by the need to use floaters.  Approaches such as smart wells will delay, but not usually eliminate, intervention.  Two approaches are suggested to address this issue.  The first is intervention friendly completions, defined as completions that provide all the needed functionality and yet are inherently more amenable to intervention tooling and operations.  One example would be the use of expandable casing to produce a mono-diameter well.  Aside from the advantages of a single bore, the design allows a relatively large diameter at the reservoir; in this context the mono-diameter feature need only commence at the intermediate casing, and not necessarily extend all the way to the surface.  Another example would be the formation consolidation discussed above.  In cases where gravel packing is eliminated, one would pick up a hole size, maybe two, and the associated screen would also occupy less annular space.  In general, though, the industry should be encouraged to devise intervention friendly completions.  The second approach addresses the vessel itself.  Over the years the industry has taken stabs at purpose designed light intervention vessels, which would be cheaper to operate.  The likely reason these did not take hold is the unpredictability of the need for intervention, and hence the difficulty of forecasting utilization.  There is a need for an innovative business model.  One such might be utilization by subscription: operators buy take-or-pay time on the vessel, and a system is instituted for planning and timely access.  This would be somewhat akin to a time share vacation rental, but hopefully with a higher degree of sophistication, such as preferential access rights.

Electric Car Drivers may need Training Wheels

May 4, 2009 § 1 Comment

Training wheels are a wonderful invention to aid the tot with two-wheel transport anxiety.  More often than not the anxiety resides with the parents but, regardless of source, the wheels get installed.  Now, in purely engineering terms, the extra wheels are pedestrian in design: clearly intended for the short term, they are not of particularly robust construction, because not much use is anticipated.  The added cost is modest compared with that of the bicycle, yet the comfort to the psyche is enormous.  All of this really only applies to the munchkins.  Were you to learn to ride a two wheeler at an advanced age, as I did at age 11, the training wheel option is essentially out; even if available, the derision of the cohort would not be sustainable.  So, what does all of this have to do with electric cars?

Electric cars will come in two flavors, all electric (EV’s) and plug-in hybrid electric (PHEV’s), both with the ability to plug conveniently into wall outlets and both using the energy of braking to charge the battery.  Both use electricity alone to drive the wheels, so there is an essential simplicity to the mechanics: no transmission, no gear box, no cam shafts and minimal mechanical maintenance.  The essential difference between the two is the auxiliary gasoline engine in the plug-in hybrid, which charges the batteries if they run down.  The all electric has no such back up and relies solely on batteries for range.  The early entry vehicles will have an electric range of 40 miles for PHEV’s and 80 to 100 miles for EV’s, not counting boutique cars such as the Tesla.  One can reasonably expect the EV numbers to double within a few years, provided advances in battery technology deliver more capacity in the same volume.

The car buying public will face a choice.  Since the EV, when mass produced, could be expected to be cheaper to make despite the bigger battery, its list price will be lower than that of a PHEV, with one manufacturer expected to offer it at a price comparable to the gasoline counterpart.  The PHEV, on the other hand, while more expensive, will have the much greater range afforded by the gasoline back up.  The “fuel” costs will be comparable when run on electricity.  The key difference is captured by a new term that has entered the transport lexicon: Range Anxiety.  We can roughly define this as the fear of running out of juice without a convenient fill-up station.  The PHEV Chevy Volt’s electric range of 40 miles is based on studies indicating that this serves the commuting needs of 75% of Americans; a full tank of gasoline extends that range another 600 miles.  The initial entry EV’s will have ranges of 80 to 100 miles and charging times from less than half an hour to six hours for a full charge, depending on the sophistication of the charging equipment.  Home charging, at least initially, will be at the higher end on time.  Early deployment will be in cities that will install some measure of distributed charging infrastructure.  Battery swap business models are also in play, wherein charging stations plan to exchange a fully charged battery for a depleted one.
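A rough sketch of the charging time arithmetic behind that half-hour-to-six-hour range.  The 24 kWh pack is an assumed early-EV figure, not a quoted specification, and charger losses are ignored.

```python
# Charging time arithmetic behind the range quoted above. The 24 kWh
# pack size and the charger powers are illustrative assumptions.

BATTERY_KWH = 24.0   # assumed early EV pack size

def hours_to_full(charger_kw, battery_kwh=BATTERY_KWH):
    return battery_kwh / charger_kw

chargers = {
    "home charging circuit (~4 kW)":  4.0,
    "fast charging station (~50 kW)": 50.0,
}
for label, kw in chargers.items():
    print(f"{label}: ~{hours_to_full(kw):.1f} h for a full charge")
```

The spread falls straight out of charger power, which is why home charging sits at the six-hour end and dedicated stations at the half-hour end.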

In the end, some fraction of the buying public will be afflicted with Range Anxiety.  This is where PHEV’s play the role of training wheels.  With such a vehicle, consumers have the luxury of sorting out their driving habits, their discipline in charging every night, and all other manner of behavior impinging upon their ability to live with the range of an EV, at all times secure in the notion that the gasoline engine can bail them out.  There will also be a segment of the population eschewing this aid to behavior modification, in effect wobbling onto the bike, as yours truly did some decades ago.  A skirmish with a thorny bush sticks, as it were, in the memory.  Thorny situations will undoubtedly lie in wait for the first time EV-ers.  And then again, perhaps PHEV’s will always have a place.  Choice is a good thing, in cars, colas and presidential elections.

Can North Carolina be a domestic source for lithium for electric vehicle batteries?

February 14, 2009 § Leave a comment

Making transport fuel fungible with electricity offers options to net importers of oil such as the US.  As a state, North Carolina is in the unenviable position of importing all of its fuel from other states.  While biofuel will undoubtedly play a role in reducing this import, electrifying the fleet offers another avenue.  The primary mission of electric vehicles (EV’s) would be the reduction or elimination of tail pipe emissions, notoriously the most difficult site for carbon dioxide capture, although a secondary mission may be to act as a storage medium for the grid.  The FREEDM program, led by NC State University, targets creating all the elements of a Smart Grid, which would be a key vehicle for grid optimization.  So North Carolina is already well placed to take a lead in electrifying the passenger vehicle fleet.

EV’s such as GM’s plug-in hybrid (PHEV), the Volt, scheduled to be marketed in 2010, are intended to be charged from conventional electrical outlets, with a gasoline engine to charge the batteries when needed beyond the nominal range, 40 miles in the case of the Volt.  Pure EV’s, running solely on electricity, such as one scheduled by Nissan for limited entry in 2010, are also likely to be part of the equation.  If such vehicles are to become a substantial portion of the passenger vehicle fleet, several economic hurdles will have to be crossed, some possibly needing subsidies.  The principal hurdle is the expected higher cost of the vehicle (pure EV’s, because of their simplicity of design, will be somewhat cheaper than PHEV’s), driven largely by the cost of the battery.  Research to reduce cost and increase range is ongoing in this and other countries, and the current administration has announced its intent to fund this endeavor significantly as part of the Stimulus Package.

Batteries: The lithium ion battery is the clear leader in this field, and many believe it will continue to be so for the foreseeable future.  Other sophistication, such as augmentation with supercapacitors for short bursts of power, is expected to reduce the load on the batteries.  Current unit costs are high, although high volume production has not yet been in place, so one can expect costs to come down over time.  A point of note is that while the technology is domestic in many cases, all battery manufacture currently takes place in low labor cost countries.  However, as in the case of foreign designed cars, domestic manufacture may become feasible.  Locating such capability in North Carolina would go hand in hand with any decision to make North Carolina a primary launch state for electric vehicles.

Lithium: A more pernicious issue is the sourcing of the critical commodity, lithium.  World reserves are considerable, but the majority are in Latin America, including countries such as Bolivia that are not in close alignment with the US.  There is the risk of trading foreign dependency on one commodity for another.  Unlike the battery manufacturing situation, a mineral is uniquely situated, as in the case of oil.  North America does have sizeable reserves of lithium ore in the form of spodumene, a lithium aluminum silicate, but with current technology the processing costs are high compared to the cost of processing the brine-based deposits in other countries.  The vast majority of spodumene reserves in this country are in North Carolina, in an area northwest of Charlotte.

Call for Action: The spodumene processing technology deemed uneconomic is at least half a century old, and hints exist in the literature of more innovative methods.  In the national interest, a research program should be instituted to investigate the possibility of economic recovery of lithium from oxide ore.  RTEC has commenced a scoping exercise in this area, currently involving a literature search, but a full-fledged investigation will require State or Federal funding.
