November 27, 2012 § 3 Comments
We are used to the adage “bigger is better” in most things other than carbon footprints and runway models. And in the case of the latter the operative term is more correctly “slimmer,” as memorialized by the symbolism and actuality of Twiggy. My enduring memory of arriving in this country in the late sixties is being struck by the fact that everything was large. Toothpaste came in regular, large and economy sizes. But marketing and consumer preferences aside, most industrial enterprises have always profited from scale. “Economies of scale” is firmly embedded in the engineering lexicon. Which raises the question: when is scale an advantage and when is it a hindrance?
In some ways it was Henry Ford who most popularized the notion. An assembly line dropped the cost per unit, in his case for cars. But it applies as well to any enterprise with high fixed costs, which then get spread over more units. For our discussion, though, we will focus on process economics. The standard power generating plant in the early part of the last century was in the vicinity of 30 MW (megawatts). By the end of the century it was 1000 MW. Some of this came about because the processes were designed to take advantage of economies of scale. In fact, most chemical and metallurgical processes are designed with this as a feature. Seldom will we find oil refineries smaller than 100,000 barrels per day (bpd). The plants converting natural gas to transport liquids are even larger. A counter example in power generation is wind turbines, which are small by design (3 MW or so).
This reliance on very large production plants has driven the business models. Oil is lifted out of the ground and sent vast distances to be refined into useful products. When most produced oil was light, this transport was not onerous because such oil flowed relatively easily. However, oil produced today is increasingly heavier, especially the stuff from Canada, Venezuela and Mexico, three of the top four foreign sources for the US. Heavy oil is very viscous and does not flow without the addition of a light hydrocarbon, known as a diluent. The diluent is recovered at the refinery and reused. But all this adds cost. Ironically, the largest pipeline transport distances are those for heavy oil from Canada. This extreme reliance by Canada on US refineries is what created the political football of the Keystone XL pipeline addition in this last election. But large refineries are likely here to stay.
Natural gas processing, on the other hand, could be amenable to innovation. The explosion of shale gas production and the continued ramp up allow one to consider alternatives, because the pipeline infrastructure is inadequate and new capacity will have to be built in any case. If technologies are developed that economically produce derivatives closer to the source of production, then several benefits accrue. Manufacturing jobs will be distributed across the country, not just on the Gulf Coast. Today nearly 80% of ethylene cracking capacity is in Texas and Louisiana. Ethane from east coast shale gas operations will need to be piped over 1200 miles to be cracked.
Smaller facilities are quicker to build and easier to finance. A 100,000 bpd gas to liquids conversion facility will cost about $12 billion. A 1000 bpd unit will cost $120 million. While not pocket change, this figure is much easier to raise for investment. Lest this all sound too much like wishful thinking, several processes are currently in late stage development to produce diesel, jet fuel and methanol from natural gas on the scale mentioned. There is reason to believe that the linear cost reduction implied by the figures above can be beaten. In other words, the smaller unit may well produce fuel at a lower fully loaded cost than a large one. This is completely antithetical to the concept of economies of scale and is driven by breakthrough technologies that challenge design dogma.
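The scale arithmetic here can be made concrete. Conventional process-plant economics follow a power-law cost curve (the "six-tenths rule"), which penalizes small plants on a per-barrel basis; the $12 billion and $120 million figures quoted above are linear. A minimal sketch, using the cost figures from this post and a textbook 0.6 scaling exponent (the exponent is an assumption, not a figure from the post):

```python
# Compare per-bpd capital cost under linear scaling (the figures above)
# versus the traditional "six-tenths rule" for process plants.
# The 0.6 exponent is a textbook assumption, not a figure from this post.

def scaled_cost(base_cost, base_size, new_size, exponent=0.6):
    """Power-law scaling: cost2 = cost1 * (size2/size1)**exponent."""
    return base_cost * (new_size / base_size) ** exponent

big_cost, big_size = 12e9, 100_000      # $12 billion, 100,000 bpd
small_size = 1_000                      # 1000 bpd

linear_small = big_cost * small_size / big_size          # linear: $120 million
rule_small = scaled_cost(big_cost, big_size, small_size) # six-tenths rule

print(f"Linear:          ${linear_small/1e6:.0f}M  (${linear_small/small_size:,.0f}/bpd)")
print(f"Six-tenths rule: ${rule_small/1e6:.0f}M  (${rule_small/small_size:,.0f}/bpd)")
```

Under the six-tenths rule the 1000 bpd plant would be expected to cost roughly $760 million; coming in at $120 million (linear) already beats that by a factor of about six, and the claim above is that even linear can be beaten.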
Biomass conversion will be the area most advantaged by the sort of advance mentioned. This is because biomass tends to have very low energy density, thus making transport to distant large processing plants cumbersome and often not economic. A solution is to bring the plant to the biomass site. This is strongly facilitated if small footprint technologies are brought to bear. Interestingly, technology developed for natural gas conversion will apply directly to biomass, although some additional unique steps will be needed. But here too, technologies currently in development offer promise.
Size matters, except when it does not!
November 12, 2012 § Leave a Comment
There are some who say ethanol is for drinking, not driving. Clever phrasing aside, the origins of this aphorism are in the low calorific content of ethanol as compared to gasoline, about a third less. There is a reason that gasoline has endured for over a century as the transport fuel of choice: its energy density is high.
The country as a whole is seeking gasoline substitutes for environmental and balance of trade reasons. The North Carolina legislature laid down a target for 10% of all transport fuel to be produced in state by 2017. Biofuel was seen as the avenue and the Biofuels Center of North Carolina in Oxford took a lead in pointing the way. Here we make a case for alcohol, both ethanol and methanol, to be considered as the dominant means of achieving that objective.
Both alcohols blend with gasoline effectively. E85, a blend with 85% ethanol, is already available in many states. The methanol analog, M85, was piloted in California some years ago. Enabling either requires flex fuel vehicles, as many cars are today. The conversion cost is in the neighborhood of $125 when done at the original factory. Bills in Congress today on the Open Fuels Standard would require that most new vehicles have flex fuel capabilities by 2017.
But North Carolina may not need the adoption of the high alcohol blends to meet the targets. Both ethanol and methanol at the 10% level act as oxygenates, a property that allows a more complete burn of the fuel, which is good both for fuel economy and the environment. But even if every drop of gasoline contained state grown alcohols, the legislative target would not be met, because the state uses about a third as much diesel as gasoline.
There are two potential solutions to this numbers game. One would be to simply increase the normal blend to 13% alcohol. This would account for the diesel use without even counting the increasing use of biodiesel. There are moves afoot in other states to permit an increase to 15% ethanol in gasoline.
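The 13% figure follows from simple accounting: alcohol is blended only into gasoline, but the 10% target applies to gasoline plus diesel. A quick check, using the post's estimate that diesel use is about a third of gasoline use:

```python
# Check the blend level needed: alcohol goes only into gasoline,
# but the 10% target applies to gasoline plus diesel.
gasoline = 1.0          # normalized gasoline volume
diesel = gasoline / 3   # diesel use is about a third of gasoline (per the post)
total_fuel = gasoline + diesel

target = 0.10 * total_fuel         # 10% of all transport fuel
blend_needed = target / gasoline   # required fraction of the gasoline stream

print(f"Required alcohol blend in gasoline: {blend_needed:.1%}")
```

The answer is about 13.3%, which is why a 13% blend roughly covers the diesel shortfall even before counting biodiesel.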
The other measure to help with the problem would be to progressively require all diesel used in the state to contain 20 to 25% dimethyl ether (DME). This is completely feasible without engine modifications. Furthermore, DME produces zero particulates and has a higher cetane rating, so it is an improvement over diesel in that regard. From the standpoint of production, DME is easily manufactured in a methanol plant by using a simple additional process step. So, a policy standardizing on alcohol as a gasoline substitute enables the production of a diesel substitute.
All biomass suffers from having low energy density. One solution is to process the material to be denser prior to transport. High pressure baling of cellulosic fiber and torrefaction to produce high density pellets are two such measures. Another approach is to bring the processing facility to the source. This is particularly feasible for crops, because co-location can be planned. The recently announced Chemtex plant to produce cellulosic ethanol additionally plans to take advantage of the low cost nitrogen fertilizer available in parts of North Carolina. The source for the nutrient is hog waste stored in “lagoons”. The liquid is sprayed onto fields and provides the fertilizing action at a fraction of the cost of ammonia fertilizer, which trades for over $600 a metric ton. This underlines an interesting advantage for the state, provided all of this can be accomplished with acceptable air emissions, volatile organic compounds (VOCs) to be specific.
Woody biomass is by far the most available biofuel raw material in NC. The lignin component is notoriously difficult to break down for ethanol production. But thermal processing to synthesis gas (syngas) is straightforward. From syngas a variety of fuels can be synthesized. Methanol is the simplest, closely followed by DME. Both of these figure prominently in our proposed course of action described above.
An alternative starting point for syngas can also be natural gas. In fact, at today's prices it is more cost effective than woody biomass. The natural gas could be produced in state or imported from a neighboring producing state. Regardless of where the raw material is sourced, the value added product would be produced in state and the jobs would be created here.
Some combination of ethanol, methanol and DME, augmented by existing biodiesel initiatives, is a viable means to achieve the legislative goal of 10% of transport fuel being produced in state by 2017. For the state that pioneered prohibition there is a mild irony in alcohol playing a legitimate role in shoring up the economy.
October 16, 2012 § 6 Comments
The foremost American electric car battery company, A123, declared bankruptcy today. A high flying MIT spinoff, honored by the President at the big house, bestowed with DOE funding to the tune of over $200 million, is on skid row. I am sure the pundits will weigh in with what this might mean. But it certainly is a data point that fits in with the lackadaisical sales performance of electric cars. And it appears to offer support for what I am dubbing The High Compression Gambit by Mazda.
Mazda has made a few moves that signal an explicit strategy to defer electrification to the future. They have introduced a line of cars with high compression ratios operating on conventional gasoline. In my book, Shale Gas: the Promise and the Peril, I advocate the design of high compression engines to take advantage of the high octane ratings of the three viable gasoline substitutes, ethanol, methanol and methane. I suggest piloting through the armed forces, a la the Hummer. It seems as if at least one company is willing to take the plunge and not wait for anyone to lead the way.
But Mazda’s move appears to have nothing to do with enabling the gasoline alternatives. More below on how we can take advantage of that, no matter their intent. They were shooting for higher engine efficiencies to enable the latest CAFE standard to be met. Since the design started years ago, somebody was reading excellent tea leaves.
Conventional gasoline fueled cars have compression ratios (CR) of around 9:1. For a discussion of this parameter, see a previous blog. Regular gasoline experiences premature fuel ignition at higher compressions. What Mazda did was to make the cylinder narrower and longer. But the critical innovation was to provide two injections of fuel during a single cycle. The second injection is right at the point of incipient knocking. The evaporative cooling drops the temperature enough to prevent premature ignition. This allowed them to increase the CR to 12:1. Fancy exhaust system management allows them another notch up, to 13:1. For 14:1 they need to use premium gasoline. They are offering this only in Europe. They believe American consumers will balk at the fuel premium, which is around 30 cents per gallon in most states.
The Mazda3, offered with just the junior version of 12:1, is reported to improve mileage from 24/31 City/Highway to 29/39. There is some arm waving on manual versus automatic transmissions, improvements to the latter, and so on. But these are big improvements with no change in the gasoline. This appears to be the gambit. Keep improving the efficiency of the engine to help with the CAFE targets and kick in the electric capability when things get more viable.
We have opined in earlier blogs that electric cars, and hybrids for that matter, needed battery costs to drop to under $200/kWh and range to improve. The demise of A123 is not a promising note, but they may still be a factor. Continental Airlines went bankrupt twice before it became a value leader.
As we have noted before in these pages, improving efficiency is the fastest way to reduce emissions. The same gratification for less fuel used. I was not able to locate the predicted mileage for a Mazda3 with a CR of 14:1. The benefits diminish in non-linear fashion at the higher CRs. But it needs 95 octane gasoline.
Now for the punch line. Both E85 and M85, with 85% ethanol and methanol respectively, the rest gasoline, will certainly do the job. My favorite is methanol. The evaporative cooling with M85 will be even more effective than with gasoline. In fact, compression ratios of 16 or 17 ought to be possible. Also, methanol is dramatically cheaper with low cost shale gas and can also be made from coal or biomass, all for lower cost than ethanol from corn. At today’s natural gas prices the cost of methanol is under 45 cents per gallon. Even with half the energy content of gasoline it is still much cheaper.
So, consider that at today’s regular gasoline price of $3.80, a compact car will have a fuel cost of 10.8 cents per mile (35 mpg assumed). With the same assumptions on range, M85 today will cost 5.8 cents per mile. Because of the units we are using, the energy content penalty of methanol is already counted. The consumer would have to refuel every 200 miles instead of the 350 miles assumed for the gasoline case.
For the CR 14:1 vehicle the comparison would be with premium gasoline. That would raise the cost per mile to about 11.7 cents, while the M85 would remain the same because the 15% gasoline component need only be regular gasoline.
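The per-mile comparison above can be reproduced in a few lines. The gasoline price, the 35 mpg assumption and the 350-mile range come from the post; note that the M85 pump price, not the wholesale methanol cost, is the real uncertainty, so the sketch below reproduces only the gasoline cost per mile and the refueling interval:

```python
# Reproduce the cents-per-mile comparison. Prices and the 35 mpg /
# 350-mile baseline come from the post; the relative energy content of
# M85 is derived from methanol having about half the energy of gasoline.

gas_price = 3.80       # $/gal regular gasoline (post's figure)
mpg = 35               # assumed compact-car mileage on gasoline
range_gasoline = 350   # assumed miles per tank on gasoline

# Energy content of M85 relative to gasoline: 85% methanol at roughly
# half the energy of gasoline, plus 15% gasoline.
m85_energy = 0.85 * 0.5 + 0.15 * 1.0   # ~0.575

gas_cents_per_mile = gas_price / mpg * 100
m85_range = range_gasoline * m85_energy   # refueling interval on M85

print(f"Gasoline: {gas_cents_per_mile:.1f} cents/mile")
print(f"M85 refueling interval: {m85_range:.0f} miles")
```

This gives about 10.9 cents per mile for gasoline and a refueling interval of about 200 miles for M85, matching the figures above. Working backwards, the 5.8 cents per mile quoted for M85 implies an M85 pump price of roughly $1.17 per gallon (0.058 × 35 × 0.575).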
If M85 were available, Mazda could bring the 14:1 car to this country, make the small modifications required to tolerate ethanol and methanol, and allow consumer choice. The driver could use premium gasoline or E85 or M85, whichever was available. Obviously, at the numbers shown above they would demand M85. This would set the ball rolling for the true future: some portion of passenger vehicles with CRs of 17 running on M85 and many others with CRs of 14 or so with fuel choice. Oil derived gasoline would be rendered just another option, a step towards reducing oil to merely a useful, not a strategic, commodity. When that happens, OPEC will be defanged as a manipulator of oil price.
September 21, 2012 § 1 Comment
Used as we are to being accused of being extravagant users of energy, the recent report from the Energy Information Administration (EIA) is an eyebrow raiser. It states that US carbon emissions from energy use dropped to levels not seen in twenty years. Before you pop the Champagne (and thereby add CO2 to the air!), note that these statistics are just for the first quarter of 2012. And in examining the principal factors one is forced to conclude that the country has, in golf terms, received a mulligan (thank you, Tim Profeta at Duke, for that characterization).
The accompanying analysis by the EIA is sparse. They cite three reasons. One is the warm winter and the associated reduction in use of heating fuels. But these are quarter to like quarter comparisons, and we have had warm winters before in the last decade. The other two reasons are more interesting from the standpoint of a go-forward national policy.
A switch from coal to natural gas in electricity generation, driven by cheap shale gas, is clearly a big factor. For the oldest coal plants it was more cost effective to switch to gas than to retrofit pollution control equipment. To the extent this startling carbon mitigation statistic has been discussed at all, the principal attribution has been to this factor. Studies making the direct quantitative connection are not in evidence. But the underlying assumptions regarding the near halving of CO2 from substitution of gas for coal are not disputed. Activism by the Sierra Club and others continues to shut down coal plants. One could therefore reasonably expect continuation of this trend towards gas substituting for coal. New wind generation capacity can also be expected, but not as fast.
The third reason cited by the EIA is reduced consumption, largely attributed to the recessionary conditions. The concurrent high gasoline prices likely were a factor. Whatever the reasons, gasoline consumption has plummeted in the last nine months, from about 9.1 million barrels a day in June, 2011 to 8.5 in April, 2012. An improving economy ought to moderate this drop rate even if oil prices remain high.
Looking out to the next two decades, we have the federal target for a drastic improvement in the fuel efficiency of vehicles: 54.5 miles per gallon by 2025. New vehicles today average 24 mpg. This more than doubling in fuel efficiency ought to have a direct effect on emissions per mile driven.
But there is always Jevons’ Paradox to worry about. Jevons was an economist a century and a half ago who predicted that when devices become more efficient people simply use them more, thus offsetting the efficiency improvement. Modern economists term this the rebound effect. In the automobile example, per capita miles driven could go up because the consumer would note the reduced cost per mile. To the extent that this happens, the positive effect of increased efficiency on CO2 emissions would be dampened. In a recent discussion the economist Richard Newell at Duke, who until recently headed the aforementioned EIA, opined that the rebound effect would be relatively small in this case. For most people work related miles are a principal component, and this is not going to change just because the cost goes down.
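The rebound effect can be put in simple terms: the fuel (and emissions) saving is the mileage improvement discounted by any extra driving. A sketch using the 54.5 mpg 2025 target and today's 24 mpg new-vehicle average; the 10% rebound is an illustrative assumption, not a figure from the post:

```python
# Fuel use per capita scales as miles driven / mpg. The rebound effect
# adds extra miles when driving gets cheaper. The 10% rebound below is
# an illustrative assumption.

old_mpg, new_mpg = 24, 54.5   # today's new-vehicle average vs 2025 target

def fuel_use_ratio(rebound):
    """New fuel use relative to old, for a given fractional rebound in miles."""
    return (1 + rebound) * old_mpg / new_mpg

no_rebound = 1 - fuel_use_ratio(0.0)     # saving with unchanged driving
with_rebound = 1 - fuel_use_ratio(0.10)  # saving if miles driven rise 10%

print(f"Fuel saved, no rebound:  {no_rebound:.0%}")
print(f"Fuel saved, 10% rebound: {with_rebound:.0%}")
```

Even a 10% rebound only trims the saving from about 56% to about 52%, which is consistent with Newell's view that the effect would be relatively small here.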
But fuel economy standards apply only to new vehicles, so they will take a while to become a major factor. Earlier gains would be made through fuel substitution with less polluting alternatives. The simplest ones are natural gas vehicles for public transport and light duty vehicles. Another straightforward thrust would be substitution of up to 25% of diesel with dimethyl ether (DME) without engine modifications. DME from cheap natural gas would cost much less than diesel, has zero particulate emissions and a very high cetane rating. Particulates (with associated health effects) are a less publicized target for mitigation. The deployment of DME in the diesel infrastructure could be almost immediate in the case of captive businesses such as vehicle fleets, and compressors and pumps for all manner of industrial endeavor, including fracking for gas and oil.
While this country can take some satisfaction from the carbon mitigation statistic we cannot rest on those laurels. We may indeed have been granted a mulligan by the coordinated effects of shale gas related coal substitution and a weak economy using less energy. A tax on emissions, implicit or explicit, is not on the cards. We need policy that is industry and consumer friendly and yet effective in reducing emissions. Natural gas substitution of coal and increased proportion of renewable energy will likely continue. We need a national policy and associated research that makes deep inroads into substitution of oil based transportation fuel with less polluting domestic alternatives.
September 8, 2012 § 2 Comments
The words “high octane” conjure up visions of power, almost inexorable force. Scribes in the press refer to a high octane economy. As it turns out, in the transportation world, high octane is not necessarily better. But it could be.
The octane number is a characteristic that defines the ability of a fuel to tolerate high compression engines. In such an engine the fuel is compressed in the cylinder to a greater extent than in regular engines. Consequently, when the fuel is ignited, more useful work is extracted than in the conventional cylinder. This provides higher torque to the wheels. More importantly for the fuel economics and the environment, more work is produced for the same amount of fuel. In simple terms, the efficiency of the engine is increased.
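The efficiency gain from compression can be estimated from the ideal Otto-cycle relation, η = 1 − r^(1−γ), with γ ≈ 1.4 for air. This is a textbook idealization, and real engines fall well short of it, but it shows both why higher compression pays and why the returns diminish at higher ratios:

```python
# Ideal Otto-cycle thermal efficiency as a function of compression ratio.
# gamma = 1.4 (air) is a standard textbook assumption; real engine
# efficiencies are substantially lower but follow the same trend.

GAMMA = 1.4

def otto_efficiency(cr, gamma=GAMMA):
    """Ideal thermal efficiency: eta = 1 - cr**(1 - gamma)."""
    return 1 - cr ** (1 - gamma)

for cr in (9, 12, 13, 14, 16):
    print(f"CR {cr:>2}:1  ideal efficiency {otto_efficiency(cr):.1%}")
```

The jump from 9:1 to 12:1 is worth several percentage points, while 14:1 to 16:1 is worth much less, consistent with the diminishing benefits noted later in this post.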
The octane number of the fuel determines whether this can be accomplished. If the octane number is too low, the fuel will ignite prematurely. This is known as “knocking” in the parlance because of the sound produced, a bit like a rattle. Each car engine manual defines the octane rating permitted. Regular engines use 87 octane fuel and most others use 91 octane. Some sporty cars require 93 octane. This is why gas stations carry three grades. Inexplicably, though, the mid grade is 89 octane, not 91. No car is rated for that value. Ultra-high compression engines such as Indy race cars require octane numbers over 100. That is why they use pure ethanol or methanol, both with octane numbers over 110.
But for a given car, higher octane is not necessarily better. The 93 octane grade often carries the descriptor “super” or “hi-test”. This can be deceiving. A car designed to use 87 octane gasoline will derive no benefit from the higher grade. This is a case of more not being better.
But more can be better if we change the engine. While this may seem an impractical suggestion, consider first an important fact. Each of the three most viable substitutes for oil derived transportation fuel has an extremely high octane number. Ethanol, methanol and methane (the principal constituent of natural gas) clock in at 113, 117 and 125, respectively. Comparing just the liquids in that list, regular gasoline scores an anemic 87.
But ethanol has 33% less energy content and methanol has about 45% less. This is why E85, which contains 85% ethanol, has never been popular with consumers. Flex fuel vehicles (FFVs) tolerate any mixture ranging from pure gasoline to E85. But without subsidies E85 costs as much as or more than gasoline, especially in drought years such as this one. And it delivers 28% fewer miles to the gallon. Not a formula for consumer acceptance.
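The 28% figure follows directly from the blend arithmetic: with ethanol at a third less energy than gasoline, an 85/15 blend carries about 72% of gasoline's energy per gallon:

```python
# Energy content of an alcohol/gasoline blend relative to pure gasoline.
ethanol_rel = 1 - 0.33   # ethanol: about a third less energy than gasoline

def blend_energy(alcohol_fraction, alcohol_rel_energy):
    """Volumetric blend energy relative to gasoline (gasoline = 1.0)."""
    return alcohol_fraction * alcohol_rel_energy + (1 - alcohol_fraction) * 1.0

e85 = blend_energy(0.85, ethanol_rel)
print(f"E85 energy vs gasoline: {e85:.0%}  (~{1 - e85:.0%} fewer miles per gallon)")
```

The same function applied to M85, with methanol at about 45% less energy, gives roughly 62% of gasoline's energy, which is why the refueling penalty is steeper for methanol blends.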
Methanol is much cheaper to produce than ethanol, especially with cheap shale gas. So even taking into consideration the lower energy content, an M85 blend would be good value. But one would have to fill up about twice as often as with gasoline.
The principal disadvantage of methane (natural gas) as a fuel is its low volumetric energy density. Compressed natural gas (CNG) occupies four times the volume of gasoline for the same energy content. Ongoing research at RTI and elsewhere targets halving that disadvantage.
An elegant solution would be a super-high compression engine, with compression ratio around 16. Regular engines are just under 9. At these high compressions, all three gasoline substitutes would deliver very high efficiencies. One could expect the energy density disadvantage to be completely eliminated. Methanol in particular would be significantly cheaper than gasoline per mile driven. At present low cost natural gas would be the raw material of choice. But biomass sourced methanol will be in our future. Policy measures to ensure this ought to include federally funded research to reduce the cost.
The President recently set a goal of 50% reduction in imported oil by 2020. A significant component of that will be more shale oil from the Bakken and similar deposits. We are already producing 10% of our requirements from there, up from essentially zero a scant five years ago. But a big factor ought to be displacement of oil by alternatives. The three discussed above can play big parts. They would have starring roles if we standardized on FFVs with high compression engines.
In one fell swoop we would also take a major stride towards the goal of 54.5 miles per gallon for passenger cars by 2025. This ambitious goal is likely not realizable without a major technical advance such as efficient engines running on high octane fuels. Finally, such engines are relatively simple to design and produce. Detroit ought not to balk.
See announcement of web cast by Vikram Rao and Ann Korin
August 9, 2012 § 4 Comments
Well so OK, not quite cardboard, that distinction belonging to All Bran cereal. But you know what I mean. Beautiful, red, evenly round but bland. Conduct a taste test of one such against one from your closest farmer’s market. In particular, pick a variety like Cherokee Purple, which certainly has not gone through hybridization hell recently.
Now there is a paper from Ann Powell at the University of California, Davis, arguably the Mecca of agricultural research (sorry, N C State), and collaborators at Cornell and elsewhere. The original paper, in the journal Science (June 29, 2012 issue), is tough going for non-scientists, but reasonably approachable. I am directing you to a popular story describing the paper, which is eminently readable and has the relevant facts.
The essence of the work is that the industry devised the tomato for uniform ripening behavior. A mutation found in the 1920’s caused this trait, and breeders simply selected for it. This allowed the farmer to harvest evenly ripened fruit for transport to market. The fruit also ripened to an even red color. Before ripening, these commercial tomatoes are uniformly light green. Your garden tomatoes, by contrast, will have variations in color from beginning to end, shades of light and dark green to start. The aforementioned Cherokee Purple is particularly so, and the flesh is a gorgeous purple. Of course if you prefer plain vanilla red (is that oxymoronic or what?), this is not the one for you. Go for German Johnson or Brandywine (shown in the image). Names with character, in contrast to Big Boy and Early Girl; not picking on Burpee, but hey, I did not name them, they did.
But the genetic manipulation to obtain this trait inadvertently suppressed sugar production. The fully ripened fruit was then compromised on taste and flavor. What made this discovery possible was identification of the gene responsible for the uniform ripening. The investigators were then able to locate its position on the chromosome and thence deduce that the sugar producing protein was absent.
This research was uniquely possible because of an international effort to sequence the tomato genome. It took nine years; success was announced earlier this year. The tomato genome? Seriously? Apparently this was driven in part because the tomato belongs to a family that includes other vegetables. Oddly enough, it has 92 percent of its genes in common with the potato. The potato decided to go in the direction of starch in tubers. Tomatoes opted for the above ground show of sugar and color. Until we messed with nature, that is. Others such as eggplant are country cousins.
So is the tomato a fruit or a vegetable? Botanists see it as a fruit. Most of us are likely divided. Although what we just divulged here regarding sugar content might change the vote of some. But the Supreme Court has decided this for us. And you thought they only worried about hanging chads and curbing Arizonian sheriffs. Back in 1893 the Nix family disputed import duties on tomatoes from the West Indies on the argument that tomatoes were a fruit, not a vegetable. They lost. So now it is law.
While humans have 23 pairs of chromosomes, tomatoes have only 12 pairs. Interestingly, though, they have about 32,000 genes to our 24,000. Some believe the additional complexity is because they have to defend against the environment while remaining static. We can escape on our feet.
So, the future likely holds tasty tomatoes that do not sacrifice shelf life. If genetically modified, this ought to be straightforward. If they try to go about it through conventional breeding techniques, it could take longer. In India, a genetically modified version of a vegetable staple, the eggplant, was developed for better yields and disease resistance. A vociferous minority essentially squelched that. In the US almost all our corn and soybeans are genetically modified, but most people are not aware of it. If a labeling push goes through as law, opinion could change. Meanwhile, I grow my own Cherokee Purple and Sun Gold.
July 17, 2012 § Leave a Comment
The discovery of the Higgs boson announced on July 4 in Geneva was a science event of moment. For those still in the dark on this matter (by the way, that is a poor astrophysics joke) the discovery essentially proved the Standard Model of elementary particles and the forces they exert on each other. The Economist devoted the front cover to it in a recent issue. And none of this would have been possible without helium. And we may be running out of it.
Helium is the second most abundant element in the universe, after hydrogen. But terrestrially it is in relatively short supply, at least in usable concentrations. But first a word or two on what it does for us.
Helium is known as a noble gas because it is relatively inert; that is, it does not react easily with other species. Use in welding relies on this property. But most know it for being lighter than air, leading to party balloons and blimps. The most famous blimp in history, the Hindenburg, was lost because of the flammability of that other light gas, hydrogen. Helium when inhaled constricts the vocal cords, and one sounds like Alvin the Chipmunk. A somewhat more esoteric property is the ability to stay a liquid down to a temperature of nearly absolute zero, which is defined as −273.15 °C. That may seem to be just a number, but it is the temperature at which all molecules essentially stop moving. That is pretty daunting. Fortunately, it is believed that this temperature cannot be achieved. But liquid helium brings one within whispering distance.
Extreme cold is needed for certain magnets to be effective, especially in the medical diagnostic application of magnetic resonance imaging, commonly known as MRI. Such magnets are also critical to the Large Hadron Collider, the particle accelerator at CERN in Geneva that discovered the Higgs boson.
So, helium is critical for important devices and industrial practices and is considered a strategic commodity. And yet it is cheap enough to be used in party balloons. This oddity is due to the fact that the US government decided in 1960 to create a strategic reserve and then bleed it out for use. The sales are made at a relatively low price, today about $75 per thousand cubic feet (mcf) for raw helium; processed gas costs more. 75% of the world's helium supply is from the US, and half of that is from the reserve. Consequently the world price is determined by the US government release from Cliffside, the storage facility near Amarillo, Texas.
To make things even odder, Congress decided in 1996 to get the feds out of the helium business by 2015, seemingly to encourage privatization of the business. Investors did not get the memo. We are in 2012 now; nearly 35% of the world’s supply comes from Cliffside. Drawing it down to zero by 2015 sounds Draconian even for the current less-than-functional Congress. A bill to extend the deadline has been languishing for a while. They inherited the problem for sure, but are likely too busy with other matters such as dealing with Justice Roberts’ epiphany.
So, where does the helium come from presently and what is possible? All the US sourced helium is from the Hugoton natural gas field and from one other in Wyoming. The content is up to 1.9%, which is anomalously high relative to other deposits. At these concentrations it pays to separate it out. The hot area of shale gas is not a promising source. The small helium atom tends to escape from the reservoir, so one would expect the concentrations to be very low. Unfortunately, helium is generally found associated with natural gas that has a high nitrogen content, which usually renders recovery uneconomical.
An interesting source would be a by-product of liquefied natural gas (LNG) production. When the gas is chilled to −162 °C to liquefy the methane, the remaining vapor is rich in helium even if the original gas has very little of the stuff. This is in fact happening in Qatar despite the helium content being less than 0.1%. Qatar is now the second largest producer after the US. One could expect the same in Australia, with massive LNG facilities, and possibly even in the soon to be permitted Cheniere Energy plant in Louisiana, although for that one the concentration could still be challenging if shale gas is the raw material source. But if LNG becomes popular for long haul transport and the gas source is chosen carefully, helium could be a by-product.
Suffice it to say that if something is not done to improve helium supply, much of MRI could be rendered MRIP: magnets resting in peace.