
## Petroleum to Biofuel: How much land is required

One of the primary uses of petroleum is as fuel. As average carbon dioxide levels have already gone above 400 ppm and global warming is taking place, there have been many calls to reduce the usage of petroleum by substituting it with renewable energy. Biofuels stand out among all other renewables because they could be easily used as a drop-in replacement for petroleum based fuels.

## Petroleum usage

Around 84 percent of the distillates are used as fuels, including diesel, gasoline (petrol), kerosene, LPG etc. Taking the oil usage at 94 million barrels per day, this amounts to around 4585 billion litres per year.
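As a sanity check, the figure can be reproduced with a quick back-of-envelope calculation (assuming roughly 159 litres per barrel):

```python
# Rough check of the fuel-volume figure: 94 million barrels/day,
# ~159 litres per barrel, 84% of distillates used as fuel.
BARREL_LITRES = 158.987

daily_fuel_litres = 94e6 * BARREL_LITRES * 0.84
yearly_fuel_litres = daily_fuel_litres * 365

# ~4580 billion litres per year, close to the figure quoted above
print(yearly_fuel_litres / 1e9)
```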

## Biofuel yield

Different biofuel crops have different yields. Typical values are given below.

## Land Requirement to replace the entire petroleum based fuel

Considering the above yields and the amount of petroleum used as fuel, the total land usage of different crops to replace the entire petroleum based fuel could be calculated. For comparison, two other sets of data are also provided: 1) the total arable and agricultural land available, and 2) the land area of some of the larger countries in the world.

Could Brazil increase its ethanol production 100 times utilizing the entire area of Brazil itself? Otherwise, could the entire Sahara Desert or the United States be used for producing palm oil based biodiesel? Could any of these options be possible without touching the remaining tropical rainforests in the world?

As far as land usage is concerned, algae is the only source with the potential to replace the entire petroleum usage. But it has remained a research topic for many years, far away from being commercially available.

## Solar: Is it an option for aircraft and shipping

Solar energy is considered the ultimate source of energy. In the last few years, thanks to improving technology and the abundant production of silicon, solar has become very commercially viable and it is rapidly approaching grid parity.

The effect of this could be seen in transportation also. Numerous solar vehicles have been tried or announced, among them solar airplanes and solar powered ships. As a solar enthusiast, it looks very interesting to me. But after looking at some of the traditional ‘flagship’ vessels and aircraft, it is a different story. Some of the details are given below.

The largest aircraft ever built is the Soviet Antonov An-225, with a maximum take off weight (MTOW) of 640 Metric Tonnes. But the most widely used heavy aircraft in practice is the Boeing 747, with an MTOW of around 450 Metric Tonnes.

Consider the case of this aircraft becoming solar powered. The first choice is to fit solar panels on top of the wings, and next on top of the fuselage itself. How much area could be covered? As per Wikipedia, the wing area is around 550m2. The fuselage has a length of 70 metres and a width of 6 metres. So, combining everything, around 1000m2 would be available in total. With solar radiation of around 1000W/m2, the total available solar energy typically reaches 1MW. For this exercise we do not care about the cost of the solar cells, so let us consider one of the best triple junction cells at 40% efficiency. After installing them, we could get 400kW of electricity.
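The arithmetic above can be sketched in a couple of lines (the 1000 m² usable area and the 40% cell efficiency are the assumptions just stated):

```python
# Solar harvest on a 747: usable area x irradiance x cell efficiency.
AREA_M2 = 1000          # wings plus fuselage top, as estimated above
IRRADIANCE_W_M2 = 1000  # typical peak solar radiation at the surface
EFFICIENCY = 0.40       # one of the best triple junction cells

electric_w = AREA_M2 * IRRADIANCE_W_M2 * EFFICIENCY
print(electric_w / 1e3)  # ~400 kW
```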

Coming to the ocean front, the largest ship ever built was the Seawise Giant (later Knock Nevis), with a DWT of 564763 Metric Tonnes. Her length was 458 metres and her deck area was 31541 m2. That one was an oil tanker, whereas the largest container ship is Emma Maersk.

Let us repeat the previous exercise: the Knock Nevis could collect a maximum of around 32MW of solar energy. With the best solar panels, around 12.8MW of electricity could be produced.

Note that this is the maximum solar power production. We are not considering any electricity storage system to keep power available when the sun is not shining.

Now let us see the energy requirements of these aircraft and ships. The Boeing 747 engines could produce around 1000kN of thrust, and the aircraft could carry more than 150 Metric Tonnes of fuel. During take off, the fuel consumption rate is around 12000 US Gallons per hour, that is, about 10.25 kg of fuel per second. With an energy density of 43MJ/kg and a typical turbofan engine efficiency of 35 percent, that comes to around 150MW of power production.
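The power figure can be reproduced from the fuel burn; the jet fuel density of about 0.8 kg per litre is my assumption, the other numbers are from above:

```python
# Take-off power of a 747 estimated from its fuel consumption rate.
GALLON_LITRES = 3.785
FUEL_DENSITY_KG_L = 0.8   # assumed density of jet fuel
ENERGY_DENSITY_J_KG = 43e6
ENGINE_EFFICIENCY = 0.35  # typical turbofan efficiency

fuel_kg_per_s = 12000 * GALLON_LITRES * FUEL_DENSITY_KG_L / 3600
power_w = fuel_kg_per_s * ENERGY_DENSITY_J_KG * ENGINE_EFFICIENCY
print(fuel_kg_per_s, power_w / 1e6)  # ~10 kg/s, ~150 MW
```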

Coming back to Emma Maersk: the main engine produces 81MW and the electrical generators produce 30MW. Fuel consumption could reach around 20 Metric Tonnes per hour, and these ships carry several thousand tonnes of fuel.

That is the main point. Don’t think about a trip to Hawaii in a solar powered aircraft or cruise liner. A 747 requires around 400 times more power than what the best solar panels could produce, whereas Emma Maersk requires around 8 times more. Even though solar could not be used for primary power, it could be used for a lot of auxiliary power applications.


## Compressed Air Energy Storage, Entropy and Efficiency

The basic operating principle behind Compressed Air Energy Storage (CAES) is extremely simple. Energy is supplied to compress air, and when energy is required, the compressed air is allowed to expand through expansion turbines. But as soon as we go beyond this simple theory, it starts becoming more complex because of the thermodynamics involved.

Air gets heated up when it is compressed, as anyone who has ever used a bicycle pump could easily see. Depending upon how the air is compressed, the process could be broadly classified according to two thermodynamic processes, adiabatic and isothermal.

Adiabatic Compression: In this process, the heat of compression is retained, that is, there is no heat exchange with the surroundings, resulting in zero entropy change. So the compressed air becomes very hot.

Isothermal Compression: The temperature of the gas is kept constant by allowing the heat of compression to be transferred to the environment. The entropy of the gas decreases as it gives out heat, but the entropy of the surroundings increases by the same amount as it accepts that heat. Since both are equal, the net entropy change is zero.

Pure adiabatic and isothermal processes are very difficult to achieve; practical compressors are somewhere in between the two. Let me put it in simple words. Take a bicycle pump, insulate the cylinder using a rubber sheet and compress it very fast, in one second; that would be closer to an adiabatic compression. Touch the cylinder of the pump and you could feel the heat. Whereas, take the same pump and put it in water so that it remains cool. Compress it slowly, say by 10% of the cylinder length, allow it to cool, and continue compression and cooling a few times. Let the whole process take 1 minute instead of 1 second; that would be closer to an isothermal compression.

The same holds during expansion. If the gas is not allowed to take heat from outside, it would be an adiabatic expansion, resulting in a drop in temperature. But in an isothermal expansion, the gas is allowed to expand while taking heat from the surroundings, keeping the temperature constant.

In practice, isothermal compression is achieved very much like the second bicycle pump example above: compress the air with a small compression ratio, allow it to cool without changing the volume, and repeat this cycle until the required compression is achieved.

We could see that a reversible isothermal compression is,
$Isothermal = \lim_{R \to 1,\, N \to \infty} \sum_{k=1}^N (Adiabatic_k + Isochoric_k)$

In effect, repeat an infinitesimal adiabatic compression followed by an isochoric (constant volume) cooling N times, so that the temperature does not change. Applying the limit as N tends to infinity, the process becomes an ideal isothermal compression. Here R is the compression ratio of each adiabatic stage, which tends to 1 as the stages become infinitesimal; multiplying the ratio R of every cycle gives the total compression ratio. In normal systems, however, a definite number of compressor stages is used, with intercoolers as heat exchangers between the stages to provide the isochoric cooling and drop in pressure.
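A small sketch (assuming ideal diatomic air) shows the total work of N adiabatic stages with perfect isochoric intercooling converging to the isothermal work:

```python
import math

R_GAS = 8.314  # J/(mol K)
GAMMA = 1.4    # heat capacity ratio for diatomic air
CV = R_GAS / (GAMMA - 1)

def multistage_work(n_mol, t0, total_ratio, stages):
    # Each stage compresses by the same volume ratio starting from t0,
    # then an ideal intercooler brings the air back to t0 at constant volume.
    r = total_ratio ** (1.0 / stages)
    return stages * n_mol * CV * t0 * (r ** (GAMMA - 1) - 1)

def isothermal_work(n_mol, t0, total_ratio):
    return n_mol * R_GAS * t0 * math.log(total_ratio)

n = 100e3 * 1.0 / (R_GAS * 298.15)  # moles in 1 m^3 at 1 atm, 25 C
for stages in (1, 2, 4, 100):
    print(stages, round(multistage_work(n, 298.15, 100, stages) / 1e3, 1))
print("isothermal", round(isothermal_work(n, 298.15, 100) / 1e3, 1))
```

With one stage the work matches the adiabatic figure used in Example 1 below, and as the number of stages grows it approaches the ideal isothermal figure.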

The expansion process is a bit different. The air goes through adiabatic expansion in multiple stages, with heat exchangers between the stages. These heat exchangers perform the opposite function of the compressor intercoolers: they reheat the air by taking heat from the surroundings, thereby increasing the pressure. It is assumed that all the stages use adiabatic expansion with a constant volume ratio, except the last stage. In the last stage, adiabatic expansion with a constant pressure ratio is used so that the output is at ambient pressure. If a constant volume ratio were used, the output of the last stage expander would have a much lower pressure than the surroundings. The discharged air from the last stage subsequently expands and heats up by taking heat from the surroundings in an Isobaric (Constant Pressure) process. This could be compared to the expansion and heat rejection processes of a Brayton cycle gas turbine.

### Efficiency of Processes

Theoretically, both pure adiabatic and isothermal processes are reversible. That means whatever energy is supplied during compression could be retrieved during expansion, which implies 100% efficiency. The entropy change justifies both: in the adiabatic case there is no entropy change at all, whereas in the isothermal case the entropy changes of the system and surroundings are opposite and equal in value, because heat is exchanged at constant temperature, so there is no net entropy change. But a practical compressed air scenario is far from this ideal, for many reasons.

• Pure adiabatic or isothermal processes are not possible.
• If adiabatic storage is used, the air temperature and pressure could become very high at higher compression ratios. The container has to handle this large pressure and temperature.
• If isothermal storage using mixed adiabatic and isochoric stages is used, it leads to a reduction in efficiency.
• Efficiency could be improved by increasing the number of stages, but that would increase cost and complexity.
• Air is not an ideal diatomic gas.
• All intercoolers and heat exchangers do a mixed isochoric and isobaric heating or cooling.
• Mechanical parts are subject to friction and other inefficiencies.

### A few examples

In all these examples, ambient air at 25C and 1atm (100kPa) with an initial volume of 1.0m3 is used. All compressions and expansions are assumed to be adiabatic, and heat transfers use an isochoric process, except the last expansion stage, which uses an isobaric process.

Attachment: In order to do the calculations, I wrote a Python script. It could be downloaded from here. (Again, WordPress is not allowing me to upload a Python text file, so I uploaded it as an ODT file. Download and save it as thermo.py with execute permissions.) It could be invoked with parameters like the number of stages, compression ratio etc.

Example 1:
Compression: A single stage compression using volume ratio 100, followed by isochoric cooling.
Expansion: A single stage expansion using a pressure ratio 100 followed by isobaric heating.

Reference Isothermal Process: Pure isothermal compression requires 460.5kJ of work, reaching a pressure of 100atm and a volume of 0.01m3. The heat rejected equals the work done, and the entropy changes of the system and surroundings are equal in magnitude at 1545J/K. An ideal isothermal expander could get the same energy back during expansion.

But if adiabatic compression is employed, keeping the same volume ratio, the compressor has to do 1327kJ of work. This work reflects in increasing both pressure and temperature, to 631atm and 1607C. Since it is an adiabatic process, there is no change in entropy, so an ideal adiabatic expander would get back the same energy as work.

Let us see the associated isochoric cooling. During the cooling process, the entire 1327kJ of heat is rejected to the surroundings, as expected. The entropy of the air is reduced by 1545J/K, the same as in the ideal isothermal compression. But on the other side, the entropy of the surroundings increases by 4449J/K, resulting in a net entropy increase of 2904J/K. Looking carefully, the compression took 1327kJ instead of the 460.5kJ of an ideal isothermal process, giving a mere 34.6% efficiency. Since the entropy change of the air is the same in both cases, the maximum work that could be extracted from this air is also the same, 460.5kJ.

Coming back to the expansion side. During the adiabatic stage, the air is brought back to 1atm and the volume increases to 0.268m3, but at a temperature of -193C; 183kJ of work could be extracted from the process. After that, the air undergoes isobaric heating and expansion, taking 256kJ from the surroundings to come back to ambient conditions. A part of this heat, 183kJ, goes into increasing the internal energy of the air, and the remaining 73kJ is done as work. During the isobaric process, the entropy of the air increases by the same 1545J/K and the entropy of the surroundings drops by 858J/K, giving a net increase of 685J/K. So, looking at the whole cycle, the net efficiency is 183kJ/1327kJ = 13.8%, with a net 3589J/K entropy production. This is a near impossible scenario because of the high and low temperatures involved: for comparison, the melting point of steel and iron is around 1535C on the hot side, and the boiling point of air is around -195C on the cold side.
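The Example 1 figures can be reproduced with a short script (a simplified sketch, not the thermo.py attachment, assuming air is an ideal diatomic gas):

```python
import math

R = 8.314
GAMMA = 1.4
CV = R / (GAMMA - 1)
CP = CV + R
T0 = 298.15                 # 25 C ambient
n = 100e3 * 1.0 / (R * T0)  # moles in 1 m^3 at 1 atm

# Ideal isothermal compression, volume ratio 100
w_iso = n * R * T0 * math.log(100)           # ~460.5 kJ

# Adiabatic compression, volume ratio 100
t_hot = T0 * 100 ** (GAMMA - 1)              # ~1881 K (~1607 C)
w_comp = n * CV * (t_hot - T0)               # ~1327 kJ

# Isochoric cooling back to ambient: entropy balance
ds_air = n * CV * math.log(T0 / t_hot)       # ~-1545 J/K
ds_surr = w_comp / T0                        # ~+4450 J/K

# Adiabatic expansion, pressure ratio 100
t_cold = T0 * 100 ** -((GAMMA - 1) / GAMMA)  # ~80 K (~-193 C)
w_exp = n * CV * (T0 - t_cold)               # ~183 kJ of work extracted
q_reheat = n * CP * (T0 - t_cold)            # ~256 kJ taken in isobarically

print(round(w_exp / w_comp, 3))              # ~0.138 round-trip efficiency
```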

Example 2:
Compression: Two stages of compression using volume ratio 10, two stages of isochoric cooling.
Expansion: expansion using volume ratio 10, then isochoric heating followed by another expansion using pressure ratio 10 and isobaric heating.

Here each of the two ideal isothermal stages would take 230.3kJ, so the total work done is the same as in the above example, 460.5kJ. But as an adiabatic process is employed, each stage uses 378kJ, still better than the first example. Each isochoric cooling stage rejects the same 378kJ of heat, so the compression efficiency is 230.3kJ/378kJ, around 61%.

After the entire compression and expansion process, the round trip efficiency improves to 35.8%. The entropy of the air rises by 772J/K at each stage and comes back to zero, but the net entropy of the system and surroundings together increases by 1464J/K.

Example 3:
Like Example 2, but with 4 stages

Here, it could be seen that the total efficiency goes up to 59.3%, with a net 704J/K entropy production.

### What could we see from this

As the compression ratio per stage is reduced by increasing the number of stages, the difference in work done between the adiabatic and isothermal processes decreases. On the entropy front, the net entropy change of the system, that is the air under consideration, remains zero after the whole cycle, but the entropy of the surroundings increases, amounting to an overall entropy increase. As the per-stage compression ratio decreases, the net entropy production also decreases, and correspondingly the efficiency increases. That is the beauty of the greatest law of nature, the second law of thermodynamics. The entropy law governs everything.

Pure adiabatic and isothermal processes do not add any net entropy, so they have no loss. The actual entropy production takes place during the isochoric and isobaric heating or cooling. In these examples, as the number of stages increases, the net entropy production decreases, improving efficiency.

If there is some way by which the heat is retained instead of dissipating to the surroundings, the overall efficiency could be improved.

### References

1. Compressed Air Energy Storage – How viable is it?

2. Ideal Gases under Constant Volume, Constant Pressure, Constant Temperature, & Adiabatic Conditions
http://www.grc.nasa.gov/WWW/k-12/Numbers/Math/Mathematical_Thinking/ideal_gases_under_constant.htm

3. Wikipedia for general information on different thermodynamic processes

### Equations

As equations are generally disliked, I moved them to the bottom.

Universal Gas Law

$PV = nRT$

The Heat Capacity Ratio

$\gamma = C_p/C_v$
$C_p - C_v = R$

For Adiabatic Process $\delta Q = 0$ so $\delta W = \delta U$
$PV^\gamma = K_a$
$\delta T = K_a (V_f^{1-\gamma} - V_i^{1 - \gamma}) / (nC_v(1 - \gamma))$
$T_f$ could also be computed using Universal Gas Law
$Work\ Done = K_a (V_f^{1-\gamma} - V_i^{1 - \gamma}) /(1 - \gamma) = n C_v \delta T$
$\delta S_{system} = 0$
$\delta S_{surroundings} = 0$

Isothermal Compression

For Isothermal Process $\delta U = 0$, so $\delta W = \delta Q$
$Work\ Done = P_f V_f ln\frac{P_i}{P_f}$
$\delta S_{system} = -\frac{|Work \ Done|} {T_{ambient}}$
$\delta S_{surroundings} = \frac{|Work \ Done|} {T_{ambient}}$

Isochoric Cooling

For Isochoric Process $\delta W = 0$, so $\delta U = \delta Q$
$\delta Q = n C_v \delta T$
$\delta S_{system} = -|n C_p ln\frac{T_f}{T_i} - nR ln\frac{P_f}{P_i}| = -|n C_v ln\frac{T_f}{T_i}|$
$\delta S_{surroundings} = \frac{|\delta Q|} {T_{ambient}}$

Isobaric Heating

$\delta U = n C_v \delta T$
$\delta W = n R \delta T = P \delta V$
$\delta Q = n C_v \delta T + n R \delta T = n C_p \delta T$
$\delta S_{system} = n C_p ln\frac{T_f}{T_i}$
$\delta S_{surroundings} = -\frac{|\delta Q|} {T_{ambient}}$

## World Energy Consumption: 2250 Tsar Bombs or 18800 WW2

I like history. Recently I was reading about the 50th anniversary of Tsar Bomba, tested by the Soviet Union on 30 October 1961. Tsar Bomba is the single most powerful thermonuclear weapon ever detonated. It had a yield of 50 to 57 megatons of TNT (around 210 PJ), which is 10 times the combined power of all explosives used in the Second World War, or around 3000 times as powerful as the Hiroshima bomb.
Tsar Bomba Video

Tsar Bomba was part of the sabre rattling and display of power during the Cold War and the Nuclear Arms Race. The USSR and the US together had a total stockpile of around 25000 megatons of nuclear weapons, and the world was at the brink of nuclear war a few times. There were numerous peace advocates, environmentalists and anti-nuclear activists protesting against the Nuclear Arms Race and the Cold War.
Nuclear Arms Race
Nuclear Weapons Stockpile

That was history, but at the same time I thought about the current energy scenario. How does the world energy consumption compare against this powerful bomb? Initially, I was thinking that the energy output of the Tsar Bomba could be enough for a few months, or at least a few weeks, at the current rate of energy consumption. After doing some calculations, the result was totally surprising.

### World Energy Consumption

Total energy consumption of the world is now around 15TW (15 * 10^12 W), which comes to around 474 exajoules (474 * 10^18 J) per year (from Wikipedia). As of now, more than 80% of this comes from fossil fuels, mainly coal, petroleum and natural gas. All of them are non-renewable and produce greenhouse gases. Let us see how 15TW, or 474 exajoules, compares against these powerful nuclear weapons.

In terms of tons of TNT
It comes to 3.59 kilotons of TNT per second or 113200 megatons of TNT per year.

Mass Energy Equivalence, using Einstein’s equation
Completely transforming 1kg of mass every 100 minutes, that means 5250 kg (11600 lbs) yearly.

Hiroshima Bombs of around 18 kilotons
One Hiroshima Bomb would have to be detonated every 5 seconds. That is 6300000 Hiroshima Bombs in one year.

Total Second World War Explosives
Total explosives used in Second World War was around 6 megatons. More Details
We would have to conduct a Second World War every 28 minutes, or 18800 Second World Wars in one year, for the same amount of energy.

Tsar Bomba
Out of all the bombs, this one gives the maximum mileage, around 4 hours per Tsar Bomba. That means detonating around 2250 Tsar Bombas yearly.

Total Nuclear Explosions
Total Nuclear Explosions carried out by all countries would come close to around 510 megatons. More Details
These 510 megatons could take us for around 40 hours. That has to be repeated 222 times to meet our yearly consumption.

Total Nuclear Stockpile of 25000 megatons
Our yearly energy consumption is 4.5 times the much-criticized stockpile.
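All of these equivalences follow from the same two numbers; a sketch assuming 1 ton of TNT = 4.184 GJ and the yields quoted above:

```python
# Converting world energy consumption into the equivalents listed above.
TNT_TON_J = 4.184e9   # energy of one ton of TNT
YEARLY_J = 474e18     # world consumption per year
POWER_W = 15e12       # average consumption rate

kt_tnt_per_second = POWER_W / TNT_TON_J / 1e3       # ~3.59
hiroshima_per_year = YEARLY_J / (18e3 * TNT_TON_J)  # ~6.3 million
ww2_per_year = YEARLY_J / (6e6 * TNT_TON_J)         # ~18800
tsar_per_year = YEARLY_J / 210e15                   # ~2250
stockpile_ratio = YEARLY_J / (25000e6 * TNT_TON_J)  # ~4.5

print(kt_tnt_per_second, ww2_per_year, tsar_per_year, stockpile_ratio)
```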

### A few thoughts

Just think how the reaction would be if a country tried to detonate even a small nuclear device. All the environmentalists, peace activists, and nuclear and war opponents would definitely make it a big issue. There have been many programs, like SALT and START, to reduce the nuclear stockpile.

But the above mentioned nuclear weapons look very small compared to our energy requirement, and no one is talking about reducing our energy consumption rate. Policy makers try to promote the idea that both GDP and development are directly proportional to energy consumption, just like the computer processor makers used the Megahertz Myth during the 1990s to showcase that more CPU clock frequency means more performance. Since it was not practical to increase the clock rate much above 3.5 GHz, they themselves dropped that idea afterwards.

So, at the current rate, our energy consumption looks like a war against our own Mother Nature. Normally, every war strategy planner tries to think about supply lines and resources. Does it look the same with this energy war? Is it sustainable, and could we afford this much pollution and environmental destruction?

It is an excellent decision to use more and more renewable sources like solar, wind etc. But at the same time, we must also try to limit this insanely high energy consumption.

## Why we require localized food and energy production

Yet another Gandhi Jayanti has come. In the wake of sustainability concerns and issues like global warming, the teachings of Mahatma Gandhi are becoming more and more important.

### Some of the Inspirational words of Mahatma Gandhi on Grama Swaraj (Village Self Governance), Sustainability etc.

“The true India is to be found not in its few cities, but in its seven hundred thousand villages. If the villages perish, India will perish too.”

“We have to show them that they can grow their vegetables, their greens, without much expense, and keep good health….”

“Earth provides enough to satisfy every man’s need, but not every man’s greed”

According to Gandhiji, each village should be basically self-reliant, making provision for all necessities of life – food, clothing, clean water, sanitation, housing, education and so on, including government and self-defence, and all socially useful amenities required by a community.
Gandhi’s Concept of Gram Swaraj

If we look at the current scenario, it is essential to add clean technologies and localized sustainability to the vision of Mahatma Gandhi. Let us see more on some of the important items.

### Increasing Localized food production:

Localized food production should be encouraged to the maximum. This would definitely boost the local economy. From a pure consumer point of view, one could get much better and fresher food items. From an environmental point of view, this could reduce a lot of the emissions and pollution arising from the use of fossil fuels for transportation. The extra cost and waste arising from paper and plastic food packaging materials could also be reduced.
The Localization of Agriculture

### Localized Energy Revolution:

Clean energy generation is gaining great momentum nowadays. Looking from a broader perspective, clean energy could be divided into two separate streams. They are

1. Large Solar and Wind Farms owned and operated by big companies.
2. Small Distributed Rooftop Solar systems and Micro Wind turbines owned and operated by mainly residential customers.

It can be easily seen that the localized option is fully in alignment with Gandhiji’s dream: Self reliance, sustainability – Apart from generating one’s own food and clothes, generate one’s own energy also.

Let us see some advantages of small distributed generation.

• Common people are very much involved. It is real democracy: simply speaking, “Energy of the people, by the people, for the people”
• More local job creation, which would improve local economy.
• If a proper financial model is setup, that would boost local banks and finance institutions.
• Minimal land requirement. Most rooftops are unused anyway. Large farms, on the other hand, do require a lot of land. Even though in many cases these farms are constructed on barren and arid land unusable for anything else, it is still a real encroachment on nature.
Battle Brewing Over Giant Desert Solar Farm
• Around 400 million people in India do not have electricity, and the majority of them are in villages. This is a scenario unique to India. Large solar/wind farms would not make any difference to these people; instead they would cater only to the established traditional urban customer bases. But small scale systems could revolutionize these villages.
• Large solar/wind farms are the clean versions of centralized generation. Apart from reducing or stopping carbon dioxide emissions, they have every other problem of large centralized power generation. They are prone to political issues. They depend on the national grid to reach customers, which would effectively increase grid congestion, transmission and distribution losses etc. To minimize these issues, the setting up and maintenance of a new grid would be required. But for small distributed systems, a minimal micro grid would do the real work.
Solar Subsidy in India Bias of Large Solar Farms unwisely goes against the Global Trend of Rooftop Solar System Support
• Large centralized systems are like big brand department stores. They use the client-server approach: electricity ‘goes’ in only one direction, from producer to consumer. Small systems, on the other hand, are like eBay or Skype. They use the peer-to-peer model: the distinction and separation between producer and consumer is reduced, more like a shortest-path approach. This model has many advantages during failures and problems.

This does not mean that large solar/wind farms are not necessary, but small distributed systems genuinely require a lot of attention and that should be given.

Lead Acid battery is touted as the cheapest battery available. In fact, Lead Acid is the family name for a collection of closely related battery types, from simple vented/flooded ones to advanced Valve Regulated ones. Depending upon the type of usage, there are shallow and deep cycle batteries. Typical examples of shallow cycle batteries are ordinary car starter batteries, whereas deep cycle batteries are used for prolonged deep discharge operations like electric propulsion, UPS etc. For comparison, some reasonable deep cycle flooded batteries are available for around $120 per nameplate kWh. This “lowest cost” has given Lead Acid batteries a lot of advantage in renewable energy applications. But before getting deep into deep cycle lead acid batteries, there are a lot of interesting facts to consider.

## The Fine Prints

Both the capacity and the state of charge depend heavily upon a factor named Vpc, which is nothing but Voltage per Cell. Normally all standard battery manufacturers quote their capacity to 1.75 Vpc with a discharge time of 20 hours. In simple language, it is the capacity until the voltage of the cell reaches 1.75V over a discharge period of 20 hours. 1.75V is considered as 0% State of Charge. As Depth of Discharge (DoD) is just the opposite of State of Charge, this is nothing but 100% Depth of Discharge. But discharging to that level puts a lot of stress on the battery, so the battery could handle only a very limited number of cycles in that manner. In short, 100% DoD is not at all preferred for lead acid batteries.

## All Capacities are equal, but some are more equal than others

Another interesting parameter is the “name plate” capacity mentioned on the battery. Normally a “120AH” battery gives the implication that it could give 1A for 120 hours, or 120A for 1 hour, or 20A for 6 hours, or whatever combination of that which multiplies out to 120AH. But in reality this is not the case.
Faster discharging drastically reduces the available capacity of the battery. As stated in the previous section, the “name plate” capacity is quoted at C/20, which means at the very slow pace of 20 hours to discharge the battery. Many standard discharge applications using inverters require a much higher discharge rate. The available capacity of a battery could be computed using an empirical relation named Peukert’s law. The following figure shows the available capacity of a typical Lead Acid battery against discharge time; 100% capacity is stated for 20 hours. See the interesting fact: if the battery is discharged over 100 hours it could give 145% (actually 45% more than the nameplate) capacity, whereas if it is discharged in 6 hours, it could give only 75.7% capacity.

## Number of cycles

The total usable cycles of the battery are very much related to the regular depth of discharge. For a regular 80% DoD, a typical battery lasts for around 600 cycles, but if we use 50% DoD, it lasts for around 1200 cycles. There are many telecom batteries which are advertised for 20 years, but they have a rating of 5% to 10% DoD, which is ridiculously low (fine prints again). The following graph from windsun.com and Concorde batteries shows the relation between available cycles and Depth of Discharge.

Apart from that, there are a few “solar batteries” which could give around 2100 cycles at 80% DoD, like the HuP Solar battery. But they also cost a lot, somewhere between $200 to $300 per kWh.
HuP Solar Information

There is a clear disadvantage to the better cycle life at lower DoD: more batteries have to be kept in parallel to store the same amount of usable electrical energy. At 80% DoD, 125% capacity is required, whereas at 50% DoD the requirement becomes 200%. If the above mentioned telecom battery is used, the requirement would go beyond 1000%!! So, basically it is a trade-off between capacity, DoD and number of cycles. That is the story of the Lead Acid battery.
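Peukert's law can be sketched as below; the exponent k ≈ 1.3 is my assumption, chosen because it reproduces the figures quoted above for a typical deep cycle flooded battery:

```python
def capacity_fraction(discharge_hours, rated_hours=20, k=1.3):
    # Peukert's law: I^k * t is constant, so the delivered capacity C = I*t
    # scales as t^((k-1)/k), normalized here to the 20-hour rating.
    return (discharge_hours / rated_hours) ** ((k - 1) / k)

print(round(capacity_fraction(100), 3))  # ~1.45  (145% over 100 hours)
print(round(capacity_fraction(6), 3))    # ~0.757 (75.7% over 6 hours)
```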
Let us consider two other types of batteries.

## Lithium Ion Battery (Lithium Iron Phosphate)

Like Lead Acid, Lithium Ion is also a family name, covering Lithium Cobalt, Lithium Manganese, Lithium Iron Phosphate, Lithium Polymer, Lithium Titanate etc. I am mainly considering Lithium Iron Phosphate for comparison. It has a much better cycle life compared to other types of Li-Ion batteries, but a bit lower energy density. Normally Li-Ion batteries could handle a much better discharge rate compared to Lead Acid batteries. The discharge rate could go beyond 2C, which means discharging the battery in 30 minutes. Another advantage of the Li-Ion battery is that it has a very low dependency on Peukert’s law, that is, even at higher current ratings the battery capacity would not go down like a Lead Acid battery. Modern Lithium Iron Phosphate batteries give around 5000 cycles at 70% DoD or 3000 cycles at 80% DoD. The cost has gone down to less than $400 per kWh. Since they have better energy densities, they weigh less and occupy less space compared to Lead Acid batteries.
Specification of Thundersky Lithium Iron Phosphate battery

## Analysing Specific Capacity and Energy Density of some popular batteries

In my last article, I was concentrating more on the Specific Capacity of different cathode materials. But this is only one part of the story when a complete cell is concerned. To find the Specific Capacity of a particular battery chemistry, the whole chemical reaction has to be analyzed.

Essentially, the method used here is similar to that of the previous analysis. Instead of just the cathode material, we have to consider the complete chemical reaction taking place at both the cathode and the anode. The rest of the calculation is nearly the same. In short,

Specific Capacity = (N x F) / (Total weight of all reactants)

where,
N = Change in oxidation state, i.e. the number of electrons released per reaction
F = Faraday constant, 26801 mAh/mole

Lead Acid:
This is one of the oldest rechargeable batteries invented, yet ubiquitous. The following are the chemical reactions happening at the two electrodes during discharge.
-ve Electrode (Anode): Pb + H2SO4 = PbSO4 + 2H+ + 2e-
+ve Electrode (Cathode): PbO2 + H2SO4 + 2H+ + 2e- = PbSO4 + 2H2O
The total chemical reaction is,
Pb + PbO2 + 2H2SO4 = 2PbSO4 + 2H2O (with 2 electrons through circuit)
Finding the total molar weight, 643g of reactants produce 2 moles of electrons.
Specific Capacity = 2 * 26801/643 = 83mAh/g
Energy Density, assuming 2V per cell = 166Wh/kg

Lithium Ion (Lithium Ferrous Phosphate):
This is one of the variants in the family of Lithium Ion Battery.
The overall chemical reaction during discharge is as follows:
LiC6 + FePO4 = LiFePO4 + 6C (with 1 electron through circuit)
That means 230g of reactants produce 1 mole of electrons, at 3.3V.
Calculating both Specific Capacity and Energy Density,
Specific Capacity = 26801/230 = 117mAh/g
Energy Density = 385Wh/kg

Sodium Sulphur:
Mainly used in grid scale energy storage applications, Sodium Sulphur is a variant of the molten metal battery.
To give the overall chemical reaction,
2Na + 4S = Na2S4 (with 2 electrons through circuit)
The cell gives out 2V.
In this case, 174g of reactants give out 2 moles of electrons.
So Specific Capacity and Energy Density are
Specific Capacity = 2 * 26801/174 = 308mAh/g
Energy Density = 616Wh/kg
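The three calculations above can be condensed into a small script (the molar masses are approximate):

```python
F_MAH_PER_MOL = 26801.0  # Faraday constant in mAh per mole of electrons

def specific_capacity(electrons, reactant_mass_g):
    # Theoretical capacity in mAh per gram of total reactants
    return electrons * F_MAH_PER_MOL / reactant_mass_g

lead_acid = specific_capacity(2, 207.2 + 239.2 + 2 * 98.08)  # Pb + PbO2 + 2 H2SO4
lifepo4 = specific_capacity(1, 79.0 + 150.8)                 # LiC6 + FePO4
na_s = specific_capacity(2, 2 * 22.99 + 4 * 32.07)           # 2 Na + 4 S

print(round(lead_acid), round(lifepo4), round(na_s))  # ~83, ~117, ~308 mAh/g
# Energy density (Wh/kg) = specific capacity (mAh/g) x cell voltage
print(lead_acid * 2, lifepo4 * 3.3, na_s * 2)
```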

Conclusion:
It could be easily seen that the Lead Acid battery, even though the most widely used, has a very low theoretical capacity. An interesting finding is that the current practical capacities of both Lithium-Ion and Sodium Sulphur batteries are coming very near to the theoretical capacity of Lead Acid technology.