Thursday 28 December 2017

Battery Storage Blues?

Evaporation ponds concentrate Lithium salts for extraction

Could a Shortage of Lithium Hold Back the Market for Renewables?

Energy storage is increasingly seen as critical to the decarbonisation of transportation and as the means to propel the integration of clean renewable energy into the energy supply system. In this vision of our low carbon future, we all drive electric cars and store energy from our rooftop solar PV panels in batteries for use at home in the evening.

The scale of the demand for batteries could be immense.

Registrations of electric vehicles are growing rapidly from a very small base. Silicon Valley start-up Tesla is currently going through 'production hell' trying to scale up and deliver on its ambitions (and on the demand it has stimulated for its electric vehicles, or EVs). Incumbent car manufacturers are falling over one another to announce their own EV development plans. In 2017 Volvo announced that all its vehicles would be electrified from 2019, Volkswagen announced that every model would be available with an electric powertrain by 2030, and Mercedes-Benz revealed plans for its own 'gigafactory' to rival Tesla's.

Politicians and governments have shown similar enthusiasm for electric vehicles. The UK government has announced that the sale of new petrol and diesel cars will end by 2040, and France has announced the same goal.

The lithium ion battery is without dispute the technology of choice for both stationary applications (home energy storage) and mobile ones (electric cars and trucks), and for good reason. Lithium ion batteries have an exceptionally high specific energy (kWh/kg) and energy density (kWh/m3) compared to other battery chemistries, which are both useful attributes for mobile applications. The technology also scores well for lifetime (the number of times it can be charged and discharged before the capacity falls away) and power output (how fast you can get the energy out of the battery).

So in discussing the emergence of the lower cost, mass produced batteries that will be needed to usher in this new age, one question comes up again and again.  Will the world have enough lithium for all these batteries?

Lithium ion is currently the battery technology to beat

How Much Lithium Do We Need?

The good news is that lithium ion batteries turn out to be deceptively named.

It's called a lithium ion battery because lithium ions are the charge carriers that migrate from the anode to the cathode as current is drawn from the battery. In fact, lithium is only a small fraction of a lithium ion battery by weight. One of the most common formats is the NCA lithium ion battery, which combines lithium oxide with nickel, cobalt and aluminium in the cathode, together with a graphite anode. Typical proportions for the cathode are:

Li(Ni 0.85, Co 0.1, Al 0.05)O2

Because lithium is such a light element (atomic weight 7), it works out to be only 7% of the cathode weight, so let's estimate around 2% of the weight of the entire cell, including the anode, electrolyte and packaging.
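That 7% falls straight out of the atomic masses - a quick sanity check:

```python
# Mass fraction of lithium in an NCA cathode, Li(Ni 0.85, Co 0.1, Al 0.05)O2.
# Atomic masses in g/mol, rounded to two decimals.
Li, Ni, Co, Al, O = 6.94, 58.69, 58.93, 26.98, 16.00

cathode_mass = Li + 0.85 * Ni + 0.10 * Co + 0.05 * Al + 2 * O
li_fraction = Li / cathode_mass

print(f"Li share of cathode: {li_fraction:.1%}")  # roughly 7%
```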

The Tesla Model S uses 18650 format cells assembled into 5.3kWh packs of 444 cells (see this 'teardown video'), making each cell roughly 12Wh. With 18650 format cells weighing in at around 45g each, an 80kWh battery pack for a car would require around 6,666 cells weighing a total of 300kg. 2% of 300kg is 6kg of lithium per vehicle.

With annual car sales at around 80m per year, a transition to a future where every single new car was fully electric would require around 480,000 tonnes of lithium each year.

At the rate of 80m new cars a year, it would take 11 years to replace the entire world fleet of cars, which is estimated to comprise around 900 million vehicles. Lithium can be recovered from used batteries and recycled to make new ones, so in theory once the whole fleet is replaced no more would need to be extracted. So the total lithium requirement to move to fully electric cars would be around 5.4 million tonnes.
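The back-of-envelope above, in a few lines (all inputs are the rough estimates used in this post; 2% of a 300kg pack comes to 6kg of lithium per car):

```python
# Rough estimate of lithium demand for a fully electric world car fleet.
pack_kwh = 80            # battery pack size per car, kWh
cell_wh = 12             # capacity of one 18650 cell, Wh
cell_g = 45              # weight of one 18650 cell, g
li_mass_fraction = 0.02  # lithium as a share of total cell weight (estimate)

cells_per_car = pack_kwh * 1000 / cell_wh      # ~6,666 cells
pack_kg = cells_per_car * cell_g / 1000        # ~300 kg of cells
li_per_car_kg = pack_kg * li_mass_fraction     # ~6 kg of lithium

cars_per_year = 80e6
annual_li_tonnes = cars_per_year * li_per_car_kg / 1000

fleet_size = 900e6
total_li_tonnes = fleet_size * li_per_car_kg / 1000

print(f"{li_per_car_kg:.0f} kg Li per car")
print(f"{annual_li_tonnes / 1e3:.0f} thousand tonnes per year")
print(f"{total_li_tonnes / 1e6:.1f} million tonnes for the whole fleet")
```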

Is there Enough Lithium?

Lithium is the 25th most abundant element in the Earth's crust and is also present in seawater - it has been estimated that there is 230 billion tonnes of lithium in the oceans. The challenge is that it tends to be found in low concentrations.

Lithium is found in its highest concentrations in underground reservoirs of brine and in hard granitic rocks. In Chile, brine is pumped up from the underground pools into vast ponds (see image at top), where the water evaporates until the remaining solution is rich in lithium chloride. Reaction with sodium carbonate then precipitates out insoluble lithium carbonate.

In Australia the mineral spodumene is mined for lithium. The rock is crushed and heated in a kiln, then mixed with sulphuric acid and roasted again to produce lithium sulphate.

Pure lithium is then extracted from these salts by electrolysis.

Lithium is available from many sources

The United States Geological Survey (USGS) estimates the world's proven reserves at 14 million tonnes of lithium, distributed as shown in the chart. While this figure is of a similar order to the requirement for the electrification of vehicles - leaving limited headroom once stationary applications and consumer electronics are added - there's good reason to believe that we have enough lithium:

  • Reserves represent only those resources that have been discovered so far and that are judged capable of economic extraction with current approaches. The most easily extracted crude oil was the first to be exploited, but as demand increased, reserves were discovered in more and more places and technologies were developed for economic extraction from difficult locations (such as under the North Sea). It is likely that the same will occur as demand for lithium rises. USGS currently estimates total resources at 35 million tonnes.
  • Lithium is already found in numerous locations around the globe (see chart), including many countries that could be judged to be politically stable.
  • As the commercial importance of energy storage increases and the scale of the financial opportunity from battery storage becomes evident to investors, funding will pour in, accelerating the development of new battery chemistries that use other materials. For example, Gridtential, a company that claims to have revamped old lead-acid battery technology with silicon wafer technology adapted from the solar industry, recently received $11m over two financing rounds.

However, having enough lithium in the ground is not the same as being able to get it out fast enough to keep up with demand.

USGS estimates the world production rate for lithium at 36,000 tonnes per year. It takes around seven years to bring new brine extraction capacity on stream, and three to four years for extraction from hard rock deposits. It is quite conceivable that there will be capacity crunches along the way, and a scramble by automobile manufacturers to secure supplies, but in answer to the question of whether there is enough lithium for the electrification of transport - it looks like yes, there is.

Cobalt, another essential ingredient of lithium ion batteries, on the other hand - that's a completely different story... and one for another blog.

Friday 24 November 2017

The Carbon Intensity of UK Grid Electricity

What it Means for Low Carbon Buildings

Take a look at this chart. It's nothing short of astonishing. Up to 2012 the amount of carbon dioxide emissions associated with the delivery of one unit (kilowatt hour, or kWh) of electricity in the UK was hovering around 500gCO2/kWh. Since then, the amount of carbon dioxide that is emitted for each unit of electricity has plummeted. In 2016 the average was 269gCO2/kWh, a fall of nearly half in only four years. This change has far-reaching implications for regulators, not least those involved in ensuring the low carbon transition of the UK building stock, both newly constructed buildings and the improvement of the existing stock.

So what's behind the fall?

The first factor is the retreat of coal-fired power stations. In 2012, the government's Digest of UK Energy Statistics (DUKES) had coal-fired power stations producing 44% of our electricity; nuclear plants were suffering from outages and gas prices had risen, so coal use was at a high. By 2016 the corresponding figure for coal was only 9%. In the same period, gas-fired power stations rose from 24% to 42% of UK power generation. This matters for two reasons. First, coal has a much higher ratio of carbon atoms to hydrogen atoms than natural gas, so it produces about 60% more carbon dioxide for each unit of heat energy released in burning. Second, gas is most often burnt in Combined Cycle Gas Turbine (CCGT) power plants with conversion efficiencies of up to 60%, compared to around 40% for conventional steam turbines.

The second factor is the increasing contribution from renewables in the electricity supply. Enormous amounts of wind energy, biofuel fired generation and solar energy have come online. In 2012 renewables and 'other' represented 11% of UK electricity supply. In 2016, this had risen to 27.8%.

As a result, the average carbon intensity of electricity in 2016, at 269 gCO2/kWh, was only just higher than that of burning gas (216 gCO2/kWh). Factor in a gas boiler efficiency of (say) 80% and the gap disappears.
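The arithmetic behind these comparisons is a one-liner: fuel emissions divided by conversion efficiency. The emission factors below are illustrative assumptions - 216 gCO2/kWh of heat for gas (the figure above) and roughly 60% more for coal, as described earlier:

```python
# Carbon intensity of delivered energy = fuel emissions / conversion efficiency.
# Fuel emission factors in gCO2 per kWh of heat released (illustrative values).
GAS_G_PER_KWH = 216
COAL_G_PER_KWH = 216 * 1.6   # coal emits roughly 60% more CO2 per unit of heat

def intensity(fuel_g_per_kwh, efficiency):
    """gCO2 per kWh of electricity (or heat) delivered to the user."""
    return fuel_g_per_kwh / efficiency

coal_steam = intensity(COAL_G_PER_KWH, 0.40)  # conventional steam turbine
gas_ccgt = intensity(GAS_G_PER_KWH, 0.55)     # combined cycle gas turbine
gas_boiler = intensity(GAS_G_PER_KWH, 0.80)   # domestic boiler, heat not power

print(f"coal steam plant: {coal_steam:.0f} gCO2/kWh")
print(f"gas CCGT:         {gas_ccgt:.0f} gCO2/kWh")
print(f"gas boiler heat:  {gas_boiler:.0f} gCO2/kWh")
```

Note how the gas boiler figure lands right next to the 269 gCO2/kWh grid average for 2016.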

This is huge.

For years electricity has been the bad boy of low carbon building design. People fretted as a series of reports from the Energy Saving Trust showed that heat pump installations in the UK were operating nowhere near their advertised efficiencies and were consequently underperforming gas boilers on carbon emissions. Simple resistive electric heating by panel heaters, or immersion heaters for hot water, was to be avoided at all costs.

Four short years later and all this is turned on its head.

And we're only just getting started with renewables. In September, Dong Energy announced that it would move forward with the world's largest offshore wind farm, Hornsea 2 off the Yorkshire coast, with development costs that had fallen by half compared to previous offshore farms. A couple of weeks later, and not to be outdone, the UK's first subsidy-free solar farm was announced. It's still a bit of an outlier, combining solar with battery energy storage and reusing grid connections from an earlier development, but it's a clear sign of the direction of travel. The carbon intensity of grid electricity is heading in only one direction.

But there's another wrinkle to consider. The carbon intensity of the grid is not a static value. It varies constantly as the mix of generators fluctuates to meet different levels of electricity demand and in response to changes in wind and sunlight. On 11th June this year, it was windy and sunny at the same time. Records tumbled. The carbon intensity of grid electricity in the middle of the day fell below 80gCO2/kWh.

So now the moment when you choose to take power from the grid is a strong determinant of the actual instantaneous carbon emissions your electricity use is creating.

Some uses of electricity - for example preparing domestic hot water, or to some extent space heating - are relatively flexible about timing. If I'd known ahead of time that carbon emissions would be so low on 11th June, I could have set a timer for my immersion heater to heat water at midday and got my tank of hot water at fully one third of the carbon emissions of using gas heating.

And the technology to do this is just around the corner. An impressive new grid carbon intensity forecasting service has recently been launched by National Grid, the Met Office and WWF, with an API that software developers can use to do just this kind of thing.
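The logic a carbon-aware timer needs is tiny. The forecast format below is a made-up illustration (hourly pairs of time and gCO2/kWh), not the real API's schema - but the idea is simply "run the flexible load in the greenest slot":

```python
from datetime import datetime, timedelta

def greenest_slot(forecast):
    """Pick the lowest-carbon slot from a forecast of
    (start_time, gCO2_per_kWh) tuples - e.g. to run an immersion heater."""
    return min(forecast, key=lambda slot: slot[1])

# Illustrative daytime forecast for a sunny, windy day (08:00 to 15:00).
day = datetime(2017, 6, 11)
forecast = [(day + timedelta(hours=h), g)
            for h, g in enumerate([310, 290, 250, 180, 80, 120, 95, 110],
                                  start=8)]

start, carbon = greenest_slot(forecast)
print(f"Heat water at {start:%H:%M} ({carbon} gCO2/kWh)")
```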


So where does this leave low carbon building?

The current building regulations in England and Wales were last reviewed in 2012 and set minimum carbon emissions rates that developers must design to. The carbon intensity of electricity in the approved calculation (the Standard Assessment Procedure, or SAP) is currently 519gCO2/kWh - accurate at the time, but now woefully behind the curve.

Buildings are normally intended to be long-lasting. If we allow ourselves to imagine a future where digital technologies, the smart distribution of electricity, demand response, energy storage and renewables combine in a so-called 'Smart Grid' then a number of significant observations about low carbon building emerge:

  • Even based on the current carbon intensity, never mind the direction of travel over the life of a building, it is beyond me why any new build or significant refurbishment should include gas heating.

  • The current enthusiasm among UK policy makers and local authorities for district heating (for example this recent consultation by the Scottish government) could also be a troubling dead end. District heating itself is neither intrinsically clean nor green - it all depends on what heat source you put at the other end of the pipes you're going to dig up all the streets to install. Gas fired combined heat and power may be seen as low carbon at the moment, but how long will it look so appealing if electricity continues on its current path?

  • Building codes are currently focused on regulating carbon emissions. In a world of low carbon electricity you can meet a carbon target with a draughty garden shed full of electric fan heaters. It's time to move to energy targets (kWh/m2) to create buildings that sip energy and liberate power for the demands created by the electrification of transportation.

If I were building my own Grand Design right now - my future-proof forever home based on these observations - here's what I'd go for:

  • High levels of insulation and air tightness to drive down space heating demand to a practical minimum

  • Eliminate the wet heating system - I'd go for electric underfloor heating coupled to a high thermal mass floor, allowing price- and carbon-responsive electricity purchases to heat the slab at times of excess renewable generation

  • Direct electric hot water cylinder - again allowing price-responsive purchase of electricity as well as diversion of excess generation from...

  • the inevitable....beautiful solar panels on the roof - as many as possible!

Could this be the future direction of energy efficient buildings? What do you think?

Wednesday 22 November 2017

The Future of Grid Charges, Solar and Battery Storage

OFGEM, the regulator of the UK energy markets, has seen the future, and it's worried. The era of solar powered homes, offices and factories generating their own energy and storing it in low cost batteries will apparently create havoc in the way we pay for the running, maintenance and upgrade of the electricity grid (network costs). So OFGEM has launched a consultation on the fairest way to apply network charges to energy bills in future. Here's their latest update on their thinking.

The current model is that network costs are spread across every unit of energy delivered to an end user. OFGEM estimates the average network charge to be in the region of £120 a year for domestic electricity customers, or around a quarter of a typical domestic electricity bill.

A house that installs solar energy needs fewer units of electricity from the grid each year. The problem is that as more and more households and businesses install solar, the network costs get spread across an ever-smaller number of delivered units of electricity.

The costs of the network don't get smaller though, because the solar homes still need to draw electricity from the grid at certain times. Even when you combine solar with battery storage, there will still be parts of some days when the house pulls from the grid. All that infrastructure still needs to be there and it still needs to be maintained.

So the network costs charged against each unit of electricity need to rise, and OFGEM is fretting that this is unfair to people who don't have solar panels, who bear more of the increase because of their higher consumption.

But how big a problem is this really? How large are the extra costs borne by the non-solar homes, and how would they change as the level of solar penetration rises? The solarblogger has done the sums so you don't need to.

Here's a spreadsheet.

Assuming an average system size of (say) 3kWp with a yield of 2550kWh/year and self-consumption of solar electricity at 35%, the network charges avoided by one million solar homes work out at £39.16 for each house each year (they pay £80.84 of network costs in their bill). This means that the twenty-six million other homes that don't have solar have to pay £1.51 more towards network costs than they would have if no one had solar (£121.51 for network costs).

Hardly reason for panic at OFGEM.

What about the argument that as more and more households go solar, the problem of network costs being unfairly and disproportionately recovered from non-solar homes gets worse? What if more and more solar installations are coupled with battery storage, and self-consumption of solar generated electricity rises?

If we project that half of UK homes have solar and half do not, then the network costs per unit of electricity rise from 4.6 pence to 5.4 pence. Solar homes would then be paying £95.88 of network costs in their annual bill, compared to £144.11 for non-solar homes.

Add in battery storage at this level of solar deployment, taking the self-consumption of solar generated electricity to 70%, and the figure becomes 6.8 pence per unit in network charges. Solar/battery homes pay £59.63 per year towards network costs and non-solar homes pay £180.37. Even in this extreme scenario, the increase for non-solar homes is a modest £60.37 a year.
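The spreadsheet model behind these figures can be reproduced in a few lines. The inputs are assumptions reverse-engineered from the 4.6p per unit starting point (27 million homes, roughly 2,609 kWh consumed per home per year, £120 of network costs each), so the outputs differ from the blog's spreadsheet only by rounding:

```python
HOMES = 27e6
KWH_PER_HOME = 2609           # implied by £120 of network costs at 4.6p/kWh
NETWORK_TOTAL = HOMES * 120   # £, total network costs to be recovered
SOLAR_YIELD = 2550            # kWh/year from an average 3kWp system

def network_bills(solar_homes, self_consumption):
    """Return (network charge in p/kWh, solar home bill £, non-solar bill £)."""
    avoided = SOLAR_YIELD * self_consumption            # kWh not bought from grid
    grid_kwh = HOMES * KWH_PER_HOME - solar_homes * avoided
    rate = NETWORK_TOTAL / grid_kwh                     # £ per delivered kWh
    solar_bill = (KWH_PER_HOME - avoided) * rate
    non_solar_bill = KWH_PER_HOME * rate
    return rate * 100, solar_bill, non_solar_bill

for n, sc in [(1e6, 0.35), (13.5e6, 0.35), (13.5e6, 0.70)]:
    rate_p, solar, non_solar = network_bills(n, sc)
    print(f"{n/1e6:.1f}m solar homes, {sc:.0%} self-use: "
          f"{rate_p:.1f}p/kWh, solar £{solar:.2f}, non-solar £{non_solar:.2f}")
```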

Of course, once everyone has solar the 'problem' goes away and the extra network costs provide a good incentive to install solar or find other ways to reduce your electricity consumption. OFGEM should go and find a real problem to worry about - they've got plenty to choose from!

Thursday 14 September 2017

The MCS Pricing Mess and How to Fix it

New homes often have smaller solar installations.  Image: Viridian Solar

A government sponsored monopoly raises its fees to 233% of their previous level. Cue outrage from the industry - not only at the raise itself (most people accept that the Microgeneration Certification Scheme (MCS) must live within its means) but mostly at the way it was implemented. There was no consultation; everything was decided by the small, self-elected group who run the scheme. Little thought had apparently been given to how the change would affect the diverse businesses that rely on certifying their installations to the MCS and have nowhere else to go for this service. The transition arrangements were wholly inappropriate.

It's not like they didn't know this was coming. The consultation to slash the Feed in Tariff was announced in August 2015, at which point it was obvious to everyone that the MCS was facing an existential threat to its income streams, 90% of which derive from solar PV. The change could have been implemented with a lead-in time had the managers of the scheme acted sooner.

Worst affected are those that do a large number of low value installations; they are hit disproportionately hard by the £20 increase per certificate. Businesses providing solar installations to house builders are right at the sharp end. Installations can be as modest as one or two panels - representing only a few hundred pounds' worth of business per house - and when you're doing hundreds of these each month, those extra £20 sure add up. To compound the situation, installers are working to quotations accepted and ordered many months ago, on contracts that may be expected to run for many months more. One business owner estimates that this change has taken more than £100k a year off his bottom line. Oh, and if you were about to suggest that they just ask their housebuilding clients for more money - forget it - that is not how it works in construction.

The other reason for the outrage is that the increase throws into stark relief the many ways the scheme has failed the industry it purports to benefit. The purpose of the scheme was to increase consumer confidence in new clean heating and electricity generating technologies. Time and again the scheme has shown itself incapable of tackling abuses by the small number of bad apples with the potential to drag down the reputation of the industry. People would be more supportive if the scheme had ever bared its teeth and kicked a few companies off the list.

So how to fix this?

If you accept that the MCS needs more income, then you have to accept that prices must rise.  But why must they be the same for every single installation?  The scheme covers 'micro generation' which means systems right up to 50kW in size.

A £35 certificate is a vanishingly small cost for a 50kWp solar installation, which might have a contract value of £50,000 - 0.07% to be precise. On the other hand, £35 is a much, much larger proportion of the cost of a small 0.5kWp system on a new home.

To those that say "but the certificate costs the same for the large and the small installation" I say "so what?"

Does my seat on a plane cost the airline the same as my neighbour's? You bet! Did I pay the same price as they did? Almost certainly not - especially if I bought mine in a big rush last night while they planned ahead. Does a Gucci handbag cost 1,000 times more to make than an unbranded one? No chance. I could go on.

Businesses left cost-plus pricing behind years ago - you price your product at the value someone attaches to it.

A fairer way to apportion the cost of running the scheme is to charge a different amount for a certificate based on the size of the system that is being certified.  By way of example, I'm going to propose how it could work for solar PV - similar approaches could be applied to the other technologies covered by the scheme.  I don't have access to the MCS figures on installation size and number, so I'll use the Feed in Tariff (FIT) statistics to illustrate the concept.

The table shows the number of installations registered with the Feed in Tariff in the 12 months to July 2017, and the number of MWp installed, split by the FIT tariff bands.  If the MCS had been charging £35 per installation, it would have netted £1.25m of income from the 35,815 installations.

If, instead, a certificate had cost £10 per kWp installed, the scheme would have netted £1.315m - a very similar number.

I've just used a straight £10/kWp formula, as I'm working with average values. A formula with a minimum of, say, £20 for installations below 2kWp would collect more from the smaller installations, meaning the increase for the large scale installations could be kept smaller.
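The sliding scale is easy to express as a formula. The £20 minimum and £10/kWp rate are the illustrative values from this post, and the £1,000/kWp contract value is roughly what the £50,000 example above implies:

```python
CONTRACT_VALUE_PER_KWP = 1000  # £, rough figure implied by the £50,000 example

def certificate_fee(system_kwp, rate_per_kwp=10.0, minimum=20.0):
    """Proposed size-based MCS certificate fee in pounds (illustrative only)."""
    return max(minimum, system_kwp * rate_per_kwp)

for kwp in [0.5, 2.0, 4.0, 50.0]:
    fee = certificate_fee(kwp)
    share = fee / (kwp * CONTRACT_VALUE_PER_KWP)
    print(f"{kwp:>4} kWp: fee £{fee:.0f}, {share:.2%} of contract value")
```

The minimum kicks in below 2kWp, so a one-panel new-build system pays £20 rather than a few pounds, while a 50kWp farm pays in proportion to its size.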

Could something like this work better for industry?  What do you think?

Friday 30 June 2017

Vale of Tiers

The Use and Abuse of the Tier 1 Solar Panel Classification

Marketeers have latched on to this designation as a way of shifting more solar panels, but what does it actually mean?  Is a solar panel from a so-called "Tier 1 manufacturer" of higher quality?

What is a Tier 1 Solar Panel Manufacturer?

Bloomberg New Energy Finance (BNEF) is a research consultancy that provides financial information and analysis to investors in the Clean Energy sector.  BNEF tracks large-scale solar farm development projects, their value, the solar panels used and what kind of finance has funded the development.

Banks that finance solar farms can do so with either 'recourse' or 'non-recourse' finance. Non-recourse finance means that the bank has no charge over the assets of the developer that builds the solar farm, so it will want to be confident that if there's a problem once the farm is handed over, it can be sorted out under the solar manufacturer's warranty. Consequently, banks keep a 'whitelist' of solar panel manufacturers that they will accept on projects financed on a non-recourse basis. BNEF realised that its knowledge of development projects allowed it to infer which manufacturers were whitelisted by banks, and developed a classification to create a list of 'major' or 'bankable' solar panel manufacturers; subscribers to its services can access this list to help inform investment decisions.

Under the BNEF scheme, a tier 1 manufacturer is defined as one that has sold its own-brand, own-manufacture panels to six projects larger than 1.5MWp (around 6,000 panels) in the past two years, where those projects were financed on a non-recourse basis by six different banks.
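The definition reads like a checklist, and can be sketched as one. The project record format below is invented purely for illustration of the rule as described above:

```python
from datetime import date, timedelta

def is_tier_one(projects, today):
    """Sketch of the BNEF tier 1 test as described in the text: six own-brand
    projects over 1.5MWp in the past two years, financed on a non-recourse
    basis by six different banks."""
    cutoff = today - timedelta(days=730)  # roughly two years
    qualifying = [p for p in projects
                  if p["date"] >= cutoff
                  and p["size_mwp"] > 1.5
                  and p["non_recourse"]
                  and p["own_brand"]]
    banks = {p["bank"] for p in qualifying}
    return len(qualifying) >= 6 and len(banks) >= 6

# Six qualifying projects, each financed by a different bank.
today = date(2017, 6, 30)
projects = [{"date": date(2016, m, 1), "size_mwp": 5.0, "bank": f"Bank {m}",
             "non_recourse": True, "own_brand": True} for m in range(1, 7)]
print(is_tier_one(projects, today))  # True
```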

To complicate matters further, there are other, competing businesses also offering reports listing tier 1 solar manufacturers based on their own methodologies, for example Navigant Research has released lists in the past.

A few observations immediately arise:

  • The list is dynamic - companies are leaving and entering the list the whole time.
  • The list is backward-looking - if banks decided not to use panels from a certain manufacturer any more, it could take two years before that manufacturer lost its tier 1 status, given the way BNEF monitors the market
  • The criterion is a financial one - some would argue that it is at best an indirect measure of the quality of the product, since the banks arguably care most about the continuing ability of the manufacturer to rectify problems
  • The list is not publicly available - which makes it difficult to check claims
  • There is more than one company providing lists - some manufacturers will appear on one list but not others

Just Add Marketing Stardust

People trying to shift solar panels to consumers and businesses have latched on to this tier 1 categorisation with predictable results. Here are a few promotional claims harvested from different company websites (naming no names):

  • "We have chosen the solar panels we offer based on quality, efficiency and value we only use Tier 1 solar panels."  
  • "We use the very best performance-guaranteed, tier 1 products" 
  • "the performance of tier 1 products will always outweigh the quality of their competitors" 
  • "we believe in only using the best, which are the Tier 1 solar panels." 

A deliberate confusion of panel quality with the tier 1 list has created the impression that 'tier 1 panels' are better.  Are they?

Is a Tier 1 Solar Panel a Better Solar Panel?

BNEF itself is clear that being on the list is not a direct measure of the quality of the product or even financial stability of the company that made it:

"We strongly recommend that module purchasers and banks do not use this list as a measure of quality, but instead consult a technical due diligence firm....the classification is purely a measure of industry acceptance.  There have been many examples of quality issues or bankruptcy of Tier 1 manufacturers".

So, let's turn to a technical due diligence firm for advice. Fortunately one such company, DNV GL, produces an annual report of its findings and recently published this year's - the PV Module Reliability Scorecard Report 2017. In this document, the accreditation and testing laboratory reports reliability test results for more than 50 commercially available PV solar panel models, which, according to the company, makes it the most complete publicly available comparison of PV module reliability. They claim that it covers most of the leading manufacturers active in the market today.

The tests are similar to those used in EN61215 type approval testing, comprising thermal cycling, dynamic mechanical load, damp heat, humidity freeze and PID (potential induced degradation).

Power loss after accelerated lifetime testing - products from large manufacturers are shown in orange and are found among the best and the worst performing panels

Critically, DNV GL "do not see a direct correlation between the size of the manufacturers and the performance in accelerated testing". The graph above shows the power loss after panels had endured artificial ageing through thermal cycling. The results coloured with the orange bars are products from the top-10 global manufacturers by volume. Some small manufacturers obtained very good results, while some large manufacturers produced panels with poorer performance in this test.

So the take-home is: Tier 1 is not what it sounds like, and it's certainly not the guarantee of quality that salespeople from some solar companies are presenting to customers.

Buyer Beware.

Friday 19 May 2017

Energy Efficiency Regulations - Private Rented Properties

England and Wales 

Is the legislation in England and Wales Collateral Damage of the Green Deal fiasco? 

In 2015 UK Government introduced legislation creating a minimum energy efficiency standard for homes and commercial properties that are rented out - the Energy Efficiency (Private Rented Property) (England and Wales) Regulations 2015.

This legislation captures around 9% of private rented properties in England (see my earlier blog: How Energy Efficient is UK Housing Stock?) - the Impact Assessment reckoned around 360,000 homes would need to be raised from an EPC rating of F or G (based on 2012 housing stats).

These houses would need an energy efficiency upgrade upon being re-let after 1 April 2018, or by 1 April 2020 if the tenancy didn't change first.

The legislation also covered non-domestic buildings. The impact assessment reckoned that 18% of business properties have an EPC rating of F or G, adding around 200,000 buildings to the total requiring an energy refurbishment.

However, in a ham-fisted attempt at joined-up legislation, the government tied the regulations to the Green Deal.

This hopeless scheme was going to "transform Britain's buildings" by offering funded energy upgrades. Householders and social landlords were going to queue up to insulate their homes because the cost of repaying the loan would be less than the money saved on future energy bills. (Who could have possibly predicted that this wouldn't be an easy sell?)

It was ignominiously withdrawn in July 2015, having written only 14,000 Green Deal finance plans, due to its bewilderingly bureaucratic design and the unappealing interest rates offered on the loans. (Damning National Audit Office report here.)

How has the withdrawal of the Green Deal impacted the regulations on privately rented homes?

Here's an extract from the Impact Assessment associated with the regulations for private rented properties:

From 1st April 2016, landlords of a domestic property may not unreasonably refuse requests from their tenants for consent to energy efficiency improvements, where financial support is available that ensures no upfront costs to landlords for the measures, such as the Green Deal, the ECO, tenant’s own funds, or national or local authority grants.  
 From 1st April 2018, all new lettings or tenancy renews of applicable private rented properties in the domestic and non-domestic sectors should be brought up to a minimum EPC rating of an ‘E’ if this can be achieved with no upfront costs. 

By adding the requirement that the only energy efficiency improvements that have to be made are those with no upfront cost to the landlord, the whole thing is effectively defunct. With the Green Deal gone and precious few local authority grants around, the only way a building is going to be improved is if the tenant pays for it!

The regulations urgently need to be amended to require the landlord to upgrade the property - perhaps with a cost cap (see the information on the new Scottish regulations below).

Meanwhile in Scotland... 

This issue looked as if it had been kicked into the long grass in Scotland after the working group tasked with developing Regulation of Energy Efficiency for the Private Sector (REEPS) seemed to get bogged down and was put on ice in 2015 (naturally, Westminster was blamed for this).

The only regulation affecting private landlords was a rather lame requirement to get an energy assessment (with recommendations) done - but without having to action any of the recommendations.

However, the Scottish Government has since sprung into action, designating energy efficiency a National Infrastructure Priority and publishing a consultation on a Scottish Energy Efficiency Programme (SEEP).

Plans for energy efficiency requirements in the private rented sector have also been revealed in a consultation published in April 2017. The proposal is that privately rented houses covered by the so-called "repairing standard" will need to meet a minimum energy efficiency rating on the Energy Performance Certificate (EPC).

For new tenancies after 1 April 2019 the house will have to achieve an EPC rating of no worse than EPC band E. After 1 April 2022 (called the backstop date) all privately rented homes will have to achieve this minimum standard, irrespective of a tenancy change. 

The Scottish Government estimates that this change will affect 30,000 properties.

Where the EPC shows a band of F or G, the owner will need to have a "minimum standards assessment" carried out and lodged on the EPC register before renting out the property. This new assessment is closely based on the EPC methodology, but will include recommendations of the lowest-cost technically appropriate measures to bring the house up to the required energy standard.

The owner will have to bring the property up to the standard required by the assessment within six months of the date of the assessment, subject to a proposed cost cap of £5,000.

The minimum standard ratchets up to EPC D, for new tenancies after 1 April 2022, with a backstop date of 1 April 2025.
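The timetable above is simple enough to express as a lookup. Here's an illustrative sketch of the proposed Scottish rules in Python; the dates and bands come from the consultation, but the function names and the shape of this little API are my own invention, not any official tool:

```python
from datetime import date
from typing import Optional

def minimum_epc_band(tenancy_start: date) -> Optional[str]:
    """Minimum EPC band proposed for a NEW private tenancy in Scotland."""
    if tenancy_start >= date(2022, 4, 1):
        return "D"          # tightened standard from April 2022
    if tenancy_start >= date(2019, 4, 1):
        return "E"          # initial standard from April 2019
    return None             # no minimum standard before then

def backstop_band(on_date: date) -> Optional[str]:
    """Minimum band for ALL privately rented homes, regardless of
    tenancy change, once the backstop dates have passed."""
    if on_date >= date(2025, 4, 1):
        return "D"
    if on_date >= date(2022, 4, 1):
        return "E"
    return None

print(minimum_epc_band(date(2019, 6, 1)))   # E
print(backstop_band(date(2023, 1, 1)))      # E
```

The proposed £5,000 cost cap would sit alongside this logic: the owner need only carry out the measures recommended by the minimum standards assessment up to that level of spend.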

The property owner will be responsible for getting the improvements required by the minimum standards assessment done. Local authorities will have the power to issue civil fines of up to £1,500 against any owner who does not comply with the standard.

The proposals for Scotland look very good.  If implemented, they would result in a clear process that can deliver low carbon buildings in this sector through regular tightening of the requirements (beyond those already mapped out to 2025).  The proposals seem to be a good mixture of ambition and realism.

By contrast, England and Wales are in disarray in this area, and as in the regulation of so many other sectors (new homes, social housing), Scotland is showing the way.

Thursday 6 April 2017

What is a PERC Solar Cell?

This article was written by guest blogger, Dr KT Tan, Technical Director at Viridian Solar

What is the fuss about PERC?

Anyone who follows the latest developments in the solar photovoltaic market will have come across the term PERC at least once in the past year or so. Respectable suppliers are competing to launch the best PERC products as a means to secure the premium end of the market.

PERC stands for Passivated Emitter Rear Cell; the concept was first proposed by the University of New South Wales in a grant report in 1984 (1). As a matter of fact, the modern PERC generally refers to two specific configurations called PERT (Passivated Emitter, Rear Totally-diffused), and PERL (Passivated Emitter, Rear Locally-doped), which have proven to be the most viable solutions amongst other PERC configurations (2).

The PERC concept took some time to meet with commercial success. The first high efficiency PERC cells, with efficiencies of up to 20%, were fabricated in the lab from the 1990s onwards (3). At the time there were other competing technologies, e.g. back contact and HIT, so the PERC concept was only one of many promising ideas for increasing solar cell efficiency. It took nearly thirty years for the industry to catch up and reproduce in production the efficiencies achieved at research level.

How does a PERC Solar Cell work?

In a nutshell, a PERC solar cell can be created by adding a rear surface passivation film to a conventional crystalline cell. In practice, it involves two additional steps: first, a rear passivation film is applied. Then, either lasers or chemicals are used to open up tiny pockets in the film through which the rear conducting layer can contact the silicon below the passivation layer. Figure 1 compares the configurations of conventional and PERC solar cells.

Figure 1: Comparison of cell configurations 
The above technique enables the efficiency of the solar cell to be improved in three ways:

1. By minimising surface recombination

The atoms at the surface of a silicon wafer have 'dangling bonds' which can capture charge-carrying electrons and pull them back into the silicon crystal structure (a process called surface recombination).  As a result, when an electron reaches the back surface of a conventional solar cell, it is likely to be captured and does not contribute to the current.  In a PERC solar cell, however, a passivation film grown on the back surface reduces this effect by tying up the 'dangling bonds'.  A charge-carrying electron that strays too close to the back surface is allowed to continue on its way, increasing the chance that it will reach the emitter and contribute to the electric current produced by the solar cell (Figure 2).

Figure 2 - How a PERC cell increases efficiency by decreasing surface recombination.

Longer wavelengths (red light) penetrate deeper into the cell and generate electrons nearer the back surface than shorter wavelengths (blue light) (4). Because the PERC cell suppresses surface recombination, it captures more of the carriers generated by these longer wavelengths (5). This improves performance during mornings and evenings, when the light contains proportionally more long wavelengths, and underlies the claims of better weak-light performance made by many manufacturers.

2. By Increasing Internal Reflectivity to Capture More Light

The rear film reflects light that passes through the solar cell without being absorbed, giving it a second chance at absorption (see Figure 3). In other words, more of the incident light is converted into electricity.

Figure 3 PERC cell increases efficiency by reflecting light back through the solar cell
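The value of that second pass can be roughed out with the Beer-Lambert law. The numbers below are illustrative assumptions (a 180 μm wafer and a weak near-infrared absorption coefficient), not measured values for any particular cell:

```python
import math

def absorbed_fraction(alpha_per_cm: float, thickness_cm: float,
                      passes: int = 1) -> float:
    """Fraction of light absorbed after `passes` traversals (Beer-Lambert)."""
    return 1.0 - math.exp(-alpha_per_cm * thickness_cm * passes)

wafer = 180e-4   # 180 micrometre wafer, in cm (assumed for the example)
alpha = 20.0     # /cm, weakly absorbed long-wavelength light (assumed)

single = absorbed_fraction(alpha, wafer, passes=1)   # conventional cell
double = absorbed_fraction(alpha, wafer, passes=2)   # PERC: rear reflector

print(f"one pass: {single:.1%}, two passes: {double:.1%}")
# → one pass: 30.2%, two passes: 51.3%
```

With these assumed figures, the rear reflector turns roughly 30% absorption of weakly absorbed light into roughly 51% - a substantial gain for light that would otherwise be lost into the rear metallisation.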


3. By Reflecting Counter-productive Wavelengths

Generally, silicon solar cells cannot absorb wavelengths above about 1180 nm; instead these are absorbed by the rear metallisation layer and turned into heat (4). The rear passivation film reflects these counter-productive wavelengths back out of the solar cell, keeping it cooler. As a result, PERC solar cells are considered to have better heat tolerance.
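That cutoff is a consequence of photon energy. A photon's energy in electron-volts is roughly 1240 divided by its wavelength in nanometres, and silicon's bandgap is about 1.1 eV, so longer-wavelength photons carry too little energy to free an electron. A quick sketch, with constants rounded for illustration:

```python
SI_BANDGAP_EV = 1.1   # approximate bandgap of crystalline silicon
HC_EV_NM = 1240.0     # Planck constant x speed of light, in eV·nm (rounded)

def photon_energy_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

def silicon_can_absorb(wavelength_nm: float) -> bool:
    """True if the photon has enough energy to cross the bandgap."""
    return photon_energy_ev(wavelength_nm) >= SI_BANDGAP_EV

print(silicon_can_absorb(600))    # visible red light  → True
print(silicon_can_absorb(1300))   # beyond the edge    → False
```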

All of the above, if manufactured correctly, increases solar cell efficiency. Commercially available PERC cells now exceed 20% efficiency, with a record of nearly 23%. As the technology piggybacks on conventional silicon solar cells, you can bet the efficiency will not stop here and will keep on improving!

Why has it become the most exciting feature amongst the new PV technologies?

The holy grail of the PV industry is a product with the highest possible efficiency at the lowest possible manufacturing cost. To some extent, PERC solar cells provide the answer, so it is not surprising that the technology has a rosy outlook.

From a standing start, PERC production capacity is expected to account for nearly half of total solar PV cell capacity by 2020 (see Figure 4) (6).

Figure 4: Global production capacity for PV cells from 2016 to 2020 (Source: Energy Trend)

The main attraction is that PERC production requires only minimal modifications to existing solar cell manufacturing lines. Existing lines can be upgraded to produce PERC cells without large capital expenditure or a complete overhaul.
In other words, a manufacturer can increase solar cell efficiency without taking a huge financial risk, and by reusing most of the existing equipment, makes the original investment work harder. It appears to be a no-brainer.

Too good to be true?

Despite PERC being a thirty-year-old concept, it was not tested commercially until quite recently. Ramspeck (7) first reported that PERC cells can exhibit stronger power degradation early in their life through a process called light-induced degradation (LID).

The Fraunhofer Centre for Silicon Photovoltaics in Germany subsequently conducted extensive research into this illumination-driven degradation (8). It was attributed mainly to two chemical root causes, boron-oxygen complex activation and iron-boron pair dissociation, together with a physical one, elevated temperature.

To be fair, LID is not a new phenomenon - it affects conventional solar cells too. It has simply entered the spotlight with the introduction of mass-produced PERC solar cells.

In response, some cell manufacturers have addressed the issue by adopting small changes to the solar cell process, such as modified process temperatures and different wafer materials (9).


PERC is going to take centre stage in the foreseeable future. This technology will progressively take a bigger market share – the benefits to developers, designers and installers are obvious.

Sooner or later, most of the PV modules will feature this technology. One just has to carefully choose trusted, quality suppliers with proven test records.

This fuss is, after all, worth paying attention to.


1. M.A. Green, A.W. Blakers, J. Kurianski, S. Narayanan, J. Shi, T. Szpitalak, M. Taouk, S.R. Wenham and M.R. Willison, Ultimate Performance Silicon Solar Cells, Final Report, NERDDP Project 81/1264, Jan. 82-Dec. 83 (dated Feb., 1984).

2. M.A. Green, The Passivated Emitter and Rear Cell (PERC): From conception to mass production, Solar Energy Materials & Solar Cells, 143 (2015) 190-197.

3. C.J. Chiang and E.H. Richards, A 20% efficient photovoltaic concentrator module, conference record, 21st IEEE Photovoltaic Specialists Conference, Kissimmee, May 1990, pp. 861-863.

4. D. de Rooij, PERC solar cell technology: why will PERC dominate silicon cell technology? 2015.

5. PERC cells: production costs down, efficiency up. May 2016.

6. TrendForce Reports PERC Cell’s Global Production Capacity to Reach 25GW in 2017, Resulting in Doubling of Total Annual Output. 19/1/2017.

7. Ramspeck, K. et al. 2012, “Light induced degradation of rear passivated mc-Si cells”, Proc. 27th EU PVSEC, Frankfurt, Germany, pp. 861–865.

8. Tabea Luka, Christian Hagendorf & Marko Turek, Multicrystalline PERC solar cells: Is light-induced degradation challenging the efficiency gain of rear passivation? Photovoltaics International, 2016

9. Haase, J., Mono as well as multi PERC cells will get a significant market share, PV Magazine 10/2015, pp. 74-77.

Thursday 9 March 2017

Higher Standards for Housebuilders Do Not Slow Development

The Truth has Been Revealed by Scottish Developers

NOTE - a new blog with updated data is available here

Housing developers say that if you make them build more energy efficient homes, the homes will cost more and fewer houses will be built.

Our politicians have swallowed this argument hook, line and sinker time and again.

I've written about this before - the flaw in the argument is the assumption that developers' costs have to rise. They don't. One of the main costs of building a house is what you pay for the land, and if everyone faces the same regulations, the value of the land is driven down and the landowner makes a slightly smaller profit from the deal.

Developers have successfully held up tighter building regulations on numerous occasions.

  • The update to building regulations in England in 2012 delivered only a 7% reduction in carbon emissions (compared to the significant cut required by the original zero carbon homes policy)
  • In Scotland in 2012 there was no energy efficiency improvement at all.
  • The Housing Standards Review resulted in legislation in 2015 intended to limit local authorities' powers to require more energy efficient homes through planning (legislation that is still not in force).
  • Finally, after 10 years of clear policy direction, one of George Osborne's last acts before disappearing off to take lucrative directorships was to tear up the Zero Carbon Homes plan (the existence of which had been mendaciously used to justify the changes in the Housing Standards Review).

So it's very interesting to see what's been happening in Scotland. After many years of shadowing Westminster on building regulations (apart from the obvious requirement to go just a percentage point or two lower on carbon emissions to make a point), Scotland really pulled ahead with its changes to regulations in 2015.  The graph at the top of this piece shows the gap opening up.

If developers' claims that higher levels of regulation would stop housebuilding in its tracks were true, you'd expect housebuilding in Scotland to have gone off a cliff.  Has it?

Has it hell.

The Merton Rule is Not Dead, Long Live the Merton Rule!

Can Local Authorities Still Require Energy Efficiency Higher Than Building Regulations?

The largest housing developers would very much prefer it if they were able to build the same house in Aberdeen that they build in Abingdon. If they can do this, then the cost of the architect and engineers to design their standard properties can be spread across more units, and their buying power can be increased by using the same component parts in every house they build. This is one of the ways they out-compete smaller, more local building companies.

For this reason they dislike local rules and regulations that affect the houses they build.

In 2008 the UK government enacted the Planning and Energy Act, which among other things clarified that local planning authorities had the legal right to require energy efficiency standards in new homes that exceed the national building regulations. This approach to pushing developers beyond the national minimum had become known as the 'Merton Rule', after the London local authority that pioneered it.

Here's what the Act says:

1  Energy policies
(1)A local planning authority in England may in their development plan documents, and a local planning authority in Wales may in their local development plan, include policies imposing reasonable requirements for—
(a) a proportion of energy used in development in their area to be energy from renewable sources in the locality of the development;
(b) a proportion of energy used in development in their area to be low carbon energy from sources in the locality of the development;
(c) development in their area to comply with energy efficiency standards that exceed the energy requirements of building regulations.

It has been estimated that around 50% of local authorities took advantage of the new clarity to build such requirements into their local plans.

National builders didn't like it. They didn't like it at all. Different local authorities chose to ask for 10% renewable energy on site, 20% renewable energy on site, or Code for Sustainable Homes Level 4.

The national developers were faced with different requirements up and down the country, a situation further complicated by the fact that different local authorities enforced their planning requirements with different levels of enthusiasm and competence. In some areas, particularly those combining high housing need with low house prices, developers might find a planning requirement for renewable energy to be highly negotiable. Other local authorities, notably in the south, where high house prices mean developers are queuing up to build new homes, have been far more successful at holding the line on the aspirations of their development plans.

It could be argued that this is exactly as it should be. The UK has very large geographical differences in house prices. Local authorities could set their local planning requirements to achieve the highest energy performance consistent with the economics of housing development in their area, specifically whether the value of development land is sufficient to cover the additional costs of more efficient homes.

However, in 2015 things swung back towards the large developers. The government of Cameron and Osborne announced a 'bonfire of regulations' to free business from costly red tape. The Housing Standards Review was formed to look at red tape afflicting the house builders. Developers successfully argued that this patchwork of local planning requirements was 'red tape' and the review concluded that it should be swept away.

The government chose to add the changes to the Deregulation Act 2015. Section 43 of this Act amends the Planning and Energy Act as shown below.

43 Amendment of Planning and Energy Act 2008
In the Planning and Energy Act 2008, in section 1 (energy policies), after subsection (1) insert—
“(1A)Subsection (1)(c) does not apply to development in England that consists of the construction or adaptation of buildings to provide dwellings or the carrying out of any work on dwellings.”

This amendment would remove the ability of local authorities (in England only) to require developers to exceed the building regulations for energy efficiency. Note that sections 1(a) and 1(b) remain, allowing local authorities to continue to require that a percentage of the energy consumption of a new development be met with renewable or low carbon energy.

The passing of the Deregulation Act is, however, not the last word in this story. If you work in a local authority in England and a housing developer is telling you that you can't impose higher standards than building regulations on a development, they're wrong. The fact is that section 43 has not yet been brought into force, so the original Planning and Energy Act text still applies.

If you look at the Commencement Section of the Deregulation Act, you'll see how the various elements of the Act are to be brought into force.

Some sections come into force on the day the Act is passed in parliament, others some set number of months later. No special mention is made of section 43, so it falls under this provision:

(7)Except as provided by subsections (1) to (6), the provisions of this Act come into force on such day as the Secretary of State may by order made by statutory instrument appoint.

A check of statutory instruments shows that this has not yet happened for Section 43, fully two years after the Act itself was passed by Parliament.

In fact, during a Lords debate on the Neighbourhood Planning Bill, in response to a question from Baroness Parminter, Lord Bourne confirmed that local authorities still have powers to require higher building standards:

"The noble Baroness asked specifically whether local authorities are able to set higher standards than the national ones, and I can confirm that they are able to do just that."

So there you have it, the Merton Rule lives on!

Local authorities still have powers to drive low carbon development in England, and it's just as well because our central government seems to have lost the will to do so. It's down to the sustainability officers and planning officers to enforce their local plans and they have the power to do so.

The Residual Valuation Model

Why Legislators Have Nothing to Fear from Tougher Building Standards

Simple isn't it? A chain of logic that seems irrefutable.

  • Legislate to make developers build better (low energy) homes and their build costs will rise.
  • These homes will not command a higher price, because the market is dominated by the price of existing homes for sale.
  • So the developer will make less profit.
  • Fewer homes will be built at a time when the country desperately needs them.

Our politicians have been buying this argument again and again from well-funded lobbyists working on behalf of housing developers.

The flaw in the logic is the assumption that developers' costs have to rise when you increase building standards. They don't. One of the main costs of building a house is what you pay for the land, and if everyone faces the same regulations, the value of the land is driven down and the landowner makes a slightly smaller profit from the deal.

The windfall from selling land to developers is so significant that a small decrease in the value is not going to slow down the market.
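The argument is easiest to see in numbers. Here's a minimal sketch of the residual valuation calculation; all figures are invented for illustration, and the 20% developer margin is an assumption, not a real market rate:

```python
def residual_land_value(sale_price: float, build_cost: float,
                        developer_margin: float = 0.20) -> float:
    """What a developer can afford to pay for land: the sale price,
    less the build cost, less the profit margin the developer
    requires as a fraction of the sale price."""
    return sale_price - build_cost - developer_margin * sale_price

# Illustrative figures for one plot (not real market data).
# The house sells for the same price either way, because the market
# is set by existing homes.
price = 300_000
land_before = residual_land_value(price, build_cost=150_000)
land_after  = residual_land_value(price, build_cost=155_000)  # +£5k for efficiency

print(land_before, land_after)  # → 90000.0 85000.0
```

Note where the extra £5,000 of build cost lands: entirely on the residual land value. The developer's margin on the sale is unchanged, so there is no reason for building rates to fall.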