A hundred years of resistance

The world’s first quantum computer – a machine that harnesses the magic of quantum phenomena to perform memory and processing tasks much faster than today’s silicon-based computer chips – was recently sold by D-Wave Systems of Canada to Lockheed Martin. And, while some question whether the machine is truly a quantum computer, its designers have published articles in peer-reviewed journals demonstrating that the basic elements of this novel computer are indeed superconducting quantum bits.

This spring marked the 100th anniversary of the discovery of superconductivity – the ability of materials to carry electrical current with no loss. Currents set up in superconducting wires can exist for years without any measurable decay.

Because of this property, superconductors have unique features that can be exploited in many ways. They can carry enormous amounts of current, making them ideal for urban power grids. And, when wound into coils, they can produce extremely strong magnetic fields.

Such superconducting magnets have been applied in a variety of technologies. The best-known examples are the magnets that drive the magnetic resonance imaging (MRI) machines found in most hospitals. Perhaps the most exotic are the huge magnets used to accelerate particles in the Large Hadron Collider, which seeks to discover the fundamental principles of matter.

Uses for superconductivity
Despite their great promise, however, superconductors have limits, the primary one being that most superconduct at very low temperatures – indeed, near absolute zero (-273°C). Such temperatures can be achieved only through liquid-helium cooling. Thus, Swiss researchers caused excitement in 1986 by announcing the discovery of superconductivity in an oxide of copper at twice the temperature of the previous record holder.

Shortly thereafter, researchers in the United States found a related material that superconducts above the temperature at which air liquefies. As Time magazine proclaimed in May 1987, with the discovery of these so-called “cuprates,” the superconducting revolution had begun.

Alas, the revolution soon bogged down. Cuprates are notoriously difficult materials to work with, because they are very brittle. This is exacerbated by their strong anisotropy – the materials have a quasi-two-dimensional structure consisting of a weakly coupled stack of conducting sheets. As such, they are a challenge for industry, though applications are beginning to appear.

Since the cuprates first appeared, a variety of other “high temperature” superconductors have been discovered – one is a simple compound of magnesium and boron, and another involves a mixture of iron and arsenic. Although none of them superconduct at temperatures as high as liquid air, they may ultimately be better materials with which to work. Given the vast number of combinations of elements that can form compounds, there is a good chance that better superconductors await our discovery.

Expansion of use
In the coming years, superconductors are expected to play a growing role in technology. Already, “second generation” cuprate wires are being used to make high-capacity cables for electric-power transmission, and lighter-weight generators for wind turbines. Stronger superconducting magnets are leading to the development of MRIs with more sophisticated diagnostic capabilities.

Superconductors are being used for levitated trains in high-speed rail transport, and as microwave filters for improved signal bandwidth in cellular base stations. The discovery of a new superconductor with enhanced properties could lead to even greater technological innovation.

This brings us to the intellectual challenge of superconductors. It took 46 years from the discovery of superconductivity to the 1957 Bardeen, Cooper, and Schrieffer (BCS) theory of how the phenomenon occurs. Along the way, a number of famous physicists tried and failed to get the answer – Albert Einstein, Werner Heisenberg, and Richard Feynman being notable examples.

Discovering the solution required the development of advanced theoretical techniques. What had been difficult to figure out was how to get electrons to superconduct. The basic discovery of BCS was that if the electrons pair up, those couples could indeed superconduct.

Fortunately, the mechanism for such coupling was known. Although electrons are negatively charged, and therefore repel one another, the positive ions that they leave behind when they flow through a metal can mediate an effective attraction between two electrons under restrictive conditions (for example, the metal must be very cold). The suspicion, though, is that this is not the case in the new superconductors.

Cuprates superconduct at much higher temperatures, but, more importantly, they possess some exotic properties: they are formed by doping electrical carriers into a host material that is a magnetic insulator – the last place one would look for a conventional superconductor. And, unlike BCS theory, in which the pairs are isotropic – with identical properties in all directions in space – the pairs in cuprates are strongly anisotropic, resembling a cloverleaf.

How can one pair electrons without ions holding them together, thereby enabling higher-temperature superconductors? While ideas about this abound, new theoretical breakthroughs most likely will be needed to develop the machinery required to solve such electron-electron theories, perhaps even involving black holes. Whatever the theory turns out to be, it is certain to revolutionise physics.

Michael Norman is Argonne Distinguished Fellow and head of the Materials Science Division at Argonne National Laboratory, and Fellow of the American Physical Society.

The shining light

What would you do if someone told you that the multi-billion euro investment you’ve just made in a new power plant – one that will take a decade to bring online – may not actually have been needed? Well, it is potentially not that far from reality.

We are in a time of strong alignment. Energy demand is increasing everywhere, CO2 emissions are on the rise, urbanisation and new economies are going to drive a change in the economic and social order, and the global debt crisis still hovers over many countries and regions. However, when it comes to the big question – “What are we going to do so that we don’t cook the planet?” – there appear to be many different schools of thought.

First, there is the ‘change the energy mix’ view, which believes strongly in replacing base load fossil fuels with renewable sources (solar, wind and so on). Second, there is the ‘low carbon fuel’ school of thought, which holds that renewable energies will never be sufficient to act as base load, and that high carbon fossil fuels should therefore be replaced by lower carbon options such as shale gas and hydro.

Both of these approaches, however, simply seek to satisfy demand and fail to address the bigger issue: our ever increasing consumption of energy. We need to move to a mindset of active efficiency, driving out every element of waste in our energy consumption and the related processes.

An abundance of efficiency
So what does active energy efficiency really mean? Well, Schneider Electric’s headquarters in France demonstrated it in action back in June 2011, when it became the first site in the world to be certified ISO 50001 (the new energy management standard).
The site has divided its energy bill by four since the company moved there in 2009: a combination of intelligent systems, detailed measurement, strong process and engaged employees has enabled regular year-on-year energy reductions of up to 10 percent, reaching 80kWh/m2 per year, with a target of 50kWh/m2 per year by the end of 2012. This is driven by a four-step approach implemented across sites and with customers worldwide (a worked example follows the list below):
1. Measure: Know what, how much, when and where you are consuming, through effective measurement and audits, so that you can take appropriate action. A huge number of people still don’t know how big their energy bill is, let alone the dynamics of their consumption patterns.
2. Fix the basics: Use efficient technology (such as lighting and variable speed drives) and change behaviour – many inefficiencies stem from differing working practices among colleagues.
3. Optimise and automate: Instrument the process (a building, an industrial site, a datacenter, for example) to create a smart facility that reacts in real time to the way it is being used.
4. Monitor and maintain: Track performance continuously so that efficiency gains can be measured, quantified and sustained through continuous improvement.
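
To make the “measure” step concrete, here is a minimal sketch in Python of the arithmetic behind an energy-intensity figure such as the 80kWh/m2 per year cited above; the meter reading and floor area are hypothetical numbers chosen only for illustration, not Schneider Electric data.

```python
# Minimal sketch of the "measure" step: turn a metered annual consumption
# figure into an energy intensity (kWh per m2 per year) and compare it with
# a target. All input figures are hypothetical.

annual_consumption_kwh = 2_800_000   # hypothetical metered consumption for one site
floor_area_m2 = 35_000               # hypothetical floor area of that site
target_intensity = 50                # kWh/m2 per year, the target cited in the article

intensity = annual_consumption_kwh / floor_area_m2    # 80 kWh/m2 per year here
gap = intensity - target_intensity                    # reduction still required

print(f"Current intensity: {intensity:.0f} kWh/m2 per year")
print(f"Reduction needed to reach target: {gap:.0f} kWh/m2 per year")
```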

Positive change
But does active energy efficiency make sense? It does, and on many levels:
The technology is here. A few years ago, passive efficiency measures (light bulbs and insulation) were the only real options; now the convergence of the energy and IT worlds is making intelligent energy and smart systems a reality.

The economics work. Changing mindsets and treating the energy bill as an asset now enables companies and municipal authorities to pay for their investments with the energy savings they make, and with the continued increase in the price of energy these projects are achieving an ROI of two to five years.
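
As a rough illustration of how savings can fund the investment, the sketch below computes a simple payback period; the investment, bill and saving rate are hypothetical figures, chosen only to land inside the two-to-five-year range quoted above.

```python
# Simple payback sketch: an efficiency project paid for out of the energy
# savings it generates. All figures are hypothetical.

investment = 400_000            # upfront cost of the efficiency works
annual_energy_cost = 1_000_000  # yearly energy bill before the project
saving_rate = 0.15              # share of the bill eliminated (15 percent)

annual_saving = annual_energy_cost * saving_rate
payback_years = investment / annual_saving

print(f"Simple payback: {payback_years:.1f} years")   # about 2.7 years here
```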

Policy and legislation will force change. Either you are ahead of the curve or you will have to react quickly to many of the new challenges being set by governments across all sectors and departments.

Public image is critical. With ‘green’ credentials ever more important to the general public, those who can demonstrate a sustainable way of operating will gain a competitive advantage.

Fossil fuel reserves need energy efficiency. For every unit of energy saved at the point of use, three units of primary energy (e.g. the coal used to produce that energy) are saved, therefore extending the life of fuel reserves significantly.
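
Taken at face value, that ratio is simple arithmetic; the 100kWh figure below is purely illustrative:

$$ E_{\text{primary saved}} \approx 3 \times E_{\text{end-use saved}}, \qquad \text{so saving } 100\,\text{kWh at the point of use avoids roughly } 300\,\text{kWh of primary energy.} $$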

Access to energy is critical. Over 1.4bn people still have no access to energy, so every unit saved by those who have it frees up capacity to supply those who do not, enabling massive social, educational and economic possibilities in the areas that need it most.

So is it better to invest in energy efficiency or in building new power plants? It has to be both, not one or the other; the question is where the right balance of policy, financing, incentives and legislation should fall.

Active energy efficiency generates a fast ROI, which either reduces operating expenditure or frees up a new source of cashflow to invest, and it has a major impact on the energy dilemma that we all face at work, at home and in public services. With many organisations now paying for their investment through the savings they are making, it is one of the few activities that makes perfect financial sense.

While the discussion on global warming and climate change goes on, the energy dilemma has no doubters: demand and energy prices will only continue to rise. By implementing active energy efficiency you can have a big impact – and if you don’t want to do it for the planet, do it for your wallet.

For more information
Jonathan Hart is SVP Corporate and Solution Marketing at Schneider Electric
www.schneider-electric.com

Broken promises

It seemed like the perfect China play, a way for investors to cash in on the world’s fastest growing economy. China MediaExpress Holdings Inc, which provides advertising on buses that clog the smog-choked streets of the country’s largest cities, was on a tear on the Nasdaq Stock Exchange. After rising 45 percent in 2009, the stock gained another 49 percent in 2010. That came to a halt in late January. In a research report, Andrew Left, an investor who runs Citron Research from his Los Angeles home, termed the company a “phantom” that was “too good to be true.” The stock plummeted 14.4 percent after Left’s comment, to $17.84 from $20.86 in one day.

Citron’s report was followed by similarly damning charges from Carson Block of Muddy Waters Research, who called the stock a “pump and dump scheme.” Soon after, the editor of thefinancialinvestigator.com visited the company’s offices and posted videos that he said made it “exceptionally clear” the place was bogus. China MediaExpress’ stock hasn’t recovered. Shares lost 47 percent in four days, and were trading at $11.88 on March 11 when the stock was halted on the Nasdaq. It hasn’t traded since. In March, the company delayed its year-end filings and its finance chief resigned. The Hong Kong-based company said its auditor, Deloitte Touche Tohmatsu Hong Kong, had severed ties.

The story of China MediaExpress has become an increasingly common one as US investors chase the next hot Chinese stock – only to find themselves victims of scams.

Many of the questionable Chinese companies gain access to US capital markets through a back door. In what’s known as a reverse merger, a private company buys enough shares of a public firm to essentially become publicly traded. That allows the company to pay a much lower fee to be listed than it would with an initial public offering – not to mention sidestep the more rigorous filing demands of an IPO.

Of the more than 600 companies that obtained entry to US exchanges this way between January 2007 and March 2010, a total of 159 were from the China region, according to the Public Company Accounting Oversight Board (PCAOB). A study by TheStreet indicated such schemes involving small-cap Chinese firms may have cost investors at least $34bn over the past five years.

This has taken US exchanges by surprise. NYSE and Nasdaq have delisted several companies and have a veritable “skid row” of more than a dozen firms that have been halted for weeks or months pending requests for information about accounting problems and late regulatory filings.
What are regulators doing about it? Although their stocks are traded on US exchanges, the companies are based in China. That makes it unclear whose jurisdiction they actually fall under – creating a regulatory void that companies can easily exploit.

On top of that, Beijing has barred America’s PCAOB, established under Sarbanes-Oxley, from reviewing China-based accounting firms – even if they are registered auditors with the accounting agency. That loophole enables Chinese companies to hire big-name and no-name firms locally; as a result, those auditors face no consequences from US authorities for any bad practices.

“There may be honest firms in China, but you can’t monitor or control them,” said Hamid Kabani, president of Kabani & Co in Los Angeles, a firm that has audited reverse merger stocks. “I can’t see how a US firm can satisfy whether the [Chinese] firm is [legitimate].”

The shorts
In the absence of stricter regulation of companies and auditors, it is left to independent investors like Andrew Left or Carson Block to ferret out suspicious activity. They, too, are not without controversy. Left, Block and their peers are short-sellers who profit when a stock collapses – and critics point out that they can in theory benefit even if their research proves faulty.

“It’s no secret we’re interfering with scams that could net these chairmen tens of millions of dollars,” said Block. “Criminals deprived of such amounts will not take a kind stance towards people like me.”

In November last year, his fledgling firm published a strongly critical report on RINO International Corp, charging that many of the company’s customers were nonexistent and that its accounting “has serious flaws that are clear signs of cooked books.”

Shortly after, Block received threatening letters warning him to retract his allegations and explaining that “severe consequences may result if you do not act appropriately.” An email received two days later mentioned his wife, Kathy: “Are you, Kathy and your dad ready for a bullet? Get ready. It could happen at any time now.”

Less than a week later, RINO’s auditors found accounting flaws. One month after the Muddy Waters report, the clean-tech company was delisted by Nasdaq. Its shares had fallen a monumental 96 percent from a 52-week high reached back in October.

Block is based in Asia, though he would not say exactly where. He didn’t contact the authorities, saying he was “more worried about the people whose threats I haven’t received,” but he did take additional security measures.

Lose the battle, lose the war

The SEC has stepped up its interest in reverse-merger stocks. It has an active probe into foreign companies listed on American exchanges. US exchanges, too, are belatedly tightening rules on reverse mergers. Nasdaq, for one, is now considering adopting stricter listing requirements: the proposal would require such companies to trade for at least six months on the over-the-counter market or another national exchange, and to maintain a minimum bid price of $4 per share on at least 30 of the 60 trading days immediately preceding the filing for the initial listing.
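
As an illustration only, the proposed seasoning test could be expressed as a simple check like the one below; the function name, the price series and the strict 60-day length check are assumptions made for the sketch, and the rule text as actually adopted would govern.

```python
# Illustrative check of the proposed Nasdaq test described above: a minimum
# bid price of $4 on at least 30 of the 60 trading days preceding the filing.
# The helper and the price series are hypothetical.

from typing import Sequence

def meets_bid_price_test(last_60_closes: Sequence[float],
                         min_price: float = 4.0,
                         required_days: int = 30) -> bool:
    """Return True if enough of the last 60 daily closes meet the minimum price."""
    if len(last_60_closes) != 60:
        raise ValueError("expected exactly 60 trading days of prices")
    qualifying_days = sum(1 for close in last_60_closes if close >= min_price)
    return qualifying_days >= required_days

# Hypothetical example: 35 days closing at $4.50 and 25 days at $3.20
closes = [4.50] * 35 + [3.20] * 25
print(meets_bid_price_test(closes))   # True: 35 qualifying days >= 30
```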

A source at Nasdaq said the recommendation was expected to be enacted. When asked if it was undertaken due to the recent assortment of scandals, the source said that “we’ve had some feedback.”

Paul Gillis, a professor of accounting who focuses on US-listed Chinese companies at Peking University in Beijing, said China needed to make it easier for its firms to list on Chinese exchanges. “It makes no sense for Chinese companies to have to go halfway around the world to get capital,” he said, adding that China was in a better place to regulate them than the SEC or the Public Company Accounting Oversight Board.

A PCAOB report on reverse mergers noted there were 56 initial public offerings from China, representing 13 percent of all IPOs in the US in the three years from January 2007 to March 2010. IPOs require a greater degree of scrutiny and expense for companies to meet listing and filing requirements, and they are an important source of income for exchanges such as NYSE Group and Nasdaq OMX.

As of the report date, the 159 China-region companies that gained access via reverse mergers had a combined market capitalisation of $12.8bn, less than half the $27.2bn market capitalisation of the China related IPOs.

By the end of the research period, 59 percent of Chinese reverse merger companies reported less than $50m in revenues or assets as of their most recent fiscal year.

Analysts hastened to say that there was nothing inherently or particularly suspicious in a reverse merger, but Gillis said such operations “avoid much of the scrutiny that takes place in a normal IPO. That makes them the preferred route for fraudsters.”

Once listed, these companies attract retail investors who screen for stocks with high growth rates and low prices – and who often run into companies such as these, seemingly diamonds in the rough overlooked by others.

James Chanos, founder of the New York-based hedge fund Kynikos Associates LP, says the Chinese scams follow a classic pattern. “The modus operandi by these stock promoters is to find what the hot area for retail investors is, so 15 years ago it would have been the dot-coms, a bunch of years ago oil and gas and now it is China. You sell the big story,” he said.

China needs investors
Dave Gentry, president and chief executive officer of RedChip Companies, an investor relations and research firm, points out that 70 percent of China’s double-digit economic growth is created by companies with fewer than 2,000 employees.

While some companies may be overstating their results to entice American investors, Gentry says that in their home market Chinese firms more frequently under-report revenues to tax authorities – a problem he said was “systemic.”

“It comes down to the character of the CEO and the management team in these companies and there is fraud. We cannot be in denial about this,” he said in a telephone interview while meeting clients in China. Investor relations firms play a crucial role in helping companies navigate through the listing process, either through a reverse merger or an IPO.

Crocker Coulson is the president of CCG Investor Relations and Strategic Communications, a company which handles investor relations for some 35 companies, many of them Chinese. One Chinese client, Puda Coal Inc., which provides coking coal for steel production, saw its stock plunge and then halt on the NYSE Amex exchange less than a month ago after another investor, Alfred Little, took aim at the company. His April 8 report alleges the chairman of the company “transferred the ownership of PUDA’s sole Chinese operating entity, Shanxi Puda Coal Group Co., Ltd (“Shanxi Coal”), to himself in 2009 without shareholder approval according to official government filings.” Asked how he felt about companies he works for that have had their shares halted, Coulson paused, shifted his feet uncomfortably, and said: “I’m going to say no comment.”

As for his client Puda, on April 11 the company said it would investigate the allegations. The chairman, Ming Zhao, agreed to cooperate in the investigation. That’s not stopping law firms from sharpening their pencils as a handful have filed for class action status on behalf of investors.

Last line of defence

Some of the worst breaches may be at the auditing and accounting level. “It is no secret that we have not been able to inspect all of the non-US firms we are required to,” PCAOB chairman James Doty told the Council of Institutional Investors on April 4.

At the same meeting, SEC Commissioner Aguilar raised the issue of how companies are raising capital, a situation he said he finds himself “increasingly concerned about.”

“PCAOB-registered accounting firms based in the United States audited 74 percent of the Chinese reverse merger companies, while China-based registered firms audited 24 percent,” the agency said in March.

Top officials from both the United States and China concluded their once-a-year Strategic and Economic Dialogue meeting in Washington on Tuesday, saying they would work toward enhancing “mutual trust and strive to reach agreement on cross-border oversight cooperation.”

Efforts to inspect Chinese auditing firms have met resistance from Chinese authorities, but Doty told Reuters that he expected progress this year, in part because the various problems with Chinese firms had shown authorities in Beijing the importance of credible auditing. “We will make progress in getting access to those audits,” he said.

Meeting resistance
Drew Bernstein, the co-managing partner of Marcum Bernstein & Pinchuk, a New York-based audit and accounting firm, said he sometimes has to go to extremes to get Chinese company officials to understand the ramifications of shoddy auditing and accounting.

Instead of bowing to intransigent company chairmen or boards, he explains that if they don’t cooperate and own up to problems, he will be forced to tell the local authorities of the alleged fraud, thereby making it a Chinese problem. Switching the jurisdiction changes the calculus: executives have been executed in China for fraud and corruption.

Many agree the presence of short sellers and the research they provide are useful. As soon as they attest to that, though, they point fingers at unidentified “dishonest” short sellers operating at the behest of hedge funds looking for an exclusive edge.

Winston Yen, CFO of Orient Paper, which is based in the city of Baoding in China’s Hebei province, said his company and investors “feel totally victimised” by the negative research published by Block in 2010, which caused a sharp decline in the company’s stock. Shares in the company closed at $8.33 before Block’s first report was published on June 28, 2010. They fell precipitously in the next session and have never fully recovered, currently trading at around $4 a share.

“I don’t think they were right on anything, to be honest with you,” Bernstein said, explaining that Orient Paper hired 15 to 20 professional services firms to investigate. Because investors don’t have the ability to conduct similar due diligence, they “tend to panic” when negative research appears, Yen said.

However, on March 23 the company said it would have to re-audit results from 2008, though it maintained that doing so would not affect financial statements for fiscal years 2009 and 2010. The re-audit results are expected at the start of the third quarter of 2011.

Block isn’t buying the company’s view. “It is not surprising that a probe conducted by the company on itself, under the umbrella of the attorney-client privilege conferred by having the inquiry managed by one of the most prolific issuer’s counsel of Chinese RTO (reverse takeover) companies, enabled the company to issue a press release stating that it determined it wasn’t defrauding investors,” Block said.

Claiming security concerns, Muddy Waters removed the firm’s phone number from its website, along with a phony mailing address that had created an array of controversy about the location of the firm’s headquarters. “I felt that the sort of attention I was getting wasn’t the kind we wanted,” Block said.

Thefinancialinvestigator.com’s Boyd, who does not short shares he is writing about, has some reservations himself about shorts.

The reports “were brilliantly reported and laid out, but you can never get past the fact that they’re doing this for money,” he said. “If something doesn’t work out – and I’m not just talking about (Muddy Waters) – these guys could have a situation where they went after a company and made money but couldn’t substantiate their claims.”

Until the auditing problems are cleaned up and greater responsibilities are shared by US and Chinese regulators, folks such as Block and Left will have ample opportunity in their chosen business. “Just because it is China doesn’t mean it is a path to riches,” Left said.

Long term infrastructure shaken to the core

Described by Japanese Prime Minister Naoto Kan as the worst crisis to have hit Japan since World War II, the earthquake and tsunami catastrophe that struck in March has had a devastating effect on the country. The Japanese National Police Agency confirmed 14,704 deaths, with 10,969 people missing across eighteen prefectures. Striking precisely at 14:46 JST (05:46 UTC) on Friday, 11 March 2011, the quake occurred off the north-east coast of the country, with its epicentre located approximately 70km east of the Oshika Peninsula of Tōhoku, the nearest major city being Sendai. According to a senior science adviser at the US Geological Survey, the quake ruptured a patch of the earth’s crust 240km long and 80km across.

Officially named the Great East Japan Earthquake, the quake registered a magnitude of 8.9 (Mw). It was an undersea megathrust earthquake, a type that occurs at subduction zones at destructive plate boundaries, where one tectonic plate subducts beneath another and large sections of the shallowly dipping plate boundary can become locked. Earthquakes of the megathrust type count among the world’s most powerful, with moment magnitudes (Mw) that can exceed 9.0. Since 1900, all six earthquakes of magnitude 9.0 or greater have been of this particular variety.

In the case of this monster quake, scientists have been unable to determine – or agree, at least – exactly which plate movements triggered the catastrophe, but the slip on the interface between the two plates is believed to have been as much as 5–10 metres, lifting the seafloor above the rupture zone by several metres. The most powerful earthquake ever recorded in Japan, the Great East Japan Earthquake was also one of the five most severe earthquakes recorded since world seismology records began in 1900. As further evidence of its force, it moved the main island of Honshu an estimated 2.4m (8ft) east and shifted the earth on its axis by 10cm (4in). The quake also sparked several nuclear accidents, by far the most serious being the catastrophe at the Fukushima I Nuclear Power Plant, where radiation leaks were severe enough to trigger an ongoing level 7 event and a 20km (12mi) evacuation zone. The Fukushima accident was caused by the plant’s cooling systems being knocked out. The severity of the disaster was so grave that it has been compared to the notorious Chernobyl incident, which was also rated at level 7. Although the two accidents differ a great deal from one another, the Fukushima nuclear disaster involves the prospect of a long-term health and environmental hazard and has necessitated an expansive clean-up operation.

Severe level of devastation
The quake itself caused serious damage, but the highly destructive tsunami it set in motion added significantly to the level of ruin. The tsunami, with waves of up to 38 metres, some of which advanced up to 10km inland, was able to form because the quake’s focus lay only about 24.4km below the surface – shallow enough to set off a tsunami. A tsunami wave moves forward with remarkable force: its speed across the open ocean is comparable to a commercial jet’s ground speed.
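
To see why the jet comparison holds, the standard shallow-water approximation links wave speed to ocean depth; the 4,000m depth below is a representative open-Pacific value assumed for illustration, not a figure from the article:

$$ v \approx \sqrt{g\,d} = \sqrt{9.8\,\mathrm{m/s^2} \times 4000\,\mathrm{m}} \approx 198\,\mathrm{m/s} \approx 710\,\mathrm{km/h} $$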

The aftermath of the disaster caused a massive crisis with a huge economic impact. In addition to the significant loss of life, the tsunami left over 125,000 buildings battered or destroyed in its wake, as well as extensive structural damage to roads and railways – and even the collapse of a dam. Across the northeast of Japan, 7,735 school buildings were damaged, some levelled to the ground. Students were forced to jostle for space in the few schools still standing, evacuation centres were filled to capacity within hours of the disaster, and schools and other public buildings doubled as homes to scores of evacuees.

The tsunami created over 300,000 refugees in the Tōhoku region, and brought shortages of food, water, shelter, medicine and fuel for survivors. About 4.4m households in northeastern Japan were left without electricity, and around 1.5m households were left without water. Preparing for further disasters, residents of Japan – and not only those in the worst affected areas – went on the defensive, hoarding food and other essentials.

“Following the disaster, people started to desperately stock up on food and other basics in order to prepare for another catastrophe. It was spooky to enter supermarkets at the time, as the shelves were completely empty, even in Tokyo. The only products available, with any luck at all, were biscuits and sweets,” says Yoshii Shimuzi, a Tokyo resident.

The disaster also adversely affected many other countries and territories outside Japan. Shortly after the quake struck, the Pacific Tsunami Warning Center (PTWC) in Hawaii issued tsunami watches and warnings for various locations in the Pacific, while the United States West Coast and Alaska Tsunami Warning Center did the same for the coastal areas of much of California, all of Oregon and the western part of Alaska. In California and Oregon, tsunami surges of up to 8ft hit some areas, damaging docks and harbours and causing over $10m of damage. Meanwhile, waves up to 1.6ft high hit the eastern seaboard of the Philippines, and a number of houses along the coast in Jayapura, Indonesia, were destroyed. Estimated damage to public infrastructure in Hawaii reached $3m, with a fair share of private properties damaged, too. Many households in Peru also suffered a significant blow, with over 300 homes damaged as a result of the tsunami. Other affected regions included Chile – where about 200 houses took a beating from waves of up to 9.8ft – and the Galapagos Islands.

In response to the crisis, the Japanese government mobilised the Self-Defence Forces, a move supported by other countries sending search and rescue teams. Charities based in Japan and other parts of the world also came to the rescue. Since the disaster began, charitable organisations such as the Japanese Red Cross have provided first aid and emergency healthcare to survivors, and have supplied 70,000 temporary homes with an appliance package of domestic essentials such as rice cookers, microwaves and kettles. The scheme has benefited around 280,000 people in the three hardest hit prefectures of Miyagi, Fukushima and Iwate.

The earthquake and the associated tsunami set off a chain reaction of disasters, and analysts predict that they will rack up costs exceeding $300bn, making this the most expensive natural disaster on record. Early estimates placed insured losses alone at $14.5bn to $34.6bn. In an effort to normalise market conditions, the Bank of Japan offered ¥15trn ($183bn) to the banking system on March 14. Meanwhile, Japan’s parliament has passed a ¥4trn ($49bn, £30bn) emergency budget for reconstruction that will help fund new housing for tens of thousands of residents who lost their homes, and help get affected businesses back on their feet.

The ominous threat of aftershocks

Even after an earthquake has caused major damage, it poses a further threat: it is always accompanied by a menacing entourage. Aftershocks can follow the host quake for up to a year after the event, and a general guideline applied by seismologists is that an earthquake’s largest aftershock will be about one magnitude smaller than the main quake – meaning that a magnitude 7.9 earthquake could hit the region as late as March 2012. Immediately following the quake, a number of magnitude 5.0 aftershocks hit Japan, as well as several exceeding magnitude six and even a few measuring seven and a little above. Japan’s latest national seismic risk map gave a 99 percent chance of at least a magnitude 7.5 quake hitting the region in the next 30 years.
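
In round numbers, the seismologists’ rule of thumb quoted above works out as follows; the energy comparison uses the standard magnitude–energy scaling ($E \propto 10^{1.5M}$), which is general seismology rather than a figure from the article:

$$ M_{\text{aftershock}} \approx M_{\text{main}} - 1 = 8.9 - 1 \approx 7.9, \qquad \frac{E_{7.9}}{E_{8.9}} = 10^{-1.5} \approx \frac{1}{32} $$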

The main quake was preceded by a series of large foreshocks beginning two days earlier, starting with a magnitude 7.2 event about 25 miles (40km) away and continuing with three further earthquakes, all greater than magnitude 6, according to the USGS.

Ring of fire
A particularly earthquake-ridden country, Japan sits along the volatile Pacific Ring of Fire – an aptly named zone around the Pacific Ocean where a major portion of the world’s earthquakes and volcanic eruptions occur. To be precise, 90 percent of the world’s earthquakes, and 80 percent of the largest ones, strike along the Ring of Fire. In the zone around Japan, formed by the subduction of the Pacific Plate and the Philippine Sea Plate, as many as 1,500 earthquakes are recorded yearly, and magnitudes of four to six are not uncommon. Mild tremors occur almost daily in one part of Japan or another, causing buildings to shake ever so slightly.

Other characteristics of the Ring of Fire are hot springs and volcanoes, of which Japan is home to many: as much as ten percent of the world’s active volcanoes are to be found in the country. Mount Fuji is Japan’s highest volcano, and it is constructed above a group of older, overlapping volcanoes. Another notable volcano is Mount Bandai, which rises above the north shore of Lake Inawashiro. Formed of several overlapping stratovolcanoes, the largest of which is O-Bandai, its structure is complex. The O-Bandai volcano itself took shape within a horseshoe-shaped caldera over a period of about 40,000 years; another part of the edifice, Ko-Bandai, collapsed in 1888.

Historic earthquakes of Japan
The recent monster earthquake was not the first major quake disaster to hit Japan. Several parts of the country have experienced devastating earthquakes and tidal waves in the past. The Great Kanto Earthquake hit the Kanto plain around Tokyo in 1923 and resulted in the deaths of over 130,000 people, making it the deadliest earthquake ever to strike Japan. The shaking is said to have lasted between four and 10 minutes, and the quake had a magnitude of 7.9 on the Richter scale, with its focus deep beneath Izu Ōshima Island in Sagami Bay.
Also rated high on the list of devastating Japanese earthquakes, the Great Hanshin earthquake – or Kobe earthquake, as it is also called – occurred on Tuesday, January 17, 1995, in the southern part of Hyōgo Prefecture. It measured 6.8 on the moment magnitude scale (USGS), with tremors lasting for approximately 20 seconds. The focus of the earthquake was located 16km beneath its epicentre, on the northern end of Awaji Island, 20km from the city of Kobe. Some 6,434 people lost their lives as a result of the disaster, of whom about 4,600 were Kobe residents.

Japan’s business terrain suffers
Most industries in Japan have suffered sharp losses since the disaster struck. The damage encompasses both immediate problems – industrial production came to a sharp halt in many factories – and long-term issues brought on by the cost of extensive rebuilding schemes, estimated at ¥10trn ($122bn). The disaster is also one of the most expensive in the history of global insurers: insured losses will amount to between $12bn and $25bn, according to risk modelling firm Eqecat. Painting a bleaker picture yet, competitor firm AIR Worldwide has estimated losses of $15bn to $35bn – an estimate that excludes the effects of the tsunami.

The automobile industry is another area that has been badly hit. In April, Japan’s domestic sales of cars, trucks and buses fell by a staggering 51 percent, a record drop, while production came to a virtual halt amid broken supply chains. Much of the blame lies with the shortage of parts, and depressed sales are predicted to continue in the coming months.

Not left out of the depressing equation, consumer confidence has taken a significant beating, too. Known as zealous shoppers with a ceaseless hunger for luxury goods from megabrands such as Gucci, Burberry and Louis Vuitton, Japanese consumers account for a large chunk (as much as 24 percent in 2010, according to a report by Deutsche Bank) of the global luxury industry’s turnover. Unlikely as it would have seemed even during the global recession in 2008, analysts predict that the days of the shopaholic Japanese society have come to an end, at least for the time being. As a reaction to the disaster and the subsequent suffering, materialism has given way to an altogether more subdued mood, in which uninhibited spending has little relevance.

It’s difficult to estimate how long the slump in consumer confidence will last, or if it will even inspire a permanent shift in values, but it’s believed that Japanese consumers will put a lid on their excessive spending habits at least for the next five years, with spending set to plummet by as much as 30 percent.

Trying to return to normality as soon as possible after the disaster struck, well-known brands including Louis Vuitton, Gucci, Hermes and Tiffany all began to reopen their stores in Tokyo. Yet retailers still experience uncertainties and practical issues such as persistent electricity blackouts.

The stresses and strains of Tokyo Electric Power Co
Utilities have been badly hit, too, and Asia’s biggest power company, Tokyo Electric Power Co, is in for a particularly big blow as the operator of the Fukushima Daiichi plant. To indicate the scale of its losses, the company recorded a net loss of more than ¥1trn for the year after taking into account the cost of scrapping its four damaged nuclear reactors and writing off tax assets – the biggest loss ever recorded by a non-financial Japanese firm. Stepping in to soften the blow, the company’s main creditor, Sumitomo Mitsui Banking Corp, along with a host of additional lenders, injected ¥1.9trn in emergency loans in the immediate aftermath of the disaster. There has been an ongoing battle over who should foot the bill to compensate those affected. Nearing an agreement in May 2011, and thus saving Asia’s biggest utility from financial ruin, the Japanese government is likely to announce a scheme to help the company compensate victims of the crisis at the crippled and still leaking nuclear plant.

On a practical level, Tokyo Electric Power Co is still struggling to get the reactors at its Fukushima Daiichi plant under control, and the company has admitted that it will barely be able to boost its power supply enough to cover estimated summer demand. A shortage of energy could plunge Japan deeper into economic trouble, as it would result in blackouts in central Tokyo, adversely affecting retail businesses and other commercial establishments that rely heavily on a constant supply of power.

The events that rattled the world

The crisis unfolding around the Fukushima Daiichi nuclear power plant is a narrative which many believe is quietly drawing attention away from the human loss and suffering. A US journalist said recently: “If it bleeds it leads, but if it radiates it fascinates.” This demonstrates, albeit a little bluntly, that the threat of a nuclear catastrophe has been the gravest concern for people worldwide closely monitoring the status of Fukushima’s nuclear reactors. As the situation at the plant remains serious, despite heroic efforts to regain control and renew confidence in the industry, many have begun to wonder what impact the events have had internationally and how they will affect the nuclear power industry globally.

Day 1- March 11
At 14:46 local time an earthquake of magnitude 8.9 struck off the north-east coast of Japan, about 400km from Tokyo, at a depth of around 24km. The Fukushima Daiichi and Daini power plants suffered critical damage.

Fukushima uses a General Electric boiling water reactor design, which operates at 285°C and produces electricity by boiling water and spinning a turbine with the generated steam. Nuclear fuel, in the form of uranium oxide, heats water until it boils and creates steam, which drives the turbines to generate electricity. The steam is then cooled and condensed back to water, which is returned to be heated again by the nuclear fuel.

The reactor’s containment unit, an airtight structure made of concrete and steel, holds the pressure vessel, pumps and pipes that contain the cooling water. Its purpose is to contain the consequences of even a complete core meltdown. It is reinforced by a solid concrete structure placed around the containment unit as a secondary, additional barrier. The containment units are in turn housed within the reactor building, an outer shell designed to protect what lies within. It was this exterior shell that was damaged by the hydrogen explosions at the plant.

Press releases published on the TEPCO website and reports by the local media showed the Fukushima Daiichi power plant was not constructed to withstand an earthquake of magnitude 8.9, especially one directly followed by a larger than expected tsunami. Only moments after the earthquake hit, an automatic trip shut down reactor units 1, 2 and 3 immediately; units 4, 5 and 6 had already been shut down for outages. The earthquake knocked out the plant’s offsite power, and the loss of the reactors’ external power supply triggered the first emergency system in place: backup systems that kept the cooling pumps running, designed precisely for this sort of calamity. Because the plant had tripped, it could no longer generate electricity for itself. The plant’s protective seawall, designed to defend it from a tsunami of just under six metres, was overwhelmed by a 14-metre wave that crippled the reactors at around 15:46 local time.

According to updates from TEPCO, the emergency diesel generators fired up and supplied the electricity required for the core cooling systems for around an hour, until they stopped working due to flooding. With the diesel generators flooded, the next crisis measure was activated successfully as operators switched to emergency battery power to cool the cores. The high-pressure coolant injection system and the steam-driven reactor core isolation cooling system delivered cooling to units 1, 2 and 3. Meanwhile, the seriousness of the situation required further safety measures, especially for workers at the plant, and an evacuation order was issued for anyone within a 3km radius of the Fukushima Daiichi power plant.

Day 2- March 12
The emergency backup battery for the high-pressure core flooding system within reactor 3, which lasted eight hours, is depleted. The ability to cool the reactors of units 1, 2 and 3 is significantly reduced, which causes the pressure in the primary containment to increase. Although there is a risk associated with releasing pressure, TEPCO decides to discharge some vapour, containing some radioactive material, into the air to relieve the pressure within reactor unit 1. The evacuation of residents within a 10km radius of the power plant begins around 15:30 local time.

Only a few minutes after the evacuation begins, TEPCO reports that cameras at the Daiichi plant have picked up a vast hydrogen explosion in unit 1, destroying the outer structure of the building that houses the spent fuel pool. As the exterior structure collapses, the unit 1 spent fuel pool is exposed to the atmosphere. The decision is taken to expand the evacuation zone to 20km around Fukushima Daiichi. Workers begin to inject sea water into the reactors’ pressure vessels using the fire services’ fleet of mobile trucks.

A partial core meltdown at unit 3 becomes increasingly likely: if cooling is not restored, the core will ultimately melt. The key aim at this stage is to vent reactors 1 and 3 to release built-up pressure, while water and boric acid are used to cool the cores and control further nuclear reactions. Although unit 2 has a lower than usual water level and high pressure inside the containment vessel, it is considered stable.

Day 4- March 14

Eleven people are injured when a hydrogen explosion occurs at the unit 3 reactor building, causing significant damage to its exterior structure and exposing its fuel pool to the atmosphere. The water level in the reactors of units 1, 2 and 3 is beneath the top of the active fuel and fuel damage is suspected in all three units; however, containment at all three units remains intact. Autorité de sûreté nucléaire, the French nuclear safety authority, considers the accident serious, rating it as high as a 6 on the International Nuclear and Radiological Event Scale (INES). Operators continue to expel steam occasionally to control the pressure in the vessel and containment.

Day 5- March 15
Reports emerge of a fire and damage to the top level of the unit 4 reactor building, followed by hydrogen explosions within the reactor buildings of units 2 and 4. The water in the unit 4 spent fuel pool is in imminent danger of boiling.

Day 6- March 16
Water inside the spent fuel pool of the unit 3 reactor building begins to boil and white smoke starts rising, prompting TEPCO to evacuate its workers due to mounting levels of radiation, which reach up to 1 Sv/h – an hour at that level delivers roughly a thousand times the maximum yearly dose usually deemed acceptable for the public from any man-made facility.
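
For scale (using the widely cited 1mSv-per-year public dose limit, which is background knowledge rather than a figure from the article):

$$ 1\,\mathrm{Sv/h} = 1000\,\mathrm{mSv/h}, \qquad \text{so a single hour at that rate} \approx 1000 \times \text{the } 1\,\mathrm{mSv}\ \text{annual public limit.} $$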

Day 7- March 17
Helicopters drop sea water throughout the morning on the spent fuel pools of units 3 and 4, until the latter’s pool is filled with water and none of the fuel rods are exposed. Simultaneously, construction workers begin work to connect a working external electrical power supply to all six units.

Week 2-3 (March 18–31)
Over the next few days fire services continue to spray sea water into the unit 3 spent fuel pool, and 40 tonnes of seawater are pumped into the unit 2 spent fuel pool, now served by offsite power. For the first time, reports start flooding in that milk and spinach from surrounding areas have radiation levels that surpass Japanese standards. Tap water in Fukushima shows higher than authorised levels of radioactive iodine, while higher than usual levels of radioactive iodine and caesium are found in Tokyo’s water supplies. The Japanese government issues an instruction suspending shipments of certain foods. A few days later, tests of sea water near units 1, 2, 3 and 4 pick up levels of radioactive iodine and caesium that exceed regulatory limits.

Core cooling of the unit 1 reactor becomes necessary on day 14 when it reaches 400°C, 100°C above its design value. Three workers are exposed to elevated levels of radiation, while two others receive hospital treatment for beta radiation burns. Efforts to cool the reactor cores are constant, with fresh water now used instead of sea water, and temperatures are brought down to a more controllable 204°C. However, water contaminated with high levels of radioactivity has leaked into work areas, and water found in the turbine buildings of units 1, 2 and 3 contains radioactive substances, with dose rates as high as 1,000 mSv/h in unit 2 and 750 mSv/h in unit 3. Examination of numerous water samples taken from the turbine building sub-drains on March 30 finds detectable levels of fission products.

Week 4-5 (April 1-12)
Fresh water is injected into the spent fuel pools at all units. Radiation levels exceeding 1,000 mSv/h are found near unit 2 in the pit where supply cables are stored, and a 20cm fracture is found in the side of a cavity from which water is flowing out into the sea. Work begins immediately to close the cavity in order to reduce the additional outflow of water into the environment, and succeeds three days later. TEPCO discharges around 11.5 tonnes of water with low levels of radioactivity into the sea to allow more highly contaminated water to be collected elsewhere at the plant. On April 7, 26 days after the first earthquake, a magnitude 7.1 aftershock occurs, but the impact on works at the plant is generally minimal. Over the next few days, more than 1,320 tonnes of low-level contaminated water are discharged from the sub-drains at units 5 and 6, in preparation for the release of around 11,500 tonnes of slightly contaminated water from the site to free up storage for the more highly contaminated water produced by cooling the reactors. On April 12 the government decides to raise the INES grade of the accident to seven due to the scale of the release of radioactive material.

Week 6-7 (April 12 – 26)
Samples taken from sea water around the power plant contain levels of radioactive material higher than regulatory limits allow, and efforts are made to minimise the release of radioactive material from the site. On April 15, Japan’s Nuclear and Industrial Safety Agency (NISA) instructs TEPCO to increase the number of sampling points and to take marine samples in order to better characterise and assess the release of radioactive material. The centralised radioactive waste treatment facility that will process the contaminated water gathered on site is made ready, and transfers begin on 19 April 2011. Radiation readings from the unit 1 and 3 reactor buildings are taken using remote-controlled robots. Spent fuel pool cooling continues at units 1 to 4 by injecting water directly into the pools with electrical pumps. Cooling of the reactors at units 1 to 3 is ongoing, and efforts continue to manage the spread of contamination from the site, with steel plates installed at the unit 2 intake structure.

Efforts are ongoing: crews keep cooling the spent fuel pools at all units by injecting water directly into the pools with electrical pumps and by spraying water from concrete pumping trucks. In May a ventilation system is set up to improve the working environment within the reactor building of unit 1.

The airlocks are opened on May 9, and measurements of the air dose rate confirm that there is no radiation impact on the outside environment. TEPCO announces a meltdown in one of its reactors, while seawater samples near another reactor show levels of radioactive caesium at 18,000 times the legal limit. An estimated 90,000 tonnes of radioactive water remains from the pumping of water to cool the reactors, and the dilemma mounts by the day as the volume of contaminated water keeps increasing. French nuclear engineering company Areva and TEPCO sign a contract and start treating the accumulated radioactive water in June.

Post Fukushima

The situation created by the crippled Fukushima power plant has raised further concerns about the safety and viability of nuclear energy. So, what will the disaster mean for the nuclear power industry? Events at Fukushima have shown that even if the situation does not deteriorate any further, the damage has already been done. Fukushima has dented the industry’s credibility, and shockwaves have rippled through the finance and energy communities, raising questions about whether nuclear is worth the hassle, or whether cheaper and safer alternatives should now be considered.

A question of finance
High-profile nuclear utility companies, such as South Texas Project majority shareholder NRG, have started to rethink matters. The company withdrew from the project in April, writing down its $481m investment and ruling out any further investment. CEO David Crane said: “This tragic nuclear incident has introduced multiple uncertainties around new nuclear development in the United States which have had the effect of dramatically reducing the probability that STP 3 and 4 can be successfully developed in a timely and prompt fashion.”
Panic also spread among utility companies when it became clear how many billions in compensation would have to be paid out to victims of nuclear accidents such as this one. Only recently the Japanese government endorsed a scheme to help TEPCO compensate victims of the disaster at the tsunami-stricken nuclear power plant and save it from financial collapse. Japan’s prime minister, Naoto Kan, approved a plan to create a new body to channel funds to utility companies expected to pay nuclear accident compensation claims. A government committee is expected to take control of management at TEPCO for a brief period to monitor its restructuring measures. TEPCO’s balance sheet has been wiped out, and compensation claims are estimated to reach trillions of yen, primarily from Fukushima residents living in the vicinity of the power plant.

Alternatives to nuclear energy

The recently published annual Nuclear Industry Status Report has highlighted that advanced technology and mass production are progressively making nuclear power’s renewable competitors cheaper. Wind turbines, for instance, beat new nuclear on cost by a factor of two to three, and solar power by around half, the report showed. It noted that prices are steadily decreasing: a standard crystalline-silicon photovoltaic module cost $4.20 per peak watt in 2009, whereas today it costs $1.70. Forward pricing suggests it will be $1.35 by the end of 2011 and $1.00 by mid-2012. So it comes as no surprise to many that micro power generated around 90 percent of the world’s new electricity in 2008.
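
A quick back-of-envelope check of that price trajectory, treating the quoted years as whole calendar years and taking “today” to mean 2011 (assumptions made only for illustration):

```python
# Implied average annual decline in crystalline-silicon module prices,
# based on the $4.20/Wp (2009) and $1.70/Wp (2011) figures quoted above.

price_2009 = 4.20
price_2011 = 1.70
years = 2011 - 2009

annual_decline = 1 - (price_2011 / price_2009) ** (1 / years)
print(f"Implied average annual price decline: {annual_decline:.0%}")   # roughly 36%
```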

Businesses, according to the report, have over the past few years invested more in renewable energy than in nuclear power. Last year renewable energy excluding large hydro attracted $151bn in private investment globally, and its total installed capacity exceeded that of nuclear power worldwide. Nuclear power, in comparison, received no investment at all.

Even before the Japanese disaster, the nuclear industry had been slowly acknowledging that renewables are becoming a realistic competitor. Some continue to claim that the intermittency of wind and solar power disqualifies these growing renewable sources as serious contenders. However, the report shows that the reverse is true. It provides the reader with an insight into basic quantitative and qualitative facts about nuclear power plants in operation, under construction, and in planning phases throughout the world. At the same time it assesses the economic performance of past and current nuclear projects and compares their development with that of the leading renewable energy sources.

The all-embracing research offers a country-by-country breakdown of nuclear programmes around the world and shows that, even before the Fukushima calamity began, the ageing nuclear industry had been powerless to stop the decline of nuclear energy, as it cannot compete with the pace set by renewables. According to the research, renewable facility additions have been steadily outpacing nuclear start-ups for the past 15 years. Last year the combined capacity of wind turbines, biomass and waste-to-energy plants, and solar power reached 381 GW, beating the 375 GW of installed nuclear capacity.

At the last count in April, 437 nuclear reactors were operational worldwide, seven fewer than a decade earlier. According to the International Atomic Energy Agency (IAEA), there are 64 reactors under construction in 14 countries, compared with 233 at the beginning of the 1980s. Within Europe there were 143 operational reactors, down from a historical high of 177 units in 1989.

Nature’s quake-tsunami double strike is an exceptionally rare occurrence, but many have questioned the wisdom of constructing nuclear power plants in highly active seismic areas.

Only time will show how the current situation will affect the image of nuclear power. If the catastrophic meltdown is contained and the reactors are successfully shut down with no further release of radiation, the nuclear power industry will claim that, even when faced with the most awful natural disaster, it came through almost unharmed.

Countries such as Germany and India have started reviewing their nuclear precautions and available options in light of what has happened in Japan. Overall, Fukushima won’t spell the end of the nuclear energy industry; too much has already been invested worldwide for that to be the case. However, the post-Fukushima circumstances add to the international economic crisis and are exacerbating the struggle that proponents of nuclear energy have to deal with.

Clean-up operation

Renova Energia is a 100 percent Brazilian company with broad expertise in electric power generation using renewable sources. Since its foundation in 2000, Renova has developed a portfolio of alternative renewable energy projects including 2,205.7MW of wind farm capacity and 1,467.5MW in small hydroelectric power plants (SHPPs) in several Brazilian states such as Bahia, Minas Gerais, Mato Grosso, Maranhão, Goiás, Tocantins, and Paraná.

The company was the biggest winner in Brazil’s first wind energy auction, held in December 2009. In this auction, the company sold 270MW of installed capacity across 14 wind farms. Renova Energia sold an additional 153MW of capacity installed in six wind farms in the second reserve energy auction carried out in Brazil in August 2010.

The company’s outstanding performance in energy auctions can be explained by its integrated approach to exploring, developing, implementing and operating projects. These features account for unparalleled operating efficiency, with Renova’s wind energy projects achieving capacity factors of over 50 percent, a much higher rate than the averages seen in both domestic and international markets. Among Renova’s competitive advantages are the synergies captured across its portfolio of projects, combined with the privileged geography of its farms and the use of leading-edge technology.

Over its 11 years of operation, Renova has invested in training and developing the skills of a solid team comprising highly-skilled subject-matter specialists in the areas of engineering, geology, environment, law, and other fields.

In June 2010 the company completed an IPO and earned its way to Level II Corporate Governance of BM&F BOVESPA (São Paulo Stock Exchange), with its stock trading under the RNEW11 ticker code. Renova thus became Brazil’s first alternative energy company with stock traded in a capital market. The amount raised in the IPO was R$162m and will be used to support the company’s expanding operations.

Renova’s stock increased approximately 130 percent in value between the IPO and April 2011. Today, the company has a market value of R$1.605bn.

Brazil’s installed wind power capacity should experience aggregate annual growth of 55 percent between 2013 and 2025, rising from 5 GW to 33 GW, according to IHS – Emerging Energy Latin America Wind Power Markets & Strategies 2010. Current trends indicate Renova will keep up with the growth of wind power in Brazil.

Among the company’s shareholders are funds InfraBrasil and FIP Ambiental, both independently managed by Banco Santander, and Renova’s founders, Ricardo Delneri and Renato Amaral.

Of the 20 wind farms already contracted in auctions, 14 commenced building work last February and are on their way to becoming Brazil’s largest wind power complex. The endeavour is expected to receive R$1.17bn in investment, with facilities being built in the municipalities of Caetité, Guanambi and Igaporã, in upcountry locations of the State of Bahia, Brazil. The annual output of the facility will reach 1,100GWh/y (gigawatt-hours per year), equivalent to the power consumption of 650,000 households. Estimates suggest 2,000 direct and indirect jobs will be created at the peak of the project’s works.

Additionally, Renova’s business model should assist and empower local economies and encourage the settlement of small farm owners in country areas. The model is based on land lease arrangements, which allow small farmers to continue working their land while earning additional income.

Initially a total of 184 wind turbines will be deployed, each standing 80m tall (the same height as a 27-storey building) and with a blade length of 42m. The equipment was procured from GE, and each generator has a nameplate output of 1.6MW. The civil works for the wind farms have been commissioned to a joint venture formed by the renowned Brazilian contractors Queiroz Galvão and Mercurius. Electric and electronic installations will be carried out by ABB, a global leader in power and automation technology. The energy produced will be distributed through a Shared Generation Facility (SGF) to be built by Companhia Hidro Elétrica do São Francisco (Chesf).
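As a rough, illustrative cross-check of the figures quoted above (not a company-published calculation), the initial nameplate capacity and the implied capacity factor can be derived as follows; the 8,760-hour year is the only number not taken from the text.

```python
# Rough sanity check on the figures quoted in the article (illustrative only).
turbines = 184
rating_mw = 1.6               # nameplate output per GE generator
annual_output_gwh = 1_100     # quoted annual output of the complex
households = 650_000          # quoted equivalent household consumption

nameplate_mw = turbines * rating_mw                               # ~294 MW initially installed
capacity_factor = (annual_output_gwh * 1_000) / (nameplate_mw * 8_760)
per_household_kwh = annual_output_gwh * 1_000_000 / households

print(f"Initial nameplate:        {nameplate_mw:.1f} MW")
print(f"Implied capacity factor:  {capacity_factor:.0%}")           # roughly 43%
print(f"Implied household use:    {per_household_kwh:.0f} kWh/year")  # roughly 1,700 kWh
```

The figures are broadly consistent with one another; the capacity factor rises further once the remaining contracted farms come online.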

The complex is scheduled to start commercial operations on July 1, 2012. The power purchase and sale contracts entered into with the Electric Power Trading Chamber (EPTC) run for 20 years. The EPTC is the Brazilian body responsible for managing and overseeing the trading of both regulated and free electric power contracts within the National Interconnected System.

Expertise and competitiveness

In Brazil, power purchase and sale contracts traded in auctions are based on an annual power delivery average in megawatt-hour units. As a result of this structure, the company believes that the successful performance of its projects will enable it to charge a lower rate than competing wind farms.

In view of this scenario and an increasingly favorable outlook for the Brazilian wind power market, Renova Energia banks on the synergies of its projects, especially in terms of access, transmission infrastructure and management intelligence, as a competitiveness strategy to consolidate itself in the market and participate in upcoming power auctions that will take place in Brazil.

It is worth mentioning that the company is accredited to trade in carbon credit markets owing to the cleanliness of the energy generated through wind farms and SHPPs and the corresponding reduction in fossil fuel dependency. This form of electric power generation is eligible to earn carbon credits, and the company intends to derive additional revenues from the trading of such credits.

Renova believes that one of Brazil’s biggest challenges lies in developing new energy sources without causing major environmental impact. In this sense, it is society’s responsibility to discuss and understand the complexity of its current energy system, so that good decisions are made toward building a future based on environmental sustainability without overlooking the ever-increasing need for electric power, the critical asset ensuring the social and economic development of all nations.

For more information: renova@renovaenergia.com.br; www.renovaenergia.com.br; Tel: 55 11 3569-6746

Natural efficiency

Iberwind’s headquarters are in Oeiras, near Lisbon, Portugal. Its core activity is the promotion, development and operation of wind energy projects. In November 2008, a consortium led by Magnum Capital Industrial Partners acquired through Iberwind the majority of the wind assets in Portugal, formerly owned by the Australian group Babcock & Brown, as well as the portfolio of service companies that guarantee the management, operation and supervision of those same assets. The Iberwind shareholder structure is based mainly on private equity funds and Portuguese individual shareholders.

Strategic objectives and guidelines
In general terms, Iberwind’s strategic objectives and guidelines are:
– To have an efficient, lean and flexible organisation adjusted to the scale of the assets under management.
– To ensure state-of-the-art management for the wind farms, seeking to maximise energy generation as well as ensuring the market’s highest availability rate and cost-effective operation and maintenance.
– To guarantee appropriate financial management to mitigate the risk and provide long-term access to low-cost funds, including attractive financing for all the wind farms of the portfolio.
– To create value-generating options in attractive European countries from a wind resource, financial and political perspective.

Mission and portfolio
Iberwind is currently the market leader of the wind energy sector in Portugal and one of the main players of the renewable sector in Europe. Iberwind has an excellent portfolio of wind farms located in the most attractive wind resource areas in Portugal, resulting in outstanding energy yields. The Iberwind mission remains to produce electricity using the wind as a clean and inexhaustible source and its goal is to achieve the maximum level of excellence and efficiency in managing and operating the 31 wind farms of its portfolio.

Portugal, with 3,900 MW, accounts for about five percent of the wind energy capacity installed in the EU. Iberwind has installed 680.75 MW of wind energy over the past 12 years, which represents about 18 percent of the Portuguese market in terms of installed capacity and about 21 percent in terms of energy produced. Those 680.75 MW are divided among 31 wind farms, equipped with 331 turbines of 16 different models, supplied by six different manufacturers. Two of the wind farms (Pampilhosa and Candeeiros) are considered internationally to be among the largest in terms of onshore installed capacity, each with over 100 MW. Iberwind has registered high installed-capacity growth rates in recent years.

Social and environmental responsibility
Iberwind has a clear purpose of promoting the benefits of producing clean energy from a renewable energy source, based on technology to generate wind energy. Their wind farms extend from the north to the south of Portugal, contributing to the development of local economies and promoting the creation of decentralised skill centres. The wind farms are predominantly located in the central and northern regions of the country.

Iberwind’s intervention is not confined to social and economic aspects. Iberwind acknowledges its responsibility to be proactive in its business in order to guarantee that its projects are designed, built and operated in an environmentally responsible way. The wind farms are designed to ensure project compatibility with the environmental values present at every site (cultural heritage, geological and geomorphological heritage, fauna and flora, amongst others). Besides monitoring environmental parameters such as noise, fauna and flora, the operation and maintenance procedures are planned to guarantee the continuous improvement of the environmental performance of the projects. A further environmental benefit of Iberwind’s annual energy production is that it avoids the emission of one million tonnes of carbon dioxide and the consumption of 147,000 tonnes of oil equivalent.

HR structure and O&M model
Iberwind currently employs 66 people, nearly half of them spread throughout the country at the 31 wind farms of the Iberwind portfolio. To achieve a high level of operation, Iberwind uses a hybrid operation and maintenance model: operation and maintenance contracts are in place with the turbine manufacturers, but Iberwind also has its own teams, trained and qualified by the suppliers, that perform a certain level of operation and maintenance on the wind turbines to complement the suppliers’ duties, especially at night and on weekends.
Iberwind implements close supervision and operations that maximise production. Its operations staff are trained in all models of wind turbines and electrical installations, as well as other areas such as meteorology and health and safety. Teams of field engineers and technicians work directly on the wind farms, operating the electrical infrastructure and wind turbines. These teams have fast reaction times, enabling the maximisation of production while assuring the lifetime of the equipment and compliance with all safety procedures. Iberwind controls the wind farms with a Supervisory Control and Data Acquisition (SCADA) system, which was specified and developed by its own staff and later outsourced to a software house that now supplies Iberwind with the service and its updates.

2010 operational and financial performance

Iberwind’s consolidated EBITDA in 2010 was around 137m euro, with an annual turnover of 158m euro, generated by energy production of 1.7 TWh. The energy production achieved in 2010 accounts for around 21 percent of the wind energy produced in Portugal and approximately three percent of the electricity consumed in Portugal.

Iberwind produces around 21 percent of Portugal’s wind energy from 18 percent of the country’s installed wind capacity. Two reasons for this deserve highlighting. The first is the high load factor of the Iberwind portfolio: the overall production of the 31 wind farms in 2010 corresponded to 2,511 equivalent hours of installed capacity, a very significant figure by European standards. The second is that Iberwind is strongly availability-oriented. Policies and procedures have been implemented to improve the operation and maintenance efficiency of all Iberwind wind farms, and a best-practices centre has been established to improve wind farm performance. The result is clearly visible in the significant and sustained improvement in the overall technical availability of the portfolio, which rose from 94.9 percent in 2008 to 97.1 percent in 2010, measured as the proportion of the year’s 8,760 hours in which the wind farms are genuinely available, with no deductions at all. The 2010 figure is extremely high.
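A short, illustrative consistency check using only the figures quoted above; the revenue-per-MWh value derived at the end is an implied average, not a number reported by the company.

```python
# Illustrative consistency check on the Iberwind figures quoted in the text.
installed_mw = 680.75        # installed capacity
equivalent_hours = 2_511     # 2010 production expressed in equivalent full-load hours
turnover_m_eur = 158         # 2010 turnover (EUR millions)

production_gwh = installed_mw * equivalent_hours / 1_000       # ~1,709 GWh, i.e. the ~1.7 TWh stated
load_factor = equivalent_hours / 8_760                          # ~29% of the hours in a year
implied_revenue = turnover_m_eur * 1e6 / (production_gwh * 1e3) # implied average EUR per MWh

print(f"Production:           {production_gwh:,.0f} GWh")
print(f"Load factor:          {load_factor:.1%}")
print(f"Implied avg revenue:  {implied_revenue:.0f} EUR/MWh")
```

The first two lines confirm that the 2,511 equivalent hours and the 1.7 TWh of production quoted in the article are mutually consistent.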

Business development
In 2010 Iberwind began the first repowering (full replacement) and overpowering process in Portugal, taking a wind farm from 10 MW to 12 MW. This 12m euro investment will provide a relevant set of benefits, notably:
– Halting the progressive and inevitable deterioration of the operability/productivity of the wind farm, removing the associated O&M risk and securing a long-term O&M contract, with a positive impact on future cash flows, particularly stability/predictability in the medium to long-term.
– Increasing the productivity level of the Iberwind Group portfolio, taking advantage of a site with excellent wind resources and turning one of the portfolio’s worst performing wind farms into one of the best (at all levels).
– Positively contributing to the rejuvenation, operational standardisation and market image of the Iberwind portfolio.
– Setting an example for other (expected) similar processes that may occur in the near future in the Iberwind Group and, most likely, among other players in Portugal.

In May 2010 a law was passed in Portugal approving these overpowering rules, encouraging the over-equipping of existing wind farms or those in the licensing phase and promoting the installation of new wind power in the country. The legislation provides for the possibility of installing up to 20 percent additional capacity at a wind farm. Iberwind is currently evaluating the technical, operational, legal and economic feasibility of overpowering some of the wind farms in its portfolio.

McKinsey European Benchmarking
Iberwind was invited by McKinsey in 2010 to participate in a benchmark study that included seven other relevant European wind energy producers. The study covered the companies’ business in 2008 and 2009 and benchmarked parameters such as availability, efficiency, investment, and operation and maintenance costs. The final output and conclusions turned out to be very favorable for Iberwind. McKinsey did not publish a ranking of the companies involved in the analysis; nevertheless, based on those parameters, Iberwind was generally considered to be one of the best companies analysed by McKinsey.

For more information: Tel: (00351) 214876700; Email: iberwind@iberwind.pt; www.iberwind.pt

Silent menace

If the advent of the internet effectively bridged the gap between the imagination of science-fiction writers and the present day, then the progression of the technologies used in unmanned aircraft systems (UAS) could well be the next link in that hi-tech chain: a glimpse of a future that once seemed far away. A lot of controversy surrounds the use of armed drones in war and conflict, but there is no denying that, from a purely technical standpoint, they represent a step forward in human technological advancement. The first unmanned aircraft systems emerged in basic form in the early twentieth century, and as the advantages of the innovation were recognised, new variants were launched over the years, covering a wide spectrum of uses.

Drones in their modern incarnation first surfaced about 35 years ago, originally as surveillance tools. Although a large portion of the drones unveiled today are designed for military purposes and armed attacks, drones are also used for transport or to gather intelligence on military reconnaissance missions. In addition, drones assist scientific and commercial projects and can play a significant role when natural disasters strike. Brought in to provide useful data for rescue forces and emergency services, drones can determine where help is needed most and can access areas that are too dangerous for humans to enter.

When natural disasters strike, a revolutionary UAS, the Aerosonde, comes to the rescue. A brainchild of AAI Corporation, the unmanned aircraft system is used specifically as a hurricane hunter. With power supplies allowing it to fly for more than 26 hours, the vehicle carries a full electro-optic/infrared payload and, in the 1990s, became the first UAS to cross the Atlantic Ocean.

Still in demand, the 35lb system was adopted in 2006 by the National Oceanic and Atmospheric Administration (NOAA), which capitalised on its ability to fly into hurricanes and communicate near-real-time data directly to the National Hurricane Centre in Florida. With many strings to its bow, the Aerosonde gathers data such as barometric pressure and temperature as accurately as manned hurricane hunters, and it also provides measurements far closer to the water’s surface than previously achieved by any tool designed for the same purpose.

Drones used in war
Kick-starting the use of drones on a larger scale, the U.S. introduced unmanned drones in the early days of the NATO campaign. At the time, drones were intended for surveillance only and were not authorised to fire missiles. Before long, the restrictions were eased and the first test of an armed UAS was carried out in 2001 by the CIA, which fitted Hellfire missiles to a Predator drone, a vehicle initially used for spying. The menacing new Predator hybrid went on its first deployment in Yemen in 2002, a trial operation carried out, once again, by the CIA: to put the invention to the test, it was remotely controlled to blow up a sports utility vehicle in the middle of the desert.

A few years on, the armed Predator drone has advanced to the point where it can demolish a lot more than sports utility vehicles, and it is currently the most commonly used armed drone. Since President Obama came to power, drones used in war have spread like wildfire, and the drone influx associated with the war in Iraq was just the beginning. This development has sparked fury among anti-war campaigners, who describe armed drones as mindless killing machines, controlled and fired like computer games and leaving thousands of casualties, innocent or not, in their wake. Despite the heated debate about the morality of drones, more and more combat scenarios are being supported by armed UAS vehicles. A significant surge occurred in 2010, when drone operators targeted Pakistan’s mountainous Waziristan region. More recently, at the end of April 2011, Obama approved the use of armed Predator drones in Libya, permitting remote operators to strike Gaddafi’s defence installations, including air defence, missile and radar sites. Following the death of Osama bin Laden, attacks in Yemen targeting individuals believed to be part of al Qaeda are further drone-fuelled combat missions that have grabbed the headlines.

Man in the loop
Remotely controlling UAS vehicles, trained operators employ what is known as the man-in-the-loop system. In brief, the method involves a trained “pilot”, a remote control and a large computer screen. The remote control handles many operational functions automatically, and once the drone has identified a target, it gives the controller the option to fire the missile with a simple command: “yes” or “no”. The role of the pilot is primarily to vet the target rather than to seek it out. Although U.S. military forces have been criticised for recruiting controllers as young as 18, the man-in-the-loop control system generally requires an operator with considerable experience, backed by substantial training. To learn the precise skill of remotely ‘flying’ the missile, the virtual pilot undergoes training through simulators or live exercises. Aside from technical know-how, the controller must be able to accurately interpret video imagery and evaluate the missile’s ability to reach the correct target in real time.

Technical specifics
Armed Predator drones, along with a slew of other hybrids such as the even more advanced Reaper, can travel at a speed of 482km/h. The drones, controlled 6,000 miles away from Creech Air Force Base in the Nevada desert or from an alternative base in Tucson, Arizona, can carry up to 14 missiles. The length of time a drone can stay in the air varies: a Predator can typically stay aloft for about 26 hours, whereas some unarmed drones can remain in the air for up to 72 hours. Aside from endurance, other pivotal drone characteristics include high-resolution cameras and sensors, enabling the drone to both ‘see’ and ‘feel’ objects and living creatures on the ground. So sensitive are these systems that a drone can even determine whether there are people inside a building.

Coupled with the obvious benefit that they carry no pilot onboard, and therefore put no pilot’s life at risk, UAS vehicles offer more precise targeting and can fly very low above the ground, which allows for better visibility. The U.S. defence secretary, Robert Gates, an avid supporter of the drone, has stressed in its defence that it possesses capabilities that larger aircraft such as A-10s and C-130s simply cannot provide, precision being one obvious benefit. Although drones don’t come cheap, they are still more affordable than a fighter plane: a drone can cost $40m, whereas a fighter plane can cost $350m.

The United States might be the most prominent player in the drone arena, with a considerable number of Predators and Reapers at its service, but several other countries are catching up – currently over 40 nations are developing UAS programmes, Russia being behind almost half of those emerging initiatives.

One candidate making significant moves in the highly competitive drone market is Israel. The Israeli military outshone its rivals when it equipped its forces with a new range of spy drones in 2004. Small enough to fit in a soldier’s backpack, the baby drones are a far cry from the larger UAS types previously used by Israel’s military to spy on Palestinian militants in air strikes. Conceived to take aerial photos of targeted Palestinian territories, the mini drones Birdy and Spy There were both developed by Israel Aircraft Industries’ (IAI) engineering division. The most minuscule version yet unveiled by the same establishment is the Mosquito 1, a tiny, barely noticeable spying drone weighing a mere 250g, which could change the way militaries track movements.

Domestic drones
In a bid to step up surveillance on society a notch, drones will soon serve domestic purposes on a greater scale. Sparking a media storm, reports have indicated that police forces in several countries support the use of drones to keep an eye on the public. Despite privacy campaigners’ ardent protests, spying drones represent a progressive end of the UAS spectrum and the idea is widely embraced by several governments. In the UK, domestic spying drones are allegedly already being used for a variety of surveillance operations, albeit not yet officially. According to the Guardian newspaper, drones took to the skies in the summer of 2007 to keep an eye on festival-goers attending the large-scale V Festival held in Staffordshire. By 2012, it is alleged, police in the UK will start to use unmanned spy drones for the “routine” monitoring of antisocial offenders including motorists, protesters, agricultural thieves and fly-tippers.

In the U.S., meanwhile, the Federal Aviation Administration (FAA) still has strict rules requiring agencies to obtain permission to use drones in actual operations, but there are plans to relax the rules over the next two years, after which police forces across the country might be allowed to routinely fly lightweight, unarmed drones up to 400ft above ground, completely out of sight of anyone on the ground beneath. The two drones specifically produced as domestic spying devices are the Dragonflyer x6 and the T-Hawk. Human rights groups will undoubtedly take an interest in how this develops.

Looking ahead
The future of the drone is an area of high priority, so we can expect UAS technology to continue to advance at pace as new versions designed to outdo their predecessors in speed, endurance and ability to carry weaponry begin to emerge.

Offering a more tangible outlook on what the future might hold for the drone, the Unmanned Aircraft System Flight Plan, released in 2009, presents its predictions on how the drone sphere will develop through to the year 2047. Radical to say the least, the 81-page report suggests that the ever more sophisticated vehicles could come to replace every type of manned aircraft, spanning everything from bombers and air-to-air fighters to tankers. Even though the report spells out that its suggestions are indications only, and that UAS vehicles are unlikely to wipe out the manned air fleet completely but will rather serve as alternatives to conventional planes, it nevertheless suggests that UAS vehicles will soon equal the capabilities of traditional planes in all mission categories.

Whether the Air Force will move to an entirely UAS-based fleet remains to be seen, but drone technology is here to stay and is set to become not only more technologically advanced, but also more widely used.

Eyes in the sky

Orwell’s nightmarish vision of society is very much upon us. So much so that George himself would have been amazed at the extent to which his futuristic vision has become reality, stretching even beyond his own imagination. As technology has developed, we have become accustomed to an assortment of techniques used by governments, public services and workplaces to keep a close eye on society. Elaborate biometrics, front-end verification, data matching, profiling, cross-system enforcement and multi-purpose identification schemes, along with enhanced powers to undertake telephone, email and web-use surveillance, have all become part of everyday life. Video surveillance, meanwhile, has become ubiquitous in most corners of the world.

Beating the public sector to it, consumer marketing companies have adopted ever more sophisticated ways to retrieve, analyse and exploit personal data and shopping habits in order to gain consumer intelligence, and to share the information with external parties.

Stepping up the campaign to watch our every move and action, quite literally, Apple is currently at the receiving end of a privacy breach row, sparked by the controversial discovery of a tracking file that appears to monitor its iPhone and iPad users. The secret file was discovered by two British researchers, and their potent find quickly created a media frenzy, both within the blogosphere and among traditional media outlets. Denying the allegations, the company’s CEO, Steve Jobs, directed attention to Google instead, claiming that the rival company’s Android phones did track the whereabouts of their customers, while Apple did not. Offering a statement of sorts shortly after the news surfaced, a much-discussed email exchange between an iPhone user and Mr Jobs himself read as follows: “Could you please explain the necessity of the passive location-tracking tool embedded in my iPhone? It’s kind of unnerving knowing that my exact location is being recorded at all times. Maybe you could shed some light on this for me before I switch to a Droid. They don’t track me.” Mr Jobs is said to have replied: “We don’t track anyone. The info circulating around is false.” In response to the customer’s claim that Android doesn’t track users, Jobs simply replied: “Oh yes they do.”

A curtly phrased email didn’t end the drama surrounding the case. Backing the researchers’ findings, a patent filed by one of Apple’s senior engineering managers, Ronald Huang, sheds light on the fact that the company does indeed plan to use the data mined from tracking its users. Mapped out in patent application number 12/553,554, filed in the U.S. in March 2011, is Apple’s intention to create a ‘Location History’ map that will allow it to track users’ movements in a searchable programme. According to Gawker.com, the first online publication to report the news, the document describes how the database could be ‘correlated or related’ to other personal information, such as data associated with personal pictures, financial transactions, sensor outputs, communication and network events, and so on. Invading its customers’ privacy yet further, the patent application also spells out that data will be forwarded to remote servers or, to use the precise description, a ‘remote reference database’, presumably without consent from the user.

Rotten to the core
Suggesting that Apple has been planning this move for some time and is serious about its innovation, details about data compression have also been discovered. The latest development, according to claims made in the Wall Street Journal at the end of April, is that the secret file embedded in iPhones and iPads is designed to track users even when the ‘location services’ tool is switched off. Location data is invaluable to advertisers because it provides the key information a mobile phone can offer: the consumer’s whereabouts, details of where they are heading, what they might purchase once they reach their destination, and precisely how long they stayed at said commercial outlet.

Previously, both Mr Jobs and Google CEO Eric Schmidt have admitted that they use location data to build Wi-Fi positioning databases. Recognising the seriousness of the situation, Senator Al Franken (D-Minn.), the chairman of the Judiciary Subcommittee on Privacy, Technology and the Law, wrote a letter to Apple demanding an explanation. A fragment of the letter read: “The existence of this information — stored in an unencrypted format — raises serious privacy concerns. The researchers who uncovered this file speculated that it generated location based on cell phone triangulation technology. If that is indeed the case, the location available in this file is likely accurate to 50 metres or less. Anyone who gains access to this single file could likely determine the location of a user’s home, the businesses he frequents, the doctors he visits, the schools his children attend, and the trips he has taken over the past months or even a year.” Franken went on to list a series of points and questions for Apple to answer regarding the exact nature of the patent, and how precisely the data gathered by the company would be used.

Due to health concerns, Apple CEO Steve Jobs did not appear on Capitol Hill to testify at the congressional hearing on the tracking-file discovery. Senator Al Franken held the hearing on May 10, under the theme “Protecting Mobile Privacy: Your Smartphones, Tablets, Cell Phones and Your Privacy”. Alongside Apple, representatives from Google attended to testify. As the controversy gathered force, Jobs saw no choice but to come forward ahead of the hearing to answer some of the questions posed by senators, privacy campaigners and consumers. He posted the following responses on Apple’s website on April 27:

1. Why is Apple tracking the location of my iPhone?

Apple is not tracking the location of your iPhone. Apple has never done so and has no plans to ever do so.

2. Then why is everyone so concerned about this?

Providing mobile users with fast and accurate location information while preserving their security and privacy has raised some very complex technical issues which are hard to communicate in a soundbite. Users are confused, partly because the creators of this new technology (including Apple) have not provided enough education about these issues to date.

3. Why is my iPhone logging my location?
The iPhone is not logging your location. Rather, it’s maintaining a database of Wi-Fi hotspots and cell towers around your current location, some of which may be located more than one hundred miles away from your iPhone, to help your iPhone rapidly and accurately calculate its location when requested. Calculating a phone’s location using just GPS satellite data can take up to several minutes. iPhone can reduce this time to just a few seconds by using Wi-Fi hotspot and cell tower data to quickly find GPS satellites, and even triangulate its location using just Wi-Fi hotspot and cell tower data when GPS is not available. These calculations are performed live on the iPhone using a crowd-sourced database of Wi-Fi hotspot and cell tower data that is generated by tens of millions of iPhones sending the geo-tagged locations of nearby Wi-Fi hotspots and cell towers in an anonymous and encrypted form to Apple.
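To make the mechanism described above more concrete, here is a minimal, purely illustrative sketch of how a device could estimate a coarse position from a local database of known hotspot and cell-tower coordinates, for example by taking a signal-strength-weighted centroid. The coordinates, signal strengths and weighting scheme are invented for illustration; this is not Apple’s actual algorithm.

```python
# Illustrative only: coarse position from nearby Wi-Fi hotspots / cell towers whose
# locations are already known (e.g. from a crowd-sourced reference database).
# Uses a simple signal-strength-weighted centroid; real systems are far more elaborate.

# Hypothetical nearby beacons: (latitude, longitude, received signal strength in dBm)
nearby = [
    (37.3325, -122.0310, -48),
    (37.3331, -122.0298, -60),
    (37.3318, -122.0305, -71),
]

# Stronger signal (less negative dBm) gets a larger weight; 100 + dBm keeps weights positive.
weights = [100 + rssi for _, _, rssi in nearby]
lat = sum(w * b[0] for w, b in zip(weights, nearby)) / sum(weights)
lon = sum(w * b[1] for w, b in zip(weights, nearby)) / sum(weights)

print(f"Estimated position: {lat:.5f}, {lon:.5f}")
```

Even such a crude estimate shows why a cached database of beacon locations lets a phone fix its position in seconds rather than waiting minutes for a pure GPS solution.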

Your home is your castle. Or is it?

Another privacy-busting episode, which made the headlines in May 2010, was Google’s entry of the WiFi wireless router of virtually every UK household into its database. The information was collected by radio aerials on the company’s Street View cars, and the data gathered was intended for Google’s Maps Mobile application, designed to help smartphone users access local information such as the location of restaurants, cinemas, theatres, shops and hotels. The project remained under wraps until an inquiry in Germany revealed the nature of the operation, forcing Google to admit that it had “mistakenly” downloaded data packets. In cases where the networks concerned lacked password protection, the downloads retrieved by the search engine giant may even have included fragments of emails and other potentially sensitive information. Coming to its own defence, and in doing so making matters more spine-chilling for the public, Google claimed that other companies had adopted the same scheme.

Google claims that the information, which lists each network’s MAC (Media Access Control) address and SSID (Service Set Identifier), but not the corresponding house number, is publicly available, since the wireless network signal extends beyond the property in which it is located. Following the contentious bust-up, Google suspended the use of Street View cars across the world, but their work in Britain is already complete, and most homes in the country have been added to the database.
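For illustration only, the kind of record such a survey might produce could look like the sketch below; every value is invented, and the field names are assumptions rather than Google’s actual schema. The point is that a router’s MAC address is globally unique, so pairing it with observed coordinates effectively turns a home network into a fixed location beacon even without a street address.

```python
# Hypothetical Wi-Fi survey record (values and field names are invented for illustration).
survey_record = {
    "mac": "00:11:22:33:44:55",   # hypothetical router MAC address (BSSID)
    "ssid": "HomeNetwork",        # hypothetical network name
    "lat": 51.50135,              # position where the survey vehicle observed the signal
    "lon": -0.14189,
    "rssi_dbm": -67,              # observed signal strength
}
print(survey_record["mac"], "->", (survey_record["lat"], survey_record["lon"]))
```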

In response to the widespread criticism, Google said that it had not notified data protection authorities because it did not think it necessary, but admitted that greater transparency “would have been better”. To create some order amid the chaos, a string of authorities, including those in Britain and the US, have asked Google to retain the downloaded emails pending a full inquiry, but as yet, a year later, it is unclear what Google’s obligations are concerning the WiFi data.

Ardent privacy campaigners claim that the liberties taken by Google and its competitors show that regulation is crumbling. Privacy International, a UK-based organisation whose raison d’être is to defend personal privacy, strongly believes that the Street View operation will in future be viewed as a breach of the law, and that Google should never have been allowed to get away with its intrusive project.

“There should be a parliamentary inquiry which should question Google and finally get it to explain what it is up to both technically and commercially. The idea that it can log everyone’s wi-fi details because it is all ‘public’ is a bogus argument. It is bogus because of the question of scale and the question of integration with other information which would amount to a huge breach of our privacy,” said a spokesperson of Privacy International.

With part of the criticism directed at the ICO (Information Commissioner’s Office), the watchdog responded: “We are aware that the collection of information by Google Street View cars has raised a number of issues which we are considering. All organisations that process personal information must comply with the requirements of the Data Protection Act. Organisations are only permitted to collect data for a specific purpose. Similarly, organisations must only retain data for as long as necessary.”

The advancement of biometrics

It is not only data such as location, movements and emails that are recorded these days; our very bodies and facial features are being scrutinised and kept on file, too. At the end of March 2011, an international alliance of organisations and individuals from 27 countries lodged a petition calling on the Council of Europe to start an in-depth survey of the collection and storage of biometric data by member states, in view of the fact that European governments have increasingly started to store biometric data such as fingerprints and facial scans. The personal data recorded includes that held on contactless ‘RFID’ chips used in passports and/or ID cards. Some nations – France, Lithuania and the Netherlands among them – have even implemented database storage.

Wall Street waits

With financial reform legislation barely a year old, Republican policymakers, urged on by Wall Street’s influential figureheads, have been vehemently working to re-spin the Dodd-Frank Act (DFA) into new regulatory fabric, alongside initiatives to tear up certain aspects of the existing law. They condemn the DFA because a vast number of the rules needed to put it into effect are yet to be written. At the same time they are trying to restrain the control, influence and power of the new Consumer Financial Protection Bureau, seeking to diminish the DFA’s authority by restricting funding and thereby securing less rigorous rules on derivatives trading.

When President Obama signed the Dodd-Frank Wall Street Reform and Consumer Protection Act (DFA) into law on July 21 last year, the event was proclaimed as the most comprehensive overhaul of financial regulation since the Great Depression of the 1930s. Its 2,315 pages, which make it 10 times longer than Glass-Steagall, took around 18 months to produce and have attracted stern criticism from the start. The rules within the DFA call for more stringent liquidity and capital standards, closer oversight, limitations on certain investments, and greater accountability by rating agencies.

However, anti-regulation Republicans on the House Financial Services Committee are attempting to hinder implementation of several DFA rules which they maintain will hurt the US financial sector. Among them is the Volcker rule, which they believe eradicates proprietary trading by banks for their own accounts and restricts bank investments in hedge funds. The committee states that the law includes “a hastily rewritten derivatives provision that has the potential to do more lasting harm to the US economy than perhaps anything else in this 2,315-page legislation.” The committee’s position on the Volcker rule was just as gloomy: “Because no other major European or Asian country has adopted similar restrictions, imposition of these rules on US firms amounts to unilateral disarmament in a highly competitive global marketplace.”

Safety nets

Jamie Dimon, JPMorgan Chase CEO, told the US Chamber of Commerce recently that increased capital requirements intended to protect banks from collapse would “greatly diminish growth” and threaten to “put the nail in the coffin for big American banks”. George Soros, who made billions in international finance and is now considered the 35th wealthiest man in the world, criticised the Dodd-Frank bill, saying: “I can see its failure to address the issues as it was lobbied into incomprehension and inconsistency by special interests of various kinds.” Even former Federal Reserve Chairman Alan Greenspan took a swipe in a recently published FT article, stating that Wall Street under the DFA “may create the largest regulatory-induced market distortion since America’s ill-fated imposition of wage and price controls in 1971.”

However, in what seemed the harshest response to critics of financial reform thus far, Deputy Treasury Secretary Neal S Wolin said of the Dodd-Frank Act: “We will continue to oppose efforts to slow down, weaken or repeal these essential reforms.” Wolin, speaking at an event organised by the Pew Charitable Trusts, hit back at bankers, lawmakers and lobbyists whose aim, according to him, has been to weaken or delay Dodd-Frank’s rulemaking. Targeting the law’s critics, he questioned whether they had forgotten the damage caused by the financial crisis and the regulatory gaps that helped cause it. “Our response to them remains the same. Regulators have been and are moving quickly but carefully to implement this legislation,” Wolin said.

One by one, Wolin attempted to break down the unrelenting criticism he had encountered over the past few months, addressing each point in turn. Among other things, he commented on the “pace of reform”, pointing out that some of the critics who demanded quick clarity on Dodd-Frank after the law was passed are now saying that Treasury and regulatory agencies are moving too fast on implementation and are releasing too much information too swiftly.

He also addressed the persistent criticism that “there is a lack of coordination by the regulators”, saying: “Our financial regulatory system is built on the independence of regulators and independent regulators will have different views on complicated issues.” He added: “Dodd-Frank forces regulators to work together to close gaps in regulation and to prevent breakdowns in coordination. We have already worked through the Financial Stability Oversight Council (FSOC) to develop an integrated roadmap for implementation, to coordinate an unprecedented six-agency proposal on risk retention, and to develop unanimous support for recommendations on implementing the Volcker Rule.”

Commenting on increased transparency in the derivatives markets, he said the act will tighten spreads, reduce costs, and increase understanding of risks for market participants. “The critics argue that requiring standardised contracts to be traded on open, transparent markets will harm liquidity. This position ignores the history and the basic structure of our financial system. The equities market, where stocks are traded publicly and price information is readily available, is one of the most liquid markets in the world, because of, not in spite of, transparency.”

However, despite Wolin’s persistence, the act continues to pose numerous issues that are troubling Wall Street. According to leading US lawyers, there are four key DFA regulatory areas that require close attention:

The Volcker Rule

The Volcker Rule continues to be the focus of much scrutiny and debate. A few financial holding corporations have already taken measures to separate themselves from proprietary trading activity in its purest form. The existing and proposed relationships between banking entities and hedge funds and private equity funds will remain particularly complicated until regulations are in place to implement the substantive provisions of the Volcker Rule. In spite of the ambiguity that remains about potential exceptions, some organisations have shied away from full implementation.

SIFIs
Under section 113 of the act, a non-bank financial company may be designated by the FSOC as a systemically important financial institution (SIFI) and become subject to the act’s regulation by the Federal Reserve Board (FRB), including heightened prudential standards and limitations, if the FSOC determines that material financial distress, or the nature, extent, size, concentration, interrelatedness, or mix of the company’s activities, could pose a threat to the financial stability of the US. Close attention should be paid by asset managers, insurers, specialty lenders and broker-dealers to an ongoing FRB proposal, which covers companies that are “predominantly engaged in financial activities” and sets $50bn or more in total consolidated assets as the threshold for non-bank financial companies.

Compensation issues

As part of the DFA’s compensation-linked reforms, the SEC released for comment proposals to implement section 952, which sets terms concerning compensation committees and compensation consultants. The new rules impose numerous significant executive compensation and corporate governance requirements on public companies. These include shareholder advisory votes on executive compensation and ‘golden parachutes’, heightened independence rules for compensation committee members, mandatory claw-back policies and enhanced executive compensation disclosures.

Derivatives

Participants in the derivatives market and bankers should be watching developments at the CFTC, the SEC, the FDIC and other banking regulators. The CFTC has let loose a flood of proposals designed to regulate, in a thorough manner, the over-the-counter swaps market, while the SEC is trying hard to keep up with its proposals to regulate the security-based swaps market. The Treasury Department, to the delight of many, announced plans to exempt FX derivatives from the new Dodd-Frank rules.

However, it is essential that board members keep informed of the probable impact of all of the DFA’s regulatory developments. The act’s derivatives reform is said to bring a higher degree of transparency, accountability and risk management to the derivatives markets, with greater availability of data. Yet it is questionable who will be able to take advantage of this, how the cost of compliance will affect the effectiveness of the new market dynamic, and whether the reform will have any effect at all.

Heading offline

As Rupert Murdoch stepped into the Grand Ballroom of San Francisco’s Palace Hotel at the Web 2.0 conference in October 2007, the developers and engineers who packed the room fell into a respectful hush. It was the sort of greeting usually reserved for one of their own tech heroes, not a 76-year-old media mogul.

Sitting cross-legged on a red leather couch, Murdoch looked relaxed in an open-necked shirt. Next to him sat Chris De Wolfe, the co-founder of Myspace, the social network that News Corp’s chairman had acquired for $580m two years earlier. De Wolfe had spent weeks trying to sell Murdoch’s advisers on the conference, which he saw as a kind of digital coming out party for his boss. But those close to the News Corp chairman worried he might embarrass himself in front of the technorati. Murdoch wasn’t concerned. “I’ll do whatever we need to do for Myspace,” he told associates.

Today, Murdoch has clearly lost his affection for Myspace. In fact, News Corp has put the money-losing site on the block, and some believe it might be willing to unload Myspace at a bargain price. News Corp’s bankers, Allen & Co, have supposedly been approached by nearly 20 interested parties.

Whatever the outcome, the nearly six-year-old marriage has become the latest example of what can happen when a traditional media company imposes its will – and business plan – on a start-up that has not yet reached its potential. Interviews with executives who have worked for Myspace and News Corp underscore a stubborn cultural divide that they say exacerbated the once dominant site’s problems and hastened its decline.

Internet failures
Plenty of big media companies have misfired in their bids to retool for the internet age. But few stuck their neck out on as high-profile an opportunity as Murdoch did with Myspace. It was perhaps the most closely-analysed media coupling since the disastrous AOL-Time Warner merger in 2001. And while the Myspace deal was relatively small for a company of News Corp’s size, it nonetheless changed perceptions in crucial ways. News Corp came to be seen as a company of the future, while social media was suddenly considered a serious business opportunity.

Before Myspace, Murdoch had a chequered history in the digital arena. He pursued a rash of unsuccessful investments in the 1990s and for a time following those mishaps, he seemed to shun the internet altogether. Then late in 2004, people close to him say, he had a change of heart. Murdoch shifted tactics. He set aside a war chest of $2bn and handpicked a team of young executives to look at possible ways to make use of the web as an effective distribution platform.

They became the backbone of Fox Interactive Media. Right away, they identified several targets including business news site MarketWatch, Career Search, games network IGN and tech news site CNET. “But Myspace was the one they thought was the most intriguing with the highest risk and likely best reward,” said a former News Corp insider at the time.

Founded in August 2003 by De Wolfe and Tom Anderson, Myspace was conceived as a way for friends and fans to connect with one another as well as with their favourite bands and artists. It was a kind of musical version of pioneer social network site Friendster. Myspace fast became wildly popular with teenagers and young adults who spent hours designing their own pages with their favourite digital wallpaper, posting photos and adding friends. Up-and-coming bands loved the site as a great new free marketing platform. By 2004, just a year after its launch, Myspace was adding thousands of users a day.

Great timing
The timing of the Myspace deal couldn’t have been better. In the 12 months after the News Corp acquisition, Myspace saw its unique visitors grow by 155 percent to 55.8m a month, according to web audience measurement service comScore. “We couldn’t get servers into the building fast enough,” recalls a former Myspace executive. “It was exciting but also scary.”

In early 2007, Murdoch brazenly predicted that Myspace would generate up to $1bn in revenue in the next fiscal year. The forecast brought about a sense of panic at Myspace, according to some former executives. “It was a big blunder to say that to the Street,” said the first executive. “When you looked at how Myspace’s numbers had been trending it was possible – but it was a stretch. After that moment it was basically like all the tentacles of News Corp got involved in a bid to make that target, so getting anything done became near impossible.”

The other mistake was underestimating Facebook. Clearly Facebook was growing fast, even though it was building off a much smaller base. It was also coming out with a slew of innovative products.

One major step forward for Facebook was its newsfeed, an initially controversial but soon very popular feature that let friends know all the latest news about each other and inspired repeat visits. It would take Myspace 18 months to copy it.

Facebook struck another blow in May 2007, opening its platform to third-party developers so they could launch a range of applications on the Facebook site. With hindsight, this was a key turning point: Facebook quickly became the place for people to play games like FarmVille with their friends, as well as share photos and communicate.

Over at Myspace, executives promised the site would open its platform to developers, but after a series of missteps and missed deadlines, Myspace fell even further behind Facebook. Its platform eventually opened up to other applications nearly a year later.

But what was taking place outside of the US also provided one of the earliest indications of the threat posed by Facebook. “Around November 2007 we noticed Facebook user numbers in Canada jumped from 500,000 users to four million in two months and they were up to nearly six million a month later. We started to get worried,” said the first former Myspace executive, describing the trend as a “canary in a coal mine” moment.

Just a few months later, Myspace executives spotted a similar trend in Britain, an even bigger, more crucial market. And as the numbers improved at Facebook, the growth in visits to Myspace started to stall. For Myspace, the fairy tale was ending. “While monthly unique views had stayed pretty high for a long time, internally Myspace insiders knew the daily visitor numbers had been tumbling for a while,” said a News Corp executive close to the discussions at the time.

Facebook’s ability to design simple, wildly popular features – even as Myspace’s platform became overwhelmed and buggy – was beginning to get noticed by developers and industry commentators. Facebook and its founder Mark Zuckerberg were getting all the buzz. Stories were swirling around Silicon Valley that the then 23-year-old Zuckerberg had turned down a $1bn acquisition offer from Yahoo. Zuckerberg’s great strength, say his one-time rivals from Myspace, was that he and his team were focused on product development and innovation while Myspace had become much too concerned with revenue and meeting traffic targets.

Around this time, the Google agreement, which had been hailed as a major coup by the company as well as Wall Street, started to be viewed by Myspace executives as a double-edged sword. The Google deal required a certain number of Myspace user visits on a regular basis for Google to pay Myspace its guaranteed $300m a year for three years. That reduced flexibility as Myspace could not experiment with its own site without forfeiting revenue.

Now, with the benefit of hindsight, executives said that given a choice it would have been better to ink a flexible deal similar to the one Facebook had reached with Microsoft. That deal was based on a revenue sharing agreement rather than revenue guarantees.
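A hypothetical sketch of the trade-off described above: the $300m-a-year guarantee comes from the text, but the traffic threshold, the traffic level and the revenue-share rate below are invented purely to illustrate why a guarantee tied to traffic minimums reduced Myspace’s flexibility.

```python
# Hypothetical illustration of guaranteed-minimum vs revenue-share search deals.
# Only the $300m/year guarantee comes from the article; every other number is invented.

def guaranteed_deal(pageviews_bn, threshold_bn=40.0, guarantee_m=300.0):
    """Pay the full guarantee only if traffic stays above a contractual minimum (simplified)."""
    return guarantee_m if pageviews_bn >= threshold_bn else 0.0

def revenue_share_deal(search_revenue_m, share=0.7):
    """Partner keeps a fixed share of whatever search revenue is actually generated."""
    return search_revenue_m * share

# If redesigning the site cuts page views below the threshold, the guarantee is at risk,
# which is the flexibility problem Myspace executives described.
print(guaranteed_deal(pageviews_bn=38))          # 0.0 -> misses the contractual minimum
print(revenue_share_deal(search_revenue_m=250))  # 175.0 -> scales with actual usage
```

Under the invented numbers, the guaranteed deal pays more in good years but punishes any experiment that dents traffic, whereas the revenue-share structure simply tracks whatever the site actually earns.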

But cash-flow wasn’t the only problem. Myspace simply had not innovated much since it was bought. The site was also built on Microsoft’s .NET technology, considered by many to be a grievous handicap. Few Web 2.0 developers use that platform and it’s not well suited to a site with tens of millions of users.

“There was a tremendous platform stability problem within the site,” said an executive who worked with Myspace around the time. “The way that the technical infrastructure was set up it took literally 10 to 15 times longer to build code on the Myspace system than it would on any other technical platform.” The executive, who has since left the company, added: “They never ever understood how weak the site’s interface was and as any great product developer knows you can have the best code in the world but if your design and user experience isn’t phenomenal your site is going to fail.”

Not your friends
By 2008, Murdoch, who was spending an increasing amount of his time on integrating Dow Jones and the Wall Street Journal into News Corp, caught wind of the trouble at Myspace. He began to manage expectations. At the Web 2.0 event in San Francisco, he pulled back from his $1bn revenue target, saying the figure would be closer to $750m.

Quietly, Murdoch had started to consider his options for Myspace. In early 2008 there was a lot of uncertainty surrounding Microsoft Corp’s ill-fated attempt to take control of Yahoo Inc. Murdoch moved quickly, holding discussions with Yahoo about a possible equity-stake swap that would have valued Myspace at anywhere from $6bn to $10bn, but those talks came to nothing.

There was nonetheless some good news for Myspace in 2008: the long-overdue launch of a joint venture with the major record companies, Vivendi’s Universal Music Group, Sony Music, Warner Music Group and EMI Music. Myspace Music was a way of ensuring that Myspace remained an important destination for music discovery by fans of the labels’ bands, and of keeping up its traffic numbers. But even with the boost from the music venture, Myspace was losing its once enormous lead over Facebook. By the end of that year, Facebook had grown to 54m monthly US visitors, while Myspace had stalled at a peak of around 76m users.

It was around this time that management issues began to boil over. In early 2009 News Corp announced that Chernin, who was Murdoch’s trusted number two, would be stepping down in June to pursue other entrepreneurial opportunities. The departure was the beginning of the end for the old Myspace. Over the next year, the company would struggle through sweeping management changes, restructuring and layoffs, while losing more substantial ground to Facebook.

In March 2009, reports started circulating that Murdoch was looking at bringing in Jonathan Miller, the former AOL CEO, who had had mixed success in turning that ship around while it was still part of Time Warner. Miller was confirmed as News Corp’s chief digital officer the following month with a remit to “fix Myspace.” He immediately set about recruiting his own team to tackle the problem. But his first task was to get rid of the existing management team.

Miller time
Three weeks later De Wolfe was out, while Tom Anderson, who was every Myspace user’s first ‘friend’, was initially kept on in a consulting role. Miller then recruited the former Facebook chief operating officer Owen Van Natta as Myspace CEO. Murdoch had also reached out to Jason Hirschhorn, the former MTV executive who had unsuccessfully negotiated to buy Myspace for Viacom back in 2005. To round out the team, Miller recruited Mike Jones, who had previously sold his social communications start-up Userplane to AOL while Miller ran that company.

ComScore, the tracking firm, would soon publicly confirm what executives already knew from studying internal data: Facebook had officially overtaken Myspace in the US, Myspace’s biggest market. In May 2009, both sites had just over 70m unique visitors. Two months later, Facebook had grown to 77m visitors while Myspace had dropped to 68m.

Myspace was modestly profitable when it was bought in 2005. Soon after the acquisition, News Corp invested heavily, upgrading its technology and bringing on hundreds of new staffers. Those investments, combined with the drop-off in visitor traffic, meant that Myspace started generating significant losses.

Miller and his team quickly came round to the view that battling Facebook head-on was foolish. They began discussing ways to capitalise on Myspace’s roots as a social network for up-and-coming bands and their fans. They also started to think about how to restructure what had become a bloated Myspace of around 2,000 staffers. Jones told Miller that if the business was going to survive, it would need to shrink to a fifth of its size.

In the meantime, the ongoing fall-off in visitor traffic was making it hard for Myspace to meet the traffic thresholds behind the revenue guarantees News Corp had inked in its advertising deal with Google. Myspace had hit the targets needed to collect the $300m in the first two years of the deal, 2007 and 2008, but it rapidly became clear that it would not in 2009.

Murdoch was forced to be upfront with Wall Street on an earnings conference call that November. “We have not been making minimum guarantees, so our search revenue will not be what was advertised,” he said. News Corp’s digital revenues were down 26 percent from the previous fiscal quarter. Murdoch told analysts that Myspace revenue would fall around $100m short of its $300m target because of the drop in traffic.

At Myspace headquarters, infighting only worsened in late 2009. In early 2010, Van Natta quit as chief executive and was replaced by Hirschhorn and Jones as co-presidents. Shortly thereafter, Hirschhorn left and Jones became chief executive. Former News Corp and Myspace executives were scathing about the management changes.

“There were nine months of talking bad about their product in front of their own people,” said the third former Myspace executive. “I heard they had negative stuff up on white boards and lots of people were disheartened.”

The growing list of troubles at Myspace set off alarm bells at News Corp’s headquarters in New York. Back in June 2009, Murdoch had replaced Chernin with Chase Carey, the former DirecTV CEO and long-time News Corp executive. Carey, a straight-talking, dollars-and-cents kind of executive whose only obvious extravagance is his remarkable handlebar moustache, immediately raised doubts about Myspace, stating what many outside observers considered obvious.

Carey warned time was running out for Myspace, telling Wall Street it had “quarters not years” to get its act together. In the fall of 2010, he began to talk publicly about the possibility of a sale. After months of planning, Myspace re-launched in October 2010. The idea was to pull in users and advertisers by positioning itself as an entertainment site with Hollywood stars and pop bands.

By February 2011, four months after the re-launch, Myspace’s unique users had fallen by 30 percent to 38m, an even faster decline than many had expected. Just as troubling, the web advertising measurement firm eMarketer estimated that Myspace’s worldwide ad revenues dropped by 40 percent in 2010, to $288m, and forecast that the figure would tumble to around $156m in 2012. Over the same period, Facebook’s ad revenues were forecast to more than treble, to over $5.7bn.

Out of time
At News Corp, patience seems to have run out. To some extent, the issue for News Corp goes beyond Myspace’s fall. The world has changed dramatically since Murdoch made his bold play for the site in August 2005. News Corp still sees its future in the digital world, but executives interviewed for this piece repeatedly said the focus is now on harnessing digital distribution platforms for the company’s vast news and entertainment content.

“I think we learned a lot from Myspace but our focus in digital now is how do we take our core businesses and extend them in meaningful ways over digital platforms,” said News Corp spokeswoman Julie Henderson. The most obvious manifestation of this strategy is Murdoch’s latest digital baby, The Daily, this time built from the ground up. The Daily is a digital-only news ‘paper’ designed specifically for tablets such as Apple’s iPad. News Corp is now infusing digital capabilities throughout its existing businesses rather than looking to buy the next Facebook or Myspace.

News Corp executives generally speak of Myspace in the past tense, even though, both in public and in private, they have argued that Myspace might still have a future within the firm. That now seems increasingly unlikely: the company has appointed the boutique investment bank Allen & Co to explore options that could include a sale or a spin-off into some sort of joint venture.

“The best thing for Myspace will be a situation where it can have the time and space to pursue its destiny – and one that, frankly, removes the constant spotlight,” suggested Jonathan Miller.

The new focus on social entertainment means Myspace will attract a different kind of user and initially less traffic, say current News Corp and Myspace executives.

“We expected that in changing the strategy we were going to have a big change in the audience,” said Myspace CEO Jones. “The entertainment category is a really big category if we get the concept right – but it’s just going to take time.” Time that News Corp may not be willing to wait.

Drive my car

Hydrogen-based fuel could become a carbon-free alternative to petrol sooner rather than later, according to its developers. The technology is based on a new way of producing nano-fibres from hydrides, materials that soak up hydrogen like a sponge, and then encapsulating them in tiny plastic beads so small that they behave like a liquid.

The process is being developed by Cella Energy, a spin-off from Britain’s Rutherford Appleton Laboratory, which says the technique allows hydrogen to be released at a much faster rate and at lower temperatures than in previous attempts.

“What we’ve been doing is taking these materials and encasing them in plastic and making them into a very fine powder, and that improves their properties,” Cella Energy chief scientific officer Stephen Bennington said. “It also means you can pump it like a fluid and it’s safe. It is not going to easily burst into flames,” he said.

Hydrogen produces only water when it is burned and is considered an ideal way to cut carbon emissions from petrol and diesel vehicles, which are estimated to account for 25 percent of all carbon emissions.

But until now, attempts to store hydrogen have not been consumer-friendly, so it has not been a viable option. Cella Energy believes its technology would allow people to use the carbon-free fuel in their existing cars after a few technical modifications.

“You would pump it into the petrol tank of your car – that would go off, be heated, drive the hydrogen off, which would go and run your vehicle and then the waste little beads that we have created you store in the car. And when you go and refuel your car you have two nozzles. One which puts in the new beads and one which takes out the old beads which then go off to be recycled and have the hydrogen added to them again,” Bennington said.

The key development has been turning hydrides into fibres or beads, 30 times thinner than a human hair, through a process called electro-spinning. This produces a white, tissue-like material that can then be controlled to capture and release hydrogen.

The encapsulation process protects the hydrides from oxygen and water, prolonging their life and making them safe to handle in air. And because the material behaves like a liquid, existing fuel infrastructure should not require extensive modification.

“You can use tankers to carry the material around,” said Bennington. “You can take it to forecourts and then you can pump it into the vehicle and give the customer the same kind of experience they have now.”

All this is part of the reason Cella Energy believes its process could herald a new era of carbon-free motoring. “The experience that most people have now is using regular liquid fuels, where it takes three minutes to fill your vehicle and then you can travel 300 miles,” said Stephen Voller, Cella Energy’s CEO. “Now you can have exactly the same experience with hydrogen, but you can’t have that experience with an electric car.”

The company said hydrogen could be an economically viable alternative to fossil fuels if the gas is produced with renewable energy sources such as wind or solar. Hydrogen has three times more energy than petrol per unit of weight and could power cars, planes and other vehicles that currently run on hydrocarbons. The company said it is also attracting interest from large, established firms in the energy and transportation sectors.
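As a rough sanity check on that weight-for-weight comparison (using commonly cited lower heating values rather than figures supplied by Cella Energy), hydrogen stores roughly 120 MJ per kilogram against roughly 44 MJ per kilogram for petrol:

\[
\frac{E_{\mathrm{H_2}}}{E_{\mathrm{petrol}}} \approx \frac{120\ \mathrm{MJ/kg}}{44\ \mathrm{MJ/kg}} \approx 2.7
\]

which is broadly in line with the “three times” figure quoted above.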