Neuroeconomics revolution

Economics is at the start of a revolution that is traceable to an unexpected source: medical schools and their research facilities. Neuroscience – the science of how the brain, that physical organ inside one’s head, really works – is beginning to change the way we think about how people make decisions. These findings will inevitably change the way we think about how economies function. In short, we are at the dawn of “neuroeconomics.”

Efforts to link neuroscience to economics have occurred mostly in just the last few years, and the growth of neuroeconomics is still in its early stages. But its nascence follows a pattern: revolutions in science tend to come from completely unexpected places. A field of science can turn barren if no fundamentally new approaches to research are on the horizon. Scholars can become so trapped in their methods – in the language and assumptions of the accepted approach to their discipline – that their research becomes repetitive or trivial.

Then something exciting comes along from someone who was never involved with these methods – some new idea that attracts young scholars and a few iconoclastic old scholars, who are willing to learn a different science and its different research methods. At a certain moment in this process, a scientific revolution is born.

The neuroeconomic revolution has passed some key milestones quite recently, notably the publication last year of neuroscientist Paul Glimcher’s book Foundations of Neuroeconomic Analysis – a pointed variation on the title of Paul Samuelson’s 1947 classic work, Foundations of Economic Analysis, which helped to launch an earlier revolution in economic theory. Ironically, Glimcher himself now holds an appointment in New York University’s economics department (he also works at NYU’s Centre for Neural Science).

To most economists, however, Glimcher might as well have come from outer space. After all, his doctorate is from the University of Pennsylvania School of Medicine’s neuroscience department. Moreover, neuroeconomists like him conduct research that is well beyond their conventional colleagues’ intellectual comfort zone, for they seek to advance some of the core concepts of economics by linking them to specific brain structures.

Brain teasers
Much of modern economic and financial theory is based on the assumption that people are rational, and thus that they systematically maximise their own happiness, or as economists call it, their “utility.” When Samuelson took on the subject in his 1947 book, he did not look into the brain, but relied instead on “revealed preference”: people’s objectives are revealed only by observing their economic activities. Under Samuelson’s guidance, generations of economists have based their research not on any physical structure underlying thought and behaviour, but only on the assumption of rationality.

As a result, Glimcher is rather skeptical of prevailing economic theory, and is seeking a physical basis for it in the brain. He wants to transform “soft” utility theory into “hard” utility theory by discovering the brain mechanisms that underlie it.

In particular, Glimcher wants to identify brain structures that process key elements of utility theory when people face uncertainty: “(1) subjective value, (2) probability, (3) the product of subjective value and probability (expected subjective value), and (4) a neuro-computational mechanism that selects the element from the choice set that has the highest ‘expected subjective value’…”

While Glimcher and his colleagues have uncovered tantalising evidence, they have yet to find most of the fundamental brain structures. Maybe that is because such structures simply do not exist, and the whole utility-maximisation theory is wrong, or at least in need of fundamental revision. If so, that finding alone would shake economics to its foundations.

Risk and reward
Another direction that excites neuroscientists is how the brain deals with ambiguous situations, when probabilities are not known, and when other highly relevant information is not available. It has already been discovered that the brain regions used to deal with problems when probabilities are clear are different from those used when probabilities are unknown. This research might help us to understand how people handle uncertainty and risk in, say, financial markets at a time of crisis.

The hugely influential John Maynard Keynes thought that most economic decision-making occurs in ambiguous situations in which probabilities are not known. He concluded that much of our business cycle is driven by fluctuations in “animal spirits,” something in the mind – and not understood by economists.

Of course, the problem with economics is that there are often as many interpretations of any crisis as there are economists. An economy is a remarkably complex structure, and fathoming it depends on understanding its laws, regulations, business practices and customs, and balance sheets, among many other details.

Yet it is likely that one day we will know much more about how economies work – or fail to work – by understanding better the physical structures that underlie brain functioning. Those structures – networks of neurons that communicate with each other via axons and dendrites – underlie the familiar analogy of the brain to a computer – networks of transistors that communicate with each other via electric wires. The economy is the next analogy: a huge network of people who communicate with each other via electronic and other connections.

The brain, the computer, and the economy: all three are devices whose purpose is to solve fundamental information problems in coordinating the activities of individual units – the neurons, the transistors, or individual people. As we improve our understanding of the problems that any one of these devices solves – and how it overcomes those obstacles in doing so – we learn something far more valuable about all three.

Robert Shiller is Professor of Economics at Yale University and has been a research associate of the National Bureau of Economic Research since 1988.

(c) Project Syndicate 2012

Unbalanced footing

The outlook for the global economy in 2012 is clear, but it isn’t pretty: recession in Europe, anaemic growth at best in the US, and a sharp slowdown in China and in most emerging-market economies. Asian economies are exposed to China. Latin America is exposed to lower commodity prices (as both China and the advanced economies slow). Central and eastern Europe are exposed to the eurozone. And turmoil in the Middle East is causing serious economic risks – both there and elsewhere – as geopolitical risk remains high and the resulting high oil prices constrain global growth.

At this point, a eurozone recession is certain. While its depth and length cannot be predicted, a continued credit crunch, sovereign-debt problems, lack of competitiveness, and fiscal austerity imply a serious downturn.

The US – growing at a snail’s pace since 2010 – faces considerable downside risks from the eurozone crisis. It must also contend with significant fiscal drag, ongoing deleveraging in the household sector (amid weak job creation, stagnant incomes, and persistent downward pressure on real estate and financial wealth), rising inequality, and political gridlock.

Elsewhere among the major advanced economies, the UK is double dipping, as front-loaded fiscal consolidation and eurozone exposure undermine growth. In Japan, the post-earthquake recovery will fizzle out as weak governments fail to implement structural reforms.

Meanwhile, flaws in China’s growth model are becoming obvious. Falling property prices are starting a chain reaction that will have a negative effect on developers, investment, and government revenue. The construction boom is starting to stall, just as net exports have become a drag on growth, owing to weakening US and especially eurozone demand.

Having sought to cool the property market by reining in runaway prices, Chinese leaders will be hard pressed to restart growth. They are not alone. On the policy side, the US, Europe and Japan, too, have been postponing the serious economic, fiscal, and financial reforms that are needed to restore sustainable and balanced growth.

Private and public-sector deleveraging in advanced economies has barely begun, with balance sheets of households, banks and financial institutions, and local and central governments still strained. Only the high-grade corporate sector has improved. But, with so many persistent tail risks and global uncertainties weighing on final demand, and with excess capacity remaining stubbornly high owing to past over-investment in real estate in many countries coupled with China’s surge in manufacturing investment in recent years, these companies’ capital spending and hiring have remained muted.

Rising inequality – owing partly to job-slashing corporate restructuring – is reducing aggregate demand further, because households, poorer individuals and labour-income earners have a higher marginal propensity to spend than corporations, richer households and capital-income earners. Moreover, as inequality fuels popular protest around the world, social and political instability could pose an additional risk to economic performance.

Last chance
At the same time, key current-account imbalances – between the US and China (and other emerging-market economies), and within the eurozone between the core and the periphery – remain large. Orderly adjustment requires lower domestic demand in over-spending countries with large current-account deficits and lower trade surpluses in over-saving countries via nominal and real currency appreciation. To maintain growth, over-spending countries need nominal and real depreciation to improve trade balances, while surplus countries need to boost domestic demand, especially consumption.

But this adjustment of relative prices via currency movements is stalled, because surplus countries are resisting exchange-rate appreciation in favour of imposing recessionary deflation on deficit countries. The ensuing currency battles are being fought on several fronts: foreign-exchange intervention, quantitative easing and capital controls on inflows.

Furthermore, with global growth weakening further in 2012, those battles could escalate into trade wars. Finally, policymakers are running out of options. Currency devaluation is a zero-sum game, because not all countries can depreciate and improve net exports at the same time. Monetary policy will be eased as inflation becomes a non-issue in advanced economies (and a lesser issue in emerging markets). But monetary policy is increasingly ineffective in advanced economies, where the problems stem from insolvency – and thus creditworthiness – rather than illiquidity.

Meanwhile, fiscal policy is constrained by the rise of deficits and debts, bond vigilantes, and new fiscal rules in Europe. Backstopping and bailing out financial institutions is politically unpopular, while near-insolvent governments don’t have the money to do so.

Politically, the promise of the G-20 has given way to the reality of the G-0: weak governments find it hugely difficult to implement international policy coordination, as the worldviews, goals and interests of advanced economies and emerging markets come into conflict.

As a result, dealing with stock imbalances – the large debts of households, financial institutions and governments – by papering over solvency problems with financing and liquidity may eventually give way to painful and possibly disorderly restructurings. Likewise, addressing weak competitiveness and current-account imbalances requires currency adjustments that may eventually lead some members to exit the eurozone.

Restoring robust growth is difficult enough without the ever-present spectre of deleveraging and a severe shortage of policy ammunition. But that is the challenge that a fragile and unbalanced global economy faces in 2012. To paraphrase Bette Davis in All About Eve, “Fasten your seatbelts, it’s going to be a bumpy year!”

(c) Project Syndicate 2012

Nouriel Roubini is chairman of Roubini Global Economics and Professor at the Stern School of Business, New York University. His detailed 2012 global growth outlook is available at www.roubini.com

Ofcom makes bid to meet mobile broadband demand

The popularity of smartphones and tablets has increased dramatically in the past few years. The UK’s appetite for the technology is firmly on the up and, according to Ofcom, mobile data usage could grow 800-fold between 2012 and 2030, meaning that mobile networks will be under considerable strain in the future.

To keep up with these developments, Ofcom is taking urgent action to free up much more spectrum and has urged the industry to assist it in its quest. Launched in March, the Ofcom-led consultation exercise will run until June this year. The aim of the initiative is to devise a long-term strategy to meet ever-growing mobile broadband demand, and the measures established will affect use of radio spectrum after 2018.

One way to ease the burden on existing networks, meanwhile, would be for operators to offload some data traffic to wifi and so-called femtocells – miniature base stations.

Another alternative would be to turn to newer mobile technologies such as LTE, an innovation known to make more efficient use of radio spectrum. While these solutions will certainly help, more drastic measures may be necessary, and Ofcom is toying with the idea of freeing a chunk of the UHF spectrum, the 700MHz band, currently used for digital TV signals.

A smarter earth – smart grids and the implementation of smart grid technology

The global demand for energy is growing at an unsustainable rate, throwing up challenges that this generation must deal with in order to preserve the planet for future generations, so that they can enjoy the quality of life they are entitled to.

In large part, this is about addressing the spiralling carbon footprint of modern industries and lifestyles, but it is also about ensuring there is energy available to power those industries and lifestyles in the future. This may sound an over-dramatisation to some, but there is a very real risk that, as populations rapidly urbanise and expand, the supply of energy will in the near future no longer keep pace with demand.

Beyond this pressing reality, there is a further fundamental truth: fossil fuels are financially and environmentally unsustainable. They are a finite resource in a world where needs are infinite. At the current rate of consumption, their cost is only going to increase as their supply declines. Furthermore, vulnerabilities within energy supply chains and volatility within energy markets are becoming ever more apparent. Rapid rises in the price of oil, for example, have had a destabilising effect on sectors such as air travel and freight, manufacturing, logistics, retail and transport.

Rule breakers
The motivation to effect change, however, is not purely about practicality or sound global citizenship. There is a regulatory compulsion too. The European Council’s ‘Low Carbon 2050 Strategy’ requires an 80-95 percent cut in greenhouse gas emissions below 1990 levels by 2050. This requires wholesale change and smarter thinking about the supply and demand of energy. Additionally, it requires investment in smart energy technologies, clean and renewable energy sources, and innovations such as electric vehicles and the infrastructure required to power them.

Substantial progress is already being made with the integration of clean energy sources – principally wind and solar – into the power grid, by both energy companies and individuals. In fact, the UN Intergovernmental Panel on Climate Change (IPCC) states that 50 percent of the electricity-generating capacity installed globally between 2008 and 2009 came from renewable sources. The same body also believes that more than 75 percent of the world’s energy demand could be met by renewable sources by 2050.

The Oracle Utilities ‘Future of Energy’ report, produced in association with Future Laboratory, highlights claims from renewable energy consultancy Ecofys, which believes that meeting 100 percent of energy demand from renewable sources by 2050 would cut energy demand by 15 percent over the next four decades and save nearly €4trn. It’s a strong argument for addressing the current situation.

Furthermore, The European Renewable Energy Council predicts that the number of people employed in the renewable energy sector in Europe could increase from 500,000 today to over six million in 2050, though this will naturally be offset by transition within large existing utilities providers, as the move from fossil fuels to more renewable sources takes place within their own businesses.

Getting smart with data
Some of the greatest efficiencies in the management of the supply and demand of energy will come from the smarter use of technology – principally the building out of smart grid and smart meter infrastructure. To enable the required energy transformation, the grid needs to be capable of converging data, energy and information communication technology (ICT), which will allow utilities to increase their operational efficiency. The guiding principles of interoperability, openness, scalability, security and distributed intelligence will ensure that the grid is information-rich and able to integrate renewable sources. Coupled with this, the smart grid will also provide the foundation for energy and business innovations for more sustainable energy usage.

Often inefficiency within the grid is at the heart of wastage and overproduction. Better management of peaks in demand will enable savings even without a change in the level of consumption. However, smart meters will also allow consumers to better manage their personal energy consumption throughout the day, taking advantage of incentivised and feed-in tariffs.

Furthermore, the in-home displays which accompany smart meters will encourage consumers to think more carefully about consumption if they can see what they are actually using. This will give consumers a clearer understanding of their energy consumption; the ability to see costs rising, especially when they think their home is ‘at rest’, will motivate many to turn off devices and lights and power down heating. Similarly, a greater understanding of smart technologies will help reduce energy consumption, while also educating consumers and easing fears surrounding security in the smart grid. The protection of ICT platforms and the integrity of all active smart grid components are prerequisites of the platform and will encourage consumer adoption of the technology. Education and realisation are key to a successful implementation. The Organisation for Economic Co-operation and Development (OECD) predicts that improved consumer awareness of energy use throughout the day can reduce domestic consumption by up to 20 percent. In addition, governments need to help push awareness and the move towards smart metering systems.

Pushing forward
If implemented correctly, the smart grid and smart metering will transform the current energy landscape and play a key role in achieving the required savings and carbon reductions. The rise of the smart grid also encourages the development of new business models, creating renewed competition across the utilities sector with fast-growing challengers in the sustainable and smart energy space.

But currently, the momentum behind the move to smart technology is not uniform around the world, or even across regions such as EMEA. In some countries, smart metering is yet to become a political or social issue.

Undoubtedly this is hindering the rollout, and governments must meet organisations such as the EC halfway by implementing and reinforcing their own recommendations for improving efficiency and educating homeowners about the options on offer. So-called “nudge” policies will help accelerate the rollout of the smart grid and must be considered, not least because the need for this move will only increase over time. Firm policies today are preferable to dramatic interventions later.

By implementing a smart grid infrastructure capable of delivering improved energy efficiency and demand handling, as well as effectively integrating renewable energy sources, we will go a long way towards reducing carbon levels. ICT is integral to this: without it, the transformation of the energy infrastructure won’t be possible. ICT will enable the energy grid to become more active and transparent, and allow the convergence of communications that will make the new energy supply chain an efficient reality. The smart grid network in turn will also support additional technologies and services which lower carbon levels and costs. To achieve this by 2050, everyone must work together to drive awareness, consumer education and adoption of new technologies. As businesses and consumers, it falls to us to take responsibility and call for change.

Bastian Fischer, VP of Industry Strategy, Oracle Utilities Web: www.oracle.com/us/industries/utilities

Are we there yet? The future of transportation

For over a century, science fiction has been fascinated with the possibilities of revolutionising travel – boldly going where no man has gone before, via new, undreamt-of technologies.

Some regions are already trying to use renewable energies and intelligent transport solutions as part of their sustainability programmes, with technologies that seem to be from the imagination of film director Fritz Lang. For example, Abu Dhabi is building Masdar City, a hi-tech, clean energy business environment, which uses electric vehicles and ‘Personal Rapid Transit’ vehicles (driverless automated vehicles) to ferry people around and to keep the carbon footprint as low as possible.

This year, space technology company SpaceX’s Dragon spacecraft is launching into space using ruggedised solar arrays to power equipment during its inaugural flight to the International Space Station (ISS) – the first American commercial transport vehicle to do so.

The Dragon is a free-flying, reusable spacecraft, with some of its equipment, including heating and cooling systems, powered by a 5kW solar panel array that unfolds from protective fairings after launch. Its hold is made up of a pressurised capsule and unpressurised trunk capable of transporting pressurised cargo, unpressurised cargo, and/or crew members. NASA has agreed to allow SpaceX to send its Dragon spacecraft to rendezvous with the ISS in a single flight as part of its Commercial Orbital Transportation Services (COTS) programme, a public/private partnership designed to cut the cost of venturing into space and to fill the gap left by NASA’s recently axed Shuttle programme. The launch will test whether NASA’s COTS scheme can compete effectively in an increasingly globalised space race.

And space is not the final frontier for transport possibilities. Last July, Bertrand Piccard and André Borschberg’s Solar Impulse HB-SIA aircraft returned to Payerne in Switzerland from Paris-Le Bourget after a 12-hour, 31-minute flight powered by solar energy alone. It flew 426km at an average speed of 40km/h. And in January of this year, the world’s largest solar-powered ship, the MS Tûranor PlanetSolar (whose name means “power of the sun” in J.R.R. Tolkien’s mythology), docked in Abu Dhabi as part of this year’s ‘World Future Energy Summit’, completing its 48,000km round-the-world journey that began last September. The UN estimates that the shipping industry alone is responsible for 4.5 percent of the world’s greenhouse gas emissions.

Drive my car
However, it is an attempt to revolutionise the car that is attracting the most investment and attention – and justifiably so, given that it is the most popular (as well as the most environmentally destructive) vehicle on the planet. And some of the innovations in the pipeline are simply staggering – and not far off implementation.

For example, in December last year Google was awarded a US patent for self-driving cars. The intellectual property rights relate to a method of switching a vehicle from a human-controlled mode into a state where it takes charge of the wheel itself. The patent explains how the car would know when to take full control, where it is located and in which direction to drive.

Google suggests that the technology could be used to offer tours of popular tourist locations or to send faulty models to repair shops. The application, ‘Transitioning a Mixed-mode Vehicle to Autonomous Mode’, was filed in May but remained hidden from public view until December. The patent describes using two sets of sensors: the first identifies a “landing strip” when the vehicle stops; this then triggers the second set, which receives data informing the machine where it is positioned and where it should go.

The internet giant says that the landing strip could simply be a mark on the ground, a sign on a wall, or lines or arrows showing where the vehicle should be parked. To detect which landing strip it has been parked at, the document says the car could activate a GPS (global positioning system) receiver to find its rough location and then use its sensors to detect trees, foliage or other known landmarks to determine its exact position. Alternatively, the filing says the car could read a QR code – the popular two-dimensional square barcode – which would have details about the landing strip’s location.

Telling the car precisely where it has been parked could be crucial to ensuring it knows where to go. The patent explains that GPS receivers are sometimes only accurate to about 30ft (9.1m). However, if the vehicle can monitor its path and knows where it started from, it can simply be told to drive set distances from that point, adjusting its direction at the appropriate places.

The patent describes how data provided at the landing strip could also tell the vehicle to look up an internet address which would let it know if it needed to drive itself to a repair shop, or simply move to another parking bay to ensure a hire company had its cars spread evenly across its various pick-up spots. It says the landing strip could also provide information about how long the vehicle should pause before driving off.

However, Google’s patent will not prevent others developing rival self-drive vehicles. The patent, which is effective in the US only, would only be enforceable to prevent other companies from using the same specific method and not to prevent other companies also providing autonomous vehicles in general. On the plus side, Google could charge rival developers a large fee for a licence.

Although the technology described may sound fanciful, Google has been testing a fleet of driverless cars for several years. The vehicles combine artificial intelligence with the firm’s Google Street View maps as well as video cameras and a dazzling array of sensors.

And experts say Google’s “passion project” could end up creating a valuable revenue stream. The firm has adapted a fleet of Toyota Prius and Audi TT models which have driven 160,000 miles – including travelling on Highway 1 between Los Angeles and San Francisco – with limited human input, and more than 1,000 miles without driver involvement. The firm has also successfully lobbied the state of Nevada to pass a law requiring its Department of Transportation to create rules and regulations for autonomous vehicles on its highways. However, there has been no word so far on what these cars might cost the public.

Engineers behind the project say that robots can react more quickly than humans, meaning the number of road accidents could be reduced. It might also mean more cars could be on the road at the same time, driving closer together and thus increasing road capacity.

Experts say driverless cars could become a commercial prospect sooner than most people believe. “Google believe it is a technology that is here and now and will start appearing in motorcars in the near future,” says Professor Alan Woodward from the department of computing at the University of Surrey. “We already have systems that park your cars for you and automatically brake – the next obvious step is to have cars take over the routine driving. Google has funded a lot of this work at universities. Not surprisingly, if they think it is going to be big they want to patent it,” he says.

Google is not the only organisation looking at developing intelligent cars. Last October, scientists at Oxford University said that they had developed a new car that can “see” the world around it. The modified Wildcat can interpret data from technology such as cameras, radar and lasers to drive itself. The Oxford car differs from Google’s by having fewer sensors and relying more heavily on an on-board three-dimensional map of streets. The basic map could potentially be maintained by local councils or highway authorities and updated by vehicles.

The project has been given £1.4m by the Engineering and Physical Sciences Research Council and is a collaboration between the university, BAE Systems and Nissan. The developers hope that the technology will eventually improve traffic safety and cut congestion. The project leader, Professor Paul Newman, said the car could drive without human intervention by being acutely aware of its surroundings. The sensors on this autonomously driven vehicle can pinpoint its exact location and enable it to respond to its environment more safely.

Prof Newman is convinced that on-board computer capacity will have an enormous impact on motoring in the future and believes that car companies will engage in an “arms race” to achieve the greatest number of minutes of autonomous driving per vehicle. “You can imagine one company advertising a model of car which, on average, drives itself for 10 minutes a day and then another manufacturer will come out with one that does 15 minutes,” he says.

The team say that computer-assisted vehicles will not get distracted or tired, and can connect remotely to the internet to communicate with other cars. The Department for Transport estimates that the cost of congestion will rise to £23bn-£24bn a year by 2025, so connected vehicles like this prototype could help alleviate some of those potential costs by avoiding jams and giving the driver time off to do other tasks.

“We need cars that do the thinking and concentrating for you, cars that do not insist you do the driving all the time,” says Prof Newman. “If the going is slow why can’t I watch the show I missed last night, Skype with the kids, read a book or send that last email and elect the car to handle the drudgery of the trip for me?”

Fuelling tomorrow
Developments in intelligent and eco-friendly transport are by no means limited to the developed world, internet giants and prestigious universities. India has also been busy developing fuel-efficient cars. At the beginning of January, Mahindra & Mahindra, the country’s leading utility vehicle manufacturer, unveiled its three-wheeler HyAlfa at the Delhi Auto Expo. The new vehicle positions the Mahindra Group as the world’s first manufacturer to produce a three-wheeler powered by hydrogen. It is a joint effort with the United Nations Industrial Development Organisation (UNIDO), the International Centre for Hydrogen Energy Technologies (ICHET) and India’s Ministry of New and Renewable Energy, with extended support from the Indian Institute of Technology in Delhi.

Mahindra & Mahindra’s general manager for research and development, Dr Mathew Abraham, says that “the HyAlfa is the first vehicle of its kind in the world. It runs on nothing but compressed hydrogen gas and is incredibly engineered to run with absolutely zero emissions, which makes it a pleasure to drive on congested city roads. Hydrogen is, in fact, the technology and fuel of tomorrow and is the long-term solution to pollution, energy security & CO2 emission related concerns.”

The HyAlfa will hit the market in both passenger and cargo variants and is capable of a top speed of around 65km/h on city roads. This cutting-edge technology is aimed at providing eco-friendly mobility solutions. However, the vehicle has not yet entered commercial production, as it has yet to prove commercially viable. Crucially, this is the key problem – while companies are clearly able to produce such efficient technologies, they have so far been unable to market them successfully to the public. Consequently, with sales relatively low for now, the price of these vehicles remains prohibitively high, out of the reach of many people who might be interested in buying one.

Who’s in charge?  
The result of this can be clearly seen in the US, which has the highest tally of gas-guzzling vehicles in the world, largely because petrol prices there remain lower than elsewhere in the world. The White House had hopes of putting 1.5m battery cars on the road by 2015, but 2011 was not a good indication of what Americans think of electric vehicles.

Hybrids, plug-ins and pure battery-electric vehicles (BEVs) accounted for little more than two percent of the total US automotive market last year, and if conventional gas-electric models, such as the Toyota Prius and Ford Fusion Hybrid, are removed from the equation, more advanced battery vehicles generated barely 20,000 sales. The figures certainly don’t look good for the industry: “I’d say they failed,” says Joe Phillippi, chief analyst with consulting firm AutoTrends.

While there are a variety of products already on the market, most come from niche manufacturers that barely register on the sales charts. Last year brought the introduction of two closely watched mainstream models: the Chevrolet Volt plug-in hybrid and the Nissan Leaf BEV. General Motors had declared a goal of selling 10,000 Volts but looked likely to end the year at around 7,000. The good news for the maker was that momentum had been building during the final quarter – but then Chevy found itself in the uncomfortable position of having to explain why several Volts caught fire weeks after being subjected to federal crash tests.

Nissan came closer to its 2011 target of 10,000, but still fell short of what analysts believe was a modest target. They point to the fact that Leaf sales have been slipping for several months – a worrying sign as Nissan prepares to open a new facility in Smyrna, Tennessee, capable of churning out 200,000 battery cars annually.

The figures for 2011 could be low because there was still a relatively limited range of choices available to customers last year. However, that will not be an excuse in 2012, as US markets prepare to welcome more models. The options coming to an already congested automotive market are broad, ranging from the Fisker Karma, a plug-in sports car, to the Ford Focus Electric. There are electric minicars, like the new Mitsubishi i, and big SUVs, including the Jeep Grand Cherokee and Mercedes-Benz ML350 that Ohio-based Amp Electric is converting to run on battery power. There will also be mainstream offerings, such as the Toyota RAV-4 Electric, and products from aggressive start-ups, such as Tesla’s new Model S sedan.

The Model S will aim to overcome one of the biggest drawbacks of battery power, offering buyers a choice of a base model with a range of 160 miles, or optional 230- and 300-mile battery packs, the latter adding $20,000 to the price tag. Most other pure BEVs are limited to less than 100 miles per charge. Even then, these are not cheap; battery vehicles are saddled with price tags that are thousands – in some cases, tens of thousands – of dollars more than comparable gasoline vehicles.

There is no doubt that the possibilities for intelligent transport, greater fuel-efficiency and cleaner technologies are either in the pipeline or ready for the market. The problem is that the costs are still restrictive and that the products are largely confined to city driving, and still to those cities that have the infrastructure – and finance – to service them properly.

One way forward may be to follow the example of Abu Dhabi, with public sector investment leading the way to create a firm market and customer base. Until then, it looks unlikely that the world’s biggest carbon polluters – such as the US and India – are going to achieve their hopes of weaning the public onto more fuel-efficient and green cars.

Print and prosper – 3D printing takes over

To the unenlightened, 3D printing may seem like a novelty, but before long the technology might be no more remarkable than conventional printing at home. Science fiction-style labs are springing up, filled with highly skilled engineers trying to harness 3D printing. The technique itself isn’t particularly new, having been used for over three decades under the term ‘rapid prototyping’. Yet what is remarkable is how much the manufacturing process has developed since the turn of the 21st century. Since 2003, sales of printers have increased significantly while prices have plummeted. Currently, a printer for personal use can be picked up for £500, while commercial variants are typically priced between £1,000 and £2,000.

Domestically, these portable factories invite the user to launch their own manufacturing hub, making anything from mobile phone cases through to toys and Christmas decorations. The type of item that has come to symbolise the movement on a domestic level is the humble clothes peg. The work that now goes into producing this ubiquitous household item spans an entire logistical chain; producing a cluster of clothes pegs at home as and when the need arises cuts out unnecessary production and transportation costs. Thus, the clothes peg owner’s carbon footprint will be reduced significantly. A further green benefit of domestic 3D printing is that it can be used to run repairs, which goes hand-in-hand with the recession-apt trend of “make do and mend”.

At the forefront
Areas associated with the 3D printing phenomenon are diverse in kind and extend well beyond conventional bathroom hooks and clothes pegs. Medicine is an industry increasingly reliant on additive manufacturing techniques, the more descriptive term for what is now referred to as 3D printing. The technology can be applied to a range of medical facets, one of the most important being the production of surgical guides. Using MRI scans of the patient’s organs or bones, 3D models are created on which surgeons can experiment and virtually plan the operation ahead in detail, giving them an accurate idea of the depth and position of the incision. Surgical guides are used extensively in the fields of dentistry and orthopaedics, and the solution saves precious time at over-subscribed surgeries. Another medical segment in which 3D printing has become indispensable is generic and custom implants, such as hearing aids and hip replacements.

To keep up with growing demand in the field, and to fight off mounting competition, 3D printing firms are hard at work developing ever-more sophisticated products to meet the ongoing requirements of hospitals and laboratories. Prominent company Objet recently brought out a new printing material, MED610, which combines bio-compatibility with high dimensional stability and clear transparency, making it suitable for a wide range of medical and dental applications.

These precisely conceived characteristics assist in creating highly accurate, customised surgical guides. “Objet invests significantly in R&D in order to proactively meet the requirements of our customers. The advanced mechanical properties of the new Bio-Compatible material, including its clear transparency bring benefits to the entire medical and dental workflow – from surgical planning through to the procedure itself,” explained Avi Cohen, head of Medical Solutions at Objet, when the product hit the market during the latter part of 2011.

Designing your life
Far from being confined to practical undertakings, 3D printing can be used creatively as well, and the multi-tasking technology has cultivated a new breed of designers employing the processing method with gusto. For example, Bathsheba is a US-based label carving a niche for itself with futuristic-looking mini sculptures and pendants printed in metal.

Showcasing the innovative prowess of the technology to the world of design, and demonstrating its boundless potential, 2011’s London Design Festival saw a project launch at the V&A Museum dedicated exclusively to 3D printing. Curated by New York-based gallerist Murray Moss and Materialise, the maverick Belgian additive manufacturing firm, the exhibition presented a string of 3D works inspired by actual pieces from within the museum’s galleries.

Works that made the cut included the newly acquired Fractal.MGX table by Platform Studio and Mathias Bar. Conceived from a single piece of resin, the creation mirrors growth patterns occurring in nature. Significantly, the piece would be impossible to produce without 3D printing – a fact that no doubt impressed the professional design contingent. Another piece that formed part of the showcase was a 3D reproduction of a bust of Lady Belhaven, which dates back to 1827. The creative force behind the piece is notable milliner Stephen Jones, who, fittingly, adorned the fine Lady’s head with a quirky hat.

It might be a while before 3D printing takes to the catwalk with aplomb, but designers have already started experimenting in the hope that the technology can assist in putting collections together. Skimpy garments are the most practical candidates, and swimwear lends itself perfectly to 3D printing, as pieces can be created in one sweep thanks to the limited amount of material required. Jumping on the 3D bandwagon, Continuum Fashion – which describes itself as “part fashion label, part experimental lab” – teamed up with 3D print service Shapeways last year to create a piece of swimwear, the so-called ‘N12’ bikini. The printed garment emerged as a single piece, with no additional assembly required, and the material used was nylon 12, a solid plastic durable enough to be printed at a mere 0.7mm.

Furthermore, the multi-tasking material offers waterproof properties and can be made into springs able to bend any which way. Combined, these qualities make for quite a handy material. “The bikini’s design fundamentally reflects the beautiful intricacy possible with 3D printing, as well as the technical challenges of creating a flexible surface out of the solid nylon,” says Mary Huang of Continuum Fashion. “Thousands of circular plates are connected by thin springs, creating a wholly new material that holds its form as well as being flexible. The layout of the circle pattern was achieved through custom-written code that lays out the circles according to the curvature of the surface. In this method, the aesthetic design is completely derived from the structural design.”

Each N12 bikini is printed to order and is available in an array of sizes. The concept’s potential within the fashion industry is huge; however, it may need a few more tweaks before it reaches the high street.

Carbon copies  
Offering huge business potential, the 3D movement has given rise to a new industry in itself, and many companies have sprung up in the past few years to carve out a position for themselves before the market becomes saturated.

One of the most established companies within 3D circles is RepRap, short for ‘Replicating Rapid Prototyper’. The company was founded in the UK in 2005 by Dr Adrian Bowyer, a senior lecturer in mechanical engineering at the University of Bath. While the device allows for domestic generation of products, the ultimate goal of the project is to create a self-replicating device, which will ensure that the advance of 3D printing takes care of its own evolution and distribution. Already, the sophisticated machine is able to replicate a string of its own components.

MakerBot Industries is another notable firm offering accessible printers and associated products. One of its printers is called MakerBot Cupcake CNC. “I am an open, hackable robot for making nearly anything,” reads the company’s introduction to the device. In addition to printers, some other companies – Kraftwurx, Shapeways and Ponoko – offer online 3D printing services to consumers and industry folks alike. The service invites individuals and businesses to upload their own 3D designs. Once constructed, the products will then be shipped to the client.

Saviour of the world?
As 3D technology progresses, it presents a vast array of possibilities. Food can be replicated, as demonstrated by a team of researchers at Cornell University’s Computational Synthesis Lab. In 2010, the team made headlines with a 3D food printer that formed part of the so-called Fab@home project. “FabApps would allow you to tweak your food’s taste, texture and other properties,” team leader Dr Jeffrey Ian Lipton told the BBC while the device churned out what looked like liver pate canapés. Many scientists believe that if direct food replication is pursued, this form of 3D printing could help ease the burden on the world’s poorest and neediest.

In what would be just as much of a breakthrough, it is believed that human organs might soon be created via 3D printing too. Remarkably, US researchers at Cornell University announced last year that they had managed to engineer an ear made of silicone using a 3D contraption. As ludicrous as it might sound to sceptics, hopes are high that they’ll be able to mastermind fully functioning human body parts in the future – an advancement that would solve the pressing problem of organ shortage plaguing the medical arena. Making that possibility more viable is another set of scientists – notably a husband-and-wife team at Washington State University – who have developed 3D printing techniques with the ability to create human organs, complete with blood vessels ready to be connected to the human recipient.

The creation of synthetic bones is also widely reported. Susmita Bose, who has acquired the figurative but somewhat derogatory nickname “the bone printer”, has produced artificial bones since the late 1990s. When the innovation was originally brought to the attention of the media, she was the subject of ridicule, but at the end of last year she received praise for having managed to grow actual bone around artificial scaffolds.

“We have tested it in small animal models and we have seen that bone grows over them very well,” Bose told The Atlantic Wire in December 2011. “We have also tested them with human bone cells and we’ve seen that bone will grow over them very well.” In an interview recently published by the BBC, Bose went on to paint an ever more promising picture for the future: “The way I envision it is that 10 to 20 years down the line, physicians and surgeons should be able to use these bone scaffolds along with some bone growth factors, whether it is for jaw bone fixation or spinal fusion fixation.” If Bose’s predictions become reality, a second industrial revolution is nigh.

Digital music reigns

For the first time ever, in 2011 sales of digital music overtook CDs and other physical formats by volume in the US market. According to figures released by Nielsen SoundScan in conjunction with Billboard magazine, digital music accounted for 50.3 percent of all music purchased last year. The US is well ahead of the UK and the rest of Europe in the digital music stakes: as many as one in three albums sold in the country is a download, and sales increased by 8.4 percent compared with 2010.

Meanwhile, in the UK, digital music accounts for less than a quarter of total music sales, although downloads of individual tracks far exceed purchases of CD singles.

Tablet users taking over

The monstrous success of the iPad can’t have escaped anyone who keeps an eye on all things tech, and in 2012 the popularity of the tablet device – iPad or otherwise – is set to surge even further. According to the results of a poll carried out by Sky, one in twelve people received a tablet for Christmas, while it also emerged that households are kitted out with an average of six ‘new-wave’ contraptions, including laptops, mobile phones and televisions.

The study also revealed that tablet users preferred the device over conventional computers – 45 percent of people intending to buy a tablet this year believed they would use their standard computers far less as a result. Indeed, two thirds of the individuals taking part in the study admitted that the main reason for buying a tablet was to use the device as a computer.

Robot farm to cover tsunami waste land

Flexing its tech muscles to the world, the Japanese government has announced a plan to launch a robot farm on land flattened by the tsunami that struck the country to such devastating effect in March 2011.

The proposed farm is set to grow fresh produce including rice and beans and will be exclusively operated by unmanned tractors, while robots will get down to subsequent duties – such as picking and boxing the goods – once the crops have been harvested.

The former disaster zone-turned-farm will span 600 acres, and it looks likely to occupy an expanse of farmland in Miyagi prefecture, in the northeast of Japan. The project is the brainchild of the Ministry of Agriculture, and the Japanese government will attempt to boost the innovative venture by bringing native technology heavyweights on board, including Hitachi, Panasonic and Fujitsu.

Electronic wonders exhibited at CES 2012

The Las Vegas-based Consumer Electronics Show (CES) never fails to inspire and amuse. The 2012 edition of the show, which kicked off in January this year, saw around 20,000 new innovations launched, some more practical than others. While the motorised shoes SpnKiX might not help the ‘globesity’ crisis much, they certainly have some benefits, as parking will become refreshingly redundant.

Another much-talked-about innovation introduced at the show was Intel’s gesture-controlled ultrabooks, featuring revolutionary assets such as built-in tilt sensors and touch-free credit card readers.

The rise of the tablet continues apace, and French firm Qooq presented a version intended to function as a cook’s humble assistant. To cut it in the potentially hazardous area that is the kitchen, the Linux-based device has been equipped with little feet and is entirely splash-proof. In addition, it can withstand heat of up to 60°C (140°F). Acer and Toshiba also released new units.

Electronics giant Samsung showcased its new line of Smart TVs. Fully web-enabled, they have social networking programs such as Facebook and Twitter fully integrated, while also allowing wireless streaming of movies and television from an assortment of providers.

Huge demand for UK computer chip

A computer designed by a British firm is struggling to keep up with a sudden surge in demand. Orders for the Raspberry Pi, priced at £22, are running at over 700 a second, with no end in sight according to its primary distributor in Britain. The device has been heralded as a successor to the early home-coding machines of the 1980s, such as the Sinclair Spectrum and the Commodore 64.

Besides the roaring success the Pi has enjoyed in the UK, the jolt in sales can seemingly be attributed to strong interest from nations in the Middle East, with some countries keen to distribute a device to every schoolchild.

Consisting of just a credit card-sized rudimentary circuit board that can be plugged into a TV, monitor or keyboard, the Pi functions like a normal desktop PC. It can run spreadsheets and word-processing programs, and can even be used to develop basic games.

Recent improvements to the circuit board mean that it can now play high-definition video and connect to wifi hotspots. The company, which is a charity, is aiming to integrate the device into schools worldwide so that children can learn programming.

Overtime limited to reform work

In many offices, official working hours can be hugely frustrating, particularly since unpaid overtime is now usually expected and work emails from bosses are known to find their way to employees’ smartphones at all hours. However, change is now afoot, as a group of employees working at car giant Volkswagen has decided to take a stand.

In an attempt to put a halt to work outside factory hours, the company’s works council teamed up with the trade union to strike an agreement with Volkswagen that will see the email function of company-issued Blackberries disabled outside office hours. The affected employees will now receive emails only from 30 minutes before their working day starts until 30 minutes after it has finished. Experts believe that the knock-on effects could improve workers’ efficiency.