Buildings with bigger brains

The developed world is in the first phase of a major transformation in how buildings are designed, constructed and operated. The reason for the change is obvious: energy and sustainability concerns have finally pushed design professionals, engineers, architects and building owners to make creative leaps in building design. The repeated, reused and legacy designs – the business staple of some designers – are history. While energy concerns may be the impetus for the transformation, technology is its primary enabler. The technical advances in building materials, the use of three-dimensional modelling and design, and new power generation systems at building sites are notable. However, the innovations and evolution in building systems are especially impressive, given that it is these systems that will provide the tools to properly manage building performance. Not only are these systems managing and monitoring energy usage, they also have a significant role in how the building is operated and in occupants’ satisfaction with their experience of the building. Given that about 75 percent of the lifecycle cost of a building is incurred in the operations phase, these systems affect not only energy use but also ongoing operational costs and ultimately the value of the building.

Transformative periods in the building industry occurred several times in the twentieth century with the introduction of mechanisms and devices such as plumbing, construction cranes and elevators. Thirty years ago, just prior to the mass introduction of personal computers for businesses, the amount of technology in a building was meagre. It consisted of the local regulated public telecommunications utility installing services in a building, a mechanical contractor installing a pneumatic control system for the heating, cooling and ventilation system, a fire alarm system and maybe a dedicated word processing system. While we’ve come a long way since those days, we’re still at a very early stage of fully deploying and integrating sophisticated building technology systems.
 
In due course, buildings will become full of technology: walls and ceilings will be embedded with sensors; every aspect of a building’s performance and use will be metered and measured; software tools will automatically optimise building systems without human intervention; occupants and building management will receive real-time information on the building relevant to their particular needs; buildings will be fully interactive with the power grid; cars will be efficiently parked via conveyors; and geo-spatial location systems will be deployed for every building asset. One effect of energy concerns is that new and existing buildings are installing more sensors and meters, measuring more of the attributes – such as weather and sunlight – that affect a building’s performance. We now have building systems and devices to sense occupancy, sunlight, weather, motion, CO2, temperature, water flow, power consumption, humidity, air flow, signals from the utility grid, door openings and closings, window breakage, the location of the sun and more. Given the increase in data, more software analytical tools have entered the marketplace to help building managers sort through and analyse the granular data and transform it into actionable information.
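To make that concrete, here is a minimal sketch of one such analytical step: comparing each new 15-minute power reading against a trailing one-day baseline and flagging outliers for the building manager. The window size, threshold and data are illustrative assumptions, not any particular vendor’s method.

```python
# A minimal sketch of turning granular meter data into actionable
# information. All names, thresholds and data are illustrative.
from statistics import mean, stdev

def flag_anomalies(readings_kw, window=96, z_threshold=3.0):
    """Flag 15-minute power readings that deviate sharply from the
    trailing one-day (96-interval) baseline."""
    alerts = []
    for i in range(window, len(readings_kw)):
        baseline = readings_kw[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings_kw[i] - mu) > z_threshold * sigma:
            alerts.append((i, readings_kw[i]))
    return alerts

# Example: a sudden load spike stands out against a steady baseline.
profile = [40.0, 41.0] * 48 + [40.8, 120.0]
print(flag_anomalies(profile))   # -> [(97, 120.0)]
```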

The integration of building technology systems brings several benefits, most importantly enhanced functionality that can’t be attained with separate systems. For example, an access control or video surveillance system can provide data on building occupancy, which the energy-consuming systems (HVAC, lighting and electrical plug load) can use to align energy consumption with actual occupancy, thereby avoiding over-ventilation or unnecessary lighting. In addition, integration allows for more efficient management of data shared between systems. The whole idea of integrated automation was legitimised and given credence by the most recent MasterFormat (the guidelines for building construction specifications), which created a specific “Division” for integrated automation. Technically, building systems are integrated at the physical, logical and application levels of the networks. Integration also means that systems are not only integrated “horizontally” to interact with each other, but that relevant data and information is shared “vertically” within an organisation; for example, energy information generated from data in the building’s systems is provided to facilities management, the procurement department, C-level executives, tenants and so on. While the integration of building systems is not the “plug and play” of a personal computer, it is facilitated by the use of a structured cable plant, open standard network protocols and standardised databases.
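As a rough illustration of that occupancy-to-HVAC link, the sketch below maps a head count from an access-control system onto an outdoor-air ventilation setpoint. The rates, zone figures and function names are hypothetical; a real building management system would exchange this data over protocols such as BACnet rather than a direct function call.

```python
# A minimal sketch of occupancy-driven ventilation: the zone is
# ventilated for the people actually present, not the design maximum.
# All constants below are illustrative assumptions.

DESIGN_OCCUPANCY = 200          # people the zone was designed for
CFM_PER_PERSON = 20.0           # outdoor-air rate per occupant
CFM_BASE = 500.0                # minimum airflow for the zone itself

def ventilation_setpoint(occupants: int) -> float:
    """Outdoor-air setpoint (cubic feet per minute) aligned to the
    actual head count rather than the worst-case design figure."""
    occupants = max(0, min(occupants, DESIGN_OCCUPANCY))
    return CFM_BASE + CFM_PER_PERSON * occupants

# With 30 badge-ins on a quiet day, the zone is ventilated for 30
# people, not 200 -- the over-ventilation integration avoids.
print(ventilation_setpoint(30))    # 1100.0 CFM
print(ventilation_setpoint(200))   # 4500.0 CFM (design maximum)
```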

The move to smart buildings is accelerated by general energy concerns and specifically by the initiatives around the “Smart Grid”. Intuitively we know that a smart electric grid without smart buildings would be a greatly diminished deployment and a very expensive lost opportunity. Buildings will need two-way communication with the grid and the ability to react automatically to signals from it, as is the case with demand response schemes, the integration of demand-side energy resources, and timely information and control options for occupants and consumers. The nirvana of energy consumption is the net-zero building (though the term is somewhat open to definition). These low-energy buildings generally have on-site power generation and are operated with automation software and granular sensors that adjust to every change and movement within the building. Some prominent organisations have projected that net-zero buildings will be the building standard 15 to 20 years out; that timeframe is understandable given the difficulties of designing and constructing a net-zero building. However, they may have underestimated the global reach of the transformation, the rapid pace of technological evolution and the vast number of innovative intellects collaborating around the planet.
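The sketch below illustrates the demand-response idea in miniature: on receiving a curtailment request from the grid, the building sheds pre-agreed loads in order of comfort impact until the requested reduction is met. The load list and figures are invented for illustration; real programmes such as OpenADR define far richer signalling than this.

```python
# A minimal sketch of automatic demand response in a building.
# Load names, savings and rankings are illustrative assumptions.

SHEDDABLE_LOADS = [            # (name, kW saved, comfort-impact rank)
    ("plug-load circuits", 15.0, 1),
    ("corridor lighting", 10.0, 2),
    ("chiller setpoint +2C", 40.0, 3),
]

def respond(requested_kw_reduction: float) -> list[str]:
    """Shed the least-disruptive loads first until the reduction
    requested by the grid operator is met."""
    shed, saved = [], 0.0
    for name, kw, _rank in sorted(SHEDDABLE_LOADS, key=lambda l: l[2]):
        if saved >= requested_kw_reduction:
            break
        shed.append(name)
        saved += kw
    return shed

# A 20 kW curtailment request is met by plug loads and lighting alone.
print(respond(20.0))   # ['plug-load circuits', 'corridor lighting']
```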

Sophisticated integrated building systems are on the right side of building cost, energy conservation and technology. If there are any concerns, they are on the people side. The skill sets and knowledge base needed to design, construct and, most importantly, operate buildings are rapidly changing. There’s no doubt that the challenges in building management have intensified over the last several years. Aside from the overall financial conditions, there are tremendous pressures to address energy consumption, deploy technological solutions and work through internal organisational challenges. Moreover, the number of new systems and applications that need to be managed has grown rapidly. These include electrically switchable glass, exterior shading systems, demand response planning, sun tracking systems and personal rapid transit systems – systems that only a few years ago either did not exist or were confined to a few specialties. However, these are challenges that can be addressed with education and training; they are inherent in any such transformation and will present tremendous new opportunities. Buildings will become smarter. If the ancient Egyptians achieved engineering excellence constructing some of the world’s most iconic structures with primitive methods, our technology-laden world certainly has the means and motivation to dramatically transform the spaces where we live and work.

Power for the people

Cost-effective, simple and innovative. It’s difficult to overestimate the impact of VoIP technology on business during recent years. VoIP, of course, dramatically increases communication mobility. It can slash your cost base. It’s not distance or location dependent. And your VoIP number is completely portable.

So how will the technology change in the future – and what business benefits will it bring? Well, generally speaking, the network’s data capabilities are increasing significantly, says Oswald Ortiz, CEO of Zurich-based Qnective. “This will allow VoIP services to be deployed in regular networks without having to fear bandwidth shortage.”

“From the consumer perspective, acceptance of VoIP is increasing, we would say. After this first step, consumers now need to be convinced by mobile VoIP through solid and intelligent services that enhance their communications on a day-to-day level.”

The Qnective difference
So what is the Qnective difference? Essentially, Qnective’s technology was originally designed to address each of the main stumbling blocks to replicating GSM-quality voice and data services over mobile IP. “In addition, the security aspect of mobile communication was becoming a real issue for leading corporations in sensitive industries. So Qnective incorporated the ability to encrypt voice and data across its portfolio of products and services for added user peace of mind,” says Ortiz.

Qnective’s core services offer consumers communication features like presence, one number (multiple end devices connected to one number), conference calling and specific call handling mechanisms such as call hunting with several numbers. And, of course, an encrypted version: a solid, state-of-the-art secure communication service.

“Our network operator partners encounter a significant reduction in costs of their infrastructure due to the fact that the network is then IP based and not circuit switched,” adds Ortiz.

Seamless integration
Qnective’s Qtalk is a platform that enables full integration into existing infrastructures – home location registers, billing systems, media gateways and so on. “Even though it is usable in a broad variety of networks, clients still need at least Enhanced Data for Global Evolution, or EDGE for short,” says Ortiz. “This is basically an upgrade for GSM/GPRS networks that triples data rates over standard GPRS, which the client needs in order to work properly. The quality of the data connection influences the communication quality through the client.”

Bear in mind too that though EDGE is faster than GPRS, it is still not quite as fast as 3G technologies such as HSDPA and EVDO.

Security issues? “Qtalk secure addresses specifically the security issues in VoIP technology,” assures Ortiz. “Using very advanced encryption algorithms and clever verification mechanisms, the user is protected from security attacks at all times. This allows the user to communicate securely no matter where he is.”

The technical implementation is based on the highest security standards and protects against man-in-the-middle attacks and eavesdroppers, says Ortiz. “Using end-to-end encryption, which cannot be decrypted between the two end devices, Qtalk opens the airwaves to independent and simple communication.”
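As a conceptual illustration of end-to-end encryption – not a description of Qtalk’s actual implementation – the sketch below encrypts a voice frame so that only the intended recipient’s private key can recover it, using the open-source PyNaCl library.

```python
# A conceptual sketch of end-to-end encryption of a voice payload,
# using PyNaCl (pip install pynacl). Illustrative only; it is not
# Qtalk's actual protocol.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; only public keys are shared.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

voice_frame = b"20ms of encoded audio"

# Alice encrypts for Bob; intermediaries see only ciphertext.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(voice_frame)

# Only Bob's private key (never transmitted) can recover the frame.
assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == voice_frame
```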

Always cost-effective
Qtalk secure guarantees your privacy through bug-proof telephony, making it suitable for government agencies and ministries, industrial companies, insurers, banks, security firms, people in the public eye (politicians, VIPs) and many more, while supplying good voice quality across all data networks (EDGE, UMTS, HSPA and WLAN).

It’s also a very cost-effective service, claims Ortiz. Most mobile virtual network operators (MVNOs) are GSM-based, and their dependency on GSM services has led to an enormous decrease in prices and therefore margins. “The integration with Qnective technology,” says Ortiz, “is possible for these parties without having to change their existing infrastructure. This means that minimal effort is required to be able to offer these new services.”

Depending on the size and complexity of the network, the costs do vary, of course, and customer requirements need to be kept in mind. “Even though the interface is designed to be open towards existing networks,” says Ortiz, “it may be that the integration requires adaptation layers for the two systems to communicate. This requires a bit more effort.”

However, as soon as the service is deployed, the investment begins to pay back. Deployment results in a different underlying cost structure for users who integrate their network, and considerable savings can be expected.

Proven technology
There’s no issue or worry around reliability. Qnective’s technology is solidly proven. “Qnective’s technology uses pre-defined and established standards as a basis to offer its customised solutions for VoIP communication,” confirms Ortiz. “Qnective’s expertise lies in this field and the main priority of Qnective is to develop and offer carrier grade technology based on the three S’s: that is, stability, solidity and scalability.”

6G Mobile – formerly BT INMO – is entering the market in the Netherlands with their product SMARTMOBILE, which is based on Qnective technology with a hybrid GSM-VoIP solution. “Qtalk by Qnective is the first real mobile VoIP product which really deserves the rating ‘carrier grade’,” says Harry van Streun, CEO of 6G Mobile.

So, what of the future? It is Ortiz’s clear aim to become the leading supplier of IP/GSM technologies for operators, he says. “Operators will have to develop new business models to finance the expensive rollout of full-IP access networks such as LTE, WiMAX, etc,” he says.

“And therefore they will roll out new services and open their networks to content and service providers with new capabilities. This won’t work without the ‘right’ standards and technologies. And this,” he says firmly, “is where we join the game.”

VoIP in brief:

VoIP technology harnesses the Internet’s packet-switching capabilities to provide phone service. This has multiple advantages over conventional circuit-switching technology. Packet switching, for instance, allows several telephone calls to occupy the bandwidth taken up by just one call on a circuit-switched network. Transmission costs are dramatically reduced, and data compression can reduce them further.
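The rough figures below make the point. The codec bitrates are standard published rates, but the per-call packet overhead is a simplifying assumption (real overhead varies with framing and headers).

```python
# A back-of-the-envelope sketch: how many compressed VoIP calls fit
# in the bandwidth of one 64 kbit/s circuit-switched voice channel.
# The 16 kbit/s RTP/UDP/IP overhead figure is an assumption.

CIRCUIT_CHANNEL_KBPS = 64.0     # one PSTN voice circuit (G.711 rate)
OVERHEAD_KBPS = 16.0            # assumed packet-header cost per call

codecs = {"G.711": 64.0, "G.726": 32.0, "G.729": 8.0}

for name, rate in codecs.items():
    per_call = rate + OVERHEAD_KBPS
    calls = CIRCUIT_CHANNEL_KBPS / per_call
    print(f"{name}: {per_call:.0f} kbit/s per call, "
          f"{calls:.1f} calls per 64 kbit/s circuit")
# G.729 at ~24 kbit/s per call fits more than two calls where the
# circuit-switched network carries exactly one.
```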

A window into the future of glass in buildings

If the past decade was defined by design innovation, the future will have a greater focus on functionality, as every aspect of building design now begins to require a secondary purpose. Glass has always been at the forefront of building development – allowing businesses and homeowners to create visual impact, extend, brighten and add an extra dimension of comfort to their properties. In the past, glass was used mainly for windows to admit air and light, but now it is integral to interior and exterior architecture. From façades, skylights and walkways to revolving doors and glass box extensions, it does so much more than just let light in – it provides architects with solutions for heat, energy efficiency and lighting requirements.

For many years the Pilkington brand has been at the heart of architectural glazing innovation. It was Sir Alastair Pilkington’s invention of the float glass process that gave birth to modern methods of glass manufacture, and the Pilkington brand, now part of the NSG Group, continues to build on this pedigree with advancements in glass functionality. The Group now operates or is involved in 49 float glass lines worldwide, selling its vast range of functional glazing in more than 130 countries.

Architectural glass is now heralding a new era, powering the green homes revolution, using more sustainable manufacturing methods and helping governments meet carbon reduction targets. With its extensive range of high-performance, value-added products, the Group brings benefits to its customers and the environment, capitalising on the modern role of glass by embracing functionality and fashion. It hopes to inspire the next generation of interior and exterior designers to create even more beautiful, sustainable environments in which billions of people live and work.

Vice President, Technology Building Products, Philip Ramsey, outlines the Pilkington brand’s future vision. The key, he explains, is how the shift from primarily aesthetics to performance and energy efficiency informs his work, and he reveals how he is looking forward to finding new ways of developing sustainable, functional and aesthetically pleasing glazing. “What’s important to remember is that glass is extremely versatile and has many applications. As consumers sit down to enjoy the view from a restaurant window, the glass used in that window preserves the view by reducing condensation and glare, while keeping diners comfortable by maintaining building temperature and providing safety. We have strong R&D programmes here to further improve the performance of our energy-saving products, both in terms of a reduction in heating requirements for cold climates and cooling for hot climates. Most of these developments are related to improving glass coatings, but the glass itself is also being modified, for example to allow even higher solar heat gain in cooler climates. The Group is also a leader in vacuum glazing technology with Pilkington Spacia™, the world’s first commercially available product offering the thermal performance of conventional double glazing within the same thickness as a single sheet of glass.”

Globally, the architectural glass market is trending more and more towards these types of functional glass. Architects, regulators, tradesmen and consumers are asking for more from glass, and glazing is now required to work harder in buildings. Pilkington has driven this shift, with a wide variety of glazing ranges covering solar control, thermal insulation, fire resistance and safety/security, as well as its revolutionary self-cleaning glass. Pilkington Activ Suncool™, for example, is an industry-leading range of high performance glass, designed to maximise natural light whilst ensuring solar control, low-emissivity, and self-cleaning properties, all in one product.

Guy Roberts, Business Planning Manager at Pilkington Building Products, said: “Trends in the global architectural glass market are essentially driven by two things: building regulations and architectural fashions. With more and more countries pledging to cut their carbon emissions, and the energy lost through windows making up a significant part of a building’s average consumption, energy efficiency is becoming more of a key factor in glazing. Couple this with the long-term trend for more of a building’s skin to be glazed, and you’ve got the perfect scenario for global glazing innovation.”

With the recognition that buildings need to exist in synergy with their surroundings, energy efficiency is a huge driver for innovation in the industry. Buildings account for 50 percent of all the energy consumed in advanced countries, and since the Kyoto Agreement, glass manufacturers have been proactively working to produce ever-more efficient products in anticipation of changes to building regulations. Glass for Europe studies show that the CO2 saved by replacing just one square metre of single glazing with low-e double glazing in a typical European building is 91kg per year. This means that in less than four months, Pilkington low-e glasses will have offset the CO2 emitted during their manufacture, making them one of the industry-leading performers on sustainability and an excellent investment for any building purpose.
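That payback claim is easy to sanity-check. Taking the 91kg annual saving from the text as given, a sub-four-month payback implies embodied manufacturing emissions of roughly 30kg of CO2 per square metre; the arithmetic below is indicative only, not a Pilkington datasheet figure.

```python
# A quick check of the payback arithmetic above. The 91 kg/year saving
# is from the text; the implied embodied-CO2 figure is simple
# arithmetic, not a published manufacturing value.
ANNUAL_SAVING_KG = 91.0
PAYBACK_MONTHS = 4.0

implied_embodied_kg = ANNUAL_SAVING_KG * PAYBACK_MONTHS / 12.0
print(f"Implied embodied CO2: under {implied_embodied_kg:.1f} kg per m^2")
# -> under 30.3 kg per m^2, recouped within the first heating season
```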

The Group is also one of the leaders in photovoltaic (PV) technology used to generate electricity via panels for domestic or commercial purposes, and is seeing this once-niche market expand very quickly. Solar energy panels offer alternative solutions for a range of energy requirements, from small-scale domestic applications to large-scale solar power stations, from cloudy northern rooftops to hot sunny deserts. Glass is an integral and important element of these solar panels, and the Group offers a wide range of high-quality products that are used in the three leading solar technologies aimed at converting solar energy into electricity: thin film photovoltaics, crystalline silicon photovoltaics and concentrated solar power applications.

The technical capabilities of the Group will enable the Pilkington brand to explore new possibilities for functional glazing and continue to extend its range of high-performance, value-added products. In the future, it is likely the industry will focus on the ability of glass to perform in more extreme conditions whilst continuing to optimise energy saving and cost efficiency. Improvements in energy performance will continue to be required as the demands of building regulations increase. Furthermore, the role of glass is likely to become more multi-functional. For example, a business may require glass to play a bespoke role in energy generation, as well as being self-cleaning and non-reflective. Digital touch screen panels will control the multi-functionality, ensuring the glass performs at optimum efficiency in its environment.

The role glass can play will become increasingly varied – technological advancements in the development of OLEDs (Organic Light Emitting Diodes) are likely to play a large part in architectural design in the push for ever-more environmentally friendly ways of working in the modern world. The possibilities such technology affords have the potential to revolutionise the way buildings are constructed, energy is used, and even the way in which people work. The NSG Group is committed to developing technology to enhance glazing performance and optimise functionality. As energy resources and efficiency continue to play a more important role in government and business agendas, glazing will play a larger role in reducing both costs and impact on the environment.

Further information: www.pilkington.com

An interest in plugging the future

They haven’t lodged firmly in the public consciousness – yet. But smart grids are coming. Increasingly they will be a big part of how we all consume – and rely upon – our electricity supply far into the future, whether we’re customers, manufacturers or energy providers.

They’re badly needed. Just about all developed (and developing) countries need the flexibility, efficiency and resilience required to cope with increasingly diversified power generation and demand, not to mention changing use patterns. Throw in increasingly stretched transmission and distribution networks and it’s easy to see the advantages of smart grids.

“They give a high level of control over energy networks and that’s fundamental to curbing wasteful energy use and integrating intermittent renewables and small distributed generators into the grid,” says Dr Aijuan Wang. “Smart grids play an important role in reducing greenhouse gas emissions too.”

Dr Wang knows what she’s talking about. She’s a chartered engineer with over twenty years of technical and commercial experience in the energy sector with wide experience of project development, commercial contract negotiation and power asset expansion and strategy.

A smart grid is a new generation of electric power transmission and distribution network that will last deep into the 21st century. A smart grid can intelligently coordinate the behaviour and actions of all users connected to it – generators and consumers – and interact with them seamlessly. It’s not all about large infrastructure projects; smart grids have what you might term apps, which can regulate and direct energy where it’s needed, intelligently and quickly, so established systems don’t become overloaded. A smart grid is also about multiple power system operators – in particular distribution networks – employing a broad level of communication and control, much of which is manual today. A smart grid is about how to supply and use electricity cost-effectively. But first, let’s look at the broader picture – and how smart grids fit into it.

Electricity supplies have been around for more than 100 years. However, the amount of energy we consume and how it is used today is very different to what it was one hundred years ago. That’s why many countries around the world struggle to keep their supply reliable; the existing infrastructure is simply not coping. “Just think about the California black-outs,” says Dr Wang. “It’s not just in California, of course. But it doesn’t give you confidence in the quality of the infrastructure when this happens.” In the developing world, the challenges are more critical. “Many existing networks are simply overloaded and can’t cope with the fast demand growth,” says Dr Wang. “The supply is often congested, unreliable and sometimes even unsafe.” To its credit, Dr Wang says, the US administration is taking a hard look at how it can secure its energy supplies for the next 100 years – and is willing to do what is necessary. “The US is moving ahead in terms of not just the concept of how to bridge this gap but to move onto the next generation of power supplies.” It needs to. The vast distances between centres of generation and demand mean that the US, like other large countries, needs to ensure supply is reliable and network control is automatic. The expense will be considerable. But smart grids are cost-effective from day one, because they save capital investment and offer consumers flexibility of energy use.

“The smart grid will allow operators to route electricity from supply source to point of demand by the most economical route,” says Dr Wang, “preventing local overload in the transmission and distribution system. If a line has a disturbance or outage, the smart grid will find alternative paths and self-heal.”
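In miniature, that routing and self-healing behaviour is a least-cost path search over the network, recomputed when a line fails. The sketch below uses a toy three-node network with invented line costs; real grid dispatch solves a far richer power-flow optimisation.

```python
# A minimal sketch of economical routing with re-routing on outage.
# The network topology and costs are illustrative assumptions.
import heapq

def cheapest_path(grid, source, sink):
    """Dijkstra over line costs; returns (total cost, path) or None."""
    queue, seen = [(0.0, source, [source])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == sink:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, line_cost in grid.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + line_cost, nxt, path + [nxt]))
    return None

grid = {"plant": {"subA": 2.0, "subB": 5.0},
        "subA": {"city": 3.0}, "subB": {"city": 1.0}}
print(cheapest_path(grid, "plant", "city"))   # (5.0, via subA)

del grid["subA"]["city"]                      # line outage on subA-city
print(cheapest_path(grid, "plant", "city"))   # (6.0, re-routed via subB)
```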

There’s also a whole host of benefits to smart grids. Improved power quality and security. A dramatic lowering of outages. Lower maintenance and running costs. A far smaller CO2 footprint.

“Just think of some power plants,” says Dr Wang. “They don’t run regularly throughout the year, just during peak hours. It simply doesn’t make economic sense to build them in the first place; it is an expensive method of power provision. But we do have the dilemma of having to keep the lights on during the peak hours. A smart grid can provide solutions by sourcing that peak capacity from demand response and control.”

So it’s a question of standing back and really looking at your load profile. What is really needed here? Well, smart meters could help for a start. There are also plenty of customer drivers that will push for change. “Energy customers will increasingly be given freedom of choice over when they use things, like their washing machine. Do they use it in the daytime, when energy is expensive, or at night-time, when there’s less load?”
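A rough sketch of that washing-machine choice under a two-rate (time-of-use) tariff follows. The rates and appliance load are invented for illustration, not actual tariff figures.

```python
# A minimal sketch of load-shifting economics under a two-rate tariff.
# All figures are illustrative assumptions.

PEAK_RATE = 0.20      # currency units per kWh, daytime
OFFPEAK_RATE = 0.08   # per kWh, overnight
CYCLE_KWH = 1.5       # energy for one washing-machine cycle

def cycle_cost(rate_per_kwh: float) -> float:
    return CYCLE_KWH * rate_per_kwh

daytime, overnight = cycle_cost(PEAK_RATE), cycle_cost(OFFPEAK_RATE)
print(f"daytime: {daytime:.2f}, overnight: {overnight:.2f}, "
      f"saving: {daytime - overnight:.2f} per cycle")
# Multiplied across millions of households, shifting such loads
# off-peak flattens the demand curve the smart grid has to manage.
```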

Of course, it’s hard to give concrete numbers for smart grid cost savings. They vary greatly from one country to another depending on the types of generation sources and the locations of power plants and demand centres. But when you build less generation to serve the same amount of demand, you effectively save on the capital costs of power plants. “This means that the transmission infrastructure requirement is subsequently reduced; you also save on distribution investment. So it’s also about the overall efficiency of the investment in the electricity value chain,” says Wang.

Government investment is coming, but how much and when remains cloudy. For example, the UK’s Committee on Climate Change reckons the UK needs to invest between £300bn and £500bn by 2025, from both private and public sectors, in order to meet its clean energy targets. Bottom line? That means an investment of at least £20bn to £30bn a year – numbers that are not insignificant by any stretch of the imagination. The Low Carbon Network Fund also helps transmission and distribution companies to invest in smart grid projects and innovations.

Smart grids though are not just about technology for technology’s sake. The ‘smart’ bit is how this technology makes our energy go further – a lot further. “We have multiple energy challenges,” says Dr Wang. “The supply side is severely constrained. We’ve also got increasingly tougher emissions targets. The core technology to control and develop the infrastructure is already here.”

So really it’s a question of priorities and budgets. “We know that utilities are working very hard to squeeze what they have from current assets, which is where smart grids come in,” says Wang.

However, looking 10 years ahead, things will look very different. “Conceptually, it’s a kind of revolution in terms of how electricity is supplied and used. A quiet revolution. Large-scale applications of new technologies will be happening. Demand and interest is building. Electric vehicles are coming, and in a big way. That will not just put additional demand on our energy infrastructure but also push the issue of generation supply further into the public eye.”

She adds: “But I would say it’s about intelligently mobilising what we have already got. Customer engagement will also play a key role in achieving the goals of a smart grid although the challenges remain. It’s going to be a very exciting period.”

Further information: www.mottmac.com

Energising utilities for economic stimulus

With the passage of the American Recovery and Reinvestment Act (ARRA), more than $11bn (USD) was allocated for the creation of a bigger, better, smarter electric grid. The combined total of these investments will allow for the integration and use of greater amounts of renewable energy, increased utilisation of innovative efficiency technologies, and a reduction in the electric congestion that costs consumers billions of dollars each year. As a result, utilities now have an unprecedented opportunity to leverage public funds and become the “utility of the future”. But, in the meantime, while utility executives sort out the implications associated with the stimulus funding, they continue to tackle issues confronting their industry during this period of economic volatility – energy inefficiencies and costs, complex regulatory requirements, and ageing infrastructures. 

Most utilities use robust meter, network, and customer service infrastructures designed to support processes and systems for well-defined work routines and functions. In many cases, they use conventional meters with life cycles of up to 40 years – devices that worked well when energy markets were largely regulated and characterised by controlled prices, easy access to energy resources, and sufficient infrastructure capacity. In that environment, organisations could rely on manual processes for everything from checking meter readings to determining future demand for electricity, with less concern about margins, customer retention, energy efficiency, and sustainability. But changes in the utilities industry in recent years are making it increasingly difficult to compete using traditional infrastructures and processes. Resource and infrastructure capacities are becoming more marginal, and inelastic demand is restricting revenue growth. Then there’s a heightened focus on reducing carbon footprints. In addition, new legislatively mandated market rules demand that utilities compete for customers on the open market, so utility executives must find new ways to differentiate their services and capture additional revenue while increasing operational efficiency. And because customers can switch retailers relatively easily – especially in electricity markets – utilities need innovative processes to improve sales and customer service performance.

The reality is that current assets are ageing and a more adaptable infrastructure is needed going forward. For the near term, utilities must optimise current asset efficiency and availability. Downtimes must be limited to planned shutdowns and necessary overhauls only. Stringent maintenance processes can help ensure high levels of equipment reliability. Additionally, proactive planning can establish a stable environment in which resources such as personnel, contractors, parts and tools can be optimised. Future increases in bulk transmission capacity, however, require significant improvements in transmission grid management. A smart grid, for example, can improve the use of capital assets while minimising operations and maintenance costs. Smart grids manage electricity flows precisely, right down to the residential level; optimised power flows reduce waste and maximise use of the lowest-cost generation resources. But how can utilities – and consumers – better understand and manage energy use? Enter technology. Because technology is seen as a critical enabler for implementing the Economic Stimulus Plan, with billions in funding globally to be spent incrementally over the next five years, it can help utilities see, think, and act more clearly as they develop and execute the necessary strategies to:

Optimise energy efficiencies. New energy-grid technologies can help utilities balance supply and demand while improving the efficiency of energy delivery and consumer usage. These metering and data-exchange systems, however, require real-time communications and greater system interoperability.

Respond to sustainability concerns. Increasingly, the adoption of sustainable energy practices is becoming a business imperative. Today’s utilities are challenged to take a holistic approach to sustainability that simultaneously addresses compliance, globalisation, environmental impact, and energy politics. Visibility into all aspects of a utility’s operations and end-to-end process control are keys to success.

Develop higher-performing assets. Managing today’s ageing infrastructure for peak efficiency while making the energy investments that will deliver maximum value tomorrow is a basic tenet of the stimulus programme. Utilities must be able to maintain existing equipment, model current and future assets, and analyse complex energy scenarios.

In regard to optimising energy efficiencies, advanced metering infrastructure (AMI) technologies can help utilities and consumers alike better understand and manage energy use. With AMI, energy consumption is recorded at regular time intervals (e.g., every 15 minutes) and then billed at different rates for peak and off-peak hours. As a result, consumers and suppliers can collectively lower overall energy requirements and reduce carbon emissions, and energy providers can improve the balance between demand and supply. The data collected through AMI helps utilities better profile energy requirements during both peak and off-peak hours and predict usage spikes due to environmental changes. These technologies also support smart grids, which improve the delivery of energy by providing greater control over load shedding, energy leakage, and outage management.
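As an indicative sketch, the snippet below prices a day of 96 interval readings at different peak and off-peak rates. The rates and the 08:00-20:00 peak window are assumptions for illustration, not any utility’s actual tariff.

```python
# A minimal sketch of AMI interval billing: 15-minute readings priced
# by time of day. Rates and the peak window are illustrative.

PEAK_RATE, OFFPEAK_RATE = 0.25, 0.10          # per kWh
INTERVALS_PER_HOUR = 4                         # one reading every 15 min

def bill(interval_kwh: list[float]) -> float:
    """Price a day of 96 interval readings, peak rate 08:00-20:00."""
    total = 0.0
    for i, kwh in enumerate(interval_kwh):
        hour = i // INTERVALS_PER_HOUR
        rate = PEAK_RATE if 8 <= hour < 20 else OFFPEAK_RATE
        total += kwh * rate
    return total

# A flat 0.5 kWh per interval: 48 peak + 48 off-peak readings.
day = [0.5] * 96
print(f"{bill(day):.2f}")   # 24 kWh * 0.25 + 24 kWh * 0.10 = 8.40
```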

AMI technology can also help utilities stay competitive in ways that older metering and data-exchange technologies simply cannot. But without a proper IT infrastructure that enables companies to implement and work with AMI in a cost-effective manner, utilities find it difficult to deliver the flexible pricing options that the market demands. AMI requirements for data management and real-time information exchange call for improved communications and collaboration between customer and utility. It also requires far greater interoperability of systems within the utility’s IT landscape and across enterprise boundaries. Upgrading metering and data-exchange infrastructure can increase the quality of a company’s sales and customer service processes. In addition, automated business-control processes can help utilities manage the variations between peak and off-peak production, thus reducing the overall cost to serve while lowering the cost per kilowatt hour.

As utilities develop and execute the strategies that can help them see, think, and act more clearly to wisely spend the billions in economic stimulus funding over the next five years, they need to execute those strategies decisively and effectively to endure the current environmental conditions and emerge in a stronger, more competitive stance. To gain this clarity, utilities need visibility to refocus business strategies and streamline operational execution, and transparency to demonstrate compliant and sustainable business practices. Such “clear” utilities understand what is going on in every aspect of their business and business networks. They operate with increased speed, relevance, and accuracy. They are prepared for risk and uncertainty and can adjust operations nimbly as market conditions change. In short, they are transparent and accountable, lean and agile, customer-centric and collaborative.

Further information: www.sap.com

Measuring up to measuring down

Nanotechnology is concerned with manufacturing to dimensions or tolerances in the range 0.1-100 nanometres, where one nanometre (nm) is one-millionth of a millimetre. Nanometrology is the science of measurement at the nanoscale, and it plays a crucial role in enabling nanoscale manufacturing to deliver products with a high degree of accuracy and reliability.

Anticipated advances in emerging nanotechnology industries will require revolutionary measurement capability with higher resolution and accuracy than has previously been envisioned. Fundamentally new measurement techniques and standards must be developed. Co-Nanomet is a European Commission funded programme which has been reviewing the current landscape and assessing future challenges for nanometrology in four core nanotechnology-related areas. Each represents an area within which rapid scientific development has seen corresponding growth in actual or potential commercial applications. In turn, questions of fundamental measurement understanding have emerged.

Engineered nanoparticles are particles designed and produced to have all external dimensions at the nanoscale. Whilst nanoparticles have been used in industry for many years, their production is now growing rapidly. Recent increases in the number of different kinds of engineered nanoparticles, and the broader range and understanding of their functional properties, promise much for future applications. Common applications include sunscreens, anti-reflection coatings and antibacterial silver coatings on wound dressings. To profit fully from their potential, scientifically sound methods must be developed to distinguish nanoparticles in terms of their basic characteristics and properties. Enhanced understanding of the suitability and limitations of measurement techniques and instrumentation is also required, within both the scientific and industrial communities, to aid the development of this expanding field.

Nanobiotechnology addresses the development of nanometrology related to biomedicine, bioscience and biotechnology. This is of particular importance for the pharmaceutical industry, health care applications, clinical diagnostics and medical devices (e.g. implants), as well as for food safety. Measurement challenges in this rapidly developing area include the measurement of the dimensions of biological structures, the measurement of relevant levels of biologically important substances (such as drugs, biomarkers and toxins), and the biological variability of systems. Characterising soft and wet materials at the nanometre scale remains a challenging task for this community.
 
Thin films and structured surfaces addresses the measurement and characterisation of surfaces and layers or coatings that have features with sizes (lateral and depth) of 100 nm and less.
Thin films are now integral to several major technology-based industries, including semiconductor fabrication and microelectronics, magnetic data storage, optical components and coatings, and solar cells. As thin film technology develops – more complex layers, tighter control over parameters, an increasing diversity of applications and environments – the challenges for measurement grow. The ability to deterministically alter the structure of a surface can have a profound effect on how that surface functions. Whilst much of this work is at the research stage, the number of products that include some form of surface structure control is growing rapidly.

Modelling and simulation is a novel field in metrology which has recently attracted significant attention. New modelling and simulation techniques today offer possibilities to interpret experimental data and to predict the properties of new materials. As simulations become increasingly relevant for the planning and interpretation of experiments, closer links between the computer-simulation community and experimentalists working at the nanoscale become increasingly important.

Under the Co-Nanomet programme, experts from industry, academia, metrology institutes and government have been working to develop broad long-term goals in this area and to identify scientific and technological barriers which, once overcome, will allow advances towards those goals. The definition of a Strategic Plan for European Nanometrology is providing direction to researchers and nanotechnology leaders, and for the first time priorities have been agreed. In this way, European nanotechnology will be supported to reach its full and most exciting potential.

Wind in their sails

In a powerful symbol of shifts in the world economy, the manager of the euro zone’s financial rescue fund reckons he could quickly raise money to bail out Ireland – by turning to Asia. “We are confident that we can raise the necessary funds from institutional investors, central banks and sovereign funds, in Asia in particular,” said Klaus Regling, chief executive of the European Financial Stability Facility.

So is it game over for the Old World?
Not so fast, argues George Magnus, senior economic adviser at UBS Investment Bank in London. In Uprising: Will Emerging Markets Shape or Shake the World Economy?, Magnus cautions against writing off the West, especially the United States. Given America’s proven capacity to reinvent itself, the epitaph RIP being prepared for it should perhaps stand for Renewal in Progress instead of Rest in Peace, he says. And Japan’s precipitous decline since the end of the 1980s, when the land under the Imperial Palace in Tokyo was worth as much as the whole of California, is reason alone to question the “Sino-euphoria” generated by China’s seemingly inexorable rise.

Magnus’s overarching argument is that China still lacks the organisations and institutions that accept – indeed welcome – the risk-taking that holds the key to technological innovation. For it is technology that will be a major determinant of the fortunes and failure of developing countries over the next decades. It’s one thing to manufacture an iPad. It’s another to invent, design, brand and commercialise it. On this score, the United States and parts of Europe are likely to retain pole position for a long time, Magnus believes. “China has proved itself quite adept at being able to introduce economic reforms, but I wonder whether it has the legal and political and social institutions to be able to encourage and tolerate disruptive change,” he said in a telephone interview.

In raising questions about the durability of the rise of emerging markets, Magnus is swimming against a strong tide. The Organisation for Economic Co-operation and Development, a club of 33 industrial democracies, forecasts in a new report that by 2030 non-OECD economies will account for 57 percent of global GDP, up from just 40 percent in 2000. According to a recent book by World Bank economists who examined the post-crisis policy outlook, developing countries could overtake their developed peers collectively in size as soon as 2015. “The countries of East Asia are leading the world out of its economic and financial crisis and may well become the new engine of global growth,” co-editors Otaviano Canuto and Marcelo Giugale write. 

Magnus agrees that, unless their governments make profound policy errors, emerging markets will probably continue the process of catching up with the developed world. And China should remain the dominant Asian power. But nothing is pre-ordained.

While the ruling Communist Party is making the right noises about rebalancing China’s economy away from exports, Magnus says the present pace of change is too gradual for an impatient US government dogged by slow growth and high unemployment.

“I’m not saying at all that China’s leaders don’t ‘get it’. But I have serious reservations as to whether the scale of change required from a global perspective is something the leadership is willing to embrace,” he said in the interview. Futurologists, he says, must ask whether an autocratic China with weak legal institutions can avoid a clash between accelerating economic development and rising demands for commercial and political freedoms. “It could be that nothing less than this will determine whether, in the longer run, China will continue to develop as an economic power or succumb to mediocrity,” Magnus writes.

A new weapon for cyber insurgents?

Al Qaeda scares airlines with parcel bombs costing about $4,200.

War with the Taliban costs the West billions of dollars a week. North Korea shells disputed land, winning instant fresh attention in a standoff with major powers.

Weaker combatants have always used unconventional or inexpensive means to defy stronger foes, including guerrilla warfare and suicide attacks that depend on a greater willingness to sacrifice life.

This approach can be decisive. Of all “asymmetric” wars since 1800 in which one side had far more armed power than the other, the weaker side won in 28 percent of cases, according to a 2001 study by US political scientist Ivan Arreguin-Toft.

The ratio may now be set to shift further in favour of the underdog. The new weapon is likely to be a variant of Stuxnet, a highly destructive Internet worm discovered by a Belarusian company in June and described by European security company Kaspersky Lab as “a fearsome prototype of a cyber-weapon”, analysts say.

“A great danger”
“Stuxnet is like the arrival of an F-35 fighter jet on a World War I battlefield,” blogged German industrial control systems expert Ralph Langner.

Whoever created the bug, believed by many to have targeted an Iranian uranium enrichment facility, the job likely required many man-hours of work and millions of dollars in investment.

But now that its code has been publicly analysed, hackers will need only a few months to develop a version of the customised malware for black market sale, some experts say. Ali Jahangiri, an information security expert who tracks Trojan codes, harmful pieces of software that look legitimate, describes that prospect as “a great danger.”
 
“The professional Trojan codemakers have got the idea from Stuxnet that they could make something similar which can be used by governments, criminals or terrorists,” he told Reuters. Stuxnet’s menace is that it reprogrammes a control system used in many industrial facilities to inflict physical damage. At risk is automation equipment common to the networks on which modern societies depend – power plants, refineries, chemical plants, pipelines and transport control systems.

Sold to the highest bidder
Analysts say they suspect hackers are rushing to build a version of the worm and sell it to the highest bidder before experts can install counter-measures in plants across the globe. “My greatest fear is that we are running out of time to learn our lessons,” US information security expert Michael Assante told a Congressional hearing on Stuxnet.

“Stuxnet … may very well serve as a blueprint for similar but new attacks on control system technology,” said Assante, President of the US National Board of Information Security Examiners, which sets standards for security professionals. Langner says multinational efforts against malware inspired by Stuxnet won’t work since “treaties won’t be countersigned by rogue nation states, terrorists, organised crime, and hackers.”

“All of these will be able to possess and use such weapons soon,” he said. If the next Stuxnet cost less than $1m on the black market, then “some not-so-well equipped nation states and well-funded terrorists will grab their checkbooks.”

As well as favouring small states, cyber appears to be a tool of special value for Russia and China, since it allows them to become equals to the United States in a sphere where US conventional military dominance counts for nothing.

Stuxnet is a powerful example of the fastest-growing sort of computer bug – customised malware written specifically to attack a precise target. What is new is its power, and the publicity it has attracted through a presumed link to Iran. That publicity will have drawn attention in small nations such as North Korea, which can be expected to take an interest in acquiring a Stuxnet-like capability to balance an inferiority in conventional arms with its US-backed southern foe. Like some impoverished countries in Africa, North Korea has a cyber advantage – it has so few systems dependent on digital networks that a big cyber attack on it would cause almost no damage, writes former US National Security Coordinator Richard Clarke in his book Cyber War.

“A matter of time”
A state contemplating use of such a devastating weapon in a speculative attack could not guarantee it would not be found out, and might prudently restrict its use to all-out conflict. However, many terrorist groups, particularly those with a tradition of glorifying martyrdom, would have no such concerns about launching cyber attacks.

“It can only be a matter of time before terrorists begin to use cyber space more systematically, not just as a tool for their own organisation, but as a method of attack,” British Armed Forces Minister Nick Harvey said in a speech.

A report on cyber warfare by Britain’s Chatham House think tank said there was no evidence to show terrorist groups had a cyber warfare capability but they were increasingly web-literate, using chat rooms to propagate their message and everyday items such as smartphones, online mapping and internet infrastructure as operational supports in attacks.

What is not in doubt is al Qaeda’s willingness to use such a weapon to inflict economic damage on the West if it ever had the opportunity, experts say. Few doubt it would be able to get funds from rich donors to buy the malware on the black market. Al Qaeda’s Yemen wing said it cost just $4,200 to mail two parcel bombs from Yemen to America last November. Intercepted in Britain and Dubai, the bombs sparked a global security alert.

“This strategy of attacking the enemy with smaller but more frequent operations is what some may refer to as the strategy of a thousand cuts,” it said. “The aim is to bleed the enemy to death.”

The old home of rare earth metals

Fearing shortages, many nations want to ease China’s grip on production but will have to manage risks such as radioactive thorium, high energy use and a toxic cocktail of hydrochloric acid and sodium chloride used to isolate the metals.

Experts say rare earths show how the world fails to factor pollution – borne by China, which produces 97 percent of the world’s supplies – into the costs of laptops, TVs or “green” goods such as wind turbines and electric car batteries that use the metals.

“It isn’t factored in and you can make a case that almost everything is under-priced,” said Thomas Graedel, professor of industrial ecology at Yale University who was an author of a UN Environment Programme report on speciality metals recycling.

“As a planet we are using the entire periodic table of the elements and to varying degrees this requires that we spend a lot of energy,” he said. “It’s a bit of a Faustian bargain.”

Cancer
The disused Ytterby mine, on an idyllic island in the Baltic Sea near Stockholm, lent bits of its name to rare earths first identified there from the late eighteenth century onwards, such as ytterbium, yttrium, terbium and erbium.

“My chemistry teacher used to walk around the neighbourhood with a Geiger counter and it would crackle like crazy when he got close to the houses,” said Rolf Hultberg, who has lived close by the mine most of his life.

He laughed off the slight rise in radiation at his home, built with rock from the mine by the village of 2,900 people. Levels are below those considered a health threat. But experts say slightly radioactive thorium, often found in ores that also contain rare earths, is a huge environmental hurdle. When rocks are smashed in mining, thorium dust can lodge in the lungs and cause cancer.

“The most critical potential environmental impact of rare earth metals is the presence of radioactive elements in most of the ores, especially thorium,” said Christian Hagelueken of Umicore, a Belgium-based firm looking at new products from specialty metals.

Studies including those in the Chinese Medical Journal have shown high rates of lung cancer among rare earth miners, traced to thorium. “We are looking for any rare earth deposit anywhere in the world without uranium or thorium. But we haven’t found any,” said Harald Elsner, a rare earths expert at the German geological service BGR. He also said costs of handling, storage and disposal of the slightly radioactive waste were “definitely” the main environmental problems for miners in places from Australia to Alaska considering new production.

Ray Sollychin, a thorium expert at the International Atomic Energy Agency in Vienna, said radiation levels from China’s biggest mine were “high enough to be a concern but low enough to be piled in the mountains”.

The waste is “in a dust form but still not very well contained”. He said he had urged Chinese officials “to spend money to turn it into an oxide with a simple chemical process so they can store it in a solid form rather than as dust.” That would decrease the risks of inhalation.

Cauldrons of acid
China has raised worries about supplies by curbing exports, notably to Japan but also to the United States and Europe, renewing interest in finding sources in other countries.

All are likely to face tougher environmental laws than China. Workers at Baotou, which calls itself the “capital of rare earths” 650km (400 miles) west of Beijing, tend cauldrons of sputtering acid and ore amid acrid fumes.

“There is no question that the costs of environmental compliance in the US are significantly higher than in China,” said Jim Sims, spokesman of Molycorp Minerals LLC, the main US producer from stockpiled
ore at Mountain Pass, California.

“That has provided the Chinese with a significant price advantage in rare earths. But that has changed,” he said. He predicted that Molycorp, planning to resume production with a new, cleaner process in 2012, could mine at a cost of $1.26 per pound (0.45 kg) averaged across all earths, against $2.54 as the firm’s best estimate for Chinese production.

“Some ores have a lot of thorium. We are blessed with a very low level… We have a protocol for handling it and it’s then shipped to a storage facility,” he said.

“The Russians and the Indians have a relatively higher percentage of thorium,” he said. Mining was halted at Mountain Pass in 2002, partly because of a wastewater leak from a pipeline.

Nobel connection
The Ytterby mine, reached by a steep path through a wood, won fame after a 1794 paper by Johan Gadolin – “Study of a black heavy kind of stone from the Ytterby quarry”. The stone contained the rare earth element gadolinium. The mine shaft is blocked by a slab of concrete. Village streets are named after rare earths, and Nobel prize winners often visit around the time they collect their awards in Stockholm on December 10th.

Sweden is exploring two sites for possible rare earths – Norra Karr and Olserum in southern Sweden’s forests, as part of the European Union’s strategy to secure and improve supplies.

Thorium has fallen out of favour as a possible nuclear fuel, meaning there is no real demand for it from miners. Some disused mines that worked decades ago to extract thorium or uranium are now being re-branded as rare earth deposits.

Thorium has some advantages over uranium for power generation, with India the most interested. But it is less suitable than uranium as an ingredient for nuclear bombs.

Far from the Chinese approach, anyone wanting even to build in Ytterby has to follow special designs such as ventilation to deal with the high background radiation.

“We allow new buildings, with precautions,” said Susanne Iden, the local official in charge of granting building permits.

Where is Europe’s weak link?

A burst property bubble still weighs on the Spanish banking sector, but the immediate worry is over the spring, when both the banks and the government will be in the market for a combined 50bn euro of funding.

“Problems may well arise when banks need to turn over debt at the same time as the government if confidence remains low,” said Javier Bernat, analyst at Caja Madrid. A wave of consolidation and conservative rules have supported the banks so far, but after Ireland accepted an 85bn euro aid package, investors are turning their eyes to Portugal and Spain.

Among Spain’s listed banks, mid-size lenders such as Banco Sabadell and Banco Pastor are seen as potentially the most vulnerable. They were frozen out of European interbank markets in 2010, and are not as diversified as global giants BBVA and Santander.

Capital requirements for Spanish banks have traditionally been more stringent than in other countries and the average core Tier 1 capital of the sector was 7.7 percent under a crisis scenario in last July’s Europe-wide stress tests.

But the bar has been raised for all banks since then, as new capital rules have been agreed, leaving Spanish banks looking less well capitalised than many international rivals.

The weakest links in Spain’s banking system are the regionally focused savings banks, which had an average core Tier 1 capital ratio of only 5.5 percent in the stress tests. They have already gone through a forced consolidation process that has cost the government 15bn euro in credit lines.

Shares in Santander, the euro zone’s biggest bank, sagged three percent to an 18-month low in December as worries about the sector grew. Spain’s economy is larger than those of fellow euro zone peripheral countries Ireland, Greece and Portugal combined, and if it spirals into a debt crisis despite progress cutting its budget deficit, a bailout would strain the European Union’s safety net.

With the economy stagnant and spending cuts landing, market jitters over possible bank funding problems have pushed up financing costs for Spain to euro-zone lifetime highs of about 5.4 percent for 10-year sovereign bonds.

Another source of concern is heavy exposure to Portugal, which is seen as the next euro zone trouble spot. Spanish banks had a $108bn exposure to Portugal at the end of March, according to Bank for International Settlements data.

Spain’s government faces a bond redemption at the end of April of 15.5bn euro in competition with the banks, which are also looking to refinance around 35bn euro, according to analysts at Barclays Capital.

“We are concerned that tapping the markets for more than 50bn euro in March and April represents a substantial level of execution risk,” the analysts said in a note, adding the situation would worsen in 2012. Banks also face mounting credit losses, aggravated by stubbornly high unemployment, which Barclays estimates could hit 200bn euro over the next few years.

Spain’s central bank in September required banks to put aside more reserves to protect against losses and to make further write-downs on real estate they hold, but investors are concerned the system hasn’t been tested severely enough.

“A stringent, Spain-specific bank stress test would allow for differentiation, accurate sizing of (savings banks’) potential capital needs and would exert additional pressures towards restructuring,” Goldman Sachs analysts said. They estimated loan losses would hit 145bn euro, varying greatly among large banks like Santander, mid-sized banks such as Sabadell and the privately held savings banks. The central bank will run a second round of stress tests in the spring.

Capital requirements
Many analysts expect another round of consolidation in the savings banks and further drawing on the government’s 90bn euro restructuring fund. Savings banks’ exposure to real estate and construction might require 56bn euro of funding, Unicredit said.

“Assuming this 56bn euro is needed in 2010 and adding on bonds maturing plus the funding of the deficit, then we calculate that Spain needs approximately 350bn euro over the next three years,” the analysts said.

Spain’s banks are heavily reliant on funding from the European Central Bank. Financing needs dropped by a third in the final quarter of 2010 from September in a sign banks are weaning themselves off the funding programme, but the banks still tapped the ECB for 71bn euro.

Banks have turned to retail depositors to fill the gap left by erratic wholesale funding, but competition for savers has developed into a pricing war, leaving banks paying interest at more than double the 1.85 percent offered on 12-month Spanish Treasury bills.

Break-up far from the truth

Analysts are discussing the idea that economic differences between countries may be too wide for the eurozone to survive, and that it may have to break itself up.

Markets themselves, however, are telling a different story. Individual financial instruments, such as sovereign bonds and bank debt in the weakest countries, have been hit hard by default fears. But movements in most markets show most investors do not think the eurozone faces the dangerous and expensive prospect of a break-up.

Indeed, correlations between markets suggest investors are not nearly as afraid of a systemic crisis in the zone as they were back in May and June, when the panic over Greece’s debt problems was at its height. Consider the euro itself. Although yields on Portuguese and Spanish 10-year debt are at record highs near seven percent and five percent respectively, and Ireland is negotiating an international bailout, the currency has not reacted strongly.

Little correlation
The 30-day correlation between the euro/dollar exchange rate and the spread of the 10-year Greek government bond yield over German Bunds – the risk premium which investors demand to hold Greek bonds – was minus 0.65 in June but at the time of writing is minus 0.19. Minus 0.65 is a fairly strong negative correlation – heading towards the limit of minus 1.0 – and shows investors sold the euro heavily in June because of fears of a Greek debt default. By contrast, there is now very little correlation between the euro and expectations for Greek debt.
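As a rough guide to how such figures are produced, the sketch below computes a 30-day rolling correlation between daily changes in the euro/dollar rate and the Greek-Bund spread. It is a minimal Python sketch on synthetic data with illustrative variable names, not the methodology behind the figures quoted above.

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for two daily series: the euro/dollar rate and the
# Greek 10-year yield spread over Bunds (percentage points).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2010-01-04", periods=250)

fx_moves = rng.normal(0.0, 0.005, len(dates))
eurusd = pd.Series(1.35 * np.exp(np.cumsum(fx_moves)), index=dates)
# Build a spread that tends to rise when the euro falls, plus noise.
spread = pd.Series(6.0 - 40 * np.cumsum(fx_moves)
                   + np.cumsum(rng.normal(0.0, 0.05, len(dates))),
                   index=dates)

# Correlate daily changes, not levels; a reading near -1 means the euro
# falls as the Greek spread widens.
corr_30d = eurusd.pct_change().rolling(window=30).corr(spread.diff())
print(corr_30d.dropna().tail())
```

The key detail is differencing both series first: correlating the raw levels of a trending exchange rate and a trending spread would overstate the relationship.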

For Ireland, the correlation has moved from minus 0.61 to minus 0.29. Also, the euro remains historically strong. At around $1.32, and despite a roughly seven percent fall in December, it is still close to 13 percent stronger than it was at the depth of the Greek crisis. It is far above its lifetime average of $1.188.

According to IFR, implied volatility levels for the euro are still well below those seen in May and June. Implied one-month volatility hit a high of 18.75 percent in late May and 12-month volatility reached a peak of 15.1 percent. The equivalent figures now are near 14 percent, a level that suggests some concern but not a huge amount. In the meantime, there has been demand for euro “puts” – options which provide the right to sell the euro at a given price. This suggests expectations for further euro weakness. But the 1.65 percent premium currently demanded over “calls”, the right to buy, is much cheaper than the 3.0 percent seen in June.

Corporate strength
It is a similar story on European stock markets, where most companies are being buoyed by signs of surprisingly robust German growth and general improvement in the eurozone, as well as healthy corporate cash balances. The EuroStoxx index, a broad gauge of eurozone equities, is about nine percent higher than it was in early June despite falls in recent weeks.

Correlations between the index and bond spreads also show that, while not insignificant, the eurozone bond crises are not overwhelming equities. The 30-day correlation with changes in bond spreads is now minus 0.38 for Greece and minus 0.36 for Ireland. In May, the Greek correlation was minus 0.86.

While the prices of bonds issued by Irish and some other banks have tumbled, the eurozone corporate debt market remains fairly healthy. The iTraxx Europe index for investment grade eurozone debt is at 111 basis points compared with 141 bps in June; a lower number implies more risk appetite. The iTraxx Crossover index of more risky corporate “junk” bonds is at 494 bps, down from 633 bps in June.

There are at least two major reasons for the easing of markets’ fears about the eurozone as a whole since June. One is that the European Union has set up a formal mechanism to handle debt crises, the 440bn euro European Financial Stability Facility (EFSF), and the Irish bailout shows the EU is willing and able to use it.

Secondly, expectations for debt defaults in a few eurozone states have grown in recent weeks as Germany has pushed a proposal to create a mechanism for orderly sovereign debt restructuring. But debt restructurings could actually reduce the risk of a systemic eurozone crisis, by helping countries return to health without a need for them to leave the zone in search of currency depreciation and lower interest rates.

Three years
The markets see the time of maximum risk for debt defaults as roughly three years from now, after the three-year terms of Greece’s bailout and the EFSF expire, at which point the EFSF will be replaced by a crisis mechanism that may be less protective of bond investors.

The curves for prices of credit default swaps, used to insure debt against the possibility of sovereign default, rise to peak at about two years for Greece and three years for Portugal and Ireland; Ireland’s curve later resumes rising to hit a fresh high five years out. But euro forwards, which are contracts to buy euros at future times, do not suggest investors see a major rise in risk for the single currency three years from now.

The euro has dropped steeply in the forwards market but the move in three-year forwards has been similar to that for shorter tenors, when taking into account interest rate expectations.

The rouble’s clouded horizon

Low interest rates, high inflation, lacklustre growth, corporate acquisitions abroad, foreign debt repayments, a broken-down correlation with oil and rising imports are all pressuring the rouble – and they are here to stay for now.

“We still expect the rouble to weaken. The main argument for this is the problem with capital outflows. It is the factor which keeps on driving the Russian forex market,” analysts at Troika Dialog said in a research note.

In a few autumn weeks, the rouble lost all its 2010 gains to hit its weakest level of the year, 36.42 against the euro-dollar basket. Some $21bn left Russia between January and October.

Russia’s trade surplus is shrinking; exports increased just 13.8 percent year-on-year during that period, while imports rose 27.3 percent, fanned by improving domestic demand and food purchases following a summer drought that killed one-third of the harvest. Analysts now see little room for a surge in exports unless oil prices soar towards $100 per barrel.

The trade surplus is expected to shrink by nearly a fifth in 2011 to $115bn, according to a Reuters poll. Foreign debt redemptions are another factor weighing on the rouble, with Russian banks and companies paying back more than $25.5bn in November-December.

Pressure also comes from corporate acquisitions abroad, such as TNK-BP’s $1.8bn purchase of oil and gas fields in Vietnam and Venezuela.

Small returns, weak data

Russia can offer little premium to investors as inflation accelerates, while the central bank keeps interest rates at record lows to revive lending. Nominal yields on Russian financial instruments remain at relatively high levels, but in real terms, inflation erodes returns from holding Russian debt to almost zero.

Consumer prices rose 7.4 percent in 2010, inching close to the central bank’s 7.75 percent benchmark refinancing rate. With average yields on the most liquid corporate bonds, such as those of gas behemoth Gazprom and railroad monopoly RZhD, at around 7.5 percent, potential returns are limited for investors using low-yielding dollars and euros to fund carry-trade positions in Russian assets.
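The squeeze on carry returns is simple arithmetic, sketched below with the figures quoted above; the one percent dollar funding cost is an illustrative assumption, not a quoted rate.

```python
# Carry-trade arithmetic using the figures quoted above. The dollar
# funding cost is an illustrative assumption, not a quoted market rate.
bond_yield = 0.075   # average yield on liquid Russian corporate bonds
inflation = 0.074    # Russian consumer price inflation in 2010
usd_funding = 0.010  # assumed cost of borrowing dollars (hypothetical)

# Real return for a local investor: nominal yield deflated by inflation.
real_return = (1 + bond_yield) / (1 + inflation) - 1   # ~0.09 percent

# Carry for a dollar-funded investor, before any rouble FX move.
nominal_carry = bond_yield - usd_funding               # 6.5 percent

print(f"real return: {real_return:.2%}, carry before FX: {nominal_carry:.2%}")
# A rouble depreciation larger than the carry wipes out the trade, which
# is why rising currency risk matters as much as falling yields.
```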

“Such operations have become less attractive now due to the decrease in bond yields. Moreover, short-term currency risks have increased,” said Dmitry Kharlampiev, analyst at Petrocommerce bank. The central bank has moved to discourage short-term capital inflows, such as carry trades, allowing more flexibility in the rouble and changing the pattern of interventions. That suggests the currency will be increasingly vulnerable to external and internal shocks.

Lacklustre growth offers little help, with Russia’s economy growing 2.7 percent in the third quarter – a weak result compared to other emerging markets.

The data is “a negative development for the local equity markets and the rouble, as it calls for greater monetary support from the authorities, even despite the continued acceleration of inflation,” Vladimir Osakovsky, economist at Unicredit, said.

Out of fashion
As a result, the rouble has not benefited from the increased global liquidity seeking a home after the Federal Reserve announced its second round of quantitative easing.

“More obvious places to invest … are Turkey, Brazil, India,” said Roman Pakhomenko, chief dealer at Lanta bank. The Brazilian real has added 0.9 percent against the dollar year-to-date, while the Indian rupee has gained 1.3 percent. The rouble has lost 4.0 percent.

“The rouble was the worst performer among commodity currencies and was also lagging among the BRICs, as capital flows into Russia remained negative,” said Paul Biszko at RBC Capital. The pile-up of anti-rouble factors has led to a breakdown of the currency’s correlation with oil – Russia’s major export and once a key support for the rouble – even though the oil price has held above $80 per barrel for almost three months.

As a result, analysts and markets alike have turned more bearish on the rouble. The latest Reuters poll showed it at 35.30 versus the basket by year-end, compared with a late-April forecast of 33.48. Implied yields on three-month non-deliverable forwards for the dollar versus the rouble rose to a six-month high of 4.2 percent in late November, up from the 3.0 percent seen before the rouble’s slump started in mid-September.
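For readers unfamiliar with the metric, an NDF’s implied yield can be backed out from spot and forward rates using covered interest parity. The sketch below uses hypothetical spot, forward and dollar-rate levels chosen only to land near the quoted 4.2 percent; none of them are actual market data.

```python
# Implied rouble yield from a 3-month USD/RUB non-deliverable forward,
# via covered interest parity: F = S * (1 + r_rub * t) / (1 + r_usd * t).
# Spot, forward and the dollar rate are hypothetical placeholders.
spot = 31.00        # assumed USD/RUB spot
forward_3m = 31.30  # assumed 3-month NDF rate
r_usd = 0.003       # assumed 3-month dollar rate, annualised
t = 0.25            # three months as a fraction of a year

# Rearranging the parity relation for the rouble rate:
r_rub = ((forward_3m / spot) * (1 + r_usd * t) - 1) / t
print(f"implied rouble yield: {r_rub:.2%}")  # roughly 4.2% annualised
```

A rising implied yield signals that the forwards market is pricing in rouble depreciation, which is why the move from 3.0 to 4.2 percent reads as growing bearishness.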

“Russia is out of fashion today,” Bella Zlatkis, deputy chief executive of Russia’s biggest lender Sberbank, said in mid-November after meeting international investors.