Google to acquire home automation company Nest Labs

Google’s never-ending quest to dominate all aspects of people’s digital lives has taken a significant step forward, with the search engine giant recently announcing an agreement to acquire home automation company Nest Labs. The $3.2bn deal sees Google incorporate one of Silicon Valley’s most innovative companies, one that many observers claim has cracked the market for modern and efficient home automation.

Founded in 2010 by Tony Fadell, one of the inventors of Apple’s iPod and a key developer of the iPhone, Nest provides programmable thermostats and smoke detectors that are Wi-Fi-enabled, self-learning and sensor-driven. The ability of Nest’s products to optimise the temperature of homes and businesses so that they are more energy efficient has drawn considerable interest from many tech giants.
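Nest’s actual learning algorithms are proprietary, but the basic idea of a ‘self-learning’ thermostat can be sketched in a few lines: record the temperatures a user sets at each hour of the day, then replay the average as the learned schedule. Everything below (the class name, default temperature and averaging rule) is a hypothetical illustration, not Nest’s implementation.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy sketch of a 'self-learning' thermostat: it records the
    temperatures a user manually sets at each hour and replays the
    average as its schedule. Purely illustrative."""

    def __init__(self, default=20.0):
        self.default = default            # fallback setpoint in Celsius
        self.history = defaultdict(list)  # hour -> list of user-set temps

    def user_sets(self, hour, temp):
        """Record a manual adjustment made at a given hour."""
        self.history[hour].append(temp)

    def target(self, hour):
        """Learned setpoint for an hour: mean of past settings,
        or the default if the user has never adjusted at that hour."""
        temps = self.history[hour]
        if not temps:
            return self.default
        return sum(temps) / len(temps)

t = LearningThermostat()
t.user_sets(7, 21.0)
t.user_sets(7, 23.0)
print(t.target(7))   # 22.0 (mean of observed settings)
print(t.target(3))   # 20.0 (falls back to default)
```

Real devices also fold in occupancy sensors and weather data, but the principle is the same: infer a schedule from behaviour instead of requiring explicit programming.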

Many had predicted Apple to acquire the company due to Fadell’s involvement with the iPod and iPhone

Many had predicted Apple to acquire the company due to Fadell’s involvement with the iPod and iPhone, as well as Nest’s tight integration with Apple’s iOS operating system. However, Google has seized the opportunity to integrate the technology into its ever-growing product line up.

Announcing the deal, Google CEO Larry Page said the company wanted to expand Nest’s reach to many new regions. “Nest’s founders, Tony Fadell and Matt Rogers, have built a tremendous team that we are excited to welcome into the Google family. They’re already delivering amazing products you can buy right now – thermostats that save energy and smoke/CO alarms that can help keep your family safe. We are excited to bring great experiences to more homes in more countries and fulfill their dreams!”

Analysts reacted to the news with enthusiasm, with Peter Stabler, senior analyst at Wells Fargo, saying in a research note that it was a logical step towards creating an integrated service for people’s homes.

“We believe Google’s purchase of Nest Labs clearly points to an intention to play a leadership role in the connected home, offering a complement to Google’s connected car ambition. We see the move as a logical one, and expect Google to expand its efforts to become the intelligent backbone of consumer homes.”

Google’s expansion beyond its traditional search business has been rapid in the last few years, and the company has started to make sizable acquisitions in order to expand its product range. In 2011 it bought struggling mobile phone maker Motorola Mobility for $12.5bn. The Nest deal is its second largest to date.

French telecoms price war continues despite job concerns

The battlefield that is France’s mobile telecoms industry has added another front in the form of new high-speed 4G services. A number of the country’s mobile providers have been fighting it out for dominance of the market, with price slashing and bundled services being dangled in front of increasingly eager consumers.

In December, low cost operator Iliad, which has over 7m customers, started to offer existing 3G users 4G services for no extra cost. An additional service, offered through its even lower cost subsidiary Free, provides users with 20GB of data use as part of its 4G service.

The news of a price war is a problem for the country’s operators after years of declining revenues

In response to Iliad’s aggressive move, rival firms like Orange, Bouygues Telecom and SFR all slashed the prices of their new 4G networks. Orange, the country’s largest telecom operator, said it would be following Iliad’s lead by offering 4G at no extra cost to existing customers.

The news of a price war is a problem for the country’s operators after years of declining revenues. Many firms have invested considerable amounts towards improving revenues by raising subscription fees, and so Iliad’s move to undercut its rivals has come at a difficult time for the rest of the industry.

The government has also been wary of Iliad’s move. In December, Economic Development Minister Arnaud Montebourg warned Free’s founder Xavier Niel of the impact such a price war would have on the rest of the market, telling French broadcaster RTL of his concerns over jobs and investment in the telecoms industry.

“I have a problem in this war. All these operators are working on the basis of the public spectrum which belongs to the French, so we are obliged by law to demand something in return in terms of employment and investments.”

Some observers don’t believe that the price war can carry on for much longer. Stéphane Téral, an analyst at telecom research firm Infonetics Research, told The New Economy that unless there were significant advances in technology, lowering prices would have to end.

“In the long run, I don’t think these ever decreasing low prices are sustainable unless a major innovation happens to keep service providers’ cost structure under control. As many mobile telecom firms ultimately offer the same service, the only real differential is coverage.” Téral doesn’t think prices will rise on the back of new 4G services.

“At this point in France, differentiation is hard to achieve and I’m not convinced demand is still elastic. I only see coverage as the main differentiation between Free and the rest. If prices are too low, as they already are, it’s going to be very difficult to raise them on the basis of faster low latency 4G services.”

Téral adds that firms should begin to invest in new technologies, such as Software Defined Networks (SDN), Network Function Virtualisation (NFV), and Self-Organising Networks (SON), that can provide more automation and therefore improve operational efficiency. He also believes that network sharing could help bring down operational costs.

Some in the industry, says Téral, will inevitably feel the effect of further automation on jobs. However, it will also create new, higher-quality roles.

“Yes this overall move to free competition has a negative effect on jobs, although the quantification is very hard to perform. In fact, the move to more automation creates new types of highly qualified jobs, but again, are there enough of them to more than offset the loss of traditional jobs?”

Read more about job automation and unemployment

Futurist Thomas Frey has predicted that as many as two billion jobs could be automated by 2030, posing a significant threat to already-dwindling employment levels around the world.

BofA-Merrill Lynch closes European power and gas trading operation

Bank of America-Merrill Lynch has closed its European power and gas sales and trading operation. It becomes the fourth key industry player to withdraw from the European commodities market after the implementation of stricter regulation started to hit profits.

BofA-Merrill Lynch follows Morgan Stanley, Deutsche Bank and JPMorgan, all of which exited the market in 2013.

[I]t has been speculated that low profits deriving from lower demand were to blame for the closure

Morgan Stanley sold its oil-trading desk to Rosneft, the Russian state-run oil company. Deutsche Bank has also recently announced that it would be shutting down its commodities trading operations. JPMorgan put its physical trading operations up for sale in July.

Though BofA-Merrill Lynch has not disclosed the revenue for the defunct desk, it has been speculated that low profits deriving from lower demand were to blame for the closure. In a statement, the bank suggested the winding down “follows a recent review of its broader commodities activities and reflects the combined impact of lower client demand for European power and gas hedging and recent regulatory changes.”

Since the onset of the 2008 financial crisis, changes in proprietary trading regulations have dampened demand for commodities trading, with revenues down in the sector over the past five years.

BofA has announced it will be selling off its current European power and gas portfolio, though it intends to remain active in the American markets. In the press release the bank said it “has made a long-term commitment to its commodities business, and will continue to trade coal, commodity indices, oil, metals, and US power and gas from its Houston, London, New York, and Singapore offices.”

China lifts ban on foreign games consoles such as PS4 and Xbox

China’s State Council has announced its decision to temporarily lift a 14-year ban on the sale of video game consoles, giving rise to various opportunities and fresh challenges for major console players.

The announcement failed to specify how long the suspension would last, although it did include a select few details about logistical changes. The State Council said that it would allow foreign firms to manufacture consoles in Shanghai’s free trade zone, which would later be passed on to the mainland for sale, provided that they pass a rigorous inspection process carried out by China’s cultural departments.

The anti-console policy was first introduced in 2000 and was originally intended to protect youths from game-related distractions; however, the government’s U-turn appears to signal a change of heart and a new market opportunity for major industry players such as Sony, Microsoft and Nintendo.

In terms of revenue, China is the world’s third-biggest market for video games despite the absence of foreign-made consoles and a government that appears highly averse to gaming in itself. According to research by PwC, the country’s video game market is worth an estimated $11.4bn. However, China’s problems with regards to video game pirating and the domination of mobile and PC gaming may well see new participants falling foul of fresh challenges in the near term.

The costs of console gaming also appear incompatible with the Chinese market, given that the nation’s average earnings fall far short of those in rival console markets such as Europe, the US and Japan. With the new PS4 priced at $399 and the Xbox One at $499.99, many Chinese consumers will simply be priced out of the market.

Samsung unveils bendable TV

Many observers of technology trends have predicted that the television industry will be the next to enjoy significant innovation. However, while most have assumed TV sets will become smarter, continually connected entertainment hubs, few have suggested the need for flexible screens.

Defying the long-held rule of giving consumers what they want, Samsung has unveiled its huge new Bendable TV that allows multiple viewing angles. The 85in screen is designed to enhance viewing experiences, and allows users to control the curvature by remote control.

Many don’t see any clamour from the public for flexible screens, with more demand for simplified services and customisable content

Samsung’s LED screens were unveiled at this year’s Consumer Electronics Show (CES) in Las Vegas, alongside similar, but slightly smaller, flexible screens from rival LG. However, immediately there were mumblings of derision from observers. Many don’t see any clamour from the public for flexible screens, with more demand for simplified services and customisable content.

Martin Garner, a technology consultant for CCS Insight, said that both Samsung and LG would need to prove that such televisions are necessary in people’s homes. “The challenge for the firms is to demonstrate that the world really needs this. They are, however, a nice attempt at differentiation.”

The TV industry is set to undergo big changes in other areas over the coming year. Many high quality 4K screens are set to go to market in the next twelve months, while other manufacturers are seeking tighter integration of content and online services in their products.

Apple has long been rumoured to release a television of its own, while Google has made another attempt at cracking the market with its recent Chromecast dongle that streams content from computers to TVs. LG, which last year acquired the licence to the webOS operating system from HP, has integrated it into its new sets in the hope of simplifying things for viewers.

Brazil approves suicide seeds

Last week the Brazilian Senate quietly passed a hugely controversial bill approving the use of GM crops known as ‘suicide’ or ‘terminator’ seeds. The use of these sterile seeds, which produce crops that die off after one harvest, has been largely condemned internationally, and a moratorium on their use has been in place since 2000. Brazil is the first country to formally approve the use of such seeds.

Genetic use restriction technology (GURTs) – the formal name for suicide seeds or terminator technology – was developed under a cooperative research collaboration between the US Department of Agriculture, the Agricultural Research Service and private companies in the 1990s.  It has not been widely available commercially until now. According to the World Seed Federation, a lobby group that represents the interests of the global seed industry, research suggests that suicide seeds are a way to help protect “intellectual property rights” of plant breeders who develop new plant varieties. “Where effective intellectual property rights protection systems don’t exist or are not enforced, GURTs could be an interesting technical alternative to stimulate plant-breeding activities,” they wrote in a release in 2003.

They add that terminator seeds can aid “farm management” as they ensure the weather conditions during harvest do not result in poorer quality grain and would reduce the propagation of volunteer plants that grow amid crops and can cause problems for large-scale mechanised farming systems.

If this bill goes through, it would be a disaster. Farmers would no longer be able to produce their own seeds

Large-scale benefits
In essence, terminator seeds would be well suited to increasing the efficiency of large-scale farms that could write off the cost of having to buy new seeds each year.

Traditional farming methods generally adopted by small-scale producers and indigenous populations usually rely on preserving a portion of each harvest to replant the following year, limiting the market for new seeds. One harvest creates the next in an enduring cycle.

Today, if a farmer wants to plant a particular type of crop, they could buy some seeds and – in theory – keep growing that one crop forever. So the seed manufacturer, having probably invested many research dollars into developing that hybrid, faces a very limited market for its product. If farmers only have access to terminator seeds, they will have to come back for more each year as their crops die out after each harvest. A handful of companies – Syngenta, Bayer, BASF, Dow, Monsanto and DuPont – control over 60 percent of the seed market in the world today.

In 2000, 193 countries signed the UN Convention on Biological Diversity, which included a moratorium on the use of these seeds. But the Brazilian agricultural lobby, a powerful political presence in the country, has long fought for the right to sow these seeds to propagate certain non-food crops. They argue that by using this technology they will ensure that no second generation GM traits spread to native plants. But the economic implications of the use of these seeds could be tremendous, especially in a country like Brazil.

One country, worldwide implications
“Brazil is the frontline. If the agro-industry breaks the moratorium here, they’ll break it everywhere,” Maria José Guazzelli, of Centro Ecológico, which represents a coalition of Brazilian NGOs, told The Guardian. “If this bill goes through, it would be a disaster. Farmers would no longer be able to produce their own seeds. That’s the ultimate aim of the agro-industry,” she said before the vote.

Internationally, environmental groups have been up in arms about the decision because of Brazil’s heavyweight presence in the global agro-business. In the 1990s, massive protests by Indian, South American and southeast Asian small-scale farmers were enough to suppress the use of these technologies, and seed manufacturers had to shelve their plans. But now GM crop producers can argue that environmental pressures make GM crops a necessity. It is this line that legislators in Brazil appear to have bought, or are pretending to have bought, and that will ultimately threaten the livelihood of farmers all over the world.

Racing with machines: the rise of “technological unemployment”

Futurist Thomas Frey has predicted more than two billion jobs will disappear by 2030 as a result of automation. Statements such as these strike fear into an already turbulent job market, but there are many who believe we should take solace in history. In their book, Race Against the Machine, Erik Brynjolfsson and Andrew McAfee explain how the recent panic brought on by ‘technological unemployment’ – a term originally coined by John Maynard Keynes – is nothing new.

Brynjolfsson and McAfee describe how the potential for advances in technology to render human labour obsolete first surfaced during the Industrial Revolution and the advent of the steam engine. Many people then, as now, worried they would lose the race with technology, and unemployment would be widespread. Instead, a plethora of new jobs was created: what differed was the type of skills that were required.

Workers were no longer needed for their physical attributes but their mental faculties. The point Brynjolfsson and McAfee make is that we should learn from the past and not attempt to compete against machines, but evolve to work in harmony with them, if we wish to transition into the technological revolution as successfully as we did the industrial one.

Harder, better, faster, stronger
In the introduction to their book, Brynjolfsson and McAfee explain how “our technologies are racing ahead but many of our skills and organisations are lagging behind. So it’s urgent that we understand these phenomena, discuss their implications and come up with strategies that allow human workers to race with machines instead of racing against them”. In the modern world, that is easier said than done.

2bn

The number of jobs expected to disappear by 2030

50%

The proportion of jobs that are potentially automatable within two decades

The technology we are now up against doesn’t simply render our physical attributes useless; computers can outperform us in a wider variety of far more complex tasks. In fact, the only jobs in which we trump machines are those that require social intelligence (e.g. persuasion and negotiation) or creative intelligence (which is simply the ability to imagine new ideas). This means nearly 50 percent of jobs are potentially automatable within a decade or two.

But we shouldn’t worry; while automation may eliminate certain jobs, it will also create new ones that require social and creative intelligence. However, this kind of intellect is much harder to acquire: such forms of intelligence are broad and cannot simply be taught. How would you go about teaching such skills in schools, especially when governments around the world are struggling to adequately educate children with the skills they need to compete in the current economy, let alone the ‘creative economies’ of the future?

The dystopia has arrived
Keynes predicted widespread technological unemployment would occur when our means of economising the use of labour exceeded the pace at which we were able to find new uses for it. Moore’s Law, which describes the exponential growth pattern that is present within modern technology, means what Keynes predicted has now, in the 21st century, become reality.
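Moore’s Law is commonly stated as a doubling of transistor counts – and, loosely, of computing capability – roughly every two years. A minimal sketch of that exponential pattern, with the two-year doubling period taken as an assumption:

```python
# Illustrative sketch of Moore's Law as exponential growth.
# The two-year doubling period is the commonly cited formulation;
# the exact cadence is an assumption here, not a physical law.

def transistor_growth(years, doubling_period=2):
    """Multiplicative growth factor after `years` years,
    doubling once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over two decades the factor is 2**10: roughly a thousand-fold.
factor = transistor_growth(20)
print(factor)  # 1024.0
```

It is this compounding – a thousand-fold gain in twenty years rather than a steady linear climb – that makes Keynes’s scenario plausible: the pace of labour-saving technology can outrun the pace at which new uses for labour are found.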

Even if we were able to educate and employ a population of creatively intelligent, highly skilled workers, who were capable of “racing with machines” – as Brynjolfsson and McAfee claim is possible – it might not be enough. As US economist and historian of economic thought Robert Heilbroner said: “[Even if] we can employ most of the population as psychiatrists, artists or whatever… there is still an upper limit on employment due, very simply, to the prospect of a ceiling on the total demand that can be generated for marketable goods and services.”

Obamacare: another public sector IT shambles

President Barack Obama’s controversial healthcare reform – also known as the Affordable Care Act (or Obamacare) – has been newspaper fodder for the last six years. During his campaign for re-election, barely a speech passed without it being mentioned by the President or his opponent. Though Obamacare had already passed some significant legislative hurdles by that point, the campaign centred on that issue: vote for Obama if you want the care act, vote for Mitt Romney if you want to repeal it. Which is why it comes as a bit of a surprise that now, over a year after the re-election, Obamacare remains in the news despite it being on the cusp of implementation.

It seems that, while trying to get the act passed and instated, the Obama administration didn’t think too much about how to get it implemented. After going live on October 1, the Affordable Care Act website – where people can register for Medicare and Medicaid services – failed to cope with the huge demand. The site crashed almost immediately, leaving millions of people in the lurch. The act requires most Americans to be at least enrolled in health coverage by the end of March or they might face penalties – but as the website continues to struggle, it seems unclear whether this will be achieved.

A shaky start for Obamacare

US President Barack Obama takes a pause during a speech on the Affordable Care Act in November

40%

Up to 40 percent of the technology necessary to run Obamacare is still not in place

100,000

The number of Americans successfully enrolled in Obamacare after its first month – a number Obama has admitted is “absolutely not” enough

It has since transpired the administration suffered from a shocking lack of foresight: up to 40 percent of the technology necessary to run the insurance marketplace required for Obamacare to function properly is still not in place. Henry Chao, the Deputy Chief Information Officer of the Centers for Medicare and Medicaid Services, told an Energy and Commerce Oversight and Investigations subcommittee hearing that “it’s just not working. It’s still being developed and tested”.

The failure of the website and registration process has been a severe blow to Obama. His Affordable Care Act is meant to revolutionise healthcare in the US and could very well define him as a president. For Obama, the smooth rollout of registration and coverage could be a crowning glory or the straw that broke the camel’s back.

On November 14, after six weeks of trouble with the website, the President was forced to stand before the US public and say, during an almost-hour-long speech: “That’s on me. We fumbled the rollout on this healthcare law… I think it’s fair to say that the rollout has been rough so far. Yesterday, the White House announced that, in the first month, more than 100,000 Americans successfully enrolled in new insurance plans. Is that as high a number as we’d like? Absolutely not. But it does mean that people want affordable health care. The problems of the website have prevented too many Americans from completing the enrolment process, and that’s on us, not on them.”

Chronic mismanagement of governmental IT schemes
If it seems like a tremendous oversight that the Obama administration would have worked so hard at passing the act, only to let it fail because of a poor IT strategy, that’s because it is. However, as far as technological innovation goes, the Obamacare shambles is far from the exception for governments; the UK is going through a similar situation as the Department for Work and Pensions (DWP) fails to implement its Universal Credit IT system that would enable the government to streamline the welfare system.

The DWP’s Universal Credit IT system: An expensive experiment?

The UK Department for Work and Pensions has already spent millions of pounds on a questionable Universal Credit IT scheme

70%

Over 70 percent of the DWP’s £425m budget has been spent on IT systems

£34m

The DWP has reportedly already written off £34m spent on IT work

In September, the DWP announced it was writing off £34m spent on IT work that would enable the new benefit to be rolled out across the country in 2014. The National Audit Office released a damning report in September claiming the programme has been plagued with “weak management, ineffective control and poor governance”. The report went on to conclude that “over 70 percent of the £425m spent to date has been on IT systems. The Department, however, has already written off £34m of its new IT systems and does not yet know if they will support national rollout.

The existing systems offer limited functionality. For instance, the current IT system lacks a component to identify potentially fraudulent claims so the Department has to rely on multiple manual checks on claims and payments. Such checks will not be feasible or adequate once the system is running nationally. Problems with the IT system have delayed national roll-out of the programme.”

This systemic failure to produce workable platforms has meant the Universal Credit was not phased in during 2013, delivering a harsh blow to one of the coalition government’s cornerstone policies: welfare reform. As with Obamacare, a controversial and long-fought-for policy is in jeopardy because of incompetence in the tech department.

It is inexplicable that both the Obama administration and the UK coalition government would allow this level of inefficiency to hamper fundamental programmes. But chronic mismanagement of governmental IT schemes is by no means confined to these two administrations: in fact, it seems there is a global tendency to underestimate the demands of such complicated systems. There is also a lack of foresight into sourcing the best talent to run these programmes.

Public sector IT professionals need to keep up with the times
In an age when social media dominates public discourse, and websites such as Facebook and Amazon effectively process the information of hundreds of millions of people every day, it is inexcusable for governments to fail to do the same. Of course, the information processed for enrolling in Obamacare or rolling out Universal Credit is significantly more complex, but the technology underpinning the programmes does not have to be.

There is a shocking lack of renovation in government IT. Sean Gallagher reported on Ars Technica that some US government servers still run on Windows Server 2003: a decade-old and practically obsolete piece of software. “Magnifying the problem is the government’s decades-long increase in dependency on contractors to provide even the most basic technical capabilities,” said Gallagher.

As the private tech sectors boom in both the UK and the US, the internal government IT departments are left depleted and under-resourced.

“While the Obama administration has talked of insourcing more IT work, it has been mostly talk, and agencies’ internal IT management and procurement workforce has continued to get older and smaller.” This type of platform was phased out in the private sector at least half a decade ago, and no self-respecting private tech company would be coding anything new or useful for such an antiquated system.

There is a further issue: an ageing workforce. In the US, up to 50 percent of federal workers are over 48 years old. In the UK, the average age of a public sector worker has been increasing steadily, according to the Office for National Statistics. As the private tech sectors boom in both the UK and the US, the internal government IT departments are left depleted and under-resourced. There is an urgent need for a root-to-branch renovation of these departments, and governments need to do more to attract better and more creative talent.

Then there is the added issue that the lack of resources for these departments has meant talented young people are looking for tech jobs elsewhere. Matt Linton, who works as a ‘digital fire-fighter’ for NASA and is headhunted by the private sector, recently told the Washington Post: “No matter how much you love your job, everybody has their limits. If Congress wanted to force young people out of federal jobs, then they are doing a great job.”

Having strong and modern IT departments is not only a matter of efficiency: in the modern market, it is about survival. Even the smallest start-ups invest in solid IT systems because they know, today, it is the foundation of a good business. Which, of course, makes it inexcusable governments are wasting precious resources on unworkable, inefficient, yet fundamental programmes.

Google’s driverless cars

The driverless revolution
It’s no secret Google has been working on driverless cars for about eight years now – although only officially since 2010. Sightings of these vehicles are becoming more frequent as engineers move them from the testing circuits out onto the roads of Silicon Valley. Currently, these autonomous cars are Toyotas fitted with cameras, lasers and sensors. A laser range finder is, conspicuously, fitted to the roof of the car: it generates a highly detailed 3D map of the surrounding area, aided by GPS, an inertial measurement unit, and a wheel encoder.

There is a camera near the rear-view mirror that spots traffic lights sufficiently far in advance, and both front and rear bumpers have radars on them to aid the work of the laser. This technology allows the driverless cars to follow speed limits and navigate the best possible route to their destinations. The technology costs between $75,000 and $85,000 per vehicle – more than the vehicle itself.
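Google has not published its planning algorithms, but the way such sensor inputs might combine into a driving decision can be sketched as picking a target speed that respects both the posted limit and the stopping distance implied by the gap the front radar reports. The function name, reaction time and braking deceleration below are assumptions for illustration only, not the real planner.

```python
def choose_speed(speed_limit_kmh, radar_gap_m,
                 reaction_time_s=1.0, decel_ms2=4.0):
    """Illustrative only: pick a target speed that (a) never exceeds
    the posted limit and (b) lets the car stop within the gap the
    front radar reports. Parameter values are assumed, not Google's.

    Stopping distance: v*t + v^2/(2a) <= gap. Solving the quadratic
    v^2 + 2*a*t*v - 2*a*gap = 0 for the largest safe speed v (m/s):
    """
    a, t, d = decel_ms2, reaction_time_s, radar_gap_m
    v_safe = -a * t + (a * a * t * t + 2 * a * d) ** 0.5
    return min(speed_limit_kmh, v_safe * 3.6)  # m/s -> km/h
```

With a clear 100m gap the posted limit binds (the car simply drives at the limit); with only a 5m gap ahead the same formula forces a crawl, which is the behaviour the laser and radar exist to enforce.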

When do I get one?
Whether Google will develop its own vehicle or sell the technology to an existing automobile manufacturer is not yet known. Google has no experience of making cars so it is likely to seek a partnership with a manufacturer in order to implement the equipment – and that seems to be what’s delaying the project. It is thought automobile companies are reluctant to join for fear of liability if the technology fails.

Google Cars Product Manager Anthony Levandowski said: “I can’t tell you you’ll be able to have a Google car in your garage next year. We expect to release the technology in the next five years. In what form it gets released is still to be determined.” In August last year, rumours emerged of a Google Robo-Taxi: a driverless vehicle to pick up and drop off commuters on demand. This would be linked with an iPhone app, similar to current hail-a-cab apps such as Hailo and Uber.

A major problem is the technology is ahead of the law. Most federal and state automobile laws in the US assume a human operator. These need to be revised before the technology can be commercialised. In 2012, Nevada became the first state to legalise the operation of autonomous cars on the roads. Florida and California (the latter the home of Google) have followed suit. Lobbyists from Google have been travelling around other states, targeting insurance companies in an attempt to gain state support for similar changes to the law.

Getting rid of the driver
There is a serious side to this project that goes beyond Google’s desire to be the first company to develop autonomous cars. Google founders Larry Page and Sergey Brin believe cars that drive themselves will improve road safety; the vehicles will react to hazards faster than humans and so potentially save lives. Chris Urmson, Lead Engineer on the project, said: “Technology is at its best when it makes people’s lives better… We’re using advanced computer science to try and make driving safer and more enjoyable”.

In testing, Google Cars have covered around 500,000 miles without a serious accident. Google employees drive these vehicles to and from work, though with some human intervention as the technology is not yet perfected. Two years ago, a Google Car went into the back of a Prius with enough force to knock it into two cars in front. However, Google maintains that, at the time of the accident, the car was under manual control and the accident was a simple case of human error.

Under the bonnet

How a self-driving car works

Source: The Economist

The EADS Airbus A300 Beluga

An Airbus A300 Beluga aircraft is loaded with wings at the Airbus SAS wing-assembly factory in Broughton, England. EADS, the owner of Airbus, has enjoyed strong profits of late and raised its order and delivery forecasts in November. However, the aircraft manufacturer also admitted it may have to cut production of its flagship A380 super-jumbo unless it can find additional buyers.

3D printing: voyage into the third dimension

Additive manufacturing – or ‘3D printing’ as it is popularly known – has been used to manufacture prototypes for almost 30 years. In the past couple of years, however, awareness of the process has grown as it has increasingly been used to produce final parts and products. As the novelty wears off, we may begin to see the practical future of 3D printing.

Sex, drugs and your new job role

Talking Heads: Online Privacy – Rita Lobo

Don’t hire me for my private interests

Employers are using social media to strengthen their hiring process, but candidates have a right to a life outside the office

Social media has invaded the business world with the same ferocity it took over our personal lives around half a decade ago. Our various online profiles have become indicators of who we are, open for the world to see and interpret as they wish. It is unsurprising that the internet is littered with advice about how to spruce up our social media presence in order to impress employers and potential business connections. And just like that, work has invaded yet another aspect of our lives.

It is not that social media isn’t useful in business – it is, and it can be a great way to connect with customers. But on a personal level, it is another way to dehumanise an employee and reduce him or her to simply a worker.

There is plenty of advice for jobseekers online, with experts suggesting the best ways to present oneself and attract positive attention from potential employers. LinkedIn was created with the sole purpose of helping employers and potential employees connect, and of letting business contacts network online. It is a fabulous tool because it is tailored to showcasing business experience and achievements. But when potential employers start looking beyond LinkedIn, and into Twitter, Pinterest and Facebook, a line has been crossed from business interest into personal life.

Every potential employee is more than a worker. While it may be useful for employees to know the interests, likes and dislikes of the people they work with, those things don’t particularly affect their ability to perform a job well and with integrity – which is why they should not be shy about restricting access to their profiles to keep employers out.

Furthermore, employees should not be willing, or feel compelled, to sacrifice their privacy and their right to a personal life outside the office in order to get a position. It is important for individuals to maintain a private life: one that is shared with friends and family online, but not with a boss or potential employer. Though the term ‘work-life balance’ is most often used when discussing a woman’s ability to juggle her career and home life, it is a useful term for everyone, of any gender.

Though employers will argue social media screening helps them avoid some of the many pitfalls inherent in the hiring process and select more suitable candidates for a position, they will never admit that gazing into a potential employee’s personal life is not only a violation of privacy but could also lead to discrimination. That discrimination might be accidental or even unconscious, but it is very real. It is not uncommon to hire people with similar interests and backgrounds to our own, and by glimpsing the private lives of candidates, employers might make unwitting judgments about who would be better suited to a position, based on something other than skill or experience. This also raises important questions about diversity and equal opportunities in the hiring process.

Employers have plenty of tools in their arsenal to ensure they are hiring the most suitable candidate for a position. Background checks and references are still relevant, and far more appropriate when screening someone for a role. And, of course, there is the all-important interview, where those hiring can get a far better idea of a candidate’s ability and style than they ever could from browsing their holiday snaps or retweets.

Talking Heads: Online Privacy – Jules Gray

Facebook isn’t a place for privacy

Social networks give potential employers a better understanding of a candidate, helping them find the person who’s the best fit

The days when the success of a job application rested on a highly polished CV and a well-rehearsed performance during a job interview appear to be long gone. Potential employers want to know an increasing amount of background information before they even commit to offering a candidate an interview. Be it Facebook, Twitter, LinkedIn, Flickr, Pinterest or a personal blog, an individual’s online presence is increasingly screened by companies during job applications.

While people may have valid concerns about privacy and maintaining a work-life balance, it seems somewhat daft to try to keep their personal – and truer – selves quite distinct from the persona they assume at work. It is also quite understandable that companies would want to know as much as possible about a potential new employee before giving them a job. For a company looking to hire new members of staff, the process is time-consuming, expensive and risky. It is obvious they would want to lower those risks by finding out as much about a candidate as possible.

Clearly LinkedIn is a social network designed for the workplace and for promoting people’s careers. It serves as an online CV, so employers will certainly look there first when trawling the internet for more background information. What many people are concerned about is the potential for employers to access a candidate’s Facebook or Twitter account – as if employers will see that the candidate has a fun and interesting life, rather than being the serious, work-obsessed person they pretended to be, and hold it against them.

In reality, employers would much rather select a well-rounded individual who’s likely to be able to fit into the working environment and get along with colleagues, than someone incapable of interacting with people outside the office. They’re also well within their rights to want to know the candidate they’re considering giving a full-time contract to isn’t completely unhinged. This works for the candidate too, as the chances are that, if an employer disliked a person because of certain lifestyle and personal choices, then that person would probably not want to work for their company in the first place.

Social networks can also be advantageous to both employers and candidates in taking away much of the awkward formalities and introductions that go on in job interviews. If an employer has had the chance to get a feel for what the candidate is really like – and not just based on what they’ve carefully constructed in their CV – then the interview can be tailored towards the specifics of the actual job.

There has been controversy over a few companies asking candidates to grant them access to their private Facebook profiles: this is clearly different from simply browsing public profiles. Companies cannot demand access, and users are free to say no if they ask. The range of privacy settings on most social networks – admittedly sometimes not that easy to find – means users are free to hide whatever it is about themselves they are terrified a potential employer might see.

By definition, social networks are for people to connect with others and present themselves in the way they would like to be seen. Extending that to how potential employers see a user is a natural progression from a CV, and one candidates should be embracing, rather than panicking about.