Smart kitchens set to revolutionise domestic life

Talk of futuristic fridges has been around for some time now. In fact, it started even before the Internet of Things transformed how we view technology. Disappointingly, however, in the average home this key kitchen item remains pretty much as it has always been: a large cooling box. Sometimes it features a water and ice dispenser, but very little has changed in decades. And yet, as CES 2016 would have us believe, a shift could be in motion, involving more than just advanced refrigeration.

Imagine a fridge that notifies you when produce is going off so you can reduce waste, an induction stove embedded in a kitchen worktop that saves valuable space, or even a robot that cooks fresh meals in time for your arrival home. These may sound like props from a film set in 3016, but they are already in existence, though they naturally come with a hefty price tag. And, while these items may seem unnecessary, or even a symptom of an increasingly indolent society, they actually address several significant issues that we currently face – in the west at least.

Domestic nucleus

As indicated by the media coverage they receive, fridges inspire the most interest when it comes to technological developments in the kitchen space. As explained by Thingk Design CEO Stefano Marangoni: “They have great appeal because one of the most important and oldest needs of a human being is to gather food to survive, and the fridge is the home appliance that is closest to this ancestral need. So, it is obvious that there is a very strong, unconscious interest in all things concerning fridges.”

Of the few commercial offerings available at present, Samsung’s Family Hub is the most impressive. The aptly named device has a variety of innovative features, including a 21.5-inch touchscreen integrated into the door, which features an app that can order groceries online or stream content through a synced Samsung Smart TV. It also has cameras positioned inside to capture an image every time the door is closed, allowing owners to easily check the contents while out shopping – helpful for the forgetful among us. The Family Hub even has an option to date items, which then appear in the smart screen display in a bid to reduce wastage. Space is also considered, with the option to flip instantly from freezer mode to refrigerator mode depending on a home’s ever-changing needs.

Industry experts predict that, at $5,000, the Family Hub will not become a mainstream product, yet it does act as an important step in gauging consumer preferences. Unsurprisingly, Samsung is not alone in venturing into the field. Its biggest rival at present is LG, with the Signature fridge, which is not quite as ‘smart’, but has a number of practical features, such as a motion-activated door and a sensor that responds to a double knock to reveal what’s inside. As Whirlpool’s French Door fridge has a similar feature, it appears that promoting energy-saving consumer behaviour is a central focus in kitchen technology development.

All the gadgets

Aside from fridges and other clever upgrades to the appliances that we already use on a daily basis, a huge array of kitchen gadgets is appearing on the scene as well. This includes the Palate Smart Grill, which cooks food to the ideal temperature and notifies users when it is ready through their smartphone – it will even keep the dish warm if there is an unexpected delay. There are gadgets now to measure the nutrition levels of ingredients, independently grow micro-herbs, or let you know when milk is spoiling. There are also those that simply make current tasks more efficient while taking up less space, such as the GKILO, a dual timer and kitchen scale. “Resting on one side, the LED numbers act as a scale, displaying the weight of any objects placed on it”, Marangoni explained. “Flip the GKILO over, and the lights will display the current countdown. Using simple hand motions, users can easily reset the scale or adjust the timer.”

Things become a whole lot more Jetsonian with the likes of Sereneti Kitchen’s Cooki, a robotic chef that can cook entire meals from scratch with perfectly timed precision.

“The technology is a pod system where you can insert trays of fresh ingredients that are protected by a refreshing system. It is then able to add each ingredient at the right time and at the right temperature”, explained the company’s CEO, Timothy Chen. “It’s really a program language; you can synthesise it to figure out how you’re going to be able to make good food, perfectly, every time. That’s cooking; there’s an art to it, just like playing music, and it can get you great results, so-so results or very bad results.”

By drastically cutting down the time it takes to cook, in addition to elevating the quality of the results, Sereneti hopes more people may be tempted into the kitchen, as opposed to resorting to junk food.

Worktops are another huge area in which innovators are working to improve the way we cook and eat at home. At CES 2016, Whirlpool gave attendees the opportunity to explore the company’s vision for the kitchen of the future through a virtual reality experience. With items such as a ‘smart cooktop’ that interacts with pots and pans to heat and cook food anywhere a vessel is placed, while offering up recipes based on what’s available in the pantry, there is a heavy focus on making cooking more efficient, reducing both the amount of space and number of items needed.

Societal afflictions
Some argue this technology is a product of a sedentary 21st-century lifestyle, but is making cooking easier and more efficient really such a bad thing? With rates of diabetes, heart disease and other diet-related illnesses rising rapidly, it cannot be denied that society is suffering from the effects of mass-produced and processed food. “People’s lives are getting busier and busier, and we believe that with the ‘having it now’ type of economy, people tend to gravitate towards less healthy, frozen meals that are full of preservatives and salt – we believe that these people can eat better”, Chen said. And, as research by the Food and Agriculture Organisation shows, fast food habits are now infiltrating developing markets as well.

Bucking this trend is perhaps the most critical benefit futuristic kitchens can offer, as encouraging more people to return to the kitchen by inspiring and helping them to cook fresh meals from scratch is increasingly urgent in the modern age. Moreover, technology presents ways this can be achieved in much less time than it currently takes, which is crucial as a lack of free time is a major reason people turn to unhealthy fast food in the first place. Less time spent in the kitchen, while still achieving healthy meals, is the Holy Grail for this type of technology – and we are on the cusp of grasping it once and for all.

It could be argued space-age kitchens are not a crucial use of technology or money. And yet, we all hope to live in a civilisation that focuses on improvement, both self and societal, and one vital thread of that evolution is simplifying the preparation of food, which ultimately frees up time for the other important things in life.

There is no ‘new’ Silicon Valley

Since the dawn of the internet age, Silicon Valley has been at the forefront of US economic growth. Between 1994 and 2014, the Valley’s innovation industries doubled, while the rest of the nation’s economy grew by just under 60 percent. This underlines the important role the region has played, both in driving innovation and in contributing to overall GDP growth; it has done much of the legwork in pulling the US economy forward. Its GDP surpasses that of small-yet-wealthy nations such as New Zealand, and in 2011 it attracted 40 percent of all venture capital in the US. In spite of all this, however, the Valley is said to be on its way to being eclipsed by rival regions.

Uber of x
There is much concern that the lavish funds provided by sanguine venture capital firms are starting to dry up after an initial bubble-like surge in funding. One area of supposed innovation to come from the Valley in recent years has been the ‘on-demand’ app. Uber has been one of the most successful of these, leading to a slew of other companies offering services in a similar manner, claiming to be ‘the Uber of x’. However, venture capital – after an initial flurry – appears to be moving towards other prospects, cooling interest in Uber-like services.

Globally, on-demand start-up firms secured nearly $18bn of funding in 2015. As a report by CB Insights noted: “On a year-over-year basis, investors deployed 142 percent more dollars to global on-demand start-ups in 2015.” However, by the fourth quarter of the year, funding started to cool off significantly, falling to the lowest levels since Q3 of 2014. Within Silicon Valley and the Bay Area, firms such as TaskRabbit (an online work agency, or the Uber of temp recruitment) have been forced to review their working model, while the Uber of food delivery, SpoonRocket, has been forced to shut down altogether. Zenefits, the fastest-growing company in Silicon Valley in 2015, has also seen its valuation plummet of late, while, as CNBC noted, “just 52 percent of its employees believe the future of the business is bright, according to review site Glassdoor”.

Heading east
However, this change is not necessarily a challenge to Silicon Valley’s position as a major hub of tech and start-up activity. The drying-up of venture capital funding is a general issue facing tech firms, and the Valley is likely to be able to withstand a bubble – just with some of the fat built up in the good years trimmed away, as is the nature of bubble deflation. But, on top of the funding squeeze, Silicon Valley is facing increased rivalry from other regions, and it is this that truly threatens the area’s position as the world’s leading tech hub.

The 2016 Silicon Valley Competitiveness and Innovation Project noted that “while Silicon Valley continues to create jobs [and remains] one of the world’s most dynamic centres of innovation”, this status faces a number of issues that threaten its future. As the report also noted, “the foundations of prosperity [in the Valley] are under strain”. A number of factors were cited, such as a general slowdown in initial public offerings (as noted above) and a slowdown in R&D spending by universities and other institutions. However, one factor stood out as specific to the Valley: the outflow of homegrown talent to other parts of the US.

What this perhaps indicates is that, increasingly, start-ups seem to be choosing newer, cheaper regions. Before Silicon Valley, there was Silicon Prairie in the Dallas/Fort Worth area of Texas – now another region in Texas may lead the state back to the forefront of the technology business. Austin has seen its start-ups grow exponentially in recent years. According to the Austin Startup Report, the city’s 147 companies raised $966m in 2015, while it came first in the Kauffman Foundation’s Index of Startup Activity.

The Research Triangle, composed of three major research universities in North Carolina – North Carolina State University, Duke University, and the University of North Carolina at Chapel Hill – is also attracting an increasing number of tech-proficient graduates due to its lower cost of living. With one in five residents holding a graduate degree, it is one of the most educated regions in the country. Other places cited as the next Valley – with equally impressive numbers of educated residents and strong growth figures on various annual indexes and reports – include South Florida and Colorado. A recent CNBC piece even claimed “Israel may give Silicon Valley a run for its money”, citing the fact that the country has “more tech start-ups and venture capital funding per head of population than anywhere – even the US”.

All that is solid melts into air

Exactly where the ‘new Silicon Valley’ will emerge has been speculated about for years. Areas of intensive or groundbreaking research have often been cited as providing a challenge to its status, but, in many ways, this is based on a mistaken assumption of what makes the Valley unique. Computers were not invented in the Valley, nor were robots, transistors, the internet or practically any of the founding inventions of the computer, digital, internet or automated age. Rather, Silicon Valley was built upon an ability to apply any number of these inventions to people’s lives.

As software consultant and lecturer Piero Scaruffi has previously noted: “Silicon Valley was instrumental in making [these inventions] go viral.” The next big invention will not be invented in the Valley, he claimed, but rather “it is being invented somewhere else”. However, the Valley will pick up these inventions if “they promise to revolutionise the lives of ordinary people”. Across the world, “much more sophisticated technology is created, from nuclear plants to airplanes”. But intensive research is not what made the Valley what it is.

Silicon Valley has a unique, almost eerie knack for understanding the socially destabilising potential of an invention and then making lots of money out of it. As Scaruffi noted: “That’s, ultimately, what people mean when they talk of Silicon Valley as a factory of innovation.”

Such an aptitude cannot necessarily be replicated simply by the existence of a cluster of clever people graduating from or working in hubs of cutting-edge research. Another region may indeed catch on to this ability that the Valley has capitalised on – but high rates of graduate degrees and lavishly funded private universities alone do not make this a certainty. San Jose and San Francisco, according to a 2016 report from the Martin Prosperity Institute, Rise of the Global Start-Up City, still attract over a quarter of all global investment capital. Much-touted ‘next Silicon Valley’ Austin, by comparison, attracts a paltry 1.5 percent of global venture capital funding. And no other region seems likely to overtake this dominant position anytime soon.

Of the two wings of innovation – research and development – Silicon Valley leans heavily towards the development side. Merely inventing something in a research-intensive area of the US will not turn it into an innovative product.

“Turning an invention into a mass product involves creating and running a company, which is complicated enough, and then manufacturing the product and marketing it, which are both processes that have become increasingly complex in a globally competitive economy”, Scaruffi observed. And it is this that Silicon Valley seems uniquely adept at. Increasingly, a number of smaller firms with less global impact may relocate, but the solid status of Silicon Valley is harder to shake.

Will humans have a place in the world of artificial intelligence?

The impact that groundbreaking technological advances like artificial intelligence will have on the functioning of our economies and labour markets has been a hot topic for a long time. But Jerry Kaplan’s recent book Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence has impressed upon me the true size of the socioeconomic stakes.

One relatively well-known example of the digital revolution’s impact on the workings of markets is the ability to earn huge returns in high-speed trading by being a microsecond ‘ahead’ of everyone else. Another is the capacity for price discrimination by new electronic market makers like Uber, which thus appropriate every penny of the old ‘consumer surplus’ of microeconomic theory. Soon a new kind of enhanced Uber could emerge, integrating car, bus, ship, aeroplane travel and hotel rooms into one super-app. In fact, some car manufacturers are now working on exactly that.

Dominant and secure
A key question is why good old competition does not whittle away these profits rapidly. The answer often lies in the business model. Companies borrow a lot to start up, accumulate large fixed costs, and offer such low prices at first that they lose money. This enables them to expand their businesses virtually competition-free until they have established what is essentially a monopoly. At that point, they can hike prices and engage in price discrimination relatively freely.

As Kaplan points out, that is precisely what Amazon has done. It first achieved massive scale, enabling it to store not-yet-ordered products at diffuse locations and thereby reduce transport costs. Now it can offer fast and free delivery, which smaller companies cannot beat. Add to that complex algorithms that set prices in a way that maximises profits, and the company’s dominance seems relatively secure.

As this approach facilitates the rise of global super-firms, it creates serious problems for labour markets and societies, because it destroys mid-level jobs based on old skills faster than comparable jobs based on new skills can be created. Equally serious, it contributes to shocking levels of income inequality, with a few households not only enjoying massive wealth, but also wielding considerable political influence. If income becomes too concentrated too quickly, desired investment will tend to fall short of available savings, creating a Keynesian macroeconomic imbalance (contrary to popular belief, what matters is not the actual amount of income flowing towards top earners, but the changes in that income).

As it stands, the impact of these technological disruptions remains relatively small in the advanced countries. But it could affect some 20 percent of GDP and 40 percent of jobs by 2030. That is a huge shift, happening at an unprecedented speed.

Job mortgage
Though we are becoming accustomed to breaking records – in 2013, more than 90 percent of all data ever accumulated in human history had been accumulated in the previous two years – the reality is that the speed and scale of this transformation pose major challenges. If we fail to address them adequately, the outcome is unlikely to be a safer and more prosperous society, but rather one in which frustration and hopelessness increase, potentially leading to extreme behaviours.

One proposal, made by Kaplan, is to create a “job mortgage”. Companies with a future need for certain skills would act as a kind of sponsor – with the possibility of future job offers – for a person willing to acquire those skills. The worker could secure a loan against their projected future income to finance their studies. The loans would be repaid once the job began. If no job materialised, the individual would be held responsible for repaying only a fraction of the loan, with the balance absorbed by the company, which thus bears a share of the risk. There would also be a cap on repayments as a percentage of income.
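
To make the mechanics concrete, here is a minimal sketch of the repayment rules described above, written in Python. Every figure in it – the loan size, the fraction still owed if no job materialises, the income cap and the repayment period – is a hypothetical assumption chosen for illustration, not a number proposed by Kaplan.

```python
# Illustrative sketch of the "job mortgage" repayment rules. All parameters
# are hypothetical assumptions for the example, not values from Kaplan's book.

def annual_repayment(loan, income, job_materialised,
                     no_job_fraction=0.25, income_cap=0.10, years=10):
    """Return the worker's annual repayment on a job-mortgage loan.

    loan             -- amount borrowed to finance retraining
    income           -- the worker's actual annual income
    job_materialised -- True if the sponsoring company's job offer came through
    no_job_fraction  -- share of the loan the worker still owes if no job
                        appears (the sponsoring company absorbs the balance)
    income_cap       -- repayments may not exceed this share of income
    years            -- repayment period in years
    """
    owed = loan if job_materialised else loan * no_job_fraction
    uncapped = owed / years                      # straight-line repayment
    return min(uncapped, income * income_cap)    # apply the income cap

# A worker borrows 20,000 against projected future income.
print(annual_repayment(20_000, 35_000, job_materialised=True))   # 2000.0, under the income cap
print(annual_repayment(20_000, 22_000, job_materialised=False))  # 500.0, the company bears the rest
```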

Other proposals focus on reforming legal frameworks to reflect more accurately new ways of working. For example, those who are partly self-employed and partly employed by a third party would have to be better incorporated into social policies.

Such changes would undoubtedly be useful. But none of these proposals reflects the scale of the transformation that we are confronting. What we really need is fundamentally to transform the way our societies function – and fast.

Staking a claim
France Stratégie, which advises the French Government, and the American entrepreneur Nick Hanauer have proposed social-policy reforms that share a similar conceptual foundation. Individuals earn ‘claims’ over their lifetimes, first by being legal residents, and then by, say, obtaining educational degrees, earning professional certifications, performing community service (including military service), and making money. These claims can be exchanged for various benefits, such as family leave, retraining, or supplementary retirement income. The system would include all residents and be completely portable, with plenty of room for people to choose how and when to use their claims. Complementary arrangements – with, say, potential employers offering additional credits for retraining – could also be implemented.

At the same time, governments must upgrade competition policy to protect against the emergence of global monopolies. Here, new mega-regional trade deals, such as the Transatlantic Trade and Investment Partnership and the Trans-Pacific Partnership, should prove invaluable, provided that today’s powerful players are not permitted to dictate terms that advance only their interests. International cooperation should also be strengthened to prevent multinationals from avoiding taxes by shifting profits among jurisdictions. Such tax avoidance has reached hundreds of billions of dollars.

Tinkering with existing systems will not be enough to prepare us for the technology-driven transformation that we are facing. We need a comprehensive strategy aimed at making markets – and politics – truly competitive, and ensuring that public policy works for the benefit of all.

(c) Project Syndicate 2016

Here’s how diabetes treatment evolved

The existence of diabetes has been noted throughout history, across a range of different civilisations. The first description of symptoms of the disease appeared over 3,000 years ago, in an ancient Egyptian text that described patients suffering from “too great emptying of the urine”. Around the same time, physicians in ancient India also noted symptoms of the disease, which they termed madhumeha or ‘honey urine’, referring to the excessive sugar levels in the urine of diabetics. Based on this, they developed a crude method of diagnosis: observing whether or not ants and flies were attracted to the urine of suspected sufferers. Attraction indicated high sugar levels, owing to the sugar not being broken down or stored by the body. Likewise, the ancient Greeks observed the condition in around 230BC, coining the term ‘diabetes’, meaning ‘to pass through’.

Despite ancient knowledge of the condition, it has, until relatively recently, been largely untreatable. While the ancients knew the symptoms and the mortal outcome of the disease, they were unaware of the cause of diabetes: a lack of insulin in the pancreas of the sufferer, creating an inability to process sugar.

As the ancient Greek physician Aretaeus of Cappadocia wrote in the second century AD: “Diabetes is a dreadful affliction.” For a diabetic, he noted in his work On the Causes and Indications of Acute and Chronic Diseases, “life is short, unpleasant and painful”, and “within a short time [sufferers] expire”. However, owing to a number of medical breakthroughs starting in the 20th century, life for diabetics has become longer, more pleasant and less painful.

The discovery of insulin, and the development of increasingly easy ways for sufferers to administer it themselves, allowed patients to self-perform the key function their own pancreases could not: use up or store sugars in the blood. Progress in the 21st century, however, continues apace and a number of new medical developments should mean further improvements to the lives of diabetes patients.

A significant breakthrough
To make treatment of diabetes possible, a clear understanding of its cause was needed. From the 19th century, scientists had started to understand that the sugar found within the urine of diabetics was glucose. However, without the discovery of insulin and the vital role it plays in the bodies of non-diabetics, treatments were generally limited to recommending a restrictive diet. John Rollo, the surgeon-general of the Royal Artillery, developed a high-protein and low-carbohydrate diet for those believed to be suffering from the disease. Other methods included prescribing appetite suppressants such as opium. While these techniques often extended the lives of diabetics, they did not do so for very long; it was not uncommon for these diets – bereft of carbohydrates – to be so low in calories that patients would die of starvation.

It was not until the discovery of insulin by Frederick Banting and Charles Best in 1921 that anything resembling a real breakthrough in treatment became possible. This occurred following the work of Oskar Minkowski and Joseph von Mering, who found in 1889 that removal of the pancreas of dogs caused them to suffer the same symptoms as those with diabetes. This discovery was expanded upon, with attempts to narrow down the chemical within the pancreas that was responsible for regulating glucose. In 1910, Edward Albert Sharpey-Schafer came to the conclusion that this regulator was a single chemical, which he called insulin.

Based on this previous research, Banting and Best successfully extracted insulin from a dog’s pancreas in 1921. With this thick brown liquid, the two scientists treated another dog that was suffering from severe diabetes. By administering this insulin to the diabetic dog, they were able to keep it alive for 70 days, the test animal only dying once they had run out of the insulin extract. The results seemed near conclusive: diabetes was caused by a lack of insulin, and could be treated by administering it from an outside source.

Alongside their colleagues JB Collip and John Macleod, the pair developed a more refined form of insulin from cattle that, in 1922, was finally trialled on a human. A 14-year-old boy named Leonard Thompson was chosen. In hospital and facing certain death from the as-yet untreatable condition, Thompson was injected with the new refined insulin, and his glucose levels fell to near-normal levels. A successful means of treating diabetes in humans had been found. Soon, pharmaceutical firms started to produce insulin en masse, and eventually created both slow and fast-acting forms. With the cause and means of treating diabetes discovered, the task was now to develop the best technology to allow diabetics to administer this life-saving chemical.

Continued progress
Although responsible for extending the lives of untold millions, administering insulin was still fraught with a number of problems. Firstly, insulin was derived from animals such as cattle and pigs. This caused allergic reactions in certain patients, effectively excluding them from diabetes treatment. This led to the creation of synthetic ‘human’ insulin, derived from bacteria.

At the same time, the needles needed to administer insulin were terribly invasive. The first insulin treatments required intramuscular injections of five to 18 millimetres. However, the 20th century saw gradual development, culminating in the current, small, relatively non-invasive needles used to dispense insulin. And progress did not stop there. Scientists around the world are still working on a plethora of other methods of allowing diabetics to get the life-saving insulin they need in easier and more efficient ways.

One of the latest and most promising methods is the insulin patch. Developed by researchers from MC10 alongside Dae-Hyeong Kim, an assistant professor at Seoul National University, the patch promises to do away with insulin injections (or the more invasive insulin pump) altogether. A small piece of graphene studded with gold particles, the patch is able to perform the dual task of monitoring blood glucose levels and administering insulin accordingly. When it senses high glucose levels in the wearer, heaters in the patch dissolve a film that releases insulin.

The laborious task of measuring insulin levels and the slightly invasive injection needed to administer it may become relics of medical history. Of course, similar non-invasive methods of insulin administration have been tried before – from similar patches to the insulin inhaler – and have failed to deliver. But, if the history of insulin in the 20th century is anything to go by, insulin administration will become increasingly refined and perfected as time goes on.

The discovery and constant refinement of insulin has saved countless lives, and will continue to do so with increasing ease. As Michael Bliss wrote in his book The Discovery of Insulin: “[The discovery was] one of the most dramatic events in the history of the treatment of disease. Those who watched the first starved, sometimes comatose diabetics receive insulin and return to life saw one of the genuine miracles of modern medicine.” As instances of diabetes around the world grow, many more will be saved. However, the miracle of modern medicine will not end there, as further developments will make the administration of insulin an ever-easier process.

Energy storage could be the bridge that eases the transition to renewables

According to the Energy Storage Association’s Executive Director Matt Roberts: “Energy storage is changing the paradigm on how we generate, distribute and use energy.” As bold a statement as this is, it’s not unreasonable to suggest that 2015 was a breakout year for the technology. Often talked about in vague or unnecessarily technical terms, energy storage has left investors struggling to get a firm handle on what this ‘game-changer’ of an innovation means for the energy landscape.

Sayani Roy, Technavio Research Analyst and joint author of the Global Advanced Energy Storage Systems Market 2016-2020 report, stresses the technology “has gained immense prominence” in recent times, and, after decades spent as an energy asset for the future, its time has finally come.

“The electric grid”, said Roberts, “is based on 100-year-old technology that is inflexible and in need of considerable upgrades.” Energy storage, he said, “touches every part of the electric system and, in doing so, changes the paradigm and provides value to every stakeholder”.

The market for these technologies is on course to almost double in the next half-decade, and investment in thermal energy storage, batteries, compressed air, molten salt, flywheels and other advanced technologies is set to explode. In the US alone, energy storage capacity in 2015 grew 243 percent – eclipsing the previous two years combined – and investments in storage reached $365m. The emergence of new energy storage start-ups like Stem and Green Charge Networks – not to mention household names Tesla and GE – has given observers good reason to feel optimistic.

The changing market
If successful, improvements in energy storage technology could render petrol tanks redundant, and the benefits could extend to every part of the value chain. Already, the utilities sector is partway through its biggest change in a century, and so too are the capital markets that serve it. The growing competitiveness and complexity of renewables also mean energy companies can ill afford to ignore enabling technologies like storage, which not only facilitate renewables penetration, but also open the way to a more decentralised system of distribution.

According to EY’s Renewable Energy Country Attractiveness Index, utilities must redefine – and immediately – their business propositions if they’re to survive in a more decentralised landscape and serve this emerging class of producer-consumers. “The decoupling of supply and demand that arises from the flexibility offered by such solutions, combined with a more empowered consumer”, according to the company’s Global Power and Utilities Corporate Finance Leader Ben Warren, “should increasingly force the energy sector to ask what is required, why, when and by whom when constructing the investment for new projects.”

Utilities and grid operators in particular support energy storage as a more cost-effective and flexible solution for non-traditional energy technologies. “At the distributed scale, energy storage allows more effective load control and reliability benefits, creating a more responsive and resilient grid”, said Roberts. “Because it provides value at every scale, energy storage is a critical facet of a smarter grid, and its flexibility accelerates our deployment of clean technologies like wind and solar.”

The decline in battery prices and the emergence of government incentives to support energy storage systems also mean utilities and energy companies more generally are ploughing investment into such technologies. In short, the challenge that energy storage solutions are seeking to address is one of intermittency, and the ability to store energy in times of plenty and supply it in times of scarcity is a crucial part of the renewables equation.

Roberts stressed that energy storage can create a more efficient and flexible grid, and drive renewables growth in two major ways. First, consistent output makes it easier to interconnect renewables, and second, energy storage reduces the need among utilities firms for capital-intensive investments in inefficient systems.

Storage is not deconstructing centralised systems of electricity generation, although this isn’t to say the development of these technologies is any less revolutionary. Changing the paradigm on how we generate, distribute and use energy would be a better description, although questions remain as to whether these technologies can generate consistent returns.

Tesla Energy
Tesla last year unveiled Tesla Energy, “a suite of batteries for homes, business and utilities fostering a clean energy ecosystem and helping wean the world off fossil fuels”. Some, however, saw the gamble on residential-level installations as a short-sighted one. For starters, 87 percent of the energy storage capacity installed in last year’s second quarter was bought up by utilities companies, while commercial and industrial owners accounted for just one percent.

The advent of energy storage technology, particularly on the commercial and residential front, will take time. According to Moody’s, incremental improvements to existing battery technologies mean costs have fallen somewhere in the region of 50 percent in the last five years. A report from Lazard suggests advances in storage technology show no signs of abating in the next five years, to the point where storage could displace gas-fired capacity.

Frost & Sullivan data shows the global market for utility-scale, grid-connected storage amounted to around $460m in 2014 and is forecast to reach the $8.3bn mark before the century reaches its quarter point. While the monetary worth of the home storage market is significantly lower, it is growing fast. Lux Research indicates that almost 14,000 battery units were installed in the opening nine months of 2015 – more than twice the amount for the entirety of 2014 – and Tesla alone expects to install 29,000 in 2016.

The energy storage technology that features most heavily on investors’ radars is surely the lithium-ion battery, which without doubt receives the largest share of investment and will likely remain dominant for at least the next few years. The ability to power consumer electronics and electric cars for longer has given it a significant lead over competing technologies. However, the drawback is that lithium-ion batteries don’t come cheap.

“In terms of technology, right now the US energy storage market is dominated by lithium-ion batteries, which can provide both short and medium duration services effectively”, said Brett Simon, an energy storage analyst with GTM. “Lithium-ion batteries have undergone a massive cost reduction in the past few years, driven in part by the consumer electronics industry and the automotive industry.”

Molten salt, lead-acid and flow batteries are likewise giving utility companies reason to invest. Panasonic, Samsung, Tesla, GE and a string of major companies are clamouring for a slice of the pie and their investments are ushering in many of the most impressive developments.

As Roberts attests: “Energy storage is not any one technology, it is a way to apply technology.” Whether utilised for generation, distribution, or even load, the ability to store energy is inherently valuable in that it allows utilities to focus on new alternatives. “Think of pickling, canning and refrigeration – all different food storage ‘technologies’ ideal for different applications. Their collective impact on our food supply chain is an apt analogy for how cost-effective, reliable energy storage changes the grid. When you can store energy, it changes the way that you plan the system, impacting every facet of integrated resource planning.”

Is business travel still relevant? More than ever before, it is

Think about it: you can call, email and even watch your counterparty on FaceTime, Skype or GoToMeeting. So why do companies fork out more than $1.2trn a year – a full 1.5 percent of the world’s GDP – for international business travel?

The expense is not only huge, it is also growing – at 6.5 percent per year, almost twice the rate of global economic growth and almost as fast as information and telecommunication services. Computing power has moved from our laptops and cellphones to the cloud, and we are all better off for it. So why do we need to move brains instead of letting those brains stay put and just sending them bytes? Why waste precious work time in the air, at security checks, and waiting for our luggage?

Travel and development
Before anyone starts slashing travel budgets, let’s try to understand why we need to move people rather than information. Thanks to a research collaboration on inclusive growth with MasterCard and an anonymised donation of data to the Center for International Development at Harvard University, we are starting to shed some light on this mystery. In ongoing work with Dany Bahar, Michele Coscia and Frank Neffke, we have been able to establish some interesting stylised facts.

More populous countries have more business travel in both directions, but the volume is less than proportional to their population: a country with 100 percent more population than another has only about 70 percent more business travel. This suggests that there are economies of scale in running businesses that favour large countries.

By contrast, a country with a per capita income that is 100 percent higher than another receives 130 percent more business travellers and sends 170 percent more people abroad. This means that business travel tends to grow more than proportionally with the level of development.
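
These two relationships can be read as rough elasticities. The short calculation below simply converts the percentages quoted above into implied elasticities; it is an illustrative back-of-the-envelope reading of the stylised facts, not a figure taken from the underlying study.

```python
import math

# implied elasticity = log(1 + change in travel) / log(1 + change in driver)
def implied_elasticity(driver_increase, travel_increase):
    return math.log(1 + travel_increase) / math.log(1 + driver_increase)

# 100% more population -> ~70% more business travel (economies of scale, elasticity < 1)
print(round(implied_elasticity(1.0, 0.7), 2))   # 0.77

# 100% higher income per head -> 130% more inbound business travellers (elasticity > 1)
print(round(implied_elasticity(1.0, 1.3), 2))   # 1.2

# 100% higher income per head -> 170% more outbound business travellers
print(round(implied_elasticity(1.0, 1.7), 2))   # 1.43
```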

While businesspeople travel in order to trade or invest, more than half of international business travel seems to be related to the management of foreign subsidiaries. The global economy is increasingly characterised by global firms, which need to deploy their knowhow to their different locations around the world. The data shows that there is almost twice the amount of travel from headquarters to subsidiaries as there is in the opposite direction. Exporters also travel twice as much as importers.

Moving brains
But why do we need to move the brain, not just the bytes? I can think of at least two reasons. First, the brain has a capacity to absorb information, identify patterns, and solve problems without us being aware of how. That is why we can, for example, infer other people’s goals and intentions from facial expressions, body language, intonation, and other indicators we gather unconsciously.

When we attend a meeting in person, we can listen to the body language, not just the spoken word, and we can choose where to look, not just the particular angle that the video screen shows. As a consequence, we are better able to evaluate, empathise and bond in person than we can with today’s telecom technologies.

Second, the brain is designed to work with other brains. Many problem-solving tasks require parallel computing with brains that have different software and information but can coordinate their thoughts. That’s why we have design teams, advisory boards, interagency forces, and other forms of group interaction.

Conference calls try to match this interaction, but it is hard to speak in turn or to see one another’s expressions when someone is talking. Conference calls have trouble replicating the intricacy of human conscious and unconscious group interactions that are critical to solve problems and accomplish tasks.

All about the knowhow
The amount of travel should then be related to the amount of knowhow that needs to be moved around. Countries differ in the amount of knowhow they possess, and industries differ in the amount of knowhow they require. Controlling for population and per capita income, travel is significantly more intense to and from countries and industries that possess or use more knowhow.

The countries that account for the most travel abroad, controlling for population, are all in Western Europe: Germany, Denmark, Belgium, Norway and the Netherlands. Outside of Europe, the most travel-intensive countries are Canada, Israel, Singapore and the US, a reflection of the fact that they need to deploy many brains to make use of their diverse knowhow.

Interestingly, countries in the developing world differ substantially in the amount of knowhow they receive through business travel. For example, countries such as South Africa, Bulgaria, Morocco and Mauritius receive much more knowhow than countries at similar levels of development such as Peru, Colombia, Chile, Indonesia or Sri Lanka.

The fact that firms incur the cost of business travel suggests that, for some key tasks, it is easier to move brains than it is to move the relevant information to the brains. Moreover, the fact that business travel is growing faster than the global economy suggests that output is becoming more intensive in knowhow and that knowhow is diffusing through brain mobility. And, finally, the huge diversity of business travel intensity suggests that some countries are deploying or demanding much more knowhow than others.

Rather than celebrate their thrift, countries that are out of the business travel loop should be worried. They may be missing out on more than frequent-flyer miles.

Banks must embrace the cloud or risk destruction by tech-savvy challengers

Banks are, by their very nature, conservative, so it’s no surprise that they take a more circumspect approach to new technologies. While other industries are quick to adopt new devices, software and ways of working, banks have a much higher risk profile, which means they must always proceed with caution when considering implementing technological innovations – such as the cloud.

Whereas many different business sectors pioneered the use of new cloud services, the financial services (FS) industry, for the most part, preferred to wait until legitimate questions over security and compliance could be satisfactorily answered. Although this is a safe and sensible strategy, the conservatism of traditional FS institutions has seen them overtaken by a new breed of tech-savvy challenger banks, whose use of the cloud enables them to provide innovative and customer-centric services at a far lower cost. This, more than any other factor, is driving established FS businesses to make the move to the cloud.

The price of fear
There are many incentives for moving to the cloud, and these differ as much between individual companies as they do between sectors. For FS institutions, however, the most common motivation is the ability to compete with new challengers, such as internet-only banks and investment apps, and to cut the costs and complexity of their legacy infrastructure.

This natural conservatism – or, more bluntly, fear – has left many established FS businesses playing catch-up. The industry as a whole has seen how forward-looking businesses have used the power of the cloud to achieve much greater agility, lower running costs and quicker time-to-market for new financial products and services. The clock is ticking for those that have yet to embrace the cloud.

Industry analyst Gartner has predicted that, this year alone, a poor return on equity will drive more than 60 percent of banks worldwide to move the majority of their transactions to the cloud. For an industry that is heavily dependent on shifting compliance and regulatory requirements, this statistic is promising. The question for FS organisations today is therefore not ‘do we dare trust the cloud?’, but rather ‘how soon can we migrate?’.

Profiting from the cloud
Every FS firm is different and has unique needs, so it is good to know that reputable cloud providers do not take a one-size-fits-all approach to their services. That being said, traditional banks and other FS institutions are likely to demand certain outcomes from their cloud deployments, including a much greater degree of flexibility compared to that offered by legacy systems.

The cloud enables this agility by separating workloads from the operating systems, storage and layers underneath, allowing IT departments to test and deploy applications on the most modern infrastructure and architecture. Cloud platforms also allow IT teams to test new applications internally before rolling them out to customers. This means less successful applications can be abandoned quickly, saving IT departments time and money, and making the fine-tuning of new services more efficient. The overall result is that FS institutions can dedicate more time to developing innovative applications that deliver greater customer engagement.

Of course, the issue of security is still paramount for banks, which is why FS cloud deployments tend to balance flexibility with robust safeguards. For example, it’s more common for institutions to choose private or managed cloud services, which balance the requirements of agility and security. These models give organisations greater control over the management and security of the infrastructure components of their IT environments, while supporting the need for faster innovation and extremely high availability and performance.

Given the concerns regarding compliance and security, it’s not surprising most core banking services are located in private clouds. However, there are plenty of functions where banks can take advantage of public clouds and organisations can pay to access applications ‘as a service’ over the internet, rather than hosted in their own data centres. Applications such as enterprise resource planning, human resources or billing and revenue management all lend themselves well to the public cloud approach.

For other FS organisations, the most attractive offering for many of their infrastructure and application requirements is a hybrid approach, leveraging a combination of on-premises, private and public cloud platforms. The key to successful hybrid environments is interoperability of information across these various platforms and delivery models.

Meeting customers’ demands
Customer expectations are changing, and every FS provider needs to ensure they remain relevant to a population that prefers to engage with the world through smart devices and apps. Traditional banks must prioritise providing clients with on-demand and consistent access to services through all of these channels if they are to stand out from competing challenger banks.

The rise of mobile banking is a prime example of how customers today want to engage with their financial institutions anywhere and at any time. In response, banks should be providing them with applications that encourage and facilitate this preference.

Banks cannot hope to provide the level of usability, performance and reliability customers have come to expect unless these new mobile and internet services are based on the very best underlying infrastructure. To achieve this and develop innovative applications that deliver an excellent customer experience, FS organisations should be embracing cloud deployment.

The cloud presents FS institutions with the means to digitally transform their businesses and stay ahead of growing competition. Those organisations that have already begun to deploy cloud solutions – whether public, private or hybrid – have been quick to see the enormous benefits that can accumulate, including the ability to bring services to market faster, or to dramatically reduce operational expenses for their IT infrastructure.

The FS industry will always have unique requirements for security and compliance, which mean it will never be able to deploy the cloud in the same way as less tightly regulated industries. However, that does not mean it cannot achieve digital transformations that are every bit as fundamental and wide-reaching as those achieved by businesses in other sectors.

The history of ransomware: attacks threaten online security for businesses

While it leaves you tantalisingly close to your data, ransomware also keeps it very far away from you. The files are still on the hard drive, but thanks to complex cryptography, they are completely inaccessible. It’s a form of malware that will continue to spread and one that is difficult to stamp out.

Ransomware does pretty much exactly what it says on the tin – it holds data hostage in exchange for a ransom. The program gets into a computer system, usually through an attachment or under the guise of another program, and places some form of lock on the computer or the information it holds, sometimes under the pretence of being from a law enforcement agency. Users are unable to access data until a specified fine is paid. While the scams were pretty simple at first, modern ransomware may as well be considered unbreakable.

Hacked history
Paul Ducklin, a senior security advisor at Sophos with decades of experience in cybersecurity, said ransomware attacks date back as far as 1989. “Obviously there was no internet so the guy mailed out floppy disks. Supposedly it was an expert system that gauged your risk of contracting HIV, called an ‘AIDS Information Program’, but in fact it was really a cover for the fact that, if you didn’t pay his licence fee upfront, then after 90 days he scrambled the data.”

Fortunately, the software was relatively easy to crack. Due to the lack of internet, the key needed to unscramble the information was the same for every disk. Ducklin said the software was decoded by one of the founders of Sophos, and the hacker never made any money from the scheme.

“Amazingly, the money he was after was almost exactly the same price point as today. It’s not like petrol where the price goes up and down as the price of oil changes; it stays surprisingly steady in the long run. It was $378 you had to send via banker’s draft to Panama.”

During early ransomware attacks, crooks struggled to find a way of effectively eliciting payment. In the years between the mailed-out disks and today’s complex attacks, ransomware mostly took the form of lockscreen programs: the software locked a computer, but left the data untouched. Ducklin said this was easily bypassed by booting an antivirus program from a USB or CD, and removing the malicious software.

Ransomware became far more threatening with the advent of Cryptolocker in 2013. Cryptographic ransomware – much like the earliest attacks – scrambles files and demands payment for the key. The difference is the key is now generated by the attackers and stored remotely, meaning it can’t be calculated or guessed without breaking into the attacker’s servers or cracking the incredibly robust code. While a high-profile break-in of a scammer’s server was staged by security experts in 2014, a similar feat would today be unlikely.

Attacks are now on the rise. According to a report published by the Australian Cyber Security Centre, 17 percent of businesses had experienced a ransomware incident in 2013. By 2015, that number had jumped to 72 percent. Attacks have been recorded across Windows, Mac OS and Android devices.

Ducklin said the recent rapid increase could be attributed to criminals now having access to strong cryptography software and a secure, untraceable payment method. “The crooks have figured: ‘Hey, we should just read some books on cryptography and maybe use Microsoft Crypto API, which is all built into Windows.’ You can get strong cryptography for free if you like. It’s easy to do it right.”

Previously, payments were demanded through Green Dot MoneyPak Cards – reloadable prepaid debit cards. Green Dot decided to stop selling them in 2015 due to their widespread use in illegal activities. Now ransoms are typically demanded in Bitcoin.

Cheaper to give in
Ducklin said the success ransomware has recently found could also be because the scammers tend to come good on their promise to decrypt the scrambled files. “Annoyingly there’s been a vague, what you might call if you want to be really keen, honour among thieves. For all that these guys are rogues and cybercrooks, generally the gangs who’ve been successful are the ones who’ve quickly developed a reputation that, if you pay, you will get your data back. That seems to be the tipping point, if you like, I’m guessing for most people.”

When a person or company could spend hundreds of dollars on repairs only to find their data still cannot be unlocked, a comparatively cheap unlock key is probably the easiest fix. This was certainly the case in February, when the Hollywood Presbyterian Medical Center was hit by a ransomware attack. The ransomware reportedly shut down the system used to share patient information across departments. After two weeks, the hospital cracked and paid 40 Bitcoins for the unlock code, roughly $17,000 at the time.

“The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key”, said CEO Allen Stefanek in a statement released after the payment was made. “In the best interest of restoring normal operations, we did this.”

Similar stories have come out of some police departments in the US. Police in Tewksbury, Massachusetts paid $500 to unlock a computer in 2014, according to the Tewksbury Town Crier. If you haven’t got a backup, paying is the only way to get back your files. But even if a company has backed up files, Ducklin said it still might be cheaper and easier to pay the ransom.

“You hear stories of people in the business environment who regret they’ve been hit by malware but for them it’s never been the end of the world. They do the arithmetic and they go: ‘You know what, if we go to IT and raise a backup request and someone goes on their little scooter and gets the tapes from safe deposit and brings them back and loads the tapes, we’ll get the data back, but it’ll take a couple of days and it’ll probably cost us $500-$600 in internal operating costs. Versus if we just pay the guys the bloody bitcoins, and then we can think about how not to get caught again.’ From a sort of business and security perspective, Sophos’s opinion would be: ‘Don’t pay’, but we’re not so hard hearted as to say it would be morally indefensible if you did.”

Ducklin said that, although ransomware is confronting and frustrating, it’s by no means the worst thing a business could be hit with. At the end of the day, the malware isn’t destructive and the data is retrievable and more or less secure.

If a business were to put resources into protecting against ransomware, it would also protect it from far more destructive forms of malware. Ducklin said educating staff in a judgement-free environment and setting up a way for people to easily ask an expert cybersecurity questions is the best preventative measure a business could take, after installing security software, that is.

Malware continues to be a profitable endeavour for thieves, with Trustwave recently estimating a criminal could bank $84,100 per month once a ransomware campaign is active. With such a profitable system, attacks are not going to stop anytime soon. The best way for businesses to protect against any and all threats is to keep staff informed and, as always, back everything up.

Why your start-up will probably fail

Last Christmas, like every other, had its ‘hottest gift’, and 2015 was ever so briefly the year of the hoverboard. The small two-wheeled boards, controlled by leaning, saw their popularity swiftly dampened when most failed safety checks and a small number began to explode. Like many other products rushed to market, it failed to live up to the hype and, quite literally, crashed and burned. It’s the trajectory of the typical technology start-up.

While companies boasting seemingly unstoppable levels of growth dominate the headlines, most quietly fade into obscurity. For every Uber, Snapchat and SpaceX, there are dozens of failed operations, including some that generated similar levels of initial interest. There are plenty of traps for entrepreneurs to fall into.

In memoriam
Last year saw a number of high-profile technology start-ups – some once considered darlings of the industry – shut down. Quirky was launched in 2009 as a platform for everyday people to pitch their ideas for new gadgets. Concepts were workshopped by the community, with Quirky developing and selling the most popular and paying out a percentage of profits to anyone who had helped. It struck gold with gadgets such as Pivot Power, a shape-changing power strip, but they weren’t enough to outweigh some disastrous flops: Egg Minder, an expensive egg tray that alerted your smartphone when you were running out of eggs, was a complete mess. The company also spent over $800,000 developing a set of wheels that could turn anything into a remote control car and a bathroom mirror that didn’t fog up, neither of which ever made it onto shelves, according to The Verge. Quirky raised $185m from investors but ran out of cash in August and was sold to a holding company.

By year 10, 71 percent of all start-ups, regardless of industry, had failed. Building something to last is tough, especially without help

Homejoy, an app for booking on-demand house cleaners, was founded in 2012. It raised $40m from investors but was never profitable, mainly due to poor customer and worker retention: people signed up for a heavily discounted first clean and never came back. The company was also under immense legal pressure from four pending lawsuits over the classification of its workers.

Secret was a social network where people could post anonymously; it raised $35m in 16 months but struggled to contain cyberbullying and was quietly shut down. Grooveshark and Rdio lost out to bigger music streaming companies despite raising $6m and $125.7m respectively. Leap Transit, a boutique commuter bus service in San Francisco, once described as a “crock of shit” by a city supervisor, generated more money in the sell-off of its fleet of luxury buses than it ever did from users.

The odds are that a tech start-up will fail in its first few years. Research by the Statistic Brain Research Institute found only 37 percent of information-sector start-ups were still operating after four years. By year 10, 71 percent of all start-ups, regardless of industry, had failed. Building something to last is tough, especially without help.

Andy Shannon is head of Startupbootcamp Global, an international start-up accelerator. Startupbootcamp invests in and mentors technology start-ups, guiding them through the difficult early stages. Each of its programmes focuses on a sector (e.g. fintech) to give start-ups targeted advice and access. Shannon said start-ups at the seed stage have a successful exit roughly 10 percent of the time, but those with mentoring have a far better chance.

“Talking from our own experience, the 300-plus companies we have invested in, in the last five years, have [had] an 80 percent survival rate”, he said. “We believe being part of Startupbootcamp will increase their exit rate much higher than 10 percent, in addition to the start-ups being strong from the start. We are already seeing over 70 percent of the teams raise funding, which is much higher than average. With exits taking place between eight and 15 years after seed stage, and Startupbootcamp existing five years, we have some time before expected exits will occur.”

So what goes wrong?
For young start-ups, there is no shortage of potential business-killing hurdles. Shannon said the first test should be seeing if there is a market for your product. “For instance, at the very early stages, some may struggle with finding the right product/market fit by not testing their hypothesis on the value, use and need for their product. Similarly, acquiring their initial customer or users is another challenge start-ups may face at the very early stages. On the other hand, once they’ve passed the early validation period, other challenges arise, such as not having a recurring revenue, metrics that show the business is not growing, or not being able to raise the investment needed to support product development or market expansion.”

When a start-up goes under, it’s now common to see a final blog post from its founder lamenting, celebrating or venting about what exactly went wrong. CB Insights has been maintaining and analysing a collection of these posts since 2014, identifying the most common reasons cited: 42 percent listed a lack of market for their product – which shouldn’t be surprising – while only 17 percent mentioned a poor product, and the same proportion cited the need for, or lack of, a business model.

Still, once a business has proved a viable market exists, finding the right growth rate is the next challenge. Start-ups often begin life as loss-making operations, banking on reaching a scale at which they can turn a profit. This is where funding comes in, providing the cash injection needed for rapid growth.

The dilemma then becomes how to spend the money on growth without growing so fast the company can never make back the cash it has already spent. Once a couple of million dollars is sitting in the bank, the pressure is on to spend it. But if the business grows to a size at which it can’t sustain itself, or can’t attract further investment, it will eventually fail. It’s a trend identified by Compass’s 2011 report The Startup Genome, which suggested premature scaling is the cause of 70 percent of failures.

Shannon said premature scaling and too much funding go hand in hand. “So often, young start-ups raise a few million dollars prior to true product/market fit, and they feel like they need to ramp up spending regardless if they have proven their scaling metrics”, he said.

Premature scaling should be an avoidable problem, but overconfident entrepreneurs are often the source of their own demise. The Startup Genome found entrepreneurs serially overestimate themselves: start-ups needed two to three times longer than their founders expected to validate their market, founders overestimated the value of their IP before product/market fit by 255 percent, and start-ups that hadn’t yet raised money overestimated their market size by a factor of 100. It’s hard to blame them when it often looks so easy.

The lucky ones
Spectacular successes dominate the headlines generated by the technology start-up sector. Unicorns, named for their rarity, are companies that command a valuation in excess of $1bn, a feat once unheard of in the technology sector. From the outside, it looks like anyone with a good enough idea can join the start-up elite.

But there is such a thing as ‘survivorship bias’: it denotes a tendency to focus on examples of success over examples of failure. David McRaney, author of You Are Not So Smart, detailed the concept, which dates back to the Second World War. Returned bombers were filled with bullet holes along their wings and tails, so commanders wanted to put extra armour in those areas to increase survivability.

Researchers argued this wouldn’t improve the odds of survival at all: a plane could evidently be shot in those areas and still make it home. What would actually be useful would be identifying where the planes that never came back had been hit, and reinforcing those areas instead.

The same concept can be applied to the start-up sector. Success stories abound both in headlines and speaking circuits. The small team of entrepreneurs pulling all-nighters in a basement to get their first prototype up and running, the small launch, the first round of investors and then unstoppable growth is the industry’s equivalent of the Arthurian legend; anyone can be king if they can pull the sword from the stone. Shannon said survivorship bias is a very real part of such dreams.

“A start-up with $15m-plus in funding is often suddenly displayed as a case in what all other start-ups should try to emulate. What most forget is those founders and that company were successful at one specific point in time because of many variables. Is what they did applicable to other start-ups? Possibly, yes. But will the exact same thing happen to the next start-up? Unlikely.”

It’s pretty easy to dismiss failed founders because they are, well, failures, even though they might be the best equipped to hand out advice. Research conducted by Francine Lafontaine and Kathryn Shaw looked at the success and failure rates of businesses that opened in Texas between 1990 and 2011. On average, half closed within two years and 71 percent of entrepreneurs gave up; the 29 percent who persevered were far more likely to succeed the second time around. The more often a person fails with a business, the better they are likely to do in the future.

Turn on a dime

Even if a start-up is on the ropes, it has one last risky measure it can use to save itself. Popularised by entrepreneur and author Eric Ries in his book The Lean Startup, a ‘pivot’ is when a company takes a completely different business direction. If its product or service isn’t working, a start-up small and nimble enough can use what it has learned to develop a new one.

A prime example of a successful pivot is Groupon, which started as activism website The Point in 2007. A person could register their interest in attending something (for example a protest), but would only commit to going if a certain number of other people also registered their interest. It failed to generate much interest, largely due to how unfocused it was; people could use it for virtually anything, so they used it for nothing.

The most popular campaigns on The Point were the ones that gave people buying power in numbers, such as a 20 percent discount if enough people committed to buying something. The Point’s founders saw the value in this and spun the site off into Groupon, the bulk deals website it is today. It became, at the time, the fastest growing company in web history. However, more recently, Groupon’s stock has suffered a major slide due to concerns it can’t expand beyond its current state.

But Shannon said he thinks the term ‘pivot’ is thrown around a little too often among start-ups. “To me, the key is having a vision for solving a core customer problem, and searching for a business model while solving that need. The best start-ups stay nimble during the early years by developing a minimum viable product and adopting the lean start-up methodology, where they build, measure and learn. This creates a feedback loop that provides data-driven insights.”

Shannon said, if these tests show a business is failing, it might be time for a pivot. “Of course, in an ideal scenario, if the start-up has a solution that solves a real-world problem and has validated their assumptions through their minimum viable product, then there won’t be a need for a pivot, and they can just go on taking over the world!” For start-ups to last in the long term, a cooling of expectations needs to happen. While it’s certainly possible to develop the next Snapchat, attaining a similar level of success is not easy or likely. Careful decisions, steady but restrained growth, and a refusal to emulate the unrealistic expectations of unicorns are what is needed if a company is to last longer than your average hoverboard.

The end of managers – workers are now able to supervise themselves

Tony Hsieh, Chief Executive of Las Vegas-based shoe and clothing retailer Zappos, last year told his 1,500 or so employees of his intention to employ a flatter management structure and do away with so-called ‘people managers’. The memo, which ran to an unwieldy 5,000 words and waxed lyrical about the virtues of self-organisation, advocated a ‘rip the band aid off’ approach. Hsieh’s alternative is an egalitarian dream without traditional managers, where employees are free to govern themselves.

Any employee not in favour of Hsieh’s self-governing ‘circles’ was given the opportunity to bow out of the transition in exchange for a minimum three months’ severance. Some 210 employees – 14 percent of the workforce – took the money and ran.

The latest Gallup Employee Engagement Index shows 29 percent of employees could be self-managed without any encouragement and a further 52 percent could do the same given the right environment and motivation

It goes without saying that a flat organisational structure is not for everyone, and even the model’s most vocal supporters have warned against ditching a conventional hierarchy too suddenly. Hsieh himself conceded “managers were absolutely necessary and valuable to the growth of Zappos over the years under our previous structure”. And even Jacob Morgan, best-selling author of The Future of Work and a vocal critic of rigid hierarchies, said: “A completely flat structure is not something I would recommend for any company that has more than a few hundred employees, and even that might be tough.” Abolishing managers outright can create confusion among investors, customers, and the employees themselves.

The Zappos take on self-management is a ‘holacracy’, designed to boost transparency and streamline operations through committee-like circles. According to Brian Robertson, cofounder of HolacracyOne and author of Holacracy, the model represents a third way for organisations where traditional hierarchy has reached its limits and flat management alternatives lack the rigour to run a successful business. “It brings structure and discipline to a peer-to-peer workplace”, he said, and the core principle is that, by empowering the workforce, people will be more productive.

If nothing else, Zappos underlines the seriousness with which major companies are moving away from the ‘legacy management hierarchy’ and embracing flatter alternatives. It’s only natural some will stray from the conventional hierarchy, particularly as the transition to software-defined enterprise takes hold and information costs plummet.

“Information came on a need-to-know basis, so those at the top had access to everything”, said Morgan. “Today, information is the largest commodity that anyone can access, which means that, instead of employees having to push information up, managers instead need to push authority down to enable employees to make decisions.”

The adoption of completely managerless alternatives, however, is a decision reserved only for the most radical of organisations.

The future of work
Flat – or self-managed – organisations are unlike any other in that they typically have no job titles, no managers and no executives. With a short chain of command and wide span of managerial control, a flat structure facilitates a far greater range of communication and streamlines decision-making processes. On the flip side, a flat structure has been shown to stifle innovation, discourage middle managers and clog up communication channels, if employed to ill effect.

The latest Gallup Employee Engagement Index shows 29 percent of employees could be self-managed without any encouragement and a further 52 percent could do the same given the right environment and motivation. The upshot is that 81 percent of employees, in the right conditions, would be willing to adopt a managerless structure, which means a traditional management hierarchy plays to a minority and stifles the majority who feel they would do just as well – or better – without a senior figure looking over their shoulder.

It’s no surprise, therefore, that Zappos’ holacracy is only one of several alternative organisational structures that promise to redefine the workplace. Network-centric and lattice organisations are also growing in popularity, and the reasons for their proliferation are many.

The internet has accelerated engagement with flatter organisational alternatives and, as a result, the majority of early adopters are privately held and/or tech companies headed by Millennials with few – if any – ties to the old organisational structures. The technology is there to ensure every employee can access information and have a greater degree of autonomy. New and improved platforms have given employees the power to decide among themselves who is best placed to complete the task at hand.

Game company Valve has been held up as an example of how to implement this set-up and, partly as a result of its structure, the company reportedly boasts higher profits per employee than either Google or Apple. “This company is yours to steer toward opportunities and away from risks”, reads the Valve employee handbook. “You have the power to green-light projects. You have the power to ship products.”

Flat isn’t for all

This isn’t to say the appeal is limited to tech-minded Silicon Valley types. Zappos is a shoe and clothing retailer, Morning Star is a tomato processor and Sun Hydraulics manufactures screw-in hydraulic cartridge valves. Researchers at the University of Iowa and Texas A&M University, meanwhile, followed 587 factory workers and determined self-managed teams delivered all-round improved performance.

That said, some organisations are better suited than others. For one, larger companies with thousands of employees will find an organisational overhaul on this scale difficult to stomach, and whether or not workers are more productive without a manager boils down to self-motivation.

The point isn’t that managers should be fired; it’s that flatter approaches can streamline processes. “[Older] structures are rigid and make it hard for communication, collaboration and innovation to happen”, said Morgan. “Not only that, but employees don’t want to work for these types of organisations, so it has also become an issue of attracting and retaining top talent.”

Going completely managerless is one of the more extreme responses, but it is also one of the more successful. Morgan’s conclusion is that new models are emerging for one simple reason: “Rigid hierarchies are bad for business.”

The sharing economy is creating a third way between employment and unemployment

The current generation wants it all: money, flexibility, a work-life balance, plus a diverse, successful and varied career. The nine-to-five is fast becoming irrelevant. The answer? The new gig economy, or shared economy as it is also known: being a taxi driver as and when you like with Uber; renting your house out when you want to via Airbnb; offering skills and services on a whim via TaskRabbit. We want to work how and when we want, but we also need a reputable platform to provide us with access to the market, a known brand and the credibility to help us when things go wrong. The shared economy is expanding into many sectors; it has even reached the corporate end of the legal market, with the legal interim market now part of the shared economy, providing lawyers with all of the above.

In the UK, to take one example, the shared economy is currently valued at around £500m and engages approximately five million people. A recent report by PwC estimated the UK shared economy market might increase to £9bn by 2025. Interestingly, the report estimated that, in 2013, the five main shared economy markets – peer-to-peer lending and crowdfunding, online staffing, peer-to-peer accommodation, car-sharing, and music and video streaming – generated $15bn in global revenue. On these projections, the market will have expanded by 1,700 percent in the UK and 2,133 percent globally in little more than a decade.
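Those percentages follow directly from the headline figures. As a quick sanity check, the arithmetic is sketched below; the $335bn figure for the global market in 2025 is PwC’s widely cited projection and is taken here as an assumption, since it is not stated above.

```python
def percent_growth(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100

# UK market: roughly £500m today to a projected £9bn by 2025 (figures in £bn)
print(round(percent_growth(0.5, 9.0)))   # -> 1700

# Global market: $15bn in 2013 to a projected $335bn by 2025
# (the $335bn figure is an assumed PwC projection, in $bn)
print(round(percent_growth(15, 335)))    # -> 2133
```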

However, it is not without its challenges and is already facing a significant regulatory hurdle. The regulators have failed to keep up with this new economy and, given the reaction from traditional providers, and regulatory and governmental bodies, it seems highly likely regulatory issues will either make or break this market. Two of the best-known shared economy companies, Uber and Airbnb, are both under regulatory (and therefore public) scrutiny; Uber has come under attack from many bodies, while New York City has been aggressively taking on Airbnb and the short-term rental market. The sharing economy is under increasing attack: rather than a model for people to make money on their own terms, the growing concern is that this new economy is a precarious one for workers – that, while the corporations profit from this new piecework economy, those working for them do so without a safety net.

A recent report by PwC estimated the UK shared economy market might increase to £9bn by 2025

Brave new world
Yet, while the regulators are shining their light on the shared economy, this new economic model has no precedent. As yet, they have not provided an answer as to how to deal with it, caught between the need to protect workers and consumers and the desire not to stifle an increasingly important market.

Historically, there have been two categories of workers: the employed and the self-employed. The law was created to reflect this and, up until this point, has proven adequate. Now the lines are becoming increasingly blurred; although Uber drivers are technically self-employed, in reality they probably fall somewhere in between self-employment and employment. The most controversial issue facing employers in the new gig economy is that they are potentially exploiting workers who receive no benefits or safeguards and do not know how much they will be paid or what they will be doing next. These people fall through the welfare net and, arguably, a number of companies that make use of the shared economy have capitalised on those already outside the employed workforce. This has taken place within the legal profession as much as the taxi market.

It would be interesting to know where these individuals would have been without the shared/flexible economy. Did these individuals leave jobs or were they already outside the employed workforce? Workers seem to include individuals outside mainstream employment who are looking for a way back in: working parents who were shut out of the market due to childcare costs and logistics; students; and part-time or shift workers trying to make ends meet.

For all the criticism levelled at shared economy employers, it is important to note that, fundamentally, the shared economy gives the demand and supply sides some, if not absolute, access to, and control of, each other. We hail a taxi and the driver can choose whether or not to accept, while the platform provides both drivers and customers with a market and quality control. The shared economy is about access to the market, and the platforms facilitate this; this is the new resourcing model. Nor is it as simple as defining all shared economy workers as employees or even zero-hour contractors, as that would lead the ‘employers’ to limit flexibility and demand greater control – the very things many of the individuals using these platforms may have left mainstream employment to escape.

A third way
So what is the answer to the emergence of this new form of working? It might appear the best answer would be to create a new working relationship in the shared economy. In order not to drown the market in red tape and regulation, I believe we need to create a third category of worker. We need to look at what the market wants from the demand and supply sides and work on creating a sensible solution, while at the same time ensuring any measures do not kill the shared economy.

One possible suggestion is to look at how the rest of the economy is structured to provide the flexibility and support required to create a marketplace conducive to the growth of the shared economy. This might mean lower personal tax rates for those who work for a shared economy business, with the reduction recognising that these individuals have to pay for their own benefits and extra insurance. The companies themselves could be required to pay to register as shared economy businesses, and to negotiate bulk benefit and insurance rates for the individuals who work with them. That would still give both sides flexibility; it would not stifle the market; and it would help individuals cover any shortfall in income and protect them when they cannot earn at all. Ultimately, it would provide independent workers with a financial safety net currently in place only for salaried workers.

One of the challenges facing employers with regard to Millennials and, more recently still, Generation Z is that these generations do not necessarily want to be defined. Many are actively choosing to work ‘gigs’, and it is less about taxes or benefits or rights and more about freedom. Historically, if you wanted career freedom, you opted to be either self-employed or a contractor. The significant difference here is that these individuals want freedom within a structure, but at the same time want some level of protection as well.

I do not think it is accurate to say the shared economy has simply grown out of companies exploiting loopholes. Yes, it is born of a need for less structure and more flexibility, which does mean less regulation, but that does not mean no regulation. Over-regulation will cause the shared economy to collapse, cutting off a healthy source of remuneration for many individuals. At the same time, we cannot have millions of workers without social protection.

This market has been created by the innovators and, to fully take advantage, we as a society have to innovate – and quickly – to catch up with this new reality.

Broadcast TV is finished – the streaming era has begun

John Landgraf, President of FX Networks in the US, said last year: “My sense is that 2015 or 2016 will represent peak TV in America, and that we’ll begin to see decline coming the year after that and beyond.” His concern was that the volume of new shows being released had made it impossible for networks to find compelling original stories that stood out against competitors’. New TV shows might not dominate water cooler discussions in the same way they did 10 years ago, but that isn’t because they’re not as good: it’s because people just aren’t watching TV the same way.

Suggesting people are watching less TV is a mistake: in fact, we’re staring at screens for longer than ever before. Figures from eMarketer show the average time spent by a US adult watching video increased from four hours and 56 minutes per day in 2011 to five hours and 31 minutes in 2015. What is in decline is the amount of time people spend watching broadcast TV: the same study suggested time dedicated to broadcast TV has decreased from four hours and 35 minutes to four hours and 15 minutes over the same time period.

The frightening factor for broadcasters is they might not be able to accurately measure the impact of streaming services until it’s too late

Somebody’s watching me
Use of streaming services is on the rise. According to a recent study by Michael Nathanson of MoffettNathanson, Netflix’s US subscribers streamed about 29 billion hours of video in 2015, representing six percent of total US live-plus-7 TV viewing, up from 4.4 percent in 2014. The report also calculated about half of an overall three percent decline in TV viewership can be attributed to people switching to Netflix, with this trend expected to continue.

The frightening factor for broadcasters is they might not be able to accurately measure the impact of streaming services until it’s too late. Netflix and Amazon have proved very secretive about their viewers’ habits, making it difficult to get a grip on exactly what Netflix’s 75 million worldwide subscribers are actually watching.

Nielsen ratings are what TV shows traditionally live and die by. Data is collected either through viewer diaries or through tracking units connected to TVs, and the ratings are invaluable to networks in judging what their shows are worth and how much they can charge advertisers. Only last year did Nielsen announce it would begin to monitor select shows viewed through streaming services. It now uses Nielsen boxes to scan for a show’s ‘digital fingerprint’: specific audio cues the show’s makers slip in. It’s a start, but it will hardly provide a complete picture, given it will miss shows watched by people using headphones.
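For illustration only, the sketch below shows how detecting a known audio cue might work in principle: slide the reference cue over sampled audio and flag any window that correlates strongly with it. This is a toy example under assumed parameters, not a description of Nielsen’s actual watermarking system.

```python
import numpy as np

def detect_cue(audio: np.ndarray, cue: np.ndarray, threshold: float = 0.8) -> bool:
    """Return True if a known audio cue appears somewhere in the sampled audio,
    using normalised correlation between the cue and sliding windows."""
    cue = (cue - cue.mean()) / (cue.std() + 1e-12)
    n = len(cue)
    hop = max(1, n // 4)
    for start in range(0, len(audio) - n + 1, hop):
        window = audio[start:start + n]
        window = (window - window.mean()) / (window.std() + 1e-12)
        if float(np.dot(window, cue)) / n >= threshold:
            return True
    return False

# Toy usage: bury the cue in quiet background noise and confirm it is found.
rng = np.random.default_rng(0)
cue = rng.standard_normal(4800)             # a 0.1-second cue at 48kHz (assumed)
audio = rng.standard_normal(48000) * 0.1    # one second of background noise
audio[12000:16800] += cue                   # the cue 'slipped in' by the show's makers
print(detect_cue(audio, cue))               # True
```

Even a perfect detector of this kind can only register what it can hear, which is why headphone viewing slips through.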

A sporting chance
Broadcasters used to be able to count on live sports events to bring in subscribers and viewers, but even this lucrative channel is beginning to lose its lustre. Given their importance to networks, sporting organisations have been able to charge ever increasing fees for broadcasting rights. As subscriber numbers slowly slip, higher fees become more difficult to justify. It might not be long before it makes more business sense for sports organisations to offer their own streaming services. Indeed, many have already started: the NBA, NHL and MLB offer packages that can be tailored towards watching your favourite team if its game isn’t being broadcast locally. For consumers who are interested in only one or two sports offered by a network, subscribing to a couple of streaming services could be far cheaper than a network subscription.

People are no longer willing to fit themselves around broadcasters’ schedules, and shows are less likely than ever to draw the weekly frenzy that was for so long the measure of their success. TV broadcasters will have to adapt to a declining live viewership and innovate if they want to stand a chance of outpacing dedicated streaming networks.