Could induced pluripotent stem cells be the breakthrough genetics has been waiting for?

The almost miraculous benefits that stem cells may one day deliver have long been speculated on. Capable of becoming different types of cells, they offer huge promise in terms of transplant and regenerative medicine. It is, however, also a medical field that urges caution – one that must constantly battle exaggeration. If stem cells do in fact hold the potential to reverse the ageing process, for example, then such breakthroughs remain many years away.

Recently, though, the field has had cause for excitement. In 2006, Japanese researcher Shinya Yamanaka discovered that mature cells could be reprogrammed to become pluripotent, meaning they can give rise to any cell type of the body. The discovery of these induced pluripotent stem cells (iPSCs) saw Yamanaka and British biologist John Gurdon awarded the Nobel Prize in Physiology or Medicine in 2012. Since then, there has been much talk regarding the potential iPSCs possess, not only for the world of medicine, but for society more generally, too.

A big step
Historically, one of the major hurdles preventing further research into stem cells has been an ethical one. Until the discovery of iPSCs, embryonic stem cells (ESCs) represented the predominant area of research, with cells being taken from preimplantation human embryos. This process, however, involves the destruction of the embryo and, therefore, prevents the development of human life. Due to differences in opinion over when life is said to begin during embryonic development, stem cell researchers face an ethical quandary.

With iPSCs, though, no such dilemmas exist. IPSCs are almost identical to ESCs but are derived from fully committed, differentiated cells of the adult body, such as skin cells. Like ESCs, iPSCs are pluripotent and, as stem cells, can both self-renew and differentiate: they can be propagated indefinitely while retaining the ability to give rise to any human cell type.

“One important distinction to make is that both ESCs and iPSCs do not exist in nature,” Vittorio Sebastiano, Assistant Professor (Research) of Obstetrics and Gynaecology (Reproductive and Stem Cell Biology) at Stanford University’s Institute for Stem Cell Biology and Regenerative Medicine, told The New Economy. “They are both beautiful laboratory artefacts. This means that at any stage of development, you cannot find ESCs or iPSCs in the developing embryo, foetus or even in the postnatal or adult body. Both ESCs and iPSCs can only be established and propagated in the test tube.”

The reason neither ESCs nor iPSCs can be found in the body is that they harbour the potential to be very dangerous. As Sebastiano explained, these cells “could spontaneously differentiate into tumorigenic masses because of their intrinsic ability to give rise to any cell type of the body”. Over many years of research, scientists have learned how to isolate parts of the embryo (in the case of ESCs) and apply culture conditions that lock the cells in their proliferative, stem-like state. The same is true for iPSCs.

To create iPSCs, scientists take adult cells and exogenously provide a cocktail of embryonic factors, known as Yamanaka factors, for a period of two to three weeks. If the expression of such factors is sustained for long enough, they can reset the programme of the adult cells and establish an embryonic-like programme.

Turning back the clock
There is already a significant body of research dedicated to how stem cells can be used to treat disease. For example, mesenchymal stem cells (usually taken from adult bone marrow) have been deployed to treat bone fractures or as treatments for autoimmune diseases. It is hoped that iPSCs could hold the key for many more treatments.

Global stem cell market:
25.5% – expected compound annual growth rate (2018-24)
$467bn – expected market value (2024)

“IPSCs are currently utilised to model diseases in vitro for drug screening and to develop therapies that one day will be implemented in people,” Sebastiano explained. “Given their ability to differentiate into any cell type, iPSCs can be used to differentiate into, for example, neurons or cardiac cells, and study specific diseases. In addition, once differentiated… they can be used to test drugs on the relevant cell type. Some groups and companies are developing platforms for cell therapy, and I am personally involved in two projects that will soon reach the clinical stage.”

Perhaps the most exciting prospects draw on iPSCs’ regenerative properties. Over time, cells age for a variety of reasons, including increased oxidative stress, inflammation and exposure to pollutants or sunlight. All these inputs lead to an accumulation of epigenetic mistakes – those that relate to gene expression rather than an alteration of the genetic code itself – in the cells, which, over time, results in the aberrant expression of genes, dysfunctionality at different levels, reduced mitochondrial activity, senescence and more besides. Although the epigenetic changes that occur with time may not be the primary cause of ageing, the epigenetic landscape ultimately affects and controls cell functionality.

“What we have shown is that, if instead of being expressed for two weeks we express the reprogramming factors… for a very short time, then we see that the cells rejuvenate without changing their identity,” Sebastiano said. “In other words, if you take a skin cell and express the reprogramming genes for two to four days, what you get is a younger skin cell.”

By reprogramming a cell into an iPSC, you end up with an embryonic-like cell: the reprogramming erases any epigenetic errors. If the reprogramming factors are expressed for long enough, they also erase the epigenetic information that defines the cell’s identity, leaving embryonic-like cells that are also young.

Slow and steady
As with any scientific advancement, financial matters are key. According to Market Research Engine, the global stem cell market is expected to grow at a compound annual growth rate of 25.5 percent between 2018 and 2024, eventually reaching a market value of $467bn. The emergence of iPSCs has played a significant role in shaping these predictions, with major bioscience players, such as Australia’s Mesoblast and the US’ Celgene, working on treatments involving this particular type of stem cell.
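
As a rough sanity check on what that projection implies, the short sketch below simply compounds the stated growth rate over the six years from 2018 to 2024; the derived base-year figure is an illustration only and does not appear in the report.

```python
# Back-of-the-envelope check of the Market Research Engine projection.
# Assumes the 25.5 percent rate compounds annually from 2018 to 2024;
# the implied 2018 figure below is derived, not taken from the report.
cagr = 0.255
years = 6                 # 2018 -> 2024
value_2024_bn = 467       # projected market value in $bn

growth_multiple = (1 + cagr) ** years
implied_2018_bn = value_2024_bn / growth_multiple

print(f"Growth multiple over the period: {growth_multiple:.1f}x")   # ~3.9x
print(f"Implied 2018 market size: ${implied_2018_bn:.0f}bn")        # ~$120bn
```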

“The business potential around stem cell research is huge,” Sebastiano told The New Economy. “[Particularly] when it comes to developing cell banks for which we have detailed genetic information and, for example, studying how different drugs are toxic or not on certain genetic backgrounds, or when specific susceptibility mutations are present.”

Unfortunately, even as the business case for iPSC treatments strengthens, a degree of caution must be maintained. The promise of significant health benefits and new revenue streams has led some clinics to offer unproven stem cell treatments to individuals. There have been numerous reports of complications, including the formation of a tumour following an experimental stem cell treatment in one patient, as recorded in the Canadian Medical Association Journal last year. Such failures risk setting the field back years.

The challenge for researchers now will be one of balance. The potential of iPSCs is huge – both in terms of medical progress and business development – but can easily be undermined by misuse. Medical advancements, particularly ones as profound as those associated with iPSCs, simply cannot be rushed.

All hands on deck: the shipping industry’s mission to become more sustainable

Advances in transport networks and fuel efficiency have seen the air travel industry grow rapidly over the last couple of decades, but it is not suitable for all the world’s transportation needs. In fact, around 90 percent of the world’s goods are moved by sea, with over 70 percent travelling as containerised cargo. As globalisation has gathered pace, the amount of goods moving via this method has also increased, and maritime trade is predicted to continue growing, expanding by an estimated 3.4 percent annually between now and 2024. Unsurprisingly, the harm this trade is doing to the environment is also predicted to rise.

Shipping is currently responsible for between two and three percent of worldwide greenhouse gas emissions, but it also causes noise and light pollution, as well as significant physical damage to marine ecosystems. With global trade unlikely to slow in the long term (even if it has been dented somewhat by the US-China spat), the ability to cut down on the emissions produced by this essential mode of transport is becoming increasingly vital.

Test the waters
Currently, most maritime trade ships use low-cost heavy fuel oil, but this contains sulphur levels 2,300 to 3,500 times higher than those of the standard diesel used in most cars and trucks. Reducing these levels has been a core ambition of the International Maritime Organisation (IMO) for a number of years, but there has unsurprisingly been some industry pushback. In January, the IMO achieved its aim, mandating that sulphur levels in fuels must come down from 3.5 to 0.5 percent. Estimates suggest the new rules could cost the global economy as much as $1trn over a five-year period, but this is the sort of international coordination that is needed to regulate a global industry.

“The decarbonisation of shipping and its energy value chains requires a systemic transformation,” Johannah Christensen, Managing Director of the Global Maritime Forum, told The New Economy. “It can only be achieved through close collaboration and deliberate collective action between the maritime, energy, infrastructure and finance sectors, with support from governments and intergovernmental organisations. But it is also a shift that represents a trillion-dollar market opportunity for new suppliers of zero-emission fuels. Shipping’s consumption of fuels is estimated to be around 250 to 300 million tonnes every year. This means that shipping has the scale to be a catalyst for a broader energy transition, unlocking the market for zero-emission fuels.”

If the shipping industry is to grasp this opportunity, it will need to act quickly. Many shipping firms have already switched to lower-sulphur fuels in order to meet the new IMO regulations, while others are employing exhaust scrubbers to ensure their emission levels are compliant. Fortunately, the industry does not have to wait for new technology in order to become greener: if the will is there, so are the means.

“The challenge is less a technological one, as many of the technologies to decarbonise shipping are already available,” Christensen said. “The challenge lies in taking concerted, collective action to narrow down fuel options, demonstrate viability, close the competitiveness gap between traditional and zero-carbon fuels, and create new, sustainable supply chains.”

Greater international cooperation will also be required. Currently, the IMO has four emission-control areas – the Baltic Sea, the North Sea, the US Caribbean, and the coastal waters of Canada and the US – but these exclude some of the world’s largest and most polluting container ports.

Setting the right course
The need to deliver a more environmentally friendly shipping sector is now recognised, but it has been a long time coming. For a number of years, even as oil refineries have produced cleaner fuels for road users, they have continued to dump low-quality products into the marine transportation sector. The reason is largely one of convenience. The most polluted marine terminals are far away from the major cities of developed nations – they are out of sight and out of mind.

90% – share of the world’s goods moved by sea
2-3% – share of global greenhouse gas emissions generated by shipping
3.4% – annual increase in maritime trade (2019-24)

Fortunately, efforts are being made to create a greener industry. A Global Maritime Forum initiative, the Getting to Zero Coalition, was formed with the aim of seeing commercially viable zero-emission vessels operating along deep-sea trade routes by 2030. The group brings together leaders from across the shipping value chain in the hope that greater collaboration will lead to a cleaner future.

“The question of how to best deliver less carbon-intensive shipping is central to the work of the Getting to Zero Coalition, a powerful alliance of maritime, energy, infrastructure and finance stakeholders, supported by key governments and intergovernmental organisations,” Christensen explained. “As part of its work, it is exploring a variety of zero-emission fuel options for shipping. These include synthetic fuels produced from renewable electricity such as ammonia, fuels produced from biomass and fuels produced from fossil fuels, combined with carbon capture and storage. Each option comes with its pros and cons, which need to be evaluated in terms of scalability, commercial viability and safety.”

The Getting to Zero Coalition is not the only initiative looking to cut down on shipping’s environmental impact. The Poseidon Principles were launched in June of last year to align other areas of the industry with the IMO’s climate-related ambitions. “The Poseidon Principles provide a common framework for integrating climate considerations into lending decisions,” Christensen continued. “They allow signatory financial institutions to assess and disclose how their shipping portfolios align with climate targets set by the IMO. By committing to the Poseidon Principles, financial institutions can promote responsible environmental practices among their clients and so incentivise reductions in the maritime sector’s emissions.”

Unsurprisingly for an industry with such breadth, working together is essential to reaching a shared goal. Financial institutions, shipbuilders, port operators and many other stakeholders have a role to play in reducing the emissions of the shipping trade. Collaborative initiatives like Getting to Zero and the Poseidon Principles present a way forward.

Full steam ahead
Significant change does not come easily to an industry as huge as maritime trade, and it is usually slow going. Nevertheless, some progress is being made. Google has committed to carbon-neutral shipping by 2021 and Maersk, the world’s largest container shipping company, has invested $1bn annually for the last four years in pursuit of cleaner technologies. These early movers are likely to be followed by others now that the environmental impact of shipping is receiving more attention.

“De-risking the early stages of shipping’s transition to zero carbon is one of the key first steps towards a low-carbon future because it will encourage first movers to accelerate the development, testing and scaling of zero-carbon technologies and business models,” Christensen said. “This is why the Getting to Zero Coalition is focusing on overcoming this and other barriers to shipping’s transition to zero.”

Most impressively, four Japanese businesses – Asahi Tanker, Exeno Yamamizu, Mitsui OSK Lines and Mitsubishi – are working together to create the world’s first zero-emissions tanker by the middle of 2021, using large-capacity batteries in place of the usual oil-based propulsion systems. These four organisations are also set to form a group called the e5 Lab, which will promote cleaner forms of marine transportation across the sector. Even if institutions work collaboratively, it could still take decades to clean up the shipping industry, particularly with global trade on the rise. Nevertheless, collaboration appears to be the only way that the industry’s pollution problem can be tackled.

Apple’s struggle to conquer the Indian smartphone market

In the West, Apple is viewed as an all-conquering business behemoth. First, it revolutionised music, then it took control of the smartphone market. It has made successful inroads into televisions and wearable technology, and has even set its sights on developing its own electric car. By market capitalisation, and by general public opinion, it is undoubtedly one of the world’s most valuable companies.

But Apple doesn’t have everything its own way – certainly not the world over. In India, the company lags far behind market leaders like Xiaomi and Samsung in terms of smartphone sales, with a study by Counterpoint indicating that the company’s devices represented just three percent of those sold in the country in Q4 2019.

For Apple, this is a problem that needs addressing soon: although the company has huge cash reserves, CEO Tim Cook remains well aware that market saturation in developed nations is approaching. As the world’s second-largest smartphone market, India offers great potential for brands like Apple, but the Silicon Valley giant currently has a long way to go in terms of improving the popularity of its devices in the country.

The price is not right
With Apple having achieved mass popularity in the West – it has been the market leader in the US for years – it is easy to forget that its devices are significantly more expensive than many of those produced by its competitors. The most recent iPhone release, the iPhone 11 Pro, was available at launch in its cheapest iteration for $999. India, meanwhile, has recently made progress in improving the wellbeing of many of its citizens, but as much as one fifth of the population still lives below the poverty line, according to the most recent World Bank statistics.

“The answer to why Apple has struggled in the Indian smartphone market thus far lies in the fact that it is structured very differently to western markets, or even similar markets (in terms of size) like China,” Navkendar Singh, Research Director of Client Devices and IPDS at IDC India, told The New Economy. “It is a very value-conscious market, with almost zero interest in selling devices as a contract or bundle in the telecommunications space.”

A closer look at domestic sales data suggests Singh is correct: while Apple continues to perform poorly in terms of overall sales in India, the iPhone took the top spot in premium smartphone sales, capturing 75.6 percent of the country’s premium segment in Q4 2019.

“More than 90 percent of the Indian market lies below the $300 price bracket, and Apple has absolutely no play in these price bands with its high-priced new, and even last-gen, models,” Singh continued. “Apple is seen as a super-premium brand; everyone would love to own an iPhone, but not everyone can afford one. The high price of iPhone models makes them out of reach for most consumers, making them almost like a niche purchase. Hence, Apple has traditionally seen a low-volume market share in India.”

Of course, the answer to Apple’s India problem is not as simple as just lowering prices. The high price of the iPhone is one of the reasons why it remains such a desirable item, and likely why it continues to perform so well within the premium market. Apple could afford to drop the price tag of the iPhone (by some estimates, the company’s profit margin on each device is around 40 percent) but doing so could damage its overall brand.

Retail therapy
Because of local regulations, Apple has not been able to adopt its usual retail strategy in India. In most markets, Apple has a number of physical stores to bolster its online activities, providing a trusted outlet where customers can purchase items and receive customer support. In India, this is not the case.

$999 – starting price of the iPhone 11 Pro
$300 – maximum price for 90% of the Indian smartphone market
3% – Apple’s share of the Indian smartphone market in Q4 2019
75.6% – Apple’s share of premium smartphone sales in India in Q4 2019

Since 2018, Indian law has stipulated that foreign firms cannot open single-brand stores in the country, but must partner with a local business. At the time, Apple CEO Tim Cook rejected the chance to team up with another organisation, telling investors that his company likes “to do things our way”. As a result, Apple devices are only distributed through third parties.

But in February, Cook changed tack, announcing that he had received special approval from the Indian Government to open a store without requiring a domestic partner. As such, it was confirmed that Apple would be opening an online outlet later in the year, with a physical store to follow in 2021.

“Although Apple has retail presence via third parties in India, a company-owned store gives a different dimension to the seriousness and commitment of a brand,” Singh said. “Additionally, Apple’s stores are a landmark in most cities, known for their retail experience. Apple would love to leverage this and build on this to further make inroads in such an important market over the next few years.”

Apple stores, with their slick glass displays, are an important part of the company’s overall brand, even if the majority of Apple’s revenue comes from other channels. The coronavirus pandemic may delay the opening of the Indian store, but Apple will be keen for it to go ahead as soon as possible even so. India will remain a vital market for the smartphone industry in the coming decade, with the World Economic Forum predicting that 80 percent of households will fall into the middle-class bracket by 2030. A brick-and-mortar store will help bring these individuals into the Apple family.

Making up for lost time
No number of shiny new shops will solve all of Apple’s problems, however. Although Cook secured a breakthrough with regard to retail store regulations, Apple remains subject to Indian law in terms of its manufacturing supply chain. As part of the Make in India policy, Apple must source at least 30 percent of its components from within the Indian market. This is already proving to be a challenge, with the country’s manufacturing sector still largely underdeveloped. One local supplier that Apple was using in 2018, Superpacks, had to be dropped due to poor factory conditions and missed deadlines.

With Apple’s market share in India currently so disappointing, the company is facing an uphill struggle. Many consumers will now have their own favourite smartphone brand – likely one that uses the rival Android operating system – and convincing them to switch to an iPhone won’t be easy. Still, there are several things Apple can do in an attempt to boost its market share.

“A mid-premium-priced model for the Indian market would be very fruitful for Apple, considering the price structure of the market,” Singh said. “Even in 2019, Apple saw sales volumes increase in the $500 to $700 segment following a price correction of the iPhone XR model. Additionally, retail footprint and affordability offers, like equated monthly instalments and cashback schemes (which are already being employed), make a difference in bringing down the upfront cost to the consumer for such an aspirational and expensive product.”

While the coronavirus pandemic will undoubtedly have affected Apple’s plans for India (and, indeed, elsewhere), when some semblance of normality has returned, the company should refocus its efforts on improving its market share in the country. A new flagship retail store can be the focal point for increasing sales figures, but making improvements to its local supply chain and adopting more flexible pricing policies will also prove crucial.

Will an IPO be enough to create new success for Airbnb?

A look at Airbnb’s marketing and its language conjures images of a harmonious online community. Its guests “share experiences” while its hosts contribute to “people-powered” tourism that boosts local economies. Unfortunately, this wholesome rhetoric doesn’t always correspond with the headlines.

Since its inception, Airbnb has been implicated in everything from out-of-control parties and highly organised scams to human trafficking – made possible, some argue, by its lax policies. But over the past two years, Airbnb has been on its best behaviour: through 2018 and 2019, the company announced a slew of new initiatives, including tax collaborations with governments in Europe, a partnership with UK fraud expert Get Safe Online, a collaboration with anti-trafficking charity Polaris, and the distribution of tens of thousands of carbon monoxide alarms to its hosts to improve safety.

All of this comes in preparation for one major event: Airbnb’s initial public offering (IPO), which is currently under threat due to the sprawling impact of COVID-19, but is still expected to go ahead in the near future. To reassure investors in the lead-up to going public, the company is improving compliance with local governments and boosting efforts to detect fraud on its platform. But the question remains whether it can do enough to win them over while remaining profitable.

Going viral
Airbnb was founded in 2008 when CEO Brian Chesky and two friends began renting out air mattresses in their San Francisco apartment. From these humble beginnings, the business went stratospheric: according to its website, Airbnb has hosted more than 400 million guests and has 31 offices around the world.

When it announced its plans to go public in 2020, market analysts heralded what they thought could be an outlier among tech IPOs – finally, one that might not disappoint. Valued at $31bn in September 2017, Airbnb has a stronger financial track record than Lyft, Uber and WeWork, all of which have seen IPOs flounder after failing to prove their profitability. Airbnb also claims its earnings were positive before interest, taxes, depreciation and amortisation in 2017 and 2018.

But the coronavirus pandemic has since put that IPO in jeopardy, with travel bans causing bookings to plummet. Consequently, Class V Group founder Lise Buyer doubts it is still on the cards this year. She told The New Economy: “As the markets are on the volatile side and, of course, given the travel-related business the company is in, I would think this may be a rather exceptionally challenging year for a home-sharing business to make its public debut.”

It’s been a far from ideal start to what was supposed to be a big year for the company. That said, anxieties around Airbnb’s IPO predate COVID-19; in the first nine months of 2019, the home-sharing company posted a loss of $322m, according to The Wall Street Journal. In the same period a year earlier, it had made $200m. This drop in earnings reflects the ongoing challenges that have followed Airbnb since its launch.

Stranger danger
One explanation for this operating loss is that Airbnb spent heavily on acquisitions at the start of last year. In its bid to corner the hotel market, Airbnb invested $100m in the hotel chain Oyo and bought the hotel-booking site HotelTonight for over $400m. Another explanation is the company’s increased investment in safety.

The very thing that made Airbnb so successful – the trust it fosters between guests and hosts – is vulnerable to exploitation. Scams are uncovered with disturbing regularity on Airbnb’s platforms (most recently by Wired, which found fraudulent Airbnb accounts with more than 200 listings in London and 2,100 reviews in February this year), and there are inevitably risks with welcoming strangers into one’s home. Airbnb banned open-invite party events at its locations last year after five people died at a house party in California.

Airbnb in numbers:
400m+ – guests hosted
31 – offices globally
$150m – amount pledged to safety initiatives (December 2019)

Sayan Chatterjee, Professor of Design and Innovation at the Weatherhead School of Management, believes these challenges have become harder for Airbnb to tackle as the company has scaled. “Airbnb started out with… Chesky living at the host properties as a guest,” he told The New Economy. “They took to heart the principle ‘focus on things that do not scale’. In other words, focus on little things, like how to customise the marketing of each property to help attract the appropriate guest.”

In the formative stages of the business, the founders of Airbnb took care of the photography for properties and made a point of personally meeting hosts in Manhattan. Since then, Airbnb has grown to such an extent that individually checking properties and screening hosts is a huge logistical challenge. “It is almost a given that it is not possible to do it with the same thoroughness as before,” Chatterjee said.

Surprisingly, these security risks haven’t seriously affected bookings. Despite the scaremongering headlines, guests keep going back to Airbnb. “Because of its popularity, guests cut Airbnb a lot more slack,” Chatterjee explained. Nonetheless, these incidents can strike fear into the hearts of investors. As such, Airbnb announced a $150m investment into safety initiatives in December 2019, including 24-hour hotlines for disgruntled neighbours and the provision of noise detectors. It also pledged to verify all seven million properties listed on its website by December 2020.

Room for manoeuvre
In a public letter posted to its website in January, Airbnb promised to make a “positive contribution to society” and create a “company that serves all stakeholders”. When outlining the latter, the company explicitly named local communities as stakeholders. This represents a change in tone for Airbnb, which hasn’t always been local communities’ best friend.

Around the world, city-dwellers claim that the presence of Airbnb’s properties accelerates gentrification and hollows out communities. The evidence would seem to support this: a 2019 study published in the Harvard Business Review found a correlation between the number of Airbnb properties in a city and the price of rent. With income from an Airbnb property potentially so steady, landlords are encouraged to move long-term rentals into the short-term market.

As a result, many local authorities are considering toughening regulations around short-term rental platforms. Cities such as Berlin, Barcelona and Lisbon are leading the charge: Berlin has already enforced restrictions against short-term lets on platforms such as Airbnb and requires landlords to obtain a permit if they want to rent 50 percent or more of their main residence as a short let, while Lisbon recently capped the number of licences for short-term accommodation in some of its neighbourhoods.

While Airbnb claims it wants to help local communities, compliance with local laws could hurt its business model – something it is determined to stop from happening. According to Chatterjee, Airbnb is adopting a much more aggressive legal strategy than the one it used previously: “Initially, Airbnb kept a low profile and tried to be accommodative. Next, it got its customers to battle for them… Now it is using its size and clout to take cities on frontally.” According to Bloomberg, more than half of Airbnb’s lawsuits and appeal cases have been filed since 2018.

In February, as concerns around the spread of COVID-19 mounted, Chesky said that Airbnb would succeed “irrespective of the economy”. But economic conditions are not the company’s only worry: to win the confidence of investors, Airbnb has to strike a delicate balance between proving it can be profitable for a sustained period and demonstrating it can operate responsibly.

The future of remote working

Advances in digital technology have made working from home possible for many job roles for a number of years now. Virtual private networks (VPNs), cloud computing and video conferencing tools ensure that employees can work effectively even when away from the office. Nevertheless, many firms have been reluctant to allow their employees to work remotely. Perhaps there are trust issues at play, or maybe technical challenges are proving insurmountable; perhaps it simply remains too much of a cultural shift for some employers to accept.

But in March, the debate over whether to allow members of staff to work remotely reached a definitive conclusion. The outbreak of COVID-19 saw governments around the world ordering employers to let their staff work from home wherever it was possible to do so. In the face of a global pandemic and the thousands of deaths it will cause, businesses were not given a choice – if staff can work remotely, they must be allowed to do so.

The imperative, and the swiftness of its implementation, caught some companies unawares. Although many of the digital technologies required to facilitate home working have been around for a while, implementing them at scale is not always straightforward. Security issues may get in the way, and that’s before the social and emotional costs are considered. Letting one individual work from home may be straightforward, but suddenly forcing entire industries to do so can cause problems.

Put to the test
Once governments realised the scale of the coronavirus outbreak, businesses were asked, and then told, to allow staff to work remotely where possible. This, of course, meant a significant number of service jobs being pushed out of the office and away from the digital solutions that corporations had built up and finely tuned over the years. And while it’s true that many free-to-use consumer applications are capable of stepping up to the plate, they are now being placed under unexpected pressure.

According to a survey conducted by the US Bureau of Labor Statistics, just under 29 percent of Americans were able to work from home in 2018. More recent data indicates that the coronavirus crisis has provided the final push that many firms needed to authorise remote working: VPNs, which allow individuals to access corporate resources via a secure network, have seen a marked increase in use.

Panama-based VPN provider NordVPN told The New Economy that between March 11 and March 23, there was a 65.93 percent increase in the use of business VPNs across the US. Globally, VPN usage surged 165 percent over the same period. But this increase in traffic creates challenges, which has led to organisations including NASA and the National Oceanic and Atmospheric Administration running remote networking stress tests to check if they have the capacity to allow the majority of staff members to work from home.

“Remote workers are reliant on home internet connections and VPNs, which may not be as reliable [as their offices’], so they could see a degradation in performance,” James Tilbury, founder of IT consultancy and support firm ILUX, told The New Economy. “Inadequate VPN provision, or problems with access, may mean employees either have to alternate their VPN usage or alter their working hours to fully access vital files and systems. Additionally, when things aren’t working as expected, remote employees risk being left without support, and this can quickly reduce the efficiency of a workforce.”

Technical issues can be hugely disruptive to a workforce even when it has an in-house IT team ready to come to its aid at any moment. When staff have to resort to calling troubleshooting hotlines or fixing problems themselves, productivity can take a huge hit.

Home security
Even if employees manage to make a smooth transition to remote working, businesses might have other causes for concern. Given the sudden nature of the COVID-19 pandemic, many organisations have not had time to implement new security protocols for home workers.

“Companies may find several issues compromise the security of their organisation; workers using their home computers may lack the protection systems that their work computers would have, such as anti-virus and anti-malware software, or web-filtering software and appliances to prevent staff accessing phishing and unsafe websites,” Tilbury said. “Similarly, it is more difficult to enforce any security policies the company may have in place, which may prevent the usage of USB drives and/or the unwitting installation of spyware.”

Already, reports indicate that cybercriminals have spotted an opportunity. Just as with other major news events, fraudsters are using the coronavirus crisis to spread misinformation in order to trick consumers into handing over money. The Federal Trade Commission recorded 7,800 coronavirus-related complaints between the start of the year and March 31, with many related to texting scams, supposed refunds and other attempts to impersonate legitimate businesses. Any rise in social anxiety, such as that seen during the spread of coronavirus, provides a fruitful environment for cybercriminals to launch ransomware campaigns, phishing attacks and other fraud attempts.

Vigilance is always key to preventing cyberattacks, but businesses should provide additional security support to individuals now that corporate data is also at risk. Sharing the company security policy with all home workers is one way to put cyberdefence at the forefront of employees’ minds. Organisations with dedicated IT security teams should also be running threat-hunting exercises regularly to keep track of any intruders. A reminder of good security practices also wouldn’t go amiss. Informing staff of the risks of opening attachments from unsolicited emails, connecting to public Wi-Fi networks and using easy-to-guess passwords is always a good idea.

Stay in touch
During this mass remote working experiment, businesses should remember that the technology being employed doesn’t only help staff to get their work done; it can also improve their wellbeing. A 2015 study by Stanford University found that employees at a Chinese call centre displayed higher levels of productivity when working from home, but that many also complained of loneliness. When working remotely, technology needs to address social issues where it can.

“Team communication is critical when you are all working remotely,” explained Kevin Green, former CEO of the Recruitment and Employment Confederation and HR Director of Royal Mail. “Recognise you should spend more time talking, listening and engaging with your people – it’s important they feel connected. Firstly, if possible, use video rather than conference calls, but even they are preferable to the dreaded email with all its potential for misunderstanding. The opportunity to use video is a godsend in these circumstances and is so much easier today with the tools available, such as Zoom, Skype and Google Hangouts.”

Although many workers hate their commute and might resent the stifling environment of the modern office, they may also enjoy the structure of a nine-to-five job and being able to engage in the odd bit of chit-chat with their colleagues. Given the choice, plenty of employees would still choose to work from the office – some of the time, at least.

The coronavirus pandemic is an unprecedented and unexpected situation, but one that may very well lead to permanent change. Businesses should use this time to analyse the benefits and drawbacks of mass home working and make sure they have the right tools in place to facilitate what could be a very different world of work in the future.

Japan fights flooding from below the surface

There is something cathedral-like about Japan’s huge underground storm drain. The G-Cans – or, to use its official title, the Metropolitan Area Outer Underground Discharge Channel – comprises a labyrinth of tunnels that stretch on for four miles in total, connecting gargantuan 213ft silos with a tank that is almost 250,000 cubic feet in volume. The sacred feeling of the space was not lost on its designers, who nicknamed this tank the Underground Temple.

The G-Cans was constructed in Kasukabe, a city some 20 miles north of Tokyo. Nestled 165ft below ground between the two cities, the mammoth storm drain gives a visitor at street level no way of knowing it is there. But this invisible system serves a critical function: the densely urbanised area is low-lying and has several rivers running through it, making it highly prone to flooding. The G-Cans was built with the capacity to drain 7,000 cubic feet of water per second – the equivalent of drying out an Olympic-sized swimming pool in just over 12 seconds. The water is then pumped into the Edogawa River on the outskirts of the city.
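
That swimming pool comparison is easy to check with some quick arithmetic; the sketch below assumes a standard Olympic pool of 50m by 25m with a two-metre depth (roughly 2,500 cubic metres), a figure that does not come from the article itself.

```python
# Quick check of the drainage comparison, assuming an Olympic pool of
# 50m x 25m x 2m (about 2,500 cubic metres) - an assumed figure, not
# one taken from the article.
CUBIC_FEET_PER_CUBIC_METRE = 35.3147

drain_rate_cfs = 7_000                                  # cubic feet per second
pool_volume_cf = 2_500 * CUBIC_FEET_PER_CUBIC_METRE     # ~88,300 cubic feet

seconds_to_empty = pool_volume_cf / drain_rate_cfs
print(f"Time to drain an Olympic pool: {seconds_to_empty:.1f} seconds")  # ~12.6
```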

Below the surface
Every summer, Japan is battered by heavy rains and typhoons. These storms can cause extensive damage and loss of life. Among the most impactful in recent history was the 1959 Isewan Typhoon, which resulted in more than 5,000 deaths.

For decades, Kasukabe consisted mainly of farmers’ fields, which meant that any damage incurred by a flood was relatively minor on the national scale. But towards the end of the 20th century, Kasukabe became more urbanised and heavily populated as Tokyo expanded. It was in 1991, when a storm damaged 30,000 homes in the northern outskirts of the Japanese capital, that authorities were persuaded to invest in more protection.

Rain, rain, go away
As one of the most seismically active countries in the world, Japan has long been accustomed to investing in protection against natural disasters – even when the country was recovering from the Second World War, it invested six to seven percent of its national budget in disaster prevention. But the G-Cans was a project of unprecedented scale: construction began in 1993 and took 13 years and a total of $2bn to complete, according to the BBC.

But now experts are asking whether even this gargantuan drainage system is enough to safeguard Tokyo. Historically, Japan has stressed earthquake preparedness over flood response, but flooding is becoming a more serious problem for the country. According to research by Yasunori Hada, an associate professor of regional disaster prevention at the University of Yamanashi, the number of people living in flood-prone areas in the country rose by 4.4 percent between 1995 and 2015. This was partly due to urban expansion, but it’s also the result of changing weather patterns.

According to estimates cited by the BBC, rainfall in Japan could increase by 10 percent over the course of the 21st century, most likely because of the climate crisis. At the same time, disaster experts warn that typhoons are becoming more frequent and more severe – for instance, when Typhoon Hagibis struck in October 2019, 200 rivers burst their banks, causing some of the worst flooding the country had seen in decades.

The Japanese Government is slowly waking up to the reality of this threat. In 2016, it added the Furukawa Reservoir – a two-mile-long tunnel that can hold five million cubic feet of water – to its list of flood prevention systems. But it’s still not enough.

As sea levels rise around the world, cities must address the threat of flooding with more urgency than ever before. Even with the huge investment poured into the G-Cans project, the fact that it alone is not enough shows just how costly it will be for Japan and other countries to build defences that match the threat flooding poses.

Why consumers’ appetite for meal replacement drinks is growing

Astronauts had a huge impact on popular culture in the 1950s when the US and the Soviet Union began competing to develop their aerospace capabilities. They inspired a wave of science fiction, numerous children’s toys and a new fascination with commercial air travel. As the space race grabbed the attention of consumers, opportunistic entrepreneurs tried to profit from another aspect of the astronaut craze: their diet.

Over the next few decades, businesses set about trying to commercialise the freeze-dried powders and semiliquids in tubes that astronauts lived on while in space. First, there was Tang, a powdered, artificially flavoured orange juice used by astronauts on early space flights and marketed to consumers in 1959. Then came Metrecal, a diet milkshake that feminist writer Betty Friedan compared to chalk. In the 1970s there was Space Food Sticks, an early energy bar with contents as bland and utilitarian as its name suggests.

The business results were mixed. Despite being a US product, Tang remains popular in Latin America and the Middle East, and had revenue of nearly $900m in 2016. Space Food Sticks never caught on and has since been discontinued.

Although space food itself hasn’t stuck around, the trend helped create a new market for all things dehydrated, powdered and pre-packaged. Despite humanity’s age-old love affair with food, entrepreneurs have long been on a quest to distil it down to its most basic form. If only we could ingest one pill and instantly satiate our hunger while giving our body all the nutrients it needs. Today, this science-fiction dream is closer to reality than ever before.

Changing tastes
The meal replacement market is booming. Time-poor consumers have different eating habits to previous generations and are chasing more convenient ways to satisfy their hunger. “Nutrition has to be quick, efficient and healthy if it’s not for socialising, like family dinners,” said Andre Schneider, a partner at venture capital firm Square One Foods.

One of the fastest-growing products in this market is ready-to-drink meals. UK-based company Huel, currently the most prolific player in this space, has sold more than 50 million drinks worldwide since it was founded in 2015. It has also seen year-on-year customer growth of 150 percent and is on track to be valued at $1.25bn within the next three years.

But the meal replacement market has always had its sceptics. The question is: who would want to replace meals? Huel’s just-add-water solution, usually served cold, doesn’t exactly provide the same enjoyment as a cooked lunch.

The creators of Huel are well aware that it’s never going to beat a restaurant dinner. “We believe that there are two types of food: functional food and entertainment food,” Julian Hearn, the company’s founder and chief marketing officer, told Forbes. “Huel is never going to be the solution for a Saturday night out with friends, but there are many times during the week [when] you need nutrition and convenience.”

Huel in numbers:
50m+ – drinks sold
150% – year-on-year customer growth
$1.25bn – predicted value by 2022

The end of mealtimes
The fact is that many consumers have little time to spend planning, preparing and cooking food. More households are dual-income, which often means domestic tasks become sidelined. Meanwhile, average commute times are getting longer every year. As such, what we eat is an increasingly last-minute decision. According to a study by Acosta and Technomic, only 15 percent of consumers know in advance what they’re having for dinner.

Even the most organised individuals are using different strategies to stay on top of their food. Younger generations regard Instagram influencers who ‘meal prep’ – cooking the week’s meals in just one day – as the masters of routine. “Today, people eat when their schedule allows,” said Schneider. “This is rarely at regular mealtimes and often away from home. So people tend to eat smaller portions when they have time for it.”

The way we eat at work has also changed. In 1979, Lee Eisenberg coined the term ‘power lunch’ in reference to the dining habits of New York’s media elites, who would descend on high-end restaurants to mingle and network. While it wasn’t every company’s experience of lunchtime back then, the restaurant meal is now becoming an even rarer species. Expense accounts have been squeezed and living costs have increased. People are eating alone more often than before, and packed lunches are a common sight at office desks. All things considered, Huel’s proposition doesn’t sound so crazy – if lunch is no longer a social pastime and making healthy packed lunches every week is a time drain, surely it makes sense to swap these meals out for something quick, affordable and nutritious.

Picture of health
Like its competitors Soylent and Feed, Huel is a direct-to-consumer company. It has succeeded in marketing itself not just as a food retailer but as a lifestyle brand, and this has earned it a cult following. Huel now has tens of thousands of followers online. It even sells merchandise, including T-shirts and sweatshirts, which gives an indication of the brand image that it is cultivating – it’s hard to imagine that many people would want Heinz Tomato Ketchup, for example, emblazoned on a hoodie.

Clearly, health consciousness is a critical factor in meal replacements’ success. But many nutritionists are concerned about what consuming these food alternatives every day for a long period of time could do to the body. “There are a lot of different meal replacement shakes available and they differ significantly in terms of nutritional composition,” said Kim Pearson, a London-based nutritionist. “However, many contain high levels of sugar or other undesirable additives. In fact, some meal replacement shakes actually contain more sugar than a chocolate bar.”

With its key ingredients including powdered oats, sunflower oil, flaxseed, coconut, rice and vitamin supplements, Huel is one of the better options. It goes without saying that it’s also far healthier than the average salty, additive-filled fast-food meal. Pearson points out that certain groups may actually stand to benefit from consuming Huel.

“As it provides protein, vitamin D and B vitamins, it can be a good option for vegetarians and vegans who may be lacking these,” she told The New Economy. “It can also be useful for elderly people who struggle to eat much or those with loss of appetite for other reasons.” However, her advice is that, while meal replacements could be beneficial in some respects, people shouldn’t overdo it. “The social and psychological importance of eating proper meals should not be overlooked,” she said. “Generally speaking, I wouldn’t recommend that people replace all their meals with a shake, rather that they include them within a plan of healthy, fresh foods that are rich in natural – not just synthetic – nutrients.”

Despite the limitations of these products, the market for healthy, on-the-go snacks and ready-to-drink meals shows no sign of shrinking. The opportunities for food companies are significant. In its first funding round in 2018, Huel raised $28m – an unusually high amount for a food start-up. Meanwhile, French meal replacement business Feed has delivered its products to more than 30 countries. Clearly, food companies that drive innovation can experience lucrative funding rounds that were previously unheard of in the sector. Once a space-age dream, meal replacements have taken off, and they’re here to stay.

The problem with emotion-detection technology

It didn’t matter that there was no physical evidence that Amanda Knox had killed her friend Meredith Kercher: as far as those investigating the murder case were concerned, Knox’s behaviour proved her guilt. Knox was calm and collected when she was expected to be grief-stricken. She also acted in a way investigators thought was inappropriately sexual, kissing her boyfriend outside the building where Kercher had died while the investigation unfolded inside. To the police, and much of the media, this wasn’t how a mourning friend would behave. Giacomo Silenzi, who had been dating Kercher at the time of her death, was widely quoted saying: “[Knox’s] eyes didn’t seem to show any sadness, and I remember wondering if she could have been involved.”

Knox’s conviction – which was acknowledged to be wrongful when she was acquitted in 2015 – demonstrates the profound role that demeanour can play in the courtroom. “People have stereotypes about how a situation should be appraised and what the person should feel,” said Phoebe Ellsworth, Professor of Law and Psychology at the University of Michigan, “and if the person doesn’t fit that stereotype, they are judged more harshly.” A study published in The Journal of the American Academy of Psychiatry and the Law found that, as a general rule, a defendant who gives a convincing display of remorse is more likely to get a lenient sentence. Meanwhile, rape survivors who are visibly distressed have a greater chance of being believed by the jury.

In these legal battles, a person’s nonverbal communication can sway the outcome of a case. Fortunately, many people will never experience the ordeal of defending themselves against a false accusation or fighting for custody of their child. Nevertheless, in day-to-day life, there are often scenarios where some kind of emotional evaluation is needed to make an important decision, ranging from mental health screenings to job applications. Technology has advanced to a point where organisations no longer have to make these decisions alone.

Today, the market for emotion-detection technology is worth roughly $21.6bn, and its value is predicted to more than double by 2024, reaching $56bn. Firms can purchase systems to help them vet job applicants, analyse advertisements for their emotional impact and test criminal suspects for signs of deception. One company, Oxygen Forensics, has suggested its emotion-detecting software could be used by the police to help detect anger, stress and anxiety.

But experts on the ethics of artificial intelligence are beginning to sound the alarm bells. In December 2019, the AI Now Institute, a research centre at New York University, called for new laws to restrict the use of emotion-detecting tech, saying the field was built on “markedly shaky foundations”.

Poker face
No matter how hard we might try to hide our emotions, our bodies can give us away. “During a stressful episode, your body usually experiences a chain of physiological changes such as pupil dilation, deeper respiratory breathing, intensified beating of the heart and increased muscle tension, among many others,” said Javier Hernandez, a research affiliate at the Affective Computing group of the MIT Media Lab. “These types of signals can be readily captured with cameras and wearable devices and analysed with artificial intelligence to detect stressful episodes.”

The emotion-detection technology industry:
$21.6bn – value in 2019
$56bn – predicted value by 2024

There are a range of business scenarios in which analysing a person’s stress response could come in handy. The artificial intelligence program Cogito, which emerged from the MIT Human Dynamics Lab in 2007, analyses a person’s voice for signs that they are in a state of heightened emotion. The program has been distributed to dozens of call centres in the US to help staff detect when customers become distressed over the phone. “In the call centre, customer service phone professionals must often handle more than 50 calls per day,” said Steve Kraus, Senior Vice President of Marketing and Finance at Cogito. “The repetition and continuous demand for compassion can be draining, leading to cognitive overload and emotional fatigue and, eventually, burnout.”

Emotion-detection technology could also help advertisers optimise their content. Realeyes, a London-based start-up that has so far raised $33.8m in funding, promises to enhance video content by analysing its emotional impact on viewers. “Our technology is trained on the world’s richest database of facial coding data and incorporates more than 620 million emotional labels across more than 3.8 million video sessions,” said Max Kalehoff, Vice President of Marketing at the company.

The technology is already highly sophisticated. Realeyes recently updated its predictive modelling for behaviours like view-through rate and responses such as interest and likability. As more data is collected on facial expressions, the accuracy of this technology can only get better. “Today, there are a select few companies that have access to large enough data sources to successfully train systems,” said Kraus. “As the data can become anonymised and shared more broadly, it can accelerate the development of new and more powerful technology.”

The truth of the matter
In its annual report last year, the AI Now Institute warned that we still lack a scientific consensus as to whether these technologies produce accurate results. One of the key problems that Professor Kate Crawford, co-founder of the research centre, identified was that some firms were basing their software on the work of Paul Ekman, a psychologist who led a pioneering study on nonverbal behaviour in the 1960s. He proposed that there are six basic emotions that are recognised around the world: happiness, sadness, fear, anger, surprise and disgust. “His cross-cultural research had a huge effect on emotion research,” said Ellsworth, “as it provided evidence that the recognition of emotions from facial expressions was similar across cultures, and thus that the link between facial expression and emotion might be universal.”

Given the huge influence Ekman’s work had on the science of emotion, his ideas are pervasive in emotion-detection technology, informing many algorithms, including those sold by Microsoft, IBM and Amazon. His emotion-detection research was also used to develop training programmes for the CIA, FBI and US Customs and Border Protection agency.

If there is so much room for error when it comes to reading a person’s facial expressions, we must question how a machine can ever be programmed to get it right

However, more recently, psychologists have begun to question the methodology of Ekman’s study. One of his most vocal critics, psychologist Lisa Feldman Barrett, took issue with the fact that Ekman had provided the study’s participants with preselected emotional labels to match with photographs. Feldman Barrett argues that this meant Ekman had nudged participants towards certain answers. When she conducted the same test without labels, the correlation between specific facial expressions and emotions fell dramatically.

It seems emotions aren’t as universal as we once thought. For the Trobriand people – a remote society in Papua New Guinea – the ‘gasp face’, associated with fear and surprise in many cultures, is seen as a threatening expression. More broadly speaking, different cultures have varying expectations for when it’s appropriate to use a certain expression and in what manner. In the US, for example, it’s common to smile at strangers. However, in other parts of the world, frequent and overeager smiling is not so well received; one Russian proverb roughly translates to “smiling for no reason is a sign of stupidity”.

Ekman’s study is problematic not just because it underplays the importance of cultural differences, but also because it assumes that there is a correlation between someone’s facial expression and their emotional state. Researchers have since found the exact opposite: a recent study by the Ohio State University suggested that facial expressions are often unreliable indicators of emotion.

This presents a problem for technology that relies on facial expressions to reach a conclusion. In certain scenarios, misreading a person’s emotions can have serious consequences. Last year, at the borders of Hungary, Greece and Latvia, the EU trialled the use of a lie detector to screen arrivals. A journalist for online publication The Intercept tested the lie detector – called iBorderCtrl – at the Serbian-Hungarian border; despite the journalist answering every question truthfully, the machine flagged four of their 16 answers as false.

If there is so much room for error when it comes to reading a person’s facial expressions, we must question how a machine can ever be programmed to get it right. It’s on these grounds that the AI Now Institute makes its recommendation that regulators should ban the use of emotion-detection technology in “important decisions that impact people’s lives and access to opportunities”. Until such a ban is in place and regulations are strengthened, it concludes, AI companies should stop selling emotion-detecting technology.

Taken out of context
However, some affective computing experts – such as Jonathan Gratch, a research professor of computer science and psychology at the University of Southern California – think banning the technology goes too far. “I agree with the spirit of many aspects of the report,” he told The New Economy. “But I am actually concerned about some of the ways they’re representing the science and their particular recommendation – I feel – is overly broad.”

Gratch’s main complaint with the AI Now Institute’s report is its failure to outline the difference between emotion-detection technology that takes context into account, and that which doesn’t. For him, this difference is crucial: “It’s the main challenge with the technology right now. Many of these companies do context-ignorant emotion recognition. So they would just take an image or video of a person and attend to the facial movements, but the algorithms have no knowledge of what the context is.”

Realeyes in numbers:
2007 – founded
$33.8m – raised in funding
620m – emotional labels
3.8m – video sessions

When trying to read a person’s emotions, context is everything. If someone cries at the birth of their child, it’s likely to be out of happiness, not because they’re upset. If a footballer puffs out their chest and bares their teeth after they score a goal, they’re celebrating a victory, not squaring up for a fight. “So it’s only valuable if an algorithm can say, ‘I see this person is in a trial and I know what’s going on in that situation’. Then you might be able to make valid inferences,” said Gratch. What’s more, for the most part, we are deliberately regulating the signals we send to others. This has repercussions for lie-detection technology in particular. If the system’s goal is to catch someone in a lie and that person knows they’re being watched, then the accuracy of that result will inevitably be affected.

To Gratch, banning emotion-detection technology outright is not the answer, because doing so would imply that it’s inherently problematic to make inferences about someone’s emotional state. But we already use a variety of techniques to do this in many aspects of society: personality tests are relatively common in work environments and are used for employee development and as part of the hiring process; no one can be clinically diagnosed with depression or PTSD without an assessment of their wellbeing; and many criminal cases involve an evaluation of the accused’s mental state. There’s little fundamental difference between such tests and emotion-detection software, with the exception that the latter is more technologically sophisticated. “[The AI Now Institute] report can be seen to propose a ban on these techniques as well,” said Gratch.

Technology might be even more accurate than some of these techniques. According to a recent study, one third of the psychological tests used in US court proceedings aren’t generally accepted by experts in the field. Tess Neal, author of the study in question and an assistant professor of psychology at Arizona State University’s School of Social and Behavioural Sciences, explained that many of these are projective tests: “Projective tests require the clinician to show some kind of stimulus to the person being assessed – like an inkblot – or ask the person to create something – like draw a picture.” However, tests like this risk placing too much significance on the clinician’s interpretation of the person’s behaviour. “[Projective tests] are, as a group, less scientific than so-called ‘objective’ measures that are based on algorithms and data from large groups of people,” said Neal.

Feel-good factor
This brings us to an aspect of emotion-detection technology that is often overlooked: its philanthropic applications. Researchers in affective computing hope that emotion-detection technology will soon be able to use the information it has gleaned about a person’s emotional state and – by also taking into account the context – react in a way that helps its users.

“Electronic devices like phones mostly ignore how we are feeling and they almost always behave the same way,” said Hernandez. “This is very different from the way we usually interact with other humans. If technology could read human emotions and adapt its behaviour accordingly, it could not only achieve more natural and less stressful ways of interaction, but [could] also help [us] cope with other sources of daily life stress. For instance, if a mobile device detects that we are having a stressful time, it could filter out negative news, recommend listening to positive songs or recommend talking to someone close to us to help find relief.”

Companies have hastened to develop emotion-detection technology, but some have done so on the back of outdated theories about nonverbal behaviour

At the MIT Media Lab, Hernandez and his team have explored how smart cars could lower aggression in drivers by modulating the temperature and changing the interior and exterior illumination depending on their mood. Researchers also believe that emotion-detection technology could be used in mental health screenings to assist people in building their interpersonal skills or to improve remote learning by helping teachers understand whether students are paying attention and when they need help.

We are still some way off realising this technology’s potential, though. “We are years away from creating a system that generally guides behaviour as a human would, and may never actually get there given all the nuances in human behaviour,” said Kraus. “That said, detecting and guiding emotional intelligence can be effective if it is done within a specific context.”

Companies have hastened to develop emotion-detection technology, but some have done so on the back of outdated theories about nonverbal behaviour. In particular, the technology’s use within the criminal justice system is a cause for concern, given the early stage the technology is at and the severe repercussions that can be felt as a result. However, that should not legitimise a blanket ban on the technology. “Any kind of inference is always probabilistic,” said Gratch. “What you’d hope is that you can make better decisions using that information than not, even if it is probabilistic.” The benefits of using the technology may continue to outweigh the risks, but it all comes down to how high the stakes are if the technology gets it wrong.

Saudi Arabia’s radical new oil strategy

Saudi Arabia’s recent decision to crank up oil production represents a dramatic shift in its thinking about energy markets and its own reliance on oil revenues. Gone are the days when Saudi oil reserves were prudently managed for future generations. By no longer maintaining a specific oil price band or retaining spare production capacity, the kingdom is stepping away from its longstanding role as the market’s swing producer.

The change reflects Crown Prince Mohammed bin Salman’s (MBS’) view that Saudi Arabia has a relatively narrow window of opportunity to monetise its large oil reserves. He has embarked on a policy of capturing market share rather than trying to set the price, once again breaking with long-standing policies that he believes are no longer useful.

If MBS persists with this strategy, he could significantly alter the dynamics of global energy markets. By keeping prices depressed, Saudi policy will not just drive more expensive forms of oil production out of the market, it will also make it harder for renewable energy to compete with fossil fuels – at least, in the near term.

Saudi Arabia has well over 50 years’ worth of recoverable oil reserves; most of that will become a stranded asset if it is not produced more quickly

Marking a new path
The new strategy became clear on March 7, 2020, a Saturday, when Saudi Arabia decided to cut its official selling price and increase its oil production to above 10 million barrels per day, with output in April likely to be nearly 11 million, up from 9.7 million in recent months. When markets reopened the following Monday, oil prices suffered their largest single-day decline since 1991.

Officially, the Saudi action was a response to Russia’s refusal to agree to voluntary oil production cuts at an OPEC+ meeting on March 6. Since 2016, the Russians and the Saudis have been coordinating their production to keep prices elevated at around $50-60 per barrel. Yet the net effect of this cooperation has been to help the US shale industry boost its own production and sales, thereby capturing most of the world’s incremental demand. Having suffered declining exports since 2016, the Saudis were probably hoping that a reduction in output would shore up prices at a time of weakening global demand, owing to the coronavirus outbreak.

Working in partnership
Why the change of tack? Commentators have offered various explanations, including the intimation that Saudi Arabia might be colluding with Russia to undermine the US shale industry. But such collaboration is highly unlikely: there is little trust between MBS and Russian President Vladimir Putin, who has not forgotten that Saudi Arabia’s oil market machinations in the 1980s may have played a role in the collapse of the Soviet Union. Moreover, Saudi Arabia already tried and failed to take on the shale industry in 2014-16, when it badly underestimated US shale producers’ technical competence and ability to operate at low prices.

Rather than pursuing a short-term tactical win, MBS may instead be focusing on several longer-term development goals. He knows that he has only limited time – perhaps just a couple of decades – to extract maximum value from oil, because climate change has fuelled a global push toward de-carbonisation and renewable energy. Saudi Arabia has well over 50 years’ worth of recoverable reserves; most of that will become a stranded asset if it is not produced more quickly.

Although Saudi Arabia will face serious technical and financial hurdles in pursuit of its new, highly ambitious production targets, the deeper point is that the old rules no longer apply. And under the new dispensation, the Saudis may also start running the state-owned oil giant Saudi Aramco more like a profit-maximising international company – producing as much as possible – rather than like the global oil market’s central bank.

Lesser of two evils
There are strong arguments for why the country should pursue this path. For starters, Saudi oil is cheaper to extract and transport than many other reserves. It is also ‘cleaner’ than that produced by Canada’s tar sands, and emits little methane compared to Russian oil. And Saudi Aramco is one of the world’s most technologically advanced and technically competent oil companies. In other words, Saudi oil has multiple comparative advantages over the competition, and therefore is perfectly placed to hold a privileged position in the global clean-energy transition.

Saudi Arabia’s policy shift should give pause to American politicians who boast that the US has achieved energy independence through shale

Moreover, the kingdom has been signalling its intended change in strategy for several months. In December 2019, it proceeded with the initial public offering of 1.5 percent of Saudi Aramco, which represents one way of monetising the upfront value of its oil reserves while also signalling the shift towards maximising profit. After many years of disputes, the Saudis have also reached an agreement with Kuwait over oil production in the neutral zone, which will allow production to increase by up to 500,000 barrels per day. Lastly, Saudi Arabia recently announced plans to develop a massive unconventional gas field called Jafurah, which will make even more oil available for export.

Saudi Arabia’s policy shift should give pause to American politicians who boast that the US has achieved energy independence through shale. In an all-out war for market share, US, Canadian, Russian and other oil producers will have a hard time competing with the Gulf, given its lower costs and other competitive advantages.

Watching the hourglass
The question, of course, is how long Saudi Arabia can maintain this strategy before the new low-price environment drains its own coffers. A back-of-the-envelope calculation suggests that it can hold out for two years.

MBS may be gambling that he can outlast the competition, but given the structural features of the oil market and the world’s inevitable transition to renewables, he probably sees no other alternative. OPEC quotas and production agreements with the Russians have not delivered the results he needs. Whether the new policy can produce more tangible benefits remains to be seen.

©️ Project Syndicate 2020

The end of passwords

As it stands, our use of digital technologies is hugely dependent on passwords. They are needed to access our email accounts, our social networks and our mobile phones. “What’s the password?” has gone from being a secret-agent catchphrase to a call of despair as you try and fail to log in to your account for the umpteenth time. According to a 2019 report by software firm LastPass, employees at large companies keep track of an average of 25 unique logins, while staff at smaller organisations manage an average of 85. It is no surprise that many individuals are suffering from password fatigue.

The problem is, when people are required to have too many unique passwords – more than can possibly be remembered – they start cutting corners. Passwords get reused, or easy-to-guess credentials start being employed. According to the UK’s National Cyber Security Centre, the most commonly hacked password of 2019 was ‘123456’.

With research by TraceSecurity indicating that as many as 81 percent of company data breaches are caused by poor password protocols, businesses are scrambling to find alternative methods of authenticating their employees. But while many individuals have grown weary of passwords, they are also accustomed to them – getting them to switch to something else might not be easy. The password era may be coming to an end, but it is likely to be a slow death.

Individuals believe that passwords are on their way out – Bill Gates said as much back in 2004 – but deciding what to replace them with is proving a challenge

Doubling down
Although easy-to-guess passwords are the root cause of a number of cyberattacks, businesses have not removed them from the authentication process – instead, they have embraced what is known as ‘two-factor authentication’. Typically, this approach requires an individual to present two of the following factors before their identity is verified: something they know, something they have and something they are.
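As a concrete sketch of the ‘something they have’ factor, the snippet below uses the open-source pyotp library to generate and verify a time-based one-time password of the kind produced by an authenticator app. The enrolment and login flow shown here is a generic illustration, not any particular vendor’s implementation.

```python
import pyotp

# Enrolment: the service generates a shared secret and the user stores it
# in an authenticator app on their phone (usually by scanning a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: alongside their password, the user types the six-digit code
# currently shown on their device; the service checks it against the same secret.
code = totp.now()
print("accepted" if totp.verify(code) else "rejected")
```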

In the case of something they know, a password remains the most commonly used piece of data. Something they have might be a device, like a smartphone, and something they are might include biometric data. This extra layer of security makes it much harder for hackers to access personal or business assets. In addition, a lot of firms are employing a password manager to bolster their cyber defences. With many employees now being asked to remember several passwords for different pieces of software, organisations have looked for a way to push back against a tendency to reuse passwords.

Password managers are applications that store information for multiple digital solutions and log in users automatically. They hold a database of passwords that is encrypted and can only be accessed via a master password, leaving users with much less to remember and meaning that they are more likely to choose a stronger password overall.
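A minimal sketch of that idea, assuming Python’s cryptography package: an encryption key is derived from the master password, and each stored credential is encrypted with it, so only the master password ever needs to be remembered.

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master_password: bytes, salt: bytes) -> bytes:
    # Stretch the master password into a 32-byte encryption key
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)  # stored alongside the encrypted vault
vault = Fernet(key_from_master(b"one strong master passphrase", salt))

# Each site credential is kept only in encrypted form
entry = vault.encrypt(b"example.com | alice | q7#Vt9-randomised-password")
print(vault.decrypt(entry).decode())
```

Real products add more on top – secure syncing, breach alerts, random password generation – but the core design is little more than this.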

Passing out
Many individuals believe that passwords are on their way out – Bill Gates said as much back in 2004 – but deciding what to replace them with is proving a challenge. Biometric identification – like a fingerprint or an iris scan – is harder to cheat, but the consequences of doing so are much greater. Companies would have to store copies of this information internally in order to verify each individual, making them even greater targets for hackers looking to steal personal credentials. Passwords, when compromised, can be changed – a fingerprint cannot.

In today’s digital-first world, there is mounting pressure to tackle the threat posed by hackers – and with good reason. Worldwide, cybercrime is estimated to have cost $600bn last year, with inadequate password protocols responsible for a sizeable chunk of this figure. Transitioning to a different system of authentication will not be simple, but it will surely be easier than remembering 20 different passwords – each containing at least one upper case letter, one lower case and one special character, of course.

Will COVID-19 finally usher in the age of the cryptocurrency?

Even as all of humanity mobilises against COVID-19, thoughts are turning to how the world will be different after the crisis. As businesses rush to adapt to the new world of social distancing, the pandemic has accelerated an already inexorable trend towards digital commerce. This broader shift should also include the widespread adoption of digital currencies, which provide stronger consumer financial and privacy protections.

For most of the 20th century, encryption was reserved for national security needs. Cryptography helped the Allies win the Second World War, and then protected secret communications during the Cold War. Until as recently as 1992, the US, as a matter of national security, did not allow cryptographic technology to be exported. Encrypted communication was not widely available, and anyone using it was assumed to have something to hide.

But starting in the 1990s, early internet entrepreneurs began calling for encryption to be used in e-commerce, arguing that it was needed in order to protect customer credit card numbers, passwords and other information entered online. It turned out that the same encryption technology that had been created in academic labs – where trust and collaboration reigned – could be useful to everyone.

Cryptocurrencies hold the promise of creating a more open financial system with worldwide access, instantaneous fund transfers, lower costs and vastly improved consumer privacy protections

Changing expectations
US policymakers and law enforcement initially balked at this push towards widespread encryption. In their view, privacy for everyone meant privacy for terrorists, drug dealers and money launderers. As then FBI Director Louis J Freeh told Congress in 1994, preserving the US Government’s ability to intercept internet communications was “the number one law enforcement, public safety and national security issue facing us”.

The debate about end-to-end encryption is still raging. But, crucially, consumer expectations have changed since the 1990s. The overwhelming majority of internet traffic is now encrypted, and most of us have been trained to look for the closed-lock icon in our browser before entering sensitive information. Popular apps like WhatsApp, Telegram, iMessage and Signal have led the way in normalising private messaging that can’t be tracked by third parties.

But there is one area of our lives where privacy is not yet the norm: our personal financial information. By law, financial firms are required to collect reams of personal information about their customers. This information ultimately ends up in online databases, where it presents a tempting target for hackers. In 2017, the credit reporting firm Equifax revealed that a data breach had exposed sensitive information about more than 147 million consumers, or just under half of the US population. That followed a similar breach in 2013, when hackers famously obtained the names, credit card numbers and other information of tens of millions of Target customers.

Anonymous money
Fortunately, a solution is on the horizon. Cryptocurrencies hold the promise of creating a more open financial system with worldwide access, instantaneous fund transfers, lower costs and vastly improved consumer privacy protections. When bitcoin first gained popularity, many people incorrectly assumed that it was anonymous money. In fact, as a blockchain technology, it uses a public ledger that records a digital trail of every transaction. Blockchain analytics firms are thus now helping law enforcement track down criminals who thought their trail was covered. And cryptocurrency exchanges like Coinbase have instituted robust anti-money-laundering and Know Your Customer programs that rival those of any financial institution.
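The point about a public, traceable ledger can be illustrated with a toy hash chain. This is a simplified sketch of the general idea, not Bitcoin’s actual block format.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain, prev = [], "0" * 64
for tx in [{"from": "alice", "to": "bob", "amount": 0.50},
           {"from": "bob", "to": "carol", "amount": 0.20}]:
    block = {"prev_hash": prev, "timestamp": time.time(), "tx": tx}
    prev = block_hash(block)
    chain.append({**block, "hash": prev})

# Every payment is chained to the one before it, which is why analytics
# firms can follow funds from address to address across the ledger.
for b in chain:
    print(b["hash"][:16], b["tx"])
```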

Several more recent developments in cryptocurrency technologies promise to take consumer privacy to even higher levels, and they are sure to be controversial. First, ‘privacy coins’ such as Zcash and Monero offer new cryptocurrency protocols that make every transaction untraceable. Other cryptocurrencies aspire to replicate these features, and even JPMorgan Chase has explored private transactions through its Quorum cryptocurrency. This shift is a bit like when websites moved from HTTP to HTTPS as the global standard: it lets consumers know that their information is protected by default.

For countries thinking about cryptocurrency policy, the best approach will be to strike a balance between law enforcement, cybersecurity, privacy, innovation and economic competitiveness

Second, so-called ‘non-custodial cryptocurrency wallets’ now enable customers to store their own private keys (which allow one to move funds) instead of relying on a third party. By not actually storing customer funds, the providers of non-custodial wallets are aiming to position themselves as software companies rather than financial institutions subject to regulation. In the past, non-custodial wallets required a certain degree of technical sophistication to operate, limiting their use. But, like encrypted messaging apps, they are becoming increasingly accessible to a mass market.
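A rough sketch of what ‘storing your own keys’ means in practice, using Python’s cryptography package. The curve and message here are illustrative; real wallets use their own key-derivation and transaction-signing schemes.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

# In a non-custodial wallet the private key is generated and kept on the
# user's own device; the service never sees it.
private_key = ec.generate_private_key(ec.SECP256K1())
message = b"send 0.1 coin to carol"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can check the signature without ever
# learning the private key (verify raises an exception if it is invalid).
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified; the private key never left the device")
```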

A new generation
Unsurprisingly, these innovations have alarmed banks, regulators and law enforcement agencies. But just as the early internet needed encryption to enable digital commerce, cryptocurrencies need privacy protections to unlock their full power and potential. Whether one needs to guard against authoritarian regimes, data harvesters or criminals, the best way to ensure that sensitive financial data isn’t hacked is to avoid having to collect it in the first place.

Enhancing consumer financial protections does not mean giving free rein to criminals. Law enforcement agencies still have a wide range of tools at their disposal, from subpoenaing cryptocurrency exchanges to examining conversions into and out of fiat currencies (which are likely to remain the choke points for law enforcement). And these exchanges will continue to be regulated as financial services, regardless of whether consumers are using privacy coins or non-custodial wallets.

Having watched the US benefit enormously from the creation of the world’s leading internet companies, many countries are now working to attract the next generation of cryptocurrency firms. For countries thinking about cryptocurrency policy, the best approach, as always, will be to strike a balance between law enforcement, cybersecurity, privacy, innovation and economic competitiveness.

Consumers in a free society will always demand and expect reasonable levels of privacy. Our financial lives are no exception. Fortunately, cryptocurrencies can fix some of the most vexing issues in financial services. As we plan to rebuild economically after the COVID-19 crisis, we must allow these technologies to grow.

©️ Project Syndicate 2020

Artificial intelligence begins to show signs of human-like creativity

“It was nine seventeen in the morning, and the house was heavy.” Thus begins the 2018 novel 1 the Road. Not a bad opening sentence – certainly no worse than many others that have been committed to posterity. The main difference, however, is that this particular sentence has no author – not in the conventional sense of the word, anyway.

This is because 1 the Road was written entirely by artificial intelligence (AI). Its prose was conjured up by a long short-term memory recurrent neural network, which took input – or inspiration, if you prefer – from surveillance cameras and other sensors mounted on a truck travelling from New York to New Orleans. The end result was an AI-penned version of Jack Kerouac’s On the Road.
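For a sense of how such a system generates prose one character at a time, here is a toy character-level language model, loosely in the spirit of a long short-term memory network. The corpus, layer sizes and sampling loop are illustrative assumptions, and training is omitted; this is not the system that wrote 1 the Road.

```python
import torch
import torch.nn as nn

corpus = "it was nine seventeen in the morning, and the house was heavy. "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
# ... training on next-character prediction over a large corpus would go here ...

# Sampling: feed the model a seed character and repeatedly draw the next one.
idx = torch.tensor([[stoi["i"]]])
state, text = None, "i"
for _ in range(80):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    text += chars[idx.item()]
print(text)
```

Ross Goodwin’s set-up additionally fed camera images, GPS coordinates and timestamps into the model as prompts, which is where lines like the one above get their reportage-like texture.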

This new novel is just one example of AI being used for creative endeavours. Those employed in jobs that are deemed ‘low skill’ have long known that automation poses a threat to their livelihood, but musicians, artists and writers have largely assumed that a machine could never replicate their work. Perhaps this confidence was misplaced.

Musicians, artists and writers have largely assumed that a machine could never replicate their work. Perhaps this confidence was misplaced

Not so smart
Every year since 2015, AI has featured as part of Gartner’s top 10 strategic technology trends in one form or another. The technology and the huge impact that it could have on the business world has long been appreciated, but sifting through the hype is difficult. Science fiction has conjured up images of AI that range from benevolent robot housekeepers to apocalyptic supercomputers. Clearly, the technology is not quite at this stage yet.

Nevertheless, AI is no longer something that needs to be discussed in the future tense – it is already here. When customers have an issue with a product or service, their first port of call is often the website of the respective business. Once there, they might join an online chat to help resolve their problem. Increasingly, this dialogue will be between the customer and a chatbot or virtual assistant. These bots may have friendly-looking avatars and surprisingly human-like responses, but they are in fact nothing more than lines of code.

In the manufacturing sector, companies like BMW, Airbus and LG are using AI to deliver greater levels of efficiency, safety and reliability on the factory floor. In the home, meanwhile, developments in AI mean that vacuum cleaners can scan rooms for size and obstacles, determine the most efficient cleaning route and then get to work – all without any human input.

These are all impressive feats – ones that would have been scarcely believable just a few decades ago. However, whether they truly represent AI is debatable. In the aforementioned examples, what is termed AI only works within clearly defined parameters: a robot designed to manufacture car parts cannot employ its skills to help put bikes together, and autonomous vacuum cleaners are powerless when confronted by a humble set of stairs.

Artificial general intelligence refers to a machine that can apply knowledge and skills within different contexts – in short, one that can learn by itself and work out problems like a human

“At present, many examples of AI still represent what has been called ‘narrow AI’, working only within clearly defined parameters,” said Arthur I Miller, author of The Artist in the Machine: The World of AI-Powered Creativity. “There is, however, research being done on developing multipurpose machines. DeepMind in London is working on a version of [computer program] AlphaZero, which started out playing games, to work on medical research – specifically, to look into protein folding, the process whereby [a protein chain folds into the three-dimensional structure that determines its function].”

Self-driving cars, digital personal assistants like Alexa and even automated spam filters fall within what can be termed ‘narrow AI’. Conversely, artificial general intelligence (AGI) refers to a machine that can apply knowledge and skills within different contexts – in short, one that can learn by itself and work out problems like a human. But even as artificial intelligence improves, the number of applications that can be classified as AGI remain few and far between.

Getting creative
One major stumbling block that must be overcome before a machine can claim to possess AGI is the issue of creativity. Humans find it easy to reach beyond the limits of their own knowledge and create something new – it might not be any good (however ‘good’ may be determined), but all of us can write poetry, draw, decorate and cook. These are creative processes, even if we don’t consciously appreciate them as such. Machines, on the other hand, largely do as they’re told.

Increasingly, though, machines are showing their creative side. As well as novels like 1 the Road, an AI bot named Benjamin wrote the screenplay for a short science fiction film called Sunspring in 2016, which was subsequently performed and screened at the SCI-FI-LONDON film festival. It remains available on YouTube.

Journalists, too, have reasons to be concerned. A number of media outlets including Forbes, The Washington Post and Reuters use machine learning tools to help them produce content. Bloomberg uses a computer system known as Cyborg to instantly turn financial reports into mini articles; according to The New York Times, it is now responsible for around a third of all the content produced by Bloomberg News.
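Much automated news writing of this kind is closer to structured templating than free-form generation. The sketch below, with an invented company, figures and template, shows the general idea rather than Bloomberg’s actual Cyborg system.

```python
# Turning a structured earnings report into a short news item.
# The company, numbers and wording are made up for illustration.
report = {
    "company": "Acme Corp",
    "quarter": "Q1",
    "revenue_bn": 4.2,
    "revenue_prev_bn": 3.9,
    "eps": 1.32,
    "eps_forecast": 1.25,
}

change = (report["revenue_bn"] - report["revenue_prev_bn"]) / report["revenue_prev_bn"] * 100
direction = "rose" if change >= 0 else "fell"
beat = "beating" if report["eps"] > report["eps_forecast"] else "missing"

story = (
    f"{report['company']} said {report['quarter']} revenue {direction} "
    f"{abs(change):.1f} percent to ${report['revenue_bn']:.1f}bn, with earnings of "
    f"${report['eps']:.2f} per share, {beat} analyst forecasts of ${report['eps_forecast']:.2f}."
)
print(story)
```

The speed advantage is obvious: the moment the structured filing lands, the sentence is ready to publish.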

Sceptics would say the written word is among the easier creative fields to mimic – after all, it does have clear grammatical rules to follow and there is an almost limitless vault of content for machines to learn from. But AI creativity has extended into other media as well. “Other examples of AI creativity include AlphaGo, an artificial neural network created at DeepMind, which defeated a top-flight Go master in 2016,” Miller told The New Economy. “It did so by making a totally new and unexpected move which went beyond the data on which it was trained – an amazing display of creativity.”

Sony, meanwhile, has used a machine learning platform called Flow Machines to create a song in the style of the Beatles, while in 2019, Warner Music Group signed a deal with an app called Endel for the distribution of 600 algorithm-created tracks to put on streaming services. Similarly, Google’s Deep Dream Generator has been used to create otherworldly pieces of art, some of which have fetched thousands of dollars at auction. The buyers evidently thought the creativity on show was worth paying for.

Machines have even shown an aptitude for thinking on their feet, something previously thought beyond them. Last year, IBM’s Project Debater took on Harish Natarajan, a 2016 World Debating Championship finalist, in a debate over whether preschools should receive government subsidies. Although Natarajan won, IBM’s AI system made a number of convincing arguments, created its own rebuttals to Natarajan’s points and formulated a closing argument.

All work and no play
No discussion of AI would be complete without some consideration of the apocalyptic future that it is destined to bring about – according to some, anyway. Such fears seem hyperbolic at the moment, but that is because there is not much reason to suspect a customer service chatbot is going to take over the world. However, as AI develops further – as it starts to display creativity and emotions – concerns become more justifiable. If AI can do everything better than we can, what is the point of humans even existing?

This replacement fear often manifests itself through the prism of job losses. However, Miller believes a creative machine may be able to collaborate with humans more effectively, rather than replace them: “AIs have hugely more information in their memories than humans and can deal with it in ways beyond our powers. They can therefore work with humans to develop new products, which might not be found without their help.”

Still, there is no doubt that creative AI poses a new threat to the workforce. According to a 2018 report by PwC, as many as 30 percent of existing jobs could face automation by the mid-2030s. However, not all industries and job roles would be affected equally.

“Industries follow different paths of automation over time, and data-driven industries… may be most automatable in the short term,” the report read. “In contrast, relatively low automatability sectors such as human health, social work and education have more focus on social skills, empathy and creativity, which are more difficult to directly replace by a machine, even allowing for potential technological advances over the next 10 to 20 years.”

As the ability of AI solutions to work creatively improves, however, even the jobs once considered safe from automation may find themselves at risk. It will likely take a long time before writers, musicians, chefs and artists are replaced, but they should no longer think it impossible. For employers and patrons, the appeal is obvious: AI will never miss a deadline, complain about having to alter its work or ask for more money.

Finding inspiration
The scientific progress that has enabled machines to display elements of human creativity is astounding – so much so that the scientists behind it are themselves not entirely sure what is involved. Developments are generally focused on neural networks – these were involved in the writing of 1 the Road, the development of Google’s AlphaGo, and were used to craft Project Debater’s arguments.

“Artificial neural networks are loosely inspired by the way the brain is wired,” Miller explained. “They are made up of layers of artificial neurons and, like the human brain, require data in order to respond to what they see and hear. They can learn without being specifically programmed to do so. Deep neural networks have many layers of neurons.”

Job automation in numbers:
$15trn – potential boost to global GDP from AI by 2030
20% – of jobs at risk of automation by the early 2020s
30% – of jobs at risk of automation by the mid-2030s
44% – of workers with low education at risk of automation by the mid-2030s
Source: PwC

Although often compared with one another, the human brain and a traditional computer actually operate in contrasting ways. In a computer, transistors are connected to one another in relatively simple arrangements, or chains. Conversely, in the brain, neurons are interconnected with each other in complex, densely packed layers. This makes computers great for storing huge amounts of information and retrieving it in set, pre-programmed ways. But the brain, while it may take a long time to learn complex information, can reorganise and repurpose it into something new – in other words, it can behave creatively.

In an effort to mimic this creativity, computer scientists have been working on artificial neural networks (ANNs) inspired by the brain. They perform tasks by learning from examples without being given task-specific rules. This approach has shown particular success in pattern recognition, facial recognition, translating between languages and speech recognition.
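As a small illustration of learning from examples rather than rules, the snippet below trains a modest feedforward network on scikit-learn’s bundled handwritten-digit images. The layer sizes are arbitrary choices for the sketch, not a claim about any production system.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# No one writes a rule saying "an eight has two loops"; the network infers
# its own internal representation from labelled 8x8 pixel images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print(f"accuracy on unseen digits: {net.score(X_test, y_test):.2f}")
```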

Importantly, however, there is no physical difference between an ANN and a more traditional computer. Machines built on ANNs still have transistors connected in much the same way as a standard consumer PC, but will be running software that mimics the connections seen in a human brain.

But despite the fact that ANNs are designed by programmers, just like any other piece of computing software, how they work is not always understood. AI suffers from what is known as the ‘black box’ problem: while we have clear visibility of the inputs and outputs of any such system, we cannot see how the algorithms take the former and come up with the latter. How exactly did Google’s AlphaGo platform come up with the playing moves it chose? Why does 1 the Road start the way it does and not with any number of other grammatically correct sentences? We cannot see inside the black box to answer these questions, which means a lot of AI’s achievements stem from guesswork.

Everyone’s a critic
There are many who will say that the examples of creativity that machines have displayed so far are still operating within fixed parameters. When a computer crafts a song, it may be impossible for a human to determine in advance exactly what it will sound like, but there are unlikely to be any other surprises. The machine is only creating music because it has been told to do so – this seems a long way from the spark of inspiration felt by the likes of Mozart or McCartney.

“Machines have shown glimmers of creativity, but we will not be able to say that they are truly creative until they have developed emotions, volition and consciousness and actually desire to create,” Miller said. “They will also need to be able to assess their work. One day, however, machines will certainly be truly creative. There is no reason why only humans can be called creative. Many people who deny that machines will ever be creative do so out of fear of dystopian worlds that are more sci-fi than reality.”

To determine whether machines will ever truly reach this standard, a definition of creativity will need to be agreed upon. This in itself seems like an impossible undertaking. Art is subjective, and whether a machine can be called an artist is likely to remain so as well – at least, until the point when (or if) AI starts to recreate a wider range of human characteristics.

While we have clear visibility of the inputs and outputs of any AI system, we cannot see how the algorithms take the former and come up with the latter

But ultimately, whether a machine is being truly creative may not matter outside of philosophical debates. The artistic results that computers will be able to produce, whether in the fields of music, art or literature, are only going to get better and better. If audiences approve of the output, will many people care whether art is produced by a troubled genius or lines of software code?

That is a question for another day. For now, neurologists will continue studying the mysteries of the human brain and computer scientists will continue in their efforts to recreate them in software form. Even if true creativity has not been achieved yet, we can still marvel at the dream-like scenes created by Google’s Deep Dream Generator, enjoy software-crafted songs and grow frustrated in our efforts to beat a computer at chess.

The pace at which AI is developing is impressive regardless of its shortcomings. And it is easy to see where these remain most pronounced: a machine does not yet seem able to tell whether what it has produced is any good, or, indeed, have any concept at all of what it has created.

“The table is black to be seen, the bus crossed in a corner,” begins another section of 1 the Road. “Part of a white line of stairs and a street light was standing in the street, and it was a deep parking lot.” Gibberish? Perhaps – or maybe the avant-garde musings of a literary genius. After all, the opening of James Joyce’s Finnegans Wake is arguably even less coherent, starting mid-sentence: “riverrun, past Eve and Adam’s, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs.”

Art will always be open to interpretation, discussion, praise and ridicule. No one can say precisely what creative works will be produced in the future, but it’s looking increasingly likely that they will be made by both man and machine.