Elon Musk unveils chip that will merge human brains with computers

At a live-streamed event on July 16, SpaceX and Tesla CEO Elon Musk revealed plans for a brain chip implant currently being developed by his start-up Neuralink. The company, which was launched in 2017, has so far raised a total of $158m to build computer-brain interfaces, but this is the first time the firm’s technology has been revealed to the public.

The Neuralink implant would enable humans to control computers using only their brain. Although not the first of its kind, Musk claims that his device has a superior design and a higher data processing capacity than similar technology. It uses thin, flexible threads that, at four to six micrometres in width, are thinner than a strand of human hair. This means they could move with the brain rather than damaging the tissue, as larger, more rigid implants might.

It’s likely that Neuralink has a long way to go in terms of earning trust and proving its viability before the implant becomes commercially available

The implant would be embedded by a surgical robot that was also unveiled during the presentation. The robot works like a sewing machine, weaving the threads through the brain tissue without breaking any blood vessels.

With the help of brain implants, it’s hoped that paralysed humans could learn to control phones, computers or robotic limbs. The implants could also be used to treat Parkinson’s disease through deep brain stimulation.

Another reason for developing the technology, according to Musk, is safeguarding humans against the existential threat of artificial intelligence. Rather than humans getting “left behind” by the technology, Musk wants to achieve a “symbiosis” between the two.

The implant has already been successful during animal trials, Musk announced at the event. Neuralink now hopes to begin human trials as early as next year, although the company has not yet received the FDA permission it needs to carry them out.

Musk is notorious for making grand claims about his companies’ technology only to experience setback after setback: both Tesla and SpaceX have received huge amounts of funding, and yet have so far struggled to demonstrate sustainable profitability. It’s likely that Neuralink has a long way to go in terms of earning trust and proving its viability before the implant becomes commercially available.

Amsterdam halts the creation of new data centres

Until recently, Amsterdam had been experiencing a data centre boom. Ultimately, this is a symptom of the city’s thriving tech sector: an increasing number of international tech companies have established themselves in Amsterdam, drawn by cheap electricity and attractive taxes.

However, the Dutch capital now believes it has too many data centres, with the Amsterdam Metropolitan Area announcing that it is preventing any more from being set up in the region until the end of the year.

The Amsterdam region is home to 70 percent of the data centres in the Netherlands and a third of all data centres in Europe

As more tech companies have moved to Amsterdam, the demand for data storage facilities has increased. Today, the region is home to 70 percent of the data centres in the Netherlands and a third of all data centres in Europe, with many of them having only opened in the last five years. Microsoft and Google each have data centres in the Netherlands, although they are outside the capital.

These centres are notorious energy guzzlers; the grid operator Liander estimates that data centres could use as much as 37 percent of the energy in Amsterdam by 2030. In addition to placing a significant strain on the region’s power grid, data centres take up a substantial amount of space and put pressure on the real estate market. Amsterdam’s property prices have hit record highs, partly driven by the demand for commercial property. The creation of new data centres only exacerbates this problem.

The data centre sector in Amsterdam has spoken out against the government’s decision: in a statement, the Dutch Data Centre Association warned that the move has “serious national consequences for the ambition of the Dutch Government to become a digital frontrunner”.

Amsterdam has promised to allow the creation of new data centres to resume once it has developed a new policy for the sector. It’s likely this new policy will aim to make facilities more energy efficient and climate-friendly. According to the Amsterdam Metropolitan Area’s official statement on the decision, it hopes to use the large amount of “residual heat” produced by data centres to “heat homes and thus contribute to the sustainability of the Amsterdam heat supply” in future.

Israel kick-starts plans to launch 5G by 2020

On July 14, Israel launched a tender for the development of 5G wireless networks in the country. The winning telecoms operators will be announced by the end of the year, with 5G services expected to launch in 2020 and the rollout to continue through 2023.

As the fifth generation of mobile network technology, 5G is expected to be 10 times faster than its predecessor, 4G. Industry commentators believe it could bring about a new technological revolution by opening the door to self-driving cars, smart homes and remote surgeries. As part of the tender, Israel is auctioning frequencies in the 700MHz and 2,100MHz bands, which are also used for 4G, as well as in the 2,600MHz and 3,800MHz bands.

In light of its struggling telecoms sector, Israel is trying to reduce the financial impact that 5G integration could have on its main operators

Among those expected to bid are Israel’s main telecoms operators, Cellcom, Partner Communications and Pelephone. In recent years, these companies have struggled to stay profitable. For example, revenue in Israel’s mobile sector fell 5.6 percent in 2018, according to data from the country’s Ministry of Communications. This is partly a knock-on effect from new operators having entered the market in 2012, which caused a price war and led to the three main operators losing subscribers and revenue. Developing 5G would be sure to give these operators a competitive edge.

In light of its struggling telecoms sector, Israel is trying to reduce the financial impact that 5G integration could have on these companies. In Italy, the government’s tender came at a high cost to businesses: bidders in the Italian sale paid a total of $7.6bn for 5G frequencies, well above the prices paid by companies in the UK and Spain. By comparison, Israel is offering companies incentives of up to ILS 500m ($141.4m), while its Ministry of Communications will give grants to operators who implement at least 250 5G antennas.

Although Israel is home to a thriving tech start-up scene, it still has relatively slow internet speeds and has so far lagged behind other countries on 5G adoption. Spain, South Korea, Switzerland and the UK have already started to roll out 5G services. Through this tender, Israel is pushing to improve its internet infrastructure and ultimately secure the country’s digital future.

Indonesia set to build 7km sea bridge to connect Batam and Bintan

Indonesia will begin construction of its longest sea bridge next year in a bid to connect two of its islands, the country’s cabinet secretariat announced on July 11. It’s hoped the bridge, which will link Batam and Bintan, will spur economic growth and encourage trade with neighbouring Singapore.

The Batam-Bintan Bridge will be approximately 7km in length and cost up to IDR 4trn ($284.7m) to construct. While proposals to build the bridge have long enjoyed support from the Indonesian Government, they have repeatedly stalled. This announcement marks a significant push to finally get the project underway, with the government estimating it will take three or four years to complete.

The Batam-Bintan Bridge is likely to increase tourism and attract Singaporean firms to the Indonesian islands

The islands of Batam and Bintan sit adjacent to Singapore. As such, the bridge is likely to increase tourism and attract Singaporean firms to the Indonesian islands. In recent years, the two neighbours have been forging stronger ties and working together through the private sector for mutual benefit. For example, a large-scale technology park was launched in Batam last year in the hope of serving as a ‘digital bridge’ between the two nations.

Furthermore, Batam and Bintan enjoy strategic locations along major shipping routes. Since Batam is Indonesia’s only free trade zone, the island is particularly well placed to benefit from the US-China trade war. In fact, Taiwanese electronics manufacturer Pegatron, one of Apple’s biggest suppliers, will open its first factory in Batam this year in a bid to shield itself from the impact of the trade conflict.

Indonesian President Joko Widodo has promised to invest $400bn into the country’s infrastructure over the coming years. By connecting Batam to Bintan, Widodo intends to transform the region into a shipping and manufacturing hub, as well as bring in $60bn in new investment. In order to achieve this goal, Indonesia has also earmarked Batam and Bintan as pilot locations for the development of an industrial zone for the processing of halal goods.

Vertical farming start-up AeroFarms raises $100m in Series E funding round

AeroFarms, a vertical farming start-up, has raised $100m in a Series E funding round. The investment will help the US-based company expand its indoor farms and explore the new types of produce it can grow in these facilities.

As reported by the Financial Times, the fundraising will bring AeroFarms’ valuation up to $500m. The round was led by the venture capital arm of Ingka Group, IKEA’s parent company and an existing investor in AeroFarms.

AeroFarms’ crops are grown under LED lights, on top of a layer of cloth suspended over a solution chamber. Here, aeroponic systems mist their roots with water, nutrients and oxygen. Thanks to this closed loop system, the process uses 95 percent less water than field farming, according to AeroFarms’ website.

AeroFarms can grow the exact same seed in half the time it would take a farmer using traditional methods, resulting in 390 times more productivity per square foot

The company also closely monitors the plants’ nutrition intake. This means that AeroFarms can grow the exact same seed in half the time it would take a farmer using traditional methods, resulting in 390 times more productivity per square foot. Earlier this year, AeroFarms closed a ‘farm-to-plane’ partnership with Singapore Airlines, agreeing to provide fresh produce for its in-flight meals.

According to Grand View Research, the global vertical farming market is expected to grow to $9.96bn by 2025. At present, the farms primarily produce salad greens, but companies are hopeful they can expand their portfolios. For example, Plenty, an indoor vertical farming company that received a $200m investment from SoftBank in 2017, is now experimenting with strawberries, cherry tomatoes and more exotic fruits.

Given their high yields and low water usage, vertical farms certainly seem to be a more sustainable option than traditional field farms. However, it is yet to be seen whether they are financially viable: the facilities are expensive to set up and run, with LED lights and ventilation ramping up energy costs. Nevertheless, demand for this new method of farming is likely to rise as climate change impacts harvest output and the challenge of feeding a growing population becomes more pressing.

Top 5 reasons to study in São Paulo

For those looking to obtain a business qualification, overseas study presents an exciting opportunity. In addition to their academic enrichment, international business students have the chance to learn about a new culture and work alongside people from all over the world.

Today, Brazil is the fifth-largest higher education market in the world. This, combined with its vibrant culture and status as a fast-growing economy, makes it an attractive place for the next generation of business leaders and entrepreneurs.

Studying in São Paulo primes international students for exciting careers in an international landscape

In particular, Brazil’s largest city, São Paulo, offers a student experience like no other. The huge metropolis is known for its thriving business centre and for being a melting pot of different cultures, including Japanese, Italian and Portuguese. We list the top five reasons why São Paulo is an excellent choice for those looking to study abroad.

Business is booming
Brazil’s economy has seen tremendous growth in recent years, making it an exciting place for those with an interest in business to study. São Paulo contributes 18 percent of Brazil’s GDP and is widely considered to be the financial capital of the country. As well as allowing students to develop their knowledge of the Portuguese language, studying in São Paulo primes international students for exciting careers in an international landscape.

By gaining a greater understanding of Latin American culture, students can find themselves well placed for careers in one of São Paulo’s booming industries, such as banking, finance or telecommunications. Alternatively, students can use their education in São Paulo to kick-start a global career, having gained experience in a multicultural environment working with people from around the world.

Hotbed of culture
Paulistanos, as São Paulo’s residents are known, have a wealth of cultural activities at their fingertips. The city has more than 100 museums and 300 cinemas, as well as an international film festival that takes place each year. Among the city’s most famous museums is the Pinacoteca, the oldest museum in São Paulo, which has around 9,000 artworks. It is situated in the centre of the old city and surrounded by the Jardim da Luz garden.

Another is the Museum of Art of São Paulo, which displays famous artworks by Pablo Picasso and Vincent van Gogh, among others. The building is easily recognisable for its iconic design, courtesy of modernist architect Lina Bo Bardi. Outside the museum walls, the city also has a thriving street art scene, with the Beco de Batman (Batman’s Alley) displaying the city’s highest concentration of street art murals.

A global city
Modern São Paulo has been shaped by immigration. The city experienced a number of migration waves from the mid-19th century to the start of the 21st, with populations relocating from across Europe and Asia. Today, São Paulo has one of the most ethnically diverse populations in Brazil and is known for its cosmopolitanism. It has the largest Japanese population outside of Japan, which is mostly concentrated in the Liberdade neighbourhood, as well as a strong Italian heritage, particularly in the neighbourhoods of Bixiga, Brás and Mooca.

São Paulo continues to champion immigration today and is at the forefront of a Brazilian initiative to welcome refugees, helping these individuals learn Portuguese and find work in the city. Studying in São Paulo is therefore a highly multicultural experience, in which students encounter a wealth of cultures beyond just that of Brazil.

Eat well
It’s perhaps inevitable that such an ethnically diverse city would give rise to a rich street food culture. Street markets have continued to thrive even as supermarkets spread across Brazil. The Municipal Market, a historic building covering more than 12,000 square metres, is a popular food market selling a wide range of fresh fruits, vegetables, fish and spices.

Cheese-filled pastries, churrasco (Brazilian barbecue) and the city’s famous mortadella sandwich are among the local delights that can be found at one of these markets, but visitors are just as likely to come across Asian street food like imagawayaki, a sweet treat from Japan. São Paulo is home to an array of excellent restaurants, including DOM, currently the only restaurant in Brazil with two Michelin stars, as well as thousands of pizzerias and high-quality sushi restaurants, thanks to its diverse population.

Work hard, play hard
It should come as no surprise that São Paulo has the non-stop energy that is characteristic of a sprawling metropolis. The sheer size of the city can be daunting initially, but this is also a key aspect of its appeal. Many Paulistanos work in high-flying roles within the city’s main industries, and so studying in São Paulo means being right in the heart of Brazil’s fast-growing economy.

In recent years, the influx of people and rising wealth have led to an increasingly vibrant nightlife scene. Every evening, a vast number of bars and nightclubs open their doors, many of which remain open late into the night.

The city is a venue for a number of high-profile music festivals. This year’s Lollapalooza, which took place in April, boasted a lineup featuring the Arctic Monkeys, Lenny Kravitz and more. For a more traditional music experience, visitors can take a trip to the Municipal Theatre, which is home to a range of shows, from ballet to opera, while those looking for a hands-on experience can visit a local samba school and learn Brazil’s signature dance.

As a city that welcomes all visitors, São Paulo is an enthusiastic host of one of the world’s biggest Pride parades, which also featured in the popular Netflix show Sense8.

For those drawn to urban jungles, there is no better place to study abroad, developing the skills needed for a successful career in business.

The social solution to automation

Nowadays, one struggles to think of any jobs that will still be available for our children when they grow up. Panicked parents are increasingly trying to anticipate the next big digital thing so they can give their kids a leg-up over all the other humans whose jobs will soon be automated. Accountants and radiographers are already doomed, but surely the developers perfecting driverless cars or adding new features to Facebook are safe, right?

Instead of thinking this way, we should view the emergence of fabulously efficient digital technologies as an opportunity to create new kinds of jobs that satisfy our social natures. This approach would not only solve the problem of the ‘end of work’; it would also address one of modernity’s greatest ills: loneliness.

Nothing about the Digital Revolution requires us to stop valuing humans and human interactions

Socially isolated people are sadder and sicker than those who enjoy meaningful human connections, and their numbers are growing. According to a 2016 commentary in The New York Times: “Since the 1980s, the percentage of American adults who say they’re lonely has doubled from 20 percent to 40 percent.”

All by myself
A social-digital economy would respond simultaneously to the problems posed by automation and loneliness. Machines and algorithms already rule the digital economy, and humans must accept that they have no chance of competing with them in terms of efficiency and computing power. We should expect – and welcome – a future in which machines fly our passenger jets and perform our heart surgeries. Why put up with clumsy, distractible human pilots or surgeons if we don’t have to?

To be sure, some human workers will be required to manage things in the digital economy, but not nearly at the levels of the past. Meanwhile, all of the humans who would have become pilots, surgeons or accountants in earlier times can instead perform the jobs at which machines are inherently bad.

As Sherry Turkle of MIT notes, for some activities, the involvement of a machine spoils the experience. Consider social media: Facebook and Twitter cannot reduce loneliness because they are designed to serve up a biased sample of social experience. Like digital sugar, they can make a social interaction instantly gratifying, but they always leave an empty feeling behind. By offering merely a simulation of social experience, they ultimately make us lonelier.

In the past, the label of ‘social worker’ applied to a narrow cohort of professionals who cared for those who could not care for themselves. But in a social-digital economy, the meaning of the term would be expansive. After all, the barista who makes your latte also provides a social service merely by asking how your day is going. That simple question, even if motivated by compliance with workplace rules, would have no meaning coming from a machine.

A social species
Our need for social interaction is a product of our evolution. Humans, the social neuroscientist John Cacioppo explained in his book Loneliness, are “obligatorily gregarious”. A zookeeper tasked with creating a “proper enclosure for the species Homo sapiens”, he wrote, would “not house a member of the human family in isolation” for the same reason they would not “house a member of Aptenodytes forsteri [emperor penguins] in hot desert sand”. Put another way, if one wanted to torture an obligatorily social animal, the most cost-effective way would be to isolate it.

Throughout the industrial and post-industrial eras, our social nature has been suppressed by a cultural addiction to efficiency, but the Digital Revolution could help us rediscover what we have lost. Nowadays, the sole application of digital technologies in the workplace is to boost productivity, but with a socially minded approach, we would instead focus on giving human workers freer rein to express themselves. In a social economy, we would still care about efficiency, but we would make allowances for human fallibility. Just as we don’t expect perfect efficiency from our lovers, we should not expect it from human teachers, nurses or baristas.

In addition to efficiency, we should also be thinking about how we can socially enhance various professions, including those that don’t seem especially social. Consider astronauts: a focus on efficiency would require us to phase out human space explorers more or less immediately. Machines are already better at making course corrections and gathering data, and they don’t require the extra facilities that humans need to stay sated and sane in space.

But there is another way of thinking about space exploration – one in which the presence of humans is the entire point. Storytelling has always been a deeply enjoyable social experience for humans, and though robot rovers can stream data from atop Mars’ Olympus Mons, they will never be able to tell an emotionally satisfying story about what it’s like to climb it. Why explore space at all if not to contribute to the story of humankind? From a social perspective, replacing human astronauts with machines is a bit like replacing Meryl Streep with a CGI animation.

For anxious parents, the best way to predict the future of work is not to study the latest technologies, but rather our own past. Before Homo sapiens became farmers, we belonged to forager communities that satisfied many of the social needs that go unmet today. The future of work in the social economy will be about attending to those needs once again.

For that to happen, though, we need to change policymakers’ and businesses’ mindsets. As matters stand, the workers who deal most directly with other humans are often the first to be displaced by automated services. But this is a choice, not an economic necessity. Nothing about the Digital Revolution requires us to stop valuing humans and human interactions.

Rather than channelling the automation dividend into the pockets of a few billionaires, we should start using it to restore meaningful connections between obligatorily gregarious beings. Succeeding at that would be a human story worth telling.

© Project Syndicate 2019

The disruptive impact of fintech

As its Valentine’s Day present to the world, the Financial Stability Board (FSB) in Basel published a report titled Fintech and Market Structure in Financial Services. The report’s subtitle was more insightful, and revealed the authors’ intention to interrogate “market developments and potential financial stability implications”.

The report’s premise is straightforward: the arrival of established technology giants, or Big Tech, on the financial scene could “affect the degree of concentration and contestability in financial services, with both potential benefits and risks for financial stability”.

Maybe the time to assess the financial stability risks of open banking was in the consultation period before the directive was passed

The focus is on companies like Apple, Google, Facebook, Amazon and Ant Financial, rather than the myriad fintech start-ups in Silicon Valley, Israel or clustered round the Old Street roundabout in London. Central banks and finance ministries are beginning to ask whether the activities of the tech behemoths, whose market capitalisations now dwarf those of even the biggest banks, will be wholly benign.

Bowing to Big Tech
In a way, it might be thought surprising that the questions are being raised only now. Phrases like ‘bolting stable doors’ come to mind. In Europe, regulatory changes like the Second Payment Services Directive (PSD2) have been crucial in opening up the banking system, and regulators like the UK’s Financial Conduct Authority have for some time been running regulatory sandboxes to smooth the way for new entrants by helping them to structure themselves to comply with standards.

PSD2, often described as ‘open banking’, requires banks to offer their customer data to non-bank providers of payment and account information services. Aggregators can then present the customer with an integrated view of their finances and offer add-on services.
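To make the aggregator’s role concrete, here is a minimal Python sketch of the kind of merging an account-information service performs on the data banks must expose under PSD2. The bank names, IBANs and response shapes are invented for illustration; a real provider would fetch this data from each bank’s regulated API with the customer’s consent.

```python
def aggregate_accounts(bank_responses):
    """Merge per-bank account lists into one integrated overview."""
    overview = {"accounts": [], "total_balance": 0.0}
    for bank, accounts in bank_responses.items():
        for account in accounts:
            overview["accounts"].append(
                {"bank": bank, "iban": account["iban"], "balance": account["balance"]}
            )
            overview["total_balance"] += account["balance"]
    return overview


# Mocked responses standing in for two banks' account endpoints
responses = {
    "BankA": [{"iban": "NL01BANA0123456789", "balance": 1200.50}],
    "BankB": [
        {"iban": "NL02BANB9876543210", "balance": 310.00},
        {"iban": "NL02BANB1111111111", "balance": -45.25},
    ],
}

view = aggregate_accounts(responses)
print(len(view["accounts"]), round(view["total_balance"], 2))  # 3 1465.25
```

The resulting integrated view is what allows an aggregator to show a customer all of their balances, across several banks, in one place, and to layer add-on services on top.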

Maybe the time to assess the financial stability risks of open banking was in the consultation period before the directive was passed. Even now, the list of contributors to the FSB’s work shows that the European Commission and the key regulators in Europe and North America were not involved. So what did the FSB conclude?

The authors begin, tactfully, with a series of bows to Big Tech. They say, correctly, that “the greater efficiency of new players may enhance the efficiency of financial services in the longer term”. Certainly, the absence of the cost drag of legacy IT systems and underused branch networks (which have been seen as a kind of public service and are therefore hard to rationalise) allows for cheaper digital delivery mechanisms, which banks can only envy.

It is also entirely fair to argue, as the authors do, that increased competition in the supply of financial services may benefit consumers by expanding choice, stimulating innovation and driving down transaction costs. The pressure on traditional providers is generating strong incentives to reduce costs and improve service.

Incumbents can no longer afford to sit back and allow inertia to be their friend, as they did in the past when account switching was rare. But the FSB also warns that cross-subsidisation may allow Big Tech firms to gain market share rapidly and knock out existing providers. As a result, “their participation may not result in a more competitive market over the longer term”.

Remain vigilant
That is a warning policymakers should heed. But the FSB is supposed to be primarily interested in stability, and here the report points in both directions at once. On one hand, the authors argue that greater competition can create a more resilient financial system, with a wider range of companies managing the essential plumbing. On the other hand, the ability of new entrants to undercut banks significantly could make the latter “potentially more vulnerable to losses”. The accompanying reduction in “retained earnings as a source of internal capital”, the report argues, “could have an impact on financial sector resilience and risk-taking”.

The reader is largely left to decide which of these two scenarios is most likely to play out. But while the report is unambiguously positive about the impact of fintech start-ups, whether they remain standalone entities or join existing banks to create complementary offerings, the authors’ conclusions about Big Tech are far more nuanced. Whereas previous analyses have suggested that the financial stability implications of fintech are either benign or small, the FSB believes that “this could change quickly with deeper involvement of the large technology providers”.

One possible route to financial instability identified in the report is that banks may loosen lending standards unwisely. I would assess that risk as low: banks have been there before, in recent memory, and are not keen to go back. But the threat to profitability is real, particularly if loss-leader pricing strategies are adopted, as the FSB believes is possible. They refer explicitly to the risk of cross-subsidisation. Banks in Europe are not much in favour with investors today, trading well below book value in most cases, and a significant loss of market share in payment services would threaten their viability further.

In response, the FSB – unsurprisingly – argues for vigilance on the part of banking supervisors (when, one wonders, have supervisors been told that now is the time to turn a blind eye?). But I wonder if the answer lies with banking supervisors at all. Had a broader range of authorities contributed to the work, they might have more pertinently recommended vigilance from conduct and competition regulators, too. Following the FSB’s own logic, it is in these authorities’ territories where the biggest risks are most likely to emerge.

© Project Syndicate 2019

The AI governance challenge

On the sidelines of the last World Economic Forum Annual Meeting in Davos, Singapore’s Minister of Communications and Information quietly announced the launch of the world’s first national framework for governing AI. While the global media has glossed over this announcement, its significance reaches well beyond the borders of Singapore or the Swiss town where it was made. It is an example that the rest of the world should urgently follow and build upon.

Over the last few years, Singapore’s government, through the state-led AI Singapore initiative, has been working to position the country to become the world’s leader in the AI sector. And it is making solid progress: Singapore, along with Shanghai and Dubai, attracted the most AI-related investment in the world last year. According to one recent estimate, AI investment should enable Singapore to double the size of its economy in 13 years.
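As a back-of-the-envelope check on that doubling estimate, compound-growth arithmetic puts the implied annual rate at roughly 5.5 percent (the rule of 72 gives the same rough answer: 72 / 13 ≈ 5.5):

```python
# An economy that doubles in 13 years grows at 2**(1/13) - 1 per year
implied_rate = 2 ** (1 / 13) - 1
print(f"{implied_rate:.1%}")  # 5.5%
```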

Managing the risks
Of course, AI’s impact extends globally. According to a recent McKinsey report, Notes from the AI Frontier, AI could add as much as 16 percent to global GDP by 2030. Given this potential, the competition for AI investment and innovation is heating up, with the US and China predictably leading the way. Yet, until now, no government or supranational body has sought to develop the governance mechanisms needed to maximise AI’s potential and manage its risks.

Strengthening AI-related governance is in many ways as important as addressing failures in corporate or political governance

This is not because governments consider AI governance trivial, but because doing so requires policymakers and corporations to open a Pandora’s box of questions. Consider AI’s social impact, which is much more difficult to quantify – and mitigate, when needed – than its economic effects. Of course, AI applications in sectors like healthcare can yield major social benefits. However, the potential for the mishandling or manipulation of data collected by governments and companies to enable these applications creates risks far greater than those associated with past data-privacy scandals – and reputational risks that governments and corporations have not internalised.

Another McKinsey report notes: “Realising AI’s potential to improve social welfare will not happen organically.” Success will require “structural interventions from policymakers combined with a greater commitment from industry participants”. As much as governments and policymakers may want to delay such action, the risks of doing so – including to their own reputation – must not be underestimated.

In fact, at a time when many countries face a crisis of trust and confidence in government, strengthening AI-related governance is in many ways as important as addressing failures in corporate or political governance. After all, as Google CEO Sundar Pichai put it in 2018: “AI is one of the most important things humanity is working on. It is more profound than, I don’t know, electricity or fire.”

The European Commission seems to be among the few actors that recognise this, having issued its draft ethics guidelines for trustworthy AI at the end of last year. Whereas Singapore’s guidelines are focused on building consumer confidence and ensuring compliance with data-treatment standards, the European model aspires to shape the creation of human-centric AI with an ethical purpose.

Accepting responsibility
Yet neither Singapore’s AI governance framework nor the EU’s preliminary guidelines address one of the most fundamental questions about AI governance: where does ownership of, and responsibility for, the AI sector and its related technologies actually lie? The answer will go a long way towards determining whether AI delivers enormous social progress or a Kafkaesque system of data appropriation and manipulation.

The EU guidelines promise that “a mechanism will be put in place that enables all stakeholders to formally endorse and sign up to the guidelines on a voluntary basis”. Singapore’s framework, which also remains voluntary, does not address the issue at all, though the recommendations are clearly aimed at the corporate sector.

If AI is to deliver social progress, responsibility for its governance will need to be shared between the public and private sectors. To this end, corporations developing or investing in AI applications must develop strong linkages with their ultimate users, and governments must make explicit the extent to which they are committed to protecting citizens from potentially damaging technologies. Indeed, a system of shared responsibility for AI will amount to a litmus test for the broader ‘stakeholder capitalism’ model under discussion today.

‘Public versus private’ is not the only tension with which we must grapple. As political economist Francis Fukuyama pointed out: “As modern technology unfolds, it shapes national economies in a coherent fashion, interlocking them in a vast global economy.” At a time when technology and data are flowing freely across borders, the power of national policies to manage AI may be limited.

As attempts at internet governance have shown, creating a supranational entity to govern AI will be challenging, owing to conflicting political imperatives. In 1998, the US-based Internet Corporation for Assigned Names and Numbers was established to protect the internet as a public good by ensuring, through database maintenance, the stability and security of the network’s operation. Yet approximately half of the world’s internet users still experience online censorship.

The sky-high stakes of AI will compound the challenge of establishing a supranational entity, as leaders will need to address similar – and potentially even thornier – political issues.

Masayoshi Son, CEO of Japanese multinational conglomerate SoftBank and enthusiastic investor in AI, recently said his company seeks “to develop affectionate robots that can make people smile”. To achieve that goal, governments and the private sector need to conceive robust collaborative models to govern critical AI today. The outcome of this effort will determine whether humankind will prevail in creating AI technologies that benefit us without destroying us.

© Project Syndicate 2019

Why the Airbus A380 failed to take off

In 2005, when Virgin Atlantic placed an order for six Airbus A380 jets, Richard Branson joked that the airliners, which featured double beds and a casino, would offer customers “two ways to get lucky”, according to The New York Times. Airbus had placed its bet: as more passengers took to the skies each day, it predicted that airlines would demand larger jets as airports and routes became increasingly congested.

The aerospace manufacturer envisioned that the future of flying would be based on the ‘hub and spoke’ model, with passengers travelling between major airports and using connecting flights to reach their final destination.

Designed to compete with Boeing’s models, the A380 was never able to match the success of the 747

Fast-forward to 2019, and Airbus’ forecast has unravelled. Last year, Virgin cancelled its long-standing order of six A380s. Then, in February 2019, Emirates – the most loyal customer of the superjumbo programme – cut its order from 53 to 14 aircraft, while Qantas cancelled the delivery of its last eight A380s. This was the final nail in the coffin for the project: Airbus reluctantly announced the “painful” decision to permanently halt the A380 production line in 2021, following years of speculation regarding the model’s future. The plane that many thought would revolutionise aviation lasted just 16 years.

Efficiency is key
Meanwhile, on the other side of the Atlantic, Boeing made a bet of its own. In contrast to Airbus, the Seattle-based manufacturer envisioned customers flying point to point between smaller airports. As the A380 entered commercial service in 2007 – just as the global financial crisis was starting to hit the pockets of operators and customers – Boeing unveiled the smaller, sleeker, more efficient 787 Dreamliner, a jet that could fly the same distance as Airbus’ jumbo on two fewer engines.

As margins were stretched, airlines struggled to afford the risks associated with jumbo jets. An aircraft that was smaller and quicker to fill was a far more appealing prospect. “The A380 is an aircraft that frightens airline CFOs; the risk of failing to sell so many seats is just too high,” a senior aerospace industry source told Reuters in February. While Airbus struggled to find customers for its flagship craft, orders for the 787 piled up.

“The future of aviation is being driven not by the big, established global players, but by small, low-cost start-up operators,” Martin Pugsley, Head of Financial Services at law firm DWF, told The New Economy. “These companies look to scale up quickly by offering lots of flights connecting lots of locations. To do this, they need smaller, lower-cost, fuel-efficient, single-aisle aircraft that have a quick turnaround time. The fastest-growing markets for aviation services are in the developing world, and most of these new players are following the low-cost model too.”

Airbus relied on Emirates to keep the A380 programme running. Prior to cutting its order, the Dubai-based airline had ordered a total of 162 jets (with 109 currently in operation) – nearly seven times more than the second-largest operator, Singapore Airlines (with 24). Only three other airlines have more than 10 in their fleet, while just one Chinese carrier – China Southern – operates the model. Not one American airline has made an order.

In total, there were 290 firm orders for the jet, of which 234 have been delivered – well short of Airbus’ target of 700. Analysts believe the company has recouped only a small fraction of the estimated $20bn spent on research and development. The jet, first dreamed up in 1988 and blighted by years of delays as Airbus attempted to get it into the skies, never really took off.

Wrong place, wrong time
Many have asserted that the A380 programme was doomed to fail well before production began. Aviation analyst Richard Aboulafia slated the aircraft, calling it “simply the dumbest programme of modern times” in a 2019 interview with travel blog the Points Guy. Designed to compete with Boeing’s models, the A380 was never able to match the success of Boeing’s own jumbo, the 747 – an aeroplane that came to be known as the ‘Queen of the Skies’. Boeing sold roughly 1,500 of the aircraft, with its ‘hump’ becoming the most recognisable silhouette in the sky.

However, regulations have changed significantly since the jumbo jet’s introduction more than 50 years ago. For decades, rules now formalised as the Extended-Range Twin-Engine Operational Performance Standards (ETOPS) decreed that two-engine aircraft had to stay within 60 minutes’ flying time of a diversion airport in case they encountered engine issues. As technology progressed and aircraft manufacturers improved their safety records, these regulations were gradually relaxed. Today, Boeing’s 777 and 787 models are able to fly as far as five hours from the nearest airport, leaving them free to operate on effectively any route in the world and opening up new opportunities to airlines – opportunities that have crowded out the A380.
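The arithmetic behind these rules is simple enough to sketch. The snippet below converts an ETOPS rating into an approximate diversion radius; the cruise speed and ratings used are illustrative assumptions, not certified performance figures.

```python
# Sketch: how an ETOPS rating translates into a diversion radius.
# The one-engine cruise speed (400 kts) is an assumed, illustrative figure.

def diversion_radius_nm(etops_minutes: int, one_engine_speed_kts: float) -> float:
    """Maximum allowed distance (nautical miles) from a diversion airport,
    given an ETOPS rating and an assumed one-engine-inoperative cruise speed."""
    return one_engine_speed_kts * (etops_minutes / 60)

# The original 60-minute rule vs a modern 330-minute (~5.5-hour) approval
early_rule = diversion_radius_nm(60, 400)    # stays close to airports
modern_twin = diversion_radius_nm(330, 400)  # opens up remote oceanic routes

print(f"60-minute rule: {early_rule:.0f} nm from an airport")
print(f"ETOPS-330:      {modern_twin:.0f} nm from an airport")
```

The widening gap between those two radii is exactly what let twinjets like the 777 and 787 fly routes once reserved for four-engined aircraft.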

The A380 had its first test flight in 2005 – two years before the new ETOPS directive was issued. With the aircraft designed to take advantage of the previous rules, the regulatory change was a huge blow. “What we are seeing here is the end of the large, four-engined aircraft,” Airbus CEO Tom Enders said in February. “There has been speculation that we were 10 years too early; I think it is clear that we were 10 years too late.” The 747 was a roaring success because it entered service during a period of strict regulation, making it the only option. The A380, by contrast, was overwhelmed by the competition that the relaxed rules unleashed, and ended up a financial flop.

Oversized expectations
The spaciousness of the A380 made it a favourite with passengers, while its scale made it a favourite among plane spotters. A dedicated website was set up to help passengers choose a route served by the jet, and in February 2019, The Guardian’s transport correspondent Gwyn Topham said “it felt a miracle” that this enormous plane could fly.

But in spite of this reverence, business travellers – integral to the profit margins of legacy airlines and flag carriers on long-haul routes – demonstrated an overwhelming preference for flight frequency and flexible schedules over aircraft size. Airlines found they could fly two 777s on a typical long-haul route at a lower total cost than flying one A380, making the aircraft economically uncompetitive on all but a handful of routes worldwide.
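A rough, back-of-the-envelope sketch shows why the per-seat sums favoured smaller twinjets. Every figure below (trip costs, seat counts, load factors) is a hypothetical assumption for illustration only; real airline economics involve many more variables.

```python
# Hypothetical comparison of cost per sold seat: one A380 vs one 777
# on the same long-haul route. All numbers are invented for illustration.

def cost_per_filled_seat(trip_cost: float, seats: int, load_factor: float) -> float:
    """Trip cost divided by the number of seats actually sold."""
    return trip_cost / (seats * load_factor)

a380 = cost_per_filled_seat(trip_cost=1_000_000, seats=500, load_factor=0.75)
b777 = cost_per_filled_seat(trip_cost=550_000, seats=350, load_factor=0.85)

print(f"A380 cost per filled seat: ${a380:,.0f}")
print(f"777 cost per filled seat:  ${b777:,.0f}")
```

With these assumed numbers the smaller jet comes out cheaper per sold seat, and flying two of them also doubles the schedule frequency that business travellers prize.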

“When Airbus unveiled the prototype in 2005, its main selling point was its sheer size. Ironically, the aircraft’s size is what turned potential airline customers off,” said Pugsley. “Each unit is naturally far more expensive than smaller rival models, which meant that Airbus was operating in a small market from the very beginning.”

The limited appeal was further compounded by a lack of airport facilities capable of handling the plane. Airbus had out-engineered the airports: gates had to be refitted to accommodate a plane of such magnitude; runways had to be strengthened to cope with its weight; and even terminals had to be adjusted to ease potential passenger congestion issues stemming from the A380’s vast capacity. “When you deal with new technologies where the industry has not enough previous experience, it’s easy to underestimate the challenges during the design,” Paolo Colombo, Global Industry Director at software developer ANSYS, told The New Economy.

As one innovation meets its demise, another is born. The Boeing 777-9 is due to make its first flight later this year and, though it has just a single deck, its expected capacity of around 400 passengers near enough matches that of the A380. Further, despite having half as many engines, it can fly just as far. Equipped with folding wing tips, the jet can operate in and out of all of the world’s major airports.

With the aircraft already outstripping the A380 in terms of orders a year before its formal launch, it looks like the new ‘Triple Seven’ could be the future of aviation. But, then again, the same was once said of the A380.

The attention economy commodifies human engagement, but it’s difficult to monetise

Brands don’t want your dollars anymore – they want your eyeballs. The ominously named concept of ‘eyeball marketing’, where a business’ value is derived from the amount of attention it garners, rather than its revenue, has become the modus operandi for today’s digitally focused brands. These companies are eschewing the notion that paying customers are loyal customers, and are instead looking to less tangible metrics.

Loyalty is the ultimate attention filter as it prevents consumers from even looking for other information, products or services

This move away from real currency has proved problematic, particularly for digital media companies with complex business models and no physical product revenue to rely on. As online advertising dollars are increasingly captured by social media sites, the so-called ‘attention economy’ has left firms with all eyes on their platforms, but no cash in the bank.

New rules
The concept of the attention economy originates with Herbert A Simon, whose 1971 essay Designing Organizations for an Information-Rich World argued that the increasingly information-rich age in which we live has created an overwhelming catalogue of data that humans simply cannot process or absorb.

Attention, then, is used as a differentiating factor in the consumption of this information; a tool to prioritise what is deemed interesting, relevant or important.

In a commercial context, the theory is predicated on an understanding that consumers devote their attention to brands that appeal to them the most. According to Paul Armstrong, founder of technology consultancy HERE/FORTH, “eyeballs and clicks” have become a new currency – except, rather than consumers spending money with a brand, they’re spending time. This mark of success is far more immaterial and challenging to measure.

Metrics such as bounce rates, clicks and time spent viewing a page are used as measures of consumer attention, but the true value of these figures is questionable. While brands that sell physical products may be able to identify a clear correlation between attention and sales, for platforms that disseminate their content freely, the attention economy creates a far more challenging environment.
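To make the discussion concrete, here is a minimal sketch of how two of these metrics – bounce rate and average time per page – might be computed from raw session data. The session records are invented for illustration.

```python
# Minimal sketch: bounce rate and average time per page from session data.
# The sessions below are invented example records.

sessions = [
    {"pages_viewed": 1, "seconds_on_site": 12},   # a bounce (single page)
    {"pages_viewed": 4, "seconds_on_site": 310},
    {"pages_viewed": 1, "seconds_on_site": 8},    # a bounce
    {"pages_viewed": 2, "seconds_on_site": 95},
]

# Share of visits that left after viewing a single page
bounces = sum(1 for s in sessions if s["pages_viewed"] == 1)
bounce_rate = bounces / len(sessions)

# Total attention spent, spread across every page viewed
total_pages = sum(s["pages_viewed"] for s in sessions)
avg_time_per_page = sum(s["seconds_on_site"] for s in sessions) / total_pages

print(f"Bounce rate: {bounce_rate:.0%}")
print(f"Average time per page: {avg_time_per_page:.1f}s")
```

The ease of computing such figures is part of their appeal – and part of the problem, since neither number says anything about whether the attention translates into revenue.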

Attention seekers
Digital news sites are particularly vulnerable in the attention economy. Their entire business model depends on building an ever-greater readership – one that pays nothing for what it reads. These platforms have traditionally relied on advertising to support them, but it has become increasingly clear in recent months that this is not enough to sustain many businesses within the sector.

This was proven earlier this year when more than 2,000 employees at publishers including Vice Media, HuffPost and BuzzFeed News were made redundant in the space of several weeks. Around 9.5 percent of HuffPost’s workforce was laid off in late January after its parent company, Verizon Media, slashed the value of two recent acquisitions by $4.6bn, admitting competition for digital advertising was leading to shortfalls in revenue and profit. BuzzFeed, meanwhile, cut 220 jobs in a bid to boost profitability, following pressure from its venture capital backers.

The redundancies are symptomatic of shifts in the attention economy, where once-safe digital advertising has been upended by tech giants like Facebook and Google, and companies are now struggling to find new sources of revenue. “Digital publishers are being beaten at all turns by big players,” Armstrong told The New Economy. “Few seem to be proactively looking for new models and it’s this lack of innovation that’s really hurting their businesses.”

The rise of the internet has also created an economy where products – in this case, the news – cost virtually nothing to reproduce, meaning suppliers face issues in adding value. Newness and findability are not enough to sustain a business model, so publishers must find ways to position themselves as a trusted source or produce personalised content to capture the attention and loyalty of their customers.

“Loyalty has never been harder to achieve thanks to the choice available and the transient nature of brands,” said Armstrong. Yet, it’s vital for brands to achieve this if they are to survive in the attention economy. Loyalty is the ultimate attention filter as it prevents consumers from even looking for other information, products or services. Some have hit the nail on the head: exclusive clothing brand Supreme, for example, creates loyalty through scarce product availability and a strong online presence, generating a “cultural phenomenon and legions of obsessive fans”, said Armstrong. Others, such as short-form video-hosting service Vine, have seen their customers running for the door in favour of more relevant or exciting options.

Innovate to survive
The only way to beat the metrics is to “focus on core strengths – remind people why they come to that brand and what that brand stands for”, Armstrong told The New Economy. But establishing a loyal audience is only the first step – the real challenge is monetising the attention of that audience.

Digital advertising alone is not sufficient, so some brands have opted to put up paywalls, meaning readers must pay a fee before they can access certain products or content, which serves to lock in attention to that brand. It’s proved a successful strategy for some – for example, The New York Times’ 2.5 million digital subscribers accounted for almost two thirds of its $417m revenue in Q3 2018. However, there is a caveat, explained Armstrong: “Paywalls depend on an existing audience and strong traffic. If you have the former, go forth and test whether it will pay for content. But, if you just have traffic, feel free to test a paywall, but be warned that it’s unlikely to become a huge source of revenue if interest is transitory.”

Others have established systems where users enter their personal information to access content, such as inputting an email address to download a report. Brands can use this information in future email marketing, increasing their consumer database and the likelihood of future sales. Similarly, though, “selling data relies on a core understanding of who your audience is and what they are willing to accept [in terms of subsequent marketing],” said Armstrong. “Too often data is abused, or brands apply too-broad strokes”, which has a detrimental effect on loyalty.

The fundamental issue with the attention economy, though, is that attention as a metric is valorised entirely by brands. “Attention is a currency in so much that brands like it and want more of it,” said Armstrong. “The average consumer doesn’t think about giving their attention to a brand’s content.” If consumers aren’t making a conscious decision about where they direct their attention, brands must undertake a significant amount of guesswork to establish how to influence them.

“Attention is a fool’s errand as a [key performance indicator],” said Armstrong. Rather than obsessing over the amount of time consumers spend viewing their content, brands should focus on creating pioneering products and experiences that have inherent value and are relevant to their target audience; those will ultimately draw long-lasting attention. Placing too much importance on where consumers spend their attention locks content creators into a never-ending feedback loop and slows innovation to a drip. As Armstrong concluded: “The best brand strategies are agile and show willingness to try new things, without necessarily being sure of the [return on investment].”

Fatal dam collapse in Brazil casts doubt over the mining industry

Before 2019, few had heard of Brumadinho, a small municipality in Brazil. Located in the state of Minas Gerais (which means ‘general mines’), the town is largely dependent on mining for its livelihood. On January 25, tragedy put Brumadinho on the map.

The local mining dam suddenly collapsed and a mudslide engulfed the area, bursting through the mine’s offices before descending on the community below. The most recent figures from the Civil Police of Minas Gerais, released on May 5, put the death toll at 237, with 33 people still missing, making it the deadliest incident in the history of Brazil’s mining industry.

The mining industry has tended to treat waste management as an afterthought and is now suffering the consequences

The owner of the complex, the mining company Vale, now faces public condemnation. Since the incident, 11 employees and two contractors have been arrested. This is not the first time Vale has been implicated in a dam collapse. A similar accident occurred in 2015, when a dam owned by Vale and BHP burst in Mariana (also located in Minas Gerais), killing 19 people and flooding the local area.

In both instances, the dams were tailings dams, which store the waste produced during the mining process. One particular type of tailings dam – the upstream dam – has been known for more than 10 years to carry a higher risk of collapse than any other. Upstream dams use the mine waste itself to form part of their structure, which can make them less stable over time. There are 88 dams of this kind in Brazil. Both the Brumadinho dam and the Mariana dam were upstream dams.

Erica Schoenberger, Professor of Environmental Health and Engineering at Johns Hopkins University, told The New Economy: “Upstream dams, where the sequential stages of the dam are built inwards, are cheap but not so good at retaining water and are particularly susceptible to seismic shocks.”

In spite of these safety concerns, dams built using the upstream method constitute the vast majority of tailings dams. There are approximately 3,500 tailings dams worldwide and they are among the largest man-made structures on Earth. Yet few people are aware of them. As containers for a toxic sludge of water and minerals, tailings dams exist to keep mining waste out of sight and out of mind. The invisibility of these dams within public consciousness reflects their place in the mining industry, which has tended to treat waste management as an afterthought and is now suffering the consequences.

Behind the Vale
Vale is the world’s largest iron ore exporter. Given the significant role that natural resources play in the Brazilian economy, the organisation has enjoyed huge success over the years. This can be attributed in part to the commodities boom of the mid-2000s, which was spurred on by high resource demand from China. As a major job provider in Brazil, the mining industry has also tended to enjoy support from politicians. For instance, President Jair Bolsonaro campaigned on promises to increase mining in the Amazon rainforest. The sector has been incentivised to maximise production, seemingly at the expense of safety regulations.

Across many industries, the traditional approach to waste is to deal with it as cheaply as possible. For example, water dams are functionally very similar structures to tailings dams but their security is taken more seriously at the early stages of design since water is a valuable asset; there is profit to be made from storing water effectively.

By comparison, there is no economic incentive for storing mine waste – or rather, there is no clear economic incentive until disaster strikes. After the crisis, Vale’s shares fell 24 percent. The company recorded a $1.6bn loss in the first quarter of 2019. Union Investment is among the investors who have since walked away from the company.

Over the last decade, tailings dam collapses have become more common, and researchers at World Mine Tailings Failures predict that 19 very serious failures will occur between 2018 and 2027. As a result, investors are now realising the extent of the risk across the mining industry. In April 2019, the Church of England Pensions Board and Council on Ethics of the Swedish National Pension Funds called upon almost 700 listed mining companies to disclose information about their management of tailings dam facilities. They posited that investors have a right to better assess the risk of their holdings in mining companies.

Vale maintains that it has always abided by global regulatory standards, a claim that is being assessed by the ongoing criminal investigation. However, if true, it would suggest that the current global standards do not do enough to reduce the risk of collapses.

Searching for alternatives
On the one hand, the cause of the recent dam collapses stems from poor design and engineering. Tailings dams are outdated structures that have seen little innovation since they were created approximately 100 years ago. The Ecologist argues that this lack of innovation can be attributed to cutbacks to research and development departments in the downturn of the mining supercycle. Now many are calling for the use of alternatives to tailings dams.

One such alternative is the dry stack method. This involves dewatering the mine waste and compacting it for storage. This not only means that less water is used in the mining process but also makes the structure much more stable. However, it is not currently considered to be financially viable for larger mines and only works in certain environments. Its use was rejected for a new mine in British Columbia because it is less feasible in wet, mountainous regions.

A particularly attractive prospect looks to minimise the amount of waste produced by mining or even eliminate it altogether. Dr Bernhard Dold, Professor of Applied Geochemistry at Luleå University of Technology, believes such techniques could prove instrumental in making the mining industry more sustainable. “We are at the starting point of this transition of the mining process,” said Dold. “The final goal is to eliminate the waste (tailings and waste rocks) and hence the risk of dam failures and pollution.”

According to Dold, one approach is to separate the more dangerous, reactive materials in the tailings from non-reactive silicates like sand, which could be repurposed for construction and other industrial processes. An even more exciting prospect, however, would be to find a way of turning tailings into a georesource. If this became commonplace within the industry, then the mining sector would naturally invest in the safety of tailings dams, as these would now store an asset as opposed to a waste product.

Taking responsibility
It is unlikely that the right engineering solution will present itself immediately. It may be that what the industry requires first is a fundamental shift in mindset.

“The planning and design of tailings storage facilities (TSFs) need to be integrated into the overall plan for the mine,” said Schoenberger. “This may seem a no-brainer, but it has not been standard practice in the industry. Expertise in TSF engineering needs to be nourished within the firm so that expert advice is consistently part of the process. Given that tailings dam failures are the largest cause of mining disasters, mine management needs to fully embrace the idea that TSF design and construction must attain the highest standards regardless of the cost, if for no other reason than that a TSF failure will cost the company more.”

According to Schoenberger, the tailings dam failures are more the result of insufficient regulations and poor governance than they are a question of technology. In support of this argument, The Wall Street Journal reported that Brumadinho miners had warned managers that the dam was going to collapse, yet nothing was done. Schoenberger argues for greater responsibility at the decision-making level, both within companies and from governments.

“Government regulation of the industry – and robust enforcement of those regulations – is an absolute necessity if we want to reduce the number and severity of mining disasters,” said Schoenberger. “Regulations on paper must be translated into practices on the ground. This means also that governments have to embrace the same principles of best practice and commit to enforcing them.”

Historically, the economics of disaster risk has not been factored into the overall cost of building a mine. The Brumadinho disaster shows just how detrimental this shortcoming can be, both to mining companies and, more importantly, to the communities they operate in. As shareholders become more aware of the risks posed by tailings dams and as governments place pressure on the industry to act, mining companies may be forced to find new, safer ways of managing their waste.