Facial recognition is rapidly progressing, but at what cost to privacy?

The advance of facial recognition technology is remarkable and chilling in equal measure. According to its own data, Facebook’s facial recognition software is able to correctly identify a person’s face 98 percent of the time. Under the right conditions, some systems now boast 100 percent accuracy.

The technology’s applications already read like science fiction; for example, the Chinese Government currently uses facial recognition software to catch jaywalkers, seek out seasoned criminals and even identify toilet roll thieves in public washrooms.

Commercial applications have also proliferated, with retailers adopting software that can identify customers and track their shopping habits through the use of CCTV.

In Russia, an app called FindFace can identify strangers with 70 percent accuracy by analysing data from a social network called VKontakte. The most unnerving application of all, however, is the recent discovery by Stanford academics that facial recognition software can identify men’s sexual orientation with 81 percent accuracy.

The face of it
Facial recognition technology traditionally works by identifying the ‘nodal points’ of one’s face, with each point acting as a coordinate for distinguishing the position of a person’s features.

Once all nodal points have been identified (human faces typically have in the region of 80), a machine will construct an image of the facial structure, known as a faceprint.

When constructing the faceprint, the software pays particular attention to landmark measurements such as the distance between the eyes, the width of the nose and the shape of cheekbones. This information is then stored on a database so the same face can be identified and verified at a later date.
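
To make the idea concrete, the sketch below builds a crude faceprint from pairwise landmark distances. It is an illustration only: the five named points and their coordinates are invented, and real systems use around 80 learned, normalised landmarks.

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical nodal points: (x, y) pixel coordinates from a face detector
landmarks = {
    "left_eye": (112, 140),
    "right_eye": (188, 142),
    "nose_tip": (150, 190),
    "left_cheekbone": (95, 185),
    "right_cheekbone": (205, 187),
}

# The 'faceprint': every pairwise landmark distance (e.g. the distance
# between the eyes), stored so the same face can be matched later.
faceprint = {
    (a, b): dist(landmarks[a], landmarks[b])
    for a, b in combinations(landmarks, 2)
}

for pair, d in faceprint.items():
    print(pair, round(d, 1))
```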

While this face mapping technique forms the basis of most facial recognition software, it often runs into problems, with many aspects of a person’s face being changeable. For instance, face mapping can be thrown by different camera angles, lighting, ageing or emotions.

98%: success rate of Facebook’s facial recognition software
81%: success rate when establishing men’s sexual orientation
55%: success rate when identifying someone in a scarf, hat and sunglasses

Many such issues have started to be ironed out with the emergence of sophisticated 3D sensors, such as those developed by Apple for the new iPhone X. These sensors enable facial recognition software to gauge depth as well as the distance between nodal points. By constructing a 3D image, the software can more accurately map important features such as the contour of eye sockets, the nose and jaw.

While 3D sensors are entering the mainstream, the accuracy of traditional facial recognition technologies continues to be watered down by the usual issues of tricky camera angles, lighting and photo resolution.

Therefore, when put to use in public places – through surveillance cameras, for example – the technology will continue to make mistakes. However, while it remains imperfect, its accuracy is steadily improving.

Code camouflage
But as facial recognition software continues to develop, so do people’s attempts to thwart it. Perhaps the most common effort to circumvent the technology is the scarf, hat and sunglasses approach, which simply aims to prevent the software from taking an accurate reading of one’s nodal points.

The effectiveness of this technique has been thrown into some doubt, however, with new research from academics at the University of Cambridge suggesting facial recognition technology can still recognise individuals wearing the full triad of accessories 55 percent of the time.

A more promising technique is to don clothes that are covered with images of faces. This works by confusing the machine, crowding it with hundreds of faces that distract from your own.

Another approach, developed by artist Adam Harvey, is to use hair and makeup techniques to trick the software by throwing off its symmetry trackers. There are plenty more examples of such experiments, from light-reflecting masks to infrared ‘privacy visor’ glasses – all of which function with varying degrees of success.

Though the true potential of facial recognition technology is still unknown, our anonymity does appear to be shrinking. Thus, we can expect the market for anti-surveillance products to expand in equal measure, matching developments in facial recognition blow-for-blow in the ultimate battle to save face.

The significant danger drug-resistant bacteria poses to human health

In May this year, G20 leaders committed to working together to address one of the world’s most pressing and perplexing security threats: antimicrobial resistance (AMR) – a fierce and evolving adversary against which conventional therapeutic weapons are of no use.

The threat is straightforward: bacteria and other microbes are becoming resistant to available medicines faster than new medicines are being developed. Every year, drug-resistant microbes kill about 700,000 people worldwide – more than three times the annual death toll from armed conflicts.

Stemming resistance
In 2014, a special panel commissioned by the British Government predicted that, by 2050, as many as 10 million more people will die from drug-resistant microbes every year. AMR now poses a clear and present danger to every person on the planet. Unless we confront it head-on, we could return to a world in which it is common for people to die from basic infections.

Beyond the cost in human lives, AMR could devastate the world’s economies. In Europe alone, the annual healthcare costs and productivity losses associated with AMR already total an estimated €1.5bn ($1.7bn).

The G20, for its part, has taken an important step forward. Each G20 country has promised to start implementing national plans to fight AMR in earnest, and to do more to promote new treatments against resistant microbes. To that end, G20 leaders are calling for an international R&D collaboration hub to “maximise the impact of existing and new antimicrobial basic and clinical research initiatives”. They have also promised to explore how market incentives can be used to encourage new research.

Beyond the G20, innovative public-private partnerships are emerging to deliver new treatments against drug-resistant killers such as tuberculosis. Some governments have already started filling critical roles in the global response to AMR, by collecting data on the spread of resistant strains of E. coli, salmonella and other pathogens.

Now it is up to political leaders to follow through on their commitments. New treatments for multidrug-resistant microbes aren’t expected to generate much return on investment, and so it is incumbent upon governments to make research and development in this field more attractive to private companies and, in order to stem the development of resistance, to ensure that new drugs are not overused.

When traditional market mechanisms fail, instruments such as transferable market exclusivities can help by allowing drug makers to transfer an antimicrobial medicine’s intellectual-property benefits to another drug.

A collaborative solution
In addition to policy innovations, more well-funded collaborations between governments and private institutions are needed. When private institutions enter into such collaborative efforts, they must be prepared to work outside of traditional boundaries, accept the challenges associated with complex public projects, and be willing to bring their skills, ideas and experience to the table.

In responding to AMR, we can learn some valuable lessons from other global public health efforts. Malaria, which is caused by a parasite transmitted by more than 100 species of Anopheles mosquito, is a leading cause of death in many parts of the world. But now that many governments and private institutions have made fighting the disease a high priority, its death toll has been halved over the last 15 years.

Still, the parasite that causes malaria is developing resistance to artemisinin, which forms the basis for the most effective treatment: artemisinin-based combination therapies. Artemisinin resistance first emerged in Cambodia just over a decade ago, and has since spread through Thailand, Laos, Vietnam, Myanmar and China. It is now approaching India, and experts are sure that it will eventually reach Africa.

According to one recent study, if artemisinin resistance is allowed to spread, malaria will kill at least 116,000 more people every year.

Unless new treatments become available, the tremendous progress that the world has made against malaria will have been tragically short-lived. Fortunately, those engaged in the global response to malaria recognise that just as parasites are adapting, so must we. New efforts are underway to identify and minimise the spread of resistant malaria, while simultaneously developing new artemisinin-free treatments.

For example, the regional artemisinin resistance initiative is working to halt the spread of resistant malaria in the Mekong Delta region by monitoring and sharing drug resistance data and promoting proper use of antimalarial treatments. So far, the initiative has secured $110m (€88.2m) through the Global Fund to Fight AIDS, Tuberculosis and Malaria, which is financed primarily by governments.

Moreover, Novartis and the Medicines for Malaria Venture (with support from the Bill & Melinda Gates Foundation) are starting a new clinical trial next month to test KAF156, a molecule that could form the basis of a new treatment against artemisinin-resistant malaria strains.

At the 2016 World Economic Forum in Davos, 100 companies (including Novartis) and industry associations signed the Davos Declaration on AMR. Under that commitment, companies and governments promised to work together to slow the development of resistance by increasing investment in R&D and making high-quality antibiotics available to patients who need them.

The G20’s national action plans will, we hope, help us to meet these commitments. But political leaders, too, must marshal the will to turn words into action. Resources to monitor resistance, stronger incentives for R&D, and innovative financing mechanisms are urgently needed to ensure widespread access to accurate diagnoses and quality medicines.

The world cannot afford to lose the fight against AMR. Winning it will require large-scale public-private cooperation, underpinned by political leadership that makes global public health a top priority.

© Project Syndicate 2017

Online streaming boom signals the end of traditional broadcasting

From the way we listen to music and stock our closets to how we purchase everyday goods, it’s safe to say our habits are changing. Perhaps one of the most prominent changes, though, is the way we digest content in what industry gurus are calling the ‘post-TV era’. Traditional television is making way for on-demand streaming, with the former increasingly falling victim to ‘cord cutting’ – the phrase used to define the trend of consumers cancelling their multichannel cable subscriptions in order to fill their couch time with online content instead.

With this change in consumer habits making anyone ill-versed in the latest season of House of Cards feel like an alien, the rewards of producing the next big hit have never been greater. Thus, the entertainment industry has become a battleground: pioneers like Netflix now find themselves crowded by traditional content producers like Time Warner and Disney. Internet giants Amazon and Facebook are also entering the fray, and it seems inevitable that more streaming services are on the way.

This exponential growth can be attributed to a change in the online environment. “The video ecosystem is actively transforming,” said Christopher Vollmer, Global Advisory Leader for Entertainment and Media at PwC. “Pay-TV subscriptions are declining, viewers (other than those older than 65) are watching less linear TV, and younger users are spending a greater share of their time with digital media, including video streaming services.”

History lessons
The rapid surge in online streaming across different formats and connected devices presents a wide range of opportunities for the entertainment industry. In fact, according to Research and Markets, the trend is expected to transform into a $70bn market by 2021, more than doubling the $30.2bn registered in 2016. Recent announcements from the industry’s leading players suggest innovation is the catalyst, with original content currently the weapon of choice for conquering audiences around the world.

Meanwhile, the success of video streaming casts doubt on its predecessor’s future: although history shows radio survived the introduction of cinema and, subsequently, television, people are less optimistic about the future of traditional television broadcasting. As seen with the demise of Blockbuster, a failure to adapt to the latest technological landscape will likely condemn a company to the annals of history.

The good news, however, is this transformation comes with a variety of financial support and benefits. “There’s a lot of engagement and revenue – from advertising, subscriptions and transactions – associated with video consumption,” Vollmer said. “Consequently, many media – and increasingly, technology – companies are seeing new video content, experiences and services as critical to their overall growth.”

This can be seen most clearly in the transformation of Netflix, which first surfaced as a DVD delivery service in 1997 before introducing its on-demand video platform 10 years later. Today, the company operates in 190 countries, with a total of 104 million members consuming 125 million hours of content each day. In the second quarter of 2017, Netflix posted $2.79bn in revenue, beating forecasts to grow 32 percent year-on-year. Analysts now expect the company to outperform the market.

Stealing the show
But for Netflix, success has gone beyond numbers: since 2013, Netflix has reduced its reliance on licences by opting to create more original content, differentiating itself from competitors and lowering business risk. This move originally came in the form of award-winning drama House of Cards, which captivated millions before becoming the first online-only television series to receive major award nominations. Later, Netflix would go a step further, purchasing the worldwide distribution rights to feature film Beasts of No Nation for $12m.

Advertising spend in the US: $83bn on digital; $72.7bn on television

It’s safe to say that Netflix, which is set to spend $6bn this year on both original and third-party content, identified the weakness of being an aggregator early. Indeed, as its success has grown, its relationship with Hollywood has soured. Bert Salke, President of Fox 21 Television Studios, even went as far as to call Netflix “public enemy number one”, after Fox accused Netflix of illegally poaching talent.

As a result, Fox has recently pulled titles such as House MD and The X Files from the platform, while others, like Family Guy, are also set to be axed from Netflix’s roster in the near future. Many of these shows are making their way to Netflix’s competitor Hulu, which is partly owned by the broadcasting giant.

Meanwhile, in August, Disney announced it would also be leaving Netflix, choosing instead to distribute its content via two new streaming services. The first, set to go live next year, will be dedicated to ESPN, while the other, which is scheduled for 2019, will show Disney films and TV shows. Marvel shows will continue to be shown on Netflix, however, as per a separate agreement between the companies.

Staying original
Paul Verna, Senior Analyst at market research firm eMarketer, stressed the importance of carrying an original offer: “Content is the biggest differentiator among the providers. Whether it’s a broadcast network like NBC, a premium cable channel like HBO or a streaming service like Netflix, every content owner is trying to reach essentially the same consumer.

“People no longer differentiate by distribution channel – just by content. In other words, people don’t care if their favourite show is delivered via over-the-air TV, cable or digital. They simply want to watch it.”

While the strength of giants like Disney affords them the opportunity to go it alone, any company racing towards independence must be wary of the dangers of rushing. For those who haven’t yet built their audiences, leaving a platform with millions of subscribers is far from advisable.

For this reason, Vollmer expects the market to combine competition and cooperation. “Near-term, competition will most likely increase as mass market and a growing amount of niche video streaming services battle for user time, attention and spending,” he said. “[However,] the path to profitability can be a multi-year effort, [so] some pockets of ‘coopetition’ are likely to emerge as smaller players partner with larger and more established players for distribution and bundling.”

Meanwhile, Netflix continues to make moves to not only attract users, but also to retain them. Verna highlighted that, instead of simply acquiring individual shows, Netflix has looked to attract people capable of creating a wider range of shows over a longer period of time. For example, in August, Netflix took the unprecedented step of purchasing comic book publisher Millarworld for an undisclosed sum. This was followed by an agreement with Shonda Rhimes, the producer at Disney-owned ABC who played a significant role in the production of Grey’s Anatomy and Scandal, among other successes.

One distinct advantage of the digital era is that success is not left to chance: like in e-commerce, big data allows content providers to understand what people want. Juan Damia, CEO of Havas Media-owned Data Business Intelligence (Latin America), explained this development: “Machine learning is generating a huge advantage to companies, allowing them to bring their customers more relevant content in real time. In order to generate the right content for the right people in the right moment, companies like Netflix or Amazon integrate, blend, process and generate insights with an acceptable level of certainty for a volume of data that can terrify the most experienced engineer in Silicon Valley.”

For example, Netflix knew House of Cards had the potential to be a huge success before it debuted, with users leaving imprints of interest across its platform. This data, revealed Damia, comes from a wide range of actions and facts, including browsing, scrolling, search, time of day, device and demographics. Everything is condensed to make sure viewers will find something else to watch when they finish devouring a series, a film or whatever else they prefer.
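
As an illustration only, a toy version of the kind of signal blending Damia describes might look like the sketch below. The signal names and weights are invented for the example; production recommenders learn these from data at enormous scale.

```python
# Invented weights over the sorts of signals named above (browsing,
# scrolling, search, time of day, device); all values normalised to 0-1.
WEIGHTS = {
    "minutes_browsed": 0.3,
    "scroll_depth": 0.1,
    "searched_title": 0.35,
    "evening_viewer": 0.15,
    "watches_on_tv": 0.1,
}

def interest_score(signals: dict) -> float:
    """Weighted sum of a viewer's behavioural signals for one title."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

viewer = {"minutes_browsed": 0.8, "searched_title": 1.0, "evening_viewer": 1.0}
print(f"Predicted interest: {interest_score(viewer):.2f}")  # -> 0.74
```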

Prime position
Amazon isn’t sitting back either, with Jeff Bezos’ firm having made great strides since first entering the US streaming market in 2011 – a market PwC estimates to be the most valuable of any country, accounting for 47 percent of global internet video revenue in 2016. Last year, the e-commerce titan revealed plans to expand the reach of its Amazon Video service from five countries to 200. Although analysts have attributed the action to a ground plan aimed at increasing the number of Amazon Prime members – who spend almost five times more money than non-members, according to Morgan Stanley – the move has made Amazon one of Netflix’s strongest competitors.

Video streaming market value: $30.2bn in 2016; $70bn forecast for 2021

In terms of content, Amazon has – until now – taken a different approach to Netflix, investing in original series at a regional rather than global level. Earlier this year, for instance, Amazon reached an agreement with the US’ National Football League, making games available to its subscribers via a deal estimated to be worth $50m. Amazon is also said to be exploring deals with a number of other sports – including basketball, soccer and lacrosse – in order to target fans with its own premium sports package.

Although nothing has been confirmed yet, an offer of this kind would also present an alternative to traditional pay-TV offerings at a time when cable channels are struggling. ESPN, for example, lost 2.9 million subscribers in the year ending May 2016. Netflix has taken note of Amazon’s move, but its reasons for concern keep mounting.

Tech giant Apple is also thought to be developing a new video streaming service, with the Cupertino-based firm first making moves into the TV business when it released the iPhone in 2007. Although it was a hardware device, the iPhone helped bridge the gap between TV and online content.

It is only natural, then, that rumours surrounding an Apple-branded service have been around for almost as long as the market itself. Now, this move finally appears to be imminent, with the iPhone maker recently budgeting for the acquisition and production of video for its platform. In true Apple fashion, the details of such a project remain vague, but sources familiar with the issue have suggested Apple will spend the budget on as many as 10 high-quality TV shows, comparable to HBO smash hit Game of Thrones.

This will build on Apple’s existing offering, which affords customers the opportunity to buy and rent movies via the iTunes Store. Although better known for its vast catalogue of some 40 million songs, the store also has more than 100,000 movies and TV shows ready for download. With Apple seeking to diversify the source of its revenue – which is predominantly generated by the iPhone – this catalogue is likely to increase further, presenting original content as a natural progression.

Consumer is king
As if there weren’t enough players in the video streaming market already, Mark Zuckerberg has announced Facebook is on the path to becoming a ‘video-first’ company. In September, Facebook launched its new service, Watch, in the US, with a view to reaching all two billion of its monthly users in the near future. Offering a range of original short videos (stored as episodes), Watch relies on a number of partners to produce engaging content for its user base.

Share of total viewing hours (online video content): Netflix 40%; YouTube 18%; Hulu 14%; Amazon Video 7%; others 21%

In line with the rest of the platform, these videos can then be shared in a personalised social environment, with categories like ‘shows your friends are watching’ encouraging users to engage with new content.

Unlike Netflix and Amazon, however, Watch doesn’t charge its users to access videos. Instead, the content is monetised through advertisements, with users watching pre- and mid-roll advertisements in exchange for free content. At a time when Facebook’s News Feed is reaching saturation point, Watch presents a new medium for the social network to increase its advertising revenues.

According to eMarketer, digital ad spending in the US surpassed TV ad spending for the first time this year, climbing to $83bn. With the introduction of Watch, Facebook is seemingly hoping to maximise its potential in both markets.

However, Facebook is not the only provider of free content, with YouTube also affording publishers and creators – affectionately known as ‘YouTubers’ – the opportunity to post original videos. Like Watch, these videos are monetised through advertisements, making use of the millions of views YouTubers attract from audiences all over the world. YouTube has also introduced its very own TV service: YouTube TV.

Introduced to the US market in April, YouTube TV presents an alternative to traditional pay-TV services, streaming live TV in return for a $35 monthly membership fee. As the company advertises, YouTube TV is “half the cost of cable with zero commitments”. Accessible from any internet-enabled device, the streaming service provides access to nearly 50 networks, as well as a cloud DVR, where users can store recordings and watch them whenever they want.

In the content era, consumers are increasingly choosing when, where and how they watch content. Although the television still has a special place in living rooms, its popularity is decreasing. According to Frédéric Vaulpre, Vice President of Eurodata TV (Worldwide) at Médiamétrie, daily linear TV viewing is shrinking worldwide, falling to three hours for adults and just two hours for young people.

In contrast, global studies have shown this viewing time is shifting to other platforms. Laptops are currently the most used device for streaming long-form video content, with tablets and smartphones next on the list. With so many alternatives readily available to customers around the world, it seems the question is no longer how long it will take for streaming to overtake television, but rather how long television has left.

Electric vehicle market at risk of stalling without adequate lithium supply

Electric vehicle ownership is set to rise over the coming years, pushed on one side by governments clamping down on air pollution, and pulled on the other by an automotive industry racing to develop a mass market substitute to petrol and diesel cars.

The success of electric vehicles owes much to lithium, since developments in lithium ion batteries allowed these vehicles to compete with petrol cars in terms of driving range and affordability. However, while lithium is not currently in short supply, the brine extraction process used to remove it from the ground is tricky, and operations require sustained investment; there is a creeping possibility of a gap between supply and demand over the coming years.

Reg Spencer is a Mining Analyst at investment firm Canaccord. In an interview with The New Economy, he outlined the expected growth in lithium demand as electric cars gain traction with consumers: “The market today is somewhere between 200,000 and 210,000 tonnes of lithium carbonate, and supply and demand would suggest we need another 550,000 tonnes by 2025.”

Driving force
In reality, this forecast likely falls short, since it assumes around 14 percent of people will drive an electric car by 2025. But, as governments attempt to phase out diesel and petrol cars, this figure will probably be considerably higher. “If you compare that 14 percent with the auto industry’s targets, some of these companies like Volkswagen, Daimler or BMW are targeting something closer to 20 or 25 percent,” added Spencer.
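
A rough back-of-envelope sketch makes the gap concrete. It rests on one assumption of ours, made purely for illustration: that the EV-driven slice of lithium demand scales linearly with EV market penetration.

```python
# Figures from the interview above; the linear scaling is our assumption.
current_market_kt = 205   # today's market: ~200,000-210,000 tonnes (midpoint)
extra_demand_kt = 550     # additional tonnes needed by 2025 at ~14% penetration
BASE_PENETRATION = 0.14

for penetration in (0.14, 0.20, 0.25):
    extra = extra_demand_kt * (penetration / BASE_PENETRATION)
    print(f"{penetration:.0%} EV share -> ~{current_market_kt + extra:,.0f} kt "
          "of lithium carbonate needed by 2025")
```

At the auto industry’s own 20 to 25 percent targets, the same arithmetic implies a market approaching 1.2 million tonnes, well beyond the forecast above.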

A steady supply of lithium is now crucial to the auto industry and, as Spencer pointed out, this requires a structural shift in a very old business model: “For the last hundred years, their supply chains have relied on materials that are not scarce, so really the auto companies have never had to worry about the raw materials that they use for their motor vehicles.

“Now, all of a sudden, a $1trn auto industry is moving towards a technology that is based on a very small market in terms of lithium. The aluminium market is about $120bn a year, iron ore or steel markets are over $100bn and the oil market around $1trn.

“Compare that with the size of the global lithium market, at about $2bn, and you start to see how sourcing enough lithium to supply the electrification of their products can be quite challenging.”

Meeting demand
If electric vehicles are to run petrol cars off the roads, lithium supply must catch up to accelerating demand. According to Spencer, this will require a hefty $8bn investment by 2025: “Today, we have a reasonable amount of visibility on sources of supply up to 2021 or 2023, beyond that it gets rather difficult to see where a lot of this new production comes from.

750,000: electric cars in use in 2014
2m: electric cars in use in 2016
60%: growth of the electric vehicle market in 2016
336,000: electric cars registered in China in 2016, making it the largest electric car market

“Medium-term, there probably is sufficient supply, provided projects get financed. But in the longer term we need a lot more supply to come online for that demand to eventuate.”

The gap in investment will have to be met partly by the automotive industry itself, and as Spencer pointed out, car companies also have a considerable incentive to take ownership of lithium supply: “The security of raw material is absolutely critical to these guys; they spend billions and billions of dollars in R&D and building factories and supply chains and battery manufacturing capacity, and all of this is wholly reliant on a $2bn industry.

“I think we’re going to see increasing investment by auto companies in order to: number one, permit new supply to come online; and two, allow them to secure that supply for their own businesses and their own supply chains.”

As roads fill with electric cars in the coming years, the fundamental operation of the automotive industry will change beyond recognition.

The troubling influence algorithms have on how we make decisions

Google co-founder Sergey Brin once quipped: “The perfect search engine would be the mind of God.” But while Google’s uncanny ability to regularly find exactly what you are looking for may seem divine, in reality, even the most sophisticated uses of machine learning have already run into a wall of human fallibility and immorality.

At their heart, algorithms are simply tools that learn from large swathes of data, providing some sort of optimised response – like a ranking or rating – in accordance with a pre-programmed procedure. They have a reputation for being infallible, neutral and fundamentally fairer than humans, and as a result are quickly making their way into both commercial and public decision-making. While some applications may still seem far-fetched, we are already far more reliant on algorithmic decision-making than most people know.

The implications of delegating our decisions to these tools range from perpetuating fake news and conspiracy theories to unfairly lengthening the prison sentences of minority groups. Far from being neutral and all-knowing decision tools, complex algorithms are shaped by humans, who are, for all intents and purposes, imperfect. Algorithms function by drawing on past data while also influencing real-life decisions, which makes them prone, by their very nature, to repeating human mistakes and perpetuating them through feedback loops. Often, their implications can be unexpected and unintended.

A moral dilemma
Take Google, for example. Around 3.6 million searches are made through the company’s search engine every minute; I would question anyone who claimed their beliefs had never been shaped to some degree by the information they found through the site. When Google responds to a query, it considers a huge variety of metrics in order to ultimately decide which web pages to include, prioritise and exclude. The date a page was created, the number of keywords it mentions, the ‘reading level’ of the text, the number of inward and outward-bound links, and the ‘authoritativeness’ of the site are just a few of the components that will determine a site’s page ranking. There are hundreds of such criteria.
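
The mechanics can be illustrated with a deliberately simplified scoring function. The weights and the handful of signals below are hypothetical stand-ins for Google’s hundreds of secret criteria; the point is only that a ranking is a weighted blend of such metrics.

```python
def page_score(page: dict) -> float:
    """Toy ranking score: a weighted sum of normalised (0-1) signals."""
    return (
        0.25 * page["keyword_matches"]      # how many query keywords appear
        + 0.20 * page["inbound_links"]      # links pointing at the page
        + 0.20 * page["authoritativeness"]  # trust in the hosting site
        + 0.20 * page["popularity"]         # accumulated page views
        + 0.15 * page["freshness"]          # how recently it was created
    )

pages = [
    {"url": "a.example", "keyword_matches": 0.9, "inbound_links": 0.4,
     "authoritativeness": 0.8, "popularity": 0.3, "freshness": 0.7},
    {"url": "b.example", "keyword_matches": 0.6, "inbound_links": 0.9,
     "authoritativeness": 0.2, "popularity": 0.95, "freshness": 0.4},
]

for p in sorted(pages, key=page_score, reverse=True):
    print(p["url"], round(page_score(p), 3))
```

Note how small changes to the weights reorder the results: nudge popularity up and the low-authority page wins, which is exactly the kind of trade-off discussed below.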

At times, this combination of carefully balanced metrics can cough up highly questionable results. Consider the fact that, for some time, Google’s algorithm responded to the query “did the Holocaust happen?” with a series of pages promoting Holocaust denial – the first of which was a page by a neo-Nazi site entitled ‘top 10 reasons why the Holocaust didn’t happen’. What’s more, autocomplete suggestions have been found to prompt searches for “climate change is a hoax” or “being gay is a sin”, with the top-ranking sites acting to confirm those notions.

Often, the search engine simply acts to perpetuate society’s ready-made biases. Some of the ugliest manifestations of this can be found in the autocomplete feature of the search bar. For instance, typing “why are black women…” has been found to autocomplete with “…so angry” and “…so loud”. Google’s Knowledge Box also falls foul of some grossly offensive untruths. The search term “are black people smart?” for some time responded with: “Blacks are the least intelligent race of all.”

The issue is that some of the seemingly harmless metrics Google employs can learn from the available data in such a way that prioritises harmful results. A key determinant of ranking is the number of page views (or popularity) a link has built up. This can lead to the site favouring extremes of opinion by promoting particularly shocking or ‘clickbait’ items. Some argue this, among other factors, has resulted in a systematic bias towards extremist views.

Furthermore, other metrics such as frequency of appearance, which is used to rank stories on Google’s news page, could lead to political bias if one election candidate receives a greater level of media attention. If popularity correlates with other factors, such as racial divides, this too will cause the results to be biased. The prioritisation of popularity as a metric can also mean the authority or truthfulness of a story can be sidelined in favour of its superficial draw – this has been shown by Google’s Snippets feature, which has been tricked by fake news stories in the past. What’s more, people trying to game the system could hijack any one of Google’s metrics, further exacerbating issues of bias.

The decisions Google’s algorithm makes, therefore, are important, particularly considering the ever-present potential for feedback loops. Like most machine-learning algorithms, Google not only analyses our behaviour: it shapes it. For instance, the popularity of a search term is a factor that drives it higher in the rankings, leading to what psychologist Robert Epstein calls the “digital bandwagon effect”. Epstein explained: “Popularity pushes one viewpoint – or candidate – higher in rankings; higher rankings increase the popularity of that viewpoint. This goes round and round until one viewpoint dominates people’s thinking.”
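
A toy simulation shows how such a loop compounds. The click-through rates and starting scores below are invented, but the dynamic Epstein describes emerges regardless of the exact numbers.

```python
# Assumed click-through rates by rank position (invented numbers)
CTR_BY_RANK = [0.5, 0.3, 0.2]

# Three competing viewpoints with nearly identical starting popularity
popularity = {"view_A": 100, "view_B": 95, "view_C": 90}

for _ in range(20):  # twenty rounds of search traffic
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    for rank, view in enumerate(ranked):
        # Clicks follow rank; clicks feed straight back into popularity
        popularity[view] += 1000 * CTR_BY_RANK[rank]

total = sum(popularity.values())
for view, score in sorted(popularity.items(), key=lambda kv: -kv[1]):
    print(f"{view}: {score / total:.0%} of attention")
```

A starting gap of just five points hardens into a roughly 50/30/20 split of attention, with the early leader never dislodged.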

In April, Google released a blog post acknowledging its role in the proliferation of fake news, hoaxes, unsupported conspiracy theories and offensive content, noting: “[It has] become very apparent that a small set of queries in our daily traffic (around 0.25 percent) have been returning offensive or clearly misleading content, which is not what people are looking for.”

Yet, while these cases may be seen as extremes, issues also exist on a much broader level. For example, if you search “great poet”, the results are predominately white men. If you search “beautiful woman” under images, your result is likely to be a long stream of white women’s faces that match a narrow definition of beauty.

In Google we trust
Google may have little control over the existence of problematic content on the web, but it does control the information its algorithm pays attention to. “These machines will learn what you teach them: the only way that we can get them to be polite is to pass on some human savvy,” noted Jacob Metcalf, a researcher at Data & Society.

At times, it seems likely the algorithm is hand-fed specific commands. For instance, when Googling “FTSE 100 CEOs”, the lead ticker presents the user with five women and five men. This appears to be a conscious effort to counter a stereotype, especially given just seven FTSE 100 CEOs are women, and those companies are by no means the most prominent. Similarly, many of the examples previously mentioned have since been tweaked.

But the company’s ability to make tweaks or adjustments to avoid such results places a lot of power in its hands. These decisions are akin to programming a moral compass into the index of the internet; undoubtedly, this has steep implications. Meanwhile, the secretive nature of the algorithm also means people cannot scrutinise the decisions it makes. For example, when Google faced heavy public pressure for its role in proliferating fake news, it made a characteristically vague announcement that it had taken action by adjusting the “signals” its search tool uses, shifting a greater focus to the “authority” of a page.

“We have to demand to know what kind of influence these algorithms have over us,” said data scientist Cathy O’Neil, whose work focuses on exposing the implications of algorithmic decision-making. While the secretive nature of Google’s algorithm itself makes it hard to delve into such questions, research has begun to shine some light on the issue.

Most notably, Epstein recently directed a study that aimed to assess the effects of search engine rankings on people’s political beliefs. Participants in his study were presented with real internet pages while using a simulated search engine, which provided different ranking orders for different groups in the study. Unsurprisingly, he found that after being exposed to the search engine, people’s beliefs were strongly affected by the stance of pages located at the top of the rankings, an effect he calls the ‘search engine manipulation effect’ (SEME).

What is particularly striking is the influence of SEME. According to Epstein, SEME is one of the largest behavioural effects ever discovered in the field of psychology: “With a single exposure to search results that favour one candidate or viewpoint over another, we get a very large shift in people’s opinions – up to 80 percent in some demographic groups. This occurs even though people cannot detect the favouritism in the search results; in other words, the manipulation is invisible to people.”

As for the reason behind SEME, Epstein references the famous experiment conducted by psychologist Burrhus Frederic Skinner, in which rats were conditioned to adopt certain behaviours by administering a punishment or reward in response to specific actions. “Like rats in a Skinner Box, our daily experience with a search engine keeps teaching us what is at the top of the list is best,” Epstein said. “That’s why high-ranking search results have such a dramatic effect on people’s opinions, purchases and votes.” Most disturbing, perhaps, is that we are prepared to trust whatever Google offers us despite knowing the information we are reading is likely subjective.

Ruthless robots
Google is an algorithm that we are all familiar with, but it is far from being the only algorithmic decision-making tool influencing our daily lives. “Algorithms are now involved in many consequential decisions in government, including how people are hired or promoted, how social benefits are allocated, how fraud is uncovered and how risk is measured across a number of sectors,” said Nicholas Diakopoulos, Assistant Professor at the Northwestern University School of Communication.

As with Google, it is often only in retrospect that questionable results come to light. For instance, it only recently emerged that Facebook’s algorithm-based advertising feature enabled advertisers to specifically target anti-Semites. Similarly, it was only after some time that it emerged LinkedIn’s algorithms suggested male names more frequently than female ones. One of the most disturbing discoveries, however, surrounds the risk scores currently used in criminal courts.

Google: 3.5bn+ searches per day; 40,000+ queries answered per second on average; 0.25% of queries estimated to return offensive or misleading content (as of April)

In the US, an algorithm called COMPAS is often used to predict the likelihood of a defendant reoffending. When someone is booked into jail, they are taken through a questionnaire that covers topics such as previous arrests and childhood upbringing. The software then uses this information to calculate a ‘risk score’, which is provided to judges as guidance for decisions regarding bail and the length of sentence. The ethnicity of the defendant is never revealed, but a recent investigation by non-profit organisation ProPublica indicated the software’s results were systematically disadvantaging black people.

When ProPublica delved into data on risk scores and the criminal histories of 12,000 offenders in Florida, it found the algorithm was more likely to mistakenly brand a black person as high risk than a white person. A similar bias occurred at the other end of the spectrum, with a greater number of white people being wrongly categorised as low risk. In reality, of those in the low-risk category, white people were almost twice as likely to be rearrested as black people. Once again, this troubling bias emerges from the algorithm’s use of past data to make predictions, meaning any correlations picked up in historic datasets will affect decisions made in the future.
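
The disparity ProPublica measured can be expressed as a simple false-positive-rate comparison across groups. The handful of records below are fabricated for illustration; the real analysis covered roughly 12,000 cases.

```python
from collections import defaultdict

# Fabricated records: (group, predicted_high_risk, reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False), ("white", True, False), ("white", False, False),
    ("white", False, True), ("white", False, False),
]

false_positives = defaultdict(int)  # labelled high risk, did not reoffend
non_reoffenders = defaultdict(int)  # everyone who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if high_risk:
            false_positives[group] += 1

for group in non_reoffenders:
    rate = false_positives[group] / non_reoffenders[group]
    print(f"{group}: false positive rate {rate:.0%}")
```

If the two rates diverge sharply, as they did in the Florida data, the tool is making its mistakes unevenly even though ethnicity never appears as an input.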

Algorithmic accountability
Diakopoulos points to the fact that while the problematic implications of many algorithms have been exposed, we may have only just begun to skim the surface. As such, Diakopoulos is leading a project that hopes to prompt investigations into algorithms currently being used or developed by the US Government: “We have more than 150 algorithms – 159 to be exact – on the list that we’ve identified as being potentially newsworthy if investigated further. These algorithms come from a wide variety of agencies across the federal government including everything from the Centres for Disease Control [and Prevention] and the Department of Veterans Affairs to the National Park Service.”

To take a single example, the ‘assessment decision tool’ is an algorithm designed to help HR professionals develop assessment strategies for skills and competencies, as well as other situational factors relevant to their hiring criteria. According to those on the project, this algorithm needs to be reviewed in case it is systematically excluding appropriate candidates.

The key issue the project hopes to target is transparency. Many of the most consequential algorithms currently being used in the public and private domains are complex and opaque, making it hard to attribute accountability to their actions. O’Neil also believes accountability is key: “If they restrict our options, and they take away options from us – a job, loan or affordable insurance – then we should have the right to understand it much more than we do now.”

Inside the black box
Improving transparency, however, is no easy task. Companies with algorithmic products would lose their competitive edge if they were forced to make their algorithms public. But, according to Epstein, there is one way to leapfrog this problem: “Transparency is not enough. In fact, because algorithms are quite complicated, information about Google’s algorithms would be useless. This is a simple matter. The index to the internet should be a public instrument, owned and controlled by the public. It should be a public utility. It should be an index, pure and simple – not a tracking device or a mechanism of manipulation. Google doesn’t own the internet, and it also should not own the index to the internet.”

Perhaps all that’s needed is a requirement to certify if an algorithm is safe or fair to use, and greater incentives to pay attention to the consequences of an algorithm once it’s up and running. According to Metcalf: “What we really need are review committees, we need policies within companies, we need codes of ethics, we need to have the cultures, habits and infrastructures to rigorously and efficiently pause and ask the question about who algorithms are affecting, how are they affecting them and how they might be improved. These moments don’t really exist in current engineering practices with any real sense of rigour.”

The most attractive option is to put the control of algorithms back into the hands of the people that are affected by them. In practice, this might work through the creation of a settings feature, where people can set their own personal algorithmic preferences. One would be given the option to instruct Google, say, to pay particular attention to location, popularity or timeliness when ordering results.
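
In code, such a settings feature could be as simple as a user-owned weight vector applied at ranking time. Everything below – the names, the weights, the rerank function – is hypothetical, a sketch of the proposal rather than any existing product.

```python
from dataclasses import dataclass

@dataclass
class RankingPreferences:
    location: float = 0.2    # weight on proximity to the user
    popularity: float = 0.5  # weight on accumulated views
    timeliness: float = 0.3  # weight on recency

def rerank(results, prefs: RankingPreferences):
    """Order results by the user's own weights, not the platform's."""
    def score(r):
        return (prefs.location * r["proximity"]
                + prefs.popularity * r["views"]
                + prefs.timeliness * r["recency"])
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "viral story", "proximity": 0.1, "views": 0.9, "recency": 0.3},
    {"title": "local report", "proximity": 0.9, "views": 0.2, "recency": 0.8},
]

# A privacy-minded user could zero out popularity entirely:
my_prefs = RankingPreferences(location=0.6, popularity=0.0, timeliness=0.4)
for r in rerank(results, my_prefs):
    print(r["title"])  # -> local report, then viral story
```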

The same concept could feasibly be rolled out in all sorts of locations. Metcalf noted one of the most interesting applications of this idea would centre on autonomous vehicles. When life-or-death decisions arise, such as whether to save the driver or the pedestrian, humans make their own decisions in the moment. “With a machine, there has to be some sort of instruction,” Metcalf said.

“There is a suggestion that rather than letting the manufacturers decide, we should let the drivers decide. When I go and buy my new autonomous Mercedes, there should be a slider bar when I log in for the first time saying ‘do I save your life or do I save the pedestrian’s life?’” Such decisions are human by their very nature, and should be treated as such. Who knows, an algorithmic slider could, one day, form part of our daily lexicon. But, in the meantime, algorithms need to be managed, ensuring those with the power to shape our lives do so under some code of conduct.

AR Metallizing brings packaging with a sustainable focus

Shop shelves have come a long way in the last 50 years. Indeed, before the Mad Men era of the 1960s – and the accompanying explosion in the power of branding – the nature of consumer decision-making was very different. Not so long ago, consumers visiting their local stores were greeted with only a handful of well-established products. Brands did not need great packaging to attract demand: they simply provided goods to the public, safe in the knowledge everyone knew what those products were.

But today, we live in a different world: brands can no longer simply rely on their reputation for quality, and modern consumers consistently find themselves bombarded with 10 or 20 different products all vying for their attention. Faced with such variety, the overwhelming majority of purchasing decisions occur in a split second. Of course, it can be hard to get a product’s full message and value proposition across in such a short period of time, but that’s why brands are increasingly looking to distinguish themselves with aesthetic packaging.

In-store purchasing decisions are rarely the product of conscious deliberation; instead, they are heavily influenced by superficial impulses stemming from the way a product is packaged. As such, the very nature of the modern world dictates that brands enter the fight for shoppers’ attention in order to survive an overcrowded marketplace.

A double bind
The era of modern advertising has fostered an exceedingly competitive environment, in which fast-moving consumer goods (FMCG) companies simply cannot compromise on the quality of their packaging. And yet, while products cannot hope to survive the harsh nature of our fast-paced, consumer-driven world without an alluring façade, companies are also under increasing pressure to become environmentally sustainable.

Environmental awareness among a growing population of consumers is fuelling demand for sustainability, which, in turn, is generating substantial pressure on firms to reduce the impact of packaging on the environment. There was a time when prominent brands could simply say ‘we are not in the waste business’ and escape scrutiny, but today’s firms are forced to rethink such statements.

As a result, brands the world over face a conflict of priorities that can only be reconciled through sustainable packaging. Indeed, the market for sustainable offerings is steadily gaining pace, driven largely by consumer demand but also bolstered by government legislation around the world. What’s more, certain brands have become leaders on the issue, introducing sustainable packaging or implementing system-wide recycling programmes.

According to the latest industry report by Research and Markets, the global sustainable packaging market is poised to grow approximately 7.7 percent per year over the next decade, and is likely to reach around $440.3bn by 2025. Global demand will be further boosted by a growing middle-class population, which is simultaneously becoming more affluent and environmentally conscious. The greatest growth is expected to come from the Asian market, driven in particular by demand in China and India.

Face value
Faced with marked shifts in consumer priorities, packaging companies are competing to create a product that is environmentally sustainable without compromising on aesthetics. One particularly exciting solution is the development of metallised papers and paperboards, which offer dramatic environmental benefits when compared with plastic products, aluminium foil or metallic inks. As well as reducing solid waste, paperboard products reduce the carbon footprint of the overall packaging process.

This can be seen in the state-of-the-art production techniques adopted when producing SilberBoard, which leverages a process called ‘metallised vacuum vaporisation’ to cover paper with a microscopic layer of aluminium. This innovative and environmentally friendly process gives brands’ packaging a more sophisticated and high gloss appearance, while eliminating the need for plastics. Crucially, SilberBoard is a print-ready end product that is easy to work with and compatible with any printing technology, helping to simplify companies’ supply chains and reduce the need for shipping.

Sustainable packaging market: 7.7% predicted annual growth over the next decade; $440bn estimated value by 2025

What’s more, care is taken at every stage of the process to ensure environmental sustainability. This is reflected in the fact all of AR Metallizing’s paperboard products are certified by the Forest Stewardship Council, and the company’s energy outlay is intelligently managed via thermal feedback. Therefore, SilberBoard not only enables premium brands to maintain – or even improve – their trusted look and feel, it also provides a dramatically more sustainable, fully recyclable alternative to the laminated polyester products that currently dominate the market.

Unique marker
And yet, the benefits of paper and paperboard products like SilberBoard extend beyond environmental and cosmetic appeal. Commissioned by Broncochem, a leading cold remedy brand in Latin America, AR Metallizing was charged with creating a reliable solution to reinforce the brand’s anti-counterfeiting efforts.

By creating unique packaging with a fully customised hologram, AR Metallizing ensured Broncochem had a product that was impossible to duplicate, securing the brand against the sale of counterfeit and lookalike products. Not only did this help to increase the company’s sales figures, the packaging also received both national and international acclaim.

Critically, AR Metallizing has proved it is possible for brands to acquire a premium, high-quality look that can drive consumer engagement in a profitable yet environmentally friendly manner. Many brands have already chosen to convert to SilberBoard, but there is still some way to go to wean the packaging industry off plastic altogether – particularly in Europe.

Notably, while it is often believed Europeans are more environmentally conscious than those in the US, our company has experienced the opposite. In the US, pressure from consumers ensures brands are eager to switch to sustainable packaging. Meanwhile, in Europe, consumer advocacy for plastic-free packaging is still in its infancy, making the region particularly challenging to work in.

Fortunately, here at AR Metallizing, we quite like a challenge, and our aim is to consistently supply brands with the highest quality packaging. We believe companies can sell their products effectively, while also making their business and the wider FMCG market more sustainable. Ultimately, we want to inspire premium brands from around the world to adopt environmentally friendly practices, and ensure harmful plastic products are sent packing.

As mass data collection becomes the norm, concerns about surveillance are growing

In the late 18th century, English philosopher Jeremy Bentham formulated designs for a new social institution: the Panopticon, a prison in which a central observer could oversee all of the inmates within a circular structure.

The observer would be hidden from the overseen, meaning the latter had no idea when they were or weren’t being observed. In the end, it didn’t matter. The prevailing threat of surveillance meant inmates would be forced to conform.

When Bentham dreamed up his idea, the logistical difficulties of creating such an institution meant no true examples of his vision ever came to fruition. Modern technologies, however, have made Bentham’s Panopticon a very real possibility.

Sensors capable of harvesting your location, monitoring your health and tracking your spending habits are now ubiquitous. Facial recognition software, meanwhile, is being developed to determine everything from your political persuasion to your sexual orientation.

Mass data collection, however, is far from universal. Increasingly, the greatest risk to our personal privacy is taking place in urban areas, where infrastructural, strategic and temporal pressures are driving the need for greater efficiencies.

These so-called ‘smart cities’ are being developed at a rapid pace, with consumers and businesses lured in by the promise of improved convenience, security and energy consumption. Accessing these advantages, however, could come at a huge cost to our personal privacy.

Urban explosion
Smart cities are not gaining support because citizens are eager to have their every action pored over; the main motivation behind urban data collection is necessity. The proportion of the world living in built-up areas is rising rapidly, with 54 percent of the global population currently living in a city.

This proportion is expected to reach 66 percent by 2050. The existing infrastructure simply isn’t capable of managing such an increase, while building new bridges and roads comes with its own associated issues. To cope with mounting urban pressure, cities are refusing to build more, choosing instead to build smarter.

“The benefits of smart cities broadly fit into two categories,” explained Tim Winchcomb, Senior Consultant for Technology Strategy at Cambridge Consultants. “Cities must provide existing services at a lower cost or to a higher standard, while simultaneously introducing completely new services. At times of squeezed public spending, rapid technological developments are satisfying the public’s desire for better information and enabling new city services to emerge.”

One of Europe’s best examples of a smart city is Barcelona, where forward-thinking local governance and the adoption of hi-tech sensors have brought behind-the-scenes efficiencies.

Currently, almost 20,000 smart energy meters help to ensure power isn’t being wasted, more than 1,000 LED streetlights are in place to monitor noise, weather and traffic, and countless ground-based sensors are used to identify the best places to park in the city.

“With so many aspects of our lives becoming smarter, from our homes and offices to our cars, the public expects more information and control than ever before,” Winchcomb told The New Economy. “Smart city services such as information about public transport, congestion or environmental conditions are no different.

“As in other aspects of our ‘connected’ lives, we see that individuals are willing to sacrifice small amounts of privacy by sharing their data for increased convenience. The challenge lies in finding the right balance.”

Service or surveillance
For every example of a smart city bringing enhanced efficiency, there is an equally worrying instance of privacy invasion. Even at this early stage, there have been numerous cases of smart sensors harvesting data without consent.

In 2013, retail outlet Nordstrom used smartphone data to monitor customers in 17 stores throughout the US. By tracking Wi-Fi signals, the store was able to measure how long consumers stayed within a particular department. Although the trial was short-lived, customers were understandably concerned when news of the data collection came to light.

Similarly, London-based firm Renew was forced to cancel its marketing programme entirely when it emerged that recycling bins were being used to track the movements of passers-by. In this instance, the collected media access control (MAC) addresses were anonymised, but the test provided a startling glimpse of the possibilities inherent in smart city infrastructure.
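To see why anonymisation offers thinner protection than it sounds, consider a minimal Python sketch of the kind of tracking the Renew bins performed. The sightings below are invented, but the principle holds: because a hash function maps the same MAC address to the same pseudonym every time, a device’s trail through the city survives the ‘anonymisation’ intact.

```python
import hashlib
from collections import defaultdict

# Invented sightings: (timestamp, sensor location, device MAC address)
sightings = [
    ("2013-06-01 08:02", "bin-cheapside", "aa:bb:cc:dd:ee:ff"),
    ("2013-06-01 08:11", "bin-bank", "aa:bb:cc:dd:ee:ff"),
    ("2013-06-01 08:25", "bin-liverpool-st", "aa:bb:cc:dd:ee:ff"),
]

def anonymise(mac: str) -> str:
    """'Anonymise' a MAC address with a one-way hash."""
    return hashlib.sha256(mac.encode()).hexdigest()[:12]

# The hash is deterministic, so each device always maps to the same
# pseudonym -- meaning its movements can still be strung together.
trails = defaultdict(list)
for timestamp, location, mac in sightings:
    trails[anonymise(mac)].append((timestamp, location))

for pseudonym, trail in trails.items():
    print(pseudonym, trail)
```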

Always in search of the next big thing, major tech firms have begun embracing the burgeoning smart city scene. Technology by artificial intelligence company Emotient that can read ‘micro-expressions’ relating to happiness has already been used to determine what shoppers are thinking when viewing a product or service. Apple purchased the facial recognition start-up in an undisclosed deal in 2016.

Global population living in cities:

54%

2017

66%

2050

It is highly plausible – in fact, probable – that in the not-too-distant future your Bluetooth signal will inform stores of your arrival, a mobile payment device will let producers know how much money you have to spend, and facial recognition software will determine how likely you are to purchase any given item. This isn’t necessarily dangerous – perhaps you don’t even consider it intrusive – but it is certainly unsettling.

To infer is human
Sometimes privacy invasion relates less to the data that is taken and more to what can be inferred from it. It is worth bearing in mind that the only way for smart cities to function effectively is through the mass collection of data. It is not simply the case that smart cities are being implemented poorly; they are invasive by their very nature.

Individually, using your fingerprint to make a biometric payment in Lille or scanning an NFC label in San Francisco may not appear particularly revealing. Collectively, however, this data can add up to a surprisingly complete picture of who you are, where you’ve been and what you’ve been doing.

This is why conversations regarding smart city data collection sometimes miss the point. Albert Gidari, Director of Privacy at the Stanford Centre for Internet and Society, believes focusing on personally identifiable information (PII) is myopic – particularly when so much valuable data can be mined from citizens without ever asking for their identity directly.

Gidari said: “When the New York Taxi and Limo Commission was forced, under public record laws, to disclose the data collected from cab and Uber drivers in 2016, a data analyst was able to identify specific individuals, their addresses, their employers and their travel habits, even though not a single bit of PII had been collected.”
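The taxi episode is easy to reproduce in miniature. In the hedged sketch below (all records invented), a dataset containing no names at all is joined with a single piece of outside knowledge – that a particular person was picked up outside a particular address at a particular time – and suddenly the ‘anonymous’ records belong to someone.

```python
# Invented trip records: no names, just times and places.
trips = [
    {"pickup": "2016-03-02 08:01", "origin": "12 Elm St", "destination": "Midtown"},
    {"pickup": "2016-03-02 17:32", "origin": "Midtown", "destination": "12 Elm St"},
]

# Auxiliary knowledge gleaned elsewhere (a tweet, a photo, a sighting):
# a named individual was collected outside 12 Elm St at around 8am.
known_address, known_time = "12 Elm St", "2016-03-02 08:01"

matches = [t for t in trips
           if t["origin"] == known_address and t["pickup"] == known_time]

# One match ties the whole travel history -- home, workplace, habits --
# to a named person, even though no PII was ever collected.
print(matches)
```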

What’s more, as platforms become more integrated, the threat posed by data inference will increase. “We shouldn’t underestimate the power of adding up lots of small bits of information that by themselves are not particularly sensitive, but in combination may reveal more than we would like,” Winchcomb said. “These could be of commercial value to advertisers or retailers, for example.”

Plans are already afoot to make smart cities more holistic. In Hong Kong, a recently signed partnership between the government and a number of private enterprises is aiming to create a ‘technology ecosystem’.

In Nigeria, meanwhile, the Imperial International Business City is being built from the ground up with data and cloud connectivity at its very core. As smart cities become more established, a comprehensive pattern of a citizen’s life will be worth much more than a disparate collection of datasets.

Nothing to hide
One of the most persistent arguments put forward by proponents of data collection is that privacy concerns are irrelevant unless you have something to hide. On the surface, this has some merit, as the increased threat of surveillance brought about by smart sensors is certainly something that should keep criminals awake at night.

Already, Internet of Things (IoT) devices are being introduced to improve security in urban communities. Flock, for example, is a US-based start-up that describes itself as a “virtual security guard”. It consists of a sensor – paid for by each resident – which records the licence plate of every car that passes by.

When it comes to marketing campaigns, the opportunity to harness personal data may ultimately prove too difficult to turn down

Facial recognition will eventually be incorporated into the device, but its absence hasn’t dulled Flock’s effectiveness today: evidence collected by the sensor has already been used to secure a conviction in a court of law.

So far, Flock has been limited to its trial in Atlanta, but support from start-up incubator Y Combinator indicates other neighbourhoods could soon be at risk from similar levels of protection. The company stresses that privacy can be maintained simply by choosing to opt out of the recording process, but that claim does not stand up to much scrutiny.

It is difficult for residents to opt out of something they may not even be aware of. This is why Gidari believes privacy has an inherent value that is worth protecting, regardless of whether it is of criminal interest or not.

“There is a lot about our lives we choose not to share with others,” Gidari said. “Do you want someone to know that you leave the house precisely at eight every day, park in the same spot for work, eat lunch two blocks away at the same place, leave work at five and drive the same route home? All seemingly unimportant points, except it tells others your home is empty for nine hours, where your car is located during that time and whether you should be a higher insurance risk based on your route home. Criminals aren’t the only ones who should value privacy.”
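Gidari’s example takes only a few lines of code to operationalise. The sketch below uses invented location pings of the sort a connected car or smartphone might emit, and infers precisely the detail he warns about: the hours during which a home stands empty.

```python
from datetime import datetime

# Invented location pings for one device over a working day.
pings = [
    ("2017-11-06 07:55", "home"),
    ("2017-11-06 08:20", "car park, Main St"),
    ("2017-11-06 12:30", "cafe two blocks from the office"),
    ("2017-11-06 17:05", "car park, Main St"),
    ("2017-11-06 17:40", "home"),
]

fmt = "%Y-%m-%d %H:%M"
away = [datetime.strptime(ts, fmt) for ts, place in pings if place != "home"]

# The window between the first and last non-home sighting is a fair
# estimate of how long the house stood unoccupied.
print(f"Home empty from {min(away):%H:%M} to {max(away):%H:%M}")
```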

When it comes to marketing campaigns, the opportunity to harness personal data may ultimately prove too difficult to turn down. You may not have anything to hide per se, but you might not want sellers to start targeting you with promotional offers as you walk into a store or pass within 100 yards of a particular brand.

Physical outlets are already at a disadvantage when compared with online portals, which are harvesting data all the time. It may be concerning, but it is hardly surprising if physical infrastructure tries to tip the scales back in its favour. Personal privacy is simply the unlucky fall guy.

A future to fear
If the data being collected by the smart cities of today fills you with dread, then the possibilities posed by future developments do not bear thinking about.

In Chicago, the spectre of predictive policing has already begun to rear its head, with larger datasets leading to the swifter administration of justice. The city’s police force is currently combining crime figures, meteorological data and local business information with broader community statistics to anticipate violent crime. So far, the number of shootings in the city’s seventh district is down 39 percent year-on-year.
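The Chicago Police Department’s actual model is not public, but a hedged toy version shows how easily such disparate datasets can be combined into a single risk score. Every weight and figure below is invented purely for illustration.

```python
# Toy risk score blending the kinds of inputs the city is reported to use.
# All weights are invented; real systems are far more sophisticated.
def risk_score(recent_shootings: int, temperature_c: float,
               late_night_venues: int) -> float:
    score = 0.5 * recent_shootings         # recent crime in the area
    score += 0.02 * max(temperature_c, 0)  # violence tends to rise in warm weather
    score += 0.1 * late_night_venues       # local business information
    return score

districts = {
    "district-7": risk_score(4, 28.0, 12),
    "district-12": risk_score(1, 28.0, 3),
}

# Patrols are directed towards the highest-scoring district.
print(max(districts, key=districts.get))
```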

These improved crime statistics haven’t gone down well with all parties, however. The American Civil Liberties Union has argued the data being collected is adding bias to the Chicago Police Department’s work.

By contrast, underlying factors such as poverty and reduced social mobility are not being addressed. It all sounds worryingly like a Minority Report-style dystopia, but as predictive analytics develops, expect fear of arrest to become a huge driver of improved safety figures.

Cyberattacks also pose a very real threat to future smart city infrastructure. Hackers have already successfully brought down the electricity grid in Kiev and infiltrated a dam in Rye Brook, New York. As valuable data becomes increasingly entwined with city sensors, it too will become a target.

In fact, a chilling vision of the future has already been created as part of an Amsterdam hackathon that took place in 2011. The app Makkie Klauwe – which means ‘easy pickings’ in Dutch slang – used openly available data to determine the best places to conduct crime. Data might prove popular with a city’s police force, but a criminal can put it to equally good use.

“Even if we are happy to share our personal data with a particular organisation, we can’t be sure it won’t be leaked, hacked or simply fall into the hands of someone with bad intentions,” Winchcomb said. “In addition, smart city threats may not become apparent for years to come. The cost of storing data is currently so low it makes sense to hold onto it indefinitely. Even if data cannot be exploited today, there is a chance it may be in the future.”

Data might prove popular with a city’s police force, but a criminal can put it to equally good use

Ultimately, whether or not you feel comfortable with IoT phone booths recording your location or digital changing rooms noting the clothes you wear will likely come down to an issue of trust. As time goes by, more and more connected sensors will begin harvesting your information. Government actors and private corporations may say your data is secure and anonymised, but the question is whether you believe them in a post-Snowden age.

Fight for your rights
If trust isn’t enough, then regulations must be strengthened to protect citizens’ rights. Winchcomb believes international standards are severely lacking and future developments remain untested: “The General Data Protection Regulation, which comes into force in the EU in May 2018, requires organisations to prove that they have consent to hold personal information. This will have an impact on smart city data collection, but it remains to be seen how effective it will be in addressing concerns.”

Investments in data sensors and smart infrastructure are accelerating, and cities cannot afford to fight against the tide. With much of the regulatory change likely to consist of learning as we go, missteps are to be expected, but complicated technology does not always require complicated legislation.

Primarily, transparency needs to improve, with city dwellers often unaware of when data is being collected and by whom. Do app developers, network providers, hardware manufacturers or government agencies possess the data being collected? Or does ownership reside with the individual?

According to Gidari, “data is already part of the mosaic of life”, but data security is not. As open standards proliferate, your car may share data with an environmental agency or your smartphone may provide information to a nearby lamppost. Encryption and malware protection will become increasingly vital to ensure smart city data remains in the hands of the relevant parties.
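As a concrete, if simplified, illustration: the snippet below uses the widely deployed Fernet scheme from Python’s third-party cryptography package to show how a sensor reading might be protected in transit, so that only the intended recipient can read it. The lamppost and its reading are invented.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held and managed by the data owner
cipher = Fernet(key)

# A hypothetical reading on its way from a lamppost to a city data platform.
reading = b'{"sensor": "lamppost-42", "noise_db": 61, "time": "22:14"}'
token = cipher.encrypt(reading)

# Only parties holding the key can recover the plaintext.
assert cipher.decrypt(token) == reading
```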

In the end, we may all find ourselves swept up in the smart city rush. As companies dangle rewards in exchange for data, the loss of privacy could be deemed a price worth paying. Seattle-based company Placed already has an app that will exchange cash or prepaid gift cards for precise location data. The ceding of personal data, however, cannot help but have an impact on human behaviour, for better or worse.

As cities expand, the walls of the Panopticon are drawing in. If the internet is anything to go by, perhaps we don’t mind. Consumers may be more than willing to give up their information in exchange for convenience, efficiency and financial savings. But as the race between technology and urbanisation heats up, it looks like personal privacy is coming dead last.

CyanoGuard is making moves to end cyanide poisoning

In March, a cyanide spill at the Veladero mine in Argentina marked the third such incident in the past 24 months. The first spill, which occurred in September 2015, saw at least one million litres of a highly toxic, cyanide-containing solution released into the local ecosystem.

Unfortunately, these are not isolated cases: a recent study by the NGO Earthworks found that a staggering 93 percent of all gold mines currently operating in the US have been involved in similar incidents, with 74 percent polluting local water sources as a result. Without a doubt, cyanide poses a serious threat to local workers, communities and ecosystems.

Cyanide is regarded by many as one of the major cost drivers in gold extraction, accounting for as much as 25 percent of operating costs in some cases. To date, no environmentally friendly alternative has managed to displace it, and demand continues to grow as the quality of many ore deposits diminishes.

Considering how little the cyanide detection landscape had changed in recent decades, it was obvious the technology needed to be taken out of the lab

The detection of cyanide is key to preventing environmental disasters similar to those that struck Veladero. However, most standard cyanide detection methods are cumbersome and, paradoxically enough, often require the use of hazardous chemicals.

Finding a solution
Inspired by the lack of suitable and sustainable alternatives, I founded CyanoGuard in 2016. While studying chemistry at the University of Zurich, I came across a groundbreaking technology that enables cyanide in solution to be detected with the naked eye.

Considering how little the cyanide detection landscape had changed in recent decades, it was obvious the technology needed to be taken out of the lab and put into the hands of those who needed it most.

A focus on mining was a natural choice: while mining activities notoriously leave enormous ecological footprints on the environment, they also play an important role in the production of everyday products we often take for granted. For example, without the regular extraction of metals there would be no smartphones, communication wires, specialised medical equipment or high-end beauty products.

93%

Proportion of US gold mines affected by cyanide spills and similar disasters

25%

Share of gold-extraction operating costs attributable to cyanide in some cases

According to estimates from the World Bank, our ongoing transition to green energy technologies will only heighten this demand for minerals and metals. Therefore, mining companies are under enormous pressure to improve their economic performance, develop more sustainable practices and minimise their impact on the environment.

Acid test
At CyanoGuard, we believe mining companies must change the way they address cyanide detection if they are to meet the growing demand for metals. As such, we have developed a cyanide test that changes colour (from orange to violet) when free cyanide binds to a corrin-based indicator.

It works like a pregnancy test, showing the user within seconds whether the free cyanide concentration of a sample is above or below a certain threshold.

We are not shy about the fact we have developed the world’s fastest and safest cyanide detection kit, but what’s more important is that the technology allows everyone to test aqueous samples in any setting, at any moment and without the use of hazardous substances or specialised equipment and training.

In 2012, the Mining Association of Canada released a list of the top 100 innovations in the mining industry. Surprisingly, only one of the innovations named on the list was directly related to cyanide management. We found this fascinating, especially considering how much shareholder value a cyanide spill can erode within just days.

This new technology will help mining companies reduce operational costs, safeguard local ecosystems and protect their public image. Here at CyanoGuard, we refer to this as the ‘mining as if there is a tomorrow’ approach: a methodology for ensuring sustainable industrial and artisanal mining practices, as well as economic prosperity.

My team also believes in the endless potential of the technology, and foresees it playing an important role in the industry’s ongoing digital transformation.

But we plan to extend our operations beyond just the mining industry, tackling issues relating to cyanide detection in healthcare and food processing as well.

The platform is also set to extend to other pollutants, with the stated aim of simplifying analytical procedures that are currently only possible in laboratory settings.

Ultimately, CyanoGuard hopes to protect communities and the wider ecosystem by placing professional detection kits in everyone’s pockets – a proposition truly worth its weight in gold.

Sony brings Aibo back from the dead

On November 1, Sony announced it would recommence the production of its pet robot dog, Aibo, some 18 years after it was first released. The revival of Aibo appears to be an effort to steal a march on the likes of Google, Apple and Facebook in the burgeoning AI market.

The latest version of the robot dog comes with an assortment of new features, most notably facial recognition, a capacity to store memories and the ability to avoid obstacles.

In addition to these improvements, Aibo also incorporates new actuator technology, which allows it to move in a more lifelike manner. Voice recognition technology also gives users the opportunity to take photos remotely and save them to the cloud. Users can then access photos uploaded online via Aibo’s companion app, My Aibo.

In 1999, Sony was seen to be at the cutting edge of robotic technology with its Aibo series

According to a statement made by Sony: “Aibo can form an emotional bond with new members of the household while providing them with love, affection and the joy of nurturing and raising a companion.”

Aibo will be released in Japan this January with a price tag of JPY 198,000 ($1,738). The Japanese tech giant may also roll out Aibo in overseas markets in the not-too-distant future.

In 1999, Sony was seen to be at the cutting edge of robotic technology with its Aibo series. Sony sold around 150,000 of its robotic companions before production came to a halt in 2006, amid mounting price pressure from regional rivals.

The announcement comes off the back of Sony reporting its fiscal second-quarter earnings, which outpaced proposed forecasts, rising from JPY 135.6bn ($1.2bn) to JPY 204bn ($1.8bn). The tech giant expects this run of success to continue until the end of the financial year, projecting an operating income of JPY 630bn ($5.5bn) – a figure that will see the company beat its best year on record.

This upsurge can be attributed to the resounding success of its PlayStation division, as well as a series of restructuring efforts that began in 2014. The news of Aibo’s rerelease has led Sony shares to increase by 11 percent, reaching a nine-year high.

World’s largest banks and R3 build blockchain-based payments system

On October 31, fintech firm R3 and 22 of the world’s biggest banks announced the development of a new international payments system. Corda, R3’s distributed ledger platform, manages legal agreements between businesses, updating records in real time and processing transactions automatically.

Corda doesn’t require manual processing or third-party verification and so both commercial and central banks recognise its potential to make international transactions a far smoother process.
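Corda itself is a commercial platform and its internals are beyond the scope of this article, but the core idea behind any distributed ledger – that tampering is self-evident, removing the need for a third-party verifier – can be sketched in a few lines of Python. The payments below are invented.

```python
import hashlib
import json

def entry_hash(payload: dict) -> str:
    """Deterministic hash of a ledger entry's contents."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

ledger = []

def append(payment: dict) -> None:
    """Chain each new entry to the hash of the one before it."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"payment": payment, "prev": prev}
    entry["hash"] = entry_hash({"payment": payment, "prev": prev})
    ledger.append(entry)

append({"from": "Bank A", "to": "Bank B", "amount": 1_000_000})
append({"from": "Bank B", "to": "Bank C", "amount": 250_000})

def verify() -> bool:
    """Any party can re-derive the chain; a tampered entry breaks it."""
    prev = "genesis"
    for entry in ledger:
        expected = entry_hash({"payment": entry["payment"], "prev": entry["prev"]})
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

print(verify())  # True -- until someone alters a past payment
```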

Both commercial and central banks recognise Corda’s potential to make international transactions a far smoother process

Frederic Dalibard, Head of Digital for Corporate and Investment Banking at Natixis, one of the banks involved in the project, said: “Natixis believes in the potential of distributed ledger technology for cross-border payments and is exploring several initiatives in that space,” according to Reuters.

“This is a significant moment for the evolution of distributed ledger technology,” Richard Gendal Brown, CTO of R3, told the International Business Times. “Most other platform providers don’t commit to this.”

A consortium of heavyweight banks led by R3 will see the likes of Barclays, HSBC, Bank of America Merrill Lynch, JP Morgan, Credit Suisse and Commerzbank adopt the new ledger platform.

Blockchain technology is seen by many financial experts as heralding a new era for financial operations. While central banks all over the world point out that blockchain is still not developed enough to be a universal currency platform, a number of the world’s biggest banks have backed the technology – via the R3-led consortium – because of its potential to facilitate smoother transaction processes.

US moves to make drone delivery reality

On October 25, US President Donald Trump issued an executive order which gives local governments the authority to allow drone tests over crowds and for long distances. The move is expected to speed up the development of new regulations in order to boost the industry’s progress, especially as players like Amazon are pushing to use drones in their delivery services.

“In order to maintain American leadership in this emerging industry here at home, our country needs a regulatory framework that encourages innovation while ensuring airspace safety,” said Michael Kratsios, Deputy Assistant to the President and Deputy US Chief Technology Officer.

The new order is the White House’s first step towards allowing the use of drones in commercial activities. In the last few years, drones have been adopted as a form of domestic entertainment, but initiatives like Amazon’s have shown their revolutionary potential in other industries.

Drones have been adopted as a form of domestic entertainment, but initiatives like Amazon’s have shown their revolutionary potential in other industries

Amazon CEO Jeff Bezos has long been keen to transform the delivery industry with drone technology. The company’s R&D department has been working on Bezos’ futuristic plan, best known for outlining delivery services on the moon, since 2013. But it was in December 2016 that the e-commerce giant trialled its Prime Air drone delivery in Cambridge, UK, for the first time.

With Trump’s new order, Amazon could begin the process of turning its proposal into a reality.

According to the document signed by Trump, the measure will also help to “enhance the safety of the American public, increase the efficiency and productivity of American industry and create tens of thousands of new American jobs,” The Wall Street Journal reported.

The news came after a federal advisory panel failed to provide the US Federal Aviation Administration with a recommendation regarding drone regulation. Safety and security were the panel’s chief concerns.

Now, the option to test drones will “provide policy makers with the data they need to bring about the future of American aviation”, Kratsios said.

Meanwhile, tech developers are investing heavily in drones in order to secure a prominent position in the market when the order comes into effect.

Microsoft boosts green energy portfolio with General Electric agreement

On October 9, Microsoft announced the signing of a 15-year power purchase agreement with General Electric (GE).

As a result of the partnership, Microsoft has committed to purchasing 100 percent of the power produced by the 37-megawatt Tullahennel wind farm in County Kerry, Ireland.

Aside from energy production, the deal will also see Microsoft and GE work together to improve battery technology in the country.

Each turbine will be fitted with an integrated battery that will analyse data related to energy storage. Tests will then identify the best way for these batteries to store excess energy and feed it back to the Irish grid during times of high demand.

The utilisation of energy storage technology could help eliminate the peaks and troughs that are associated with renewable energy sources

Microsoft confirmed that the utilisation of energy storage technology could help eliminate the peaks and troughs that are associated with renewable energy sources.
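A hedged sketch (all figures invented) shows the principle the Tullahennel tests are exploring: charge the battery when the wind produces more than the grid needs, and discharge it when demand peaks.

```python
# Invented hourly figures: wind output vs grid demand (MWh).
wind = [30, 35, 37, 20, 10, 8, 25]
demand = [22, 20, 25, 30, 34, 36, 24]

capacity, charge = 15.0, 0.0  # battery size and current state of charge (MWh)
delivered = []

for produced, needed in zip(wind, demand):
    if produced >= needed:
        # Surplus hour: store whatever excess the battery can hold.
        charge = min(capacity, charge + (produced - needed))
        delivered.append(needed)
    else:
        # Shortfall hour: discharge the battery to cover the gap.
        from_battery = min(charge, needed - produced)
        charge -= from_battery
        delivered.append(produced + from_battery)

print(delivered)  # a smoother supply curve than raw wind output alone
```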

The Washington-based firm also announced that the partnership with GE marks the first time batteries have been used in a European wind farm.

“Microsoft is proud to be deepening our long history of investment and partnership in Ireland with this agreement,” said Christian Belady, General Manager of Data Centre Strategy at Microsoft. “Our commitment will help bring new, clean energy to the Irish grid, and contains innovative elements that have the potential to grow the capacity, reliability and capability of the grid.”

Despite Microsoft’s insistence that the deal is motivated by environmental concerns, the use of green energy also makes financial sense for the company.

Alongside the falling price of renewable energy, legislation issued by governments all over the world has incentivised technology firms to find green energy sources for their data centres.

Back in July, for example, Google pledged to purchase all of the energy produced by a solar farm in the Netherlands for the next 10 years. Microsoft’s own commitment to clean energy sources, meanwhile, now totals almost 600 megawatts worldwide.