When humans become commoditised, it is time for change


We live in the era of clever technology and AI, where machines are taught to learn and act without human intervention. It’s easy to reel off the numerous ways in which technology improves our lives. In business it has made the impossible possible. As individuals, we can instantly connect with others around the globe as never before. But the use of tech can also blur the boundaries of what it means to be human. Technology can often take over or pre-empt our decisions, influence our thoughts and affect our actions and behaviour. Alongside the infinite possibilities in the application of technology, there are also very real risks.

This is brilliantly highlighted in the chilling Netflix documentary-drama ‘The Social Dilemma’, which exposes how our social media platforms are powered by a surveillance-based business model designed to mine, manipulate and extract our human experiences for profit. It’s a stunningly successful strategy that has made tech giants like Facebook, Google and Instagram among the richest companies in the history of humanity. Now the very experts who built this technology are sounding the alarm on their own creations, concerned about their unforeseen effects.

Manipulation

What these whistle-blowing tech gurus know is that their platforms can affect our real-world behaviour. But users aren’t just being sold the latest fashion trend or gadget. The targeting capabilities of these platforms give anyone with a motive “the power and precision to influence us cheaply and with phenomenal ease.”

Disinformation campaigns have been cited in more than 70 countries and have doubled in the past two years[1]. These campaigns are used at scale to incite hatred, polarise those with opposing views, foment protest, spread fake news and even interfere with elections.

They can do this because all social media is driven by algorithms, which as author and data scientist Cathy O’Neil succinctly explains, “are opinions embedded in code.” She stresses, “Algorithms are not objective. They’re optimised to some definition of success. And when they’re employed for commercial use, that’s usually profit.”

Using sophisticated psychological techniques such as ‘positive intermittent reinforcement’, the brains behind social media calculate how to manipulate users as fast as possible and keep us online by rewarding us with dopamine hits. It’s the same principle that drives the use of slot machines in Las Vegas.

As Tabitha Goldstaub explains in her latest book ‘How to Talk to Robots’, “If your social feed defines your spending habits or you’ve downloaded the latest filter to see what you’ll look like when you are old or now connect with your doctor using an app, have applied for a job online or used your phone to arrive at work in record time, AI is playing a part in how you live, work and play”. 

As Yale University professor Edward Tufte notes, “There are only two industries that call their customers ‘users’: illegal drugs and software.”


[1] New York Times

Human cost

On a human level, the pervasiveness of social media has had deeply worrying effects on mental health. Developed countries have seen a sharp rise in hospital admissions for self-harm and in suicides among teenagers since 2011, when social media platforms came into widespread use on smartphones.

It is notable that many in the technology industry will not give smartphones to their own children, do not allow them access to social media below the age of 16 and strictly limit screen time.

Turning point

So, has technology lost its way? Joe Toscano, former experience design consultant for Google, says, “I don’t think these guys set out to be evil. It’s just the business model that’s flawed.”

Former Facebook engineer Justin Rosenstein, co-inventor of Facebook’s Like button, agrees, “When we created the Like button, our entire motivation was to spread positivity and love in the world. The idea that today we’d have teens getting depressed when they don’t get enough Likes or political polarisation was nowhere on our radar.”

Recognising that technology’s promise to keep us connected has given rise to a host of unintended consequences, a growing movement is working towards a more humane technology future. It is spearheaded by the Center for Humane Technology, a non-profit co-founded by former Google design ethicist Tristan Harris, former Nvidia executive Randima Fernando and Aza Raskin, former head of user experience at Mozilla and inventor of the infinite scroll. It aims to “address our broken information ecosystem” so that humanity returns to using technology for good, not greed.

Time for change

“It’s easy to lose sight of the fact that these tools have created some wonderful things in the world,” asserts Tim Kendall, former president of Pinterest and, before that, director of monetisation for Facebook. “Things like reuniting lost family members, finding organ donors. There were meaningful, systemic changes happening around the world because of these platforms that were positive. I think we were naïve about the flipside of that coin.”

Is it too late to put the genie back in the bottle?

The tech gurus behind the Center for Humane Technology believe there is still hope. Tristan Harris says, “The fabric of a healthy society depends on us getting off this corrosive business model. We can demand that these products be designed humanely. We can demand not to be treated as an extractable resource.”

Likewise, Justin Rosenstein passionately believes there must be a change in attitude but that it will only come with people power: “I feel like we’re heading on a fast track towards dystopia and it’s going to take a miracle to get us out of it. And that miracle is, of course, collective will. At the end of the day this machine isn’t going to turn around until there’s massive public pressure.”

Supporting Tech for Good

At Square Mile Accounting we believe that technology can of course be used for good, but we all have a responsibility to use it within the boundaries and context of positive social values and moral grounding. For a healthy society, technology should not be allowed to stealthily infiltrate every aspect of our lives unchecked. The Social Dilemma exposes our vulnerabilities as humans, showing that technical capability must be balanced by human judgment to ensure it is used with integrity.

This year’s seismic events have given us all the opportunity to reflect on and re-evaluate our purpose. For David and all the team at Square Mile, this means actively helping and supporting the people behind the businesses we work with, especially those whose businesses have a positive social impact. For us, technology is an enabler. It’s the human element and social impact that are our focus.

Our mission is to help navigate current and future technological developments consciously. We’ll be sharing our client stories as part of a campaign to highlight Tech for Good. Let’s keep the conversation going. We all have a part to play.
