6 positive AI visions for the future of work – World Economic Forum

Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, we saw as the exclusive and permanent preserve of humankind: making medical diagnoses, drafting legal documents, designing buildings, and even composing music.
Our concern here, though, is with something even more striking: the prospect of high-level machine intelligence systems that outperform human beings at essentially every task. This is not science fiction. In a recent survey of leading computer scientists, the median respondent estimated a 50% chance that this technology would arrive within 45 years.
Importantly, that survey also revealed considerable disagreement. Some see high-level machine intelligence arriving much more quickly, others far more slowly, if at all. Such differences of opinion abound in the recent literature on the future of AI, from popular commentary to more expert analysis.
Yet despite these conflicting views, one thing is clear: if we think this kind of outcome might be possible, then it ought to demand our attention. Continued progress in these technologies could have extraordinarily disruptive effects – it would exacerbate recent trends in inequality, undermine work as a force for social integration, and weaken a source of purpose and fulfilment for many people.
In April 2020, an ambitious initiative called Positive AI Economic Futures was launched by Stuart Russell and Charles-Edouard Bouée, both members of the World Economic Forum’s Global AI Council (GAIC). In a series of workshops and interviews, over 150 experts from a wide variety of backgrounds gathered virtually to discuss these challenges, as well as possible positive Artificial Intelligence visions and their implications for policymakers.
Those included Madeline Ashby (science fiction author and expert in strategic foresight), Ken Liu (Hugo Award-winning science fiction and fantasy author), and economists Daron Acemoglu (MIT) and Anna Salomons (Utrecht), among many others. What follows is a summary of these conversations, developed in the Forum’s report Positive AI Economic Futures.
Participants were divided on whether the end of traditional work would be a good thing. One camp thought that, freed from the shackles of traditional work, humans could use their new freedom to engage in exploration, self-improvement, volunteering, or whatever else they find satisfying. Proponents of this view usually supported some form of universal basic income (UBI), while acknowledging that our current system of education hardly prepares people to fashion their own lives, free of any economic constraints.
The second camp in our workshops and interviews believed the opposite: traditional work might still be essential. To them, UBI is an admission of failure – it assumes that most people will have nothing of economic value to contribute to society. They can be fed, housed, and entertained – mostly by machines – but otherwise left to their own devices.
In this view, people will be engaged in supplying interpersonal services that can be provided – or which we prefer to be provided – only by humans. These include therapy, tutoring, life coaching, and community-building. That is, if we can no longer supply routine physical labour and routine mental labour, we can still supply our humanity. For these kinds of jobs to generate real value, we will need to be much better at being human – an area where our education system and scientific research base are notoriously weak.
So, whether we think that the end of traditional work would be a good thing or a bad thing, it seems that we need a radical redirection of education and science to equip individuals to live fulfilling lives or to support an economy based largely on high-value-added interpersonal services. We also need to ensure that the economic gains born of AI-enabled automation will be fairly distributed in society.
One of the greatest obstacles to action is that, at present, there is no consensus on what future we should target, perhaps because there is hardly any conversation about what might be desirable. This lack of vision is a problem because, if high-level machine intelligence does arrive, we could quickly find ourselves overwhelmed by unprecedented technological change and implacable economic forces. This would be a vast opportunity squandered.
For this reason, the workshop attendees and interview participants, from science-fiction writers to economists and AI experts, attempted to articulate positive visions of a future where Artificial Intelligence can do most of what we currently call work.
These scenarios represent possible trajectories for humanity. None of them, though, is unambiguously achievable or desirable. And while there are elements of important agreement and consensus among the visions, there are often revealing clashes, too.
The economic benefits of technological progress are widely shared around the world. The global economy is 10 times larger because AI has massively boosted productivity. By sharing this prosperity, humans can do and achieve more. This vision could be pursued through various interventions, from introducing a global tax regime to improving insurance against unemployment.

Large companies focus on developing AI that benefits humanity, and they do so without holding excessive economic or political power. This could be pursued by changing corporate ownership structures and updating antitrust policies.

Human creativity and hands-on support give people time to find new roles. People adapt to technological change and find work in newly created professions. Policies would focus on improving educational and retraining opportunities, as well as strengthening social safety nets for those who would otherwise be worse off due to automation.

Society decides against excessive automation. Business leaders, computer scientists, and policymakers choose to develop technologies that increase rather than decrease the demand for workers. Incentives to develop human-centric AI would be strengthened and automation taxed where necessary.
New jobs are more fulfilling than those that came before. Machines handle unsafe and boring tasks, while humans move into more productive, fulfilling, and flexible jobs with greater human interaction. Policies to achieve this include strengthening labour unions and increasing worker involvement on corporate boards.
In a world with less need to work and basic needs met by UBI, well-being increasingly comes from meaningful unpaid activities. People can engage in exploration, self-improvement, volunteering or whatever else they find satisfying. Greater social engagement would be supported.

The intention is for this report to start a broader discussion about what sort of future we want and the challenges that will have to be confronted to achieve it. If technological progress continues its relentless advance, the world will look very different for our children and grandchildren. Far more debate, research, and policy engagement are needed on these questions – they are now too important for us to ignore.

Stuart Russell, Professor of Computer Science and Director of the Center for Human-Compatible AI, University of California, Berkeley
Daniel Susskind, Fellow in Economics, Oxford University, and Visiting Professor, King’s College, London
The views expressed in this article are those of the authors alone and not the World Economic Forum.
© 2021 World Economic Forum
