Workers across industries must figure out how they can adapt to rapidly changing conditions, and companies have to learn how to match those workers to new roles and activities. This dynamic is about more than remote working or the role of automation and AI. It’s about how leaders can reskill and upskill the workforce to deliver new business models in the post-pandemic era.
In the Fourth Industrial Revolution, the lines between the biological, technological and physical realms are being blurred. In this age, the three most valuable skills for people are emotional intelligence, critical thinking and lifelong learning. Those three skills will protect people’s livelihoods should their jobs become automated or obsolete because of technology.
To meet these challenges, companies should craft a talent strategy that develops employees’ critical digital and cognitive capabilities, their social and emotional skills, and their adaptability and resilience. Now is the time for companies to double down on their learning budgets and commit to reskilling. Developing this muscle will also strengthen companies for future disruptions.
The workplace will be decentralized, and with decentralization comes complexity. If we can be curious, conscientious and confident that, no matter the complexity, we will learn to find a way through it, we will succeed.
Admittedly, it will be challenging, with many pivots toward new ways of dealing with conflict, anxiety and isolation, building team culture and simply being human with others.
DigiconAsia: In your opinion, what are some basic foundations (based on ethics, government regulations, industry standards etc.) that should be in place before we allow artificial intelligence to become ‘coworkers’ in our organizations?
Cook: We live in an uncertain time, characterized by change that we have limited experience in handling. For the first time in history, man and machine will have to learn to live and work with one another in increasingly intimate ways.
The idea that digital omnipotence is inevitable worries me. What will human agency look like in this Brave New World where you have AI powered by quantum computing? Where everything is connected, knowable and predictable? Where people with access to technology have an unfair advantage?
Nuclear technology poses an existential threat to the world. Five decades into the Nuclear Non-Proliferation Treaty, on balance we believe it has been an effective deterrent to proliferation, or at the very least has established a norm of non-proliferation.
Like nuclear technology, AI can be weaponized, and although it is not in itself a weapon of mass destruction, AI can create a dangerous monopoly of data-powered influence. Before we sign up for AI coworkers, we need to establish a treaty that guides the proliferation of AI.
This treaty or agreement should start with the Asilomar AI Principles, which guide the development of beneficial AI towards opportunities to help and empower people in the decades and centuries ahead.
The principles cover:
- Research issues such as goals and funding, as well as the link between AI researchers and policy makers
- Ethics and values, which ultimately hold humans accountable for an AI’s outcomes and impact, and uphold the sanctity of privacy as a fundamental human right
- Longer-term issues, such as the principle that super AI should be developed only in service of society, and not for any one nation or organization