Artificial intelligence: a brief on risks and opportunities

The last few years have witnessed striking progress in artificial intelligence, driven by the machine learning revolution. However, as ChatGPT foreshadows, humanity is not yet prepared to govern extremely powerful AI systems: it remains difficult to understand how they function and to direct their influence.

To assist international policy actors in fostering sustainable technological development, we’re publishing this short guide to recent developments in artificial intelligence.

Risks & opportunities from artificial intelligence

1-sentence summary: Artificial Intelligence (AI) has transformative potential for personal well-being, economic growth, and scientific advancement, but risks of misaligned systems range from exacerbated inequality to loss of human control, with solutions including research, auditing, and standard-setting.

Definition: Artificial intelligence is the science of building systems that can understand, plan, and execute without continuous human supervision. Recent breakthroughs, like GPT-4 (the foundation model behind ChatGPT), use deep learning to train autonomously on large datasets and are then refined through reinforcement learning from human feedback.

The technical details: how artificial intelligence (specifically machine learning) works:

  • For the most advanced systems, AI = algorithms + compute + data

  • Algorithms convert data into useful outputs and allow the system to learn. The transformer architecture (2017) was a big step forward; core AI algorithms have not advanced much since.

  • Data is the information used to train the system. More data = better systems. 

  • Computing power or ‘compute’ refers to the chips used to train and run the systems. AI chips are difficult to build and have a complicated supply chain.

  • Recently, most advances have come from using more compute. Scaling compute leads to emergent properties (i.e., unpredictable new capabilities). This also means the cost of developing AI is rapidly rising, though so is the potential for profit.

  • AI is a ‘black box’. The iterative process of AI system development means it is currently impossible to know what is happening within a system or to fully align it with human requirements.

  • Currently, the most advanced systems are ‘large language models’ (e.g. the system underlying ChatGPT). Such ‘foundation models’ are extensively trained on large datasets and human feedback and can then be used in diverse applications.
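The role of data in the bullets above can be illustrated with a toy example. The sketch below is a character-level bigram "language model" (vastly simpler than a transformer, and purely illustrative): it counts which character tends to follow which in its training data and predicts the most frequent successor. All names and the toy corpus are our own illustrative choices, not from any real system.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count character-bigram frequencies -- the 'data' side of the equation."""
    counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        counts[current][following] += 1
    return counts

def predict_next(model: dict, char: str) -> str:
    """Return the character most often seen after `char` during training."""
    if char not in model:
        return ""  # never seen this character: the model has no basis to predict
    return model[char].most_common(1)[0][0]

# A tiny training corpus; real foundation models train on trillions of tokens.
model = train_bigram_model("the theory of the thing")
print(predict_next(model, "t"))  # prints "h" -- 't' is always followed by 'h' here
```

The same principle scales up: with more (and more diverse) data, the learned statistics cover more situations, which is one reason "more data = better systems" holds in practice.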

The international AI landscape:

  • The most advanced systems are concentrated among a few big companies, but once a system has been trained, it is cheap to deploy and proliferate on consumer technology.
  • There have been numerous international attempts at AI ethics standards, with little implementation.
  • The EU AI Act is the leading attempt to legislate safe AI systems.

Opportunities from AI:

Risks from AI:

  • Increasing inequality, bias, and misinformation: incomplete training datasets and the black-box, improvisational nature of AI systems make it difficult to assess the authenticity of generated media and can lead to discrimination against underrepresented populations
  • Unprecedented instability: automation drives drastic changes in key organizing features of our societies, e.g. job displacement, community organization, international affairs, and social media
  • Autonomous misinformation or weapon systems: highly targeted, low-cost approaches lower the barrier to deployment, increase the willingness of terrorist and state actors to take risks, and could increase the number of conflicts
  • Loss of human control and agency: increasing returns to larger models, recursive self-improvement in AI agents, and exploitation of vulnerabilities in the human psyche could allow AI systems to capture organizations, markets, regulators, and entire populations, because AI goals only incompletely reflect human values

Key governance gaps in AI:

  • Insufficient democratic governance of AI and data regulation (decision-making about AI development and access remains concentrated in a few hands)
  • No internationally implemented standards for safety and interpretability of AI behavior
  • Barriers to cooperation between major AI developers

Solutions for the governance of AI:

Case studies in risks from AI:

Prominent experts speaking about the risks and solutions:

If you would like to suggest additions or have other feedback, please email