Input for High-Level Advisory Board Member Poonam Ghimire and the UN Foundation

In August 2022, SI was invited to contribute to two consultations held by the United Nations Foundation and High-Level Advisory Board member Poonam Ghimire on ‘Peace and Security’ and ‘Digital Governance’. 

SI members discussed the need to incorporate catastrophic risks; to build institutions that can adapt to the rapid pace of social and technological change; and to consider the interests of future generations. Below is a summary of SI members' points, later submitted as written input.

Peace and Security

Context:

  • Emerging technologies such as transformative artificial intelligence, new generations of nuclear weapons and advances in synthetic biology will transform the conflict and security landscape

    • By introducing new risks (such as lethal autonomous weapons that could be directed to kill millions of people or entire ethnic groups)
    • By introducing new ways to reduce risks (for instance, technology that allows the international community to verify if a country has an active bioweapons programme)
    • By introducing new conflict-resolution tools (e.g. AI-enabled mediation tools)
  • Alongside these specific risks and opportunities, there are structural, or systemic, risks and opportunities that come from autonomous systems. Emerging technologies could destabilise our societies and thus make conflict more likely and more devastating.

    • For instance, outsourcing military surveillance to AI systems that could easily be hacked
  • Why does this matter for youth?

    • Young people are disproportionately concentrated in conflict-affected areas
    • Young people will be both the potential victims and potential creators of emerging technology that increases or reduces conflict and conflict damage
  • Why does this matter for future generations?

    • Citizens of future-majority countries are disproportionately victims of conflict
    • The application of emerging tech to conflict presents existential risks, which could threaten the existence of future generations altogether

Recommendations:

  • HLAB should include the recommendation that nuclear-armed states not incorporate artificial intelligence into their NC3 (Nuclear Command, Control and Communications) systems. Relevant paper here
  • HLAB should encourage institutions such as the UN Futures Lab to study the ability of AI to help mediate conflicts
  • HLAB should recommend a strengthening of the Biological Weapons Convention, by increasing the budget of its Implementation Support Unit and adding a Verification Mechanism
  • HLAB should advocate for UN agencies such as UNODA to encourage the adoption of a common global standard for what constitutes a Lethal Autonomous Weapon; countries currently use differing definitions, which makes progress difficult.
  • HLAB should declare that conflict constitutes an existential risk factor for humanity, acting as a ‘force multiplier’ for other existential threats because it reduces international cooperation on key issues such as climate change, AI and disarmament.

Digital Governance

Context:

  • Many issues in Digital Governance can be traced back to the fact that modern Artificial Intelligence systems trained using machine learning are essentially ‘black boxes’:

    • We can observe their inputs and outputs, but we do not know for certain how they reach their decisions or how they will behave in new situations (a toy code illustration follows at the end of this list)
  • A strand of research called ‘AI Safety’ focuses on ensuring that AI systems are: 

    • Interpretable (we know what causes them to behave the way they do)
    • Robust (we can ensure that these systems will behave the same way in ‘training’ as in the real world)
    • Specified (we can give them clear goals that align with human values)
  • However, only around 300 researchers work on AI Safety (with several thousand more in the broader field of AI Ethics), compared with over 40,000 researchers working to increase the power of AI systems (according to the Global AI Talent Watch 2020), an imbalance of more than 130 to 1

  • Without greater investment into AI Safety, these systems’ power will outstrip their wisdom and humans’ ability to control them

  • That said, researchers are working on common standards for interpretability, robustness and specification

    • This is important science, but without a political push for all countries to adopt these standards, they will have little effect
    • Non-adoption is particularly likely if countries form ‘blocs’ that adopt different sets of standards
    • The UN is in a unique position to work with standard-setting bodies like the International Organization for Standardization (ISO) to ensure that safe AI standards are adopted universally
  • Why this matters to youth:

    • As AI systems advance, young people are disproportionately likely to be victims of discrimination, polarization or existential catastrophe
    • Young people are also the most likely to become AI safety researchers or to push for political solutions
  • Why this matters for future generations:

    • Future generations will inhabit the world we build for them; we must ensure that it is a flourishing, fair and human-centric one
    • But we must also ensure that this world exists at all; AI could be a key existential risk factor, for example through its role in nuclear arsenal management or in the development of synthetic biology
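
To make the ‘black box’ and robustness points above concrete, here is a minimal Python sketch. It is not part of the original submission; the task, the data and the network are invented for illustration. The sketch trains a tiny neural network whose inputs and outputs we can observe, prints its learned weights to show that they do not explain its decisions (interpretability), and evaluates it on inputs drawn from a shifted distribution (robustness):

```python
# A toy illustration (invented for this summary, not from the SI input):
# a small neural network as a 'black box', plus a distribution-shift test.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Sample 2-D points; label = 1 if the point lies inside a circle of
    radius 1.2 around the origin. `shift` moves the input distribution,
    standing in for the gap between 'training' and 'new situations'."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (np.linalg.norm(X, axis=1) < 1.2).astype(float)
    return X, y

# Tiny network: inputs and outputs are observable, but the weight matrices
# below are just numbers; they do not explain *why* the model decides as it does.
W1 = rng.normal(size=(2, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)) * 0.5; b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                      # opaque internal state
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # P(inside the circle)

# Train with plain batch gradient descent on binary cross-entropy.
X_train, y_train = make_data(2000)
for _ in range(3000):
    h = np.tanh(X_train @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    dlogit = (p - y_train[:, None]) / len(X_train)  # gradient at the output
    dh = (dlogit @ W2.T) * (1.0 - h ** 2)           # backprop through tanh
    W2 -= h.T @ dlogit;   b2 -= dlogit.sum(0)
    W1 -= X_train.T @ dh; b1 -= dh.sum(0)

def accuracy(X, y):
    return float(((forward(X)[:, 0] > 0.5) == y).mean())

# Interpretability problem: the parameters are numbers, not reasons.
print("first row of W1:", np.round(W1[0], 2))

# Robustness problem: accuracy can degrade silently once inputs drift away
# from the training distribution, and nothing in the output signals it.
X_test, y_test = make_data(2000)               # same distribution as training
X_shift, y_shift = make_data(2000, shift=1.5)  # a 'new situation'
print("in-distribution accuracy: %.2f" % accuracy(X_test, y_test))
print("shifted-data accuracy:    %.2f" % accuracy(X_shift, y_shift))
```

If the shifted accuracy comes out lower than the in-distribution figure, as is typical under distribution shift, the model gives no warning that anything has changed; detecting and bounding this kind of silent failure is what the robustness work discussed above targets. The third property, specification, is not shown here: it concerns whether the training objective itself captures what we actually want.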

Recommendations:

  • HLAB should specifically call for common global standards in interpretability, robustness and specification, and set out steps for UN agencies to work on this with national delegations and the private sector (a hypothetical sketch of what a machine-checkable standard could look like follows below)
  • The most important paper that outlines this topic can be found here
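
To make the recommendation above more tangible, here is a purely hypothetical sketch of what a machine-checkable standard could look like. No such UN or ISO standard exists; every field name and threshold below is invented for illustration:

```python
# Hypothetical sketch of a machine-checkable AI standard; the report fields
# and thresholds are invented for illustration and reflect no real standard.
from dataclasses import dataclass

@dataclass
class SafetyReport:
    in_dist_accuracy: float          # accuracy on held-out data
    shifted_accuracy: float          # accuracy under a standardised shift test
    has_interpretability_doc: bool   # a required explanation artefact exists

def meets_standard(report: SafetyReport,
                   max_degradation: float = 0.05,
                   min_accuracy: float = 0.90) -> bool:
    """Pass only if all three pillars hold: robustness (bounded degradation
    under the shift test), baseline performance, and interpretability."""
    robust = (report.in_dist_accuracy - report.shifted_accuracy) <= max_degradation
    return (robust
            and report.in_dist_accuracy >= min_accuracy
            and report.has_interpretability_doc)

# A model that is accurate in-distribution but fails the standardised
# shift test would be rejected under this (invented) standard.
print(meets_standard(SafetyReport(0.97, 0.80, True)))   # -> False
print(meets_standard(SafetyReport(0.97, 0.94, True)))   # -> True
```

The design point is that thresholds expressed as concrete, testable numbers can be audited the same way in every country, which is what would make universal adoption through standard-setting bodies meaningful.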