
CODES Community Feedback on UN AI Advisory Board's Interim Report

April 25, 2024
04:01 (EST)
Nilushi Kumarasinghe
CODES Focal Point, Sustainability in the Digital Age & Future Earth

CODES Community Feedback to the UN AI Advisory Board’s Interim Report on Governing AI for Humanity

In December 2023, the UN AI Advisory Board, which focuses on supporting the global governance of AI, released its interim report on Governing AI for Humanity. The report discusses the opportunities and risks of AI and presents a framework to strengthen the international governance of AI. To encourage collective action, the AI Advisory Board called for feedback on the report.

The CODES secretariat reached out to the wider CODES community to engage it in generating a collective feedback document. A community meeting was held on February 26th, 2024 to inform the community of the call for feedback, and in March a draft report was circulated to the wider CODES community for comments and edits. Together, feedback on the Interim Report was formulated and submitted. As CODES, we reinforce the message to prioritize environmental digital sustainability in the governance of AI, among other concerns.

We want to thank our CODES community for their valuable engagement and contribution to this work. Read the CODES Community reflections on key components of the UN AI Advisory Board’s Interim Report on Governing AI for Humanity below.

Reflections on AI Opportunities and Enablers highlighted in the AI Interim Report

  • We recognize the highlighted importance of investing in capacity building as an enabler in Paragraph 19. To add to this, we want to emphasize further that all key stakeholders, including policy makers, tech innovators, problem holders, civil society and more, should receive education and capacity building on how best to leverage AI in a safe, inclusive, and sustainable way.  
  • Other examples of environmental AI opportunities that the report could highlight include digital twins of the natural environment to aid large-scale biodiversity and ecosystem monitoring, scenario building, and impact prediction; the optimization of energy, water, and other natural resources; and accurate early warning of climate change-related hazards.
  • While it is mentioned in Paragraph 20 that AI can and should be deployed to support the SDGs, WE WISH THIS TO EXPLICITLY HIGHLIGHT DIGITAL ENVIRONMENTAL SUSTAINABILITY, WHICH WE FEEL IS OFTEN LESS EMPHASIZED. We appreciate the stated need to go beyond market mechanisms and the benevolence of the private sector, and the call for a broader governance framework to facilitate this. We wish to add that a wide range of stakeholders, from diverse regions and sectors, must also be involved in the design and implementation of such frameworks. This includes, for example, tech producers, users, civil society, research communities, policy makers, and investors.
  • CODES also recognizes the importance of increased data from the Global South as an enabler. For example, biodiversity data is largely skewed toward North America and Europe, while biodiversity-rich countries have had comparatively little data collection to date. In the advent of generative AI, it is also important to capture the voices of Indigenous communities, rural farmers, women, and other marginalized yet critical groups of people in order to mitigate inherent biases and avoid unintentionally exacerbating inequality. Planet-focused Digital Public Infrastructure will play a key role, along with continued effort to build a global data commons grounded in open data and fair data-sharing principles.

Reflections on the risks and challenges

  • While environmental risks are mentioned in Box 3, under risks to (Eco)systems, GIVEN THE SCALE AND UNCERTAINTY OF AI RISKS TO OUR ENVIRONMENT, CODES CALLS FOR A SEPARATE RISK CATEGORY DEDICATED TO ACKNOWLEDGING THE HIGH ENVIRONMENTAL RISKS OF AI.
  • While AI can increase energy efficiency, AI systems can also generate large quantities of GHG emissions. Studies estimate that a search using generative AI consumes four to five times more energy than a conventional web search. Furthermore, the energy consumption of ChatGPT has been estimated to equate to the energy consumed by 33,000 US households. In addition, training machine learning models can consume large quantities of water, with some studies estimating AI water demand in 2027 at 4.2-6.6 billion cubic meters [studies highlighted in a Nature review publication - https://doi.org/10.1038/d41586-024-00478-x].
  • The indirect impacts of AI and other technologies are not yet fully understood and can be significant, for example the impact on consumption. Digital technologies are optimizing supply chains and enabling efficiency gains by reducing the time, transaction costs, or human capital needed for various tasks. This lowers the costs of producing and distributing goods and, as a consequence, creates “rebound effects”: downward pressure on the prices of goods and services enables increased production and consumption.
  • We are in agreement with paras 31-33 and want to reiterate the importance of leveraging the precautionary principle to prepare for and plan around the uncertainties of AI.
  • We must prioritize energy efficiency in the development of AI systems, for example by optimizing code and hardware. 
  • We stress para 24, on the opaqueness of AI. Current data gaps in AI systems prevent informed policy making. Researchers and developers should report and make available data on the methodologies and resources used, including materials and energy, in the development and design of AI. This will enable comparisons to be made and help implement policies to mitigate understood risks.
  • We call for circularity and support para 37 in the need to account for the entire lifecycle of AI so that metals and minerals used for digital products can be tracked, traced, recovered and recirculated.
  • Exaggerated misrepresentations of AI risks can distract the general public and policy makers from understanding AI’s true direct problems and the responsibilities of AI creators.
  • It is crucial to navigate away from business as usual and optimize AI for the public interest and human and planetary well-being. Governmental institutions are responsible for establishing an adequate playing field with legislation to enable this reorientation of AI and other emerging technologies.
  • We recommend basing AI risk assessments on impacts to human and planetary well-being. A clear process for risk classification is necessary.

Reflections on the Guiding Principles to guide the formation of new global governance institutions for AI

  • Under Guiding Principle 3, we call for highlighting the foundational role of data standards for responsible and ethical AI practices. These data standards should build on existing best practices and enrich and complement ongoing developments. Such standards ensure data quality, consistency, interoperability, and transparency throughout the AI lifecycle.
  • We recognize the importance of Guiding Principle 5 but wish to recommend a step further than anchoring and PROPOSE A NEW OR REVISED PRINCIPLE THAT CALLS FOR THE FULL ALIGNMENT OF THE VISION, VALUES, AND OBJECTIVES OF AI WITH THAT OF SUSTAINABLE DEVELOPMENT, SPECIFICALLY ENVIRONMENTAL SUSTAINABILITY, TO ACHIEVE DIGITAL SUSTAINABILITY. This is a reorientation of digitalization that must be led by strong multi-stakeholder coalitions. It must include the greening of AI, to ensure that environmental impacts are measured and mitigated. This can also support Guiding Principle 2 on governing AI in the public interest.

Reflections on the Institutional Functions that an international governance regime for AI should carry out

CODES is in strong support of the Institutional Functions listed. We wish to reiterate that the private sector/AI developers, in addition to member states, should be held accountable and responsible for the sustainable governance of AI. Accurate and complete data on environmental impacts is very limited, and developers must be transparent about the environmental impacts of their models.

  • Under Institutional Function 1, CODES is in strong support of building a scientific foundation on AI through systematic processes similar to those of the IPCC. The process should be diverse, engaging globally diverse researchers and different knowledge systems. All impacts across the AI supply chain, including social, environmental, and economic impacts, should be observed and assessed.
  • Under Institutional Function 2, we also propose a governance mechanism for reporting on measures taken to address the risks of AI.
  • Under Institutional Function 3, CODES reiterates the importance of developing global standards and indicators to assess the impacts of AI. We strongly support the establishment of standards to measure the environmental impacts of AI, as highlighted in Paragraph 65. The assessment of environmental impacts should be consistent and interoperable.
  • Under Institutional Function 4: As CODES, we want to continue advocating for the inclusion of environmental sustainability alongside social and economic sustainability and benefits. Therefore, we propose including environmental benefits within the title of Institutional Function 4.
  • Institutional Function 5 should also include enforcement of existing copyright and patent laws to ensure that data curators and content generators are fairly compensated for contributing to datasets later used or released by AI, for example by putting in place an international compensation framework.

Other comments on the International Governance of AI

We want to bring attention to some additional elements that must be considered in the international governance of AI:

  • We must connect and build trust between tech innovators, users, civil society, researchers, and policy makers. There should be safe spaces for transparency and open collaboration where digital technologies can be designed through inclusive practices, integrating perspectives from diverse voices, sectors, and regions. This will help build a common understanding on AI opportunities and risks and help develop AI tools that center around environmental, social, and economic sustainability.   
  • There must be increased transparency in the operations of labor markets behind AI, in particular workers involved in the sorting and labeling of data for AI. 
  • We wish to reiterate the importance of not crossing “red lines” for AI as highlighted in Para 29 of the report. We strongly argue against the use of AI-enabled systems with autonomous functions on the battlefield, the broader concept of weaponizing AI, and other “red lines” as identified by the report. Restricting AI from crossing “red lines” should be enforced by national and international entities. 
  • AI practitioners, in particular those working on high-stakes applications, should receive ethics certifications and adhere to professional standards on technical and ethical issues. Governments, the tech industry, and professional bodies play an important role in ensuring this. Actors such as CEOs and governing bodies of high-stakes AI companies should also possess a deep understanding of AI ethics, including environmental impacts.

General feedback on the Interim Report

  • The introduction lacks sufficient reference to environmental risks, failing to address how AI can negatively impact the environment. Despite the first box focusing on climate change, there is a need for broader context and vision on environmental risks, as well as on the critical role of political will in acting on insights gained from AI.
  • The AI Advisory Board should also consider guidelines and standards to limit the substitution of human intelligence with AI and generative AI in sensitive or high-risk situations.
  • Finally, it is important to stress that the report represents a first step in an ongoing process of reflection and action on AI governance. It is essential that the recommendations and guiding principles presented in the report be followed by concrete action on the part of governments, businesses, and civil society to ensure that AI is used responsibly and for the benefit of humanity as a whole. The report could further explore concrete steps to strengthen international cooperation on AI governance, such as data-sharing agreements, cross-border monitoring mechanisms, and joint training programs for AI professionals.

The AI Advisory Board’s draft Interim Report can be found here.
