Mozilla's New Framework to Encourage Openness in AI Development


The Columbia Institute of Global Politics and Mozilla have collaborated with leading AI experts to create a framework that promotes openness in artificial intelligence (AI). They have published a paper detailing the concept and structure of this framework.

The Value of Open Source Technologies

Open source technologies, which were instrumental in fostering innovation and safety in the early internet era, have provided essential building blocks for software developers. They have been applied in fields ranging from the arts to vaccine design, and underpin applications used around the world. The estimated value of open source software exceeds $8 trillion. Previous attempts to constrain open innovation, such as export controls on encryption in early web browsers, proved counterproductive, further underscoring the value of openness.

Openness in AI Models and Systems

The paper examines current approaches to defining openness in AI models and systems, and proposes a descriptive framework for understanding how each component of the foundation model stack contributes to openness. Open source approaches to AI, particularly foundation models, promise significant societal benefits. However, defining “open source” for foundation models has proved challenging, because they differ significantly from traditional software development. This ambiguity has made it difficult to propose concrete approaches and standards that developers can follow to promote openness and reap its benefits.

Challenges with Openness in AI

Discussions around openness in AI often remain high-level, making it harder to weigh the benefits and risks of openness in AI. Some policymakers and advocates associate open access to AI with certain safety and security risks, often without substantial evidence to support these claims. Conversely, the merits of openness in AI are widely proclaimed, but with little clear guidance on how to actually realise those opportunities.

Columbia Convening’s Role

In February, Mozilla and the Columbia Institute of Global Politics convened over 40 leading scholars and practitioners working on openness and AI. These individuals, drawn from prominent open source AI startups, nonprofit AI labs, and civil society organisations, explored the meaning of “open” in the AI era. The recently published paper, resulting from this collaboration, provides a framework for understanding openness across the AI stack.

Outcomes of the New Framework

The paper proposes a framework to understand how each part of the foundation model stack contributes to openness. It offers an analysis of how to unlock specific benefits from AI, based on desired model and system attributes. The framework also adds clarity to support further work on this topic, including efforts to develop stronger safety safeguards for open systems.

The framework is expected to aid discussions within technical and policy communities. For instance, it can help clarify how openness in AI can support societal and political goals such as innovation, safety, competition, and human rights. It can also assist AI developers in ensuring their AI systems achieve their intended goals, promote innovation and collaboration, and reduce harms. The Columbia Institute of Global Politics, Mozilla, and the wider open source, AI, policy, and technical communities anticipate building further on this framework.