Does Open Source Software Still Matter?

Datanami

Elastic announced that Elasticsearch and Kibana are being licensed under the GNU Affero General Public License v3 (AGPL), which is an Open Source Initiative (OSI)-approved open source license.

Is That LLM Actually “Open Source”? We Need to Talk About Open-Washing in AI Governance

Hackernoon

The Open Source AI Definition (OSAID) provides a solid foundation for understanding what constitutes an open-source AI system. It lays out clear criteria for transparency, accessibility, and ethical use, ensuring that AI models meet a minimum standard of openness. By adhering to the OSAID, developers and users can have confidence that a model meets those basic standards of openness and transparency.

Debate over “open source AI” term brings new push to formalize definition

Ars Technica

The Open Source Initiative (OSI) recently unveiled its latest draft definition for “open source AI,” aiming to clarify the ambiguous use of the term in the fast-moving field. The move comes as some companies like Meta release trained AI language model weights and code with usage restrictions while using the “open source” label. This has sparked intense debates among free-software advocates about what truly constitutes “open source” in the context of AI.

We’re a big step closer to defining open source AI – but not everyone is happy

ZDNET

The OSI has been working diligently on creating a comprehensive definition for open-source AI, similar to the Open Source Definition for software. This effort addresses the growing need for clarity about what makes up an open-source AI system at a time when many companies, such as Meta with Llama 3.1, claim their AI models are open source without really being open at all.

We finally have a definition for open-source AI

MIT Technology Review

Open-source AI is everywhere right now. The problem is, no one agrees on what it actually is. Now we may finally have an answer. The Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source, has released a new definition, which it hopes will help lawmakers develop regulations to protect consumers from AI risks.