AI has become an integral part of everyday life, particularly in the tech sector, where major companies are investing billions in its development. That growth also raises an important question: where should we draw the line on its use?
In light of this, the Fedora Project is taking a clear step toward defining how artificial intelligence should be used within its community, aiming to strike a balance between letting contributors benefit from AI tools and protecting the community's core values.
More specifically, this week, the Fedora Council, the Fedora Project's top-level leadership and governance body, published a draft policy on AI-assisted contributions, opening a two-week window for community review and feedback before a final vote.
The proposal follows more than a year of discussions, beginning with a community survey in the summer of 2024 and continuing through Fedora’s Flock conference and Council meetings. The message from contributors was consistent: AI can help improve workflows, but it also raises questions about privacy, ethics, and overall quality.
The draft policy tries to address those concerns with a few key rules. First, contributors remain fully responsible for anything they submit. AI tools can assist in generating code or documentation, but human review and accountability remain non-negotiable. Fedora also asks for transparency: if AI played a significant role in creating a contribution, that should be noted in the commit message or pull request.
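As a rough illustration, such a disclosure could be a single trailer line at the end of a commit message, along these lines (the `Assisted-by:` tag and the commit content are hypothetical examples, not a format the draft is confirmed to mandate):

```
packaging: fix version comparison for unequal-length releases

The previous loop dropped the trailing release segment, causing
point-release upgrades to be skipped.

Assisted-by: <name of AI tool>
```

Whatever the exact wording, the intent is the same: a reviewer can see at a glance that an AI tool was involved, while the human contributor remains the one accountable for the change.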
Another important point is that reviewers shouldn’t lean on AI to make final decisions about whether a patch or contribution gets accepted. The Council is drawing a firm line there: people make the calls, not machines.
On the project management side, the draft prohibits the use of AI/ML tools to evaluate or score items such as funding requests, code of conduct cases, or conference talk proposals. Automated tools for spam filtering and note-taking remain fine, but AI can’t replace human judgment in sensitive areas.
For Fedora’s users, the proposal stresses privacy and consent. Any AI-powered feature that sends data off the local system must be opt-in; it should never be enabled by default. At the same time, the Council encourages contributors to explore how AI can help with accessibility features such as translation, transcription, or text-to-speech.
Finally, the policy looks outward to Fedora’s role as a Linux platform. Packaging AI frameworks and tools for research and development is encouraged, provided they meet existing packaging and licensing rules. And for those training models on Fedora project data, the policy makes it clear: scraping that harms infrastructure isn’t allowed, and license obligations must be respected.
The Council states that this draft is intended to be a living document, adapting as technology evolves. After the two-week comment period, the policy will go to a formal vote through the Council’s ticket system.
For more information, refer to the Fedora Discussion board.