The Fedora Council, a top-level community leadership and governance body responsible for stewardship of the Fedora Project as a whole, has officially approved a new policy allowing AI-assisted contributions to Fedora projects.
The decision, which follows months of community discussion, sets clear boundaries on how tools like ChatGPT and GitHub Copilot can be used while ensuring human accountability remains central to the process.
Under the new rules, contributors are free to use AI tools to generate or assist with code, documentation, or other project materials — but only if the person submitting the work remains its true author.
Moreover, the Council emphasizes that all contributors must take full responsibility for the accuracy, safety, and licensing of their submissions, regardless of the extent of an AI tool’s involvement.
In practical terms, AI can assist with a contribution, but it cannot take the blame for it, or the credit.
To promote transparency, the policy encourages developers to disclose when AI played a significant role in their work. Fedora suggests using a commit trailer such as “Assisted-by: <AI tool name>” to make that clear in project history. The idea is not to restrict innovation, but to ensure everyone understands where the content came from and who ultimately stands behind it.
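As a minimal sketch of how such a disclosure might look in practice (the commit message and tool name here are illustrative, not from the policy), the trailer can simply be added as a second `-m` paragraph, or appended with Git's `--trailer` option (available since Git 2.32):

```shell
# Disclose AI assistance via a commit trailer (hypothetical commit):
git commit -m "Fix buffer overflow in parser" \
           -m "Assisted-by: GitHub Copilot"

# Or append a trailer to the most recent commit after the fact:
git commit --amend --no-edit --trailer "Assisted-by: GitHub Copilot"
```

Because trailers live in the commit message itself, the disclosure travels with the project history and can later be searched with `git log --grep "Assisted-by"`.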
The policy also covers reviewers and maintainers. They’re allowed to use AI tools to assist with their review work, but the final decision on whether a contribution is accepted must always be made by a human. In other words, AI can help — it just can’t decide.
Additionally, for Fedora products or features that integrate AI directly, user consent remains mandatory. Any AI-powered functionality must be opt-in rather than automatically enabled, ensuring Fedora continues to respect user control and privacy.
Lastly, the Council describes this policy as a “living document,” meaning it may evolve as AI technologies advance and their impact on open-source workflows becomes clearer. For full details, see the Fedora Discussion post and the Council ticket.

Fedora had a bright future until IBM got involved with Red Hat/Fedora. Now with AI in the mix, that’s the last straw! I will have nothing to do with Fedora or any distro that embraces AI and what it will surely become on a worldwide scale: making mankind subservient to it!
“The development of full artificial intelligence could spell the end of the human race.” (2014)
– Stephen Hawking (1942–2018), world-renowned English theoretical physicist, cosmologist, and author, who lived with ALS from age 21.
The ultimate result of uncontrolled AI is portrayed in this 2004 film:
“I, Robot” (2004)
https://www.imdb.com/title/tt0343818/?ref_=fn_all_ttl_1
What a shame. As a quite happy Fedora Kinoite user, I will be on alert and/or begin my migration plans elsewhere. There is no possible way this turns out well for the privacy-minded.
AI is just nasty on so many social, economic, & environmental levels. Of course, nobody will care when I leave, but I will hold to my values as we watch the ever escalating erosion of privacy, values and the inevitable political targeting of individuals/groups using bigtech AI spyware on steroids.
The only thing they are asking for is disclosure.
Like it or not, right now or at least within a year’s time, every single Linux distribution will have parts of its code that are AI-generated. You cannot stop it; humans always take the road of least resistance, and having AI analyse, fix, or patch a piece of code you throw at it is just so easy.
Do I like it? Am I a fan of it? No. But it is going to happen, so it either happens sneakily or it happens with full disclosure. Take your pick.
So you can leave, but unless you’re going to build and code your whole new system yourself, you will, sadly enough, always end up with AI-generated code.
If you want to remain AI-free, then take the last Linux release from around 2023 and stick with that 🙁
Fedora devs won’t become mindless once this rule is enforced. Code will continue to be reviewed by the same people, who could also use AI to speed up reviews.
Nothing critical would be in the hands of AI because, as clearly stated, Fedora devs will continue to be responsible for the end product.
I find this proposal more realistic than explicitly forbidding AI tools because some devs actually use AI, and they could try to hide it to avoid their code being rejected (even if good), and be bashed by the community.
Fedora users will continue to trust competent Fedora devs to make good decisions.
Now, nobody will be forced to use AI, and if Fedora becomes a mess in a year, devs will be the first affected (much more than users), but also the first to fork Fedora to save the thousands of hours and the love they’ve poured into it.
If AI has any chance of preventing some volunteer devs from burning out, by letting them spend less time on the project, and of attracting new devs, it would be a good thing. I don’t see any other positive outcome of using AI, but I’m not a Fedora dev.
So basically they can use all the AI they want, and most of us will not know how much AI is being used on various things unless we are part of the project, since the majority of us just use Fedora without reviewing anything that has to do with the code process.
Since all the most influential developers were against *all* AI content, this policy statement must be some kind of compromise with the IBM/Red Hat marketing suits. The corporatists must really want to burn capital like crazy in search of the best chatbot.