After months of heated discussion, the Linux kernel community codified a simple-but-stern rule: you can use AI to write kernel code, but you — a named human — remain fully on the hook.
The policy, now reflected in the kernel's documentation for coding assistants, treats AI as a tool, not an author. That distinction is blunt and deliberate: AI-generated output may be submitted, but an AI may never perform the legal act of signing off on a patch. Only a human can add the "Signed-off-by" line that certifies the Developer Certificate of Origin (DCO). In short, the person who hits submit is legally and reputationally responsible for everything in that patch.
What the rules actually require
- AI agents MUST NOT add Signed-off-by tags. Only humans may certify the DCO.
- The human submitter must review all AI-generated code and ensure license compliance before contributing.
- When an AI tool is used, contributors are asked to declare it with an "Assisted-by" tag that includes the agent name, model version, and tool used — an effort to improve transparency about how code was produced.
You can read the wording yourself in the kernel's documentation: coding-assistants.rst in the Linux kernel repository.
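Concretely, a patch description under this policy would carry both trailers. The following is an illustrative sketch, not taken from the kernel docs; the assistant name, model, and author details are placeholders:

```
subsystem: short description of the change

Longer explanation of what the patch does and why.

Assisted-by: ExampleAssistant (model example-v1, via example-cli)
Signed-off-by: Jane Developer <jane@example.com>
```

The Assisted-by line discloses the tooling; the Signed-off-by line remains a human-only certification of the DCO.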
Why does this matter? Because the Signed-off-by line is not ceremonial. It asserts you have the right to contribute the code and that you've reviewed it. Letting an AI sign off would sidestep that legal chain of custody, creating a potential mess for an open-source project whose code runs everywhere — from phones to servers to appliances.
Pragmatism over prohibition
Linus Torvalds and the maintainers landed on a pragmatic middle path. A ban felt pointless to many: developers already use tools like GitHub Copilot, local LLMs, and other code assistants as part of their workflow. The kernel's approach says: don't try to pretend AI is a magic author; use it to help, but do the hard work yourself.
That stance will be familiar to teams wrestling with AI-assisted development across the industry. Projects and companies experimenting with local models are learning similar lessons about control and provenance: moves like Ollama tapping Apple's MLX for faster local LLMs, and Google's open-model releases such as Gemma 4, are changing how teams think about bringing models closer to their development workflows.
A response to "AI slop"
Part of the urgency came from maintainers' exhaustion. The term "AI slop" — low-quality, bulk-generated patches that haven't been meaningfully reviewed — circulated on kernel mailing lists for months. Review queues were becoming clogged with suggestions that looked machine-made and sometimes introduced regressions or security flaws. For a project that stresses correctness and performance, that trend was intolerable.
Security researchers and industry scanners have reported rising incidents tied to careless AI-assisted coding in recent quarters. For a codebase that forms the backbone of many systems, the maintainers decided to reduce ambiguity: good code is good code, whatever produced it, but the named author bears the consequences.
What this means for contributors and organizations
Practically, contributors who use AI will need to: carefully audit any AI output, double-check licensing (training data provenance is a hairy legal frontier), and document which tool produced the code with the Assisted-by tag. The kernel community's approach shifts liability away from vendors and models and onto the developer who asserts ownership.
That shifting of responsibility will ripple beyond the kernel. Large projects and enterprises will watch and likely borrow the model: accept AI as a force multiplier, forbid AI from making legal attestations, and require transparent attribution. The kernel's decision could become a template for other critical infrastructure projects that can't afford fuzzy provenance.
Why it matters beyond the mailing list
Linux isn't only a developer hobby; it powers phones, cloud instances, embedded devices and national infrastructure. France's recent push to move government PCs to Linux underlines how consequential kernel stability and trust are at a state level: choices about code provenance affect national deployments and digital sovereignty in real ways (France moves millions of government PCs from Windows to Linux).
The new policy won't make AI perfect overnight. It does, however, draw a clear line: accept the productivity gains, but keep your hands on the wheel — and your name on the patch.
No one is pretending this ends the conversation. Expect ongoing debates about what “sufficient review” looks like, how granular Assisted-by metadata should be, and whether tooling can help reviewers triage AI-suggested patches more quickly. For now, the kernel has set a simple cultural rule: tools can help you write, but people must own what they submit.