Wikipedia talk:Blocking policy
| The project page associated with this talk page is an official policy on Wikipedia. Policies have wide acceptance among editors and are considered a standard for all users to follow. Please review policy editing recommendations before making any substantive change to this page. Always remember to keep cool when editing, and don't panic. |
| This is the talk page for discussing improvements to the Blocking policy page. |
| This is not the page to report problems to administrators or request blocks/unblocks. This page is for discussion of the Wikipedia blocking policy itself. |
| See WP:PROPOSAL for Wikipedia's procedural policy on the creation of new guidelines and policies. See how to contribute to Wikipedia guidance for recommendations regarding the creation and updating of policy and guideline pages. |
| The content of Wikipedia:GlobalBlocking was merged into Wikipedia:Blocking policy on 18 October 2012. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists. For the discussion at that location, see its talk page. |
| The content of Wikipedia:Block on demand was merged into Wikipedia:Blocking policy on 25 July 2016. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists. For the discussion at that location, see its talk page. |
Wikilawyering over essays
(Pinging @Thryduulf and @Chaotic Enby)
We were talking over at the WP:VPPR about a wikilawyer problem, namely that some people think only "policy" violations can result in blocks, and that admins are therefore prohibited from giving an essay (or even a guideline) as an explanation for a block. Obviously, this isn't true, because we issue blocks every day that give essays, particularly WP:NOTHERE and Wikipedia:Tendentious editing, as the explanation.
I see two possible paths towards addressing this.
- We could explicitly say that admins can block accounts even if there is no specific line in Official™ Policy that clearly applies. For example, WP:EXPLAINBLOCK currently says: "The community expects that blocks will be made for good reasons only, based upon reviewable evidence and reasonable judgment, and that all factors that support a block are subject to independent peer review if requested" (emphasis added), and we could add a sentence like "This includes blocking accounts when they deem it necessary or appropriate, even when no specific rule has explicitly been violated". WP:WHYBLOCK opens with "The following are some of the most common rationales for blocks"; this could be expanded to say "This list is not exhaustive. Admins are allowed to use their best judgment to block accounts whenever they believe a block is necessary to protect Wikipedia from harm or to reduce likely future problems, even if the specific situation is not explicitly named in any rule".
- We could explain that admins are allowed to use essays as the explanation. For example, WP:DISRUPTBLOCK says "A user may be blocked when their conduct severely disrupts the project; that is, when their conduct is inconsistent with a civil, collegial atmosphere and interferes with the process of editors working together harmoniously to create an encyclopedia". I think we all agree that promoting Neo-Nazism would be "inconsistent with a civil, collegial atmosphere" and would "interfere with the process of editors working together harmoniously". But it's a specific type of disruption, so instead of saying "indeffed per DISRUPTBLOCK", you might well say "indeffed per Wikipedia:No Nazis". In both explanations, the problem is disruption; the only difference is that the essay provides a more detailed and relevant explanation. To do this, we could expand WP:EXPLAINBLOCK to say "When explaining why an account has been blocked, a link to a policy or guideline may not be the best explanation, so it is acceptable to provide informal explanations or to link to a page containing a relevant explanation, such as the essay WP:NOTHERE".
What do you think? WhatamIdoing (talk) 18:45, 7 March 2026 (UTC)
- Essays are just explanations. Anyone can write an essay, and giving admins the power to block based on any essay (or based on nothing at all) will not be helpful. It may sound like having fewer rules means having more freedom, but in fact it means less freedom for the vast majority of editors, who are put at greater risk of being blocked if they do something an admin finds "wrong". We don't want to concentrate more power into the hands of the few while limiting the freedom of the many. The reason WP:NOTHERE et al. are accepted by the community as reasons for blocking is that they are based on generally agreed-upon interpretations of policy. A block based on an obscure essay can absolutely be brought to WP:AARV if it turns out that the consensus is against that particular essay's interpretation of policy, while the proposed additions would make these blocks an acceptable use of admin discretion. Admins are intended to responsibly use their tools in a way that reflects and enforces community consensus, and this proposed change does not work in that direction. Chaotic Enby (talk · contribs) 19:12, 7 March 2026 (UTC)
- @Chaotic Enby, do you understand that this policy says that admins already have the power to block based on any "good reasons" that seem "reasonable" to them? WhatamIdoing (talk) 19:17, 7 March 2026 (UTC)
- (edit conflict) I'm not convinced a policy change of any sort is necessary - I don't see anywhere that says blocks must be for policy reasons, and at least one place, under WP:DISRUPTBLOCK, that explicitly says guidelines are enough. Besides, where do they think policies come from? Votes on the talk page? Also, WP:Wikilawyering is an essay, so if someone tries to wikilawyer that we can't block people for violating essays, .... —Cryptic 19:12, 7 March 2026 (UTC)
- AFAICT we have editors, particularly newer ones, who believe that policies were handed down from on high, or that they sprang fully formed from the head of Zeus. It's hard for editors to grasp the idea that Wikipedia:The difference between policies, guidelines, and essays is sometimes just a matter of which tag was on the page when we finally quit fighting over which tag should be there. WhatamIdoing (talk) 19:19, 7 March 2026 (UTC)
- The difference is that guidelines and policies are guaranteed to have some kind of acceptance by the community, while essays can be written by anyone. There are obviously edge cases with essays having broad consensus, but there are just as many (if not more) essays that are rejected by the vast majority of the community, and still exist. We need to draw a line somewhere, and the P&G vs essay line is as good as any, as P&G require some level of explicit consensus. Chaotic Enby (talk · contribs) 19:23, 7 March 2026 (UTC)
- Sure, but I don't think that's relevant. This policy already says that admins can block anybody for any "good reasons". Therefore, if you, using your best judgement, decide that you have "good reasons" to block me, then you already have a policy (namely, the Wikipedia:Blocking policy) that supports that decision.
- Given that, we have options:
- Block for "good reasons" as authorized by this policy, and give a vague but officially endorsed explanation like WP:DISRUPTBLOCK
- Block for "good reasons" as authorized by this policy, and give a specific but informal explanation (e.g., a custom explanation on the user's talk page or a link to an essay)
- Given that all blocks for "good reasons" are authorized by this policy, do you think that it's important to give an officially endorsed explanation, even if that explanation will not actually explain anything to the user? WhatamIdoing (talk) 20:10, 7 March 2026 (UTC)
- This is a false dichotomy, as the policy/guideline being broken and the specific explanation of why should be complementary. Ideally, you should explain which policy was broken, and why precisely. Essays often help with that as a shorthand, and some of them can be a useful tool, although other essays are completely divorced from policy (and thus fail the first purpose). Chaotic Enby (talk · contribs) 20:15, 7 March 2026 (UTC)
- Where in this policy does it actually say that editors cannot be blocked unless they have "broken" a policy or guideline? WhatamIdoing (talk) 21:49, 7 March 2026 (UTC)
- WP:INDEF explicitly states that
It is designed to prevent further disruption, and the desired outcome is a commitment to observe Wikipedia's policies and guidelines, and to stop problematic conduct in the future.
, but I recognize that it should probably be written somewhere more prominent, as it is arguably the main way to consistently enforce conduct. If editors don't know what rules they have to follow, or can be blocked on the whims of any administrator without having broken a rule, how can they feel safe editing? Chaotic Enby (talk · contribs) 22:14, 7 March 2026 (UTC)
- If you think this policy should prohibit blocks unless an editor breaks a rule written on a page that says {{policy}} at the top, then the first step towards achieving that goal is admitting that this policy currently says no such thing.
- The fact is that editors can't "feel safe editing", if their notion of safety is following the written rules. We elect admins because we trust them not to block someone "whimsically", and we hope when we elect them that they will also have the courage to block people who are interfering with the overall goal even if they can't point to an exact sentence in a {{policy}} that the editor has clearly broken. WhatamIdoing (talk) 22:41, 7 March 2026 (UTC)
we hope when we elect them that they will also have the courage to block people who are interfering with the overall goal
, but the problem is that admins may decide what "interfering with the overall goal" means in a wide variety of ways, which can be at odds with the broader community's interpretation of it, which is why WP:AARV exists. Your proposed changes would put this under admin discretion, explicitly allowing these blocks "at whim", which completely removes admin accountability regarding blocks. Regarding
the first step towards achieving that goal is admitting that this policy currently says no such thing
, well, I did say that it isn't said in these words, although, as I showed, the policy makes reference to this. Chaotic Enby (talk · contribs) 22:49, 7 March 2026 (UTC)
- No, admitting that admins are supposed to decide what constitutes harming Wikipedia is not the same thing as "explicitly allowing these blocks 'at whim'". It is admitting that these blocks are already allowed when admins carefully consider the situation and make a reasonable judgment call that Wikipedia will be better off without some contributors.
- Note the difference between exaggerations like "at whim" and standards already in the policy, like "carefully consider" and "good reasons". WhatamIdoing (talk) 23:18, 7 March 2026 (UTC)
- The policy clearly does not say that "good reasoning" is a valid rationale for a block. That sentence is in the explaining blocks subsection; administrators should be able to explain the blocks they make with good reasoning, but it is not a blanket right for administrators to block whomever they wish. Blocks are given to prevent disruption to the encyclopaedia. When an administrator blocks and links an essay in the form, I very much hope that they are taking action based on their own comprehension of the underlying P&G the essay cites, and not the 'in a nutshell' messagebox from the essay itself.
- Further on that point, "good judgement" can vary from person to person ‒ or essay writer to essay writer. This is why P&Gs undergo an arduous process with much input to become codified. DatGuyTalkContribs 20:27, 7 March 2026 (UTC)
- I wouldn't call a WP:PROPOSAL an arduous process, but you've also got to keep in mind that before I wrote PROPOSAL in 2008, zero policies and only two guidelines (MEDRS and MEDMOS) had undergone that process. Let's see: Category:Wikipedia policies has 63 pages now, and archive.org says 56 in 2012 (its earliest copy), so, rounding generously, maybe 20% of our current policies followed the PROPOSAL process? And therefore about 80% of them didn't.
- This policy literally says "The community expects that blocks will be made for good reasons only, based upon reviewable evidence and reasonable judgment". This means that "good reasons" are required for all blocks, without exception.
- The policy gives some specific "good reasons", e.g.:
Blocks should be used to:
- prevent imminent or continuing damage and disruption to Wikipedia;
- deter the continuation of present, disruptive behavior; and
- encourage a more productive, congenial editing style within community norms.
- but I think you'd be hard pressed to find anything in this policy that says reasons must be pre-approved in a policy, or to find a policy somewhere that explains what exactly constitutes "less productive, uncongenial" behavior or all the different ways that it's possible to behave "outside of community norms".
- It's not a case of blocking whomever they wish, but it seems to me that it is a case of blocking anyone whose behavior is actually disruptive, even if the admin can't point to an item in a numbered list called "Disruptive editing, type 3, subtype 6a". WhatamIdoing (talk) 21:48, 7 March 2026 (UTC)
This is an exaggeration, as no one requested anything close to that amount of precision. In fact, disruptive editing is already a part of the block policy, so pointing to WP:DISRUPTBLOCK is perfectly enough. Chaotic Enby (talk · contribs) 22:06, 7 March 2026 (UTC)
It's not a case of blocking whomever they wish, but it seems to me that it is a case of blocking anyone whose behavior is actually disruptive, even if the admin can't point to an item in a numbered list called "Disruptive editing, type 3, subtype 6a".
- Yes, and the problem is that some editors, particularly ones who have been blocked for behavior that is not productive and congenial, believe that if they behave in a way that most of us would describe as counterproductive and uncongenial, and the blocking admin says "per essay WP:NOTHERE" instead of writing "per policy WP:BLOCKP #3", then the admin has done something wrong and their block should be overturned on a technicality, even though NOTHERE and BLOCKP #3 describe the same behavior.
- What I think would reduce this kind of complaint, or make it quicker to deal with, is to have a sentence in this policy that says "Yes, admins are allowed to put 'per WP:ESSAY' in the block log, because we don't actually require them to name an 🌟Official™ Policy💛 when they're blocking someone". WhatamIdoing (talk) 22:48, 7 March 2026 (UTC)
- Of course we don't require them to name an official policy, and overturning a block on such a technicality would be absurd. However, we do want the block to be rooted in policy. No one here has said that you can't put an essay in the block log. An essay referencing and explaining a policy (including, but not limited to, those marked as "explanatory essay") can work just fine, as they explain to the blocked user which policy they broke. However, many essays are not tied to policy, and these are more problematic as they can often give contradictory instructions (e.g. WP:BLUESKY and WP:NOTBLUESKY), or just be one person's personal thoughts on the project, meaning you can't expect editors to actually follow all of them. Chaotic Enby (talk · contribs) 22:53, 7 March 2026 (UTC)
- Thing is, no administrator is going to block someone for writing "The sky is blue." and providing a citation to it. Or I guess for not providing a citation to it (I assume that's what WP:NOTBLUESKY is about, not having read more than the first line). They're going to block because the user's behavior is problematic, and they should be providing as clear and specific a reason for the block as possible. If an admin happens to be familiar with, say, the WP:Conspiracy theory accusations essay or the WP:Casting aspersions information page, and one happens to be a really good fit for the blockable behavior, I'd rather they pointed at that than at the underlying WP:No personal attacks policy that both are based on. There's plenty of essays that describe specific aspects of policy like that, and just because they happen to have an essay tag instead of a policy tag at the top doesn't make them wrong. —Cryptic 02:29, 8 March 2026 (UTC)
- I agree that just because they happen to have an essay tag instead of a policy tag at the top doesn't make them wrong. However, some inexperienced editors believe the opposite. This is not a new problem; there's a reason that the first misconception listed in Wikipedia:The difference between policies, guidelines, and essays is about blocking. WhatamIdoing (talk) 17:35, 9 March 2026 (UTC)
- Blocks are issued for disruption. Whether the page that describes the type of disruption is tagged as a policy, a guideline, an essay, or anything else matters not a jot. Nobody arguing "but that's a guideline" is doing so in good faith. Policies are supposed to be descriptive anyway, so if it's a commonly accepted block rationale, it's policy, tag or not. HJ Mitchell | Penny for your thoughts? 22:23, 9 March 2026 (UTC)
- I agree with you, but I think that we have some editors who genuinely believe that essays shouldn't be named as reasons for blocking people. They're wrong on the facts, but I think it's an honest error. WhatamIdoing (talk) 22:31, 9 March 2026 (UTC)
- The fundamental requirement of WP:ADMINACCOUNT is that an admin explain why they did something. Putting a link to a PAG in a block log comment is a convenient way to do that, but it's not a legally binding contract and it's not fatal if you link to the wrong shortcut. Anybody can always ask you later to provide additional justification. RoySmith (talk) 22:35, 9 March 2026 (UTC)
- @WhatamIdoing I can't locate that discussion. Did it get archived? Do you have a link? RoySmith (talk) 15:23, 8 March 2026 (UTC)
New types of blocks
I propose some new types of blocks depending on the type of disruption the user is causing:
- Pending changes blocks: The user's edits on all non-talk pages during the block will require review to be visible, like what happens when an unregistered or new user edits a page that is under pending changes protection.
- Upload blocks: The user is blocked only from uploading files. This block would be applied if, for example, a user repeatedly performs image vandalism or repeatedly uploads copyrighted files but otherwise edits constructively.
- File overwrite blocks: The user is blocked only from overwriting files. This block would be applied if, for example, a user repeatedly overwrites files with nonsense but otherwise edits constructively.
- Creation blocks: The user is blocked only from creating pages. This will not block them from creating talk pages unless specified. This block would be applied if, for example, a user keeps creating malicious pages but otherwise edits constructively.
- Move blocks: The user is blocked only from moving pages. This block would be applied if, for example, a user repeatedly performs page-move vandalism but otherwise edits constructively.
~2026-16410-72 (talk) 14:04, 15 March 2026 (UTC)
- @~2026-16410-72: Why? What situations have arisen that makes these necessary? In what way are the present methods unable to handle these situations? Before proposing a solution, you need to show that there is a problem. --Redrose64 🌹 (talk) 14:13, 15 March 2026 (UTC)
- I would note that all of these block types are already available. Primefac (talk) 14:22, 15 March 2026 (UTC)
RFC: Include LLM usage as a reason to block
Should the list of reasons to block be expanded to include "Persistent usage of large language models"?
We are seeing an increasing number of threads at WP:ANI where users create large amounts of LLM-generated content and are then blocked, requiring excessive clean-up. This has been escalating over the past year, and I believe it is only going to get worse. We are getting to the stage where we should treat LLM content with the same seriousness as copyright violations, and block even when a user's actions are in good faith, to avoid wasting the community's time on clean-up.
Adding the proposed text would directly change the blocking policy such that any administrator would be able to block on sight for LLM usage, and have a solid policy-backed reason for doing so. Ritchie333 (talk) (cont) 14:08, 22 March 2026 (UTC)
Survey (LLM usage as block reason)
- Yes, subject to refinement - we need some guidance on "persistent". I'd say it should be as basic as "X uses LLM; is warned; uses LLM again; gets a block" (i.e. one strike and you're out). We'd need a proper WP:WARN template for this. GiantSnowman 14:15, 22 March 2026 (UTC)
- Such templates already exist, see {{uw-ai1}}, {{uw-ai2}}, {{uw-ai3}} and {{uw-ai4}}. They use the normal 4-stage unified warning system (though, at the discretion of the warning editor, like all other templates, you can skip stages if it's appropriate). If we want only one warning the system might need to be reworked. MolecularPilot Talk 03:01, 23 March 2026 (UTC)
- Oh sorry, I just scrolled down and realised others have already brought up the unified warning templates! MolecularPilot Talk 03:03, 23 March 2026 (UTC)
- Such templates already exist, see {{uw-ai1}}, {{uw-ai2}}, {{uw-ai3}} and {{uw-ai4}}. They use the normal 4-stage unfired warning system (though, at discretion of the warning editor, like all other templates, you can skip stages if it's appropriate). If we want only one warning the system might need to be reworked. MolecularPilot Talk 03:01, 23 March 2026 (UTC)
- Yes but allow for a warning first; most people just aren't aware of the issues with LLMs or of our PAGs about them, and they stop using them when made aware. Strongly support mainspace indef for contravening a warning. Unblocks could either be conditional on promising not to use LLMs anymore, or limit contributions to Draft space via AfC and edit requests for a bit, depending on the context. Kowal2701 (talk, contribs) 14:19, 22 March 2026 (UTC)
- FYI, there's also talk of a watchlist notice at MediaWiki talk:Watchlist-messages#Major AI guideline change Kowal2701 (talk, contribs) 14:34, 22 March 2026 (UTC)
- Yes, but perhaps let's tweak the {{uw-ai}} series of warning templates, or introduce a new suite of user warnings for this purpose. Also, I think you wrote "usage of" twice on accident there, haha. MEN KISSING (she/they) T - C - Email me! 14:23, 22 March 2026 (UTC)
- Well spotted, d'uh. Ritchie333 (talk) (cont) 14:25, 22 March 2026 (UTC)
- I think we only need one warning template, which is 'stop or you'll be blocked'. GiantSnowman 14:27, 22 March 2026 (UTC)
- I was thinking of mirroring {{uw-copyright}}, specifically "Wikipedia takes large language model content seriously, and persistent violators will be blocked from editing" Ritchie333 (talk) (cont) 14:37, 22 March 2026 (UTC)
- Yes, good idea. GiantSnowman 14:38, 22 March 2026 (UTC)
- I created a proposed idea for the text of this at User:Athanelar/AI Templates. The current one there is based on the text of {{uw-copyright}}. I would suggest we also create one based on {{uw-copyright-new}} for warning new users or in cases where the AI usage is not as obvious or flagrant. Athanelar (talk) 14:54, 22 March 2026 (UTC)
- Amazing! Small detail, the bot icon might give the impression that the template was itself sent by a bot – should a version be made that crosses out the icon? Alternatively, we could go the way of the copyright template and add a magnifying glass, to get something like (File:WikiProject AI Cleanup.svg). Chaotic Enby (talk · contribs) 16:33, 22 March 2026 (UTC)
- I changed the icon as you suggested, and also added a warning.svg icon for extra clarity that it's a warning. Athanelar (talk) 16:54, 22 March 2026 (UTC)
- Just FYI, that robot+magnifying glass icon you used is very hard to see (on Athanelar's draft warning) in Vector 2022 dark mode; there should be a separate white version that gets swapped out in dark mode to preserve accessibility (alternatively, just add a light background to the icon, as in your comment, but a white icon would look better). OutsideNormality (talk) 21:23, 22 March 2026 (UTC)
- Amazing! Small detail, the bot icon might give the impression that the template was itself sent by a bot – should a version be made that crosses out the icon? Alternatively, we could go the way of the copyright template and add a magnifying glass, to get something like
- @Ritchie333; previous community tolerance of copyright blocks indicates that we have to give 5 serious warnings; {{uw-copyright-new}} and then 4 {{uw-copyright}}. This can be a mix of CP listing notifications or G12s too, but 5 seems to be the threshold before we can indef someone. Are you proposing a similar threshold; five separate instances of LLM usage? Or would setting a lower threshold for LLMs be preferable? To be clear, I 100% get the frustration with LLM spam cleanup, but I think if we're going to base things off copyright warnings we should make the two blockable after the same amount of warnings since they both take a ridiculous amount of time for experienced editors to clean up. Sennecaster (Chat) 16:41, 28 March 2026 (UTC); edited 16:43, 28 March 2026 (UTC)
- Yes, with the kind of warnings discussed above. Just look at ANI. Chaotic Enby (talk · contribs) 14:39, 22 March 2026 (UTC)
- Yes, as long as the user has been adequately warned and then reoffends. BugGhost 🦗👻 14:42, 22 March 2026 (UTC)
- Yes, with warning. I was actually just now thinking about potentially adding a line to NEWLLM to the effect of "Continued violations of this guideline after warning will be considered disruptive editing and may be subject to sanctions."
I also think we need to make more of an effort to inform people in advance that these things are not allowed. Editnotices on article creation should have big, bold text warning people about NEWLLM. If we manage to finally get a guideline passed about not using LLMs in talk pages, the same should go there. LLM use is near-ubiquitous in today's society, and our warnings against it need to be the same way. Athanelar (talk) 14:45, 22 March 2026 (UTC)
- Yes - one warning, then block if it continues (e.g. obvious use is denied via AI/LLM). This is far, far too disruptive, and a good 25% of ANI reports seem to involve some form of AI misuse. We've even had an autonomous AI agent create its own Wikipedia account and edit completely unsupervised by its creator, so we need solid policies on AI sooner rather than later. We're woefully behind the curve on AI; it's moving fast and we need to respond quickly. As far as I'm concerned, if it's obvious to another editor that you're using AI, you're using it wrong. On another note, I suggested including a warning to avoid using AI to appeal blocks directly within block templates, but I think it got lost/forgotten (as in, I've lost/forgotten it). I'll have to dig that out because I can't remember where the original discussion was... Blue Sonnet (talk) 14:45, 22 March 2026 (UTC)
- Comment: On mobile, so I can't type much, but I've been working on some calculations of how much time AI editors waste here. It's a work in progress, but the tl;dr is that ANI discussions of AI editors take about 10 volunteer hours per editor. I doubt most AI editors bring that much productivity, and if they did, an unblock appeal wastes less community time than a block discussion. And this doesn't even try to quantify editor time spent discussing and cleaning up after them. EducatedRedneck (talk) 14:52, 22 March 2026 (UTC)
- Yes. Yes. And again, yes. Subject only to readily allowing appeals by inexperienced users who promise not to do so again and who keep that promise. Narky Blert (talk) 14:54, 22 March 2026 (UTC)
- Yes, with warning. XtraJovial (talk • contribs) 14:59, 22 March 2026 (UTC)
- Yes, WITHOUT warning - Reliance on LLM should be regarded as an immediate demonstration of a lack of competence. The visual editor is already so easy to use I'd say it's gotten to a point it's almost too easy given how much petty disruption or vandalism we get. If someone is so genuinely clueless with it that they need an LLM to do it for them then clearly they're incompetent. Rambling Rambler (talk) 15:02, 22 March 2026 (UTC)
- The people here who are suggesting that we first warn are not doing so, I think, out of leniency for the AI users themselves, but rather out of an abundance of caution to avoid false positives (or at least the accusations thereof). Athanelar (talk) 15:09, 22 March 2026 (UTC)
- Without warnings would affect those who are attempting to edit in good faith but are unaware of Wikipedia policies. We need to balance any decision with WP:BITE. BugGhost 🦗👻 15:50, 22 March 2026 (UTC)
- @Athanelar and @Bugghost I'll leave one reply:
- The RfC here is specifically for persistent LLM usage. So it's not a case of someone using it once and you're marched out to face the firing squad. If someone is persistently using it then it's already beyond a "warning first" scenario, so suggesting they should get another warning after persistent usage is a WP:SUICIDEPACT situation in my opinion. At that point they've shown a failure under WP:CIR as far as I'm concerned and therefore we shouldn't let bureaucracy get in the way given we already have enough backlogs and cases of people who should've been indeffed just walking away to cause more grief because noticeboard reports get lost in the weeds. Rambling Rambler (talk) 16:39, 22 March 2026 (UTC)
- But how do we define 'persistent?' I'd say "continuing after a warning" would be a sensible way. Athanelar (talk) 16:49, 22 March 2026 (UTC)
- Persistent as far as I understand the term (and seems to be supported as the definition) is one of two meanings:
- 1: Someone continues to do so after being told to knock it off (so there's already a warning). Therefore they shouldn't get another warning so I've said without warning (I guess more specifically it could read without further warning).
- 2: Someone does something for a significantly long period. That in my view makes this just a specific example of a WP:CIR block, and we aren't exactly lacking in cases where we'll quickly move to an article space block because of good-faith disruption someone persistently causes (with or without warning). Rambling Rambler (talk) 16:56, 22 March 2026 (UTC)
- The word "persistent" is used numerous times in WP:WHYBLOCK. I think we can use the same definition of persistent that is employed throughout. If the language is vague, that might be deliberate to give Administrators discretion and avoid having people try to toe the line. EducatedRedneck (talk) 19:13, 22 March 2026 (UTC)
- Yes with warning. The ai slop is getting out of hand User:Bluethricecreamman (Talk·Contribs) 15:43, 22 March 2026 (UTC)
- I'm confused, what is this RfC asking? Is it asking to edit a description of common block reasons? That doesn't need an RfC. Is it asking to make LLM use grounds for blocking? It already is, as evidenced by the fact that people are being blocked for it. We don't need the instruction creep of a comprehensive list of possible reasons for blocks. HJ Mitchell | Penny for your thoughts? 15:48, 22 March 2026 (UTC)
- Judging by most of the votes (including two that have come in since my comment!), it sounds like people think they're voting to allow admins to block for LLM use. Admins can and do already do that, so why are we having an RfC about it? HJ Mitchell | Penny for your thoughts? 17:09, 22 March 2026 (UTC)
- Admins can block LLM-usage as part of grounds to block. This is about adding it as an explicit reason to block under policy and therefore more readily available for unilateral use rather than going through a pointless ANI where we wait for a dozen or so people to state the obvious. Rambling Rambler (talk) 17:19, 22 March 2026 (UTC)
- There already is an explicit reason to block. It's called disruptive editing. SuperPianoMan9167 (talk) 15:06, 23 March 2026 (UTC)
- I thought this was about agreeing on a norm/best practice, and enshrining it somewhere Kowal2701 (talk, contribs) 17:20, 22 March 2026 (UTC)
- Well that's two different answers, neither of which correspond to the opening question. How on earth is this RfC supposed to achieve a consensus? But @Rambling Rambler it already is; I've made such blocks myself. We don't need a comprehensive list of block reasons for admins to be able to block, and practice is policy anyway, with or without the instruction creep. HJ Mitchell | Penny for your thoughts? 18:34, 22 March 2026 (UTC)
- "How on earth is this RfC supposed to achieve a consensus?"
- It's just adding it as an explicit example of blocking to the policy. Easy to achieve consensus on that. Given that there is clearly confusion over it being a reason to block by itself I don't think there's any harm in making it explicit in policy. Rambling Rambler (talk) 18:38, 22 March 2026 (UTC)
- You can't get a meaningful consensus from unclear answers to an unclear question. This RfC needs more workshopping if it's to produce anything useful. I would dispute that there is any confusion in the first place, unless you can point to instances of somebody doing disruptive editing and not being blocked because an admin didn't feel they had a policy basis for it. HJ Mitchell | Penny for your thoughts? 18:50, 22 March 2026 (UTC)
- I partly agree, but if the only outcome of this is that admins are emboldened to block after one warning, then it's worth it. I assume the hesitation is often due to it being unclear re how much evidence is needed? Kowal2701 (talk, contribs) 19:03, 22 March 2026 (UTC)
- Can you explain what about
Should the list of reasons to block be expanded to include "Persistent usage of large language models?"
is unclear? I'm having trouble thinking of another interpretation other than adding AI usage to the list extant at WP:DBLOCK. - As for why, my comment above notes that there is great ambiguity in what degree of LLM use is permitted before blocking, and the 10 editor hours per discussion plus the proliferation of LLM misuse is a drain on community resources. I think having it listed as an explicit reason for blocking will let admins be more comfortable acting without requiring ten hours of editor discussion first. It doesn't force them to block a productive contributor, but makes them more likely to act unilaterally against an unambiguous case without a pro forma ANI thread. EducatedRedneck (talk) 19:22, 22 March 2026 (UTC)
- That won't change from this RfC, unless what you're suggesting is that we immediately block anyone as soon as they post anything that looks like it might have been near an LLM. There will still need to be discussions, and admins will still use their discretion. It's not clear because it doesn't specify whether we're just adding an already agreed item to a list (which doesn't need an RfC), or if we're changing the blocking policy to broaden the list of reasons for a block (which is unnecessary). Most supporters seem to be of the notion that admins can't already do this, despite the fact that it happens probably daily. Then, of course, there are the sub-discussions about warnings and the definition of "persistent". It's asking "would you prefer apples or oranges" and getting answers that range from "yes" to "potato". HJ Mitchell | Penny for your thoughts? 19:50, 22 March 2026 (UTC)
Most supporters seem to be of the notion that admins can't already do this, despite the fact that it happens probably daily
- Is this therefore not a reasonable reason to explicitly add it to the blocking list? Yes we don't want an exhaustive list, but if the argument is there's a common belief it isn't a reason to block then that demonstrates a need to explicitly say so. Rambling Rambler (talk) 20:26, 22 March 2026 (UTC)
if the argument is there's a common belief it isn't a reason to block
"Some editors don't understand existing PAGs" is an awful reason to amend PAGs, particularly when nobody is presenting any evidence of the truth of that statement. The blocking policy is not unclear. Blocks exist to prevent disruption. Persistent LLM-editing after receiving a warning is disruptive editing. voorts (talk/contributions) 21:36, 22 March 2026 (UTC)
- It's not some editors though. If this was five people across a year having issues then I'd agree it was heavy-handed. However ANI is now swamped every week with LLM reports that go on needlessly, and if you were to argue this RfC is evidence of confusion then it appears many of our most experienced editors don't believe LLM-usage is adequately presented as a reason to block in and of itself. Rambling Rambler (talk) 21:38, 22 March 2026 (UTC)
However ANI is now swamped every week with LLM reports that go on needlessly
Any examples of an admin not blocking such editors? We also have lots of other things at ANI. Are you suggesting we turn to a block-first policy for everything else that we consider disruptive editing? If not, how do those cases differ?
if you were to argue this RfC is evidence of confusion
Which editors in this discussion have said they don't believe that admins can already block for this? voorts (talk/contributions) 21:40, 22 March 2026 (UTC)
- No, not convinced (ec), nor persuaded by any comment above. There isn't a "list of reasons to block", just some examples. Disruption and violating policies and guidelines are included. This isn't a comment on whether people should be blocked for 'usage of LLM'. Admins already have that discretion. I just don't like a laundry list of bad things people could get up to, and especially want to avoid a list which could be wikilawyered because it doesn't include something. Specifying warnings is already comprehensively covered by WP:BEFOREBLOCK. Please put the proposal and support into the context of the existing policy. -- zzuuzz (talk) 15:53, 22 March 2026 (UTC)
- Yes, because we already have this list, which is a policy, and LLMs are a point of possible confusion for new users, who are the ones likely to be reading the blocking policy. The Moose 16:48, 22 March 2026 (UTC)
- Yes provided the OP has been suitably warned. AI/LLM is becoming more and more pervasive and common, whether we like it or not - therefore new editors may, in good faith, assume its usage is perfectly OK. It absolutely isn't, but we need to make sure we do not WP:BITE newcomers. Danners430 tweaks made 17:00, 22 March 2026 (UTC)
- Yes, with a warning. We need to take a stand. 331dot (talk) 17:37, 22 March 2026 (UTC)
- Comment: this should have been an RFCBEFORE (even if just a quick one). I boldly tried to close it as such [1], which was reverted [2], which is fine, because that was pretty bold. From reading the above I see people trying to workshop on the fly in an active RFC. What does "persistent" mean? Block with warning or without? Do we need a one warning template? (Adding one of my own) What about editors who deny clear LLM use? Previous LLM-related RFCs with no or truncated RFCBEFOREs have led to procedural opposition and/or required follow-up RFCs to resolve uncertainty about what was intended with the original proposal. I'm worried that is happening again. NicheSports (talk) 17:51, 22 March 2026 (UTC)
- Surprisingly, WP:RFCBEFORE doesn't talk about a pre-RfC workshopping phase, even though it is often used to refer to that process (which isn't required by policy, although the Wikipedia:Village pump (idea lab) can be helpful). In fact, it is about alternatives to RfCs that can be considered prior to starting one. In this case, as we're talking about a policy change, none of the alternatives listed there seem to apply. Chaotic Enby (talk · contribs) 18:19, 22 March 2026 (UTC)
- We really should rename that; rather than "RFCBEFORE" and "Before starting the process" it should be "RFCALT" and "ATRFC" and "Alternatives to starting the process" a la ATD/Alternatives to deletion. I've seen this trip people up more than once now; because RFCBEFORE evokes similarities to WP:BEFORE which is in fact a mandatory step before AfD. Athanelar (talk) 18:23, 22 March 2026 (UTC)
- Very good idea indeed! Yep, that might be where the confusion comes from. Chaotic Enby (talk · contribs) 18:24, 22 March 2026 (UTC)
- I'm not convinced this is the best resolution... to keep things on topic here, I responded more on your talk page and pinged Athanelar [3]. Lesson learned and I will not attempt such a close again; my apologies for doing so here. I do still believe this proposal should have been workshopped first. NicheSports (talk) 19:18, 22 March 2026 (UTC)
- Yes, with a warning, and by warning, I don't mean some ambiguous, mealy-mouthed warning like you'd give a five-year-old who keeps forgetting to hang up his coat. It should be something that makes it clear to any new editor that we don't want one bit of LLM writing. Maybe even explicitly not one fucking bit of LLM writing, though I don't imagine there will be consensus to include an f-bomb in a policy or guideline. Some of our warnings about conduct can easily come across as mild suggestions. CoffeeCrumbs (talk) 17:56, 22 March 2026 (UTC)
- If we're allowed to drop a single f-bomb in all of our P&Gs, you have my support. Chaotic Enby (talk · contribs) 18:19, 22 March 2026 (UTC)
- MPA rules it is then. Rambling Rambler (talk) 18:21, 22 March 2026 (UTC)
- Comment: Wikipedia:Village pump (policy) has been notified of this discussion. Chess enjoyer (talk) 18:27, 22 March 2026 (UTC)
- Should we WP:CENT it? Chaotic Enby (talk · contribs) 18:29, 22 March 2026 (UTC)
- Follow that link, @Chaotic Enby. Chess enjoyer (talk) 18:30, 22 March 2026 (UTC)
- Last checked it half an hour ago, neat to see it got added! Chaotic Enby (talk · contribs) 18:32, 22 March 2026 (UTC)
- WikiProject AI Cleanup as well. Chess enjoyer (talk) 18:36, 22 March 2026 (UTC)
- Yes - Cleaning up AI usage, even when used in good faith, takes a really long time. There's already a backlog and an addition like this would help keep that from expanding even faster than it already is. There should be a warning given beforehand, as some new editors may not know the reasons why LLMs are discouraged, but the proposed wording of
"persistent usage"
should cover that already. InfernoHues (talk) 18:32, 22 March 2026 (UTC)
- Yes I recently spent hours dragging someone to ANI to deal with their AI slop. Others had to clean up their mess; all that time could have been used to improve the project. I like the warning system we have now but I'm open to it being a one-time warning. If there's a lot of content I'm also fine with blocking them and allowing an appeal, as long as it's constructive towards informing a user of their errors. Dr vulpes (Talk) 18:40, 22 March 2026 (UTC)
- Yes, with a warning. Mfield (Oi!) 18:42, 22 March 2026 (UTC)
- Yes, with a single warning which may take any reasonable form, including concerns raised on (user) talk pages through discussion. This is overdue, and ANI is disproportionately LLM-related. Iseult Δx talk to me 18:46, 22 March 2026 (UTC)
- Unnecessary RFC??? - Recent RFCs have made it clear that the community considers many/most forms of LLM use to be disruptive. Persistent disruptive editing is already addressed by applying a preventative block while explaining what behavior changes are needed for an unblock. Accounts that are used only for disruptive editing can already be blocked without warning to protect the project. So I guess I don't see what the point of this RFC is. -- LWG talk (VOPOV) 18:57, 22 March 2026 (UTC)
- If the goal is just to establish best practices for admins who see rapid/persistent LLM-use in the wild, I say consider the rate and scale of the disruption. Warn if feasible, block if necessary to prevent ongoing disruption. In either case remember that most disruptive LLM users are good faith, so any sanction should be accompanied by a polite but firm explanation of our expectations around LLM use and an assurance that they aren't being punished, we are just trying to prevent more harm while they adjust to our community expectations. -- LWG talk (VOPOV) 19:04, 22 March 2026 (UTC)
- Yes with a single warning. New users might not know AI slop is unacceptable on Wikipedia. If they keep going then they’re crossing such an obvious, non-subjective line (unlike other issues like civility or POV) there really isn’t a good faith interpretation or excuse. Dronebogus (talk) 19:04, 22 March 2026 (UTC)
- Yes, but: there are still Wikipedia templates that ask users to translate articles via LLM. They appear at the top of the page in the app, unfolded in large font, and contain rules that are completely different to the current guidelines on AI use. ExtantRotations (talk) 19:05, 22 March 2026 (UTC)
- uhhhhhhhhh what? where? to be clear, I believe you, but this is news to me and I'd guess a lot of people here Gnomingstuff (talk) 22:19, 23 March 2026 (UTC)
- See WP:MACHINETRANSLATION and Wikipedia:LLM-assisted translation. This latter is referenced in the recently passed WP:NEWLLM: "The use of LLMs to translate articles from another language's Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation." voorts (talk/contributions) 22:22, 23 March 2026 (UTC)
- I mean the templates that appear at the top of the app, I'm aware of WP:LLMTRANSLATE Gnomingstuff (talk) 23:19, 23 March 2026 (UTC)
- No. The funny thing is that we're inevitably going to end up using "AI" to deal with the increasing flood of generative LLM slop. Carlstak (talk) 22:31, 23 March 2026 (UTC)
- Yes, with wording change The wording as is suggests that any LLM usage is prohibited which is not the case. Something like "Persistent usage of large language models against guidelines" would resolve that issue. Jumpytoo Talk 19:09, 22 March 2026 (UTC)
- Comment At risk of being redundant, it feels like the raised eyebrows regarding this RfC come from the fact that admins already have the ability to block, or are already blocking for, LLM usage because they are already able to block for disruptive editing. I guess the question I have is: if blocking outright for reasons like disruptive editing is already an alternative, why do AI slop cases get taken to AN/I? Is it currently ambiguous in policy if or how LLM editing qualifies as disruptive? yukko~hey 20:14, 22 March 2026 (UTC)
- @Tanakayuyuko how else would someone bring disruptive LLM use to admin attention? HJ Mitchell | Penny for your thoughts? 20:26, 22 March 2026 (UTC)
- Right, yeah, that was silly of me. yukko~hey 20:41, 22 March 2026 (UTC)
- Yes, although I will note that technically WP:DE already covers this. But if we're dotting i's and crossing t's, then yes, by all means. - The Bushranger One ping only 20:15, 22 March 2026 (UTC)
- Yes because even if it's covered elsewhere it needs to be very clear for new users who don't know about the mess LLMs cause on Wikipedia. Lijil (talk) 10:51, 1 April 2026 (UTC)
- Yes with warning. Give 'em some rope before they hit the one strike and are out.🚂ThatTrainGuy1945 Peep peep! 20:33, 22 March 2026 (UTC)
- Yes with warning. It’s important to distinguish between indiscriminate LLM use and assisted drafting under human judgment and proper sourcing. A warning-first approach helps avoid false positives while still allowing action in persistent disruptive cases.-- Carigval.97 (talk) 20:38, 22 March 2026 (UTC)
- Are you editing on mobile? How did you get a curly apostrophe? 🚂ThatTrainGuy1945 Peep peep! 21:10, 22 March 2026 (UTC)
- @ThatTrainGuy1945, some devices, notably Apple devices, use curly or "smart" quotes by default. – Epicgenius (talk) 21:40, 22 March 2026 (UTC)
- On macOS, you can configure it to use straight quotes in the keyboard settings, but the default is curly or smart quotes. --Gurkubondinn (talk) 22:45, 22 March 2026 (UTC)
- Asking the real questions here. ―Maltazarian (talkinvestigate) 21:43, 22 March 2026 (UTC)
- They are editing on mobile, per the edit summary. Some devices, such as Apple devices, make it so apostrophes are curly. Don’t know why, it’s often annoying. 1brianm7 (talk) 23:28, 22 March 2026 (UTC)
- I don't know about iPhones, but in an iPad you can go Settings → General → Keyboard → Smart Punctuation, and disable it. --Redrose64 🌹 (talk) 08:18, 23 March 2026 (UTC)
- Neat, that works on iOS as well! Thanks Redrose64. Otherwise you have to long-press the
"button on the touch keyboard and select straight quotes. --Gurkubondinn (talk) 11:49, 23 March 2026 (UTC) - Sweet. Now I don't have to hold the key down anytime I want to italicize or bold. Thanks! 1brianm7 (talk) 23:25, 23 March 2026 (UTC)
- Yes, with warning. I understand, and agree, that LLMs shouldn't be used to generate article text. Nonetheless, in many cases, we still give other types of unconstructive users (such as vandals and spammers) warning before we block. We need the blocked users to know that they're being blocked for disruption related to persistent LLM usage, not for using LLM one time (and even then, we should really be AGFing that someone just doesn't know about the LLM guidelines). – Epicgenius (talk) 20:39, 22 March 2026 (UTC)
Yes, after warning,
although I honestly think this would be more of a clarification than a wholly new rule. We just passed a guideline telling people not to use LLMs, and persistently editing in blatant disregard for consensus after being warned is already disruptive editing worthy of a block in its own right. That being said, I see no issue with making it extra clear that this is a form of disruptive editing that can quickly lead to a block. ―Maltazarian (talkinvestigate) 21:11, 22 March 2026 (UTC)
- No. Persistent use of an LLM after being warned not to do so is already disruptive editing and we already block editors for it. This should be maintained at WP:DISRUPTSIGNS, rather than this high-level policy. voorts (talk/contributions) 21:30, 22 March 2026 (UTC)
- Further comment RE editors arguing that we need to do this so that admins can block disruptive editors with/without X number of warnings: WE ALREADY CAN BLOCK THOSE EDITORS WITH NO WARNINGS TO PREVENT DISRUPTION. Acting as if we can't and then codifying this sets a really dumb precedent. Ironically, editors who are blocked going forward for a reason not listed in the blocking policy will now try to argue "it isn't listed in the blocking policy, so you can't block me for that", which is never how blocking has worked. I recognize the numerical imbalance in this discussion, but there's literally no policy-based argument being presented on the other side other than gut fear of LLMs and a complete misunderstanding of how blocking works. We shouldn't be making changes to PAGs based on vibes. voorts (talk/contributions) 12:50, 23 March 2026 (UTC)
- I couldn't agree more. There is no consideration in this RfC of how the policy is already structured or worded. It's based on a false assumption that the blocking policy contains a "list of reasons to block". There is no such list; instead blocking revolves around the broad categories of damage, disruption, and protection. There are some common examples provided. People in this RfC are voting to support a thing which is already allowed by the policy because they don't like LLM, not because this proposal will improve the blocking policy. It's clear there is already a consensus to block 'for LLM', and it's clear that's something that already happens, under existing policy. Any closer will need to consider that no real policy change is being suggested. Admittedly, LLM is something we could already simply add as a common reason for blocking, even though that would be a bit of a stretch. It's preferable to bending the whole blocking policy out of shape due to a bad RfC. -- zzuuzz (talk) 13:52, 23 March 2026 (UTC)
- You do have a point in the RfC is worded in a less than ideal way. I interpreted it as just being a thing to add to examples of blockable offense and/or disruptive editing rather than the creation of an entirely new PAG, which I think is fine because the examples can be anything and so should simply be based on how useful it is to have something as an example. I don't even see how a consensus for anything other than that could even be obtained from an RfC without an explicit wording of the new PAG. The more I read this discussion the more I'm starting to think people don't share an idea of what is being discussed. ―Maltazarian (talkinvestigate) 14:04, 23 March 2026 (UTC)
- Yes. I do not believe that an explicit mention of a "warning" needs to be added. The adjective "persistent" already covers letting people off for a single honest mistake (which may already be too lenient...), and admins are generally sensible enough not to throw interdictions about carelessly. Being more explicit that persistent LLM use is disruptive, and having a specific line of policy to point to whenever the need arises, is a good thing. Stepwise Continuous Dysfunction (talk) 22:14, 22 March 2026 (UTC)
- Comment: I've already !voted on this, but I'd like to give my thoughts regarding some arguments that the exact proposal here is misguided, even if the sentiment it encapsulates has WP:SNOW levels of support. To a certain extent, yes, blocking users who abuse LLMs after being warned is something that already happens in practice, and it is technically already part of the rules with our shiny new WP:NEWLLM. That is all the more reason to list it as a common reason why a user might be blocked. Our P&Gs are not up to speed with how we actually deal with LLM usage in practice, and this is a step towards rectifying that. As well, there was recently some concern raised at ANI that although we now have a guideline prohibiting most LLM content, we do not yet specify how it should be enforced. That's where this RfC came from: we're building consensus on how to enforce WP:NEWLLM. Adding LLM usage to the list of reasons to block is a fine way to document the consensus. Yes, LLM usage is a kind of disruptive editing. It's a big enough problem to deserve its own section. MEN KISSING (she/they) T - C - Email me! 22:18, 22 March 2026 (UTC)
- Yes, gives admins a policy-backed reason to enact blocks that protect the encyclopedia. The current situation is untenable and lets too many slip through the cracks; this should give admins solid ground to act on. --Gurkubondinn (talk) 22:41, 22 March 2026 (UTC)
- Yes as an increasingly common form of disruptive editing which deserves to be separated from "regular" DE. HurricaneZetaC 23:24, 22 March 2026 (UTC)
- Persistent LLM use is disruptive and disruptive editing is sufficient reason to block. What does separating it out on this one particular policy page functionally accomplish? voorts (talk/contributions) 23:46, 22 March 2026 (UTC)
- Yes, after warning: LLM walls of slop are taking up too much community time on noticeboards. WP:CIR is a big issue here. TarnishedPathtalk 23:39, 22 March 2026 (UTC)
- @TarnishedPath Did you mean to !vote on EEng's proposal or Ritchie's proposal? MEN KISSING (she/they) T - C - Email me! 23:46, 22 March 2026 (UTC)
- @MEN KISSING on Richie's. Sorry, I just scolled to the bottom. :) I'll move it now. TarnishedPathtalk 23:50, 22 March 2026 (UTC)
- How would this proposal reduce LLM slop being posted on noticeboards? LLM-generated posts are already immediately closed, hatted, or reverted, after which the editor is warned and then blocked if warnings are not heeded. voorts (talk/contributions) 00:07, 23 March 2026 (UTC)
- Yes, after one warning. LLM use is a serious and growing problem requiring time-consuming cleanup, but we are going to get newbies and even editors with a little experience who don't realise we don't allow it or don't realise the tool they used qualifies, or underestimate how much they needed to check the output. ("Persistent" in the block reason would in effect mean "after warning". That should be made clear in guidance pages; otherwise editors are going to think it means a week or 100 edits or something.) Yngvadottir (talk) 00:53, 23 March 2026 (UTC)
- Yes, with whether to give a warning dependent on administrator discretion (based on the scale and speed of AI usage, the intent and so on.) The amount AI can produce and the cleanup required means that we need to give administrators a free hand to react rapidly as necessary; they have the judgement to determine if a warning is needed. I would expect that usually a warning would be given first assuming no other policies were violated, the contributions were otherwise in good faith, and the speed with which things were added was not egregious; but like anything else it's a spectrum - and assuming it's someone's first block, they can probably be let back in relatively quickly on appeal if they understand they screwed up and promise not to do it again. --Aquillion (talk) 01:17, 23 March 2026 (UTC)
- Oppose as written for a few reasons, though I do support Athanelar's proposed addition to NEWLLM above. Admins already have the discretion to block for LLM abuse, and are using it; in cases where they aren't (the editor using the LLM is highly experienced,[4] the evidence is not clear-cut enough, the editor is using an LLM but constructively), the admin would likely have any block they made under this overturned (by community consensus or via another admin action) anyways. Translation via LLM is specifically condoned via RFC, as is using an LLM to grammar check; watching MFD, LLM-generated userspace essays or userpages are already tolerated. Looking at the G15 creation discussions, the ability of editors to create LLM-generated userpages without getting them G15-ed was something several experienced editors thought about when crafting the criterion/expanding it. The proposed wording here places a blanket prohibition on all of that, whether it means to or not. Alternatively, let's think of it this way: even our COI and PAID policies do not prohibit editing with a conflict of interest; such editing is strongly discouraged in mainspace, and banned without disclosure. That's for very practical reasons: banning all PAID/COI editing just encourages people to not disclose. Banning all LLM use, which, again, is what this effectively does, will also encourage people to not disclose. Similarly, as many a newbie vandal/anti-COI editor has discovered... the community doesn't like it when you just blanket revert/ask for sanctions/insult somebody for breaking a rule, when their actual edits are good. 
When the rare person comes along who can use an LLM constructively to copyedit their British English writing to American English, even if imperfectly, or create an outline of an article in their userspace, or create citation templates, or code a script, just like when the rare article subject comes along who is just removing unsourced information/updating their picture, the editor reverting them discovers that doesn't go so well.[5] This proposed idea doesn't distinguish between that at all, and is only going to set our editor base up for failure when it comes to enforcing this rule. Telling some 17 year old kid with autism, just getting into Wikipedia editing, "Hey, editors are not allowed to do Y action", when, in fact, they are allowed to do Y action, even if only unofficially, and Y action is hard to accurately detect & the other editor can always lie to get out of trouble ("I didn't use AI!" "Maybe somebody else got onto my computer and submitted the article for me!"[6]), is just going to end in tears. And, like I've been telling people in a few different places, stuff like this isn't actually going to make cleaning up LLM-generated articles easier - looking at CCI, where we already very explicitly prohibit copying material from outside sources, actually trying to remove material can be like pulling teeth. It inherently takes a very long time if all goes well, but when it doesn't it gets you yelled at by the general public and other editors, admins will roll back the material you remove into articles if they feel like it and you're new, and other editors will fight tooth and nail to stop you using G5 because it "hurts the reader" -- unless they think the material was AI generated, in which case, yeah, they'll !vote delete at an AFD without even checking the content to see that you rewrote the entire article from scratch. :/ Yeah. 
Sorry, but this ain't gonna help cleanup efforts; those of you who think it will need to, seriously, engage with cleanup efforts in non-AI areas of the site, because you guys have it a lot easier than you think you do. More social acceptance of TNT-based AFD noms, a culture of accepting presumptive deletion, adding a giant great note in WP:PRESERVE re:LLM generated content, this would help. If I could wave a magic wand, I'd create an entire AFD TNT deletion system, where articles were nominated purely on quality grounds, and, if issues with content/source fraud/bias/LLM generated text were severe enough, and nobody heymanned the article, then goodbye article. (We have a version of this for dealing with copyright problems - it's called WP:CPN and it's wonderful). GreenLipstickLesbian💌🧸 04:02, 23 March 2026 (UTC)
Telling some 17 year old kid with autism
sorry, what does autism have to do with this @GreenLipstickLesbian? ltbdl (free) 07:31, 23 March 2026 (UTC)
- I had some thoughts on what GLL had to say here, there's some good points to respond to. But the "17 year old kid with autism" thing needs to be clarified first. MEN KISSING (she/they) T - C - Email me! 07:43, 23 March 2026 (UTC)
- "will also encourage people to not disclose" - LLM usage is easily detectable. "copyedit their British English writing to American English" - 99% of these conversions can be done by using Notepad's find and replace; no LLM is necessary for this task. "create citation templates, or code a script" - why do you think these are good use cases for LLMs? Firstly, LLMs suck at Wikipedia templates' syntax; secondly, I, for one, will oppose any vibecoding on Wikipedia and will uninstall any user script if I find out that it was written by an LLM, and I don't think people supporting this proposal have a different attitude. "the other editor can always lie to get out of trouble" - such liars are quite quickly indeffed on ANI.
By the way, as a person with autism, I find the idea that autistic people are more likely to break the rules and then lie about it to be insulting. sapphaline (talk) 07:56, 23 March 2026 (UTC)
- @Sapphaline Separate issues - people who lie about using LLM, and people who try to enforce the policy (hence "other editor can always lie to get out of trouble"). I don't think I made any assumption about people with autism using LLMs? GreenLipstickLesbian💌🧸 08:41, 23 March 2026 (UTC)
- I beg you (and everybody else) to stop weaponising a hypothetical editor with autism as a rhetorical device against AI regulation. This is the millionth time I've seen this point made by now and it is just as nonsensical as the first time. I think you're probably severely underestimating how many regular, productive editors (including ones participating in this very discussion) are autistic or otherwise neurodivergent. Believe it or not, Wikipedia editing as a hobby is a magnet for that sort of thing.
Hey, editors are not allowed to do Y action", when, in fact, they are allowed to do Y action, even if only unofficially,
- Anyway, per WP:IAR, that applies to every rule. What effect does this have on your rhetorical 17 year old with autism? Athanelar (talk) 08:16, 23 March 2026 (UTC)
- I'm also on the spectrum.
- @GreenLipstickLesbian, I do trust that you wouldn't have meant anything nasty with the "17 year old with autism" remark, but I also don't see the purpose of it in your comment, so please clarify. MEN KISSING (she/they) T - C - Email me! 08:24, 23 March 2026 (UTC)
- @MEN KISSING The purpose of it was that I am an autistic person who first got into editing Wikipedia on this account at age 17, and I found a lot of these "you're meant to follow these rules, but also don't" very confusing and resist all efforts to add more of them. It took me until I was an adult to gain enough real life experience to go "Oh, I'm allowed to break that one in this context?", and, even then, it's still incredibly hard. GreenLipstickLesbian💌🧸 08:28, 23 March 2026 (UTC)
- Thank you for clarifying. It wasn't clear it was meant to be a sort of personal anecdote, and I think it read differently to a few other editors. MEN KISSING (she/they) T - C - Email me! 08:32, 23 March 2026 (UTC)
- I can see that. Am used to it -- autism is literally a disorder characterized by difficulties in communication, after all. The only part I'm actually uncomfortable with is @Athanelar's assumption that I was weaponizing my own experiences, and that, when describing how those issues impacted me, I could not possibly be productive, but it's also nice to see their honest opinion of how my brain works without the social burden of trying to form their criticism of me in a polite way. GreenLipstickLesbian💌🧸 08:38, 23 March 2026 (UTC)
- Your experiences are your own, and do not (as you well know) reflect how other people with autism interact with the world. If you want to say "this rule would have been unintuitive to me at one point" then just say that, don't couch your purely anecdotal argument behind a hypothetical autistic teenager when said teenager is by no means guaranteed to experience the world the same way you did; I am tired of people using hypothetical neurodivergence as a rhetorical device in discussions around the regulation of AI here on Wikipedia, and my feelings on that stand even if the person doing so is themselves neurodivergent. That is no criticism of you, nor of how your brain works, it's a criticism of the way you presented your argument. Athanelar (talk) 08:45, 23 March 2026 (UTC)
- @Athanelar Again, it was not meant to be a hypothetical, and I didn't state it as such. I also don't believe you that
I think you're probably severely underestimating how many regular, productive editors (including ones participating in this very discussion) are autistic
, when brought up in direct opposition to what you believed to be a hypothetical example, wasn't meant to be a criticism of me -- even if you didn't know you were criticising me. The converse of those adjectives is "abnormal", "non-productive" -- and, again, it is very telling that you used those base adjectives as the opposite of what you perceived to be a hypothetical example. GreenLipstickLesbian💌🧸 09:00, 23 March 2026 (UTC)
- By "regular" I mean "editing regularly" in the temporal sense not "normal of nature," and by "productive" I mean in the sense that they engage with Wikipedia beneficially without clashing against its systems, not that the teenager in your (what I perceived to be a) hypothetical (and thereby, you) was doomed to not be productive. Athanelar (talk) 10:28, 23 March 2026 (UTC)
- Oh, thanks for explaining that. I was really wondering where this was coming from, because I didn't interpret
regular
as being meant in any other way than in the temporal sense. And fwiw, I agree with you on people using hypothetical neurodivergence as a rhetorical device in these discussions. --Gurkubondinn (talk) 10:46, 23 March 2026 (UTC)
- Regarding the autism thing, there's at least one study suggesting that AI detection software flags writing by neurodivergent people disproportionately.
- Personally I'm a little skeptical of this for Wikipedia's purposes given that... well... this is Wikipedia, home of 20+ years and billions of edits' worth of writing by autistic people, and most of those don't sound like AI. But the study is out there and gets cited a lot. Gnomingstuff (talk) 16:16, 24 March 2026 (UTC)
- And I understand that frustration, but if @Athanelar's going to assume that somebody bringing up how the wording of a guideline is potentially going to negatively impact those with a neurotype is using their disability as a shield, or "weaponising a hypothetical editor with autism as a rhetorical device against AI regulation", when all I'm doing is arguing against the exact wording of a proposal, then there's really not much I can do about that assumption of bad faith. I'm not arguing against regulating AI, I'm arguing against this exact rule because I think it will be confusing to enforce, especially for people with my neurotype, because it's not honest about what we actually disallow as a community, not because I believe people should use AI. Which, again, from a literal reading - how all policies should be read -- the proposed wording gives admins blanket permission to block for using an LLM, not using an LLM in contravention of NEWLLM, and conflicts with Wikipedia:LLM-assisted translation. Again, why, in heaven's name, are you expecting any editors, even those who are better at social situations than myself, to understand that yes, this page says they can report people merely for the act of using an LLM, and expect them to be blocked, when other policy pages explicitly allow the use of LLMs in certain contexts? Because it's obviously clear to you that this proposed change doesn't contradict that, based on, AFAICT, IAR, but it's not at all clear to me. This exact wording creates a contradiction to two rules, and while some people may have the social know-how to know when they're meant to apply IAR, it's very bloody unfair for you to expect all editors to have that skill. GreenLipstickLesbian💌🧸 18:28, 23 March 2026 (UTC)
Which, again, from a literal reading - how all policies should be read
Um, no. SuperPianoMan9167 (talk) 18:42, 23 March 2026 (UTC)
- Points back up to previous statements: that's inaccessible.[7]
- And, if you'll pardon my crassness, baking IAR into policies meant to stop good faith disruption is a bad idea. People who use LLMs to expand stubs genuinely believe that their work is improving Wikipedia - so obviously, they can ignore the rule, right? When you start writing the rules with an expectation that people will regularly IAR around them, you can't be surprised when that creates more headaches and timesinks and noticeboards, not less. GreenLipstickLesbian💌🧸 18:56, 23 March 2026 (UTC)
- This makes sense. SuperPianoMan9167 (talk) 18:58, 23 March 2026 (UTC)
- WP:IARUNCOMMON is directly relevant here. Any policy that requires IAR for more than occasional exceptions is a bad policy. Thryduulf (talk) 13:59, 28 March 2026 (UTC)
- I wasn't commenting on that, GreenLipstickLesbian. I just didn't understand where the
... converse of those adjectives are ...
part was coming from, because I didn't read "regular"
as anything other than the temporal meaning. But Athanelar's comment explained that for me. I've spent hours reading PAGs to understand them, still forget specifics and have to go back and re-read them and try to interpret them in some cases, and I have very little interest in arguing about policies or voice (much) opinions about proposed changes. There's far more clever people than me around here, so I just trust the project to do what it has always done. --Gurkubondinn (talk) 10:02, 24 March 2026 (UTC)
- Support, with a single warning. There are increasingly more threads on ANI about LLM misuse (as I write this there are 4 such threads, judging by the TOC), so clearly the process of dealing with it needs to be standardized. sapphaline (talk) 07:36, 23 March 2026 (UTC)
- It is standardized. Admins block for disruptive editing. Does there really need to be a mention of LLM misuse in the blocking policy for admins to block for LLM misuse? SuperPianoMan9167 (talk) 15:13, 23 March 2026 (UTC)
- How would this proposal decrease ANI reports? voorts (talk/contributions) 15:26, 23 March 2026 (UTC)
- Standardised process to deal with such editors = no discussion required on how to deal with them, no discussion required = no ANI report required. sapphaline (talk) 21:27, 23 March 2026 (UTC)
- No discussion does not imply that no ANI report is required. Say that you warn an editor against LLM use, and they continue. You are not an admin, so you cannot block them. Where would you report that LLM use after your warning? voorts (talk/contributions) 22:08, 23 March 2026 (UTC)
- well, there would be less discussion and more open-and-shut cases ~2026-18223-76 (talk) 10:18, 24 March 2026 (UTC)
- Why is that the case? voorts (talk/contributions) 19:39, 26 March 2026 (UTC)
- Ideally, to a new board or some other forum similar to AIV that can handle open and shut cases of undisclosed LLM use. Kingsmasher678 (talk) 19:56, 26 March 2026 (UTC)
- Why? What evidence do you have that ANI doesn't work or that this will reduce discussion? voorts (talk/contributions) 20:00, 26 March 2026 (UTC)
- ANI does; it just seems to get a whole lot of the reports. Implement a one- or two-strike warning, then report to AIV or similar. I don't see why we need to have much discussion around this at all, and why it needs to get to a public discussion noticeboard pretty much ever, unless it is subtle prohibited use. Kingsmasher678 (talk) 20:19, 26 March 2026 (UTC)
- We don't need to have discussion about it. Most of the comments I've seen in these ANI posts have been completely pointless. Unless there's something more to a case, editors need to chill out with commenting on ANI threads where the outcome should be obvious. Having five people post "you should block" this person isn't useful. Moving things to another noticeboard won't stop that. voorts (talk/contributions) 20:40, 26 March 2026 (UTC)
- Yes-- along with the recent clarification to the LLM use rules, I think this will help cut to the chase on obvious cases. Even if they could technically be banned for it anyway, there are a lot of lengthy threads debating what to do with specific editors who were caught, warned, continued, caught again, and are now either promising to stop or ruleslawyering demanding hard proof, wasting everyone's time. There's enough murkiness around this policy that I think it is being applied inconsistently, and not every admin feels confident to block and move on. I think it's useful to have a clear community consensus about how we should handle these. As for a warning... I don't personally think it needs to be mandatory. I would leave it up to the judgement of the blocker, depending on how quickly they are adding nonsense to pages-- at some point it borders on becoming vandalism. But if it's not urgent, probably better to warn first. /ˌtiːoʊseɪˈæf.dʒə/ (talk) 10:16, 23 March 2026 (UTC)
- Comment-- just wanted to add, there's several people saying that we don't understand this is already allowed/technically doesn't change anything-- I think especially in a collaborative environment like this, getting explicit consensus can be useful even if it doesn't technically change the "rules". It's good to have a clear idea of how the community wants this handled. It's probably true that this RFC isn't crystal clear about exactly what changes it's proposing, since there seem to be several interpretations, so I'll clarify that I am supporting something along the lines of "consensus that adding AI to articles is inherently a serious disruption, worthy of blocking without needing discussion or multiple warnings." There are levels of disruptive editing, this one is pretty high on the list. /ˌtiːoʊseɪˈæf.dʒə/ (talk) 07:20, 24 March 2026 (UTC)
there are a lot of lengthy threads debating what to do with specific editors who were caught, warned, continued, caught again, and are now either promising to stop or ruleslawyering demanding hard proof, wasting everyones time.
Why would this proposal reduce those discussions? Random editors at ANI won't suddenly stop loving the sound of their own voice and stop needlessly stating the obvious before an admin gets around to blocking. voorts (talk/contributions) 20:05, 26 March 2026 (UTC)
- But I think that's at least partly due to the exact lack of clarity here-- people don't expect that admins will block just for AI (somewhat justified, from what I see some reports hang around for ages or get archived with no action despite very obvious evidence), so they want to be helpful. It's a bit unfair to say people are just commenting because they like the sound of their own voice, when commenting is the only action available to non-admins if we think something should be done. (Which isn't blaming admins, of course, some reports just aren't actionable and you can't be everywhere)
- Hopefully, if we establish that AI misuse does consistently result in a block (indef, not 31 hours), then reporters can just focus on providing evidence. Or evaluating the evidence of other people to confirm "yes, this is AI," rather than "is this blockworthy"
- I also like the suggestion of a special noticeboard for this, or a rework of the AI noticeboard, to filter these out from the noise at ANI. /ˌtiːoʊseɪˈæf.dʒə/ (talk) 05:10, 27 March 2026 (UTC)
- You've made a total of 668 edits. 7 of them are at ANI. How could you possibly know if there's a problem with ANI's functioning or the current block policy? voorts (talk/contributions) 12:11, 27 March 2026 (UTC)
Hopefully, if we establish that AI misuse does consistently result in a block (indef, not 31 hours)
Why should admins not have discretion on this issue? If admins are required to indef after one warning, I personally simply won't be blocking editors for LLM misuse anymore. voorts (talk/contributions) 12:13, 27 March 2026 (UTC)
Or evaluating the evidence of other people to confirm "yes, this is AI," rather than "is this blockworthy"
LLM misuse is already blockworthy. voorts (talk/contributions) 12:15, 27 March 2026 (UTC)
- Yes, we already have 4 warning level templates, and AI-generated content being added to articles is becoming a greater problem as we speak, so I fully support this. As for whether or not this would hurt autistic editors, I personally don't think so. As an editor with autism myself, I've never had anyone mistake any of my content edits for AI. Maybe it's different for other people, I know that everybody with it has a different experience, but I really don't think this will hurt any valid contributions from these editors. I would also not be opposed to a filter on recent changes that can detect possible AI use (if that is even possible). CabinetCavers----DEPOSIT OPINION, [valued customer] 12:04, 23 March 2026 (UTC)
- Filters already exist. SuperPianoMan9167 (talk) 12:30, 23 March 2026 (UTC)
- Yay! I did not know that, thank you! CabinetCavers----DEPOSIT OPINION, [valued customer] 14:07, 23 March 2026 (UTC)
a filter on recent changes
Here you go. It may also be worthwhile including edits with the tag "Edit Check (paste) shown" as it seems like they are working on an LLM-specific edit check (T420258). OutsideNormality (talk) 03:43, 24 March 2026 (UTC)
- Yes, with NO warning. the Doug hole (a crew 4 life) 12:55, 23 March 2026 (UTC)
- Yes, after one warning Rolluik (talk) 13:09, 23 March 2026 (UTC)
- Yes, after warning, such that reports can be actioned upon quickly at WP:AIV without the need for a trip to WP:ANI where the editor naturally dissembles and sealions, aided by AI. ~~ AirshipJungleman29 (talk) 13:52, 23 March 2026 (UTC)
Yes, preferably with a warning in order to confirm the "persistent"
nature of the AI usage. I'm sympathetic toward the argument that existing disruptive-editing policies already cover this, but I think placing an explicit target on disruptive LLM usage will enable admins to deal with such problems more efficiently. ModernDayTrilobite (talk • contribs) 14:51, 23 March 2026 (UTC)
but I think placing an explicit target on disruptive LLM usage will enable admins to deal with such problems more efficiently.
We already have targeted it at WP:DISRUPTSIGNS. How would this functionally change anything? voorts (talk/contributions) 15:06, 23 March 2026 (UTC)
- Huh—that's on me for not explicitly checking WP:DE in advance of my !vote, I guess. The section in WP:DISRUPTSIGNS seems to already cover what I'd most wanted out of this proposal, so with that in mind I'm switching to neutral on this one. ModernDayTrilobite (talk • contribs) 17:59, 23 March 2026 (UTC)
- The section in WP:DISRUPTSIGNS was created during this RfC, based on the apparent consensus, actually. MEN KISSING (she/they) T - C - Email me! 21:18, 25 March 2026 (UTC)
- Huh—that's on me for not explicitly checking WP:DE in advance of my !vote, I guess. The section in WP:DISRUPTSIGNS seems to already cover what I'd most wanted out of this proposal, so with that in mind I'm switching to neutral on this one. ModernDayTrilobite (talk • contribs) 17:59, 23 March 2026 (UTC)
- No, because this is pointless instruction creep. Admins can and do block for LLM misuse without warnings. As others have said, the blocking policy is not a list of things admins can block for. Any disruptive editing is ground for a block. SuperPianoMan9167 (talk) 14:59, 23 March 2026 (UTC)
- If this RfC is merely proposing to add "persistent misuse of LLMs" to the blocking policy as an example of disruptive editing, I would support. But most supporters above seem to be !voting on whether admins can block for LLM misuse at all, which is needlessly bureaucratic. SuperPianoMan9167 (talk) 15:03, 23 March 2026 (UTC)
- @SuperPianoMan9167 It's not even proposing adding "persistent misuse of LLMs", it's saying "persistent use". GreenLipstickLesbian💌🧸 18:29, 23 March 2026 (UTC)
- Which links to WP:NEWLLM, making it clear that the "persistent usage" means "persistent usage in violation of NEWLLM", which is misuse. Unless, of course, the intent is to ban all LLM use. SuperPianoMan9167 (talk) 18:32, 23 March 2026 (UTC)
- @SuperPianoMan9167 And given that many !supporters are also talking about using LLMs on talkpages or in project space discussions, which is not at all related to NEWLLM, it's obviously not clear. GreenLipstickLesbian💌🧸 18:35, 23 March 2026 (UTC)
- yes they're already getting blocked, might as well label it properly. -- Aunva6talk - contribs 18:01, 23 March 2026 (UTC)
- Yes With an option for a warning subject to admin discretion. Frankly there's probably only one or two things in the world that are a bigger threat to Wikipedia's mission than AI slopification. We should absolutely be empowering admins to prevent this problem from becoming disruptive. Simonm223 (talk) 18:43, 23 March 2026 (UTC)
- @SuperPianoMan9167 this is where I meant to post my comment and would prefer to have the discussion. I hear you regarding WP:BITE, which is why I do support an option for a warning where appropriate. However, participation that takes the form of adding a bunch of slop that other editors clean up is honestly worse than no participation at all. Simonm223 (talk) 18:47, 23 March 2026 (UTC)
- How about just adding a link to Wikipedia:Disruptive editing#Persistent LLM use, which now has a shortcut at WP:LLMDISRUPT, and just leave it at that? SuperPianoMan9167 (talk) 18:56, 23 March 2026 (UTC)
- Having read more of the discussion I wouldn't be opposed to that outcome or to the formation of a specific policy. My concern is outcome-driven: keep slop off WP because of two main issues:
- hallucinations
- I personally find it very offensive when someone tells a bot to have a conversation with me in place of themself
- If we are preventing talk page discussion-by-bot and if we are preventing hallucinations I'm satisfied. Simonm223 (talk) 19:08, 23 March 2026 (UTC)
- Yes, without warning: Look. We've been seeing persistent patterns at ANI for months now: people using AI, denying they're doing so, often responding with it, and weasel-wording and Wikilawyering at every step of the way ... ooo, the rules don't say they can't, ooo, NEWLLM doesn't say they can't, ooo this excuse and that excuse. And I'm sorry (well, no I really am not), but I'm a lot less concerned about the hypothetical 17-year-old neurodivergent would-be editor than I am about this existential threat to the encyclopedia. We are all among those who've put in 25 years of long, hard and sometimes painful work to bring Wikipedia from a website routinely sneered at by academics and authorities to the grandest encyclopedia in the history of the world. We're already under fire from those factions who can't stand any facts they can't rewrite in their own image, and nothing will ruin us faster and more surely than the implication that ChatGPT's doing the editing for us. Ravenswing 19:38, 23 March 2026 (UTC)
- We already have policy for this. SuperPianoMan9167 (talk) 19:39, 23 March 2026 (UTC)
- When have we ever cared about the wikilawyering or excuses of disruptive editors? voorts (talk/contributions) 20:02, 23 March 2026 (UTC)
- Oppose as written: Our policies and guidelines need to be consistent. Either we allow LLMs or we don't. WP:NEWLLM and WP:LLMTRANSLATE allow the limited use of LLMs; therefore, we can't block people for using LLMs unless we change those too. I don't necessarily oppose changing those too, but we cannot have inconsistent policies when it comes to blocking. (I assume most 17-year-olds with autism would agree with me on this.) Gnomingstuff (talk) 22:04, 23 March 2026 (UTC)
- It didn't occur to me that that was the intent of this proposal. I just assumed that the reference to "usage" was misusage, given the wikilink. voorts (talk/contributions) 22:10, 23 March 2026 (UTC)
- If the proposal is about misusage then it should say "misusage." But right now it says "usage," which is not the same thing.
- Also, as someone who has slogged through more AI text than almost everyone in this discussion, I don't think people realize just how widespread AI use is on Wikipedia by now. As swamped as ANI is with LLM reports, that's still a small fraction of cases. Gnomingstuff (talk) 22:15, 23 March 2026 (UTC)
- Thanks for doing AI cleanup work. We can't block our way out of our problems. voorts (talk/contributions) 22:15, 23 March 2026 (UTC)
- Didn't mean that to brag, just to note that the problem is about 100x worse than most people think it is (with the exceptions obviously of the other people who do cleanup here) Gnomingstuff (talk) 00:23, 24 March 2026 (UTC)
- +1. And given that the admin who proposed it doesn't seem to understand what is allowed/disallowed by NEWLLM[8][9] ("LLM generated text can be incorporated into an article following human review, in manner to recycling old public domain text")..... well, there's some parables about making assumptions that I'm very tempted to stick here. GreenLipstickLesbian💌🧸 00:18, 29 March 2026 (UTC)
- comment-- noting this to agree that i !voted on the assumption that "using AI" means "adding AI generated content to articles," and does not include things that are explicitly allowed. If that's not clear, wording could be tweaked possibly /ˌtiːoʊseɪˈæf.dʒə/ (talk) 07:58, 24 March 2026 (UTC)
- FYI, the current LLM translation policy already differs based on user platform.
- On articles with an “expand from a foreign language” banner, app users are shown a completely different set of rules. Notably, there is no reference to speaking the source language, and no requirement to access the original references. ExtantRotations (talk) 21:41, 24 March 2026 (UTC)
- Yes, maybe expand AIV to report LLM use too? I was just pulled to ANI by an editor who couldn't or wouldn't stop using an LLM to communicate. It was, frankly, a waste of my and everyone else's time, and ideally they would have been banned on a 2nd offence instead of the 10th or so. A place to take all of these for prompt blocking would probably be a good idea, and it should be easy to add this to AIV. Kingsmasher678 (talk) 00:20, 24 March 2026 (UTC)
- LLM use is not vandalism as most LLM-using editors are trying to improve Wikipedia. If the LLM user is also a spammer then yeah, you can report them to AIV. SuperPianoMan9167 (talk) 00:57, 24 March 2026 (UTC)
- If User talk:Spider1217 is the case you're referring to, that editor was blocked after their second offense following warnings. I blocked them for 31 hours after I warned them at ANI regarding the LLM-generated slop they posted there and then I promptly indef'd them after they continued upon the block expiring. voorts (talk/contributions) 00:58, 24 March 2026 (UTC)
- Put another way, this editor twice took themself to ANI "for prompt blocking". voorts (talk/contributions) 01:03, 24 March 2026 (UTC)
- They did not. I warned them several times and they made it clear that they were aware here. ---Edited, reread and realized I misgauged tone.-- Sorry about the block vs. ban, I don't work enough in those spaces to remember the difference off the top of my head.
- My larger point is that if I had been able to report to AIV or a similar venue for the behavior, I would have done so before those threads were ever started and saved us all the time. I also, to be clear, am not saying that LLM use is vandalism, just that it might be convenient to have AIV deal with the issue, since it would fit well with the workflow already in place. We could always amend the policy around AIV to allow the admins there to deal with these cases, similar to how spam is already dealt with at that location.
- Kingsmasher678 (talk) 01:11, 24 March 2026 (UTC)
- When did you warn this editor about LLM use before they posted their frivolous LLM-generated complaint at ANI? The only warnings of yours that I see on their talk page were for using unreliable sources and violating IMH. I also don't see any warnings about LLM use in your edit summaries on Lucky Bisht or List of snipers. Am I missing something? voorts (talk/contributions) 01:26, 24 March 2026 (UTC)
- I'll amend my statement, I was wrong when I went back through the edit history. I collapsed two posts at Talk:List of snipers and warned once, though tepidly, at [10]. Sorry, I should have gone and found diffs to verify my statement before, as I suppose these don't quite count as warnings as they don't mention the consequences to continued behavior. Kingsmasher678 (talk) 01:39, 24 March 2026 (UTC)
- AIV has always been for cases where things are immediate and obvious. It's because there's going to be 20 vandals lined up who are rampantly replacing entire pages with "YOU SUCK" and need dealing with NOW. If that fits an AIV LLM report, go for it, but any case where there's an element of doubt, or which requires a discussion or some modicum of investigation, will always be punted to ANI. -- zzuuzz (talk) 09:07, 24 March 2026 (UTC)
- The thing is, it's very easy to tell vandalism apart from good faith editing. Detecting LLM usage isn't always trivial, and should be a process with more input than just the reporter and an admin. AIV isn't right for it, but I do like the idea of having a different page aside from ANI for dealing with LLM abuse cases in a more expedited manner. But I think we should give the recent updates to our P&Gs (including this proposal) some time to see if they make things more manageable before we make a big change like that. MEN KISSING (she/they) T - C - Email me! 12:28, 24 March 2026 (UTC)
- I'll send you a talk page message at some point and maybe we can start working on a policy proposal for that, after the current stuff has settled.
- Kingsmasher678 (talk) 14:01, 24 March 2026 (UTC)
- That could be fun! But, again, I do insist we give it two or so weeks. There's a chance we might not have much of a problem to solve anymore if we give it some time. MEN KISSING (she/they) T - C - Email me! 01:36, 25 March 2026 (UTC)
- Wikipedia:WikiProject AI Cleanup/Noticeboard is the current venue for reporting misuse of AI technology. isaacl (talk) 17:47, 24 March 2026 (UTC)
- Yes, with one warning. Competence is a major part of Wikipedia. thejiujiangdragon 🔥🐉 01:04, 24 March 2026 (UTC)
- Oppose as written I do not see "Wikipedia only consists of human-generated content" anywhere in Wikipedia:Five pillars, nor do I see "Wikipedia is not edited directly or indirectly by machines" in Wikipedia:What Wikipedia is not. We've had bots helping to maintain the project for a long time. And I routinely use machine translation when I'm communicating with others, although usually this is happening in email instead of ENWP. I'm more than fine with considering proposals for how to improve defenses against wasteful uses of volunteer and staff time, but a bright-line rule is not what I'd suggest, and to be blunt, probably would be a losing and wasteful battle to attempt. However, there may be other reasons supported by existing policies for blocking an account that incompetently or wastefully uses automated tools, and I would be very willing to consider alternative proposals if existing policies are insufficient for addressing AI slop. ↠Pine (✉) 01:48, 24 March 2026 (UTC)
- Limited yes as an explicit example. LLMs are increasingly being used to read and interpret policies, and they seem liable to run wild with anything implicit or unsaid. More explicit rules, especially regarding LLMs, may unfortunately be needed to function as a "guardrail" that helps guide LLM reading of our PAGs. Block-related policies are one area where considering such guardrails may be most clearly needed, we don't help new users by putting them in a llm-induced wikilawyering spiral. However, I agree with the opposes above who note that a more expansive "yes" position is CREEP/BURO that risks reducing the implicit/unsaid discretion that admins already have, and that if there is a "Yes" close it should be along the lines of "consensus that admins can keep doing what they are already doing". CMD (talk) 03:32, 24 March 2026 (UTC)
- I would imagine (and my brief testing seems to agree) that an LLM policy would be one of the first things to be checked by an AI agent. Claude tells me that "The platform does not have a single, universally adopted policy, but rather a combination of guidelines and a failed formal policy proposal." WP:LLM, WP:LLMPOLICY, WP:NEWLLM? Really, the blocking policy is not the correct place to fashion a "single, universally adopted [LLM] policy", just like it's not the place to define policies for vandalism or edit-warring. -- zzuuzz (talk) 08:56, 24 March 2026 (UTC)
- I am thinking more about the AI-generated unblocks which seem to involve the LLM parsing the blocking policy. Adding a line mentioning llms would not make this page the locus of LLM policy, as it is not the locus of vandalism or edit-warring policy. CMD (talk) 09:21, 24 March 2026 (UTC)
- Yes subject to a single warning. Stifle (talk) 10:48, 24 March 2026 (UTC)
- Yes Absolutely, persistent AI usage even after warnings only exhausts the community's time and energy. Zalaraz (talk) 13:02, 24 March 2026 (UTC)
- Yes, with at least one educational warning, with a confession (waterboarded or otherwise), a promise that they understand and will not do it again, and for emphasis to let it soak in, a trout at first sight. When AI is used in mainspace text it should be both removed and the editor educated that they should stop doing that immediately. If done again, a week's block, and after that the indef seems appropriate. Wikipedia is not Grokipedia, and never Mark Twain shall meet (Twain's in trouble). Randy Kryn (talk) 14:06, 24 March 2026 (UTC)
- I can't help it, Grokipedia always makes me think of, but not hungry for, un croque de merde. Narky Blert (talk) 15:23, 24 March 2026 (UTC)
- Yes. This is already common practice due to necessity, which can be confirmed by examining any recent WP:ANI archive page, and the proposed text should be included in the blocking policy to accurately reflect current community expectations. In this context, particularly with the link to WP:NEWLLM, usage of LLMs specifically refers to misuse of LLMs, and I would support replacing usage with misuse in the proposed text to clarify the intent, although I am supporting the proposal even if this change is not implemented. Whether a warning is needed prior to blocking can be determined on a case-by-case basis: I oppose the addition of any policy language that would require an advance warning before blocking an LLM-using editor for the violation of a different policy (unrelated to LLMs), when blocking an editor who does not use LLMs for the same violation would not require a warning. — Newslinger talk 20:38, 24 March 2026 (UTC)
- Yes, after warning. I continue to believe that most new users have good intentions, and can be guided towards compliance with policies intended to better the encyclopedia, like those that bar unreviewed slop from being dropped into article space or used as discussion points. BD2412 T 03:38, 25 March 2026 (UTC)
- Yes, after warning WP:AGF requires us to assume the editor does not know/believe their LLM use is a problem. Once spelled out, continuing to use it should lead to sanctions. Too much time is being wasted in both talk and the mainspace dealing with really disruptive use of LLMs. Orange sticker (talk) 10:04, 25 March 2026 (UTC)
- Yes, with a warning. To be able to usefully contribute, communication is required, and it has been more than adequately demonstrated that LLMs are incapable of acting as a trustworthy intermediary in this process. More so when, as is often the case, the LLM is also being asked to translate back and forth between two different languages. Participation in an English-language project requires both at least a minimal level of competence in the English language, and a willingness to use it. We are under no obligation to cater for those who either can't, or won't, engage in the level of direct communication the project requires. Needless to say, this is already covered by existing policy, but making it explicit should simplify things. AndyTheGrump (talk) 14:06, 25 March 2026 (UTC)
- Oppose The language of the proposal is too loose as observed by Gnomingstuff. Here's an example: earlier today, I posted at Arbcom. They have a word count limit of 500 and I wasn't sure how close I was to that. So I gave Gemini a copy of my text and asked it for a word count. The answer seemed reasonable and I've no idea exactly how it arrived at it but don't suppose it was done by simple prediction of the most likely worded answer. I've double-checked the answer manually now and my count was one less but maybe that's because I counted "anti-social" as one word not two or maybe I just miscounted. Now such usage of one of these general-purpose assistants is presumably not what's meant by the proposal but I don't trust that this witch-hunt won't extend to include anything and everything. If this is going to be a hanging offense then the language needs to be much more precise and limited. Andrew🐉(talk) 19:17, 25 March 2026 (UTC)
- Or you could have just copied and pasted into any number of the online, AI-free word counters?
- Kingsmasher678 (talk) 20:14, 25 March 2026 (UTC)
- Not trying to be an ass, just pointing out that alternatives exist for this, as they do for many of the use cases.
- Kingsmasher678 (talk) 20:17, 25 March 2026 (UTC)
- I'm with Kingsmasher, why on Earth are you using an LLM chatbot for a simple word count tool? That's really silly.
- I know your overall point is that you're worried that the exact language of "LLM use" would forbid users from blatantly non-problematic uses of LLM chatbots. The actual exact wording that would end up being added is going to have to be workshopped anyways, though, so I don't understand the hangups about an "exact wording" that some participants have referred to here. MEN KISSING (she/they) T - C - Email me! 21:11, 25 March 2026 (UTC)
- The RfC question here is "Should the list of reasons to block be expanded to include 'Persistent usage of large language models'?". This is the actual exact wording which we are discussing. Workshopping is supposed to happen before, not after, an RfC. Otherwise, the process would be a creepy pig in a poke. Andrew🐉(talk) 22:04, 25 March 2026 (UTC)
- Yes, with warning when appropriate, specifically noting the block is for "misuse" of an LLM, in response to above concerns that our current framework carves out minor exceptions for LLM use. Severe misuse, much like the rest of our blocking policy (deceptive editing and abuse in discussions, for example) likely should result in a block with no warning, but that is always going to be admin's discretion. The editor can always request an unblock, which in many cases is the only way to get them to communicate effectively, and make a commitment we can then reference if it happens again. Any attempts to lie or hide use of an LLM should require no warning, but unfortunately it can be difficult to objectively prove such a thing in many cases. ASUKITE 00:34, 26 March 2026 (UTC)
- No: unnecessary. There are already plenty of rules available to justify a block of someone who is knowingly, deliberately, repeatedly, and after warnings using LLMs to edit. The proposal's broad language—"We are getting to the stage where we should treat LLM content with the same seriousness as copyright violations, and block even when a user's actions are in good faith, to avoid wasting communities time in clean-up"—precisely reflects the attitude that has made Wikipedia a notoriously forbidding place, increasingly closed to outsiders. We should be moving in the opposite direction. Larry Sanger (talk) 20:30, 26 March 2026 (UTC)
| Off-topic with respect to this request for comment. — Newslinger talk 17:00, 27 March 2026 (UTC) |
| The following discussion has been closed. Please do not modify it. |
- @Larry Sanger Huh. Very surprised to see you here.
- You should know, from what I understand a good majority of Wikipedia's community agrees that we should not be moving in the opposite direction. If anything, it seems that the P&Gs have been lagging far behind what users of LLMs should actually expect from the community.
- Editor retention is an issue, and I'm always going to advocate for changes that make the community more welcoming for newbies. But we need the help of people to build and maintain the encyclopedia, not their chatbots. Even still, that's something likely addressed by clarifying that users should only be blocked after they are warned. MEN KISSING (she/they) T - C - Email me! 02:30, 27 March 2026 (UTC)
- It is your opinion that "we should not be moving in the opposite direction." I and many other Wikipedians disagree with you. You have the right to vote as you do; we have the right to vote as we do. Our pushback is not necessarily about LLM policy (I agree that Wikipedia should not permit the unedited, unverified output of LLMs in articles) but about the very idea that Wikipedia will become any harder than it already is for new and inexperienced editors to engage in. Larry Sanger (talk) 16:12, 27 March 2026 (UTC)
- First, we don't vote, Larry, we work to build consensus. I know that I came down hard on you above, and I apologize if you viewed it as an attack on you. It was meant to be a disagreement with the idea that we need to make it easier to edit. This chatbot policy is about trying to make it just as hard as it has ever been to edit. It really isn't hard to be part of this project as long as you have a slightly thick skin, the ability to take constructive criticism, and the ability to communicate. Use of LLMs after a warning is a clear sign that someone can't or is unwilling to take criticism and/or communicate, so why should we keep them around?
- Kingsmasher678 (talk) 17:36, 27 March 2026 (UTC)
- I want to expand on this slightly, actually. I think it's not really even about making it hard to edit, it's about increasing the number of useful edits. Making this an unwelcoming place for LLM edits will hopefully have the effect of encouraging the warned editors to make quality edits without the involvement of LLMs, and then they can help us. I would SUPPORT a template for this that includes a warning about blocking, but does so in a kind way with links to useful resources. I need to write an essay about WP:BITE honestly, because I think that several different things are in play with this (real) concern that you have about how we make newbies feel. Hopefully that explains my position better.
- Kingsmasher678 (talk) 18:01, 27 March 2026 (UTC)
- comment: i've been compiling a list of ani threads that involve llm use, and their outcomes. it's nowhere near close to done, but it's at user:ltbdl/exist. it's probably of interest to those participating in this rfc. ltbdl (taste) 14:18, 28 March 2026 (UTC)
- @Ltbdl This has prompted me to publish User:ClaudineChionh/Notes/Blocked for LLM use which I'd been compiling off-wiki – it can help fill in some of the blanks. ClaudineChionh (she/her · talk · email · global) 02:21, 29 March 2026 (UTC)
- I'm not seeing the catastrophic problem that everyone in this thread is describing. Seems like most of these editors are getting indef'd pretty quickly. voorts (talk/contributions) 15:04, 29 March 2026 (UTC)
- there's a part of me that is now considering adding the end date of each case. there's another part of me that is screaming. ltbdl (click) 13:16, 30 March 2026 (UTC)
- i've completed the list up until the latest ani archive. by my very rough count, around 1/4 of cases lead to no action. ltbdl (jump) 12:59, 31 March 2026 (UTC)
- Yes: as a reason for blocking, after being warned, typically with {{uw-ai1}}, {{uw-ai2}}, {{uw-ai3}} and {{uw-ai4}}, per the discretion of the admin(s). Boud (talk) 18:10, 28 March 2026 (UTC)
- Yes; LLM usage should be 100% banned from the project. The reality is that such a ban will only work temporarily. Wikipedia is already very behind the curve in responding to the AI and LLM issue. LLMs are only going to become better and will become indistinguishable from human writing. At that point, any policy trying to prevent it becomes moot. It will also be at that point that I leave Wikipedia, as humans will no longer be needed on this project. Another temporary stopgap that needs to be implemented now (witness: [11]) is that a bot needs to be written to immediately undo any LLM-generated edits. This will work for a time, but ultimately will fail as the LLMs become indistinguishable from humans. It will at least buy us some time. --Hammersoft (talk) 14:18, 29 March 2026 (UTC)
- Might I suggest that people work on WP:LLMPOLICY, or one of the other LLM pages, if they want to create an LLM policy. It's clear from most of the above discussions that no one even agrees what it is. One existing guideline tells me that, "Editors are permitted to use LLMs.." (I assume they can even follow that guideline 'persistently'). This blocking policy details how we block people for policy violations, distinct from the reasons for blocking. -- zzuuzz (talk) 20:34, 29 March 2026 (UTC)
- Sure. In the meantime, this is a blockable reason. My stance on AI/LLM use is 100% against in all cases. This project, and indeed the WMF (as is the case with most things they do), is way...and I mean WAAAAAY...behind the curve on how to manage this. The lack of preparation for this has put the project in an extremely precarious position. We absolutely must stop it for this project to survive, if only to be able to take a breath for 12-24 months before AI/LLMs completely take over the project. The more tools we give ourselves to stop this the better. --Hammersoft (talk) 21:30, 29 March 2026 (UTC)
- Yes. In >95% of cases, LLM use is detrimental to the encyclopedia. If a user is incapable of contributing without LLMs, that's a competence issue that should be handled with a block. —pythoncoder (talk | contribs) 20:15, 29 March 2026 (UTC)
- Yes with prior warnings. LLMs tend to lie, create illogical statements, and don't have access to current facts. Dafootballguy (talk) 22:33, 29 March 2026 (UTC)
- Yes. Per the proposal. ~ ToBeFree (talk) 00:23, 30 March 2026 (UTC)
- Oppose. If you use an LLM to reformat existing text, and if it does a good job, I shouldn't block you for it at all. And if you're adding unvetted content or otherwise causing problems with LLMs, you can already be blocked for general disruption; this policy doesn't have and shouldn't have a comprehensive list of reasons to block. Also, JPxG, I just read what you wrote, since I didn't notice the section breaks and accidentally left my comment in the "A proposed policy" subsection. Nyttend (talk) 01:11, 30 March 2026 (UTC)
- "If you use an LLM to reformat existing text, and if it does a good job, I shouldn't block you for it at all." I'd like to see some examples where that has happened. Ritchie333 (talk) (cont) 06:38, 1 April 2026 (UTC)
- Oppose unless prior warnings are given for clearly unconstructive LLM use. There are some quasi-good uses of LLMs, such as for translating things from one language to another in case a user is not familiar with English. In those cases, I do not see why any sanctions should be necessary. We should also, as pointed out above, make sure warnings are issued before issuing any blocks. Gommeh (talk! sign!) 17:21, 30 March 2026 (UTC)
- Yes, with one warning. DoubleCross (‡) 19:48, 30 March 2026 (UTC)
- Yes but case by case. Sometimes I see LLMs being used in WP:SPAMBOTs, but other times a user may not be aware that using functions like "rewrite" or using GPT to get a needed article started is not appropriate for Wikipedia. And since it is impossible to distinguish AI-written from human-written text unless there are clear WP:AISIGNS, we should use one of the other reasons in the block form rather than "AI". Aasim (talk) 20:26, 31 March 2026 (UTC)
- Oppose as written. I'm quite sympathetic to the plight of the editors who fight the flood of AI-generated content and I'd support the proposal if it were changed to "persistent and unconstructive", as suggested by u:Gommeh. This is general enough to give wide latitude to the editors dealing with AI slop and would make it clear that the legitimate use is okay (see WP:WikiProject AI Tools for examples of AI tools). Alaexis¿question? 20:57, 31 March 2026 (UTC)
- Yes I think we already assume good faith too much in regards to potential LLM-edits. All someone has to do is say they're not using AI and they have plausible deniability. I believe there is a lot of stuff that goes unnoticed. It will probably get even harder to detect in the future. ~WikiOriginal-9~ (talk) 12:21, 1 April 2026 (UTC)
A proposed policy
[edit]I humbly submit for discussion the following proposed policy:
- (a) A user judged by an admin, on a preponderance of the evidence, to have used AI to generate material (posted to articles, article talk pages, or anywhere else on the project) should be immediately blocked, indef, from article space.
- (b) If (on their talk page, or at ANI) the user owns up immediately to their AI use, doesn't argue about it, and appears to genuinely understand that they shouldn't be generating article content or talk-page posts using AI, then they can appeal this block after a minimum of six months of useful (and not AI-generated) talk-page contributions or draft creations. But if, during their article-space block, they even once post AI-generated material, then the block should be upgraded to a full indef, as in (c).
- (c) If (in response to the article-space block) the user lies about their AI use, dances around the question, argues that there's nothing wrong with it, or pretends they don't hear us, then the block should be upgraded to a full indef, which they can appeal in one year via a convincing showing (in their appeal) that they have gained an understanding of why AI use, anywhere on the project, is unacceptable. A successful appeal will result in a conversion of the block to an indef from article space, as per (a) and (b) above.
Notes:
- Some may find draconian the provision that the article-space block be immediate, with no "last chance" warning. There are two reasons I believe that's necessary:
- First: In a very short period, AI use has gone from completely unknown on WP, to something that threatens the project's very existence as a trusted source of knowledge. The cost of the influx of AI slop is breathtaking: hundreds or thousands of hours of editor time may be required to undo the damage done by an AI slopfarmer in just a few minutes. "Shoot first, ask questions later" must be our policy, in order to stop further damage at the earliest possible moment.
- Second: It is my firm belief that any user who imagines it appropriate to use AI to generate article content, or talk-page content, is ipso facto a WP:CIR case, because no one who understands our basic policies (especially WP:V) can possibly think that having a trained monkey type something up is a useful way to contribute to the project. It's also possible that the user does know that AI content is inappropriate, but doesn't care e.g. because flooding the project with slop serves their WP:POV or WP:PROMO interests; in that case they're WP:NOTHERE. We block on sight for CIR and NOTHERE all the time.
- Further, I'm calling for the article-space block to last at least six months because it's not enough that the user promises not to do it again; in order to avoid further waste of community time, the editor must demonstrate that they can contribute without AI before they are let loose on article space. And the block is indefinite -- not time-limited -- in order to enforce that an affirmative determination of the user's rehabilitation (via review of their talk-page contributions during the block period) must be made before the user is unblocked; the mere passage of time proves nothing.
Thoughts? EEng 21:58, 22 March 2026 (UTC)
- Oppose for a few reasons:
  - Blocks are preventative, not punitive, and this feels punitive towards people who may be completely ignorant of our PAGs, including those who are not NOTHERE. Contrary to your second point above, a person cannot "understand our basic policies" if those policies are unknown unknowns.
  - Part (b) is inconsistent with our unblocking policy, which provides admins with significant discretion in crafting appropriate remedies when unblocking editors.
  - Preponderance is far too high a standard for imposing a block. Given that blocks are preventative, not punitive, I view the standard for blocking as akin to reasonable suspicion at most. If blocks were punitive, then I think preponderance would be an appropriate minimum, if not clear and convincing evidence for certain block reasons. voorts (talk/contributions) 22:25, 22 March 2026 (UTC)
- Thanks for the quick reply.
  - Re 1.: There's nothing punitive here, though there's certainly the risk that it will work an injustice (in a sense) on someone who, as you say, "may be completely ignorant of our PAGs". But given the well-nigh emergency nature of the situation we find ourselves in, I'm asking the community to accept that risk now and then. Anyway, no one's going to the stir here, or getting the chair -- we're just diverting them to Talk: and Draft: spaces for a period; they can still contribute.
  - Re 2.: I recognize that. It's my explicit intent to supply guidelines that admins will be expected to follow in most cases.
- EEng 22:53, 22 March 2026 (UTC)
- It appears that our systems in place to catch and revert bad AI usage are working. I'm not seeing the emergency such that we should need to mandate immediate blocks and significant probationary periods upon unblocking. I also don't see sufficient evidence to justify making this the sole area that should be carved out from the rule of general admin discretion in blocking/unblocking. voorts (talk/contributions) 23:12, 22 March 2026 (UTC)
- Those systems are working, but they're already absorbing enormous amounts of editor time, and that volume is increasing by leaps and bounds every day -- and we're only partway into Act I of "Invasion of the Chatbots". I honestly cannot understand that you don't see why AI represents an unprecedented threat requiring novel remedies -- as Ritchie333 said elsewhere: "We can't really go off existing policies and guidelines, as this stuff didn't exist when they were being formulated 25 years ago". EEng 23:30, 22 March 2026 (UTC)
  - I don't disagree with blocking editors who persistently use LLMs in an inappropriate manner and I have personally done so. I don't see how your proposal would eliminate or significantly reduce the volume of reports at ANI per my comments below. voorts (talk/contributions) 23:42, 22 March 2026 (UTC)
- Support First, I agree with the underlying reasons: AI slop is fundamentally unencyclopedic and chatbot exposure is toxic to the critical faculties, so WP:CIR and/or WP:NOTHERE violations are inevitable, indeed almost entailed by definition. Second, it does not seem plausible that any measures less stringent than those proposed would be effective. Regarding the potential conflict with the unblocking policy mentioned above, this is more than anything a reason to amend the list of cases where unblocking is not a good idea. Stepwise Continuous Dysfunction (talk) 22:30, 22 March 2026 (UTC)
"WP:CIR and/or WP:NOTHERE violations are inevitable" It's impossible for a CIR violation to be "inevitable" because one of the "competencies" that "is required" is "the ability to communicate with other editors and abide by consensus." We cannot know if an editor lacks that competence until they acknowledge that they have read and understood the relevant PAGs. Likewise, it's impossible for a NOTHERE violation to be inevitable because almost all of NOTHERE relates to ongoing misconduct, not creating a new account, using an LLM once, and then being blocked immediately. voorts (talk/contributions) 22:35, 22 March 2026 (UTC)
- Is this a !vote, or are you just workshopping this?
The "fess up or get blocked" aspect, I agree with in spirit. But I don't likeA user judged by an admin, on a
as the standard of evidence.preponderance of the evidence, to have used AI ...
I already worry a lot that the verbose way that I type online might make some folks assume I use an LLM. I don't want that fear to be justified with the idea that all it takes is a single admin to mistake me for an LLM user, and then under this proposal, I would essentially be forced to falsely confess to AI usage. That sounds horrible. MEN KISSING (she/they) T - C - Email me! 22:42, 22 March 2026 (UTC)- I'm workshopping (which is what should have happened before the RfC was opened).
- I changed preponderance to reasonable suspicion at voorts's suggestion. Then before you could say "Jack Robinson", he changed his mind. So I've changed it back. Satisfied?
- Meanwhile: Confess! Just kidding. Part of the reason the initial block is a partial one (article-space only) is so that the user can participate in discussion at ANI (or, of course, at their own talk page -- wherever the discussion happens to take place). In my list of sins that convert the article-space block to a full block -- the user "lies about their AI use, dances around the question, argues that there's nothing wrong with it, or pretends they don't hear us" -- you don't find "argues convincingly that AI was not, in fact, used". If discussants conclude that the accusation of AI use was incorrect, then of course all is forgiven and an apology will be issued.
- EEng 22:59, 22 March 2026 (UTC)
- I'm afraid I'm not satisfied; I would want to strengthen the wording to "unambiguous and blatant evidence", but that would exclude most problematic users. Anything less strong, and a single admin's opinion holds too much sway. Humans make mistakes (which is part of what makes us human), and although I trust our admins overall, an admin is a kind of human.
- Still, I understand the dire need for swifter, more efficient deliberation than what ANI can offer.
- What if the process for determining if an editor is abusing an LLM was a discussion similar in structure to XfD discussions? Any editor could present evidence another editor is abusing an LLM, and depending on how strong the evidence is determined to be by participants, and based on the accused editor's reaction to the accusations, sanctions may or may not happen. MEN KISSING (she/they) T - C - Email me! 23:10, 22 March 2026 (UTC)
- Your last paragraph is a description of what ANI is for. voorts (talk/contributions) 23:15, 22 March 2026 (UTC)
- Right, but it can't be that a trip to ANI is required before a block is imposed. For two decades we've survived with the highly flawed blocking system that allows someone to be blocked by any one trigger-happy admin -- see my block log (including all its unblocks) for proof. And yet it works reasonably well, on the whole, and if we make blocks for AI use subject to some more ponderous process than any other kind of block, we'll have achieved precisely nothing. Again, it's only an article-space block, and if it turns out to be mistaken it will be overturned just like any other block. I urge you not to let this point hang you up. Another point here is that, without doubt, established editors will be given more benefit of the doubt than someone who just appears out of nowhere -- which is the vast majority of cases. EEng 23:25, 22 March 2026 (UTC)
"Right, but it can't be that a trip to ANI is required before a block is imposed." It's not. I've run into obvious LLM edits, investigated the editor, saw sufficient warnings for LLM use, and blocked, all without a trip to ANI. In any event, admins aren't omnipotent and omnipresent, which is why there's a noticeboard to report incidents to them.
"Another point here is that, without doubt, established editors will be given more benefit of the doubt than someone who just appears out of nowhere -- which is the vast majority of cases." I don't think any admin would be comfortable just IAR'ing an explicit policy telling them who they have to block and when, without exception.
- voorts (talk/contributions) 23:34, 22 March 2026 (UTC)
- Well, yes. The idea is it would be something faster than ANI, and with dedicated participants, but slower than unilateral admin action. And wouldn't it still be desirable to split off cases of LLM abuse from ANI?
- Also EEng I only just now saw your edit to your comment. Thank you, that's a good clarification and I'd feel a lot better about it if that was made explicit. I'm still a bit worried that it might be too difficult for falsely accused editors to rebut accusations of using an LLM. I feel like if someone accused me of it right now, I wouldn't have anything more convincing to say other than "No, I didn't. I just write that way." MEN KISSING (she/they) T - C - Email me! 23:27, 22 March 2026 (UTC)
"The idea is it would be something faster than ANI, and with dedicated participants, but slower than unilateral admin action." That would be a cabal deciding which editors to block.
"And wouldn't it still be desirable to split off cases of LLM abuse from ANI?" Why?
- voorts (talk/contributions) 23:31, 22 March 2026 (UTC)
- Good point, that certainly wouldn't be desirable. I'm not sure if it would happen, or how that can be accounted for.
- I understand a lot of folks are sick of ANI being filled up with LLM abuse cases. That's a problem in and of itself.
- MEN KISSING (she/they) T - C - Email me! 23:36, 22 March 2026 (UTC)
- The volume of posts at ANI is a feature, not a bug. Even if it weren't, you'd have to look at what those posts at ANI are about, such as evaluating how many of those reports result in immediate blocks vs. prolonged discussions and the topics of discussions (are they focused on cleanup? warning the editor? proposing sanctions? providing diffs? something else?). voorts (talk/contributions) 23:41, 22 March 2026 (UTC)
- Oppose as written, per voorts's point (1). While I do agree on the basic premise of AI slop posing a risk to Wikipedia (which already receives criticism on the credibility side as it is), people won't know that it's against the rules to submit AI slop if they don't even know that it's banned in the first place. While, yes, editors should know our basic policies before they edit, many new editors don't, and currently we don't have a way of forcing them to read our policies before doing so. This "reasonable suspicion" clause can also get messy very fast - especially if an admin blocks a longtime editor on a suspicion that turns out to be incorrect - and goes against WP:AGF. – Epicgenius (talk) 22:43, 22 March 2026 (UTC)
- Oppose as written. I think (a) is far too harsh, people should at least be given a warning and a chance to either promise to not do it again or explain that AI signs were mistakenly identified. False positives do happen and I think it's quite unfair to block someone right off the bat (unless of course it's WP:G15 level obvious), and there's also linguistic barriers that make people turn to AI for assistance. Right now as it is written, this point doesn't assume good faith. If after that and it's exceedingly clear they're not listening the rest is fair game though. SecretSpectre (talk) 23:45, 22 March 2026 (UTC)
- Oppose without prejudice, not because it is flawed (although I certainly think it needs some time to bake), but because it doesn't seem ripe given the proposal above it has WP:SNOW levels of support. And WP:NEWLLM is still fresh off the RfC. I think we should wait and see if these P&G updates sufficiently abate frustrations with LLM abuse. We've survived three years of chatbot malarkey, we can survive another two weeks. MEN KISSING (she/they) T - C - Email me! 23:53, 22 March 2026 (UTC)
- Oppose While I support the premise, and have nearly as much distaste for LLM use on Wikipedia as EEng, I also believe in making incremental changes, and then re-evaluating. There's a perfectly good consensus above for including LLM usage as a reason to block with a warning, and I don't want to risk the whole tamale by pushing things too quickly past where consensus is. CoffeeCrumbs (talk) 00:28, 23 March 2026 (UTC)
- Support —for purely aesthetic reasons, given that various implementation issues are addressed—as described by other collaborator above. Augmented Seventh (talk) 00:35, 23 March 2026 (UTC)
- No offense, but I'm not understanding how a policy can be implemented "for purely aesthetic reasons". Did you mean something else? Accessedgrant (Epicgenius mobile alt) (talk) 00:50, 23 March 2026 (UTC)
- Absolutely. I do not think the overall singular “voice” generated by the LLM is stylistically compatible with the written text generated by thousands of individual, unique voices. Beauty is my own main criterion for both maths and an encyclopedia project. I support any well thought out proposals to limit the damage.
- cordially, Augmented Seventh (talk) 03:19, 23 March 2026 (UTC)
- That's a lovely sentiment. I have a lot of sympathy for the philosophical objection to LLMs on Wikipedia, that an encyclopedia written by humans (inclusive of automated bots operated by humans) will be inherently preferable to an encyclopedia written by an LLM, even if the LLM's output could consistently meet our content standards.
- That said, it's a sentiment that might be a bit out of scope here; I think we're just trying to save headaches at WP:ANI. LLM usage has already been, for the most part, prohibited from article space per the newly updated WP:NEWLLM. MEN KISSING (she/they) T - C - Email me! 05:45, 23 March 2026 (UTC)
- I fully agree, and I ultimately think that's the basis that our (hopeful, eventual) total ban on LLM usage will sit on. We must eventually enshrine and sanctify this as an encyclopedia by and for humans. After all, that's the entire thesis statement; an encyclopedia anybody can edit, an encyclopedia based on consensus. If we don't uphold the project's humanity, then all of that other stuff falls apart too. It becomes not "the encyclopedia anyone can edit" but "the repository of AI-generated slop anyone can sift through."
- Many people have said that Wikipedia will eventually die if we completely reject LLMs. Maybe. Maybe we won't. Maybe we'll die anyway because the masses will become so reliant on asking their chatbots for everything that Wikipedia won't have a function anymore anyway, their feedbags never empty of oats. Whatever the case may be, I think it is entirely valid to argue from a philosophical position that Wikipedia ought to position itself firmly against the ensloppification of society. Athanelar (talk) 08:23, 23 March 2026 (UTC)
- From my point of view, it's less that we have to take a position, and more about a need to uphold our policies and guidelines. Our credibility (such as it is) relies on people not adding policy-violating information using LLMs en masse. Even setting aside other issues, LLMs tend to add outright false or misleading information, as has been shown in myriad ANI discussions. – Epicgenius (talk) 14:39, 23 March 2026 (UTC)
- No offense, but I'm not understanding how a policy can be implemented "for purely aesthetic reasons". Did you mean something else? Accessedgrant (Epicgenius mobile alt) (talk) 00:50, 23 March 2026 (UTC)
- Comment - (b) is far too harsh. There are a lot of people who genuinely don't know the problems that arise with LLM usage, and they shouldn't be blocked for that if they "genuinely understand that they shouldn't be generating article content." InfernoHues (talk) 01:38, 23 March 2026 (UTC)
- Oppose There are a few issues with this, but I will focus on the fact that it is a full ban on any LLM usage, which is not what the current guidelines say. For example, per the wording of the proposal, someone following WP:LLMT would be indeffed because they both used an LLM to generate article content and will "argue that there's nothing wrong with it" (because they were following P&G). Jumpytoo Talk 05:13, 23 March 2026 (UTC)
- Oppose, too harsh. sapphaline (talk) 07:36, 23 March 2026 (UTC)
- Oppose. Agree it's too harsh. 331dot (talk) 08:38, 23 March 2026 (UTC)
- Oppose - too harsh, conflicts heavily with WP:BITE. BugGhost 🦗👻 08:47, 23 March 2026 (UTC)
- Oppose-- copying what I said about this suggestion at ANI-- this is something that could be done by a new or younger editor in good faith, and they should be given a chance to correct that. I could see the value of a "preventative block" rule if someone is using AI, blocking from article editing until they respond and promise to stop, but a mandatory six months just seems punitive. /ˌtiːoʊseɪˈæf.dʒə/ (talk) 10:16, 23 March 2026 (UTC)
- Oppose gratuitous. I hate AI slop as much as everyone else here, but not to that irrational level. Smacks of “zero tolerance TUFF ON CRIME” sort of political theatrics that never actually solves anything. A single warning followed by a block if that warning is not heeded is sufficient. Dronebogus (talk) 12:38, 23 March 2026 (UTC)
Yes, that is what is being proposed. "Persistent" usage, not a no-tolerance policy.―Maltazarian (talkinvestigate) 12:51, 23 March 2026 (UTC)- oh no it is not wow i did not see that proposal sorry my mistake ―Maltazarian (talkinvestigate) 12:52, 23 March 2026 (UTC)
- No Too harsh. SuperPianoMan9167 (talk) 15:01, 23 March 2026 (UTC)
- Oppose because item (a) requires admins to make a judgement that will soon be impossible; items (b) and (c) don't allow for the possibility that the accused may be innocent. They require instant confession (whether guilty or not) on pain of equally instant banning. That's the stuff of witch-testing. AI by its nature is attempting to match a human model. It will get better at it, and the day will come (soon) when the signs of AI become very subtle indeed compared to the general variability in human writing-style. At that stage, we'll have admins banning people for having an overly-formal writing-style and using too much bold text. (Annoyingly, humans also learn from AI; unfortunately there will be people who deliberately choose to write in the style of chatGPT because they think that's right, so as AI migrates towards us, some of us will migrate towards AI - and the two will meet in the middle). Elemimele (talk) 17:13, 23 March 2026 (UTC)
- OPPOSE completely unnecessary as per WP:DE, plus it's very bitey -- Aunva6talk - contribs 19:17, 23 March 2026 (UTC)
- Oppose I do not see "Wikipedia only consists of human-generated content" anywhere in Wikipedia:Five pillars, nor do I see "Wikipedia is not edited directly or indirectly by machines" in Wikipedia:What Wikipedia is not. We've had bots helping to maintain the project for a long time. And I routinely use machine translation when I'm communicating with others, although usually this is happening in email instead of ENWP. I'm more than fine with considering proposals for how to improve defenses against wasteful uses of volunteer and staff time, but a bright-line rule is not what I'd suggest, and to be blunt, probably would be a losing and wasteful battle to attempt. ↠Pine (✉) 01:39, 24 March 2026 (UTC)
- Support. In addition to the other reasons, willingness to plagiarize is not a quality we want in editors. I'm not going to say "without warning", but a warning can be followed by months of disruption before it's followed up on. And for LLM abuse, months is a lot. If the argument against is that people can deny LLM use, then we should also toss WP:UPE. I also ask that every oppose voter help clean out the AI-disruption backlog as it comes in. If you're not willing to give the community strong tools to mitigate the problem, then at least be decent and make up the difference yourself. Thebiguglyalien (talk) 05:12, 24 March 2026 (UTC)
- Oppose. Risk of people being wrongfully identified as LLM. Stifle (talk) 10:48, 24 March 2026 (UTC)
- Oppose Agree with others that it's way too harsh and WP:BITEY. S5A-0043🚎(Talk) 11:55, 24 March 2026 (UTC)
- Oppose. Incredibly harsh. It places the threshold for an indefinite block at an estimated 51% chance of AI usage. It provides no option for the editor to contest the determination as a mistake and instead forces them to work in draftspace for at least six months. Many editors cannot use draft space because they don't want to write new articles, preferring to improve current ones. They could propose changes on the talk page, but this will only lead to minor, probably uncontroversial changes filling up a backlog. A new thing to patrol created entirely to punish people assumed to have used AI at one point in the past is extremely disproportionate. IsCat (talk) 16:10, 24 March 2026 (UTC)
- Oppose as written - I don't mean this to sound harsh, but I trust the average AI Cleanup wikiproject member or AfC reviewer to correctly identify AI use much more than I trust the average admin. They're different skillsets, and there are nuances to spotting AI vs. human writing that you can only pick up by experience -- whether that be using AI yourself, as one study suggests, or just reading a lot of it. Our WP:AISIGNS page is pretty good but doesn't cover those subtleties (since it's long enough already) and is mostly geared toward GPT-4o (since GPT-5 is still too new for most research to have caught up). Also, like I said above, we don't actually prohibit all LLM use, so we can't block people over it unless we start actually prohibiting it. Gnomingstuff (talk) 16:33, 24 March 2026 (UTC)
- Oppose as written. This would literally mandate blocking an editor who uses some snippet of AI-generated text in a discussion solely for the purpose of demonstrating that another editor's article content was likely AI-generated text. BD2412 T 03:39, 25 March 2026 (UTC)
- (Moved my !vote to where I intended to put it - thanks MEN KISSING) AndyTheGrump (talk) 17:21, 25 March 2026 (UTC)
- @AndyTheGrump, did you mean to !vote on EEng's proposal, or on Ritchie's original proposal? MEN KISSING (she/they) T - C - Email me! 15:29, 25 March 2026 (UTC)
- Oppose. Draconian overreach. Carlstak (talk) 15:25, 25 March 2026 (UTC)
- Oppose, as over the top. We need to cut some slack for well-intentioned newcomers, who can't possibly be expected to know every last detail of WP policy before posting. Warn first, and see how they respond. AndyTheGrump (talk) 17:24, 25 March 2026 (UTC)
- Oppose. Extremely harsh. We need to move in the opposite direction. Larry Sanger (talk) 20:32, 26 March 2026 (UTC)
- Oppose. LLM is not and should never be a reason to block someone. If LLMs are being used disruptively then they can be and should be blocked for disruptive editing. If the editor is wasting others' time with LLM use then block them for WP:NOTHERE, WP:CIR or whatever other reason actually applies. If they are using LLMs in a way that introduces errors or copyvios into the text then block them for that reason. If no existing reason to block an LLM-using editor applies then they should not be blocked. Thryduulf (talk) 13:56, 28 March 2026 (UTC)
- @Thryduulf, this !vote seems applicable more to Ritchie's proposal than EEng's more harsh proposal. Did you mean to put your !vote here?
- As well, you should know LLM use to generate article content is now prohibited with two exceptions, per the new WP:NEWLLM (or WP:NOLLM, rather) MEN KISSING (she/they) T - C - Email me! 16:44, 28 March 2026 (UTC)
- @MEN KISSING yes I did mean to put this as a reply to the initial proposal, but actually I think it applies equally to both and the adoption of NOLLM doesn't change anything: If someone is being so disruptive that a block is necessary they can be and should be blocked for being disruptive, if someone is not being sufficiently disruptive that a block is necessary then they should not be blocked. Whether or not they are (suspected of) using LLMs is irrelevant. Thryduulf (talk) 16:51, 28 March 2026 (UTC)
- Oppose as written I concur with Gnomingstuff above about the wording; this proposal should make it clear that LLM misuse is the target, not LLM use in general. XtraJovial (talk • contribs) 00:25, 29 March 2026 (UTC)
Yes. In >95% of cases, LLM use is detrimental to the encyclopedia. If a user is incapable of contributing without LLMs, that's a competence issue that should be handled with a block. —pythoncoder (talk | contribs) 20:15, 29 March 2026 (UTC)- @Pythoncoder This might also be the wrong section (but might not be), this is for EEng's much harsher proposal and not Ritchie's original proposal. A lot of folks have been getting this confused so I wanna make sure. MEN KISSING (she/they) T - C - Email me! 20:25, 29 March 2026 (UTC)
- @Pythoncoder If this is a competence issue, and we can (and do) already block people for competence issues, why do we need to block for LLM use separately when not all LLM use is detrimental to the encyclopaedia? Thryduulf (talk) 20:31, 29 March 2026 (UTC)
- You're misreading what I said. Also I posted this !vote in the wrong section —pythoncoder (talk | contribs) 20:43, 29 March 2026 (UTC)
- Oppose. Nobody is going to read anything written this far down an RfC. jp×g🗯️ 00:15, 25 March 2026 (UTC)
- Due to some strange behavior of the reply tool, this comment seems to have been accidentally placed underneath the following section when I submitted it. This is a response to EEng's proposal. jp×g🗯️ 02:56, 31 March 2026 (UTC)
Another Proposed Policy
[edit]
| Withdrawn. voorts (talk/contributions) 22:13, 23 March 2026 (UTC) |
|---|
| The following discussion has been closed. Please do not modify it. |
- False. Fun Chaos (talk) 01:13, 25 March 2026 (UTC)
- (wrong section)
- @JPxG: This isn't a vote, so could you please provide an explanation as to why you oppose this proposal? —pythoncoder (talk | contribs) 20:22, 29 March 2026 (UTC)
- I have explained my opinion on this subject dozens, and likely hundreds of times. Out of all these times, I have virtually never seen any evidence of a person reading it. This is doubly true in the context of an RfC that's already got tens of kilobytes of commentary from the majority of the active userbase who have an interest in the subject. I am not interested in typing out multi-paragraph essays that nobody will read. Note to anybody about to move this comment: This is a serious opinion. Please do not move this comment to another section. Please do not reformat this comment. jp×g🗯️ 02:51, 31 March 2026 (UTC)
- Well... uh... JPxG, would you at least be okay with it if I collapsed all of the text between Aunva's proposal and the misplaced unblock request? I think things have been made too confusing and the RfC is already enough of a mess. MEN KISSING (she/they) T - C - Email me! 03:15, 31 March 2026 (UTC)
- (wrong section)
- Yes. In >95% of cases, LLM use is detrimental to the encyclopedia. If a user is incapable of contributing without LLMs, that's a competence issue that should be handled with a block. —pythoncoder (talk | contribs) 20:15, 29 March 2026 (UTC)
- @Pythoncoder Did you mean to reply to Ritchie's original proposal instead? ~~Voorts'~~ Aunva's proposal was withdrawn. Also, I believe JPxG's !vote was in jest. MEN KISSING (she/they) T - C - Email me! 20:19, 29 March 2026 (UTC)
- This was not proposed by voorts, @MEN KISSING. You might want to strike that, or maybe just remove all of this as a mutual withdrawal. Chess enjoyer (talk) 20:25, 29 March 2026 (UTC)
- Oh, right, Voorts is the one who collapsed it, but it was Aunva's proposal MEN KISSING (she/they) T - C - Email me! 20:27, 29 March 2026 (UTC)
Request for unblock (User:Gokulrecap)
[edit]
Requesting unblock for my account Gokulrecap (talk) 16:35, 27 March 2026 (UTC)
- This is not the place to request unblock, and since you posted here, you aren't blocked anyway. 331dot (talk) 16:37, 27 March 2026 (UTC)
- If you are referring to your block on incubator, you'll need to address your block there. 331dot (talk) 16:38, 27 March 2026 (UTC)