The proposals section of the village pump is used to offer specific changes for discussion. Before submitting:
Check to see whether your proposal is already described at Perennial proposals. You may also wish to search the FAQ.
This page is for concrete, actionable proposals. Consider developing earlier-stage proposals at Village pump (idea lab).
This is a high-visibility page intended for proposals with significant impact. Proposals that affect only a single page or a small group of pages should be discussed at the corresponding talk page.
We are temporarily limiting logged-out editing from your location. Please create an account to edit, or try again later.
As recently discussed at Wikipedia talk:Temporary accounts, the wording is causing confusion for logged-out people trying to edit (see also further down the same page).
As I said there, to me it reads as if Wikipedia has run out of server bandwidth and is only allowing edits by logged-in accounts for locations like New York, or Europe, or however you interpret "location". Indeed, users are reporting their city or saying they have been trying for four days.
Logged-out visitors to this wiki using your IP address have created too many accounts in the allowed time period. Please create an account to edit.
Or any better alternative. I don't see any point in saying "try again later"; it could be 24 hours or a month (if you are one of the lucky six).
Note: this proposal is not about discussing what I understand to be the de facto super-block on libraries/universities etc caused by this rate limiter - just to improve the message. Commander Keane (talk) 11:30, 31 December 2025 (UTC)[reply]
Strong support on improving the message; the current version is just too vague, and I would have read it the same way as Keane did if I were a newbie. Support the version above, as we don't have a better proposal yet. ~/Bunnypranav:<ping>12:52, 31 December 2025 (UTC)[reply]
I agree with everyone above, this needs improvement and Commander Keane's suggestion is good (although I'd wikilink IP address). Improving it locally is a good idea regardless of whether it is also pushed upstream, although we should do that too if we can. Thryduulf (talk) 19:15, 31 December 2025 (UTC)[reply]
Reading it again, I mostly agree with this - "have created too many accounts ... create an account" is going to be super confusing. However, the accounts are "temporary" not "temporal", and "temporary accounts" should probably be wikilinked somewhere as well. Thryduulf (talk) 16:50, 2 January 2026 (UTC)[reply]
Attached is an image of a short conversation with @Thiemo Kreuz (WMDE) regarding the phrasing of the updated message. It is in Gerrit as part of the code review in my patch to push this change upstream. Any thoughts on how to make it easier to understand for non-technical folks, especially regarding the mention of IP address? ~/Bunnypranav:<ping>13:10, 4 January 2026 (UTC)[reply]
Looking at the gerrit discussion, my new suggestion is:
Logged-out editing from your network location is temporarily limited. Please create an account to edit.
I'm not sure that's possible as I think that there is only a single message that gets called regardless of which throttle is hit (it's possible that this is for BEANS reasons, but that's speculation on my part). We could list the possible reasons, but the gerrit discussion suggests that this list is quite long and includes things that will be difficult to explain in non-technical language.
While knowing why will be interesting to some people, all that is relevant to most people is:
They can't currently edit without logging in
This is not something that can be fixed by just trying again
I was alerted to the Phabricator task working on the flow that will eventually replace this message (link). From an image on T410386 I came up with:
To prevent spam, we limit editing without an account. Creating an account is quick, free and lets you edit any time. Create an account.
It meets Thryduulf's three points. It is vague but a layperson can understand it. Perhaps it is not 100% accurate, but it is not alarming (the point of my initial request). If a user wants to learn about IP addresses, network locations, Temporary accounts and explicit throttle triggers they can study computer science and read the MediaWiki codebase. I am thoroughly frustrated (along with those editors trying again later for the 67th time because Wikipedia has blocked Norway) and begging someone to change MediaWiki:Wikimedia-acct creation throttle hit-temp. Per T410386, it is going to get changed in the future anyhow. Commander Keane (talk) 00:22, 7 January 2026 (UTC)[reply]
The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
With the anniversary only four days away, and no close despite a request at WP:AN, this is an unideal WP:INVOLVED close by the nominator. There is clear consensus for a temporary logo change, but little discussion of how long the change should last. One participant suggested a month.
For the Legacy Vector skin, four options were proposed. There is consensus against the proposed pixelated puzzle globe for Legacy Vector. There is consensus for TheWanderingTrader's globe, given that no one exclusively supported Chaotic Enby's globe or the default globe after TWT's globe was posted. A few noted that the Vector 2022 logo requires a CSS hack and looks visually cluttered, supporting the removal of the globe in that skin altogether. However, not enough participants discussed the idea to determine consensus. Catalk to me!01:14, 11 January 2026 (UTC) (Link to Phabricator ticket for the logo change)[reply]
On 15 January 2026, Wikipedia will celebrate the 25th anniversary of its founding in 2001. @BMcnally-WMF proposed logo designs for the occasion in October 2025, which were improved through community discussion on Meta. BMcnally has also proposed a unique puzzle globe illustration for the Vector Legacy skin, which replaces the standard 3D puzzle globe.
Questions:
Should the current logo temporarily be replaced with the commemorative logo depicted in the mockup?
Can you please link to the exact image(s) you plan on swapping in, if available? I assume image #2 in your screenshot is a placeholder and not the actual proposal. –Novem Linguae (talk) 15:13, 5 January 2026 (UTC)[reply]
Hi! Following the discussions we've had on Meta, I oppose the Legacy Vector logo change, as it looks more like a pixelated globe than a puzzle one. I support the other changes aesthetically, although I'm less affected by them as I mainly use Legacy Vector myself. Chaotic Enby (talk · contribs) 15:21, 5 January 2026 (UTC)[reply]
I agree with Chaotic Enby - oppose the legacy Vector logo, support everything else, although as a Monobook user I won't see the pixellated globe myself. Thryduulf (talk) 15:53, 5 January 2026 (UTC)[reply]
Oppose the pixelated logo but support including the standard logo with the additional wording underneath. The pixelated design doesn't seem fit to serve as any logo on Wikipedia, let alone one honoring its quarter-century mark. The first one, using the current wikiball, works well, although the 25th anniversary printing should be as shown in the pixelated version. Randy Kryn (talk) 16:29, 5 January 2026 (UTC)[reply]
TWT's Proposal (Updated Color): Here's my attempt for legacy Vector. This is not quite as bold, but the single blue piece is more akin to the other logos for Vector 2022. I additionally added "Celebrating 25 years" below; I originally attempted adding "25 years of the free encyclopedia", but when rendered it was too small. - TheWanderingTraders (talk) 01:53, 6 January 2026 (UTC)[reply]
Honestly, once this discussion is closed, I wouldn't be opposed to this one taking the main file title and my "proposal" being clearly marked as not the actual one. Chaotic Enby (talk · contribs) 10:51, 6 January 2026 (UTC)[reply]
+1. Would be better still if the colour silver were incorporated somehow, as a 25th anniversary is a silver anniversary. Ham II (talk) 10:55, 6 January 2026 (UTC)[reply]
Hi @TheWanderingTraders Thank you so much for creating this, and thanks to everyone else for adding their preference! This is super helpful. We really like what TheWanderingTraders designed. We just made one little tweak to it (Vector Legacy logo proposal for Wikipedia's 25th anniversary): we updated the blue to the core blue we are using in all Wikipedia 25 assets! BMcnally-WMF (talk) 18:03, 7 January 2026 (UTC)[reply]
No problem @BMcnally-WMF! Glad I helped. One last thing: I'm not sure if the gradient and relief on the edge of the blue piece was fully added back. I updated my image with the color from your version and larger scaling of the 25 if that helps; no need to use it if this was intentional. Besides, it's honestly hard to see the difference at scale. - TheWanderingTraders (talk) 02:22, 8 January 2026 (UTC)[reply]
Support Vector2022 changes. Not explicitly opposed to original Vector changes and other skin discussions, but Vector2022 is our publicly visible face and should be the primary change under consideration. How long are we thinking of keeping it up, one month? CMD (talk) 00:18, 6 January 2026 (UTC)[reply]
Support the changes, except for the pixelated globe on Legacy Vector. Note that in these mockups, the Wikipedia globe is included as part of Vector 2022 because a MediaWiki:Vector-2022.css hack is required to position the wordmark properly when there is no actual logo (the Wikipedia globe), which is what the WMF recommends. Personally, I'd prefer following that advice and dropping the Wikipedia globe on Vector 2022/Minerva for the duration that we're having this banner, because the header becomes visually cluttered otherwise. Chlod (say hi!) 04:56, 6 January 2026 (UTC)[reply]
Support change with alternate Legacy Vector logo. Either TheWanderingTraders' logo or something similarly unintrusive -- maybe one with 25 in different writing systems? Giraffer (talk) 09:43, 6 January 2026 (UTC)[reply]
Support V22/Minerva versions and TheWanderingTraders' versions. However, I would prefer removing the globe altogether and replacing it with the 25 puzzle piece or something similar, as Chlod mentions. ARandomName123 (talk)Ping me!14:41, 6 January 2026 (UTC)[reply]
Support V22/Minerva versions and TWT's, also support removing the globe per Chlod. The blue puzzle piece motif is nice. Thanks to all the graphic designers for their work on this. Levivich (talk) 17:30, 6 January 2026 (UTC)[reply]
Support temporarily changing to add the puzzle piece and text; the 3rd globe here is much better. Absolutely not for the pixelated globe though. Either change that one less dramatically or leave it alone. ~ Argenti Aertheri(Chat?)21:11, 6 January 2026 (UTC) Updated @ 20:49, 7 January 2026 (UTC)[reply]
Support all but the pixel globe The puzzle piece from the celebration kit is lovely and I quite support using it. But the pixel globe is rather poor quality. Let's just keep the regular 'ol globe. CaptainEekEdits Ho Cap'n!⚓ 21:30, 6 January 2026 (UTC)[reply]
Support File:Wikipedia-logo-v2-en-25-alt.svg for the real Vector (and MonoBook). I like having the blue puzzle piece in the globe, and while making the whole globe blue is interesting in theory, having just the one piece be blue highlights the 25 the best. Neutral on Minerva and Vector2022 because I avoid those skins like the plague. Also, I made a Cologne Blue mockup for the lulz (see image at right). —pythoncoder (talk | contribs)03:17, 7 January 2026 (UTC)[reply]
If there are any Cologne Blue users who actually want to have this in their site title, paste this code into your cologneblue.css user subpage:
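A minimal sketch of such a snippet (not the actual code posted above; the #sitetitle selector and the reuse of the WP25 lockup SVG are assumptions on my part, with the 36px sizing and right: -40px offset taken from the reply that follows):
@media screen {
    /* hypothetical selector for Cologne Blue's site title; adjust to match the skin's actual markup */
    #sitetitle {
        position: relative;
    }
    #sitetitle::after {
        content: "";
        position: absolute;
        right: -40px;  /* offset suggested in the reply below */
        top: 0;
        width: 36px;
        height: 36px;
        /* illustrative image only; substitute the intended WP25 file */
        background-image: url("https://upload.wikimedia.org/wikipedia/commons/5/55/WP25_Primary_lockup_white_25.svg");
        background-repeat: no-repeat;
        background-size: 36px;
    }
}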
Oh my god I can't believe you actually did it. FWIW it lines up a bit better if you set all of width/height/background-size to 36px, and right to -40px —pythoncoder (talk | contribs)18:59, 7 January 2026 (UTC)[reply]
Comment I'm going to demonstrate my ignorance of how the skins work. I use Timeless and I know I'm not the only one, but I haven't seen it mentioned here. Will Timeless happily display one of the new or old Vector versions of the logo? @BMcnally-WMF can you make sure that Timeless users aren't left out of the celebrations? (if even Cologne Blue gets to join in...) ClaudineChionh (she/her · talk · email · global) 21:44, 7 January 2026 (UTC)[reply]
Obviously this should be available in all skins. I can post the required CSS here, then interface admins will paste it into relevant MediaWiki: pages. sapphaline (talk) 21:57, 7 January 2026 (UTC)[reply]
Support with TheWanderingTraders' Vector 2010 version including the colour tweaks, oppose original pixelated logo as unsuitable; it looks like it belongs to the 1990s while predating the encyclopedia. CNC (talk) 10:06, 8 January 2026 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Approximate results as of 8 January
Skin: Vector 2022 (screenshot not reproduced)
Required CSS:
@media screen {
    .mw-logo-wordmark, .mw-logo-tagline {
        object-position: -9999px;
        background-repeat: no-repeat;
        background-size: contain;
    }
    .mw-logo-wordmark {
        background-image: url("https://upload.wikimedia.org/wikipedia/commons/5/55/WP25_Primary_lockup_white_25.svg");
    }
    .mw-logo-tagline {
        background-image: url("https://upload.wikimedia.org/wikipedia/commons/9/9a/WP25_Vector_2022_LINE.svg");
    }
    .mw-logo-icon {
        display: none;
    }
    .mw-header {
        padding-top: 1em; /* make spacing identical to how it is with the globe */
    }
}
Modern and Cologne Blue are exceptions. This still applies for the supported skins (Vector Legacy, Vector 2022, and Minerva), which is what the vast majority of readers and users see. Chlod (say hi!) 13:57, 8 January 2026 (UTC)[reply]
Logo change has been scheduled for 21:00 UTC, 14 January 2026, as the closest backport window to 00:00 UTC, January 15. The next window after that is at 08:00 UTC on January 15. You can preview the logos at T414271. Note that this does not include the changes for Modern and Cologne Blue, nor the styling change for Vector 2022; those require local CSS changes by an interface administrator. More details on the task. Chlod (say hi!) 03:27, 11 January 2026 (UTC)[reply]
@Chaotic Enby: Hmm, it's showing up normally for me. Vector Legacy still uses a PNG for the logo, with a maximum resolution of 270×310 px, so how blurry it is may depend on your screen resolution or how much you zoom into the logo. Chlod (say hi!) 04:06, 15 January 2026 (UTC)[reply]
The Wikipedia sign up page disclaimer idea
Hello everyone. I was told by email from the Wikimedia Foundation to direct my query here.
Recently I have seen an influx of what I call "dud articles": articles that people try to make about themselves, their company, their mother, etc. I believe this wastes our users' time in declining and reading these pages, so on the Teahouse I suggested a new system: a disclaimer on the account creation page, saying something along the lines of:
”If you are coming to Wikipedia to write about yourself, a family member, business or influencer please reconsider and refrain from making such articles”
Even if that were to deter only one person, it would still be an improvement. I did get support for this idea from other users on the Teahouse.
I'm personally for the harsh wording. Being soft about it seems to make the kind of editors who need this warning think they can be an exception to the rules. The number of times I've seen these WP:NOTHERE types try to argue something isn't technically against the rules because we try not to speak in absolutes is laughable mgjertson (talk) (contribs) 20:28, 8 January 2026 (UTC)[reply]
Cease-and-desist letters and mortgage letters can certainly be pretty harsh, or intimidating, but I don't think we want too harsh or too soft; a middle ground would be the ideal solution. Mwen Sé Kéyòl Translator-a (talk) 10:10, 12 January 2026 (UTC)[reply]
Anything we say on the sign-up page should be policy-based and link to the policy. Imho a WP:LOCALCONSENSUS here is not strong enough for changing what every new user sees as "instructions" for signing up from now on. I think we could say that it is discouraged, but not that they cannot do it, unless we change policy first, to change 'discouraged' to 'must not' and arm it with some teeth with what kind of sanctions happen if they do so anyway. Mathglot (talk) 03:58, 6 January 2026 (UTC)[reply]
Perhaps something not as harsh. I believe the policy says that writing about yourself is discouraged and is a COI, so perhaps it could say "Please note, writing about yourself, your company or another topic close to you is discouraged, and if you do decide to join to write a page on these, please reveal your Conflict of Interest"
Not recommending sanctions at all; what I was trying to say is that you ought not to say a user must not do something on the sign-up page unless the policy page supports that; the flip side is that if you want to say 'must not' here, then the policy page must be changed first. I wouldn't support that, just explaining what's dependent on what. Hope I am being clearer, but if not, I blame the wine. Mathglot (talk) 10:00, 6 January 2026 (UTC)[reply]
Oh sorry I completely understand you now. I think a good compromise is recommending not to (discourage), that doesn’t stop someone, but one or two people might instead look at the rules, or decide not to write that page on their mum, or their dog, or their favourite TikToker etc. Mwen Sé Kéyòl Translator-a (talk) 10:19, 6 January 2026 (UTC)[reply]
I would be opposed to adding any extraneous information or links to a sign-up page that are unrelated to signing up for an account. Once they are signed up, they are automatically assigned a mentor, and often (but not invariably) receive one of the welcome templates maintained by the Welcoming committee. A link to Help:Your first article is present in 23 different welcome templates, including the most popular template {{Welcome}}, which is present on the talk pages of over 600,000 users. Also, the sign-up page disappears after they are signed up, and they lose the link, whereas their User talk page remains, and they can consult the links anytime. Mathglot (talk) 02:49, 8 January 2026 (UTC)[reply]
At face value, this seems like a good idea. But as with any idea, there could be unintended consequences:
Thousands of new accounts are created every day. Most of those accounts never make an edit. Do we really need to show all these people this additional information? Would a scary warning message discourage users who never intended to edit promotionally at all?
Most accounts never engage in promotional editing. By showing everyone a message telling them not to do it, we may give them ideas that they previously didn't have.
If we imply that these people shouldn't create an account, will they simply make promotional edits without an account (from TAs) instead?
I mean, in life people are told not to commit crime, or see "do not steal" or "employees only" signs, and most abide; it doesn't discourage someone from living their life or going into a shop, as they know they won't do what the signs tell them not to do. I don't think we would give them ideas, because they would know it will be declined, hence the warning. Thirdly, I do see your last point about TAs instead, but TAs can't make pages, I believe, so that cures the issue.
Most don't "abide" because they read a sign that says "Don't steal"; they were never going to steal anyway. Vandals gonna vandalize, no matter what words you add (which they will skip). I question whether there is any point to a wording change at all, especially wording they will see once, and never again. Mathglot (talk) 22:22, 11 January 2026 (UTC)[reply]
I do see what you mean, but perhaps there would be users who simply don't know that Wikipedia isn't a sort of LinkedIn or Instagram and would stop when told, like people on the Teahouse who accept their mistakes and don't continue with the page on themselves, a family member etc.
Has anyone called for any stats (easily done) to find out just how often such articles that people try to make about themselves, their company, their mother etc. actually arrive in the NPP feed, or even asked the 800-strong NPP community? They are dealt with swiftly at CSD. There are dozens of far worse articles that creep in under the radar of the less experienced patrollers. Kudpung กุดผึ้ง (talk) 14:36, 11 January 2026 (UTC)[reply]
The initial proposal probably doesn't say what the OP intended. The wording of the initial proposal includes "yourself, a family member, business or influencer". It doesn't say "your business", and therefore it includes any business and any influencer – including, e.g., Microsoft and MrBeast.
@Kudpung, I second your idea about getting more information. I sometimes wish for a list of research ideas (e.g., for grad students in search of ideas). I wonder what we would learn if someone contacted the last 50 companies for which articles were created, and said "I'm doing research and would love to know what it took to get your Wikipedia article and if you have any advice for other companies". How many would say "We hired ScammersRUs" or "We just had an intern write it"? How many would say they were unaware of it having been created? WhatamIdoing (talk) 22:54, 11 January 2026 (UTC)[reply]
Apologies for not being clear, I do mean “your business” and not all influencers (as some now warrant a page, like Mr Beast). I may have a talk with the NPP team and see if they have any stats on this matter. Mwen Sé Kéyòl Translator-a (talk) 10:15, 12 January 2026 (UTC)[reply]
You could also ask the approx. 200-strong (and growing) Wikipedia Mentor community. Anecdotally, I see more autobiographies and my-band/my-biz articles on new account User pages (not subpages) than I do appearing in Draft space (marked for submission or not). Besides welcoming new users, I also hang out at the WP:Teahouse and often see them there as well. Data is always a good idea, and if you want to measure something, draw up a null hypothesis and a proposal for an A–B test where half of new attempts get a create account page including the text you want to measure (A), and the other half (B) do not, and let it run for a few months. Later, you can analyze the data and see what happened. Keep in mind that things may not go the way you want: one possible outcome (besides adherence to guidelines) is that A numbers go down, while B numbers stay the same. Then you'd have to argue whether that's a good thing, if A folks who went on to complete registration ended up adhering to the rules a bit better. Mathglot (talk) 00:06, 12 January 2026 (UTC)[reply]
I think that if someone thinks it's harmless fun to create an article about their pet rabbit (real example), they wouldn't have stopped to read a sign-on notice, just like many of us ignore the "terms and conditions" when shopping. If someone is determined for some reason to promote a person, place or whatever, they'll do it in any case. A sign-on notice wouldn't deter them. Also, in general, how much do we think of editors being in a contractual relationship with Wikipedia? If we do think of it that way then, yes, terms and conditions apply. If we think of it more as an informal personal relationship then it's more about assuming good faith, trusting and forgiving, but accepting we'll have to spend time – maybe too much time? – working on the relationship by debating the case for or against every silly or promotional article on a trivial subject. --Northernhenge (talk) 16:47, 14 January 2026 (UTC)[reply]
Yes but a large warning on the sign up page will have to be read, because it’s in your face, not some small Terms and conditions text (which I will admit I’ve never read). I don’t want to dissuade people from editing, but even a polite notice might deter a few who were going to make pages on themselves or what not. Then again most always feel like they are the exception and will try and try and try, so most probably will read the disclaimer and continue on regardless. Mwen Sé Kéyòl Translator-a (talk) 18:03, 14 January 2026 (UTC)[reply]
Gosh there really is a Wikipedia page for everything 😂 perhaps we just need a big flashing words on the screen that blocks the sign up page until you read it fully like that smiling virus thing (I’m joking btw. I wouldn’t go that extreme). Mwen Sé Kéyòl Translator-a (talk) 09:09, 15 January 2026 (UTC)[reply]
They never have let me use blink text on wiki, but we sometimes get to use big red text.
If you wanted to work on warnings for creating articles, then that should appear when you click the [Edit] button. It might be possible to put something into the software itself. Imagine something that triggers if the edit count is <50, and now you have to answer a few simple questions before it will let you proceed. WhatamIdoing (talk) 23:14, 15 January 2026 (UTC)[reply]
A user project is in the making to address precisely the registration page: by offering a few simple words of very short text, in the nicest possible way, it would channel new users through a new route that would not only prevent the creation of nonsense articles, but also provide much better on-boarding and truly interactive help than the current development at the WMF, which is in its 3rd (or fourth?) year with limited success. Kudpung กุดผึ้ง (talk) 22:22, 14 January 2026 (UTC)[reply]
Sounds interesting; can we please get a link to the 'project in the making'? Also, what is the current development with limited success in its 3rd/4th year? Thanks, Mathglot (talk) 00:22, 15 January 2026 (UTC)[reply]
Given that at this point the prohibition against using LLMs in user-to-user communication WP:LLMCOMM has become something of a norm, I think it would be sensible to make it an official guideline as part of the ongoing attempt to strengthen our LLM policy.
Rather than just promote the exact text of LLMCOMM, I've decided to try to create something which synthesises LLMCOMM, HATGPT and general advice about LLMs in user-to-user communication. My proposal as it currently stands is at User:Athanelar/Don't use LLMs to talk for you
My proposed guideline would forbid editors from using LLMs to generate or modify any text to be used in user-to-user communications. Please take a look at it and let me know if there's anything that should be added or modified, and if you agree with the proposed restrictions. I'd love to workshop this a bit and get it to a stage where it can be RfCed. Athanelar (talk) 13:33, 7 January 2026 (UTC)[reply]
My thoughts:
Your proposal is much more strict than LLMCOMM (which is already enshrined in guidelines as WP:AITALK). It doesn't just synthesize LLMCOMM and HATGPT, which both allow exceptions for refining one's ideas; it goes beyond that and bans LLM use entirely for writing comments. This, combined with the Editors should not use an LLM to add content to Wikipedia phrasing of the proposed NEWLLM expansion, would effectively ban all use of LLMs anywhere on the English Wikipedia. This makes sense given your stated opinions on LLM policy, but I'm sure this is going to get significant opposition.
Your proposal also goes beyond commenting to basically say that LLMs are useless for any Wikipedia editing at all, as indicated by the section about copyediting. This seems out of place for a guideline that is supposed to be about using LLMs for comments. Again, this makes sense given your stated anti-LLM sentiments, but I have seen it repeatedly demonstrated that such a sentiment is far from universal.
I am concerned mainly because this guideline assumes bad faith from LLM-using editors. Most LLM-using editors are unaware of their limitations because of the massive hype surrounding them. My opinion is that instead of setting down harsh sanctions for LLM use, we should instead educate new users on why LLMs are bad and teach them to contribute to Wikipedia without them.
Finally, a lot of editors are just worn out at this point from having so many LLM policy discussions in such a short period of time. Can we at least wait until the NEWLLM expansion proposal is over? SuperPianoMan9167 (talk) 14:23, 7 January 2026 (UTC)[reply]
I appreciate your feedback and your continued presence as a moderate force in these discussions.
I recognise my proposal is quite extreme. My goal was to shoot for 'best case' and compromise from there as necessary.
The subsection on copyediting exists to justify the restriction against using LLMs to refactor, modify, fix punctuation etc; because at best the LLMs are unfit for this task anyway, and at worst it provides a get out of jail free card for bad faith editors. The overall section is in fact expressly intended to demonstrate that LLMs simply are not any good at doing the things people might want them to do in discussions.
I have tried to avoid that by pointing out that I believe the motivation to use LLMs in these cases comes from a good place (concerns about one's abilities)
I understand; but I still have the passion and energy, and I hope others do too. We are in something of a race against the clock here; every month we wait before strengthening our policies is another month of steadily being invaded by this type of content.
Missing the biggest reason not to use LLMs for your comments: it will make people more likely to dismiss your comments, not less.
As usual I think we need to specifically name what tools we are talking about. People genuinely don't know things are AI that actually are, and if we can't convince them of that, we can at least say "don't use ____" in the guideline.
they are not specifically trained in generating convincing-sounding arguments based on Wikipedia policies and guidelines, and considering they have no way to actually read and interpret them - Technically you could provide policies and guidelines in a prompt. Most people probably aren't doing that, but they could.
There are probably better copyedit examples; the first one seems like splitting hairs, and the original sentence had the same problem with different punctuation. The one where an AI copyedit turned "did not support Donald Trump" to "withdrew her support for Donald Trump" comes to mind. Better yet would be a copyedit to a talk page comment, though that might be hard to come by without using AI yourself.
"Editors are not permitted to use large language models to generate or modify any text for user-to-user communication" will have a disparate impact that discriminates against people with some kinds of disabilities, such as dyslexia. A blanket ban is therefore in conflict with WP:ACCESS and possibly with foundation:Wikimedia Foundation Universal Code of Conduct.
I think it is patronizing to tell people "You don't need it" when some of them actually do. I oppose telling English language learners to simply go away ("If your English is insufficient to communicate effectively, then once again, you unfortunately lack the required language ability to participate on the English Wikipedia, and you should instead participate on the relevant Wikipedia for your preferred language"), because (a) that's rude, and (b) sometimes we need them to bring information to us. If you don't speak English, but you are aware of a serious problem in an English Wikipedia article, I want you to use all reasonable methods to alert us to the problem.
Here's the Y goal:
A: I don't know English very well, but the name on the picture in this article is wrong.
B: Thanks for letting us know about this factual error. I'll fix it.
Here's the N anti-goal:
A: I don't know English very well, but the name on the picture in this article is wrong.
B: This is obvious AI slop. If you can't write in English without using a chatbot to translate, then just go away and correct the errors at the Wikipedia for your native language instead!
A: But the error is at the English Wikipedia.
B: I don't have to read your obvious machine-generated post!
Discussion on English competence requirements on enwiki
At minimum there must be a carve-out for machine translation because basically all machine translation nowadays uses the LLM architecture, as it typically performs better than other types of neural networks. (In fact, the very first transformer from the 2017 paper Attention Is All You Need was not designed for text generation; it was designed for machine translation. The generative aspect was pioneered by OpenAI's GPT model architecture with the release of GPT-1 in 2018.)
I understand your point, but what you're essentially arguing then is that WP:CIR also needs to be modified because we shouldn't require communicative English proficiency.
I think it is patronizing to tell people "You don't need it" when some of them actually do. My point is that the people who need AI to talk for them, translate for them, interpret PAGs for them etc have a fundamental CIR issue that the LLM is being used to circumvent. We can't simultaneously say "competence is required" and also "if you lack competence you can get ChatGPT to do it for you" Athanelar (talk) 19:27, 7 January 2026 (UTC)[reply]
From WP:CIR: It does not mean one must be a native English speaker. Spelling and grammar mistakes can be fixed by others, and editors with intermediate English skills may be able to work very well in maintenance areas. If poor English prevents an editor from writing comprehensible text directly in articles, they can instead post an edit request on the article talk page.
Nor am I saying anyone must be a native speaker, merely that if someone's English level is so low that they require an LLM to communicate legibly, then they are blatantly not meeting the CIR requirement to have the ability to read and write English well enough [...] to communicate effectively
Saying "actually, if you can't communicate effectively then you can just have an LLM talk for you" seems to be sidestepping this requirement.
I also simply don't see the reason. Other-language Wikipedias already struggle for editors compared to enwiki, why should we encourage editors without functional English to find loopholes to edit here rather than being productive members of the wider Wikipedia project? Athanelar (talk) 19:41, 7 January 2026 (UTC)[reply]
Because we need people to tell us about errors in our English-language articles even if they can't communicate easily in English. It is better to have someone using LLM-based machine translation to say "Hey, this is wrong!" than to have our articles stay wrong.
This should not be a difficult concept: Articles must be accurate. If the only way to make our articles accurate is to have someone use an LLM-based machine translation tool to tell us about errors, then that's better than the alternative of having our articles stay wrong. WhatamIdoing (talk) 19:55, 7 January 2026 (UTC)[reply]
We really don't need English competence. If you don't know English, you can post in your native language, and someone else can translate it. By the way, the CIR discussion seems to be tangential. Nononsense101 (talk) 19:35, 7 January 2026 (UTC)[reply]
WP:Competence is required directly states editors must have the ability to read and write English well enough to avoid introducing incomprehensible text into articles and to communicate effectively. and I have absolutely never heard of it being acceptable to participate in the English wikipedia by typing in another language and having others translate. Athanelar (talk) 19:42, 7 January 2026 (UTC)[reply]
ENGLISHPLEASE says: This is the English-language Wikipedia; discussions should normally be conducted in English. If using another language is unavoidable, try to provide a translation, or get help at Wikipedia:Embassy. (emphasis mine)
Athanelar, I don't know how else to say this: This is a huge project, and you've only been editing for two years. There's a lot you've never heard of. For example, I'd guess that you've never heard of the old Wikipedia:Local Embassy system, in which the ordinary and normal thing to do was "typing in another language and having others translate". Just because one editor (any editor, including me) hasn't seen it before doesn't mean that it doesn't happen, or even that it isn't officially encouraged in some corner of this vast place. WhatamIdoing (talk) 19:58, 7 January 2026 (UTC)[reply]
Yes, I get what you mean, but I've also seen the contrary plenty of times; people show up to the teahouse or helpdesk and ask questions not in English, and the response is universally "sorry, this is the English Wikipedia"
It just seems needlessly obtuse to say "well, there's technically hypothetically a carveout for occasional non-English participation here, sometimes, maybe" when in practice that really isn't (and shouldn't be) the case. Athanelar (talk) 21:14, 7 January 2026 (UTC)[reply]
Okay, sure, but IAR can never be used as a justification to not prohibit something, because by that logic we can't forbid anything because IAR always provides an exception. Athanelar (talk) 21:21, 7 January 2026 (UTC)[reply]
Yes, editors are sometimes inhospitable and dismissive. Yes, editors sometimes misquote and misunderstand the rules. I could probably fill an entire day just writing messages telling people that they'd fallen into another one of the common WP:UPPERCASE misunderstandings. It is literally not possible for anyone to know and remember all the rules. Even if you tried to read them all, by the time you finished, you'd have to start back at the beginning to figure out what had changed while you were reading. None of this should be surprising to anyone who's spent much time in discussions. But the fact that somebody said something wrong doesn't prove that the rule doesn't exist. It only shows their ignorance.
The ideal in the WP:ENGLISHPLEASE rule (part of Wikipedia:Talk page guidelines) is for non-English speakers to write in their own language, run it through translation, and paste both the non-English original and the machine translation on wiki. A guideline that says not to use machine translation on talk pages would conflict with that. WhatamIdoing (talk) 21:21, 7 January 2026 (UTC)[reply]
I really have an issue with this line of logic, because what does if using another language is unavoidable even mean? It seems to directly conflict with both itself and WP:CIR
Please use English on talk pages, and also you are required to be able to communicate effectively in English, but if you can't then actually you aren't required and you can just machine-translate it.
Nevermind my guideline proposal, it sounds like the existing guidelines and norms are already in a quantum superposition on this issue. Athanelar (talk) 21:24, 7 January 2026 (UTC)[reply]
@WhatamIdoing, spelling out this scenario has helped me think through some of what I'm seeing in this discussion. I think that a weak point in LLMCOMM, CIR, and similar guidelines is that there are really at least three different broad categories of "editors" who have different needs and interests:
People who genuinely want to help build an encyclopaedia and may be in this for the long term ("Wikipedians") – most of our policies and guidelines are written with these editors in mind
People who have identified serious problems in specific articles (regardless of whether they're article subjects or have a COI, or are uninvolved) – if there are serious problems that need to be fixed, we need to fix them, and we should be thanking these helpful non-Wikipedians, not putting up barriers based on CIR or LLMCOMM
People who are here for self-promotion, not to build an encyclopaedia – we have rules and procedures for dealing with these
Amazingly I don't think this is said anywhere in LLM PAGs or essays, but we should say somewhere that "Wikipedia does have a steep learning curve and it is very normal for a new editor to struggle. Some learn quicker than others, and people are obligated to be patient with new editors and help them improve." Basically, don't worry if you find it hard. I'd rather something like that replaced "You don't need it" Kowal2701 (talk) 21:23, 7 January 2026 (UTC)[reply]
Note that I have slightly rewritten the "You don't need it" section to focus a bit more on the encouragement, and also to soften the language around English proficiency. @WhatamIdoing @SuperPianoMan9167 et al, is this something more in line with your ideal spirit? Athanelar (talk) 21:34, 7 January 2026 (UTC)[reply]
Yes! I'm still somewhat opposed to the general premise, banning all use of LLMs in comments, but that section is much better now.
My ideal version of such a guideline would be:
Generating comments with LLMs (outsourcing your thinking to a chatbot) is prohibited. You have to be able to come up with your own ideas.
Modifying comments with LLMs, such as using them for formatting, is strongly discouraged. This is due to the risk of the LLM going beyond changing formatting and fundamentally changing the meaning of the comments.
I do also like this. Many editors say that they used LLMs "only for grammar" while having the kind of issues that only comes with LLM generation (for example, the same vague, nonspecific boilerplate reassurances that can be found almost word-for-word in at least half of the unblock requests I've seen), and others might genuinely not realize that the LLM has completely changed the meaning of their comment behind a facade of "grammar fixes". Chaotic Enby (talk · contribs) 23:04, 7 January 2026 (UTC)[reply]
Revision 2
Per the feedback given, I have changed the scope of the proposal. The proposal now:
Forbids the use of LLMs to generate user-to-user communication, including to generate a starter or idea that a human then edits. (this clause is added to close the inevitable loophole that would arise from that)
Strongly discourages the use of LLMs to review or edit human-written user-to-user communication, and explains that if doing so results in text which appears wholly LLM-generated, then it may be subject to the same remedies as for LLM-generated text
So both LLM-written and LLM-written, human-reviewed communications are not allowed.
The sentence about people unwilling or unable to communicate/interpret/understand feedback etc. should be reworded to the following: People unable to communicate with other editors, interpret and apply policies and guidelines, understand and act upon feedback given to them etc. should ask for help at the teahouse. If you keep the current version, the word incompatible should not be linked, as the linked page is about categories and redirects, not related to the linking sentence. In any case, I support the proposal. Nononsense101 (talk) 02:39, 8 January 2026 (UTC)[reply]
Nobody is arguing that we should treat text as AI generated just because GPTZero says so; this is a strawman. I even have another proposal specifically to address the identification of AI generated text, but that's for another time. Athanelar (talk) 00:39, 9 January 2026 (UTC)[reply]
Nobody (here) is arguing that we should trust GPTZero, and I suspect that everybody here has seen editors actually do that, and believe they are completely justified in doing that. WhatamIdoing (talk) 03:30, 9 January 2026 (UTC)[reply]
Sure, but if someone quoted my hypothetical guideline to justify collapsing an evidently good-faith, human-written edit request just because GPTZero said it's AI generated, I think any sensible editor seeing that would say it's not a reasonable application of the guideline.
You can't argue against a guideline by taking the worst possible way a person could misinterpret it. It constantly happens that editors accuse other editors of personal attacks because they get told their contribution was bad, does that mean WP:NPA isn't fit for purpose? Athanelar (talk) 03:48, 9 January 2026 (UTC)[reply]
For many editors, "GPTZero said it's AI generated" proves that it's not a "human-written edit request". If you don't want that to happen per your proposal, then you need to increase its already bloated (~1800 words) size even more, to tell editors not to believe GPTZero. WP:NPA might be a viable model for this, as it explains both what is and isn't a personal attack, and how to respond to differing scenarios.
I can, and have, since before some of our editors were even born, argued against potentially harmful rules by taking the worst possible ways a person could misinterpret them, and then deciding whether that worst-case wikilawyering is both tolerable and likely. Thinking about how your wording might be misunderstood or twisted out of recognition is how you're supposed to write rules.
This has been known since at least the 18th century, when James Madison wrote in Federalist No. 10 that "It is in vain to say, that enlightened statesmen will be able to adjust these clashing interests, and render them all subservient to the public good. Enlightened statesmen will not always be at the helm: Nor, in many cases, can such an adjustment be made at all, without taking into view indirect and remote considerations, which will rarely prevail over the immediate interest which one party may find in disregarding the rights of another, or the good of the whole", and went on to propose a large federal republic as a way of keeping individual liberty (which is a necessary precondition for factionalism) and national diversity (which leads to factionalism through an us-versus-them mechanism) while reducing the opportunity for any one faction to seize power over the others.
I recommend Madison's work on factionalism to anyone who wants a career in policy writing, but for now, spend a few minutes thinking about how we could adapt Madison's definition of a faction: "a number of Wikipedians...who are united and actuated by some common impulse of passion against AI...adverse to the rights of other Wikipedians (e.g., to have others focus on the content, instead of focusing on the tools used to write it), or to the permanent and aggregate interests of the community (e.g., to not WP:BITE newcomers or have hundreds of good-faith contributors told they're not welcome and not WP:COMPETENT)."
In the present century, we call this phenomenon things like misaligned incentives (e.g., editors would rather reject comments on a technicality than go to the trouble of correcting errors in articles or explaining why it isn't actually an error, but articles need to be corrected, and explanations help real humans), and we address it through processes like designing for evil (e.g., don't write "rules" that can be easily quoted out of context; don't optimize processes for dismissive or insulting responses) and use cases (e.g., How will this rule affect a person who doesn't speak English well? A WP:UPE? A person with dyslexia? An autistic person? A one-off or short-term editor?).
For example:
Protect the English language learner by declaring AI-based machine translation to be acceptable.
Ignore the UPE's AI use as small potatoes and block them for bigger problems.
Educate anti-AI editors that both human- and AI-based detectors make mistakes, and that these mistakes are more likely to result in editors unintentionally discriminating against editors with communication disabilities.
Remind editors to WP:Focus on content, which sometimes means saying "Thanks for reporting the error" instead of collapsing AI-generated comments.
I do understand your point, and am truly appreciative of the time and effort you're taking to make it. I still have two concerns with it;
The first is bloat; as you've indicated, words are precious in any policymaking effort and the longer people have to read to 'get to the point' the less chance they will. I'm concerned at how much weight should be added to cover things like "it's also possible to make mistakes without AI" that in any case should be assumed by any reasonable audience. It also feels redundant, i.e., AGF and BITE still apply even if I don't explicitly restate them. The existence of a guideline prohibiting AI-generated text is by no means a carte blanche to ignore those other, more fundamental principles.
Given that your primary cause for concern seems to be about collapsing AI-generated comments; well, that already exists as WP:HATGPT, all I'm doing is restating it here. However, on rereading that, I suppose I could (and will) add some language specifying that conversations should not be collapsed if their content proves otherwise extraordinarily useful, which should cover the edge cases you're concerned about, with super-useful AI users and overly anal-retentive wikilawyers.
@Athanelar, when I'm working on policy-type pages, the definitions in RFC 2119 are never far from my mind. Here are the most relevant bits:
SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
MAY This word, or the adjective "OPTIONAL", mean that an item is truly optional.
And now let's compare what you wrote vs the text at HATGPT:
Comments, nominations, and opening statements that are obviously generated (not merely refined) by a large language model or similar AI technology may be struck or collapsed...
In short, you've said that this "SHOULD" normally happen unless someone has carefully considered the situation and decided to make a special exception (e.g., for "extraordinarily useful" comments), and the existing guideline says that this "MAY" happen, but it's strictly optional and not ever required. Can you see the gap between what you're proposing and the existing guideline? If you genuinely believe that well, that already exists as WP:HATGPT, all I'm doing is restating it here, then I don't think we're speaking the same language. WhatamIdoing (talk) 01:31, 12 January 2026 (UTC)[reply]
Sure; nevertheless, the kind of person you're describing would do what you're saying regardless of whether it's 'should' or 'may', and my entire contention is whether it's a realistic enough concern to affect anything, which I simply doubt. People who fail to AGF won't get a free pass just because "Athanelar's guideline says 'should', so that means I can collapse whatever I want".
I do think that if someone reads AI-generated comment, and collapses it per your proposal, that they should "get a free pass", because they were actually following the guideline to the best of their good-faith ability.
As a minimum, I suggest that when you change "may" optionally to "should" normally, you don't present that as a non-change that is already enshrined in guidelines. This is a significant change; either own it as being a change, or don't propose it. WhatamIdoing (talk) 23:40, 12 January 2026 (UTC)[reply]
Who are these editors who are relying on GPTZero and nothing else? That doesn't describe anyone I'm aware of working on AI cleanup, and it doesn't describe most of what goes on at ANI (the people who bring in GPTZero or whatever tend to be uninvolved participants). Gnomingstuff (talk) 14:27, 9 January 2026 (UTC)[reply]
There was a discussion not long ago about LLMs creating spam; see here. As I said there, I think this is one way to look at it -- we will not be able to detect all uses of LLMs, but if our rules force LLMs to become hard to detect (because they have improved the usefulness of their posts) maybe that's the best outcome we can hope for. I can see why we want to ban LLMs for user communication, and for things like FAC and GAN reviews, but there is no guaranteed way to detect LLM-generated text. Plus I'd argue that in the right hands they are useful. I have used them myself to find problems in articles I have worked on, for example. TL;DR: I am not strongly opposed to a rule like the one suggested here, but I doubt it will be very useful. I don't have a better suggestion, though. Mike Christie (talk - contribs - library) 03:56, 8 January 2026 (UTC)[reply]
I wonder how long it will be before attempting a ban is just pointless, either because we can't detect it at all, or because the amount of time spent arguing over whether a comment is prohibited type of AI overtakes the cost of permitting it. WhatamIdoing (talk) 00:19, 9 January 2026 (UTC)[reply]
The amount of effort spent arguing whether something is AI-generated is already at times greater than the amount of effort spent determining whether the content of that something is actually problematic. Thryduulf (talk) 03:23, 9 January 2026 (UTC)[reply]
In my anti-goal scenario above, what's motivating B to ignore the reported error and focus on the communication style? Why is B failing at Postel's law of practical communication? Assume that B would respond in a practical manner if he was realistically able to. Is the problem within B himself (e.g., B is fixated on rule-following, to the point of not being able to recognize that the error report is more important than the method by which the error report was communicated)? Is the problem accumulated pain (this is the 47th one just today, and B's patience expired several comments ago)? Is the problem in our systems (e.g., if B can quickly dismiss a request as AI-generated, then harder work can be avoided)? WhatamIdoing (talk) 05:28, 9 January 2026 (UTC)[reply]
I have to admit I've never really thought about that before. My gut reaction is to say it's mostly "accumulated pain" with a bit of over-focus in there too: Someone finds and fixes a problem, then they find another similar one and fix that. At some point they realise they've seen quite a few of these and start looking for others to fix. This becomes an issue if they get overwhelmed by the scale of the issue and/or stop looking at the wider context to see whether it is actually a problem that needs to be fixed. Thryduulf (talk) 12:27, 9 January 2026 (UTC)[reply]
It's kind of a mix of:
This is the 47th one today, and the majority of the last 46 people were either hostile or uncooperative.
AI copyedits go much farther than "normal" copyedits do in terms of rewriting meaning -- they're more akin to line edits -- but AI companies do not always make it clear how much they're changing. So when someone hears "I just used it for copyediting," they're inclined to distrust that.
In reality, this kind of conversation does not usually begin with "Hi, I found a serious factual error in the article. Here's a source to show I'm not making this up," it begins with a wordy behemoth full of AI platitudes. But even those -- at least on article talk pages -- often don't result in that because many editors watching individual articles aren't aware that AI is even a thing (still). Where this conversation usually happens is someone asking a question, and receiving an AI reply.
Plus there's another 103 to go, and it always feels like "I" am the only person doing this (because there's no way to know how many people have already checked the thing that I'm checking now).
How much an AI copyedit changes shouldn't be visible to the other talk page participants.
I love that link, and I hope someone gets her book and improves Copyediting with it. I wonder if British editors are more irritated by AI 'copyediting' for tone/voice reasons.
Kristi Noem seems like a standard "why is Wikipedia mentioning this public scandal, it must be politically biased" type comment, so the good outcome was already off the table
Primerica seems like a standard "why is Wikipedia mentioning this negative coverage of a company instead of promoting it?" type comment, which, same
Scott Wiener comment hatted four months after it was posted; there was a substantive discussion before then
Pythagorean triple thread began with a wordy platitude behemoth yet still was not hatted until several comments deep (after the LLM had already provided low-quality sources when asked about them)
Hard to tell what's going on in the 2026 Tamil Nadu Legislative Assembly election thread but it looks like there was some backstory and at least one block prior to the comment
I'm not saying that none of these deserved to be hatted. I'm saying that they're not evidence supporting the claim that what "usually happens is someone asking a question, and receiving an AI reply".
You can repeat the search yourself if you'd like, and pick a different sample set. It's sorted to have the most recently edited Talk: pages containing that template at the top (e.g., /Archive pages if an archive bot just did its daily run – it's most recent edit of any kind, not specifically the most recent addition of the template). WhatamIdoing (talk) 05:53, 10 January 2026 (UTC)[reply]
As a Polish native speaker, my English is strong, but it is not at a native level. I can easily understand written and spoken English, but expressing myself in English - especially writing comments - is much harder for me than it is for native speakers. Banning the use of machine translation tools (which increasingly rely on LLMs) to edit messages would be exclusionary and would push people like me out of discussions about building the encyclopedia, even when we can judge whether a translation faithfully conveys what we meant.
This is even more pronounced for non-native speakers with dyslexia. Without tools that help with grammar and punctuation, even strong substantive arguments can come across as weaker or less persuasive - not because the reasoning is bad, but because the English reads poorly. Grudzio240 (talk) 10:49, 12 January 2026 (UTC)[reply]
I and many others have expressed and will continue to express that a poorly-worded non-native comment will always read better and be stronger and more persuasive than a comment which reads like LLM generation. In the former we can at least be confident that the ideas and convictions presented are your own, whereas in the latter we have no means to differentiate you from any of the other people that generate some boilerplate slop and call it a day. Athanelar (talk) 10:55, 12 January 2026 (UTC)[reply]
I understand the concern: when a comment reads like it was generated, it’s harder to trust that the wording reflects the editor’s own thinking. That said, as machine translation and AI-assisted editing become more common - and harder to detect - “imperfect English” will increasingly become a marker that singles people out. In practice, that can discourage non-native speakers (and people with dyslexia) from participating, even when their underlying points are solid. I think the better approach is to focus on the substance and evidence, and allow limited language assistance (especially translation), while still discouraging using LLMs to generate arguments or positions.
Also, reactions to AI-assisted text vary a lot. Not everyone reacts negatively to AI-assisted wording, and I don’t think policy should be optimized for the most suspicious readers. If the content is clear and sourced, that should matter more than whether the phrasing “sounds too polished”. Grudzio240 (talk) 11:08, 12 January 2026 (UTC)[reply]
Regarding that should matter more than whether the phrasing “sounds too polished”, the fact is that this isn't the most glaring sign of AI writing. We've seen many editors write in a quite refined way, without being suspected of using LLM assistance, as LLMs will overuse specific kinds of sentence structures (e.g. WP:AISIGNS). This is very different from the pop-culture idea of "anyone who uses refined or precise language will sound like an LLM". As these get associated with people using these tools to generate arguments completely divorced from policy, they get picked up as cues that make readers tune out from the substance of arguments, and end up hurting the non-native speakers they hope to help. As a non-native speaker myself, I might worry about flawed grammatical structures here and there, but I would much prefer that to other editors reading my answers with immediate suspicion due to obvious AI signs. Chaotic Enby (talk · contribs) 11:17, 12 January 2026 (UTC)[reply]
Defaults treating comments that have “AI SIGNS” as suspicious may undermine Wikipedia:Assume good faith. "Unless there is clear evidence to the contrary, assume that fellow editors are trying to improve the project, not harm it." We should start by evaluating the content; proceed to distrust only when the content or behavior indicates a real problem. Grudzio240 (talk) 11:38, 12 January 2026 (UTC)[reply]
The entire conceit of this guideline is that AI-generated comments are problematic in and of themselves. If the guideline said "ignore whether the comment is AI generated and just assess whether it violated any other policy or guideline" then it would be a pointless guideline. Obviously AI-generated comments which violate other PAGs are already forbidden -- because they violate other PAGs. The point of this is to forbid AI generating comments regardless of whether their content breaks any other PAG (obviously subject to the usual exception). Athanelar (talk) 11:42, 12 January 2026 (UTC)[reply]
Well, the close of the VPP discussion emphasized that “The word ‘generative’ is very, very important” and that “This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning… Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome … [and] this consensus should not be taken to deny them the option of using assistive technologies to improve their comments.” WP:AITALK was written on the basis of that discussion, and it seems to rest on exactly this distinction: LLMs should not be used to generate the substance of user-to-user communication (i.e., the arguments/positions themselves), but meaning-preserving assistance (e.g., translation or limited copyediting where the editor’s reasoning remains their own) was explicitly not the target of that consensus. Grudzio240 (talk) 11:58, 13 January 2026 (UTC)[reply]
The core issue is that AI signs are heavily correlated with fully AI-generated arguments, themselves usually detached from policy. AGF is not a suicide pact, and editors used to the preponderance of flawed AI-generated arguments (compared to the few meaningful arguments where AI has only played a role in translation/refinement) might discount all of them as falling into the former category. This is magnified by many editors choosing to defend clearly abusive uses of AI (for example, adding hallucinated citations) as only using it to refine grammar or correct typos, even when that manifestly wasn't the case. Chaotic Enby (talk · contribs) 12:10, 12 January 2026 (UTC)[reply]
For approximately the gazillionth time, saying that text is likely to have been generated by AI says nothing about good faith or bad faith. It is pointing out characteristics of text. By this logic, adding a "copyedit" or "unreferenced" tag is assuming bad faith. Gnomingstuff (talk) 17:20, 12 January 2026 (UTC)[reply]
Yes, but it's just a short step from "I think he used AI" to Chaotic Enby's "heavily correlated with fully AI-generated arguments" to "This person is just wasting my time with fake arguments and has no interest in helping improve Wikipedia". WhatamIdoing (talk) 23:45, 12 January 2026 (UTC)[reply]
That is very much assuming bad faith in what I said – I'm not saying that we should discount comments on that basis, only that some editors will do it, and I was explaining the source of that distrust rather than defending it. Chaotic Enby (talk · contribs) 00:07, 13 January 2026 (UTC)[reply]
I know this is not entirely what you are arguing, but this kind of LLM-nihilist stance I keep hearing like "why do you care how it was made if the content is policy compliant?" seems patently absurd to me. If the only thing we care about is the substance and not who made it or how we might as well use Grokipedia. It's rather like presenting someone with a dish of Ortolan and saying "if it tastes good, why concern yourself with the ethical implications of its production?" The ends simply do not justify the means, in my mind. Athanelar (talk) 11:33, 12 January 2026 (UTC)[reply]
I think there are huge differences in values that people have, from a cultural or philosophical perspective. Some people see AI as inherently evil and some just see it as a tool. If there was an end product of English Wikipedia, something we were actually trying to finalise one day, it seems silly, to the faction that sees AI as a tool, to make humans do the job of machines. You get the same thing either way, we're just making it harder on ourselves. They see no value in human labour over machine labour. The means don't need to be justified because they don't see anything wrong with them. People who prefer human output do. This is especially true in the larger context of modern society, where the system requires people to work even when there's no work to be done. If a machine is doing your job for you, that doesn't mean you don't have to work, it means you have to create a need for yourself. If robots are writing encyclopaedias, that's just another existing need filled. ~2026-24291-5 (talk) 12:13, 12 January 2026 (UTC)[reply]
You're missing the third camp of "AI is a tool, but a flawed one". Using AI as a tool to write an encyclopedia would work in theory, and might be a very real possibility in the future, but has shown its current limits, and regulating it is necessary to address those immediate concerns, rather than for more abstract philosophical reasons. Chaotic Enby (talk · contribs) 12:18, 12 January 2026 (UTC)[reply]
I would vote no on any guideline that purports to tell editors what technology they can and can't use to draft communications. It's none of our business. You don't like somebody's writing style? Too bad; don't read it. It doesn't matter if it was generated by ChatGPT or polished by Grammarly or if it's just bad writing: we can judge people for the content of their posts (e.g., WP:NPA, WP:NOTFORUM, WP:BLUDGEON, etc.), but not for the tools they use to draft that content. Also, if you've been active on Wikipedia for 3 months, maybe you don't try to write a new guideline that purports to tell everybody else what tools they can and can't use to communicate on Wikipedia. If most of your edits are about trying to fight against LLM use, you might be WP:RGW instead of WP:HERE. Levivich (talk) 17:59, 12 January 2026 (UTC)[reply]
We can and already do judge people for the tools they use to edit: that is why we have a bot policy, for example, or limitations on fast tools such as AWB. In these cases, the reason is the same as the proposed reasons to limit AI-generated writing. Namely, the potential for fast disruption at scale: someone can generate 50 proposals in a few minutes, leaving other editors in need of a disproportionate effort to address them all – or leaving the unread ones to be accepted as silent consensus, as no one will take the time to analyze 50 different proposals in detail. Additionally, it isn't necessarily helpful to say that if you've been active on Wikipedia for 3 months, maybe you don't try to write a new guideline, as newcomers can absolutely learn fast and have worthy insights – especially as you wish to judge others for the content of their posts. Chaotic Enby (talk · contribs) 18:44, 12 January 2026 (UTC)[reply]
This isn't a new guideline. It's a refinement of WP:AITALK, which has existed for an entire year now. The RfC that produced the guideline was closed stating in part (bolding mine): There is a strong consensus that comments that do not represent an actual person's thoughts are not useful in discussions. Thus, if a comment is written entirely by an LLM, it is (in principle) not appropriate. The main topic of debate was the enforceability of this principle.
Sorry, but both of you missed what I was saying. Re CE: I didn't say to edit, I said to draft communications, and our existing bot policy already prohibits spam (as you point out). Re GS: WP:AITALK is about the content--the output, what gets published on this website--not about the method. AITALK doesn't say editors can't use AI to start or refine their posts, or to copyedit or fix grammar. Any proposed guideline that says anything like This prohibition includes the use of large language models to generate a 'starter' or 'idea' which is then reviewed or substantially modified by a human editor. or Editors are strongly discouraged from using large language models to copyedit, fix tone, correct punctuation, create markup, or in any way cosmetically adjust or refactor human-written text for user-to-user communication. would draw an oppose vote from me. Re both: how long before the community thinks repeated anti-LLM RFCs are a bigger problem than the use of LLMs on Wikipedia? Be judicious, mind the backlash, note the difference between the LLM proposals that have passed, and the ones that have failed. (Hint: the super-anti-AI proposals are the ones that have failed. The ones that allow use within reasonable boundaries have passed.) Levivich (talk) 18:52, 12 January 2026 (UTC)[reply]
The main problem with those RfCs is that the stricter proposals get shot down by editors wanting reasonable boundaries, and the more lenient proposals get shot down by "all-or-nothing" editors. Given that, and the speed at which the technology advances, it isn't surprising that we are often discussing these issues – especially since recent proposals have been closed with consensus that the community wants some regulation but disagreed on the exact wording proposed. In that regard, the disruption doesn't come from the RfCs themselves, but from the inability of editors on both sides to compromise. Additionally, we also regulate what someone may do to draft communications, with proxy editing being the best example – if we can disallow proposals coming from a banned user, we can disallow proposals coming from a tool that has repeatedly proven disruptive. Chaotic Enby (talk · contribs) 19:09, 12 January 2026 (UTC)[reply]
Re proxy edits, that is not regulating the technology used to draft communications. We don't tell people what word processor to use, or whether they can use a typewriter, or which spellchecker to use, etc. etc. This proposed guideline would be a first in that sense, and I believe is doomed for that reason.
As to the main problem with the RFCs, yes, I agree with you, but does this proposed guideline look like any kind of compromise? It's proposing rules that are stricter than the rules we have for mainspace (for Pete's sake!), and it's still trying to do the thing that the community has repeatedly said no to, which is to stop or "strongly discourage" all or almost all use of LLMs (as opposed to just "bad" use of LLMs). The drafter, in comments above, below, and elsewhere, is very transparent that the goal of the proposed guideline is to get people to stop using LLMs (as opposed to getting them to use LLMs correctly rather than incorrectly).
I'll say again the same thing I said about the last doomed RFC: hey, go ahead and run it, maybe I'm wrong and it'll get consensus, or maybe the next one will :-P
But really, CE, you've been around long enough to know what's up, I think you know I'm right... the reason your proposal at WT:TRANSLATE is on its way to passing is because that was a good proposal that compromised and is obviously responsive to community concerns from other RFCs (btw, great job there!). This proposal is not like that, it's almost the opposite in its stubbornness.
And I know you've personally put in a lot of time and effort into trying to get a handle on increased LLM usage on Wikipedia, what with the WikiProject and all, and I hate to see those productive efforts get sunk because we (collectively) aren't being clear enough to the hard liners in saying: "No. Stop trying to stop everybody from using LLMs, it's counterproductive." Because right now, NEWLLM is still laughably short, and it's not getting any better, because we're wasting time on uncompromising proposals like this one, instead of on compromise proposals like the translation one. And, frankly, it's because people who have no experience building consensus are being allowed to drive the bus, and are driving it off the road, rather than deferring to people who do know how to build consensus (like you). Levivich (talk) 21:06, 12 January 2026 (UTC)[reply]
Yep, I think we agree on the broad strokes here. I still respectfully disagree that proxy edits are that far away from using ChatGPT to generate an argument from scratch (as in both cases, you're delegating the thoughts to someone/something else), but the crux of the issue isn't a specific policy detail, but the fact that compromises end up being overshadowed by more hardline proposals on which a consensus can't realistically be reached. Chaotic Enby (talk · contribs) 21:42, 12 January 2026 (UTC)[reply]
I am explicitly open to compromise here. I want people to propose compromises that they find acceptable. I have already changed my initial proposal in response to one such compromise. I know you know that, I just want to put it out there. Athanelar (talk) 21:52, 12 January 2026 (UTC)[reply]
The problem with allowing starters or ideas generated by AI is that, first, it permits an unfalsifiable loophole ("My comment isn't subject to this guideline because it's not AI generated, I just used AI to tell me what to say and then reworded it") and second, while the style of AI-generated posts is certainly problematic, another problem (as addressed in my guideline) is the content, and generating a starter with AI means the idea is still not yours but is rather the AI's, which is the whole thing this guideline aims to address.
If the AI tells someone to wikilawyer by citing a nonexistent policy or misapplying one that does exist, it doesn't matter if they do it in their own words or not.
So the point is to say that the ideas need to be your own, not just the presentation thereof.
As for how long before the community thinks repeated anti-LLM RFCs are a bigger problem than the use of LLMs on Wikipedia? to take a page from your own book in dismissing one's interlocutor; perhaps a person who is not active in the constant organised AI cleanup efforts doesn't have the best perspective on how much of a problem LLMs are.
I really encourage you to take some time and tackle one of the tracking subpages at WP:AINB some time. Take a look at this one of a user who generated 200+ articles on mainspace wholesale using AI with no review or verification and tell us again how the people trying to fight the fire are the real problem because they're getting everything wet in the process. Athanelar (talk) 19:10, 12 January 2026 (UTC)[reply]
Yeah, ironically, If most of your edits are about trying to fight against LLM use, you might be WP:RGW instead of WP:HERE is closer to actually assuming bad faith than anything people doing AI cleanup have been accused of. Gnomingstuff (talk) 21:04, 12 January 2026 (UTC)[reply]
...which is the whole thing this guideline aims to address. Yes, that's the problem, in my view: you are trying to address something Wikipedia has absolutely no business addressing, which is what technology people use to communicate. As has been pointed out by others above, there are, first and foremost, the accessibility issues and the issues for non-native English speakers (like me btw). But beyond that, how a human being gets from a thought in their head, to a policy-compliant non-disruptive comment posted on Wikipedia, is none of our (the community's) business. It doesn't matter if they use a typewriter or what spellcheck or Grammarly or an LLM. If the output is not disruptive--if it's not bludgeoning or uncivil, etc.--we have no business telling an editor what technology they can and can't use to generate that output. (And btw if you think 200+ bad articles is a lot, lol, we've had people generate tens of thousands of bad articles, redirects, etc., without using LLMs, and that's happened for the entire history of Wikipedia--we still never banned people from using scripts or bots, despite the fact that they've been abused by some, and with much worse consequences than what's being reported at AINB). Levivich (talk) 21:15, 12 January 2026 (UTC)[reply]
we still never banned people from using scripts or bots, But... we do? As pointed out before, we absolutely do restrict what technology people use to edit. You need express permission to operate a bot because of the potential for rapid, large-scale disruption.
You cannot seriously compare an LLM to a word processor or typewriter. Neither of those things is capable of wholesale generating a reply without any human thought behind it. Athanelar (talk) 21:21, 12 January 2026 (UTC)[reply]
We don't require permission to use a script. You don't need permission to use the WP:API. What is regulated is the output--specifically, BOTPOL and MEATBOT prevent unauthorized bot-like editing regardless of whether a script is actually used or not. It's the effect, not the method, that's regulated (in fact, the effect is regulated the same way -- bot or meat -- regardless of the method!). And yes, I am absolutely comparing LLMs to the pen, the typewriter, the word processor, the spellchecker, the grammar checker, autocorrect, predictive text, etc. It's just the latest technological advance in writing tools. And LLMs are not capable of generating anything "without any human thought behind it"; they require prompts, which require human thought, and their training data is a bunch of human thought. Levivich (talk) 22:49, 12 January 2026 (UTC)[reply]
Sure, but that's like arguing that paying someone to do your homework is materially the same as if you did it yourself, because you still had to describe the task to somebody else and then they still came up with an answer. You must know you're splitting hairs by now. Athanelar (talk) 22:53, 12 January 2026 (UTC)[reply]
Maybe hiring a secretary to write a letter on your behalf would be a more relevant analogy: Bob Business tells his secretary to send a letter saying he accepts their offer to buy 1,000 widgets but wants to change the delivery date slightly. He glances over the letter, decides that it makes the points that he wanted to communicate, and signs it before mailing it.
Do you think the typical recipient of that letter would be offended to discover that Bob didn't choose every single word himself? Is the recipient likely to believe that the facts communicated did not represent Bob's own thoughts? WhatamIdoing (talk) 23:33, 12 January 2026 (UTC)[reply]
That analogy only makes sense if you assume AI never makes up new arguments, and that it is only ever used to clarify existing thoughts that have been communicated in the prompt, rather than something like "please write me an unblock request". In the latter case, the fact that the substance of the unblock request isn't an original thought (but only the request to write one) is problematic, as we can't evaluate whether or not the blocked user properly understands the issues. That specific case is very much not theoretical, as around half of unblock requests have strong signs of LLM writing. Chaotic Enby (talk · contribs) 23:42, 12 January 2026 (UTC)[reply]
That analogy makes lots of sense, if you've ever worked with (or been) a human secretary.
The problem is that this analogy is very far removed from the actual situations we're facing, and makes it harder to talk about them in precise terms. In one case, you have a secretary playing a purely functional role of transmitting a message and helping convey thoughts to an interlocutor, possibly adding some context of their own. The key task is to transmit the information, and using a secretary (or AI) to do it makes sense. On the other hand, an unblock request aims to show that the blocked user has some level of understanding of the situation. If a secretary (or AI) writes the unblock request, with the blocked user having only told them "write me an unblock request", then the unblock request fails at its purpose. Chaotic Enby (talk · contribs) 00:12, 13 January 2026 (UTC)[reply]
But how do we know what the prompt was? If the prompt was "write me an unblock request" and that's it, then your point holds true. But what if the prompt was "write an unblock request that says [user's own understanding]"? Like, for example, "write an unblock request that says I lost my cool and said something I shouldn't have and in the future I'll be sure to walk away from the keyboard when things get too heated and also I'm going to avoid this topic area for a while"? Could you tell what the prompt was based on the output? I don't think so... Levivich (talk) 00:25, 13 January 2026 (UTC)[reply]
We don't know what the prompt was exactly, but we can get some strong indications when the user leaves unfilled phrasal templates, or apologizes for nonexistent issues completely unrelated to their behavior, or only writes generic, nonspecific commitments that could apply to literally any unblock request. In many of these cases (and, again, these are a large proportion of the unblock requests I'm seeing), I'd probably be even more worried if the prompt came from the user's own "understanding". Chaotic Enby (talk · contribs) 00:56, 13 January 2026 (UTC)[reply]
The unblock process might fail at its intended purpose, but that's entirely within the realm of normal secretary behavior. Have you never read tales like the bedbug letter (https://www.snopes.com/fact-check/the-bedbug-letter/)? Or heard stories about secretaries who make sure that the boss always remembers to buy a present for his wife's birthday, send flowers on their wedding anniversary, and so forth?
In the end, I think that it might make more sense for us to re-design the unblock process (to make it more AI-resistant) than to tell people they shouldn't use AI. Maybe a series of tickboxes, setting up a sort of semi-customizable contract? "▢ I agree that I won't put the word poop in any more articles" or "▢ I agree that I won't write long comments on talk pages" or whatever. WhatamIdoing (talk) 00:37, 13 January 2026 (UTC)[reply]
To note: last time someone generated tens of thousands of redirects, we had to create a whole new speedy deletion criterion for it. More generally, there have been many discussions on article creation at scale (the other WP:ACAS) and attempts at building a framework to regulate it. So, while I don't disagree that we can't control everything, the issue of disruption at scale isn't new to Wikipedia, and efforts to address it aren't new either. Chaotic Enby (talk · contribs) 21:45, 12 January 2026 (UTC)[reply]
Yeah, we did that this time, too, and kudos to the community, it got to WP:G15 much faster than it took to get to WP:X1. But you know what we didn't do about the redirects or sports articles? Prohibit, or try to prohibit, people from using scripts or templates or bots, etc. We never went after the technology that made that spam possible, we went after the editors who did the spamming, and made new tools to efficiently deal with the spam (csd's). And those were 100,000-page problems; whereas this is like thousands of articles? (How many G15s have there been so far? I see 46 in the logs in the last two days.) So like an order or two orders of magnitude less? And our response, or some folks' response, has been an order or two orders of magnitude stronger. Levivich (talk) 22:41, 12 January 2026 (UTC)[reply]
More to the point: We didn't try to "Prohibit, or try to prohibit" everyone else "from using scripts or templates or bots, etc." just because a few people abused those tools. WhatamIdoing (talk) 23:20, 12 January 2026 (UTC)[reply]
But I never said we should prohibit anything entirely, just have a framework to regulate it. Which is exactly what we've done with bots (through WP:BRFA), with mass creation of articles and redirects (through draftification and new page patrolling), etc. Chaotic Enby (talk · contribs) 23:22, 12 January 2026 (UTC)[reply]
You: "I never said we should prohibit anything entirely".
Proposal: "Editors are not permitted to use large language models to generate user-to-user communications" (emphasis in the original)
The main worry I have with AI is that it is much more widely distributed. We don't have a few editors who can be blocked to get rid of the spamming, but tools that have been causing issues in the hands of a much broader range of editors, mostly because, sadly, many of them don't know how to use them responsibly. Banning the tool entirely is too harsh, blocking individual editors doesn't solve the underlying problem, meaning we're in this problem zone where it's hard to craft good policy. G15 is for the most extreme, blatant cases, but Category:Articles containing suspected AI-generated texts contains nearly 5000 pages, while Category:AfC submissions declined as a large language model output adds another 4000, just from the last 6 months. With all the smaller tracking categories, plus the expired drafts, we're easily above 10,000 pages. Chaotic Enby (talk · contribs) 23:20, 12 January 2026 (UTC)[reply]
I agree that we're in a difficult place. I don't like the idea of Wikipedia appearing to be AI-generated (even if it's not). I don't like the idea of Wikipedia having the problems associated with AI-generated content (including, but not limited to, factual errors).
But if:
We can't accurately detect/reject AI-generated content before it's posted
Many people believe that it's normal, usual, and reasonable to use AI tools to create the content they need for Wikipedia
The individual incentives to use AI (e.g., being able to post in a language you can barely read; being able to post an article quickly) exceed the expected costs (e.g., the UPE's throwaway account may get blocked)
then I think that having a rule, or even having an ✨Official™ Policy🌟, will not change anything (except maybe making our more rule-focused editors even angrier, which is not actually helpful). WhatamIdoing (talk) 00:27, 13 January 2026 (UTC)[reply]
@Gnomingstuff I think it would help readers if the summary also reflected an important nuance from the earlier RfC: it explicitly carved out cases where the reasoning is the editor’s own and an LLM is used only to refine meaning (e.g. for non-fluent speakers or users with disabilities). This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning... Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome ...
The current proposal seems materially more restrictive than that consensus, because it prohibits even “starter/idea” use and goes further by strongly discouraging copyediting/tone/formatting with LLMs:
Editors are strongly discouraged from using large language models to copyedit...
If the intent is to align with the earlier consensus, it may be worth explicitly stating that assistive uses that don’t outsource the editor’s reasoning (especially accessibility/translation-adjacent cases) are not what the guideline is trying to discourage. Grudzio240 (talk) 09:25, 13 January 2026 (UTC)[reply]
And I have one more concern about the “copyedit/tone/formatting” section: it reads as shifting the downside risk onto the editor in a way that can chill legitimate assistive use. The proposal first strongly discourages even cosmetic LLM assistance, and then says that editors who do so “should be understanding” if their LLM-reviewed comment “appears to be LLM-generated” and is therefore subject to collapsing/discounting/other remedies.
Editors who choose to do so despite this caution … should be understanding if their LLM-reviewed comment/complaint/nomination etc. appears to be LLM-generated and is subject to the remedies listed above.
That framing seems to pre-emptively validate adverse outcomes based on appearance (“looks LLM”) rather than on whether the editor’s reasoning is their own. If the intent is for accessibility/meaning-preserving assistance to remain acceptable, it may be worth rewording this to avoid implying that a “looks LLM” judgment is presumptively correct, and explicitly protect meaning-preserving copyedits/formatting from being treated as fully LLM-generated. Grudzio240 (talk) 09:31, 13 January 2026 (UTC)[reply]
Like the idea of the different prompt examples. That said, if someone is writing I understand that edit warring and insulting other editors was disruptive, and that in the future I plan to avoid editing disputes which frustrate me in that way to prevent a repeat of my conduct, and that I am willing to accept a voluntary 1RR restriction if it will help with my unblock., it seems like they could just... say that, instead, without AI, and that doing so would be more likely to produce a positive outcome. Gnomingstuff (talk) 15:52, 13 January 2026 (UTC)[reply]
I have added the explanatory paragraph "These are examples of a prompt that would result in an obviously unacceptable output and a prompt that would result in a likely acceptable one, to act as guidance for editors who might use LLMs. They should not be taken as a standard to measure against, nor is the prompt given necessarily always going to correlate with the acceptability of the output. Whether or not the output falls afoul of this guideline depends entirely on whether it demonstrates that it reflects actual thought and effort on the part of the editor and is not simply boilerplate." Athanelar (talk) 16:28, 13 January 2026 (UTC)[reply]
I appreciate the effort but I'm probably not the best person to give feedback given I think (1) there shouldn't be a new guideline at all (Wikipedia needs fewer WP:PAGs, not more); (2) there shouldn't be a new guideline about "LLM communication" (as opposed to about LLM use in mainspace or LLM translation); (3) "Large language models are unsuited for and ineffective at accomplishing this, and as such using them to generate user-to-user communication is forbidden." is a deal breaker for me, in principle (I don't agree it's ineffective or unsuited or that it should be forbidden); (4) I do not support "a prohibition against outsourcing one's thought process to a large language model"; (5) I do not support "Editors are not permitted to use large language models to generate user-to-user communications"; (6) I do not agree with "It is always preferable to entirely avoid the use of LLMs and instead make the best effort you can on your own"; (7) the entire section "Large language models are not suitable for this task" is basically wrong, including "Large language models cannot perform logical reasoning" (false/misleading statement, they do perform some logical reasoning); and (8) I disagree with the entire section "Anything an LLM can do, you can do better". This is a guideline that says, in a nutshell, LLMs are bad and you shouldn't use them, and since I think LLMs are good, and people should use them, I don't think we're going to find a compromise text here. For me. But I'm just one person. Levivich (talk) 18:34, 13 January 2026 (UTC)[reply]
Understandable. So long as your disagreements are ideological and not "there's a fundamental contradiction" or the like, that's still a good indication for me that I'm in the right direction. Much appreciated. Athanelar (talk) 18:44, 13 January 2026 (UTC)[reply]
I don't think this is an ideological disagreement. (Some proponents of a ban on LLMs may be operating from an ideological position; consider what Eric Hoffer said about movements rising and spreading without a God "but never without belief in a devil". AI is the devil that they blame for many problems.) I do think that as someone hoping to have a successful WP:PROPOSAL, it's your job to seek information about what the sources of disagreement are, and to take those into account as much as possible, so that you can increase your proposal's chance of success (which I currently put at rather less than 50%, BTW). Feedback is a gift, as they say in software development.
For example:
Levivich expresses concerns about the proliferation of new guidelines. There have been several editors saying things like that recently. Do you really, really, really need a {{guideline}} tag on this? Maybe you should consider alternatives, like putting it in the project namespace and waiting a bit to see if/how editors use it.
He wonders whether a new guideline against "LLM communication" should be prioritized over AI problems in the mainspace. What are you going to say to editors who look at your proposal and say that it's weird to advocate for a total ban on the Talk: pages, when it's still 'legal' to use AI in the mainspace? You don't have to agree with him, but you should consider what he's telling you and think about whether you can re-write (or re-schedule) to defend against this potential complaint.
Your statement that "Large language models are unsuited for and ineffective at accomplishing this" is a claim of fact (getting us back to that ideology power word: opponents of LLMs are entitled to their own opinions, but not to their own facts). Are LLMs really unsuited and ineffective? Can you back that up with sources? Does it logically follow from "success depends on the ability of its participants to communicate" that a tool helping people communicate is always going to be ineffective at accomplishing our goals?
What if the use of AI in a particular instance is "Dear chatbot, please re-write the following profanity-laced tirade so that it is brief and polite, because I am way too angry to do this myself"? Does that interfere with the goal of "civil communication"? Or would that use of a chatbot actually improve compliance with our Wikipedia:Civility policy? Is it really true that "Anything an LLM can do, you can do better" – right now?
What if the use is a newbie who is pointing out a problem and who used an LLM to try to present their information as "professionally" as possible? What I'm seeing in my news feed is that Kids These Days™ aren't doing so well with reading and writing in school. Does trying to communicate clearly interfere with our goals of reaching consensus, resolving disagreements, and finding solutions?
What if the realistically available alternatives are also less than ideally effective? You've added a paragraph about dyslexia and English language learners (thank you), but how is the average editor supposed to know whether the person has a relevant limitation? For comparison, many years ago, we briefly had an editor whotypedallthewordstogetherlikethis and said that pressing the space bar was painful due to Repetitive strain injury, which he thought we should accept on talk pages as a reasonable accommodation for his disability. I never have been able to decide whether he had a surprisingly inflated sense of entitlement or if it was a piece of performance art, but we sent him on his way with a recommendation to look into speech-to-text software. Thinking back, I'd have preferred that he used an LLM to what he was doing. It would have been more effective at supporting communication than what he was doing. But: If he was here today, and used an LLM today, how would the other editors know that he had (in his opinion) a true medical reason for using an LLM? More importantly, if LLMs are effective for those groups of people, does that invalidate the factual claim that LLMs are "unsuited for and ineffective at" discussions?
You should go through the rest of Levivich's feedback and see whether there is any adjustment you can make that might reduce the likelihood that anyone else would vote against your proposal on the same grounds. Can you re-write it to be less strident? Less absolute?
Or take the opposite approach: Write an essay, and tell us how you really feel. Don't say "The substance of this guideline is a prohibition against outsourcing one's thought process to a large language model"; instead say something like "Whenever I see LLM-style comments on talk pages, I feel like I'm talking to a machine instead of a human. I worry that if you aren't writing in your own words, you won't read or understand my reply. I worry that if you're misunderstanding something, you won't care – you'll just tell the LLM 'she said I'm wrong; write a reply that explains why I'm right anyway'. That's not what I'm WP:HERE for." WhatamIdoing (talk) 20:51, 13 January 2026 (UTC)[reply]
This is all very good, and gives me something to work with for another round of improvements on this thing, so I appreciate it greatly. One thing I want to address specifically in this reply is the question of "why a guideline rather than an essay in WPspace?" and the answer is that while I absolutely do have a lot to say about LLMs on Wikipedia, I want to materially improve the situation by doing something about it, not just vent. The community norm is already against the use of LLMs in talk pages. People who use LLMs for that pretty much universally get told "hey, quit it" so I thought it would be sensible to make the unwritten rule written rather than having it exist in this nebulously-enforceable grey area. Athanelar (talk) 21:16, 13 January 2026 (UTC)[reply]
AITALK doesn't forbid the use of LLMs for discussions, it merely suggests that they may be hatted (which, by the way, if you didn't notice I also changed my 'should' to 'may'.) The only time LLM use in talk pages tends to escalate to sanctions is when a user persistently lies about it; which to be fair is common, but what I'm proposing is that any persistent (i.e., continuing after being notified and obviously subject to the limited carveouts) LLM usage for discussions should be considered disruptive. As my title here says, it's more WP:LLMCOMM than it is WP:AITALK. LLMCOMM begins with the sentence Editors should not use LLMs to write comments generatively. and my whole goal here is to basically turn that 'should' into a 'must' (while giving reasoning, addressing loopholes, and also synthesising AITALK into it to provide remedies/sanctions for the prohibited action) Athanelar (talk) 21:23, 13 January 2026 (UTC)[reply]
I have been looking at this with fresh eyes, and I think that the entire ==Large language models are not suitable for this task== section can be safely removed.
Overall, I feel like the bulk of the page is trying to persuade the reader to hold the Right™ view, instead of laying out our (proposed) rules.
The ===Boldness is encouraged and mistakes are easily fixed=== subsection is irrelevant. Boldness is encouraged in articles. Mistakes can be fixed in articles (though if you're listening to what people are saying about fixing poor translations and LLM-generated text, "easily" is not true). In the context of user-to-user communication, boldness has costs, and some mistakes are not fixable. Maybe a decade ago, we had an influx of Indian editors (a class?) who had some problems, and in a well-intentioned effort to be warm and friendly, they addressed other editors as "buddy" (e.g., "Can you help me with this, buddy?"). This irritated some editors to the point that there were complaints about the whole group being patronizing, rude, etc. As the sales teams say, you only have one chance to make a first impression. Even if you're just trying to fix grammar errors and simple typos, the Halo effect is real, and it is especially real in a community that takes pride in our brilliant prose (←the original name for Wikipedia:Featured articles). A well-written comment really does get a better reception here than broken English or error-filled posts.
Also, "using an LLM to communicate on your behalf on Wikipedia fails to demonstrate that you...have the required competence to communicate with other editors" might feel ableist to people with communication disorders. The link to Wikipedia:Not compatible with a collaborative project is misleading (it's about people who are arrogant, mean, or think they should be exempt from pesky restrictions like copyrights; it's not about people who are trying to cooperate but struggle to write in English).
I have been thinking about an essay along these lines:
How to encourage non-AI comments
There are practical steps experienced editors can take to encourage non-AI participation.
Please do not bite the newcomers. People who use AI regularly are often surprised that this community rejects most LLM-style content. Gently inform newcomers about the community's preferences.
Focus on content, not on the contributor or your perception of their skills. Don't tell newcomers that the Wikipedia:Competence is required essay says they have to be able to communicate in English. Kind and helpful responses to broken English, machine translation, non-English comments, typos, and other mistakes encourage people to participate freely. If people see that well-intentioned comments written in less-than-perfect English sometimes produce rude responses, they will be more motivated to use AI tools.
Accept mistakes, apologies, corrections, and clarifications with grace. Ask for more information if you think the person's comment doesn't make sense. Ask for a short summary if it is particularly long.
but I'm not sure it would actually help. People who are most irritated by "AI slop" don't automatically all have the social and emotional skills to be patient with the people who are irritating them.
I've posted a much shorter (~20%) and softer version of this proposal in my sandbox. I tried to remove persuasive content and examples from the mainspace, as well as shortening the few explanations that I kept. I also added practical information for experienced editors (so we're permitting dyslexic editors to use LLMs, but you're permitted to HATGPT, so...let's at least not edit war?). Maybe the contrast between the two will be informative. WhatamIdoing (talk) 19:53, 14 January 2026 (UTC)[reply]
I much prefer WAID's version as it restricts itself to the point and doesn't preach or demonise anyone or anything. I would, though, rephrase the authorised uses section so as to focus on the uses rather than actions, advice or specific conditions. Perhaps something like:
The following uses are explicitly permitted:
Careful copyediting: You may use an LLM to copyedit what you have written (for example to check your spelling and grammar), but you must always check the output as the tools sometimes change the meaning of a sentence.
As an assistive technology: If you have a communication disorder, for example severe dyslexia, LLM tools are permitted as a useful assistive technology. You are not required to disclose any details about your disability.
Translation: People with limited English, including those learning the language, may use AI-assisted machine translation tools (e.g., DeepL Translator) to post comments in English. Please consider posting both your original text and the machine translation.
You are not required to state why you are using an LLM but in some cases doing so may help other editors understand you.
I do plan to synthesise some of WAID's into mine, but I still have major issues with the suggestions for how to handle some of these carveouts, because they provide any bad-faith editor (which, given the number of people I see lie about using LLMs, is a lot) a get-out-of-jail-free card. Or rather, it means we essentially can't enforce the guideline in good faith at all. We can't simultaneously say "you shouldn't generate comments with LLMs" and also say "but if you have certain exempting circumstances, you can essentially do whatever you want with LLMs with no disclosure whatsoever" because it makes it impossible for us to enforce against users using LLMs 'wrong' without inevitably catching, for example, a dyslexic editor who decides they want an LLM to compose their entire comment and so it sounds 100% AI generated. Athanelar (talk) 02:28, 15 January 2026 (UTC)[reply]
Yes, this is a problem. We can declare a total ban and thereby officially write discrimination against people with disabilities and English language learners into our guidelines.
Alternatively, we can permit reasonable accommodations and give editors no way to be certain that the person using it truly qualifies for it. We can predict that we will have a number of emotional support peacocks in addition to people who don't know that it's banned, people who legitimately do fall into one of the reasonable exceptions, some rule-breaking jerks, and some people who believe that what they're doing is reasonable (in their eyes) and therefore the community's rule is unreasonable and shouldn't be enforced against them. (I'm pretty sure psychology has a name for the belief that rules don't apply to you unless you agree with/consent to them, but I don't remember what the word is.)
Plus, of course, no matter what we write, there would still be the problem of editors incorrectly hatting comments written by English language learners and autistic editors, because AI-generated text resembles some common ESL and autistic writing styles (e.g., simpler sentence structure).
I support revision 3 as is, without any changes that would further weaken its language. Having seen how LLM use is currently being handled by the community at other venues, including article talk pages, content-related noticeboards, and WP:ANI, my impression is that the discussion here is not representative of the community sentiment toward LLM use as a conduct issue, which is much more negative than is being portrayed here. A request for comment will invite input from the editors who spend more time resolving issues resulting from LLM use but do not closely follow all of the relevant village pump discussions. — Newslingertalk05:11, 15 January 2026 (UTC)[reply]
Oppose for all the reasons that WhatamIdoing explained in the discussion far more eloquently than I can. It's not a guideline to help editors understand the issues and good practice around LLM use on talk pages; it's an overly-long essay proselytising about the evils of LLMs (well, that's a bit hyperbolic, but not by huge amounts). Don't get me wrong, we should have a guideline in this area, but this is not it. Thryduulf (talk) 19:44, 15 January 2026 (UTC)[reply]
Oppose the guideline per WhatamIdoing and support her alternative proposal at User:WhatamIdoing/Sandbox. This policy conflates all sorts of problems with AI (what is the section User:Athanelar/Don't use LLMs to talk for you#Yes, even copyediting doing here when the substance of that section is about copyediting articletext in a guideline that is about talk page comments?), makes a number of dubious claims about LLMs that, rather than being supported by evidence, are supposed to be taken on faith, and is once again either dubiously unclear or internally contradictory (the claim that the guideline does not aim to restrict the use of LLMs [for those with certain disabilities or limitations], for example). This would be great as a WP:Essay, but definitely not as a guideline. Katzrockso (talk) 23:03, 15 January 2026 (UTC)[reply]
what is the section User:Athanelar/Don't use LLMs to talk for you#Yes, even copyediting doing here when the substance of that section is about copyediting articletext in a guideline that is about talk page comments? Showing that LLMs have trouble staying on task when copyediting is relevant regardless of where that copyediting takes place, whether it's in articletext or talk page comments. It's a supplement to the caution in the 'Guidance' section about using LLMs to cosmetically enhance comments. Athanelar (talk) 23:11, 15 January 2026 (UTC)[reply]
Elsewhere I have used LLMs to copyedit a few times and I have noticed this phenomenon (LLMs making additional changes beyond what you asked) using the freely available LLMs (I believe that the behavior of models is wildly variable so I cannot speak about the paid ones, which I refuse to pay for on principle). However, this was not a problem when I gave the LLM more specific instructions (i.e. do not change text outside of the specific sentence I am asking you to fix). The gist of the argument in that section is a non-sequitur: from the three examples given, the conclusion LLMs cannot be trusted to copyedit text and create formatting without making other, more problematic changes does not follow. Katzrockso (talk) 23:41, 15 January 2026 (UTC)[reply]
Athanelar, guidelines don't normally spend a lot of time trying to justify their existence. Think about an ordinary guideline, like Wikipedia:Reliable sources. You don't expect to find a section in there about what would happen to Wikipedia if people used unreliable sources, right? This kind of content is off topic for a guideline. WhatamIdoing (talk) 00:24, 16 January 2026 (UTC)[reply]
Sure, but LLMs are a topic that people are uniquely wont to quibble about, whether because their daily workflow is already heavily LLM-reliant or simply because they have no idea why anybody would want to restrict the use of LLMs. I think it's sensible to assume that our target audience here will be people who aren't privy to LLM discourse, especially Wikipedia LLM discourse, and so some amount of thesis statement is sensible. Athanelar (talk) 01:44, 16 January 2026 (UTC)[reply]
Oppose We should do something, but this manifesto isn't it. For example:
This is supposed to be about Talk: pages, and it spends 200+ words complaining about LLMs putting errors into infoboxes and article text.
Sections such as A large language model can't be competent on your behalf repeatedly invoke an essay, while apparently ignoring the advice in that same essay (e.g., "Be cautious when referencing this page...as it could be considered a personal attack"). In fact, that same essay says If poor English prevents an editor from writing comprehensible text directly in articles, they can instead post an edit request on the article talk page – something that will be harder for editors to do, if they're told they can't use machine translation because the best machine translation for the relevant language pair now uses some form of LLM/AI – especially DeepL Translator.
Overall, this is an extreme, maximalist proposal that doesn't solve the problems and will probably result in more drama. In particular, if adopted, I expect irritable editors to improperly revert comments that sound like they were LLM-generated (in their personal opinion) when they shouldn't. IMO "when they shouldn't" includes comments pointing out errors and omissions in articles, people with communication disorders such as severe dyslexia (because they'll see "bad LLM user" and never stop to ask why they used it), people with autism (whose natural, human writing style is more likely to be mistaken for LLM output), and people who don't speak English and who are trying to follow the WP:ENGLISHPLEASE guideline. WhatamIdoing (talk) 23:07, 15 January 2026 (UTC)[reply]
I agree in principle. That said, from the discussion above, I do think we need to redesign the unblock process to make it less dependent on English skills, because needing to post a well-written apology is why many people turn to their favorite LLM. I'm looking at the Wikipedia:Unblock wizard idea, which I think is sound, but it still wants people to write "in your own words". For most requests, it would probably make more sense to offer tickboxes, like "Check all that apply: □ I lost my temper. □ I'm a paid editor. □ I wrote or changed an article about myself, my friends, or my family. □ I wrote or changed an article about my client, employer, or business" and so forth. WhatamIdoing (talk) 00:40, 16 January 2026 (UTC)[reply]
From my limited experience reading unblock requests, it appears that the main theme that administrators are looking for is admission of the problem that led to the block and a genuine commitment to avoiding the same behavior in future editing. I think some people might object to such a formulaic tickbox (likely for the same reasons they oppose the use of LLMs in unblock requests) as it removes the ability of editors to assess whether the appeal is 'genuine' (whether editors are reliable arbiters of whether an appeal is genuine or not is a different question), which is evinced by the wording and content of the appeal. Katzrockso (talk) 01:25, 16 January 2026 (UTC)[reply]
I think we need to move away from a model in which we're looking for emotional repentance and towards a contract or fact-based model: This happened; I agree to do that. WhatamIdoing (talk) 04:06, 16 January 2026 (UTC)[reply]
I think the key thing that needs to be communicated is that they understand why they were blocked. Not just an "I got blocked for editwarring" but an "I now understand editwarring is bad because...". Agreeing on what happened is a necessary part of that (if you don't know why you were blocked you don't know what to avoid doing again) but not sufficient, because if you don't understand why we regard doing X as bad, then you're likely to do something similar to X and get blocked again. Thryduulf (talk) 04:33, 16 January 2026 (UTC)[reply]
My thought with tickboxes is that there is no opportunity to use an LLM when all you're doing is ticking a box.
I partly agree with your view that "It doesn't matter whether they're sorry". It doesn't matter in terms of changing their behavior, but it can matter a lot in terms of restoring relationships with any people they hurt. This is one of the difficulties. WhatamIdoing (talk) 17:50, 16 January 2026 (UTC)[reply]
Sure, there's no opportunity to use an LLM. But then we have exactly the same problem that we have when they're using LLMs: we don't actually know that they understand anything at all. -- asilvering (talk) 18:38, 16 January 2026 (UTC)[reply]
I'd support it if it was tweaked. First, a preamble. We continue to nibble around the edges of the LLM issue without addressing the core issues. I still think we need to make disclosure of AI use mandatory before we're going to have any sort of effective discussion about how to regulate it. You can't control what you don't know is happening. That might take software tools to auto-tag likely AI revisions, or us building a culture where it's okay to use LLMs as long as you're being open about it. General grumbles aside, let's approach the particular quibbles with this proposal. This guideline is contradictory. The lead says that using LLMs is forbidden... but the body is mostly focused on trying to convince you that LLM use is bad. It's more essay than guideline. I also think that it doesn't allow an exemption for translation, which is... let's be honest... pervasive. Saying you can't use translation at all to talk to other editors will simply be ignored. I think this needs more time on the drawing board, but I'd tentatively support this if the wording was "therefore using them to generate user-to-user communication is strongly discouraged." rather than forbidden. CaptainEekEdits Ho Cap'n!⚓ 01:33, 16 January 2026 (UTC)[reply]
Just one small point, but from a literal reading of two current rules, you are already required to disclose when you produce entirely LLM-generated comments or comments with a significant amount of machine-generated material; the current position of many Wikipedia communities (relevantly, us and Commons) is that this text is public domain, and all editors, whenever they make an edit with public domain content, "agree to label it appropriately". [5]. Therefore, said disclosure is already mandatory - mainspace, talkspace, everywhere. The fact that people don't disclose, despite agreeing that they will whenever they save an edit, is a separate issue from the fact that those rules already exist. GreenLipstickLesbian💌🧸 06:31, 16 January 2026 (UTC)[reply]
@GreenLipstickLesbian, I think that's a defensible position, but not one that will make any sense to the vast majority of people who use LLMs. So if we want people to disclose that they've used LLMs, we have to ask that specifically, rather than expecting them to agree with us on whether LLM-generated text is PD. -- asilvering (talk) 18:40, 16 January 2026 (UTC)[reply]
@Asilvering Yes, but the language not being clear enough for people to understand is, from my perspective, a separate issue from whether or not the rule exists. We don't need to convince editors to agree with us that LLM-generated text is PD, just the same way I don't actually need other editors to agree with me on whether text they find on the internet is public domain or that you can't use the Daily Mail for sensitive BLP issues - there just needs to be a clear enough rule saying "do this", and they can follow it and edit freely, or not and get blocked.
And just going to sandwich on my point to @CaptainEek - it is becoming increasingly impossible to determine whether another editor's text in any way incorporates text from an LLM, given their ubiquity in translator programs and grammar/spellcheck/tone-checking programs, which even editors themselves may not be aware use such technology. So LLMDISCLOSE, as worded, will always remain unenforceable and can never be made mandatory - and that's before getting into the part where it says you should say what version of an LLM you used, when a very large segment of the population using LLMs simply is not computer-literate enough to provide that information. (Also, I strongly suspect that saying "I used an LLM to proofread this" after every two-line post which the editor ran through Grammarly, which is technically what LLMDISCLOSE calls for, would render the disclosures somewhat equivalent to the Prop 65 labels - somewhere between annoying and meaningless in many cases, and something which a certain population of editors would stick on the end of every comment because they believe that's less likely to get them sanctioned than forgetting to mention they had Grammarly installed.)
However, conversely, what the average enWiki editor cares about is substantial LLM interference - the creation of entire sentences, extensive reformulation - aka the point at which the public domain aspect of LLM text and the PD labeling requirement start kicking in. It's not a perfect relationship, admittedly, but it covers the cases that I believe most editors think should be disclosed, while leaving alone many of the LLM use cases (like spellcheck, limited translation, formatting) that most editors are fine with or can, at the very least, tolerate. GreenLipstickLesbian💌🧸 19:35, 16 January 2026 (UTC)[reply]
@GreenLipstickLesbian WP:LLMDISCLOSE isn't mandatory though, just advised. In a system where it is not mandated, it won't be done unless folks are feeling kindly. But I acknowledge that with the current text of LLMDISCLOSE, we could begin to foster a culture that encourages, rewards, and advertises the importance of LLM disclosure. We may need a sort of PR campaign where it's like "are you using AI? You should be disclosing that!" But I think it'd be more successful if we could say you *must*. CaptainEekEdits Ho Cap'n!⚓ 18:57, 16 January 2026 (UTC)[reply]
For the most part, people do what's easy and avoid what's painful. If you want LLM use disclosed, then you need to make it easy and not painful. For example, do we have some userboxes, and can that be considered good enough disclosure? If so, let's advertise those and make it easy for people to disclose. Similarly, if we want people to disclose, we have to not punish them for doing so (e.g., don't yell at them for being horrible LLM-using scum). WhatamIdoing (talk) 20:18, 16 January 2026 (UTC)[reply]
Unfortunately, there's no foolproof way to tell whether a comment was LLM generated or not (sure, there are WP:AISIGNS, but again, those are just signs). Agree with Katzrockso that this would work better as an essay than a guideline. Some1 (talk) 02:30, 16 January 2026 (UTC)[reply]
Oppose. Too long, and I don't think a fourth revision would address the problems; this is trying to do too much, some of which is unnecessary and some of which is impossible to legislate. I agree with those who say a paragraph (or even a sentence) somewhere saying LLMs should not be used for talk page communication would be reasonable. Mike Christie (talk - contribs - library) 13:38, 16 January 2026 (UTC)[reply]
Support the crux of the proposal, which would prohibit using an LLM to "generate user-to-user communication". This is analogous to WP:LLMCOMM's "Editors should not use LLMs to write comments generatively", and would close the loophole of how the existing WP:AITALK guideline does not explicitly disallow LLM misuse in discussions or designate it as a behavioral problem. A review of the WP:ANI archives shows that editors are regularly blocked for posting LLM-generated arguments on talk pages and noticeboards, and the fact that our policies and guidelines do not specifically address this very common situation is misleading new editors into believing that this type of LLM misuse is acceptable. Editors with limited English proficiency are, of course, welcome to use dedicated machine translation tools (such as the ones in this comparison) to assist with communication. The passage of the WP:NEWLLM policy suggests that LLM-related policy proposals are more likely to succeed when they are short and specific, so I recommend moving most of the proposed document to an information or supplemental page that can be edited more freely without needing a community-wide review. — Newslingertalk14:17, 16 January 2026 (UTC)[reply]
I said above under version 2 that I don't think much of what is being addressed here is legislatable at all, but if anything is to be added I'd like to see a sentence or two added to a suitable guideline as Novem Linguae suggests. I think making this into an essay is currently the best option. Essays can be influential, especially when they reflect a common opinion, so it's not the worst thing that can happen to your work. Mike Christie (talk - contribs - library) 16:41, 16 January 2026 (UTC)[reply]
@Newslinger, I've read that some of the "dedicated machine translation" tools are using LLMs internally (e.g., DeepL Translator). Even some ordinary grammar check tools (e.g., inside old-fashioned word processing software like MS Word) are using LLMs now. Many people are (or will soon be) using LLMs indirectly, with no knowledge that they are doing so. WhatamIdoing (talk) 17:44, 16 January 2026 (UTC)[reply]
Which is one of the reasons why 1) people who can't communicate in English really shouldn't be participating in discussions on enwiki and 2) people who use machine translation (of any type) really should disclose this and reference the source text (so other users who either speak the source language or prefer a different machine translation tool can double-check the translation themselves). -- LWGtalk(VOPOV)17:52, 16 January 2026 (UTC)[reply]
We sometimes need people who can't write well in English to be communicating with us. We need comments from readers and newcomers that tell us that an article contains factual errors, outdated information, or a non-neutral bias. When the subject of the article is closely tied to a non-English-speaking place/culture, then the people most likely to notice those problems are those who don't write easily in English. If one of them spots a problem, our response should sound like "Thanks for telling us. I'll fix it" instead of "People who can't communicate in English really shouldn't be participating in discussions on enwiki. This article can just stay wrong until you learn to write in English without using machine translation tools!" WhatamIdoing (talk) 19:51, 16 January 2026 (UTC)[reply]
IMO if they are capable of identifying factual errors, outdated information, or non-neutral bias in content written in English, then they should be capable of communicating their concerns in English as well, or at least of saying "I have some concerns about this article, I wrote up a description of my concerns in [language] and translated it with [tool], hopefully it is helpful." With that said, I definitely don't support biting newbies, and an appropriate response to someone who accidentally offends a Wikipedia norm is "Thanks for your contribution. Just so you know, we usually do things differently here, please do it this other way in the future." -- LWGtalk(VOPOV)20:04, 16 January 2026 (UTC)[reply]
Because English is the lingua franca of the internet, millions of people around the world use browser extensions that automatically translate websites into their preferred language. Consequently, people can be capable of identifying problems in articles but not actually be able to write in English. WhatamIdoing (talk) 20:22, 16 January 2026 (UTC)[reply]
Support. I agree with the concerns that it is too long, and certainly far from a perfect proposal, but having something imperfect is better than a consensus against having any regulation at all. I do also agree with Newslinger's proposal of moving the bulk of it to an information page if there is consensus for it. Chaotic Enby (talk · contribs) 17:22, 16 January 2026 (UTC)[reply]
Oppose - Too restrictive and long. There is a reasonable way to use LLMs, and this effectively disallows it, which is a step too far. That, coupled with the at-best educated guessing about whether something actually is LLM output and the assumption that it is all unreviewed, makes it untenable. PackMecEng (talk) 17:31, 16 January 2026 (UTC)[reply]
Support the spirit of the opening paragraph, but too long and in need of tone improvements. Currently the language in this feels like it is too internally-oriented to the discussions we have been having on-wiki about this issue, whereas I would prefer it to be oriented in a way that will help outsiders with no context understand why use-cases for LLMs that might be accepted elsewhere aren't accepted here. The version at User:WhatamIdoing/Sandbox is more appropriate in length and tone, but too weak IMO. I would support WhatamIdoing's version if posting the original text/prompt along with LLM-polished/translated output were upgraded from a suggestion to an expectation. With that said, upgrading WP:LLMDISCLOSE and WP:LLMCOMM to guidelines is the simplest solution and is what we should actually do here. -- LWGtalk(VOPOV)17:34, 16 January 2026 (UTC)[reply]
I would also support those last two proposals, with the first one being required from a copyright perspective (disclosure of public domain contributions) and the second one being a much more concise version of the proposal currently under discussion. Chaotic Enby (talk · contribs) 17:36, 16 January 2026 (UTC)[reply]
Support per Chaotic Enby and Newslinger, I don't see an issue with length since the lead and nutshell exist for this reason, but am fine with some of it being moved to an information page. LWG's idea above is also good, though re LLMDISCLOSE, Every edit that incorporates LLM output should be marked as LLM-assisted by identifying the name and, if possible, version of the AI in the edit summary is something nobody is going to do unprompted (and personally I've never seen). Kowal2701 (talk) 19:01, 16 January 2026 (UTC)[reply]
something nobody is going to do unprompted true, but it's something people should be doing. Failing to realize you ought to disclose LLM use is understandable, but failing to disclose it when specifically asked to do so is disruptive - there's simply no constructive reason to conceal the provenance of text you insert into Wikipedia. So while I don't expect people to do this unprompted, I think we should be firmly and kindly prompting people to do it. -- LWGtalk(VOPOV)19:11, 16 January 2026 (UTC)[reply]
Oppose as written. For an actual guideline, I would prefer something like User:WhatamIdoing/Sandbox. It makes clear the general expectations of the community should it be adopted. This proposal reads like an essay; it's trying to convince you of a certain viewpoint. Guidelines should be unambiguous declarations about the community's policies. For me, the proposed guideline is preaching to the choir: I agree with basically all of it, but I don't see it as appropriate for a guideline. I second what Chaotic Enby, Newslinger, and CaptainEek have said, and absolutely support the creation of a guideline of this nature. -- Agentdoge (talk) 19:27, 16 January 2026 (UTC)[reply]
Prioritize current towns over former municipalities in infoboxes and leads
Hi, I'm new here on en.wiki, so I don't know if this is the right place; if it isn't, please move the thread to the right one. I've already gone through the idea lab, where I received an approving opinion and a dissenting one (though this last one seemed to me to be due to a misunderstanding, probably caused by a poor formulation of the proposal on my part).
I'm opening this discussion to propose prioritizing current towns (and other sub-municipal entities) over former municipalities in infoboxes and leads in the countries in which we don't have separate articles for a municipality and its administrative center, like Italy, Germany and Switzerland. What I'm talking about:
Let's take a look at three articles about former municipalities that have become municipal sub-entities in these three countries:
Italy, Bazzano, Valsamoggia: the infobox and the lead are about the present frazione, and it is mentioned that it was an independent municipality until 2014
Germany, Bachfeld: here too the infobox and the lead are about the present Ortsteil, mentioning that it was independent until 2019
Switzerland, Adlikon bei Andelfingen: here both the infobox and the lead are about the former municipality, although it is still an independent town with its own borders and everything (see here for reference). This means that we won't be able to update the infobox and the lead with new statistics about population (granted, in Switzerland data about municipal sub-entities are usually scarce, but they do exist) and administrative divisions, because they are not about the town but about the former municipality. Please note that this case is not isolated, but common to every former Swiss municipality.
What would it change:
Again taking Adlikon bei Andelfingen as an example, the proposal, if accepted, would lead to these kinds of changes:
In the lead, from Adlikon bei Andelfingen (or simply Adlikon) is a former municipality in the district of Andelfingen in the canton of Zürich in Switzerland to Adlikon bei Andelfingen (or simply Adlikon) is a town in the municipality of Andelfingen, in the canton of Zürich in Switzerland. It was an independent municipality until 31 December 2022.
In the infobox, a "|subdivision_type4=" parameter would be added with the current municipality the town is part of, and the "|neighboring_municipalities=" parameter would be either removed or replaced with the neighbouring towns (although I couldn't find an appropriate parameter in the infobox settlement); a rough sketch of what the updated infobox might look like is below. Moreover, the population data would be updated if and when new data become available. Finally, the website would be removed unless it is still in some way an official website about the town (which is the case for some towns).
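To make the intended infobox change concrete, here is a minimal, hypothetical sketch of how the relevant {{Infobox settlement}} parameters might look after the update. Only the "|subdivision_type4=" idea comes from the proposal above; the other parameter names and values are illustrative assumptions and would need checking against the template documentation and the available data:
{{Infobox settlement
| name              = Adlikon bei Andelfingen
| settlement_type   = Town                  <!-- or "village"/"locality", whichever term is agreed on -->
| subdivision_type4 = Municipality          <!-- new: the current municipality the town is part of -->
| subdivision_name4 = Andelfingen
| population_total  =                       <!-- to be updated if and when new data become available -->
<!-- |neighboring_municipalities= dropped: no equivalent parameter found in this infobox -->
}}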
Please note that I've talked only about Switzerland, but the proposal applies to every country in which we don't have different articles for former municipalities and their former administrative center.
Are there opinions on the matter? --Friniate ✉ 14:21, 9 January 2026 (UTC)[reply]
I've notified of this discussion the users who had taken part in the discussion at the idea lab, the WikiProjects Switzerland and Geography, and the talk pages of the involved infoboxes.--Friniate ✉ 14:28, 9 January 2026 (UTC)[reply]
We should summarize what reliable sources say about these localities, not update to the most recent information about the topic. If a locality changes in some formal administrative status today, we don't need to change the first sentence of the article until a preponderance of reliable sources recognize this change. Wikipedia is an encyclopedic summarization of what reliable sources say about a subject, giving a broad historical view, rather than the most up-to-date information about a topic. That might mean emphasizing a former administrative status over the newer one, depending on what the sources say. Katzrockso (talk) 14:29, 9 January 2026 (UTC)[reply]
@Katzrockso Well, right now I don't think that we are really doing that; we're basically differentiating between countries: with Italy and Germany we're prioritizing the current entities, with Switzerland the former municipalities (at least for the municipalities that were dissolved after Wikipedia was born). If the administrative change is official and confirmed by reliable sources, I don't see why we should prioritize a former entity dissolved maybe years ago...
Yes, there is a reliable source which puts the former municipality first, but can't we take an editorial decision on matters like these, in order to avoid different treatment of very similar cases across countries? The decision itself to cover the former municipality and the present village in a single article is an editorial decision... In any case, there are also other sources which present the current town first, and on subjects like these I doubt that we will find much else. --Friniate ✉ 14:54, 9 January 2026 (UTC)[reply]
I generally support this proposal. The procedure of WP:NAMECHANGES, which is fairly similar to what we're doing here, is to prioritize the sources after the change/merger. For these tiny ex-villages, we usually don't have many sources, which means I think we can take the news articles on the mergers at face value and assume that future sources will refer to the ex-village as part of another municipality. (I hope this rambling makes sense.) Toadspike[Talk]11:57, 11 January 2026 (UTC)[reply]
I agree with the spirit of the proposal, though it's probably hard to generalize it to all cases.
In most cases, I would find it odd to call a former municipality a "town". Depending on the situation, the terms "District" or "Neighborhood" (example: Wollishofen) may be more accurate. But that's not specific to Switzerland.
There are also situations where "former municipality" may be the best descriptor. For example, my hometown St-Legier has now been merged with Blonay as Blonay - Saint-Légier. Most people who live there would, when asked, say they live in "St-Legier". However, most official sources would describe St-Legier as a former municipality (example). 7804j (talk) 19:58, 12 January 2026 (UTC)[reply]
@7804j I fear that the HLS/DHS/DSS uses this approach with every former municipality, at least the ones that were dissolved in the last decades. So if we follow that approach, we should keep the present situation.
In any case, we also have official sources for sub-municipal entities (that's where I took the translation "towns" from, since the "Répertoire officiel des localités" is translated as "Official index of cities and towns"). --Friniate ✉ 20:09, 12 January 2026 (UTC)[reply]
I agree the historical dictionary isn't the best example, as it always talks about former municipalities. But I also don't think the official index of cities and towns is. I'm actually not sure why they keep St-Legier and Blonay as distinct, and I wonder if that's because they haven't updated it yet? For example, St-Legier-la-Chiesaz was itself the merger of St-Legier and La-Chiesaz, but since this is much older, the distinction disappeared in practice. Also, I find the translation of "localités" into "cities and towns" very odd -- you would certainly not refer to a village of a few thousand or a few hundred people as a "town" in Switzerland (in French, for example, both towns and cities would be called "ville", and in Switzerland something is called a "ville" starting from 10k inhabitants). 7804j (talk) 03:41, 13 January 2026 (UTC)[reply]
@7804j No, no, it's updated; here's the French version: as you can see, under "Limites de la commune" there is "Nom officiel de la commune: Blonay - Saint-Légier".
As for the translation I've no objection to "village" or other words, I used "town" simply because it's the official translation, but English is not my mother tongue so I've no opinion on it. --Friniate ✉ 14:03, 13 January 2026 (UTC)[reply]
There are so many definitions of town and city in English that whichever you use is unlikely to be completely wrong. Many years ago, I remember seeing an official sign for the "City of ____", Population: 6. In the US, city can be an indication of size (a city is bigger than a town) or of legal status. WhatamIdoing (talk) 20:57, 13 January 2026 (UTC)[reply]
Ah I wasn't aware that "localité/Ortschaft" was an official term under Swiss law. Good to know!
Then I think it's more an issue of translation. When I see "localité" in French, it makes it clear that it doesn't refer to a proper municipality (commune/Gemeinde). But I think the term "town" or "city" in English would suggest that it's actually a commune/Gemeinde, with all the things that come with it (including different taxation rate, etc.).
I generally support this as well, the proposal makes sense for situations where a name has continuously referred to a particular place and the only difference is a change in how the place is classified. I think the OP deserves some praise for identifying and presenting this niche issue so clearly and for using proper en.Wiki-process as well. I hope they stick around. JoelleJay (talk) 17:09, 14 January 2026 (UTC)[reply]
"Adlikon bei Andelfingen (or simply Adlikon) is a former municipality...", in general "X is a former Y...", makes sense if that is the main way X is discussed today. A UK example might be if a small village is mainly notable for being a former County Town. I imagine, though, that such cases are unusual. Readers would normally want to first know what X is currently, and then maybe later know what it was formerly, probably in a history section. --Northernhenge (talk) 17:28, 14 January 2026 (UTC)[reply]
Generally I agree with others above that we should prioritise current information; however, it is very difficult to make a general rule on these matters. This combines two tricky challenges: precisely defining an article topic, and using the very rigid format of infoboxes on topics that lack this rigidity. That said, the specific proposal seems sound. It gets at part of the second issue by noting the infobox will have to be swapped out. CMD (talk) 00:37, 15 January 2026 (UTC)[reply]
@Chipmunkdavis@Northernhenge I've tried to do a bit of research on related policies on the matter. I see that at Wikipedia:WikiProject Cities/US Guideline, even in the case of "ghost towns" the guideline provides that the emphasis should be put on the current situation... The same can be said for old cities: Troy has an infobox about the current archaeological site. I agree, though, that in principle a good guideline should always give enough wiggle room to deal with any kind of situation and exception case by case, but I don't think this means that we can't have a guideline for the vast majority of common situations, in which we'll simply have a former municipal center that became a village in a new (greater...) municipality.
As I said before, if we simply leave it to the sources, the risk is that we are overly influenced by the approach of one single (although undoubtedly reliable) source, in this case the Historical Dictionary of Switzerland, which puts the most emphasis on former administrative divisions in all these cases, without a case-by-case evaluation. It wouldn't be wrong per se, but it's certainly detrimental to consistency inside the encyclopedia.
So, maybe a good compromise could be something along the lines of: articles about human settlements should normally be updated to the current situation in terms of population, administrative classification, etc. Exceptions can be made in cases where there is a clear consensus among many reliable sources that a former situation is more relevant (for example, in the case of ghost towns, if the reliable sources mainly talk about the town as it was when still inhabited rather than about the current ghost town). If both subjects are relevant, you could consider having two different articles dedicated to each of them (for example, Sparta is about the old city-state, and Sparta, Laconia about the current city; see WP:MULTIPLESUBJECTS). --Friniate ✉ 17:49, 15 January 2026 (UTC)[reply]