The proposals section of the village pump is used to offer specific changes for discussion. Before submitting:
Check to see whether your proposal is already described at Perennial proposals. You may also wish to search the FAQ.
This page is for concrete, actionable proposals. Consider developing earlier-stage proposals at Village pump (idea lab).
This is a high-visibility page intended for proposals with significant impact. Proposals that affect only a single page or small group of pages should be held at a corresponding talk page.
For a listing of ongoing discussions, see the dashboard.
RfC: Logo change for 25th anniversary
The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
With the anniversary only four days away, and no close despite a request at WP:AN, this is a less-than-ideal WP:INVOLVED close by the nominator. There is clear consensus for a temporary logo change, but little discussion of its duration; one participant suggested a month.
For the Legacy Vector skin, four options were proposed. There is consensus against the proposed pixelated puzzle globe for Legacy Vector. There is consensus for TheWanderingTrader's globe, given that no one exclusively supported Chaotic Enby's globe or the default globe after TWT's globe was posted. A few noted that the Vector 2022 logo requires a CSS hack and looks visually cluttered, supporting the removal of the globe from that skin altogether; however, not enough participants discussed the idea to determine consensus. Ca (talk to me!) 01:14, 11 January 2026 (UTC) (Link to Phabricator ticket for the logo change)[reply]
On 15 January 2026, Wikipedia will celebrate the 25th anniversary of its founding in 2001. In October 2025, @BMcnally-WMF proposed logo designs for the occasion, which were refined through community discussion on Meta. BMcnally has also proposed a unique puzzle globe illustration for the Vector Legacy skin, which replaces the standard 3D puzzle globe.
Questions:
Should the current logo temporarily be replaced with the commemorative logo depicted in the mockup?
Can you please link to the exact image(s) you plan on swapping in, if available? I assume image #2 in your screenshot is a placeholder and not the actual proposal. –Novem Linguae (talk) 15:13, 5 January 2026 (UTC)[reply]
Hi! Following the discussions we've had on Meta, I oppose the Legacy Vector logo change, as it looks more like a pixelated globe than a puzzle one. I support the other changes aesthetically, although I'm less affected by them as I mainly use Legacy Vector myself. Chaotic Enby (talk · contribs) 15:21, 5 January 2026 (UTC)[reply]
I agree with Chaotic Enby: oppose the Legacy Vector logo, support everything else, although as a Monobook user I won't see the pixelated globe myself. Thryduulf (talk) 15:53, 5 January 2026 (UTC)[reply]
Oppose the pixelated logo but support including the standard logo with the additional wording underneath. The pixelated design doesn't seem fit to serve as any logo on Wikipedia, let alone one honoring its quarter-century mark. The first one, using the current wikiball, works well, although the 25th-anniversary printing should be as shown in the pixelated version. Randy Kryn (talk) 16:29, 5 January 2026 (UTC)[reply]
TWT's Proposal (Updated Color) Here's my attempt for Legacy Vector. It is not quite as bold, but the single blue piece is more akin to the other logos for Vector 2022. I additionally added "Celebrating 25 years" below; I originally attempted "25 years of the free encyclopedia", but when rendered it was too small. - TheWanderingTraders (talk) 01:53, 6 January 2026 (UTC)[reply]
Honestly, once this discussion is closed, I wouldn't be opposed to this one taking the main file title and my "proposal" being clearly marked as not the actual one. Chaotic Enby (talk · contribs) 10:51, 6 January 2026 (UTC)[reply]
+1. Would be better still if the colour silver were incorporated somehow, as a 25th anniversary is a silver anniversary. Ham II (talk) 10:55, 6 January 2026 (UTC)[reply]
Hi @TheWanderingTraders! Thank you so much for creating this, and everyone else for adding their preference! This is super helpful. We really like what TheWanderingTraders designed. We just made one little tweak to it (Vector Legacy logo proposal for Wikipedia's 25th anniversary): we updated the blue to the core blue we are using in all Wikipedia 25 assets! BMcnally-WMF (talk) 18:03, 7 January 2026 (UTC)[reply]
No problem @BMcnally-WMF! Glad I helped. One last thing: I'm not sure the gradient and relief on the edge of the blue piece were fully added back. I updated my image with the color from your version and a larger scaling of the 25 if that helps; no need to use it if this was intentional. Besides, it's honestly hard to see the difference at scale. - TheWanderingTraders (talk) 02:22, 8 January 2026 (UTC)[reply]
Support Vector2022 changes. Not explicitly opposed to original Vector changes and other skin discussions, but Vector2022 is our publicly visible face and should be the primary change under consideration. How long are we thinking of keeping it up, one month? CMD (talk) 00:18, 6 January 2026 (UTC)[reply]
Support the changes, except for the pixelated globe on Legacy Vector. Note that in these mockups, the Wikipedia globe is included as part of Vector 2022 because a MediaWiki:Vector-2022.css hack is required to position the wordmark properly when there is no actual logo (the Wikipedia globe), which is what the WMF recommends. Personally, I'd prefer following that advice and dropping the Wikipedia globe on Vector 2022/Minerva for the duration that we're having this banner, because the header becomes visually cluttered otherwise. Chlod (say hi!) 04:56, 6 January 2026 (UTC)[reply]
Support change with alternate Legacy Vector logo. Either TheWanderingTraders' logo or something similarly unintrusive -- maybe one with 25 in different writing systems? Giraffer (talk) 09:43, 6 January 2026 (UTC)[reply]
Support V22/Minerva versions and TheWanderingTraders' versions. However, I would prefer removing the globe altogether and replacing it with the 25 puzzle piece or something similar, as Chlod mentions. ARandomName123 (talk) Ping me! 14:41, 6 January 2026 (UTC)[reply]
Support V22/Minerva versions and TWT's, also support removing the globe per Chlod. The blue puzzle piece motif is nice. Thanks to all the graphic designers for their work on this. Levivich (talk) 17:30, 6 January 2026 (UTC)[reply]
Support temporarily changing to add the puzzle piece and text; the 3rd globe here is much better. Absolutely not for the pixelated globe, though. Either change that one less dramatically or leave it alone. ~ Argenti Aertheri (Chat?) 21:11, 6 January 2026 (UTC) Updated @ 20:49, 7 January 2026 (UTC)[reply]
Support all but the pixel globe. The puzzle piece from the celebration kit is lovely and I quite support using it. But the pixel globe is rather poor quality. Let's just keep the regular ol' globe. CaptainEek Edits Ho Cap'n!⚓ 21:30, 6 January 2026 (UTC)[reply]
Support File:Wikipedia-logo-v2-en-25-alt.svg for the real Vector (and MonoBook). I like having the blue puzzle piece in the globe, and while making the whole globe blue is interesting in theory, having just the one piece be blue highlights the 25 the best. Neutral on Minerva and Vector 2022 because I avoid those skins like the plague. Also, I made a Cologne Blue mockup for the lulz (see image at right). —pythoncoder (talk | contribs) 03:17, 7 January 2026 (UTC)[reply]
If there are any Cologne Blue users who actually want to have this in their site title, paste this code into your cologneblue.css user subpage:
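The snippet itself was not preserved in this archive, so here is a hedged sketch of what such a rule might look like, modeled on the Vector 2022 CSS shown later in this section. The `#sitetitle` selector and the image URL are placeholder assumptions, not the actual values; only the 36px sizing and the -40px right offset come from pythoncoder's follow-up reply.

```css
/* Hypothetical reconstruction only — the original snippet is not preserved
   in this archive. The selector and image URL below are placeholders;
   the 36px sizing and -40px offset are from the follow-up reply. */
#sitetitle {
  position: relative; /* anchor the pseudo-element to the site title */
}
#sitetitle::after {
  content: "";
  position: absolute;
  top: 0;
  right: -40px;              /* per the follow-up: right set to -40px */
  width: 36px;
  height: 36px;
  background-image: url("https://upload.wikimedia.org/wikipedia/commons/…"); /* placeholder */
  background-size: 36px;     /* per the follow-up: background-size 36px */
  background-repeat: no-repeat;
}
```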
Oh my god I can't believe you actually did it. FWIW it lines up a bit better if you set all of width/height/background-size to 36px, and right to -40px. —pythoncoder (talk | contribs) 18:59, 7 January 2026 (UTC)[reply]
Comment I'm going to demonstrate my ignorance of how the skins work. I use Timeless, and I know I'm not the only one, but I haven't seen it mentioned here. Will Timeless happily display one of the new or old Vector versions of the logo? @BMcnally-WMF, can you make sure that Timeless users aren't left out of the celebrations? (If even Cologne Blue gets to join in...) ClaudineChionh (she/her · talk · email · global) 21:44, 7 January 2026 (UTC)[reply]
Obviously this should be available in all skins. I can post the required CSS here, and then interface administrators can paste it into the relevant MediaWiki: pages. sapphaline (talk) 21:57, 7 January 2026 (UTC)[reply]
Support with TheWanderingTraders' Vector 2010 version including the colour tweaks; oppose the original pixelated logo as unsuitable: it looks like it belongs in the 1990s, predating the encyclopedia itself. CNC (talk) 10:06, 8 January 2026 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Approximate results as of 8 January
Skin: Vector 2022
Screenshot: (not preserved in this archive)
Required CSS:
@media screen {
  /* Shift the default logo images out of view so that the WP25 artwork,
     applied as a background image below, shows in their place. */
  .mw-logo-wordmark, .mw-logo-tagline {
    object-position: -9999px;
    background-repeat: no-repeat;
    background-size: contain;
  }
  .mw-logo-wordmark {
    background-image: url("https://upload.wikimedia.org/wikipedia/commons/5/55/WP25_Primary_lockup_white_25.svg");
  }
  .mw-logo-tagline {
    background-image: url("https://upload.wikimedia.org/wikipedia/commons/9/9a/WP25_Vector_2022_LINE.svg");
  }
  /* Hide the puzzle-globe icon for the duration of the banner */
  .mw-logo-icon {
    display: none;
  }
  .mw-header {
    padding-top: 1em; /* make spacing identical to how it is with the globe */
  }
}
Modern and Cologne Blue are exceptions. This still applies for the supported skins (Vector Legacy, Vector 2022, and Minerva), which are what the vast majority of readers and users see. Chlod (say hi!) 13:57, 8 January 2026 (UTC)[reply]
Logo change has been scheduled for 21:00 UTC, 14 January 2026, as the closest backport window to 00:00 UTC, January 15. The next window after that is at 08:00 UTC on January 15. You can preview the logos at T414271. Note that this does not include the changes for Modern and Cologne Blue, nor a styling change for Vector 2022, which require local CSS changes by an interface administrator. More details on the task. Chlod (say hi!) 03:27, 11 January 2026 (UTC)[reply]
@Chaotic Enby: Hmm, it's showing up normally for me. Vector Legacy still uses a PNG for the logo, with a maximum resolution of 270×310 px, so how blurry it is may depend on your screen resolution or how much you zoom into the logo. Chlod (say hi!) 04:06, 15 January 2026 (UTC)[reply]
The Wikipedia sign up page disclaimer idea
Hello everyone. I was told by email from the Wikimedia Foundation to direct my query here.
Recently I have seen an influx of what I call "dud articles": articles that people try to make about themselves, their company, their mother, etc. I believe these waste our users' time in declining and reading them, so on the Teahouse I suggested a new system: a disclaimer on the account-creation page, saying something along the lines of:
"If you are coming to Wikipedia to write about yourself, a family member, a business or an influencer, please reconsider and refrain from making such articles."
Even if that were to deter only one person, it would still be an improvement. I did get support for this idea from other users on the Teahouse.
I'm personally for the harsh wording. Being soft about it seems to make the kind of editors who need this warning think they can be an exception to the rules. The number of times I've seen these WP:NOTHERE types try to argue that something isn't technically against the rules, because we try not to speak in absolutes, is laughable. mgjertson (talk) (contribs) 20:28, 8 January 2026 (UTC)[reply]
Cease-and-desist letters and mortgage letters can certainly be pretty harsh, or intimidating, but I don't think we want too harsh or too soft; a middle ground would be the ideal solution. Mwen Sé Kéyòl Translator-a (talk) 10:10, 12 January 2026 (UTC)[reply]
Anything we say on the sign-up page should be policy-based and link to the policy. Imho a WP:LOCALCONSENSUS here is not strong enough for changing what every new user sees as "instructions" for signing up from now on. I think we could say that it is discouraged, but not that they cannot do it, unless we change policy first, to change 'discouraged' to 'must not' and arm it with some teeth with what kind of sanctions happen if they do so anyway. Mathglot (talk) 03:58, 6 January 2026 (UTC)[reply]
Perhaps something not as harsh. I believe the policy says that writing about yourself is discouraged and is a COI, so perhaps it could say: "Please note: writing about yourself, your company or another topic close to you is discouraged, and if you do decide to join to write a page on these, please disclose your conflict of interest."
Not recommending sanctions at all; what I was trying to say is that you ought not to say a user must not do something on the sign-up page unless the policy page supports that; the flip side is that if you want to say 'must not' here, then the policy page must be changed first. I wouldn't support that; I'm just explaining what's dependent on what. Hope I am being clearer, but if not, I blame the wine. Mathglot (talk) 10:00, 6 January 2026 (UTC)[reply]
Oh sorry I completely understand you now. I think a good compromise is recommending not to (discourage), that doesn’t stop someone, but one or two people might instead look at the rules, or decide not to write that page on their mum, or their dog, or their favourite TikToker etc. Mwen Sé Kéyòl Translator-a (talk) 10:19, 6 January 2026 (UTC)[reply]
I would be opposed to adding any extraneous information or links to a sign-up page that are unrelated to signing up for an account. Once they are signed up, they are automatically assigned a mentor, and often (but not invariably) receive one of the welcome templates maintained by the Welcoming committee. A link to Help:Your first article is present in 23 different welcome templates, including the most popular template {{Welcome}}, which is present on the talk pages of over 600,000 users. Also, the sign-up page disappears after they are signed up, and they lose the link, whereas their User talk page remains, and they can consult the links anytime. Mathglot (talk) 02:49, 8 January 2026 (UTC)[reply]
At face value, this seems like a good idea. But as with any idea, there could be unintended consequences:
Thousands of new accounts are created every day. Most of those accounts never make an edit. Do we really need to show all these people this additional information? Would a scary warning message discourage users who never intended to edit promotionally at all?
Most accounts never engage in promotional editing. By showing everyone a message telling them not to do it, we may give them ideas that they previously didn't have.
If we imply that these people shouldn't create an account, will they simply make promotional edits without an account (from TAs) instead?
I mean, in life people are told not to commit crimes, with "do not steal" or "employees only" signs, and most abide; it doesn't discourage someone from living their life or going into a shop, as they know they won't do what the signs tell them not to do. I don't think we would give them ideas, because they would know the article will be declined, hence the warning. Thirdly, I do see your last point about TAs instead, but TAs can't make pages, I believe, so that cures the issue.
Most don't "abide" because they read a sign that says "Don't steal"; they were never going to steal anyway. Vandals gonna vandalize, no matter what words you add (which they will skip). I question whether there is any point to a wording change at all, especially wording they will see once, and never again. Mathglot (talk) 22:22, 11 January 2026 (UTC)[reply]
I do see what you mean, but perhaps there would be users who simply don't know that Wikipedia isn't a sort of LinkedIn or Instagram and would stop when told, like people on the Teahouse who accept their mistakes and don't continue with the page on themselves, a family member, etc.
Has anyone called for any stats (easily done) to find out just how often such articles that people try to make about themselves, their company, their mother, etc. actually arrive in the NPP feed, or even asked the 800-strong NPP community? They are dealt with swiftly at CSD. There are dozens of far worse articles that creep in under the radar of the less experienced patrollers. Kudpung กุดผึ้ง (talk) 14:36, 11 January 2026 (UTC)[reply]
The initial proposal probably doesn't say what the OP intended. The wording of the initial proposal includes "yourself, a family member, business or influencer". It doesn't say "your business", and therefore it includes any business and any influencer – including, e.g., Microsoft and MrBeast.
@Kudpung, I second your idea about getting more information. I sometimes wish for a list of research ideas (e.g., for grad students in search of ideas). I wonder what we would learn if someone contacted the last 50 companies for which articles were created, and said "I'm doing research and would love to know what it took to get your Wikipedia article and if you have any advice for other companies". How many would say "We hired ScammersRUs" or "We just had an intern write it"? How many would say they were unaware of it having been created? WhatamIdoing (talk) 22:54, 11 January 2026 (UTC)[reply]
Apologies for not being clear, I do mean “your business” and not all influencers (as some now warrant a page, like Mr Beast). I may have a talk with the NPP team and see if they have any stats on this matter. Mwen Sé Kéyòl Translator-a (talk) 10:15, 12 January 2026 (UTC)[reply]
You could also ask the approx. 200-strong (and growing) Wikipedia Mentor community. Anecdotally, I see more autobiographies and my-band/my-biz articles on new accounts' User pages (not subpages) than I do appearing in Draft space (marked for submission or not). Besides welcoming new users, I also hang out at the WP:Teahouse and often see them there as well. Data is always a good idea: if you want to measure something, draw up a null hypothesis and a proposal for an A–B test, where half of new attempts get an account-creation page including the text you want to measure (A) and the other half (B) do not, and let it run for a few months. Later, you can analyze the data and see what happened. Keep in mind that things may not go the way you want: one possible outcome (besides adherence to guidelines) is that A numbers go down while B numbers stay the same. Then you'd have to argue whether that's a good thing, i.e., whether the A folks who went on to complete registration ended up adhering to the rules a bit better. Mathglot (talk) 00:06, 12 January 2026 (UTC)[reply]
I think that if someone thinks it's harmless fun to create an article about their pet rabbit (real example), they wouldn't have stopped to read a sign-on notice, just like many of us ignore the "terms and conditions" when shopping. If someone is determined for some reason to promote a person, place or whatever, they'll do it in any case. A sign-on notice wouldn't deter them. Also, in general, how much do we think of editors being in a contractual relationship with Wikipedia? If we do think of it that way then, yes, terms and conditions apply. If we think of it more as an informal personal relationship then it's more about assuming good faith, trusting and forgiving, but accepting we'll have to spend time – maybe too much time? – working on the relationship by debating the case for or against every silly or promotional article on a trivial subject. --Northernhenge (talk) 16:47, 14 January 2026 (UTC)[reply]
Yes but a large warning on the sign up page will have to be read, because it’s in your face, not some small Terms and conditions text (which I will admit I’ve never read). I don’t want to dissuade people from editing, but even a polite notice might deter a few who were going to make pages on themselves or what not. Then again most always feel like they are the exception and will try and try and try, so most probably will read the disclaimer and continue on regardless. Mwen Sé Kéyòl Translator-a (talk) 18:03, 14 January 2026 (UTC)[reply]
Gosh, there really is a Wikipedia page for everything 😂 Perhaps we just need big flashing words on the screen that block the sign-up page until you read it fully, like that smiling virus thing (I'm joking btw, I wouldn't go that extreme). Mwen Sé Kéyòl Translator-a (talk) 09:09, 15 January 2026 (UTC)[reply]
They've never let me use blink text on-wiki, but we sometimes get to use big red text.
If you wanted to work on warnings for creating articles, then that should appear when you click the [Edit] button. It might be possible to put something into the software itself. Imagine something that triggers if the edit count is <50, and now you have to answer a few simple questions before it will let you proceed. WhatamIdoing (talk) 23:14, 15 January 2026 (UTC)[reply]
Realistically, non-trivial software development requires a budget allocation and then time to actually do the work. It's January now, so the best-case scenario would be to join the annual budget planning process (which is starting now), and to have a team assigned to begin work in July (beginning of the new fiscal year) and then maybe to have something to test next calendar year. But "years" is more likely. WhatamIdoing (talk) 22:38, 17 January 2026 (UTC)[reply]
@WhatamIdoing. This is the fundamental problem when WMF intervention is needed for just a few lines of code on something critically important: because it was a community idea and not their own, they find any excuse not to entertain it. They also appear to hold a strong opinion that, because they are paid for what they do, nobody among the tens of thousands of volunteers has any technical clue, even though some of us have done MediaWiki installations or built extensions ourselves. This comes down to even throwing a simple switch on one of the default prefs in a MediaWiki package. AFAIK, the registration page has jealously guarded WMF-only access. Kudpung กุดผึ้ง (talk) 05:56, 18 January 2026 (UTC)[reply]
A user project is in the making to address precisely the registration page, which, by offering a few simple words of very short text in the nicest possible way, would channel new users through a new route that would not only prevent the creation of nonsense articles but also provide much better onboarding and truly interactive help than the current development at the WMF, which is in its 3rd (or 4th?) year with limited success. Kudpung กุดผึ้ง (talk) 22:22, 14 January 2026 (UTC)[reply]
Sounds interesting; can we please get a link to the 'project in the making'? Also, what is the current development with limited success in its 3rd/4th year? Thanks, Mathglot (talk) 00:22, 15 January 2026 (UTC)[reply]
@KeyolTranslater. Saw your request at WT:NPR. The best place to go for stats is probably Wikipedia:Request a query. I would suggest taking a sample size of a recent month and asking for a breakdown of articles deleted under G1, G2, G3, G10, A1, A3, A7, A9, and A11. These are the WP:CSD criteria, broadly construed, that address your proposal. It's fair to assume that most CSDs are tagged at NPP. You may then have to parse them manually to fit your criteria, but that's something we all have to do when we want to back up our claims with data. It would also be an excellent exercise if you are thinking of embarking on a career as an NPPer. There are roughly 800 rights holders, fewer than 10% are truly active, and the backlog is in its worst crisis in years; a lot of help is needed. If you were to obtain those stats, they would also be really useful for several nascent ideas. Kudpung กุดผึ้ง (talk) 22:13, 16 January 2026 (UTC)[reply]
@KeyolTranslater. @Mathglot. On the premise that G11 and A7 are the main reasons for the majority of deletions, these numbers now need to be looked at from the perspective of two audiences: 1. The creators, 2. The curators (i.e. NPP) and how any change in the wording of the interface might:
Discourage/Deter new users from creating something that is highly likely to be deleted
Provide some relief for the NPPers by reducing the overall number of new articles in the daily feed.
Unless I'm missing something, I don't see any significant rises/falls in the numbers of deletions over the 3-year sample (G15 is a very new criterion). What is important now is to see how these deletions correlate with a rise/fall in new article submissions over the sample period, and to decide whether a change in wording would have sufficient impact on new users and ultimately (which IMO is more important) reduce the workload at NPP.
For reasons I have not been entirely able to pinpoint, despite the strong number of NPP rights holders (now around 800) since the right was created in 2016, the NPP system has been reduced to a pattern in which backlog drives have become the rule rather than the exception, and enough interest cannot be generated in patrolling new pages to keep the backlog flat at a sustainable level. Perhaps the thread at Investigating the cause(s) of backlogs explains much of it, and maybe Novem Linguae has some suggestions. Kudpung กุดผึ้ง (talk) 20:09, 17 January 2026 (UTC)[reply]
I was informed that there isn't a large database of AfC submission declines, unfortunately, which would have really helped; the statistics above are merely NPP deletions and are therefore by confirmed users who can just publish their articles, not new editors (who seem much more likely to make pages meeting the criteria above). Mwen Sé Kéyòl Translator-a (talk) 08:55, 18 January 2026 (UTC)[reply]
@KeyolTranslater. I don't think declined AfC submissions will skew the results much. It depends on what percentage of new articles in the sample period are received into AfC. There are two types of AfC submissions: ones moved to draft at NPP because they have the potential to reach mainspace if more sources are added or the text is cleaned up, and articles that are created immediately as drafts, which are probably the most likely to be deleted. The latter are possibly quickly dealt with under A7 and G11, while both kinds can end up at G13 (abandoned drafts). If you can get them, it might be interesting to see how many drafts get deleted at A7 and G11, but G13 is best left out of the equation, as it would be hard work to parse them into different types. Kudpung กุดผึ้ง (talk) 18:20, 18 January 2026 (UTC)[reply]
RfC: Turning LLMCOMM into a guideline
initial discussion
Given that at this point the prohibition against using LLMs in user-to-user communication (WP:LLMCOMM) has become something of a norm, I think it would be sensible to make it an official guideline as part of the ongoing attempt to strengthen our LLM policy.
Rather than just promote the exact text of LLMCOMM, I've decided to try to create something which synthesises LLMCOMM, HATGPT and general advice about LLMs in user-to-user communication. My proposal as it currently stands is at User:Athanelar/Don't use LLMs to talk for you
My proposed guideline would forbid editors from using LLMs to generate or modify any text to be used in user-to-user communications. Please take a look at it and let me know if there's anything that should be added or modified, and if you agree with the proposed restrictions. I'd love to workshop this a bit and get it to a stage where it can be RfCed. Athanelar (talk) 13:33, 7 January 2026 (UTC)[reply]
My thoughts:
Your proposal is much more strict than LLMCOMM (which is already enshrined in guidelines as WP:AITALK). It doesn't just synthesize LLMCOMM and HATGPT, which both allow exceptions for refining one's ideas; it goes beyond that and bans LLM use entirely for writing comments. This, combined with the "Editors should not use an LLM to add content to Wikipedia" phrasing of the proposed NEWLLM expansion, would effectively ban all use of LLMs anywhere on the English Wikipedia. This makes sense given your stated opinions on LLM policy, but I'm sure this is going to get significant opposition.
Your proposal also goes beyond commenting to basically say that LLMs are useless for any Wikipedia editing at all, as indicated by the section about copyediting. This seems out of place for a guideline that is supposed to be about using LLMs for comments. Again, this makes sense given your stated anti-LLM sentiments, but I have seen it repeatedly demonstrated that such a sentiment is far from universal.
I am concerned mainly because this guideline assumes bad faith from LLM-using editors. Most LLM-using editors are unaware of LLMs' limitations because of the massive hype surrounding them. My opinion is that instead of setting down harsh sanctions for LLM use, we should instead educate new users on why LLMs are bad and teach them to contribute to Wikipedia without them.
Finally, a lot of editors are just worn out at this point from having so many LLM policy discussions in such a short period of time. Can we at least wait until the NEWLLM expansion proposal is over? SuperPianoMan9167 (talk) 14:23, 7 January 2026 (UTC)[reply]
I appreciate your feedback and your continued presence as a moderate force in these discussions.
I recognise my proposal is quite extreme. My goal was to shoot for 'best case' and compromise from there as necessary.
The subsection on copyediting exists to justify the restriction against using LLMs to refactor, modify, or fix punctuation, etc., because at best LLMs are unfit for this task anyway, and at worst it provides a get-out-of-jail-free card for bad-faith editors. The overall section is in fact expressly intended to demonstrate that LLMs simply are not any good at the things people might want them to do in discussions.
I have tried to avoid that by pointing out that I believe the motivation to use LLMs in these cases comes from a good place (concerns about one's abilities).
I understand; but I still have the passion and energy, and I hope others do too. We are in something of a race against the clock here; every month we wait before strengthening our policies is another month of steadily being invaded by this type of content.
Missing the biggest reason not to use LLMs for your comments: it will make people more likely to dismiss your comments, not less.
As usual I think we need to specifically name what tools we are talking about. People genuinely don't know things are AI that actually are, and if we can't convince them of that, we can at least say "don't use ____" in the guideline.
"they are not specifically trained in generating convincing-sounding arguments based on Wikipedia policies and guidelines, and considering they have no way to actually read and interpret them" – Technically you could provide policies and guidelines in a prompt. Most people probably aren't doing that, but they could.
There are probably better copyedit examples; the first one seems like splitting hairs, and the original sentence had the same problem with different punctuation. The one where an AI copyedit turned "did not support Donald Trump" to "withdrew her support for Donald Trump" comes to mind. Better yet would be a copyedit to a talk page comment, though that might be hard to come by without using AI yourself.
"Editors are not permitted to use large language models to generate or modify any text for user-to-user communication" will have a disparate impact that discriminates against people with some kinds of disabilities, such as dyslexia. A blanket ban is therefore in conflict with WP:ACCESS and possibly with foundation:Wikimedia Foundation Universal Code of Conduct.
I think it is patronizing to tell people "You don't need it" when some of them actually do. I oppose telling English language learners to simply go away ("If your English is insufficient to communicate effectively, then once again, you unfortunately lack the required language ability to participate on the English Wikipedia, and you should instead participate on the relevant Wikipedia for your preferred language"), because (a) that's rude, and (b) sometimes we need them to bring information to us. If you don't speak English, but you are aware of a serious problem in an English Wikipedia article, I want you to use all reasonable methods to alert us to the problem.
Here's the Y goal:
A: I don't know English very well, but the name on the picture in this article is wrong.
B: Thanks for letting us know about this factual error. I'll fix it.
Here's the N anti-goal:
A: I don't know English very well, but the name on the picture in this article is wrong.
B: This is obvious AI slop. If you can't write in English without using a chatbot to translate, then just go away and correct the errors at the Wikipedia for your native language instead!
A: But the error is at the English Wikipedia.
B: I don't have to read your obvious machine-generated post!
Discussion on English competence requirements on enwiki
At minimum there must be a carve-out for machine translation, because basically all machine translation nowadays uses the same transformer architecture that underlies LLMs, as it typically performs better than other types of neural networks. (In fact, the very first transformer from the 2017 paper Attention Is All You Need was not designed for text generation; it was designed for machine translation. The generative aspect was pioneered by OpenAI's GPT model architecture with the release of GPT-1 in 2018.)
I understand your point, but what you're essentially arguing then is that WP:CIR also needs to be modified because we shouldn't require communicative English proficiency.
I think it is patronizing to tell people "You don't need it" when some of them actually do. My point is that the people who need AI to talk for them, translate for them, interpret PAGs for them, etc. have a fundamental CIR issue that the LLM is being used to circumvent. We can't simultaneously say "competence is required" and also "if you lack competence you can get ChatGPT to do it for you". Athanelar (talk) 19:27, 7 January 2026 (UTC)[reply]
From WP:CIR: It does not mean one must be a native English speaker. Spelling and grammar mistakes can be fixed by others, and editors with intermediate English skills may be able to work very well in maintenance areas. If poor English prevents an editor from writing comprehensible text directly in articles, they can instead post an edit request on the article talk page.
Nor am I saying anyone must be a native speaker, merely that if someone's English level is so low that they require an LLM to communicate legibly, then they are blatantly not meeting the CIR requirement to have the ability to read and write English well enough [...] to communicate effectively
Saying "actually, if you can't communicate effectively then you can just have an LLM talk for you" seems to be sidestepping this requirement.
I also simply don't see the reason. Other-language Wikipedias already struggle for editors compared to enwiki, why should we encourage editors without functional English to find loopholes to edit here rather than being productive members of the wider Wikipedia project? Athanelar (talk) 19:41, 7 January 2026 (UTC)[reply]
Because we need people to tell us about errors in our English-language articles even if they can't communicate easily in English. It is better to have someone using LLM-based machine translation to say "Hey, this is wrong!" than to have our articles stay wrong.
This should not be a difficult concept: Articles must be accurate. If the only way to make our articles accurate is to have someone use an LLM-based machine translation tool to tell us about errors, then that's better than the alternative of having our articles stay wrong. WhatamIdoing (talk) 19:55, 7 January 2026 (UTC)[reply]
We really don't need English competence. If you don't know English, you can post in your native language, and someone else can translate it. By the way, the CIR discussion seems to be tangential. Nononsense101 (talk) 19:35, 7 January 2026 (UTC)[reply]
WP:Competence is required directly states editors must have the ability to read and write English well enough to avoid introducing incomprehensible text into articles and to communicate effectively. and I have absolutely never heard of it being acceptable to participate in the English Wikipedia by typing in another language and having others translate. Athanelar (talk) 19:42, 7 January 2026 (UTC)[reply]
ENGLISHPLEASE says: This is the English-language Wikipedia; discussions should normally be conducted in English. If using another language is unavoidable, try to provide a translation, or get help at Wikipedia:Embassy. (emphasis mine)
Athanelar, I don't know how else to say this: This is a huge project, and you've only been editing for two years. There's a lot you've never heard of. For example, I'd guess that you've never heard of the old Wikipedia:Local Embassy system, in which the ordinary and normal thing to do was "typing in another language and having others translate". Just because one editor (any editor, including me) hasn't seen it before doesn't mean that it doesn't happen, or even that it isn't officially encouraged in some corner of this vast place. WhatamIdoing (talk) 19:58, 7 January 2026 (UTC)[reply]
Yes, I get what you mean, but I've also seen the contrary plenty of times; people show up to the teahouse or helpdesk and ask questions not in English, and the response is universally "sorry, this is the English Wikipedia"
It just seems needlessly obtuse to say "well, there's technically hypothetically a carveout for occasional non-English participation here, sometimes, maybe" when in practice that really isn't (and shouldn't be) the case. Athanelar (talk) 21:14, 7 January 2026 (UTC)[reply]
Okay, sure, but IAR can never be used as a justification to not prohibit something; by that logic we could never forbid anything, because IAR always provides an exception. Athanelar (talk) 21:21, 7 January 2026 (UTC)[reply]
Yes, editors are sometimes inhospitable and dismissive. Yes, editors sometimes misquote and misunderstand the rules. I could probably fill an entire day just writing messages telling people that they'd fallen into another one of the common WP:UPPERCASE misunderstandings. It is literally not possible for anyone to know and remember all the rules. Even if you tried to read them all, by the time you finished, you'd have to start back at the beginning to figure out what had changed while you were reading. None of this should be surprising to anyone who's spent much time in discussions. But the fact that somebody said something wrong doesn't prove that the rule doesn't exist. It only shows their ignorance.
The ideal in the WP:ENGLISHPLEASE rule (part of Wikipedia:Talk page guidelines) is for non-English speakers to write in their own language, run it through translation, and paste both the non-English original and the machine translation on wiki. A guideline that says not to use machine translation on talk pages would conflict with that. WhatamIdoing (talk) 21:21, 7 January 2026 (UTC)[reply]
I really have an issue with this line of logic, because what does if using another language is unavoidable even mean? It seems to directly conflict with both itself and WP:CIR
Please use English on talk pages, and also you are required to be able to communicate effectively in English, but if you can't then actually you aren't required and you can just machine-translate it.
Nevermind my guideline proposal, it sounds like the existing guidelines and norms are already in a quantum superposition on this issue. Athanelar (talk) 21:24, 7 January 2026 (UTC)[reply]
@WhatamIdoing, spelling out this scenario has helped me think through some of what I'm seeing in this discussion. I think that a weak point in LLMCOMM, CIR, and similar guidelines is that there are really at least three different broad categories of "editors" who have different needs and interests:
People who genuinely want to help build an encyclopaedia and may be in this for the long term ("Wikipedians") – most of our policies and guidelines are written with these editors in mind
People who have identified serious problems in specific articles (regardless of whether they're article subjects or have a COI, or are uninvolved) – if there are serious problems that need to be fixed, we need to fix them, and we should be thanking these helpful non-Wikipedians, not putting up barriers based on CIR or LLMCOMM
People who are here for self-promotion, not to build an encyclopaedia – we have rules and procedures for dealing with these
Amazingly I don't think this is said anywhere in LLM PAGs or essays, but we should say somewhere that "Wikipedia does have a steep learning curve and it is very normal for a new editor to struggle. Some learn quicker than others, and people are obligated to be patient with new editors and help them improve." Basically, don't worry if you find it hard. I'd rather something like that replaced "You don't need it" Kowal2701 (talk) 21:23, 7 January 2026 (UTC)[reply]
Note that I have slightly rewritten the "You don't need it" section to focus a bit more on the encouragement, and also to soften the language around English proficiency. @WhatamIdoing @SuperPianoMan9167 et al, is this something more in line with your ideal spirit? Athanelar (talk) 21:34, 7 January 2026 (UTC)[reply]
Yes! I'm still somewhat opposed to the general premise, banning all use of LLMs in comments, but that section is much better now.
My ideal version of such a guideline would be:
Generating comments with LLMs (outsourcing your thinking to a chatbot) is prohibited. You have to be able to come up with your own ideas.
Modifying comments with LLMs, such as using them for formatting, is strongly discouraged. This is due to the risk of the LLM going beyond changing formatting and fundamentally changing the meaning of the comments.
I do also like this. Many editors say that they used LLMs "only for grammar" while having the kind of issues that only come with LLM generation (for example, the same vague, nonspecific boilerplate reassurances that can be found almost word-for-word in at least half of the unblock requests I've seen), and others might genuinely not realize that the LLM has completely changed the meaning of their comment behind a facade of "grammar fixes". Chaotic Enby (talk · contribs) 23:04, 7 January 2026 (UTC)[reply]
Revision 2
Per the feedback given, I have changed the scope of the proposal. The proposal now:
Forbids the use of LLMs to generate user-to-user communication, including generating a starter or idea that a human then edits (this clause is added to close the inevitable loophole that would otherwise arise)
Strongly discourages the use of LLMs to review or edit human-written user-to-user communication, and explains that if doing so results in text which appears wholly LLM-generated, then it may be subject to the same remedies as for LLM-generated text
So, LLM-written and LLM-written-then-human-reviewed communications: not allowed.
The sentence about people unwilling or unable to communicate/interpret/understand feedback etc. should be reworded to the following: People unable to communicate with other editors, interpret and apply policies and guidelines, understand and act upon feedback given to them etc. should ask for help at the teahouse. If you keep the current wording, the word incompatible should not be linked, as the linked page is about categories and redirects, unrelated to the linking sentence. In any case, I support the proposal. Nononsense101 (talk) 02:39, 8 January 2026 (UTC)[reply]
Nobody is arguing that we should treat text as AI generated just because GPTZero says so; this is a strawman. I even have another proposal specifically to address the identification of AI generated text, but that's for another time. Athanelar (talk) 00:39, 9 January 2026 (UTC)[reply]
Nobody (here) is arguing that we should trust GPTZero, but I suspect that everybody here has seen editors actually do that, believing they are completely justified in doing so. WhatamIdoing (talk) 03:30, 9 January 2026 (UTC)[reply]
Sure, but if someone quoted my hypothetical guideline to justify collapsing an evidently good-faith, human-written edit request just because GPTZero said it's AI generated, I think any sensible editor seeing that would say it's not a reasonable application of the guideline.
You can't argue against a guideline by taking the worst possible way a person could misinterpret it. It constantly happens that editors accuse other editors of personal attacks because they get told their contribution was bad; does that mean WP:NPA isn't fit for purpose? Athanelar (talk) 03:48, 9 January 2026 (UTC)[reply]
For many editors, "GPTZero said it's AI generated" proves that it's not a "human-written edit request". If you don't want that to happen per your proposal, then you need to increase its already bloated (~1800 words) size even more, to tell editors not to believe GPTZero. WP:NPA might be a viable model for this, as it explains both what is and isn't a personal attack, and how to respond to differing scenarios.
I can, and have, since before some of our editors were even born, argued against potentially harmful rules by considering the worst possible ways a person could misinterpret them, and then deciding whether that worst-case wikilawyer is both tolerable and likely. Thinking about how your wording might be misunderstood or twisted out of recognition is how you're supposed to write rules.
This has been known since at least the 18th century, when James Madison wrote in Federalist No. 10 that "It is in vain to say, that enlightened statesmen will be able to adjust these clashing interests, and render them all subservient to the public good. Enlightened statesmen will not always be at the helm: Nor, in many cases, can such an adjustment be made at all, without taking into view indirect and remote considerations, which will rarely prevail over the immediate interest which one party may find in disregarding the rights of another, or the good of the whole", and went on to propose a large federal republic as a way of keeping individual liberty (which is a necessary precondition for factionalism) and national diversity (which leads to factionalism through an us-versus-them mechanism) while reducing the opportunity for any one faction to seize power over the others.
I recommend Madison's work on factionalism to anyone who wants a career in policy writing, but for now, spend a few minutes thinking about how we could adapt Madison's definition of a faction: "a number of Wikipedians...who are united and actuated by some common impulse of passion against AI...adverse to the rights of other Wikipedians (e.g., to have others focus on the content, instead of focusing on the tools used to write it), or to the permanent and aggregate interests of the community (e.g., to not WP:BITE newcomers or have hundreds of good-faith contributors told they're not welcome and not WP:COMPETENT)."
In the present century, we call this phenomenon things like misaligned incentives (e.g., editors would rather reject comments on a technicality than go to the trouble of correcting errors in articles or explaining why it isn't actually an error, but articles need to be corrected, and explanations help real humans), and we address it through processes like designing for evil (e.g., don't write "rules" that can be easily quoted out of context; don't optimize processes for dismissive or insulting responses) and use cases (e.g., How will this rule affect a person who doesn't speak English well? A WP:UPE? A person with dyslexia? An autistic person? A one-off or short-term editor?).
For example:
Protect the English language learner by declaring AI-based machine translation to be acceptable.
Ignore the UPE's AI use as small potatoes and block them for bigger problems.
Educate anti-AI editors that both human- and AI-based detectors make mistakes, and that these mistakes are more likely to result in editors unintentionally discriminating against editors with communication disabilities.
Remind editors to WP:Focus on content, which sometimes means saying "Thanks for reporting the error" instead of collapsing AI-generated comments.
I do understand your point, and am truly appreciative of the time and effort you're taking to make it. I still have two concerns with it;
The first is bloat; as you've indicated, words are precious in any policymaking effort and the longer people have to read to 'get to the point' the less chance they will. I'm concerned at how much weight should be added to cover things like "it's also possible to make mistakes without AI" that in any case should be assumed by any reasonable audience. It also feels redundant, i.e., AGF and BITE still apply even if I don't explicitly restate them. The existence of a guideline prohibiting AI-generated text is by no means a carte blanche to ignore those other, more fundamental principles.
Given that your primary cause for concern seems to be about collapsing AI-generated comments; well, that already exists as WP:HATGPT, all I'm doing is restating it here. However, on rereading that, I suppose I could (and will) add some language specifying that conversations should not be collapsed if their content proves otherwise extraordinarily useful, which should cover the edge cases you're concerned about, with super-useful AI users and overly anal-retentive wikilawyers.
@Athanelar, when I'm working on policy-type pages, the definitions in RFC 2119 are never far from my mind. Here are the most relevant bits:
SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
MAY This word, or the adjective "OPTIONAL", mean that an item is truly optional.
And now let's compare what you wrote vs the text at HATGPT:
Comments, nominations, and opening statements that are obviously generated (not merely refined) by a large language model or similar AI technology may be struck or collapsed...
In short, you've said that this "SHOULD" normally happen unless someone has carefully considered the situation and decided to make a special exception (e.g., for "extraordinarily useful" comments), and the existing guideline says that this "MAY" happen, but it's strictly optional and not ever required. Can you see the gap between what you're proposing and the existing guideline? If you genuinely believe that well, that already exists as WP:HATGPT, all I'm doing is restating it here, then I don't think we're speaking the same language. WhatamIdoing (talk) 01:31, 12 January 2026 (UTC)[reply]
Sure; nevertheless, the kind of person you're describing would do what you're saying regardless of whether it's 'should' or 'may', and my entire contention is whether this is a realistic enough concern to affect anything, which I simply doubt. People who fail to AGF won't get a free pass just because "Athanelar's guideline says 'should', so that means I can collapse whatever I want".
I do think that if someone reads AI-generated comment, and collapses it per your proposal, that they should "get a free pass", because they were actually following the guideline to the best of their good-faith ability.
As a minimum, I suggest that when you change "may" optionally to "should" normally, you don't present that as a non-change that is already enshrined in guidelines. This is a significant change; either own it as being a change, or don't propose it. WhatamIdoing (talk) 23:40, 12 January 2026 (UTC)[reply]
Who are these editors who are relying on GPTZero and nothing else? That doesn't describe anyone I'm aware of working on AI cleanup, and it doesn't describe most of what goes on at ANI (the people who bring in GPTZero or whatever tend to be uninvolved participants). Gnomingstuff (talk) 14:27, 9 January 2026 (UTC)[reply]
There was a discussion not long ago about LLMs creating spam; see here. As I said there, I think this is one way to look at it -- we will not be able to detect all uses of LLMs, but if our rules force LLMs to become hard to detect (because they have improved the usefulness of their posts) maybe that's the best outcome we can hope for. I can see why we want to ban LLMs for user communication, and for things like FAC and GAN reviews, but there is no guaranteed way to detect LLM-generated text. Plus I'd argue that in the right hands they are useful. I have used them myself to find problems in articles I have worked on, for example. TL;DR: I am not strongly opposed to a rule like the one suggested here, but I doubt it will be very useful. I don't have a better suggestion, though. Mike Christie (talk - contribs - library) 03:56, 8 January 2026 (UTC)[reply]
I wonder how long it will be before attempting a ban is just pointless, either because we can't detect it at all, or because the amount of time spent arguing over whether a comment is a prohibited type of AI use overtakes the cost of permitting it. WhatamIdoing (talk) 00:19, 9 January 2026 (UTC)[reply]
The amount of effort spent arguing whether something is AI-generated is already at times greater than the amount of effort spent determining whether the content of that something is actually problematic. Thryduulf (talk) 03:23, 9 January 2026 (UTC)[reply]
In my anti-goal scenario above, what's motivating B to ignore the reported error and focus on the communication style? Why is B failing at Postel's law of practical communication? Assume that B would respond in a practical manner if he was realistically able to. Is the problem within B himself (e.g., B is fixated on rule-following, to the point of not being able to recognize that the error report is more important than the method by which the error report was communicated)? Is the problem accumulated pain (this is the 47th one just today, and B's patience expired several comments ago)? Is the problem in our systems (e.g., if B can quickly dismiss a request as AI-generated, then harder work can be avoided)? WhatamIdoing (talk) 05:28, 9 January 2026 (UTC)[reply]
I have to admit I've never really thought about that before. My gut reaction is to say it's mostly "accumulated pain" with a bit of over-focus in there too: Someone finds and fixes a problem, then they find another similar one and fix that. At some point they realise they've seen quite a few of these and start looking for others to fix. This becomes an issue if they get overwhelmed by the scale of the issue and/or stop looking at the wider context to see whether it is actually a problem that needs to be fixed. Thryduulf (talk) 12:27, 9 January 2026 (UTC)[reply]
It's kind of a mix of:
This is the 47th one today, and the majority of the last 46 people were either hostile or uncooperative.
AI copyedits go much farther than "normal" copyedits do in terms of rewriting meaning -- they're more akin to line edits -- but AI companies do not always make it clear how much they're changing. So when someone hears "I just used it for copyediting," they're inclined to distrust that.
In reality, this kind of conversation does not usually begin with "Hi, I found a serious factual error in the article. Here's a source to show I'm not making this up," it begins with a wordy behemoth full of AI platitudes. But even those -- at least on article talk pages -- often don't result in that because many editors watching individual articles aren't aware that AI is even a thing (still). Where this conversation usually happens is someone asking a question, and receiving an AI reply.
Plus there's another 103 to go, and it always feels like "I" am the only person doing this (because there's no way to know how many people have already checked the thing that I'm checking now).
How much an AI copyedit changes shouldn't be visible to the other talk page participants.
I love that link, and I hope someone gets her book and improves Copyediting with it. I wonder if British editors are more irritated by AI 'copyediting' for tone/voice reasons.
Kristi Noem seems like a standard "why is Wikipedia mentioning this public scandal, it must be politically biased" type comment, so the good outcome was already off the table
Primerica seems like a standard "why is Wikipedia mentioning this negative coverage of a company instead of promoting it?" type comment, which, same
Scott Wiener comment hatted four months after it was posted; there was a substantive discussion before then
Pythagorean triple thread began with a wordy platitude behemoth yet still was not hatted until several comments deep (after the LLM had already provided low-quality sources when asked about them)
Hard to tell what's going on in the 2026 Tamil Nadu Legislative Assembly election thread but it looks like there was some backstory and at least one block prior to the comment
I'm not saying that none of these deserved to be hatted. I'm saying that they're not evidence supporting the claim that what "usually happens is someone asking a question, and receiving an AI reply".
You can repeat the search yourself if you'd like, and pick a different sample set. It's sorted to have the most recently edited Talk: pages containing that template at the top (e.g., /Archive pages if an archive bot just did its daily run – it's most recent edit of any kind, not specifically the most recent addition of the template). WhatamIdoing (talk) 05:53, 10 January 2026 (UTC)[reply]
As a Polish native speaker, my English is strong, but it is not at a native level. I can easily understand written and spoken English, but expressing myself in English - especially writing comments - is much harder for me than it is for native speakers. Banning the use of machine translation tools (which increasingly rely on LLMs) to edit messages would be exclusionary and would push people like me out of discussions about building the encyclopedia, even when we can judge whether a translation faithfully conveys what we meant.
This is even more pronounced for non-native speakers with dyslexia. Without tools that help with grammar and punctuation, even strong substantive arguments can come across as weaker or less persuasive - not because the reasoning is bad, but because the English reads poorly. Grudzio240 (talk) 10:49, 12 January 2026 (UTC)[reply]
I and many others have expressed, and will continue to express, that a poorly-worded non-native comment will always read better, and be stronger and more persuasive, than a comment which reads like LLM generation. In the former we can at least be confident that the ideas and convictions presented are your own, whereas in the latter we have no means to differentiate you from any of the other people that generate some boilerplate slop and call it a day. Athanelar (talk) 10:55, 12 January 2026 (UTC)[reply]
I understand the concern: when a comment reads like it was generated, it’s harder to trust that the wording reflects the editor’s own thinking. That said, as machine translation and AI-assisted editing become more common - and harder to detect - “imperfect English” will increasingly become a marker that singles people out. In practice, that can discourage non-native speakers (and people with dyslexia) from participating, even when their underlying points are solid. I think the better approach is to focus on the substance and evidence, and allow limited language assistance (especially translation), while still discouraging using LLMs to generate arguments or positions.
Also, reactions to AI-assisted text vary a lot. Not everyone reacts negatively to AI-assisted wording, and I don’t think policy should be optimized for the most suspicious readers. If the content is clear and sourced, that should matter more than whether the phrasing “sounds too polished”. Grudzio240 (talk) 11:08, 12 January 2026 (UTC)[reply]
Regarding that should matter more than whether the phrasing “sounds too polished”, the fact is that this isn't the most glaring sign of AI writing. We've seen many editors write in a quite refined way without being suspected of using LLM assistance, as LLMs will overuse specific kinds of sentence structures (e.g. WP:AISIGNS). This is very different from the pop-culture idea of "anyone who uses refined or precise language will sound like an LLM". As these signs get associated with people using these tools to generate arguments completely divorced from policy, they get picked up as cues that make readers tune out from the substance of arguments, and end up hurting the non-native speakers they hope to help. As a non-native speaker myself, I might worry about flawed grammatical structures here and there, but I would much prefer that to other editors reading my answers with immediate suspicion due to obvious AI signs. Chaotic Enby (talk · contribs) 11:17, 12 January 2026 (UTC)[reply]
Defaults treating comments that have “AI SIGNS” as suspicious may undermine Wikipedia:Assume good faith. "Unless there is clear evidence to the contrary, assume that fellow editors are trying to improve the project, not harm it." We should start by evaluating the content; proceed to distrust only when the content or behavior indicates a real problem. Grudzio240 (talk) 11:38, 12 January 2026 (UTC)[reply]
The entire conceit of this guideline is that AI-generated comments are problematic in and of themselves. If the guideline said "ignore whether the comment is AI generated and just assess whether it violated any other policy or guideline" then it would be a pointless guideline. Obviously AI-generated comments which violate other PAGs are already forbidden -- because they violate other PAGs. The point of this is to forbid AI generating comments regardless of whether their content breaks any other PAG (obviously subject to the usual exception) Athanelar (talk) 11:42, 12 January 2026 (UTC)[reply]
Well, the close of the VPP discussion emphasized that “The word ‘generative’ is very, very important” and that “This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning… Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome … [and] this consensus should not be taken to deny them the option of using assistive technologies to improve their comments.” WP:AITALK was written on the basis of that discussion, and it seems to rest on exactly this distinction: LLMs should not be used to generate the substance of user-to-user communication (i.e., the arguments/positions themselves), but meaning-preserving assistance (e.g., translation or limited copyediting where the editor's reasoning remains their own) was explicitly not the target of that consensus. Grudzio240 (talk) 11:58, 13 January 2026 (UTC)[reply]
The core issue is that AI signs are heavily correlated with fully AI-generated arguments, themselves usually detached from policy. AGF is not a suicide pact, and editors used to the preponderance of flawed AI-generated arguments (compared to the few meaningful arguments where AI has only played a role in translation/refinement) might discount all of them as falling into the former category. This is magnified by many editors choosing to defend clearly abusive uses of AI (for example, adding hallucinated citations) as only using it to refine grammar or correct typos, even when that manifestly wasn't the case. Chaotic Enby (talk · contribs) 12:10, 12 January 2026 (UTC)[reply]
For approximately the gazillionth time, saying that text is likely to have been generated by AI says nothing about good faith or bad faith. It is pointing out characteristics of text. By this logic, adding a "copyedit" or "unreferenced" tag is assuming bad faith. Gnomingstuff (talk) 17:20, 12 January 2026 (UTC)[reply]
Yes, but it's just a short step from "I think he used AI" to Chaotic Enby's "heavily correlated with fully AI-generated arguments" to "This person is just wasting my time with fake arguments and has no interest in helping improve Wikipedia". WhatamIdoing (talk) 23:45, 12 January 2026 (UTC)[reply]
That is very much assuming bad faith in what I said – I'm not saying that we should discount comments on that basis, only that some editors will do it, and I was explaining the source of that distrust rather than defending it. Chaotic Enby (talk · contribs) 00:07, 13 January 2026 (UTC)[reply]
I know this is not entirely what you are arguing, but this kind of LLM-nihilist stance I keep hearing like "why do you care how it was made if the content is policy compliant?" seems patently absurd to me. If the only thing we care about is the substance and not who made it or how we might as well use Grokipedia. It's rather like presenting someone with a dish of Ortolan and saying "if it tastes good, why concern yourself with the ethical implications of its production?" The ends simply do not justify the means, in my mind. Athanelar (talk) 11:33, 12 January 2026 (UTC)[reply]
I think there are huge differences in values that people have, from a cultural or philosophical perspective. Some people see AI as inherently evil and some just see it as a tool. If there was an end product of English Wikipedia, something we were actually trying to finalise one day, it seems silly to the faction that sees AI as a tool to make humans do the job of machines. You get the same thing either way, we're just making it harder on ourselves. They see no value in human labour over machine labour. The means don't need to be justified because they don't see anything wrong with them. People who prefer human output do. This is especially true in the larger context of modern society, where the system requires people to work even when there's no work to be done. If a machine is doing your job for you, that doesn't mean you don't have to work, it means you have to create a need for yourself. If robots are writing encyclopaedias, that's just another existing need filled. ~2026-24291-5 (talk) 12:13, 12 January 2026 (UTC)[reply]
You're missing the third camp of "AI is a tool, but a flawed one". Using AI as a tool to write an encyclopedia would work in theory, and might be a very real possibility in the future, but has shown its current limits, and regulating it is necessary to address those immediate concerns, rather than for more abstract philosophical reasons. Chaotic Enby (talk · contribs) 12:18, 12 January 2026 (UTC)[reply]
I would vote no on any guideline that purports to tell editors what technology they can and can't use to draft communications. It's none of our business. You don't like somebody's writing style? Too bad; don't read it. It doesn't matter if it was generated by ChatGPT or polished by Grammarly or if it's just bad writing: we can judge people for the content of their posts (e.g., WP:NPA, WP:NOTFORUM, WP:BLUDGEON, etc.), but not for the tools they use to draft that content. Also, if you've been active on Wikipedia for 3 months, maybe you don't try to write a new guideline that purports to tell everybody else what tools they can and can't use to communicate on Wikipedia. If most of your edits are about trying to fight against LLM use, you might be WP:RGW instead of WP:HERE. Levivich (talk) 17:59, 12 January 2026 (UTC)[reply]
We can and already do judge people for the tools they use to edit: that is why we have a bot policy, for example, or limitations on fast tools such as AWB. In these cases, the reason is the same as the proposed reasons to limit AI-generated writing. Namely, the potential for fast disruption at scale: someone can generate 50 proposals in a few minutes, leaving other editors in need of a disproportionate effort to address them all – or leave the unread ones to be accepted as silent consensus, as no one will take the time to analyze 50 different proposals in detail. Additionally, it isn't necessarily helpful to say that if you've been active on Wikipedia for 3 months, maybe you don't try to write a new guideline, as newcomers can absolutely learn fast and have worthy insights – especially as you wish to judge others for the content of their posts. Chaotic Enby (talk · contribs) 18:44, 12 January 2026 (UTC)[reply]
This isn't a new guideline. It's a refinement of WP:AITALK, which has existed for an entire year now. The RfC that produced the guideline was closed stating in part (bolding mine): There is a strong consensus that comments that do not represent an actual person's thoughts are not useful in discussions. Thus, if a comment is written entirely by an LLM, it is (in principle) not appropriate. The main topic of debate was the enforceability of this principle.
Sorry, but both of you missed what I was saying. Re CE: I didn't say to edit, I said to draft communications, and our existing bot policy already prohibits spam (as you point out). Re GS: WP:AITALK is about the content--the output, what gets published on this website--not about the method. AITALK doesn't say editors can't use AI to start or refine their posts, or to copyedit or fix grammar. Any proposed guideline that says anything like This prohibition includes the use of large language models to generate a 'starter' or 'idea' which is then reviewed or substantially modified by a human editor. or Editors are strongly discouraged from using large language models to copyedit, fix tone, correct punctuation, create markup, or in any way cosmetically adjust or refactor human-written text for user-to-user communication. would draw an oppose vote from me. Re both: how long before the community thinks repeated anti-LLM RFCs are a bigger problem than the use of LLMs on Wikipedia? Be judicious, mind the backlash, note the difference between the LLM proposals that have passed, and the ones that have failed. (Hint: the super-anti-AI proposals are the ones that have failed. The ones that allow use within reasonable boundaries have passed.) Levivich (talk) 18:52, 12 January 2026 (UTC)[reply]
The main problem with those RfCs is that the stricter proposals get shot down by editors wanting reasonable boundaries, and the more lenient proposals get shot by "all-or-nothing" editors. Given that, and the speed at which the technology advances, it isn't surprising that we are often discussing these issues – especially since recent proposals have been closed with consensus that the community wants some regulation but disagreed on the exact wording proposed. In that regard, the disruption doesn't come from the RfCs themselves, but from the inability of editors on both sides to compromise. Additionally, we also regulate what someone may do to draft communications, with proxy editing being the best example – if we can disallow proposals coming from a banned user, we can disallow proposals coming from a tool that has repeatedly proven disruptive. Chaotic Enby (talk · contribs) 19:09, 12 January 2026 (UTC)[reply]
Re proxy edits, that is not regulating the technology used to draft communications. We don't tell people what word processor to use, or whether they can use a typewriter, or which spellchecker to use, etc. etc. This proposed guideline would be a first in that sense, and I believe is doomed for that reason.
As to the main problem with the RFCs, yes, I agree with you, but does this proposed guideline look like any kind of compromise? It's proposing rules that are stricter than the rules we have for mainspace (for Pete's sake!), and it's still trying to do the thing that the community has repeatedly said no to, which is to stop or "strongly discourage" all or almost all use of LLMs (as opposed to just "bad" use of LLMs). The drafter, in comments above, below, and elsewhere, is very transparent that the goal of the proposed guideline is to get people to stop using LLMs (as opposed to getting them to use LLMs correctly rather than incorrectly).
I'll say again the same the thing I said about the last doomed RFC: hey, go ahead and run it, maybe I'm wrong and it'll get consensus, or maybe the next one will :-P
But really, CE, you've been around long enough to know what's up, I think you know I'm right... the reason your proposal at WT:TRANSLATE is on its way to passing is because that was a good proposal that compromised and is obviously responsive to community concerns from other RFCs (btw, great job there!). This proposal is not like that, it's almost the opposite in its stubbornness.
And I know you've personally put in a lot of time and effort into trying to get a handle on increased LLM usage on Wikipedia, what with the WikiProject and all, and I hate to see those productive efforts get sunk because we (collectively) aren't being clear enough to the hard liners in saying: "No. Stop trying to stop everybody from using LLMs, it's counterproductive." Because right now, NEWLLM is still laughably short, and it's not getting any better, because we're wasting time on uncompromising proposals like this one, instead of on compromise proposals like the translation one. And, frankly, it's because people who have no experience building consensus are being allowed to drive the bus, and are driving it off the road, rather than deferring to people who do know how to build consensus (like you). Levivich (talk) 21:06, 12 January 2026 (UTC)[reply]
Yep, I think we agree on the broad strokes here. I still respectfully disagree that proxy edits are that far away from using ChatGPT to generate an argument from scratch (as in both cases, you're delegating the thoughts to someone/something else), but the crux of the issue isn't a specific policy detail, but the fact that compromises end up being overshadowed by more hardline proposals on which a consensus can't realistically be reached. Chaotic Enby (talk · contribs) 21:42, 12 January 2026 (UTC)[reply]
I am explicitly open to compromise here. I want people to propose compromises that they find acceptable. I have already changed my initial proposal in response to one such compromise. I know you know that, I just want to put it out there. Athanelar (talk) 21:52, 12 January 2026 (UTC)[reply]
The problem with allowing starters or ideas generated by AI is that, first, it permits an unfalsifiable loophole ("My comment isn't subject to this guideline because it's not AI generated, I just used AI to tell me what to say and then reworded it") and second, while the style of AI-generated posts is certainly problematic, another problem (as addressed in my guideline) is the content: generating a starter with AI means the idea is still not yours but is rather the AI's, which is the whole thing this guideline aims to address.
If the AI tells someone to wikilawyer by citing a nonexistent policy or misapplying one that does exist, it doesn't matter if they do it in their own words or not.
So the point is to say that the ideas need to be your own, not just the presentation thereof.
As for how long before the community thinks repeated anti-LLM RFCs are a bigger problem than the use of LLMs on Wikipedia?, to take a page from your own book in dismissing one's interlocutor: perhaps a person who is not active in the constant organised AI cleanup efforts doesn't have the best perspective on how much of a problem LLMs are.
I really encourage you to take some time and tackle one of the tracking subpages at WP:AINB some time. Take a look at this one of a user who generated 200+ articles on mainspace wholesale using AI with no review or verification and tell us again how the people trying to fight the fire are the real problem because they're getting everything wet in the process. Athanelar (talk) 19:10, 12 January 2026 (UTC)[reply]
Yeah, ironically, If most of your edits are about trying to fight against LLM use, you might be WP:RGW instead of WP:HERE is closer to actually assuming bad faith than anything people doing AI cleanup have been accused of. Gnomingstuff (talk) 21:04, 12 January 2026 (UTC)[reply]
...which is the whole thing this guideline aims to address. Yes, that's the problem, in my view: you are trying to address something Wikipedia has absolutely no business to address, which is what technology people use to communicate. As has been pointed out by others above, there are, first and foremost, the accessibility issues and the issues for non-native English speakers (like me btw). But beyond that, how a human being gets from a thought in their head, to a policy-compliant non-disruptive comment posted on Wikipedia, is none of our (the community's) business. It doesn't matter if they use a typewriter or what spellcheck or Grammarly or an LLM. If the output is not disruptive--if it's not bludgeoning or uncivil, etc.--we have no business telling an editor what technology they can and can't use to generate that output. (And btw if you think 200+ bad articles is a lot, lol, we've had people generate tens of thousands of bad articles, redirects, etc., without using LLM, and that's happened for the entire history of Wikipedia--we still never banned people from using scripts or bots, despite the fact that they've been abused by some, and with much worse consequences than what's being reported at AINB). Levivich (talk) 21:15, 12 January 2026 (UTC)[reply]
we still never banned people from using scripts or bots, But... we do? As pointed out before, we absolutely do restrict what technology people use to edit. You need express permission to operate a bot because of the potential for rapid, large-scale disruption.
You cannot seriously compare an LLM to a word processor or typewriter. Neither of those things is capable of wholesale generating a reply without any human thought behind it. Athanelar (talk) 21:21, 12 January 2026 (UTC)[reply]
We don't require permission to use a script. You don't need permission to use the WP:API. What is regulated is the output--specifically, BOTPOL and MEATBOT prevent unauthorized bot-like editing regardless of whether a script is actually used or not. It's the effect, not the method, that's regulated (in fact, the effect is regulated the same way -- bot or meat -- regardless of the method!). And yes, I am absolutely comparing LLMs to the pen, the typewriter, the word processor, the spellchecker, the grammar checker, autocorrect, predictive text, etc. It's just the latest technological advance in writing tools. And LLMs are not capable of generating anything "without any human thought behind it"; they require prompts, which require human thought, and their training data is a bunch of human thought. Levivich (talk) 22:49, 12 January 2026 (UTC)[reply]
Sure, but that's like arguing that paying someone to do your homework is materially the same as if you did it yourself, because you still had to describe the task to somebody else and then they still came up with an answer. You must know you're splitting hairs by now. Athanelar (talk) 22:53, 12 January 2026 (UTC)[reply]
Maybe hiring a secretary to write a letter on your behalf would be a more relevant analogy: Bob Business tells his secretary to send a letter saying he accepts their offer to buy 1,000 widgets but wants to change the delivery date slightly. He glances over the letter, decides that it makes the points that he wanted to communicate, and signs it before mailing it.
Do you think the typical recipient of that letter would be offended to discover that Bob didn't choose every single word himself? Is the recipient likely to believe that the facts communicated did not represent Bob's own thoughts? WhatamIdoing (talk) 23:33, 12 January 2026 (UTC)[reply]
That analogy only makes sense if you assume AI never makes up new arguments, and that it is only ever used to clarify existing thoughts that have been communicated in the prompt, rather than something like "please write me an unblock request". In the latter case, the fact that the substance of the unblock request isn't an original thought (but only the request to write one) is problematic, as we can't evaluate whether or not the blocked user properly understands the issues. That specific case is very much not theoretical, as around half of unblock requests have strong signs of LLM writing. Chaotic Enby (talk · contribs) 23:42, 12 January 2026 (UTC)[reply]
That analogy makes lots of sense, if you've ever worked with (or been) a human secretary.
The problem is that this analogy is very far removed from the actual situations we're facing, and makes it harder to talk about them in precise terms. In one case, you're having a secretary playing a purely functional role of transmitting a message and helping convey thoughts to an interlocutor, possibly adding some context of their own. The key task is to transmit the information, and using a secretary (or AI) to do it makes sense. On the other hand, an unblock request aims to show that the blocked user has some level of understanding of the situation. If a secretary (or AI) writes the unblock request, with the blocked user having only told them "write me an unblock request", then the unblock request fails at its purpose. Chaotic Enby (talk · contribs) 00:12, 13 January 2026 (UTC)[reply]
But how do we know what the prompt was? If the prompt was "write me an unblock request" and that's it, then your point holds true. But what if the prompt was "write an unblock request that says [user's own understanding]"? Like, for example, "write an unblock request that says I lost my cool and said something I shouldn't have and in the future I'll be sure to walk away from the keyboard when things get too heated and also I'm going to avoid this topic area for a while"? Could you tell what the prompt was based on the output? I don't think so... Levivich (talk) 00:25, 13 January 2026 (UTC)[reply]
We don't know what the prompt was exactly, but we can get some strong indications when the user leaves unfilled phrasal templates, or apologizes for nonexistent issues completely unrelated to their behavior, or only writes generic, nonspecific commitments that could apply to literally any unblock request. In many of these cases (and, again, these are a large proportion of the unblock requests I'm seeing), I'd probably be even more worried if the prompt came from the user's own "understanding". Chaotic Enby (talk · contribs) 00:56, 13 January 2026 (UTC)[reply]
The unblock process might fail at its intended purpose, but that's entirely within the realm of normal secretary behavior. Have you never read tales like the https://www.snopes.com/fact-check/the-bedbug-letter/? Or heard stories about secretaries who make sure that the boss always remembers to buy a present for his wife's birthday, send flowers on their wedding anniversary, and so forth?
In the end, I think that it might make more sense for us to re-design the unblock process (to make it more AI-resistant) than to tell people they shouldn't use AI. Maybe a series of tickboxes, setting up a sort of semi-customizable contract? "▢ I agree that I won't put the word poop in any more articles" or "▢ I agree that I won't write long comments on talk pages" or whatever. WhatamIdoing (talk) 00:37, 13 January 2026 (UTC)[reply]
To note: last time someone generated tens of thousands of redirects, we had to create a whole new speedy deletion criterion for it. More generally, there have been many discussions on article creation at scale (the other WP:ACAS) and attempts at building a framework to regulate it. So, while I don't disagree that we can't control everything, the issue of disruption at scale isn't new to Wikipedia, and efforts to address it aren't new either. Chaotic Enby (talk · contribs) 21:45, 12 January 2026 (UTC)[reply]
Yeah, we did that this time, too, and kudos to the community, it got to WP:G15 much faster than it took to get to WP:X1. But you know what we didn't do about the redirects or sports articles? Prohibit, or try to prohibit, people from using scripts or templates or bots, etc. We never went after the technology that made that spam possible, we went after the editors who did the spamming, and made new tools to efficiently deal with the spam (csd's). And those were 100,000-page problems; whereas this is like thousands of articles? (How many G15s have there been so far? I see 46 in the logs in the last two days.) So like an order or two orders of magnitude less? And our response, or some folks' response, has been an order or two orders of magnitude stronger. Levivich (talk) 22:41, 12 January 2026 (UTC)[reply]
More to the point: We didn't try to "Prohibit, or try to prohibit" everyone else "from using scripts or templates or bots, etc." just because a few people abused those tools. WhatamIdoing (talk) 23:20, 12 January 2026 (UTC)[reply]
But I never said we should prohibit anything entirely, just have a framework to regulate it. Which is exactly what we've done with bots (through WP:BRFA), with mass creation of articles and redirects (through draftification and new page patrolling), etc. Chaotic Enby (talk · contribs) 23:22, 12 January 2026 (UTC)[reply]
You: "I never said we should prohibit anything entirely".
Proposal: "Editors are not permitted to use large language models to generate user-to-user communications" (emphasis in the original)
The main worry I have with AI is that it is much more widely distributed. We don't have a few editors who can be blocked to get rid of the spamming, but tools that have been causing issues in the hands of a much broader range of editors, mostly because, sadly, many of them don't know how to use it responsibly. Banning the tool entirely is too harsh, blocking individual editors doesn't solve the underlying problem, meaning we're in this problem zone where it's hard to craft good policy. G15 is for the most extreme, blatant cases, but Category:Articles containing suspected AI-generated texts contains nearly 5000 pages, while Category:AfC submissions declined as a large language model output adds another 4000, just from the last 6 months. With all the smaller tracking categories, plus the expired drafts, we're easily above 10,000 pages. Chaotic Enby (talk · contribs) 23:20, 12 January 2026 (UTC)[reply]
I agree that we're in a difficult place. I don't like the idea of Wikipedia appearing to be AI-generated (even if it's not). I don't like the idea of Wikipedia having the problems associated with AI-generated content (including, but not limited to, factual errors).
But if:
We can't accurately detect/reject AI-generated content before it's posted
Many people believe that it's normal, usual, and reasonable to use AI tools to create the content they need for Wikipedia
The individual incentives to use AI (e.g., being able to post in a language you can barely read; being able to post an article quickly) exceed the expected costs (e.g., the UPE's throwaway account may get blocked)
then I think that having a rule, or even having an ✨Official™ Policy🌟, will not change anything (except maybe making our more rule-focused editors even angrier, which is not actually helpful). WhatamIdoing (talk) 00:27, 13 January 2026 (UTC)[reply]
@Gnomingstuff I think it would help readers if the summary also reflected an important nuance from the earlier RfC: it explicitly carved out cases where the reasoning is the editor’s own and an LLM is used only to refine meaning (e.g. for non-fluent speakers or users with disabilities). This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning... Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome ...
The current proposal seems materially more restrictive than that consensus, because it prohibits even “starter/idea” use and goes further by strongly discouraging copyediting/tone/formatting with LLMs:
Editors are strongly discouraged from using large language models to copyedit...
If the intent is to align with the earlier consensus, it may be worth explicitly stating that assistive uses that don’t outsource the editor’s reasoning (especially accessibility/translation-adjacent cases) are not what the guideline is trying to discourage. Grudzio240 (talk) 09:25, 13 January 2026 (UTC)[reply]
And I have one more concern about the “copyedit/tone/formatting” section: it reads as shifting the downside risk onto the editor in a way that can chill legitimate assistive use. The proposal first strongly discourages even cosmetic LLM assistance, and then says that editors who do so “should be understanding” if their LLM-reviewed comment “appears to be LLM-generated” and is therefore subject to collapsing/discounting/other remedies.
Editors who choose to do so despite this caution … should be understanding if their LLM-reviewed comment/complaint/nomination etc. appears to be LLM-generated and is subject to the remedies listed above.
That framing seems to pre-emptively validate adverse outcomes based on appearance (“looks LLM”) rather than on whether the editor’s reasoning is their own. If the intent is for accessibility/meaning-preserving assistance to remain acceptable, it may be worth rewording this to avoid implying that a “looks LLM” judgment is presumptively correct, and explicitly protecting meaning-preserving copyedits/formatting from being treated as fully LLM-generated. Grudzio240 (talk) 09:31, 13 January 2026 (UTC)[reply]
Like the idea of the different prompt examples. That said, if someone is writing I understand that edit warring and insulting other editors was disruptive, and that in the future I plan to avoid editing disputes which frustrate me in that way to prevent a repeat of my conduct, and that I am willing to accept a voluntary 1RR restriction if it will help with my unblock., it seems like they could just... say that, instead, without AI, and that doing so would be more likely to produce a positive outcome. Gnomingstuff (talk) 15:52, 13 January 2026 (UTC)[reply]
I have added the explanatory paragraph "These are examples of a prompt that would result in an obviously unacceptable output and a prompt that would result in a likely acceptable one, to act as guidance for editors who might use LLMs. They should not be taken as a standard to measure against, nor is the prompt given necessarily always going to correlate with the acceptability of the output. Whether or not the output falls afoul of this guideline depends entirely on whether it demonstrates that it reflects actual thought and effort on the part of the editor and is not simply boilerplate." Athanelar (talk) 16:28, 13 January 2026 (UTC)[reply]
I appreciate the effort but I'm probably not the best person to give feedback given I think (1) there shouldn't be a new guideline at all (Wikipedia needs fewer WP:PAGs, not more); (2) there shouldn't be a new guideline about "LLM communication" (as opposed to about LLM use in mainspace or LLM translation); (3) "Large language models are unsuited for and ineffective at accomplishing this, and as such using them to generate user-to-user communication is forbidden." is a deal breaker for me, in principle (I don't agree it's ineffective or unsuited or that it should be forbidden); (4) I do not support "a prohibition against outsourcing one's thought process to a large language model"; (5) I do not support "Editors are not permitted to use large language models to generate user-to-user communications"; (6) I do not agree with "It is always preferable to entirely avoid the use of LLMs and instead make the best effort you can on your own"; (7) the entire section "Large language models are not suitable for this task" is basically wrong, including "Large language models cannot perform logical reasoning" (false/misleading statement, they do perform some logical reasoning); and (8) I disagree with the entire section "Anything an LLM can do, you can do better". This is a guideline that says, in a nutshell, LLMs are bad and you shouldn't use them, and since I think LLMs are good, and people should use them, I don't think we're going to find a compromise text here. For me. But I'm just one person. Levivich (talk) 18:34, 13 January 2026 (UTC)[reply]
Understandable. So long as your disagreements are ideological and not "there's a fundamental contradiction" or the like, that's still a good indication for me that I'm in the right direction. Much appreciated. Athanelar (talk) 18:44, 13 January 2026 (UTC)[reply]
I don't think this is an ideological disagreement. (Some proponents of a ban on LLMs may be operating from an ideological position; consider what Eric Hoffer said about movements rising and spreading without a God "but never without belief in a devil". AI is the devil that they blame for many problems.) I do think that as someone hoping to have a successful WP:PROPOSAL, it's your job to seek information about what the sources of disagreement are, and to take those into account as much as possible, so that you can increase your proposal's chance of success (which I currently put at rather less than 50%, BTW). Feedback is a gift, as they say in software development.
For example:
Levivich expresses concerns about the proliferation of new guidelines. There have been several editors saying things like that recently. Do you really, really, really need a {{guideline}} tag on this? Maybe you should consider alternatives, like putting it in the project namespace and waiting a bit to see if/how editors use it.
He wonders whether a new guideline against "LLM communication" should be prioritized over AI problems in the mainspace. What are you going to say to editors who look at your proposal and say that it's weird to advocate for a total ban on the Talk: pages, when it's still 'legal' to use AI in the mainspace? You don't have to agree with him, but you should consider what he's telling you and think about whether you can re-write (or re-schedule) to defend against this potential complaint.
Your statement that "Large language models are unsuited for and ineffective at accomplishing this" is a claim of fact (getting us back to that ideology power word: opponents of LLMs are entitled to their own opinions, but not to their own facts). Are LLMs really unsuited and ineffective? Can you back that up with sources? Does it logically follow from "success depends on the ability of its participants to communicate" that a tool helping people communicate is always going to be ineffective at accomplishing our goals?
What if the use of AI in a particular instance is "Dear chatbot, please re-write the following profanity-laced tirade so that it is brief and polite, because I am way too angry to do this myself"? Does that interfere with the goal of "civil communication"? Or would that use of a chatbot actually improve compliance with our Wikipedia:Civility policy? Is it really true that "Anything an LLM can do, you can do better" – right now?
What if the use is a newbie who is pointing out a problem and who used an LLM to try to present their information as "professionally" as possible? What I'm seeing in my news feed is that Kids These Days™ aren't doing so well with reading and writing in school. Does trying to communicate clearly interfere with our goals of reaching consensus, resolving disagreements, and finding solutions?
What if the realistically available alternatives are also less than ideally effective? You've added a paragraph about dyslexia and English language learners (thank you), but how is the average editor supposed to know whether the person has a relevant limitation? For comparison, many years ago, we briefly had an editor whotypedallthewordstogetherlikethis and said that pressing the space bar was painful due to Repetitive strain injury, which he thought we should accept on talk pages as a reasonable accommodation for his disability. I never have been able to decide whether he had a surprisingly inflated sense of entitlement or if it was a piece of performance art, but we sent him on his way with a recommendation to look into speech-to-text software. Thinking back, I'd have preferred that he used an LLM to what he was doing. It would have been more effective at supporting communication than what he was doing. But: If he was here today, and used an LLM today, how would the other editors know that he had (in his opinion) a true medical reason for using an LLM? More importantly, if LLMs are effective for those groups of people, does that invalidate the factual claim that LLMs are "unsuited for and ineffective at" discussions?
You should go through the rest of Levivich's feedback and see whether there is any adjustment you can make that might reduce the likelihood that anyone else would vote against your proposal on the same grounds. Can you re-write it to be less strident? Less absolute?
Or take the opposite approach: Write an essay, and tell us how you really feel. Don't say "The substance of this guideline is a prohibition against outsourcing one's thought process to a large language model"; instead say something like "Whenever I see LLM-style comments on talk pages, I feel like I'm talking to a machine instead of a human. I worry that if you aren't writing in your own words, you won't read or understand my reply. I worry that if you're misunderstanding something, you won't care – you'll just tell the LLM 'she said I'm wrong; write a reply that explains why I'm right anyway'. That's not what I'm WP:HERE for." WhatamIdoing (talk) 20:51, 13 January 2026 (UTC)[reply]
This is all very good, and gives me something to work with for another round of improvements on this thing, so I appreciate it greatly. One thing I want to address specifically in this reply is the question of "why a guideline rather than an essay in WPspace?" and the answer is that while I absolutely do have a lot to say about LLMs on Wikipedia, I want to materially improve the situation by doing something about it, not just vent. The community norm is already against the use of LLMs in talk pages. People who use LLMs for that pretty much universally get told "hey, quit it" so I thought it would be sensible to make the unwritten rule written rather than having it exist in this nebulously-enforceable grey area. Athanelar (talk) 21:16, 13 January 2026 (UTC)[reply]
AITALK doesn't forbid the use of LLMs for discussions, it merely suggests that they may be hatted (which, by the way, if you didn't notice I also changed my 'should' to 'may'.) The only time LLM use in talk pages tends to escalate to sanctions is when a user persistently lies about it; which to be fair is common, but what I'm proposing is that any persistent (i.e., continuing after being notified and obviously subject to the limited carveouts) LLM usage for discussions should be considered disruptive. As my title here says, it's more WP:LLMCOMM than it is WP:AITALK. LLMCOMM begins with the sentence Editors should not use LLMs to write comments generatively. and my whole goal here is to basically turn that 'should' into a 'must' (while giving reasoning, addressing loopholes, and also synthesising AITALK into it to provide remedies/sanctions for the prohibited action) Athanelar (talk) 21:23, 13 January 2026 (UTC)[reply]
I have been looking at this with fresh eyes, and I think that the entire ==Large language models are not suitable for this task== section can be safely removed.
Overall, I feel like the bulk of the page is trying to persuade the reader to hold the Right™ view, instead of laying out our (proposed) rules.
The ===Boldness is encouraged and mistakes are easily fixed=== subsection is irrelevant. Boldness is encouraged in articles. Mistakes can be fixed in articles (though if you're listening to what people are saying about fixing poor translations and LLM-generated text, "easily" is not true). In the context of user-to-user communication, boldness has costs, and some mistakes are not fixable. Maybe a decade ago, we had an influx of Indian editors (a class?) who had some problems, and in a well-intentioned effort to be warm and friendly, they addressed other editors as "buddy" (e.g., "Can you help me with this, buddy?"). This irritated some editors to the point that there were complaints about the whole group being patronizing, rude, etc. As the sales teams say, you only have one chance to make a first impression. Even if you're just trying to fix grammar errors and simple typos, the Halo effect is real, and it is especially real in a community that takes pride in our brilliant prose (←the original name for Wikipedia:Featured articles). A well-written comment really does get a better reception here than broken English or error-filled posts.
Also, "using an LLM to communicate on your behalf on Wikipedia fails to demonstrate that you...have the required competence to communicate with other editors" might feel ableist to people with communication disorders. The link to Wikipedia:Not compatible with a collaborative project is misleading (it's about people who are arrogant, mean, or think they should be exempt from pesky restrictions like copyrights; it's not about people who are trying to cooperate but struggle to write in English).
I have been thinking about an essay along these lines:
How to encourage non-AI comments
There are practical steps experienced editors can take to encourage non-AI participation.
Please do not bite the newcomers. People who use AI regularly are often surprised that this community rejects most LLM-style content. Gently inform newcomers about the community's preferences.
Focus on content, not on the contributor or your perception of their skills. Don't tell newcomers that the Wikipedia:Competence is required essay says they have to be able to communicate in English. Kind and helpful responses to broken English, machine translation, non-English comments, typos, and other mistakes encourage people to participate freely. If people see that well-intentioned comments written in less-than-perfect English sometimes produce rude responses, they will be more motivated to use AI tools.
Accept mistakes, apologies, corrections, and clarifications with grace. Ask for more information if you think the person's comment doesn't make sense. Ask for a short summary if it is particularly long.
but I'm not sure it would actually help. People who are most irritated by "AI slop" don't automatically all have the social and emotional skills to be patient with the people who are irritating them.
I've posted a much shorter (~20%) and softer version of this proposal in my sandbox. I tried to remove persuasive content and examples from the mainspace, as well as shortening the few explanations that I kept. I also added practical information for experienced editors (so we're permitting dyslexic editors to use LLMs, but you're permitted to HATGPT, so...let's at least not edit war?). Maybe the contrast between the two will be informative. WhatamIdoing (talk) 19:53, 14 January 2026 (UTC)[reply]
I much prefer WAID's version as it restricts itself to the point and doesn't preach or demonise anyone or anything. I would, though, rephrase the authorised uses section so as to focus on the uses rather than actions, advice or specific conditions. Perhaps something like:
The following uses are explicitly permitted:
Careful copyediting: You may use an LLM to copyedit what you have written (for example to check your spelling and grammar), but you must always check the output as the tools sometimes change the meaning of a sentence.
As an assistive technology: If you have a communication disorder, for example severe dyslexia, LLM tools are permitted as a useful assistive technology. You are not required to disclose any details about your disability.
Translation. People with limited English, including those learning the language, may use AI-assisted machine translation tools (e.g., DeepL Translator) to post comments in English. Please consider posting both your original text plus the machine translation.
You are not required to state why you are using an LLM but in some cases doing so may help other editors understand you.
I do plan to synthesise some of WAID's into mine, but I still have major issues with the suggestions for how to handle some of these carveouts; because they provide any bad-faith editor (which, given the number of people I see lie about using LLMs, is a lot) a get-out-of-jail-free card. Or rather, it means we essentially can't enforce the guideline in good faith at all. We can't simultaneously say "you shouldn't generate comments with LLMs" and also say "but if you have certain exempting circumstances, you can essentially do whatever you want with LLMs with no disclosure whatsoever" because it makes it impossible for us to enforce against users using LLMs 'wrong' without inevitably catching, for example, a dyslexic editor who decides they want an LLM to compose their entire comment and so it sounds 100% AI generated. Athanelar (talk) 02:28, 15 January 2026 (UTC)[reply]
Yes, this is a problem. We can declare a total ban and thereby officially write discrimination against people with disabilities and English language learners into our guidelines.
Alternatively, we can permit reasonable accommodations and give editors no way to be certain that the person using it truly qualifies for it. We can predict that we will have a number of emotional support peacocks in addition to people who don't know that it's banned, people who legitimately do fall into one of the reasonable exceptions, some rule-breaking jerks, and some people who believe that what they're doing is reasonable (in their eyes) and therefore the community's rule is unreasonable and shouldn't be enforced against them. (I'm pretty sure psychology has a name for the belief that rules don't apply to you unless you agree with/consent to them, but I don't remember what the word is.)
Plus, of course, no matter what we write, there would still be the problem of editors incorrectly hatting comments written by English language learners and autistic editors, because AI-generated text resembles some common ESL and autistic writing styles (e.g., simpler sentence structure).
I support revision 3 as is, without any changes that would further weaken its language. Having seen how LLM use is currently being handled by the community at other venues, including article talk pages, content-related noticeboards, and WP:ANI, my impression is that the discussion here is not representative of the community sentiment toward LLM use as a conduct issue, which is much more negative than is being portrayed here. A request for comment will invite input from the editors who spend more time resolving issues resulting from LLM use but do not closely follow all of the relevant village pump discussions. — Newslinger talk 05:11, 15 January 2026 (UTC)[reply]
Oppose for all the reasons that WhatamIdoing explained in the discussion far more eloquently than I can. It's not a guideline to help editors understand the issues and good practice around LLM use on talk pages; it's an overly-long essay proselytising the evils of LLMs (well, that's a bit hyperbolic, but not by huge amounts). Don't get me wrong, we should have a guideline in this area, but this is not it. Thryduulf (talk) 19:44, 15 January 2026 (UTC)[reply]
Oppose the guideline per WhatamIdoing and support her alternative proposal at User:WhatamIdoing/Sandbox. This policy conflates all sorts of problems with AI (what is the section User:Athanelar/Don't use LLMs to talk for you#Yes, even copyediting doing here when the substance of that section is about copyediting articletext in a guideline that is about talk page comments?), makes a number of dubious claims about LLMs that, rather than being supported by evidence, are supposed to be taken on faith, and is once again either dubiously unclear or internally contradictory (the claim that the guideline does not aim to restrict the use of LLMs [for those with certain disabilities or limitations], for example). This would be great as an WP:Essay, but definitely not as a guideline. Katzrockso (talk) 23:03, 15 January 2026 (UTC)[reply]
what is the section User:Athanelar/Don't use LLMs to talk for you#Yes, even copyediting doing here when the substance of that section is about copyediting articletext in a guideline that is about talk page comments? Showing that LLMs have trouble staying on task when copyediting is relevant regardless of where that copyediting takes place, whether it's in articletext or talk page comments. It's a supplement to the caution in the 'Guidance' section about using LLMs to cosmetically enhance comments. Athanelar (talk) 23:11, 15 January 2026 (UTC)[reply]
Elsewhere I have used LLMs to copyedit a few times and I have noticed this phenomenon (LLMs making additional changes beyond what you asked) using the freely available LLMs (I believe that the behavior of models is wildly variable so I cannot speak about the paid ones, which I refuse to pay for on principle). However, this was not a problem when I gave the LLM more specific instructions (i.e. do not change text outside of the specific sentence I am asking you to fix). The gist of the argument in that section is a non-sequitur: from the three examples given, the conclusion LLMs cannot be trusted to copyedit text and create formatting without making other, more problematic changes does not follow. Katzrockso (talk) 23:41, 15 January 2026 (UTC)[reply]
Athanelar, guidelines don't normally spend a lot of time trying to justify their existence. Think about an ordinary guideline, like Wikipedia:Reliable sources. You don't expect to find a section in there about what would happen to Wikipedia if people used unreliable sources, right? This kind of content is off topic for a guideline. WhatamIdoing (talk) 00:24, 16 January 2026 (UTC)[reply]
Sure, but LLMs are a topic that people are uniquely wont to quibble about, whether because their daily workflow is already heavily LLM-reliant or simply because they have no idea why anybody would want to restrict the use of LLMs. I think it's sensible to assume that our target audience here will be people who aren't privy to LLM discourse, especially Wikipedia LLM discourse, and so some amount of thesis statement is sensible. Athanelar (talk) 01:44, 16 January 2026 (UTC)[reply]
Oppose We should do something, but this manifesto isn't it. For example:
This is supposed to be about Talk: pages, and it spends 200+ words complaining about LLMs putting errors into infoboxes and article text.
Sections such as A large language model can't be competent on your behalf repeatedly invoke an essay, while apparently ignoring the advice in that same essay (e.g., "Be cautious when referencing this page...as it could be considered a personal attack"). In fact, that same essay says If poor English prevents an editor from writing comprehensible text directly in articles, they can instead post an edit request on the article talk page – something that will be harder for editors to do, if they're told they can't use machine translation because the best machine translation for the relevant language pair now uses some form of LLM/AI – especially DeepL Translator.
Overall, this is an extreme, maximalist proposal that doesn't solve the problems and will probably result in more drama. In particular, if adopted, I expect irritable editors to improperly revert comments that sound like they were LLM-generated (in their personal opinion) when they shouldn't. IMO "when they shouldn't" includes comments pointing out errors and omissions in articles, people with communication disorders such as severe dyslexia (because they'll see "bad LLM user" and never stop to ask why they used it), people with autism (whose natural, human writing style is more likely to be mistaken for LLM output), and people who don't speak English and who are trying to follow the WP:ENGLISHPLEASE guideline. WhatamIdoing (talk) 23:07, 15 January 2026 (UTC)[reply]
I agree in principle. That said, from the discussion above, I do think we need to redesign the unblock process to make it less dependent on English skills, because needing to post a well-written apology is why many people turn to their favorite LLM. I'm looking at the Wikipedia:Unblock wizard idea, which I think is sound, but it still wants people to write "in your own words". For most requests, it would probably make more sense to offer tickboxes, like "Check all that apply: □ I lost my temper. □ I'm a paid editor. □ I wrote or changed an article about myself, my friends, or my family. □ I wrote or changed an article about my client, employer, or business" and so forth. WhatamIdoing (talk) 00:40, 16 January 2026 (UTC)[reply]
From my limited experience reading unblock requests, it appears that the main theme that administrators are looking for is admission of the problem that led to the block and a genuine commitment to avoiding the same behavior in future editing. I think some people might object to such a formulaic tickbox (likely for the same reasons they oppose the use of LLMs in unblock requests) as it removes the ability of editors to assess whether the appeal is 'genuine' (whether editors are reliable arbiters of whether an appeal is genuine or not is a different question), which is evinced from the wording and content of the appeal. Katzrockso (talk) 01:25, 16 January 2026 (UTC)[reply]
I think we need to move away from a model in which we're looking for an emotional repentance and towards a contract- or fact-based model: This happened; I agree to do that. WhatamIdoing (talk) 04:06, 16 January 2026 (UTC)[reply]
I think the key thing that needs to be communicated is that they understand why they were blocked. Not just a "I got blocked for edit warring" but an "I now understand edit warring is bad because...". Agreeing on what happened is a necessary part of that (if you don't know why you were blocked you don't know what to avoid doing again) but not sufficient, because if you don't understand why we regard doing X as bad, then you're likely to do something similar to X and get blocked again. Thryduulf (talk) 04:33, 16 January 2026 (UTC)[reply]
My thought with tickboxes is that there is no opportunity to use an LLM when all you're doing is ticking a box.
l partly agree with your view that "It doesn't matter whether they're sorry". It doesn't matter in terms of changing their behavior, but it can matter a lot in terms of restoring relationships with any people they hurt. This is one of the difficulties. WhatamIdoing (talk) 17:50, 16 January 2026 (UTC)[reply]
Sure, there's no opportunity to use an LLM. But then we have exactly the same problem that we have when they're using LLMs: we don't actually know that they understand anything at all. -- asilvering (talk) 18:38, 16 January 2026 (UTC)[reply]
I think it depends on what you put in the checkboxes. Maybe "□ I believe my actions were justified under the circumstances" or "□ I was edit warring, but it was for a good reason". WhatamIdoing (talk) 20:28, 16 January 2026 (UTC)[reply]
I'd support it if it was tweaked. First, a preamble. We continue to nibble around the edges of the LLM issue without addressing the core issues. I still think we need to make disclosure of AI use mandatory before we're going to have any sort of effective discussion about how to regulate it. You can't control what you don't know is happening. That might take software tools to auto-tag likely AI revisions, or us building a culture where it's okay to use LLMs as long as you're being open about it. General grumbles aside, let's approach the particular quibbles with this proposal. This guideline is contradictory. The lead says that using LLMs is forbidden...but the body is mostly focused on trying to convince you that LLM use is bad. It's more essay than guideline. I also think that it doesn't allow an exemption for translation, which is...let's be honest...pervasive. Saying you can't use translation at all to talk to other editors will simply be ignored. I think this needs more time on the drawing board, but I'd tentatively support this if the wording was "therefore using them to generate user-to-user communication is strongly discouraged." rather than forbidden. CaptainEek Edits Ho Cap'n!⚓ 01:33, 16 January 2026 (UTC)[reply]
Just one small point, but from a literal reading of two current rules, you are already required to disclose when you produce entirely LLM generated comments or comments with a significant amount of machine generated material; the current position of many Wikipedia communities (relevantly, us and Commons) is that this text is public domain, and all editors, whenever they make an edit with public domain content, "agree to label it appropriately". [5]. Therefore, said disclosure is already mandatory - mainspace, talkspace, everywhere. The fact that people don't disclose, despite agreeing that they will whenever they save an edit, is a separate issue to the fact that those rules already exist. GreenLipstickLesbian💌🧸 06:31, 16 January 2026 (UTC)[reply]
@GreenLipstickLesbian, I think that's a defensible position, but not one that will make any sense to the vast majority of people who use LLMs. So if we want people to disclose that they've used LLMs, we have to ask that specifically, rather than expecting them to agree with us on whether LLM-generated text is PD. -- asilvering (talk) 18:40, 16 January 2026 (UTC)[reply]
@Asilvering Yes, but the language not being clear enough for people to understand is, from my perspective, a separate issue as to whether or not the rule exists. We don't need to convince editors to agree with us that LLM generated text is PD, just the same way I don't actually need other editors to agree with me on whether text they find on the internet is public domain or that you can't use the Daily Mail for sensitive BLP issues; there just needs to be a clear enough rule saying "do this", and they can follow it and edit freely, or not and get blocked.
And just going to sandwich on my point to @CaptainEek - it is becoming increasingly impossible to determine if another editor's text in any way incorporates text from an LLM, given their ubiquity in translator programs and grammar/spellcheck/tone checking programs, which even editors themselves may not be aware use such technology. So LLMDISCLOSE, as worded, will always remain unenforceable and can never be made mandatory - and that's before getting into the part where it says you should say what version of an LLM you used, when a very large segment of the population using LLMs simply is not computer literate enough to provide that information. (Also, I strongly suspect that saying "I used an LLM to proofread this" after every two line post which the editor ran through Grammarly, which is technically what LLMDISCLOSE calls for, would render the disclosures as somewhat equivalent to the Prop 65 labels - somewhere between annoying and meaningless in many cases, and something which a certain populace of editors would stick on the end of every comment because they believe that's less likely to get them sanctioned than forgetting to mention they had Grammarly installed)
However, conversely, what the average enWiki editor cares about is substantial LLM interference - creation of entire sentences, extensive reformulation - aka, the point at which the public domain aspect of LLM text and the PD labeling requirement starts kicking in. It's not a perfect relationship, admittedly, but it covers the cases that I believe most editors view should be disclosed, while leaving alone many of the LLM use cases (like spellcheck, limited translation, formatting) that most editors are fine with or can, at the very least, tolerate. GreenLipstickLesbian💌🧸 19:35, 16 January 2026 (UTC)[reply]
@GreenLipstickLesbian WP:LLMDISCLOSE isn't mandatory though, just advised. In a system where it is not mandated, it won't be done unless folks are feeling kindly. But I acknowledge that with the current text of LLMDISCLOSE, we could begin to foster a culture that encourages, rewards, and advertises the importance of LLM disclosure. We may need a sort of PR campaign where it's like "are you using AI? You should be disclosing that!" But I think it'd be more successful if we could say you *must*. CaptainEek Edits Ho Cap'n!⚓ 18:57, 16 January 2026 (UTC)[reply]
For the most part, people do what's easy and avoid what's painful. If you want LLM use disclosed, then you need to make it easy and not painful. For example, do we have some userboxes, and can that be considered good enough disclosure? If so, let's advertise those and make it easy for people to disclose. Similarly, if we want people to disclose, we have to not punish them for doing so (e.g., don't yell at them for being horrible LLM-using scum). WhatamIdoing (talk) 20:18, 16 January 2026 (UTC)[reply]
Maybe... or maybe a checkbox would just get some people to check it always, even if they're not using an LLM (we don't have a rule against false disclosures), and still be ignored by most LLM-using editors. WhatamIdoing (talk) 21:23, 16 January 2026 (UTC)[reply]
One argument I've seen against a per-edit checkbox is that it presumes acceptability. I.e., if you tick the box then it means your use of LLMs was fine because you disclosed it. Athanelar (talk) 21:28, 16 January 2026 (UTC)[reply]
Unfortunately, there's no foolproof way to tell whether a comment was LLM generated or not (sure, there are WP:AISIGNS, but again, those are just signs). Agree with Katzrockso that this would work better as an essay than a guideline. Some1 (talk) 02:30, 16 January 2026 (UTC)[reply]
Oppose. Too long, and I don't think a fourth revision would address the problems; this is trying to do too much, some of which is unnecessary and some of which is impossible to legislate. I agree with those who say a paragraph (or even a sentence) somewhere saying LLMs should not be used for talk page communication would be reasonable. Mike Christie (talk - contribs - library) 13:38, 16 January 2026 (UTC)[reply]
Support the crux of the proposal, which would prohibit using an LLM to "generate user-to-user communication". This is analogous to WP:LLMCOMM's "Editors should not use LLMs to write comments generatively", and would close the loophole of how the existing WP:AITALK guideline does not explicitly disallow LLM misuse in discussions or designate it as a behavioral problem. A review of the WP:ANI archives shows that editors are regularly blocked for posting LLM-generated arguments on talk pages and noticeboards, and the fact that our policies and guidelines do not specifically address this very common situation is misleading new editors into believing that this type of LLM misuse is acceptable. Editors with limited English proficiency are, of course, welcome to use dedicated machine translation tools (such as the ones in this comparison) to assist with communication. The passage of the WP:NEWLLM policy suggests that LLM-related policy proposals are more likely to succeed when they are short and specific, so I recommend moving most of the proposed document to an information or supplemental page that can be edited more freely without needing a community-wide review. — Newslinger talk 14:17, 16 January 2026 (UTC)[reply]
I said above under version 2 that I don't think much of what is being addressed here is legislatable at all, but if anything is to be added I'd like to see a sentence or two added to a suitable guideline as Novem Linguae suggests. I think making this into an essay is currently the best option. Essays can be influential, especially when they reflect a common opinion, so it's not the worst thing that can happen to your work. Mike Christie (talk - contribs - library) 16:41, 16 January 2026 (UTC)[reply]
@Newslinger, I've read that some of the "dedicated machine translation" tools are using LLMs internally (e.g., DeepL Translator). Even some ordinary grammar check tools (e.g., inside old-fashioned word processing software like MS Word) are using LLMs now. Many people are (or will soon be) using LLMs indirectly, with no knowledge that they are doing so. WhatamIdoing (talk) 17:44, 16 January 2026 (UTC)[reply]
Which is one of the reasons why 1) people who can't communicate in English really shouldn't be participating in discussions on enwiki and 2) people who use machine translation (of any type) really should disclose this and reference the source text (so other users who either speak the source language or prefer a different machine translation tool can double-check the translation themselves). -- LWG talk (VOPOV) 17:52, 16 January 2026 (UTC)[reply]
We sometimes need people who can't write well in English to be communicating with us. We need comments from readers and newcomers that tell us that an article contains factual errors, outdated information, or a non-neutral bias. When the subject of the article is closely tied to a non-English speaking place/culture, then the person most likely to notice those problems is someone who doesn't write easily in English. If one of them spots a problem, our response should sound like "Thanks for telling us. I'll fix it" instead of "People who can't communicate in English really shouldn't be participating in discussions on enwiki. This article can just stay wrong until you learn to write in English without using machine translation tools!" WhatamIdoing (talk) 19:51, 16 January 2026 (UTC)[reply]
IMO if they are capable of identifying factual errors, outdated information, or non-neutral bias in content written in English, then they should be capable of communicating their concerns in English as well, or at least of saying "I have some concerns about this article, I wrote up a description of my concerns in [language] and translated it with [tool], hopefully it is helpful." With that said, I definitely don't support biting newbies, and an appropriate response to someone who accidentally offends a Wikipedia norm is "Thanks for your contribution. Just so you know, we usually do things differently here, please do it this other way in the future." -- LWG talk (VOPOV) 20:04, 16 January 2026 (UTC)[reply]
Because English is the lingua franca of the internet, millions of people around the world use browser extensions that automatically translate websites into their preferred language. Consequently, people can be capable of identifying problems in articles but not actually be able to write in English. WhatamIdoing (talk) 20:22, 16 January 2026 (UTC)[reply]
I don't speak Dutch or Portuguese beyond a handful of words but I can tell you that if I found an article about a political party or similar group saying "De leider is een vreselijke man." or "O líder é um homem horrível." that it needs to be changed. Similarly I can tell you that an article infobox saying the gradient of a railway is 40% is definitely incorrect, but I can't tell you what either needs changing to and I can't articulate what the problem is in Dutch or Portuguese, but I can use machine translation to give any editors there who don't speak English enough of a clue that they can fix it. The same is true in reverse. Thryduulf (talk) 21:48, 16 January 2026 (UTC)[reply]
I don't know. A recent example: I removed a paragraph containing a hallucinated source from an article here recently. That paragraph had made it to the Korean Wikipedia (I checked dates to confirm the direction of transit), so I removed it there too, and used Google Translate to post an explanation because otherwise it'd look like I was just removing text for no reason. Gnomingstuff (talk) 01:38, 18 January 2026 (UTC)[reply]
Yes, I also recognize that many machine translation tools now incorporate LLMs in their implementation. (The other active RfC at Wikipedia talk:Translation § Request for comment seeks to address this for translated article content, but not translated discussion comments.) When an editor in an LLM-related conduct dispute mentions that they are using an LLM for translation, I have always responded that there is a distinction between using a dedicated machine translation tool (such as Google Translate or DeepL Translator) that aims to convey a faithful representation of one's words in the target language, and an AI chatbot that can generate all kinds of additional content. If someone uses a language other than English to ask an AI chatbot to generate a talk page argument in English, the output would not be acceptable in a talk page discussion. But, if someone uses an LLM-based tool (preferably a dedicated machine translation tool) solely to translate their words to English without augmenting the content of their original message, that would not be a generative use of LLM and should not be restricted by the proposal. — Newslinger talk 01:00, 17 January 2026 (UTC)[reply]
Unless there is some reliable way for someone other than the person posting the comment to know which it was then the distinction is not something we can or should incorporate into our policies, etc. Thryduulf (talk) 01:02, 17 January 2026 (UTC)[reply]
Support. I agree with the concerns that it is too long, and certainly far from a perfect proposal, but having something imperfect is better than a consensus against having any regulation at all. I do also agree with Newslinger's proposal of moving the bulk of it to an information page if there is consensus for it. Chaotic Enby (talk · contribs) 17:22, 16 January 2026 (UTC)[reply]
Oppose - Too restrictive and long. There is a reasonable way to use LLMs, and this effectively disallows it, which is a step too far. That, coupled with the at-best educated guessing about whether something actually is an LLM and the assumption that it is all unreviewed, makes it untenable. PackMecEng (talk) 17:31, 16 January 2026 (UTC)[reply]
Support the spirit of the opening paragraph, but too long and in need of tone improvements. Currently the language in this feels like it is too internally-oriented to the discussions we have been having on-wiki about this issue, whereas I would prefer it to be oriented in a way that will help outsiders with no context understand why use-cases for LLMs that might be accepted elsewhere aren't accepted here. The version at User:WhatamIdoing/Sandbox is more appropriate in length and tone, but too weak IMO. I would support WhatamIdoing's version if posting the original text/prompt along with LLM-polished/translated output were upgraded from a suggestion to an expectation. With that said, upgrading WP:LLMDISCLOSE and WP:LLMCOMM to guidelines is the simplest solution and is what we should actually do here. -- LWGtalk(VOPOV)17:34, 16 January 2026 (UTC)[reply]
I would also support those last two proposals, with the first one being required from a copyright perspective (disclosure of public domain contributions) and the second one being a much more concise version of the proposal currently under discussion. Chaotic Enby (talk · contribs) 17:36, 16 January 2026 (UTC)[reply]
Support per Chaotic Enby and Newslinger. I don't see an issue with length, since the lead and nutshell exist for this reason, but am fine with some of it being moved to an information page. LWG's idea above is also good, though re LLMDISCLOSE, Every edit that incorporates LLM output should be marked as LLM-assisted by identifying the name and, if possible, version of the AI in the edit summary is something nobody is going to do unprompted (and personally I've never seen it done). Kowal2701 (talk) 19:01, 16 January 2026 (UTC)[reply]
something nobody is going to do unprompted true, but it's something people should be doing. Failing to realize you ought to disclose LLM use is understandable, but failing to disclose it when specifically asked to do so is disruptive - there's simply no constructive reason to conceal the provenance of text you insert into Wikipedia. So while I don't expect people to do this unprompted, I think we should be firmly and kindly prompting people to do it. -- LWGtalk(VOPOV)19:11, 16 January 2026 (UTC)[reply]
I'd rather something like Transparency about LLM-use is strongly encouraged, and we should have practically zero tolerance for people denying LLM-use in unambiguous cases; that ought to be met with a conditional mainspace block. I'll be bold and add something Kowal2701 (talk) 20:47, 16 January 2026 (UTC)[reply]
Oppose as written. For an actual guideline, I would prefer something like User:WhatamIdoing/Sandbox. It makes clear the general expectations of the community should it be adopted. This proposal reads like an essay; it's trying to convince you of a certain viewpoint. Guidelines should be unambiguous declarations about the community's policies. For me, the proposed guideline is preaching to the choir, I agree with basically all of it, but I don't see it as appropriate for a guideline. I second what Chaotic Enby, Newslinger, and CaptianEek have said, and absolutely support the creation of a guideline of this nature. -- Agentdoge (talk) 19:27, 16 January 2026 (UTC)[reply]
Oppose. This guideline is too long and too complicated. The guideline should be pretty simple - the length and the complexity should be similar to User:WhatamIdoing/Sandbox. An explanation of the problems with LLMs may be added later as an explanatory essay, but not in this guideline. Provisions that editors are expected to WP:AGF before alleging LLM use should also be made. ✠ SunDawn ✠ Contact me! 02:05, 17 January 2026 (UTC)[reply]
Weak support for the original proposal because the crux of it is still better than the status quo in spite of its flaws (too long, unfocused, essay-like); Strong support for WhatamIdoing's version, whether or not tweaks happen to it. Choucas0 🐦⬛ 14:32, 17 January 2026 (UTC)[reply]
I support a ban on LLM-written communication, but oppose this draft, as I find it to be poorly written in several respects. It is too long. The "Remedies" section entirely duplicates current practice elsewhere, some (maybe all?) of which is already documented in other guidelines. It says something is "forbidden" but then goes on to give an example of how LLMs actually can be used. And it generally contains far too much explaining of its own logic, which makes it much weaker and open to wikilawyering. "Anything an LLM can do, you can do better"? Nope. "Large language models cannot interpret and apply Wikipedia policies and guidelines"? Dubious. Toadspike[Talk]18:22, 17 January 2026 (UTC)[reply]
Oppose as written, support a ban/regulation on LLM communication. Toadspike and WhatamIdoing expressed my concerns here quite nicely; I think this proposal is not yet ready to become a guideline. For example, it uses examples from article editing for comparisons to communication. I also think that it insufficiently addresses WP:CIVILITY: some people may write in a way others interpret as LLM-generated, and if we fire the harshest wording we have at them, we might scare away some great contributors from the project. Therefore I too support WhatamIdoing's proposal. Best, Squawk7700 (talk) 22:23, 17 January 2026 (UTC)[reply]
Thanks for the clarification! I myself meant it as more of a literal proposal (not a WP:PROPOSAL), can't speak for the others here of course, and do think we'd probably need to do some workshopping. That said, I think you already did quite a good job on that draft. Kind regards Squawk7700 (talk) 23:21, 17 January 2026 (UTC)[reply]
Don't feel bad. I've been writing Wikipedia's policies and guidelines for longer than some of our editors have been alive, and we spent hours discussing the problems we're having with AI comments before Athanelar launched this RFC. Given all that, it would be surprising if I couldn't throw together something that looks okay. WhatamIdoing (talk) 23:59, 17 January 2026 (UTC)[reply]
Any outcome here which results in more restriction against LLM usage is a positive one for me. I might not be fully satisfied with WAID's approach to the matter, but more community consensus against AI will only ever be an improvement as far as I'm concerned. I'll be happy if my RfC leads to that, whether I wrote the final result is irrelevant. Athanelar (talk) 02:19, 18 January 2026 (UTC)[reply]
Regarding examples from articles: Finding an example of a talk page edit of this nature would be difficult bordering on impossible; people aren't supposed to edit others' comments, and they almost never do except to vandalize them or occasionally fix typos. Gnomingstuff (talk) 23:33, 18 January 2026 (UTC)[reply]
Oppose. Most of the section "Large language models are not suitable for this task" digresses from the main topic and relies on a very outdated understanding of what LLMs can do. Among its issues, the idea that LLMs only repeat text from their training data is untenable in 2026. The section also completely ignores LLM fine-tuning. However, like dlthewave, I would Support something similar to WhatamIdoing's sandbox, which is well-reasoned, properly nuanced and relatively concise. Alenoach (talk) 20:37, 18 January 2026 (UTC)[reply]
[It] relies on a very outdated understanding of what LLMs can do.[citation needed] Among its issues, the idea that LLMs only repeat text from their training data is untenable in 2026.[citation needed]
For example, it claims that LLMs are not able to perform arithmetic operations, and that they instead only retrieve memorized results. But what if you ask it to calculate e.g. 73616*3*168346/4? Surely, this can't be in its training data, and yet ChatGPT gets the exact answer even without code execution. Alenoach (talk) 21:19, 18 January 2026 (UTC)[reply]
It can use code interpreter, but it generally doesn't and it's often visible in the chain-of-thought when it uses it. You can also see that when the number of digits in a multiplication is too high (e.g. > 20 digits), it will start making mistakes (similarly to a human brain), whereas a Python interpreter would get an exact answer. I haven't found any great source assessing ChatGPT on multiplications, but the graph shown here gives an idea of what ChatGPT's performance profile on multiplications looks like. Alenoach (talk) 21:54, 18 January 2026 (UTC)[reply]
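For reference, the exact value of the example expression above can be checked with ordinary integer arithmetic (a quick sketch, independent of any LLM or chatbot):

```python
# Exact evaluation of the example expression 73616*3*168346/4.
# The first factor is divisible by 4, so integer division is exact.
product = 73616 * 3 * 168346
result = product // 4
print(result)  # 9294719352
```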
Because there's such a staggering amount of correct calculations in its training data that it usually gets things right, but it's fundamentally still a large language model. JustARandomSquid (talk) 21:23, 18 January 2026 (UTC)[reply]
...which means that we're wrong to say that these tools can't do these things.
Remember that not everyone is going to draw a distinction between "the LLM part of ChatGPT" and "the non-LLM parts of ChatGPT". Non-technical people will often say "LLM" or "AI" when they mean something vaguely in that general category. The proposal here doesn't distinguish between the LLM and non-LLM components of these tools. It seeks to ban them all. WhatamIdoing (talk) 21:32, 18 January 2026 (UTC)[reply]
Mine doesn't seem to disclose that. If they've started secretly doing stuff under the hood that's messed up. Still though, guidelines shouldn't have to apologetically explain themselves like this proposal does. JustARandomSquid (talk) 21:32, 18 January 2026 (UTC)[reply]
Support LLM outputs tend to be repetitive and irrelevant, and to reference common guidelines that everyone already knows in support of the poster's argument. They are a time sink, because even an AI-generated response with no effort put into it has to be addressed by human effort. It is better to just ban it all. Zalaraz (talk) 01:50, 20 January 2026 (UTC)[reply]
Oppose as guideline, but retain this laudable effort on the part of Athanelar to address a growing problem as a proto-essay. While I am in favor of adapting WhatamIdoing's proposal into a badly-needed guideline, I fully understand OP's frustration. I say this as someone fresh out of a not-so-pleasant encounter with a disruptive LLM-using sockpuppet. As someone who uses next to no automated tools or bots himself (I appreciate 'Find on page', though it feels a bit like cheating, and (sarcasm alert) it's nice to not have to constantly format a custom sig), you can imagine the frustration of having to watch someone spit out (I use the phrase advisedly) rapid-fire artificial responses to your questions. Worse, the LLM argues on its own behalf, due to the lack of a clear guideline forbidding it from doing so, while repeating things you've brought up (such as articles, guidelines, and shortcuts) in an imbecilic way. I don't feel the same way about participating in a translated discussion, and the fact that I've never noticed being in one probably means the process works well. In any case, the sooner an LLM-talk guideline forbidding AI-produced discussion is implemented, the better. StonyBrook babble 07:55, 20 January 2026 (UTC)[reply]
As policy, this proposal assumes that it is the case (and will for some time continue to be the case) that people can confidently identify the output of Large Language Models. I am skeptical that, for short texts, this can reliably be done today. Worse, I can see this shifting debate on talk pages and drama boards toward partisan allegations of AI use that can neither be confirmed nor refuted. Worse, in the coming months, expect to see LLM integration into operating systems, browsers, and text editors for writing and editing assistance; many people won’t know whether the dotted red underlines derive from an LLM or a dictionary. MarkBernstein (talk) 23:08, 19 January 2026 (UTC)[reply]
in the coming months, expect to see LLM integration into operating systems, browsers, and text editors for writing and editing assistance We're already there. Gemini is in everything Google, you can't really use any Meta product without accidentally triggering Meta AI, Microsoft Word has Copilot, Windows 11 is an "AI-focused operating system", there are multiple competing AI browsers like ChatGPT Atlas, etc. etc. etc. SuperPianoMan9167 (talk) 23:19, 19 January 2026 (UTC)[reply]
Prioritize current towns over former municipalities in infoboxes and leads
Hi, I'm new here on en.wiki, so I don't know if this is the right place; if it isn't, please move the thread to the right one. I've already gone through the idea lab, where I received an approving opinion and a dissenting one (though the latter seemed to me to be due to a misunderstanding, probably caused by a poor formulation of the proposal on my part).
I'm opening this discussion to propose to prioritize current towns (and other sub-municipal entities) over former municipalities in infoboxes and leads in the countries in which we don't have different articles between a municipality and its administrative center, like Italy, Germany, Switzerland. What I'm talking about:
Let's take a look at three articles about former municipalities that have become municipal sub-entities in these three countries:
Italy, Bazzano, Valsamoggia: the infobox and the lead are about the present frazione, and it is mentioned that it was an independent municipality until 2014
Germany, Bachfeld: here too the infobox and the lead are about the present Ortsteil, mentioning that it was independent until 2019
Switzerland, Adlikon bei Andelfingen: here both the infobox and the lead are about the former municipality, although it is still an independent town with its own borders and everything (see here for reference). This means that we won't be able to update the infobox and the lead with new statistics about population (ok, in Switzerland data about municipal sub-entities are usually scarce, but still existent) and administrative divisions, because they are not about the town, but the former municipality. Please note that this case is not isolated, but common to every former Swiss municipality.
What would it change:
Taking Adlikon bei Andelfingen as an example again, the proposal, if accepted, would lead to this kind of change:
In the lead, from Adlikon bei Andelfingen (or simply Adlikon) is a former municipality in the district of Andelfingen in the canton of Zürich in Switzerland to Adlikon bei Andelfingen (or simply Adlikon) is a town in the municipality of Andelfingen, in the canton of Zürich in Switzerland. It was an independent municipality until 31 December 2022.
In the infobox, a "|subdivision_type4=" parameter would be added with the current municipality the town is part of, and the "|neighboring_municipalities=" parameter would be either removed or replaced with the neighbouring towns (although I couldn't find an appropriate parameter in the infobox settlement). Moreover, the population data would be updated if and when new data become available. Finally, the website would be removed unless (which is the case for some towns) it is still in some way an official website about the town.
Please note that I've talked only about Switzerland, but the proposal applies to every country in which we don't have different articles for former municipalities and their former administrative center.
Are there opinions on the matter? --Friniate ✉ 14:21, 9 January 2026 (UTC)[reply]
I've notified of this discussion the users who took part in the discussion at the idea lab, the WikiProjects Switzerland and Geography, and the talk pages of the involved infoboxes.--Friniate ✉ 14:28, 9 January 2026 (UTC)[reply]
We should summarize what reliable sources say about these localities, not update to the most recent information about the topic. If a locality changes in some formal administrative status today, we don't need to change the first sentence of the article until a preponderance of reliable sources recognize this change. Wikipedia is an encyclopedic summarization of what reliable sources say about a subject, giving a broad historical view, rather than the most up-to-date information about a topic. That might mean emphasizing a former administrative status over the newer one, depending on what the sources say. Katzrockso (talk) 14:29, 9 January 2026 (UTC)[reply]
@Katzrockso Well, now I don't think that we are really doing that, we're basically differentiating between countries: with Italy and Germany we're prioritizing the current entities, with Switzerland the former municipalities (at least for the municipalities that were suppressed after Wikipedia was born). If the administrative change is official and confirmed by reliable sources, I don't see why we should prioritize a former entity suppressed maybe years ago...
Yes, there is a reliable source which puts the former municipality first, but can't we take an editorial decision on matters like these, in order to avoid a different treatment of very similar cases across countries? The decision itself to talk about the former municipality and the present village in a single article is an editorial decision... There are anyway also other sources which portray first of all the current town, and on subjects like these I doubt that we will find much else. --Friniate ✉ 14:54, 9 January 2026 (UTC)[reply]
I generally support this proposal. The procedure of WP:NAMECHANGES, which is fairly similar to what we're doing here, is to prioritize the sources after the change/merger. For these tiny ex-villages, we usually don't have many sources, which means I think we can take the news articles on the mergers at face value and assume that future sources will refer to the ex-village as part of another municipality. (I hope this rambling makes sense.) Toadspike[Talk]11:57, 11 January 2026 (UTC)[reply]
I agree with the spirit of the proposal, but it's probably hard to generalize it easily to all cases.
In most cases, I would find it odd to call a former municipality a "town". Depending on the situation, the terms "District" or "Neighborhood" (example: Wollishofen) may be more accurate. But that's not specific to Switzerland.
There are also situations where "former municipality" may be the best descriptor. For example, my hometown St-Legier was now merged with Blonay as Blonay - Saint-Légier. Most people who live there would, when asked, say they live in "St-Legier". However, most official sources would describe St-Legier as a former municipality (example). 7804j (talk) 19:58, 12 January 2026 (UTC)[reply]
@7804j I fear that the HLS/DHS/DSS uses this approach with every former municipality, at least the ones that were dissolved in the last decades. So if we follow that approach, we should keep the present situation.
We have in any case also official sources for sub-municipal entities (it's where I've taken the translation of "towns" since the "Répertoire officiel des localités" is translated as "Official index of cities and towns"). --Friniate ✉ 20:09, 12 January 2026 (UTC)[reply]
I agree the historical dictionary isn't the best example, as it always talks about former municipalities. But I also don't think the official index of cities and towns is. I'm actually not sure why they keep St-Legier and Blonay as distinct, and I wonder whether that's because they haven't updated it yet. For example, St-Legier-la-Chiesaz was itself the merger of St-Legier and La-Chiesaz, but since this is much older, the distinction disappeared in practice. Also, I find the translation of "localités" into "cities and towns" very odd -- you would certainly not refer to a village of a few thousand or hundreds of people as a "town" in Switzerland (in French, for example, both towns and cities would be called "ville", and in Switzerland something is called a "ville" starting from 10k inhabitants). 7804j (talk) 03:41, 13 January 2026 (UTC)[reply]
@7804j No, no, it's updated; here is the French version: as you can see, under "Limites de la commune" there is "Nom officiel de la commune: Blonay - Saint-Légier".
As for the translation I've no objection to "village" or other words, I used "town" simply because it's the official translation, but English is not my mother tongue so I've no opinion on it. --Friniate ✉ 14:03, 13 January 2026 (UTC)[reply]
There are so many definitions of town and city in English that whichever you use is unlikely to be completely wrong. Many years ago, I remember seeing an official sign for the "City of ____", Population: 6. In the US, city can be an indication of size (a city is bigger than a town) or of legal status. WhatamIdoing (talk) 20:57, 13 January 2026 (UTC)[reply]
Ah I wasn't aware that "localité/Ortschaft" was an official term under Swiss law. Good to know!
Then I think it's more an issue of translation. When I see "localité" in French, it makes it clear that it doesn't refer to a proper municipality (commune/Gemeinde). But I think the term "town" or "city" in English would suggest that it's actually a commune/Gemeinde, with all the things that come with it (including different taxation rate, etc.).
I generally support this as well, the proposal makes sense for situations where a name has continuously referred to a particular place and the only difference is a change in how the place is classified. I think the OP deserves some praise for identifying and presenting this niche issue so clearly and for using proper en.Wiki-process as well. I hope they stick around. JoelleJay (talk) 17:09, 14 January 2026 (UTC)[reply]
"Adlikon bei Andelfingen (or simply Adlikon) is a former municipality...", in general "X is a former Y...", makes sense if that is the main way X is discussed today. A UK example might be if a small village is mainly notable for being a former County Town. I imagine, though, that such cases are unusual. Readers would normally want to first know what X is currently, and then maybe later know what it was formerly, probably in a history section. --Northernhenge (talk) 17:28, 14 January 2026 (UTC)[reply]
Generally I agree with others above that we should prioritise current information; however, it is very difficult to make a general rule on these matters. This combines two tricky challenges: precisely defining an article topic, and using the very rigid format of infoboxes on topics that lack this rigidity. That said, the specific proposal seems sound. It gets to part of the second issue by noting the infobox will have to be swapped out. CMD (talk) 00:37, 15 January 2026 (UTC)[reply]
@Chipmunkdavis@Northernhenge I've tried to do a bit of research on related policies on the matter. I see that at Wikipedia:WikiProject Cities/US Guideline, even in the case of "ghost towns", the guideline provides that the emphasis should be put on the current situation... The same can be said for old cities: Troy has an infobox about the current archeological site. I agree that in principle a good guideline should always give enough wiggle room to deal with any kind of situation and exception case by case, but I don't think this means that we can't have a guideline for the vast majority of common situations, in which we'll simply have a former municipal center that became a village in a new (greater...) municipality.
As I said before, if we simply leave it to the sources, the risk is that we are overly influenced by the approach of one single (although undoubtedly reliable) source, in this case the historical dictionary of Switzerland, which puts the most emphasis on former administrative divisions in all these cases, without a case-by-case evaluation. It wouldn't be wrong per se, but it's certainly detrimental to consistency inside the encyclopedia.
So, maybe a good compromise could be something along the lines of: articles about human settlements should be normally updated to the current situation in terms of population, administrative classification, etc. Exceptions can be made in the case in which there is a clear consensus among many reliable sources that a former situation is more relevant (for example in the case of ghost towns, if the reliable sources mainly talk about the town as it was still inhabited rather than on the current ghost town). If both the subjects are relevant you could consider the possibility of having two different articles dedicated to each of them (for example Sparta is about the old city-state, and Sparta, Laconia about the current city, see WP:MULTIPLESUBJECTS). --Friniate ✉ 17:49, 15 January 2026 (UTC)[reply]
Ok, let's see if we have a consensus here. I'd propose to:
Add the following sentence (or a similar one) to Wikipedia:WikiProject Cities/Settlements: Article structure (an advice page): articles about human settlements should be normally updated to the current situation in terms of population, administrative classification, etc, and the lead and the infobox should reflect that. Exceptions can be made in the case in which there is a clear consensus among many reliable sources that a former situation is more relevant (for example in the case of ghost towns, if the reliable sources mainly talk about the town as it was still inhabited rather than on the current ghost town). If both the subjects are relevant you could consider the possibility of having two different articles dedicated to each of them (for example Sparta is about the old city-state, and Sparta, Laconia about the current city, see WP:MULTIPLESUBJECTS).
Open a more specific discussion about the Swiss situation, in particular about the instructions on Template:Infobox Switzerland municipality, which currently state (or at least imply) that we should always prioritize the former municipalities over the current localities.
Gadget Proposal: Mobile-friendly “Go to top” navigation button
I’d like to propose a small, opt-in gadget that adds a floating “go to top” button while browsing Wikipedia pages. The primary motivation for this is improving navigation on mobile, where long articles are common and returning to the top of the page can be unnecessarily tedious. The script also works on desktop skins, but mobile usability is the main focus.
“Go to top” button in light mode (top) and dark mode (bottom) on mobile.
The gadget displays a single floating button once the user has scrolled down the page. Activating it smoothly scrolls the page back to the top. The button automatically hides itself when the user enters any editing interface, including VisualEditor, source editing, and the mobile editor SPA routes, so it does not interfere with editing workflows.
The script adapts to Wikimedia’s light and dark themes using the existing client preference classes and system color scheme where applicable. It does not rely on external libraries or assets and uses only MediaWiki-provided modules and jQuery already loaded by the site. The button is keyboard accessible and includes an ARIA label for screen readers.
The main reason I think this could be useful as a gadget is that many Wikipedia articles are very long, especially when read on mobile devices, and there is currently no consistent, visible affordance for quickly returning to the top of the page. Similar functionality is widely used on other content-heavy platforms and is generally expected by users. Since this would be opt-in, it should not affect users who prefer the current behavior.
The script is currently available as a user script at User:Overandoutnerd/Scripts/goToTop.js. The documentation is available at goToTop. I’m happy to rename, refactor, or split styling into a separate gadget CSS file if that’s preferred. I’m also willing to maintain the gadget and respond to feedback or issues if it’s accepted.
I’d appreciate feedback on whether this would be appropriate as a gadget, as well as any suggestions regarding naming, placement, or implementation details.
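For readers curious how such a script can be structured, here is a minimal sketch (this is not the actual gadget code at User:Overandoutnerd/Scripts/goToTop.js; the threshold and names are illustrative assumptions):

```javascript
// Visibility rule kept as pure logic: show only after the reader has
// scrolled a bit, and never while an editing interface is open.
function shouldShowButton(scrollY, isEditing) {
  return scrollY > 400 && !isEditing;
}

// Browser-only wiring (assumes jQuery and the MediaWiki `mw` object,
// as available on a live wiki page).
if (typeof window !== 'undefined' && typeof $ !== 'undefined') {
  $(function () {
    var $btn = $('<button>')
      .text('↑ Top')
      .attr('aria-label', 'Scroll back to the top of the page')
      .css({ position: 'fixed', right: '1em', bottom: '1em', display: 'none' })
      .on('click', function () {
        window.scrollTo({ top: 0, behavior: 'smooth' });
      })
      .appendTo(document.body);

    $(window).on('scroll', function () {
      // A real gadget would also detect VisualEditor and the mobile
      // editor routes here; this sketch only checks the page action.
      var editing = mw.config.get('wgAction') === 'edit';
      $btn.toggle(shouldShowButton(window.scrollY, editing));
    });
  });
}
```

The actual gadget additionally handles dark-mode client preference classes and the mobile editor SPA routes, as described above.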
I tried it and my first impressions are that I do quite like it because it fits with Wikipedia's style well in Minerva. I do not have any criticism for it right now, but might post later after properly trying it. FantasticWikiUser (talk) 14:48, 17 January 2026 (UTC)[reply]
Note that on iOS you can double-tap the top bar of the phone to return to the top. I imagine Android phones have a similar feature, but it's been a while since I've had one. novov talk edits 01:45, 18 January 2026 (UTC)[reply]
On Macs command-uparrow does this. And on mobile, screen real estate is usually at a premium and wasting it on a seldom-used navigation feature seems like a bad idea. —David Eppstein (talk) 06:24, 20 January 2026 (UTC)[reply]
requests for comment on enabling the 25th anniversary birthday mode
baby globe
to celebrate wikipedia's 25th anniversary, a toggleable "birthday mode" has been created. it consists of easter eggs involving baby globe, such as having it scroll a phone in the top right of an article while the reader scrolls down the page. more details can be found on the linked page. if the feature is enabled by 26 january, administrators can configure the mode through community configuration, and the mode will be public from 16 february to 16 march (an administrator will have to turn on the feature on the day itself).
enabling the feature requires "a consensus from your community", so i have brought it here to ascertain the community consensus. (there was some previous discussion on Wikipedia:Village pump (miscellaneous)/Archive 86 § Easter eggs, but it was archived without further action.)
should english wikipedia enable the 25th anniversary birthday mode, and if so, should it be on by default?
option 1: enable birthday mode, and have it on by default
option 2: enable birthday mode, and have it off by default
option 3: do not enable birthday mode
voting
option 1 > option 2, oppose option 3. i think it would be a shame if the largest wikipedia did not participate in the celebration. i'd also prefer it to be on by default, as people generally don't change default settings, but i'm fine either way. ltbdl (skirt) 19:19, 17 January 2026 (UTC)[reply]
Support option 1 > option 2, oppose option 3 per ltbdl. This is an excellent feature that our readers will hopefully enjoy. It also fits nicely with one of our goals in recent times, which has been to emphasize the human aspect of Wikipedia. Toadspike[Talk]20:41, 17 January 2026 (UTC)[reply]
Prefer option 1 if there is an easy accessible way to disable it. Otherwise, option 2. Oppose option 3 because, come on, it's so stinkin cute. Chaotic Enby (talk · contribs) 21:05, 17 January 2026 (UTC)[reply]
Option 2 Per Thilio. Suggest that greater consideration be given to limits on the number of top-of-page distractions we allow. I also wouldn't mind a shorter period.--Wehwalt (talk) 21:10, 17 January 2026 (UTC)[reply]
Second choice is 3. I feel the case can be made that we're being very self-indulgent in patting ourselves on the back in ways that do not benefit the reader. Wehwalt (talk) 15:17, 19 January 2026 (UTC)[reply]
Option 1 Agree with Chaotic Enby that this is contingent on having a readily available (and known--e.g., included in banner announcement of easter eggs) method to disable. — rsjaffe🗣️21:13, 17 January 2026 (UTC)[reply]
Option 1 let's have fun for once, 25 year anniversaries don't happen very often. Yes it's quirky, correct some people won't like it, but this is only temporary. Let's celebrate in style and get noticed for it. CNC (talk) 20:09, 18 January 2026 (UTC)[reply]
Option 3 unless we have a lot more information. I wouldn't like to see a confetti-throwing Wikimedia globe appear on the page for some disaster- or genocide-related page with massive viewer numbers. The linked Wikimedia page doesn't inform me sufficiently about what to expect. Fram (talk) 10:32, 19 January 2026 (UTC)[reply]
option 1, it's toggleable by the user and doesn't affect the actual contents of the page (thinking back to the Grimace McDonald's wiki incident) Jaidenstar (talk) 17:43, 19 January 2026 (UTC)[reply]
Comment. Some thoughts:
Like Fram, I am also concerned that the easter eggs (cute as they are) will show on serious articles.
If we want to enable the easter eggs by default, then we need to accept that they will show on serious articles or we need to filter which articles they show on. Aside from the scale needed for it, the second option also might go against the community sentiment that Wikipedia is not censored.
Do we know which extension this is linked to? If so, we could raise a patch/file a bug and/or read the code to figure out if there are safeguards against this. Sohom (talk) 11:43, 19 January 2026 (UTC)[reply]
Oh this is Extension:WP25EasterEggs; taking a look at the code, it is extremely configurable! We can enable it for a specific set of pages, or enable it globally and not show it for a specific set of pages, and all of that is configurable through CommunityConfiguration (read from an on-wiki JSON file). Sohom (talk) 12:05, 19 January 2026 (UTC)[reply]
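For editors curious what the CommunityConfiguration-driven setup described above might look like in practice, here is a minimal Python sketch. The JSON keys (`enabled`, `excludedPages`, `enabledNamespaces`) and the helper function are purely illustrative assumptions; the extension's real schema is not documented in this discussion.

```python
import json

# Hypothetical CommunityConfiguration payload for Extension:WP25EasterEggs.
# The actual on-wiki JSON schema may differ; these keys are assumptions.
config_json = """
{
    "enabled": true,
    "excludedPages": ["US invasion of Greenland"],
    "enabledNamespaces": [0]
}
"""

def easter_egg_visible(config, page_title, namespace):
    """Return True if the (hypothetical) easter egg would show on a page."""
    if not config.get("enabled", False):
        return False
    if namespace not in config.get("enabledNamespaces", []):
        return False
    return page_title not in config.get("excludedPages", [])

config = json.loads(config_json)
print(easter_egg_visible(config, "Cake", 0))                      # True
print(easter_egg_visible(config, "US invasion of Greenland", 0))  # False
```

The point of the sketch is just that a deny-list approach like this is cheap to evaluate per page view, which matches Sohom's observation that exclusions can be adjusted entirely through an on-wiki configuration page rather than a code deployment.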
Good job finding it! My concern is that, if we enable it globally, we need to check most en.WP pages to see if they should be excluded. Excluding categories may help, but I remember an argument against actual policy proposals like inappropriate-image blurring that also applies here: categories are imprecise in a lot of ways (the first thing in my search about it is Thryduulf's 2024-11-06T11:57:00). If we use categories regardless of that, we will still need to vet which categories are included or not. LightNightLights (talk • contribs) 12:24, 19 January 2026 (UTC)[reply]
Pretty much every pick for which pages will have it disabled or not will be extremely controversial, and more worryingly could be used as precedent for future "hiding" tools, so I don't think that's something we should go through. Going deeper than straightforward things like "have it enabled on the Main Page" and "have it enabled/disabled on X namespace" opens up a massive can of worms. Chaotic Enby (talk · contribs) 12:37, 19 January 2026 (UTC)[reply]
This has nothing to do with NOTCENSORED though. We are not here to deliberately cause offence with unencyclopedic content either. Displaying these easter eggs or not doesn't change the contents of the encyclopedia. Fram (talk) 14:01, 19 January 2026 (UTC)[reply]
I doubt "deliberately" is the best way to put it. It doesn't change the contents directly, but does leave us with a list of "acceptable pages" and "controversial pages", which can absolutely be used as a precedent for making some "controversial" material less visible/prominent. Chaotic Enby (talk · contribs) 14:03, 19 January 2026 (UTC)[reply]
Wouldn't like such a list either, which means this can go on the Main Page or perhaps on user pages but should stay out of the articles in general (with what we now know of what kind of easter eggs we may expect). Perhaps my opinion would change with more info; at the moment it feels too much like writing a blank cheque. Fram (talk) 14:22, 19 January 2026 (UTC)[reply]
A reader choosing to look at a potentially offensive article does not change the fact that Wikipedia is 25 years old now and we are celebrating that. These are entirely unrelated. Toadspike[Talk]14:39, 19 January 2026 (UTC)[reply]
Not if you display them on the same page, in ways so far unknown. A reader looking in mid-March to our article "US invasion of Greenland" may well be completely unaware that Wikipedia was 25 years old a few months before and that the rightside image is displayed for that reason and not to celebrate the invasion. Fram (talk) 14:50, 19 January 2026 (UTC)[reply]
In my opinion, this is why we should make it clear (e.g. with a banner) that this easter egg is to celebrate Wikipedia's birthday. Incidentally, this also solves the "make it easy to turn off" problem. Chaotic Enby (talk · contribs) 15:02, 19 January 2026 (UTC)[reply]
Your idea sounds good. Maybe we should have text below the globe mascot images along the lines of "Celebrating English Wikipedia's 25th birthday"? It does not suffer from banner blindness and will always appear alongside the cute images, but does not solve the turn-off problem. LightNightLights (talk • contribs) 15:27, 19 January 2026 (UTC)[reply]
That could work! Either that, or making the banner very obviously birthday-themed so readers don't have to actually read the words to understand the context behind it all. Or both! Chaotic Enby (talk · contribs) 15:34, 19 January 2026 (UTC)[reply]
Based on what is already implemented, there is going to be a relatively large toggle on the sidebar of every page (similar to the dark mode one) that says "birthday mode", but yes, CE's idea is not a bad idea either. Sohom (talk) 15:14, 19 January 2026 (UTC)[reply]
Maybe the "Birthday mode" toggles should be shown above the old ones so that people will see a) most likely see the new ones first, and b) most likely see that something was added in the options menu. LightNightLights (talk • contribs) 15:33, 19 January 2026 (UTC)[reply]
Hello :) My name's Corli and I'm working on this project and want to share a bit more context on how we've tried to solve this very tricky problem of "which articles get which version of Baby Globe?"
We also concluded that trying to make a "disable" list would be extremely difficult (if not impossible!) to do, especially because what we're building needs to work across all language Wikipedias. So we created a version of Baby Globe that appears "neutral": they don't do much; they stand around, blink, and look cute. This version can then show up on pretty much any page without implying anything particularly positive or negative.
By comparison, the versions that are overtly celebratory (like the confetti one) can be configured to show up on overtly celebratory things (people also turning 25 this year, all “cakes”). This is done mostly using Wikidata items; you can see a first version of this here, which we will be updating with all the versions of Baby Globe later this week.
The configuration setup allows Baby Globe to be configured by each opted-in language edition to be blocked on specific pages defined in community configuration (e.g. defining specific pages where no instance of Baby Globe will appear, not even in its default neutral state). So you can very easily override and adjust this default setup.
So, if I understand correctly, a default version for all articles and a special one for celebration-related pages? That sounds like a much better idea! Chaotic Enby (talk · contribs) 16:11, 19 January 2026 (UTC)[reply]
The longer story is that there are 14 different versions arranged along a spectrum of sorts from very neutral to outright celebratory. You can get a sense for them in the table, which this week we'll update with the actual gifs so you can properly see the vibe :) These can all be individually turned on / off and customised to appear or not appear on specific articles. CDekock-WMF (talk) 16:21, 19 January 2026 (UTC)[reply]
I want to explicitly mention that I think there is a sentiment among editors that Wikipedia is not censored outside of NOTCENSORED, so I avoided saying "NOTCENSORED" in my original comment. LightNightLights (talk • contribs) 14:19, 19 January 2026 (UTC)[reply]
Option 1 – I was hesitating due to the concerns that Fram raised, but now that @CDekock-WMF has shared the table of different page queries and configurations, I feel reassured that a lot of thought has gone into context-aware presentation of the mascot. ClaudineChionh (she/her · talk · email · global) 22:45, 19 January 2026 (UTC)[reply]
Option 1 - Like Fram, I had some concerns about showing a celebratory globe on pages for horrific actions, but I am confident in the solution provided by the Foundation for that. So, yeah, after 25 years I think we can celebrate for a bit. LightlySeared (talk) 12:21, 20 January 2026 (UTC)[reply]
Discussion
Looking at the documentation, it looks like it will be pretty accessible (pop-up) on mobile, while V22 desktop users will have it in their configuration panel – not exactly hidden, but not the most accessible for new users unfamiliar with the interface.
Regarding distractions, we have several articles with GIFs near the top of the page. Ones you'd expect, like Chronophotography, but also ones you may not, like Swing bridge and Panenka. Those GIFs can't be stopped by the average reader – at least, I wouldn't know how – which is in stark contrast to the cute puzzle globe, which looks like it can be disabled with one or two well-advertised button presses. Toadspike[Talk]21:09, 18 January 2026 (UTC)[reply]
It reduces the number of requests at WP:PERM/PCR, which is important as admin numbers decline.
It makes it more attractive to use PCR protection rather than WP:semi-protection. That opens Wikipedia up more, and can attract more new editors.
Note that I do not propose a merger between the two perms. It is still useful to have PCR as a separate user right for newer editors. —Femke 🐦 (talk) 13:00, 18 January 2026 (UTC)[reply]
Can we see some data about what usually gets assigned first? I would much sooner grant Rollback than PCR. The way I see it, rollback requires the ability to identify common vandalism and page histories; PCR requires a much more nuanced understanding of policies such as verifiability and BLP. -- zzuuzz(talk)13:16, 18 January 2026 (UTC)[reply]
I don't think that's how it typically works. Both tools are only supposed to be used to revert obvious issues, so generally V doesn't come into play. Normally PCR is given first, in part because it largely consists of accepting benign edits, and also because RB gives access to automated tools like Huggle. Toadspike[Talk]13:39, 19 January 2026 (UTC)[reply]
Yup. Broadly speaking, all rollbackers and new page reviewers are qualified for PCR. I'm not sure it's worth explicitly codifying that though. Toadspike[Talk]13:40, 19 January 2026 (UTC)[reply]
Yes. Proposed merge discussions often end up languishing, I think in part because there's no good system for sorting proposed mergers by topic area and very few editors watch the process compared to AfD. Additionally, it seems that most merge discussions involve notability issues. voorts (talk/contributions) 18:24, 19 January 2026 (UTC)[reply]
Yes. Merge discussions lack a strong, centralized administrative task force and can tend to languish. Also, many AfDs close with consensus to merge, which is a confusing distinction to new editors. AfD should also be correspondingly renamed "Articles for Discussion" to match with other XfDs. Cremastra (talk· contribs) 18:28, 19 January 2026 (UTC)[reply]
I just read voorts' comment after posting mine, and it's funny that we say the exact same things about the merge process (down to "languishing") . Cremastra (talk· contribs) 18:29, 19 January 2026 (UTC)[reply]
No. Combining merge discussions with AfD will not solve the problem with merge discussions, because even when a merge is the consensus at an AfD, they still languish and take weeks or even months to enact. Merge discussions languish not because of the centralized discussion but because they require large amounts of content work, which is harder to do than simply voting, and require people who are familiar with the source material. Merge discussions are often not a notability/sourcing issue, and combining them with AfD will make many mergers that would make sense as a content decision instead be voted against based on notability, or vice versa. What would happen is we would have these merge AfDs, then no one would enact them for the same exact reason no one enacts them now when the conclusion of an AfD is merge. PARAKANYAA (talk) 18:40, 19 January 2026 (UTC)[reply]
Yes I have seen many mergers get only one or two replies and then last for months before finally being closed. AfD would encourage higher participation and give a quick binding result. Traumnovelle (talk) 18:59, 19 January 2026 (UTC)[reply]
The problem imo isn't that merge closures aren't binding (they are, as are AfD closures) or that they aren't closed quickly. It's mostly that there are not enough editors willing to actually perform the merger. How would this change fix this problem? FaviFake (talk) 19:42, 19 January 2026 (UTC)[reply]
There are discussions at WP:PAM going back to June 2025 that haven't been closed. This would fix the problem by getting discussions closed quicker so that editors can proceed with the merge and not forget about it for several months. voorts (talk/contributions) 19:44, 19 January 2026 (UTC)[reply]
getting discussions closed quicker so that editors can proceed with the merge — User:Voorts 19:44, 19 January 2026 (UTC)
To you both: that doesn't solve the issue! If there are only so many people willing to do only so many merges a week, closing discussions quicker wouldn't increase the number of these editors, or their willingness to merge pages. Anyone can already close most merge discussions; it's just that they don't want to perform the merge and so they don't even close it. FaviFake (talk) 19:50, 19 January 2026 (UTC)[reply]
it's just that they don't want to perform the merge and so they don't even close it. The closer of a discussion is not required to implement the close. That's not the rule for any other sort of content discussion. voorts (talk/contributions) 19:52, 19 January 2026 (UTC)[reply]
Sure, but that's not the point. There will still be the same number of editors willing to carry out merges even if discussions are closed incredibly quickly. FaviFake (talk) 19:55, 19 January 2026 (UTC)[reply]
Yes per voorts, merges barely function atm (if that) thanks to a couple dedicated editors, and they usually get little participation and last for many months if not years. Would also support renaming AfD "Articles for Discussion", but I guess that can be discussed later. Agree w those in Discussion that WP:PAGEDECIDE should be central to how we decide on standalone articles. Informal merge discussions should still be allowed though, but that would only make a local consensus Kowal2701 (talk) 19:10, 19 January 2026 (UTC)[reply]
No Largely, per Parakanyaa. It's true that there is a problem that needs solving, but merging PAM into AfD is not going to solve it. What PAM needs is more structure, greater visibility and more timely closures. I suspect that a much better course of action would be model it after WP:RM and either merging article splitting discussions into the same new structure or doing the same for that independently. Thryduulf (talk) 19:11, 19 January 2026 (UTC)[reply]
Yes Merge discussions can die out and attract limited participation, so this could absolutely help. However, be aware that TAs can't make merge nominations if this comes to pass, so I think we need to make a daily log and let nominations go from there. ~2026-41476-9 (talk) 19:12, 19 January 2026 (UTC)[reply]
Right but that process is complicated and leads to some AFDs never being made. If merges and AFD are to be in one venue, a more efficient solution ought to be established. One possible one I can think of is drafting a deletion discussion and pushing it through WP:AFC. ~2026-41476-9 (talk) 19:26, 19 January 2026 (UTC)[reply]
It would immediately make the nomination process much harder to do without scripts. That's a relevant argument regarding the question presented here, but changing the process of how TAs create AfDs is not within the scope of this RfC, which is why I suggested that this TA raise the issue at WT:AFD. In any event, it seems the process of TAs requesting AFDs be created at WT:AFD works fairly well, so I'm not sure any change is necessary. voorts (talk/contributions) 20:02, 19 January 2026 (UTC)[reply]
Aren't you advocating for a change though? Or do you think we should allow TAs to create new AfDs, but only when they involve mergers? I've seen many IPs and TAs propose useful mergers. FaviFake (talk) 20:06, 19 January 2026 (UTC)[reply]
The proposed change of this RfC is to merge PAM with AfD. How TAs create merge discussions is one of many things that implementation of this proposal would affect. I am not opining on whether TAs should or should not be allowed to create AfD subpages because that question is not within the scope of the question presented by this RfC. TAs may already request that new AfDs be created by requesting assistance at WT:AFD. If this proposal is implemented, TAs could request an AfD with a proposed outcome of merge at WT:AFD just as they do for deletion now. If editors don't like that process, anyone should feel free to propose a change right now at WT:AFD. voorts (talk/contributions) 20:14, 19 January 2026 (UTC)[reply]
Yes: Merge discussions do not have a refined system to monitor or manage them, and are often overlooked with no formal process to keep track of them. Many tend to notify selective WikiProjects that are either inactive or do not elicit responses, so providing direct access and attention to the wider community would work wonders in the long run. I also agree that AfD should be renamed "Articles for Discussion", because merging usually ends up being a sufficient alternative to deletion. There should also probably be a requirement on the proposer to enact the merge, based on the consensus, should there be one, to ensure it is actually fulfilled. — Trailblazer101🔥 (discuss · contribs)19:18, 19 January 2026 (UTC)[reply]
Yes, as it's already been officially established in 2021 that if an article is blanked and redirected (WP:BLAR), then AfD is the permitted venue alongside the article's talk page. Merge discussions often take months to resolve, but AfDs generally get closed/relisted after 7 days and generally do not last longer than a month. It's also because someone will notify the relevant WikiProjects when an AfD is started, but not a page merge. JuniperChill (talk) 19:33, 19 January 2026 (UTC)[reply]
No/Oppose - I have long thought that AFD should be decentralized out of project space and should take place on an article's talk page. This better alerts those who might be interested in the discussion, and makes updating/editing/adding references all the easier. And those watching the discussion will also automatically be watching the page as well. This all would provide more opportunities for engagement, and more opportunities for encyclopedia development, while reducing the optics of AFD being an example of "Wikipedia the game", or "Wikipedia the battleground". While having central project-space pages for discussing project features like templates, redirects, and categories makes sense, article talk pages tend to be quite different places. Thanks to the various infrastructure supported by bots, this should be a fairly simple change. It's done for WP:RM and WP:RFC; there's no reason AFD should not be done the same way. All of this - and more - is why I oppose adding more to AFD. - jc3719:36, 19 January 2026 (UTC)[reply]
What? No! How's that going to improve anything? The only thing that can come out of merging WP:PAM and WP:AFD is that AfDs will remain open (or closed and left in the backlog) for months because nobody will commit to merging the pages. The status quo for merge proposals is ridiculously bad, inefficient, complicated, and not visible enough, but this seems like a terrible way to try to fix it. FaviFake (talk) 19:36, 19 January 2026 (UTC)[reply]
Yes One process, one place makes it easier for all users, especially new users. I argued for this in the past and still believe it should happen. As a lot of AfDs end up as merges anyway, it makes sense. Davidstewartharvey (talk) 20:26, 19 January 2026 (UTC)[reply]
Yes, and change it to "Articles for discussion" in line with "redirects for discussion" and "categories for discussion" etc. This emphasizes that there are multiple potential outcomes aside from delete/not delete. BOZ (talk) 20:37, 19 January 2026 (UTC)[reply]
No In practice, editors seeking a merge can still attempt to achieve it via AfD, but they need to take the risk, which is absent in merge discussions, that the content may be deleted altogether. The proposal, if implemented, will just limit their options. It is also unclear how it should be implemented, because so far it looks like the proposal is to get rid of merge discussions without substantively changing AfD ones. In this case, I would say that it is better to attempt a reform of WP:PM instead of its outright abolition. Even further, in my opinion, a merge outcome is inappropriate for AfD, as the discussion usually doesn't involve editors watching the proposed merge target, who may oppose dumping additional content into the respective article. Kelob2678 (talk) 20:42, 19 January 2026 (UTC)[reply]
Regarding your last point, it would be easy enough to require notification to the talk page of the merge target and update the AfD scripts to automatically do that. voorts (talk/contributions) 20:52, 19 January 2026 (UTC)[reply]
Absolutely and it's about time We've long gone past boolean keep/delete, where now a sizeable number of discussions end up with a merge or redirect outcome. Putting all this thinking together in one place is really moving beyond the inclusionism/deletionism wars of the 2010s, and really asking one complex question in one place: Should this content be on Wikipedia, and, if so, how best to present it? Some deletion decisions are classic yes/no, but the ones that attract the most discussion are where there is a disconnect between how things are now, and how they ought to be. Jclemens (talk) 21:25, 19 January 2026 (UTC)[reply]
Yes, and I agree with BOZ's suggestion of changing the name of the process to "Articles for Discussion," as there are many times the nominator proposes something other than deletion, and many more times that the result is something other than the deletion that was originally requested. — Jkudlick ⚓ (talk)21:39, 19 January 2026 (UTC)[reply]
My default position is no/oppose. But: What would that look like? The system in question for merges is:
Start a merge discussion on the article's talk page (a couple of clicks in WP:TW, plus type the name of the other article and a rationale).
The discussion proceeds on the Talk: page, exactly as if Step #2 didn't exist.
What I don't understand from this proposal is whether it means what it links – Should the Wikipedia:Proposed article mergers [which it calls "the article merge process"] be merged into Wikipedia:Articles for deletion? – or if it means "Should the [actual] article merge process [which is WP:MERGE, not WP:PAM ] be merged into Wikipedia:Articles for deletion?" If the first, and nothing else changes, then I'd suggest chatting up the WP:PAM regulars and see what they want. If the second, then I oppose it because I don't think that mashing the merge system into AFD is as good as what we do now, in terms of producing the correct result.
I don't think that merge proposals should be run on a default seven-day timer, or even seven days plus a relisting or two. A merge proposal is not an emergency. It's okay if it takes a few months (just like any other discussion about an article is okay if it takes a few months). Speed is not important for merge proposals.
I also don't think that AFD, with its focus on sourcing and notability, is the right mental approach to merge proposals. Merging sometimes depends on sourcing, but it often depends on how editors want to write about the subject(s): Do we want to fold the author and the book together in one article, or write about them separately? Do we want to write about the original company and its successor together or separately? Will the article on widgets become too long if we merge in a few stubs about widget manufacturers? This is not the kind of thinking that happens at AFD (which is: First article in list: Search for sources, found some, vote keep. Second article in list: Search for sources, didn't find any, vote delete. Third article in list: Search for sources...). Merging is a different type of thinking, so it should be a separate process. WhatamIdoing (talk) 21:40, 19 January 2026 (UTC)[reply]
It's okay if it takes a few months (just like any other discussion about an article is okay if it takes a few months). Merge proposals only take a few months because very few editors comment in merge discussions. voorts (talk/contributions) 21:45, 19 January 2026 (UTC)[reply]
Merge proposals only take a few months because very few editors comment in merge discussions Tbh, that's not what I'm seeing at all. When I land on an open merge discussion, 90% or 95% of the time there's obvious consensus to merge, even if from a few editors, which is more than enough. The problem, which I keep repeating but which you haven't really addressed here, is that we need more editors carrying out mergers, not discussing them. Sometimes most of the discussion is about who's gonna do it rather than if it should be done. FaviFake (talk) 22:14, 19 January 2026 (UTC)[reply]
I think GLL's data shows that, given the volume of merge outcomes at AfD, the fact that the backlog isn't bigger than it is is an indication that merges regularly get done. voorts (talk/contributions) 23:44, 19 January 2026 (UTC)[reply]
The proposal is to make merging discussions occur at AfD, to answer your question. I thought that was pretty clear from the context of this discussion, notwithstanding the page linked in the RfC question. voorts (talk/contributions) 21:59, 19 January 2026 (UTC)[reply]
You're not supposed to open an AfD discussion if your proposal is to merge. Merging is an outcome that can occur at AfD right now, not the purpose of AfD. I think this proposal is fairly clear: it would mean that merge discussions happen at AfD instead of via proposed merge discussions. voorts (talk/contributions) 21:52, 19 January 2026 (UTC)[reply]
I don't think this is that proposal, as requested moves is an entirely different beast that will remain untouched. You do have a point that we might have to figure out a new name though. Aaron Liu (talk) 00:28, 20 January 2026 (UTC)[reply]
AfD should have fewer merges, rather than more. There are certain editors that believe that administrators should avoid closing AfD discussions as merge as much as possible (and I tend to agree). Katzrockso (talk) 22:08, 19 January 2026 (UTC)[reply]
Yes, absolutely. It is one of those things I have agreed with for a long time but would not propose myself. Merge discussions almost never get traction and therefore almost never get closed, and while this is for a variety of reasons, this is an issue for anything that does not require a bold merge (arguably most obvious mergers do anyways). While I agree that the speed of execution of a merger is not necessarily going to be improved, a merge outcome would not need to be speedily implemented once established and the article clearly marked as such. Aligning the name of AfD with the other XfDs into Articles for Discussion would also be an improvement, both in terms of accuracy regarding what the process actually is, and taking some of the negative charge away from it. Putting complex merge discussions within it would help cement the idea that the goal is not to either get rid of vs save an article, but to determine what is the best course of action for the current and potential contents of the article. Choucas0 🐦⬛22:03, 19 January 2026 (UTC)[reply]
Yes, as somebody who believes that, when in doubt, our PAGs should be descriptivist rather than prescriptivist. As any AfD regular knows, this is where merges already happen, for a variety of reasons - these are some very rough estimates, but as of writing, there are about 240 articles in Category:Articles proposed for merging from January 2026. Given that both the proposed merge target and the article being debated should be listed in said category, that's about 120 articles being discussed this month. (Merge discussions move slowly, so I'm assuming a few have already been closed, but not that many). Conversely, about 100 AfDs have already ended in a merge result from January alone.[6] Over 300 have ended in redirect, which is an AfD option that is explicitly designed to allow for selective merges, should somebody find anything they're particularly fond of in the article history, and implies that the editors discussed the merge at the AfD[7]. (I can't think of a clean way to get the numbers, as my brain is currently being wimpy and tired from some fun medication side effect interactions), but we all know that, despite my undercount in the number of January PAM discussions, there's also going to be a lot of AfDs that ended in delete or keep where a merge was discussed and decided against/no-consensused. But putting both those aside for a minute: 120 merge discussions at PAM versus over 400 merge/de facto merge discussions at AfD. At a certain point, it doesn't matter whether or not you or I think AfD is the best place for these discussions, or whether you even want merge discussions to take place there - they already and overwhelmingly take place there. (You just can't say that's what you'd like to happen, as a nom, because if you say in your nom that you think the best outcome would be a merge or draftification, then anybody can technically come along and speedily close the nom under WP:SK1. 
I also think we should remove all stupid insider baseball rules, and this is very obviously one) GreenLipstickLesbian💌🧸 23:02, 19 January 2026 (UTC)[reply]
What if we enforced SK1 on all unambiguous merge proposals that are made at AFD?
I don't care if they're intended by the nom or not; when even one participant suggests a merge, it becomes an intended merge discussion.
I can see why enforcing SK1 at AfD for unambiguous merge discussions seems an attractive way of redirecting merge discussions back to PAM - however, the actual humans involved already have a solution to this: make their noms ambiguous. I mean, I'm not the most skilled at writing, but even I could (if I wanted to) make an AfD merge nom that couldn't be closed as SK1. GreenLipstickLesbian💌🧸 00:06, 20 January 2026 (UTC)[reply]
I don't think we can hold the nom responsible for participants who !vote to merge, when the nom is advocating for deletion. The nom has to make a choice of venue, and the choice of venue should be determined by the nom's goal (i.e., noms shouldn't use AFD to propose a merge, and they equally shouldn't use {{merge}} to propose deletion). But just because people (including me) respond to that deletion attempt by saying something other than "delete" doesn't mean that the deletion attempt becomes "an intended merge discussion" or "an intended keep discussion" or whatever. WhatamIdoing (talk) 02:56, 20 January 2026 (UTC)[reply]
I'm not arguing that I should hold the nom responsible; I'm arguing that when people show up to an AFD and suggest a merge, they would, typically, like the other participants to consider and debate (or, more likely, agree with) said merge; hence they have intended it to become a merge discussion.
Think of my comment above this way: you, WAID, are our wonderful Wiki campus planner. You've put hours and hours into designing the buildings, how they connect to each other, and the paths people should follow to get from building to building. Everything is optimized, you did surveys, you consulted landscapers, and the resulting paths are beautiful, efficient, and really quite wonderful!
Then you come back five years later and discover a desire path cutting right across one of the lawns.
What do you do?
For starters, you could put up large signs telling people to use the correct path (that's our AfD instructions). That doesn't work.
The next option is to tell the campus security guards to make people walk on the correct path; that cuts off a certain number of the trespassers (especially the new and rule-following), but most of the people just learn to cut across the lawn when the security guard isn't around, or is too far away to stop them.
You could even try making the intended path nicer, and send out surveys: but then everybody responds that no, they use their own path because it's shorter, more convenient, all their friends use this path too so they need to if they want to talk with them, and you can't do much about that!
You could continue that cycle: putting up higher fences to stop people from cutting (fine until they knock them down or climb over, or it snows and the fences get obscured), impose harsher fines, tell the guards to detain people who look as though they're so much as going to think about cutting across the lawn.
Or you could give in, give the new path a better surface to prevent ground erosion, give it some lighting so it's safer, and accept that, when somewhere between 50 and 80% of your people are already going down this unauthorized path, this is their preferred route. And, for better or worse, AfD appears to be the preferred route for people wishing to merge articles. GreenLipstickLesbian💌🧸 03:32, 20 January 2026 (UTC)[reply]
Yes - Trailblazer makes a salient point that even if you notify a WikiProject of a merge discussion, a lot of topics just don't have a high level engagement versus the AfD listing process which seems to pull in a lot of editors outside of the regulars of a topic area. I also like "Articles for Discussion" because merge and/or draftify is often an AfD result, it feels less negative and we wouldn't have to change the acronym. It should probably be a suggested best practice rather than a requirement for the proposer to do the merge. Sariel Xilo (talk) 23:05, 19 January 2026 (UTC)[reply]
support merging. Both of these processes are asking editors the same question: "should this topic have a standalone article?" Thus it makes sense that they be addressed in the same forum. CapitalSasha ~ talk23:06, 19 January 2026 (UTC)[reply]
@CapitalSasha, I don't think I can agree. I think AFD is answering the question "Is this topic allowed to have a standalone article?" and MERGE is answering the question of "Even though this topic is allowed to have a standalone article, is that really how we want to handle it?" WhatamIdoing (talk) 23:40, 19 January 2026 (UTC)[reply]
Yes, and combining the processes would mean that in addition to asking the question "Is this topic allowed to have a standalone article?", editors would be allowed to ask the question "Even though this topic is allowed to have a standalone article, is that really how we want to handle it?" As other editors have noted above, merge is a common outcome at AfD. This idea that AfD editors don't know how to suggest merges when it's warranted is incorrect. voorts (talk/contributions) 23:43, 19 January 2026 (UTC)[reply]
I guess I don't see the distinction between "being allowed to have a standalone article" and "having a standalone article." I think of these processes as editorial processes, not governance processes, so to me ultimately it's about whether the topic ultimately ends up with its own article. CapitalSasha ~ talk00:24, 20 January 2026 (UTC)[reply]
Yes: This is supposed to solve the problem of the merge discussions themselves not receiving their due attention. The applicable rationales are similar enough (I'd say one-to-one, even) that these should be merged. Merges that have consensus staying in holding cells is a separate problem that this proposal would not exacerbate (or, at least, merges being closed within 7 days under this process is preferable to them just remaining under unclear consensus statuses), and being in a holding cell does not stop the discussion from being closed (cc @FaviFake), which again is the only problem this tries to solve. Aaron Liu (talk) 00:21, 20 January 2026 (UTC)[reply]
Question: is the problem 1) that too many merge proposals are not being closed … or is the problem 2) that (once closed) the actual merging is not being implemented? Blueboar (talk) 00:24, 20 January 2026 (UTC)[reply]
I agree with Aaron, based on past experience with AFDs that closed as merge. Typically, the nom won't do the merge because that's not the outcome they wanted, and the other participants were just drive-by voters who wanted to give advice but not do the work. WhatamIdoing (talk) 02:59, 20 January 2026 (UTC)[reply]
Yes, the perceived binary nature of old AfD has always been overly artificial. It should go more forthrightly to, 'do we think, in our editorial judgement, there should be a separate page on this', same with merge. Consensus also works better with more options for agreement. The more eyes on 'what to do with this information' can only help the pedia be better. Alanscottwalker (talk) 01:04, 20 January 2026 (UTC)[reply]
No, because the resulting changes would not benefit the project as they place unnecessary barriers on merging articles. In the old system/status quo: You think an article should be merged. You post on the talk page. Two people agree with you that it's a good idea. You perform the merge within a couple of hours of requesting it. In the new system: You think an article should be merged. You must open an AfD to propose merging and said AfD discussion must stay open at least seven days or until consensus has been reached, whichever comes first. The reason this is a problem is that this will enact barriers to merging (opening a AfD) that are not justified. The barriers surrounding deletion (strict speedy deletion criteria, seven-day time periods, etc.) make sense, because deletion can only be performed by and reversed by admins, but placing similar barriers around merging is inappropriate, because anyone, even TAs, can merge a page into another as long as they can edit the pages involved, and a merge can easily be reversed by restoring old page histories. Merging is unlike deletion, which has somewhat strictly defined criteria (such as lack of notability), because the reasoning for a merge is scarcely more complicated than "I think these pages cover similar topics, so we should merge them." There is no such thing as Wikipedia:Speedy merging because anyone can merge if they feel it is appropriate. SuperPianoMan9167 (talk) 01:36, 20 January 2026 (UTC)[reply]
You think an article should be merged. You must open an AfD to propose merging and said AfD discussion must stay open at least seven days or until consensus has been reached, whichever comes first. This is incorrect. You don't even need to open a merge discussion in the status quo. You can just be bold and do the merge yourself. voorts (talk/contributions) 02:07, 20 January 2026 (UTC)[reply]
Also, none of this would preclude starting a talk page discussion with other editors or asking a Wikiproject if they think a merge is a good idea before you open an AfD. voorts (talk/contributions) 02:09, 20 January 2026 (UTC)[reply]
Making a bold merge does not preclude you from asking other editors to weigh in. There's no requirement to ask those questions using the proposed merge tags and WP:PAM, just like there would be no requirement to do so at AfD if the process changes. voorts (talk/contributions) 02:14, 20 January 2026 (UTC)[reply]
For example, you create an article, I review it at NPP, I think it ought to be merged, I drop you a note on your talk page, you respond and agree, and I merge it. No AfD involved. Another example: I find an article, think it's of marginal notability and intend to boldly merge it, ask someone at a relevant WikiProject for a sanity check, they agree, and I go ahead with the proposed merge. Again, no AfD involved. voorts (talk/contributions) 02:16, 20 January 2026 (UTC)[reply]
Under the current system, if someone disagreed with either of my bold merges, they'd have to revert and take it to PAM, where it might sit for months. Under this proposal, it would go to the combined AfD and hopefully be resolved much quicker (and with much more input from other editors). voorts (talk/contributions) 02:16, 20 January 2026 (UTC)[reply]
I'd be much more likely to carry out a merge if it's closed after one week than if it's closed seven months later, after I've moved on to editing other things. voorts (talk/contributions) 02:21, 20 January 2026 (UTC)[reply]
Nothing in the RfC question speaks to getting rid of bold merges. If I wanted to eliminate bold merging (which would be an odd thing to do), I would've asked that question. voorts (talk/contributions) 03:26, 20 January 2026 (UTC)[reply]
The problem is proposed merges never get two other responses within a few hours unless you ping them or it's affected by a current event. Aaron Liu (talk) 03:13, 20 January 2026 (UTC)[reply]
You perform the merge within a couple of hours of requesting it. No, the usual process is: you keep the discussion open for at least seven days, then the nom or another editor closes it and performs the merge. Or you boldly do it without tags or talk page discussions. FaviFake (talk) 16:49, 20 January 2026 (UTC)[reply]
Yes, with the understanding that for trivial merges, try WP:BOLD and only if there's pushback then revert and AFD away; and for minor merges, proposed text at the target article can potentially be implemented before the AFD is closed given that the information should nominally belong there. For larger merges, the (stated) expectation should be that the AFD is consensus/authorisation to go ahead, that there may need to be further discussion at the target article on how things are merged (perhaps with WP:3O input), and that the nominator would perform a sizable proportion of the work. Repeated merge nominations without actually putting in the work to complete merges should be considered disruptive. Merges are already discussed at AFD as WP:ATD (and something like 5% of my !votes have been for them). ~Hydronium~Hydroxide~(Talk)~ 01:57, 20 January 2026 (UTC)[reply]
Weak opposition - I think the PAM system is useful for quick (non-controversial) mergers, and since we can already !vote to “keep but merge” at AFD, I don’t think the proposal is needed. Note - we currently say that AFD isn’t for “article clean up”… and shifting it to “Articles for discussion” might change that. Not sure if that would be good or bad. Blueboar (talk) 03:17, 20 January 2026 (UTC)[reply]
Quick (non-controversial) mergers shouldn't be put through PAM in the first place per WP:NOTBURO. The first sentence of WP:MERGEPROP says: "If the need for a merge is obvious, editors are encouraged to be bold and simply do it themselves." I disagree that shifting it to “Articles for discussion” might change what AfD becomes. All it would do is add merging to the potential nomination outcomes along with delete, redirect, and draftification. voorts (talk/contributions) 03:30, 20 January 2026 (UTC)[reply]
Yes. Merge is the reasonable outcome for many AfDs, and merge discussions are often buried away with no urgency to resolve quickly, unlike AfD. Joining them will speed up the process for the merge and allow for more participation from various editors, creating stronger consensus as it reaches a broader audience of editors. ✠ SunDawn ✠Contact me! 03:45, 20 January 2026 (UTC)[reply]
No. I share the same concerns about how moving merging to AfD would not address the backlog issue and would add barriers to merging, but another reason is that I see merging as being complementary to splitting (something that I haven't seen brought up here in the !votes yet). If this proposal goes through, then the process of splitting would remain unchanged while merging would now have to be done at AfD (assuming it is not done boldly). That doesn't sound like a natural approach to me for handling the question of whether to have a subject be broken up into multiple articles, or to have them all centralized in one article. As splitting would become comparatively cheaper, it could theoretically become the easier answer to said question, and it thus would be harder to undo a split if a merge has to be done through AfD (assuming a bold merge was reverted). I could potentially see myself shifting to a yes if:
Both merges and splits were given to AfD,
AfD was renamed to Articles for Discussion to describe its new responsibility of handling merges and splits,
A grace period of say 3 days before !votes can be cast, in order to give participants time to discuss the article and form opinions on the best course of action, analogous to WP:GAR and WP:FAR. This is to prevent a knee-jerk reaction of keep/delete !votes that might not be the right fit for the article, and to get editors invested so that they might follow through on a merge or split if that was the result.
I admit that I don't participate in AfD (having only done so once, I believe), so it's possible that my proposals here might be flawed to regular participants. Gramix13 (talk) 04:34, 20 January 2026 (UTC)[reply]
No. The problematic part of a merge discussion is actually doing the merge. Getting the consensus done in a more rushed way with a shorter time limit is not going to help with that and is not going to result in better-reasoned outcomes. —David Eppstein (talk) 06:21, 20 January 2026 (UTC)[reply]
I don't actually agree that the problematic part is doing the merge - it's just a bit tedious. The problematic part is reaching a quorum.
Take Talk:Abbey Crunch#Merge Proposal, which I opened last April after a no consensus AfD leaning support for a merge. Two months of radio silence, a bold merge, the article creator undid me, refused to say why, until finally somebody else came along and read a consensus to merge a month later... based only on the AfD discussion. 5 months, no new participants. GreenLipstickLesbian💌🧸 16:25, 20 January 2026 (UTC)[reply]
No – Merge discussions are fundamentally different from AfDs. This proposal might reduce the number of overlapping processes, but I don't see how it will actually facilitate more efficient merging. I am also concerned that adoption of this proposal would raise the procedural hurdles for merges, which are quite different in character from deletion (i.e. unlike deletion, they don't render any content invisible to lay editors like myself). Yours, &c.RGloucester — ☎09:30, 20 January 2026 (UTC)[reply]
Yes - Frankly the number of mergers you see happening as a result of AFD discussions is way more than are handled through this system. FOARP (talk) 12:56, 20 January 2026 (UTC)[reply]
No These serve two completely different functions. Merging is not deletion. Even though an AfD can end in a merge result, a merge request should never end up in deletion. Deletion is a very serious matter and the two should not be conflated at all. SportingFlyerT·C15:55, 20 January 2026 (UTC)[reply]
Plus this won't solve the problem that it's trying to solve. I've just discovered Wikipedia:Proposed article mergers. It's a mess. A reform to the merger system would be good, but merging it with AfD will be even more of a mess. I agree we need more users to actually perform the mergers, and that needs to be better advertised... SportingFlyerT·C15:59, 20 January 2026 (UTC)[reply]
Discussion
@PARAKANYAA: post-merge discussion languishing is a distinct issue from merge discussions themselves languishing. Merge discussions get very little participation compared to AfD discussions. Sometimes they sit for months with just the nominator's statement and nobody even closes it to show that the articles are ready for merging. At least with AfD the discussion would get closed and added to the proper "being merged" category, where editors can work on chipping away that backlog. voorts (talk/contributions) 18:43, 19 January 2026 (UTC)[reply]
PAM just doesn't have the same easy workflow of AfD. This would get more eyes on articles and encourage faster closes so that the merge can proceed. voorts (talk/contributions) 18:45, 19 January 2026 (UTC)[reply]
This would get more eyes on articles and encourage faster closes so that the merge can proceed
Maybe a stupid question, but can't we just… have more eyes and hands on with WP:PAM? We do have an absurd backlog with both proposed and in progress mergers (1000+ pages combined), but will moving it around really relieve that weight? We need more willing participants as much as we need visibility. ~ oklopfer (💬) 07:16, 20 January 2026 (UTC)[reply]
Yes, this is natural, because merges are far more tailored and related to the specific details of an article than a binary yes/no test of should we have an article, which is the typical fare with AfD. With a merge discussion you have to discuss the content and where it should be merged, which even now when voted on at AfD are often not thought through whatsoever (which is why they hoist it off on whoever has to perform it months later). In AfD discussions you have a WEEK, which is often not enough to hash out merges. WP:PAGEDECIDE is completely independent of notability and I fear this would neglect it one way or the other, either preventing justified mergers because the topic is notable, or keeping non-notable topics because of it, by blending the AfD and merge processes.
I'm sure they have but there are plenty of ones I have worked with where there is substantial topic overlap and the topic is 100% notable but best covered somewhere else. Maybe if we turned it into a whole other kind of thing this would be acceptable, but expanding deletion discussions to have as part of its focus merges without a plan will be a nightmare. PARAKANYAA (talk) 18:51, 19 January 2026 (UTC)[reply]
It wouldn't be expanding deletion discussions; the merge would mean that the two processes are combined, so now you'd have both types of discussions at one venue. Maybe that would require extending the length of discussions where the proposal is merge. It may also require some tweaks to the wording at WP:AFD. I think if there's consensus for this change, those details could be worked out. But I don't think it makes sense to overhaul the process up front if there's no consensus for it. voorts (talk/contributions) 18:53, 19 January 2026 (UTC)[reply]
That would be expanding deletion discussions, it is expanding the content of what we cover at articles for deletion to non-notability mergers. But maybe that's quibbling.
The status quo is better than having an uncertain result where in theory it could be somewhat better, but just as easily could make merges far more annoying, forever. I believe most of the ways this could go are worse than the status quo. PARAKANYAA (talk) 18:58, 19 January 2026 (UTC)[reply]
Overall, this is something that I could support (apparently I even started that RFCBEFORE). But we would need a solid plan. My main concern is that WP:PAGEDECIDE is a big part of mergers. It's not given much weight at AfD where bare notability is considered not only enough to keep, but to necessitate a separate article, which kind of defeats the purpose of a merge discussion. Thebiguglyalien (talk) 🛸 18:46, 19 January 2026 (UTC)[reply]
If this change is approved, it would be easy to tweak the AfD guidelines. I've also noticed that we're disentangling notability and PAGEDECIDE a bit as a community now, which is a good thing. See the recent change to WP:NSONG, for example. voorts (talk/contributions) 18:51, 19 January 2026 (UTC)[reply]
That's because many editors (in my opinion rightfully) see merge discussions as out of the scope of AfD and as a regular part of editing, rather than the correct outcome of an AfD discussion. Katzrockso (talk) 22:09, 19 January 2026 (UTC)[reply]
see merge discussions as out of the scope of AfD People are allowed to have their preferences, but that's just not true under our current deletion policy. See WP:ATD-M. Merging is and has been an option on the table at AfD for a very long time. voorts (talk/contributions) 22:12, 19 January 2026 (UTC)[reply]
Yes, precisely. This discussion is a good example of an uncomplicated merge discussion, albeit one with an incompetent close by a new editor. (Whoops, that was me.) The core issue there is WP:PAGEDECIDE, which can be looked at as either a "notability" issue or a "coverage/content" issue, when in fact notability issues aren't content issues. Cremastra (talk· contribs) 18:51, 19 January 2026 (UTC)[reply]
I honestly didn't think that far ahead. Perhaps if it is a simple merge, the closer or any participants can merge as usual. If it is more complicated, then there could be a timeframe or a reminder set for the proposer to complete the merge? Not sure how that could be set up, though. I know merging can be a daunting task for some, though it could be a collective effort if some participants are willing to. — Trailblazer101🔥 (discuss · contribs) 19:31, 19 January 2026 (UTC)[reply]
I have long thought that AFD should be decentralized out of project space and should take place on an article's talk page. This better alerts those who might be interested in the discussion .... @jc37: how does having it on the talk page alert editors better than a big old template on the top of the article that links directly to the AFD discussion? Also, having it on the article talk page would mean the entire discussion gets deleted if the article is deleted, leaving no record of why the article was deleted. voorts (talk/contributions) 19:41, 19 January 2026 (UTC)[reply]
I didn't say remove the template, nor any of the rest of the notification infrastructure. All that needs modifying is the target. Instead of targeting a subpage of AFD, we target the article talk page. qed.
And we'd just stop deleting talkpages of deleted pages. This is a place we're often shooting ourselves in the foot. Removing from view (all that "deletion" does) previously talked about issues with an article causes us to continually re-discuss/invent the wheel, and is a huge waste of volunteer time. There's no reason to arbitrarily delete article talk pages. If there's a good reason, like vandalism or outing, or whatever, sure, but even those can be handled without full deletion these days. - jc37 19:54, 19 January 2026 (UTC)[reply]
If the idea is to keep all the infrastructure and move everything to the talk page, I don't think this proposal conflicts with yours. All it would do is combine the two processes of PAM and AfD so that the former now has the same sorting, notification, and closure infrastructure as the latter. This would make PAM discussions more visible, get more editors involved in them, and ensure discussions are closed closer in time to their opening. voorts (talk/contributions) 20:19, 19 January 2026 (UTC)[reply]
My (strong) opposition is concerning merging to that venue. The process is immaterial if we don't get past that. Though, I think it's being shown that AFD doesn't handle well what merging it already handles. I don't think we should add more to that failing/failed process. I think there are ways to improve how we merge (and split) on Wikipedia. Feel free to ping me if someone starts a brainstorming discussion about this elsewhere. - jc3720:56, 19 January 2026 (UTC)[reply]
@FaviFake: RE the backlog issue, I don't think this would make the backlog any worse. It would just shift it from one holding area (WP:PAM / the various merge categories) to another one (the AFD closed as merge category, which could easily be divided into month subcats as well). If anything, it would make the backlog issue better by getting more eyes and more editors potentially interested in doing the merge. Also, given the large backlogs at PAM and AfDs needing merges, maybe we should organize a merge-a-thon to get the backlog down. voorts (talk/contributions) 20:21, 19 January 2026 (UTC)[reply]
If this proposal passes, the question remains whether and how we rename WP:Articles for deletion. It might be useful to get a head start on this matter now.
I say we could leave the ocean of nomination subpages unmoved and have WP:Articles for deletion redirect to the new page, to a section noting how AfD used to include only deletion but now includes mergers as well. (Under this, we could titleblacklist subpages under "WP:Articles for deletion/" if needed to prevent accidental creation by users and gadgets. Or just make another bot that automatically moves new creations?)
Of course, the prowess of our bots is not to be underestimated. We did recently remove all occurrences of {{pageviews}}. So whatever name we find we could also just rename all the subpages wholesale.
As for the actual name, I throw into the ring "Articles for disposition" to preserve our holy acronyms we just had an alphabet soup war over, and "Articles for deletion or merger" to keep it simple and understandable. Aaron Liu (talk) 00:47, 20 January 2026 (UTC)[reply]
I agree with "for Discussion". I am neutral to against renaming past deletion discussions because they will have been deletion discussions even if the process was later changed. Jclemens (talk) 05:19, 20 January 2026 (UTC)[reply]
Merges are slow to be done because sometimes they are one of the more pain-in-the-ass things to do because they often involve painstakingly removing duplicative content from both articles to be merged and finding the right spot in the target article for every line or item to be merged from the article to be merged. I think this is exactly the kind of task that would be the best use case for an AI on Wikipedia. Since it is just integrating two existing sets of information, the AI would not be writing any new material itself, and no hallucinations should be introduced (or, at least, anything new introduced should be very easy to spot). Perhaps we could set things up so that an AI creates a proposed merge subpage, and an admin can review and accept that or improve it from there. BD2412T01:07, 20 January 2026 (UTC)[reply]
Honestly I'd rather just have a more functional "articles to be merged" list on my Wikiproject Article alerts - or at least some way to sort to be merged articles by topic area. GreenLipstickLesbian💌🧸 01:21, 20 January 2026 (UTC)[reply]
Hell no. LLMs cannot be trusted to edit Wikipedia. They hallucinate. They invent imaginary sources. They invent imaginary Wikipedia policies. And they write like crap. Frankly, given all this, it would be bizarre to even contemplate using them in this context. If a merger needs doing, it needs doing properly. By someone with the competence to do so. AndyTheGrump (talk) 01:45, 20 January 2026 (UTC)[reply]
Editors (including admins) can already do this on their own, ask an AI and review it themselves. I don't think we should be pushing it further than that. Aaron Liu (talk) 04:08, 20 January 2026 (UTC)[reply]
I'd take a tool that dumped the code of one article into the bottom of the second one, enclosing it in hidden html tags. If this is done then the same tool could automatically do the very annoying merge tagging, using the diff it creates when dumping the code. That handles attribution and puts everything together, then the editor can focus purely on the actual merging of content. CMD (talk) 07:14, 20 January 2026 (UTC)[reply]
At Wikipedia:Administrators'_noticeboard#Unclear_ECR_violation, I got into a dispute about the scope of the ECR for Kurds and Kurdistan, concerning a draft about a valley in northeastern Iraq. The current wording of the ECR implies that an entire region of the globe is a restricted topic area, which is unfair because it includes topics in that region which are uncontroversial. By contrast, we rejected such a broad scope when enacting an ECR for Armenia and Azerbaijan (for "politics, ethnic relations, and conflicts" involving these countries), even though WP:GS/AA does include everything in those regions due to ethnonationalism getting randomly shoehorned into the topic area. –LaundryPizza03 (dc̄) 21:34, 19 January 2026 (UTC)[reply]
Yes, we 'rejected such a broad scope' for an entirely different topic area. Consistency is not required, and indeed in this case including "Kurdistan" was an explicit decision. The current wording doesn't 'imply' that that entire area is restricted; it's explicit about it. A lot of the disruption WP:GS/KURD aims to squelch includes drive-by vandalism of the - oft, verbatim - Kurdistan will never exist sort. The topics may be uncontroversial in and of themselves, but the editing on them is absolutely not; there is a lot of that sort of disruption, often on articles that, on a quick glance, would appear to be only tangentially related to 'Kurdistan' and not at all related to 'Kurds'. Regarding your comment in the original discussion, We don't enforce ECR against places in Israel simply because they're in Israel, no we don't, because Arbcom restricted PIA to 'the Arab-Israeli conflict'. KURD, on the other hand, is phrased as 'Kurds and Kurdistan' for a reason. - The BushrangerOne ping only 02:35, 20 January 2026 (UTC)[reply]