Requested edit filters

    This page can be used to request edit filters, or changes to existing filters. Edit filters are primarily used to address common patterns of harmful editing.

    Private filters should not be discussed in detail. If you wish to discuss creating an LTA filter, or changing an existing one, please instead email details to wikipedia-en-editfilters@lists.wikimedia.org.

    Otherwise, please add a new section at the bottom using the following format:

    == Brief description of filter ==
    *'''Task''': What is the filter supposed to do? To what pages and editors does it apply?
    *'''Reason''': Why is the filter needed?
    *'''Diffs''': Diffs of sample edits/cases. If the diffs are revdelled, consider emailing their contents to the mailing list.
    ~~~~
    

    Please note the following:

    • Edit filters are used primarily to prevent abuse. Contributors are not expected to have read all 200+ policies, guidelines and style pages before editing. Trivial formatting mistakes and edits that at first glance look fine but go against some obscure style guideline or arbitration ruling are not suitable candidates for an edit filter.
    • Filters are applied to all edits. Problematic changes that apply to a single page are likely not suitable for an edit filter. Page protection may be more appropriate in such cases.
    • Non-essential tasks or those that require access to complex criteria, especially information that the filter does not have access to, may be more appropriate for a bot task or external software.
    • To prevent the creation of pages with certain names, the title blacklist is usually a better way to handle the problem - see MediaWiki talk:Titleblacklist for details.
    • To prevent the addition of problematic external links, please make your request at the spam blacklist.
    • To prevent the registration of accounts with certain names, please make your request at the global title blacklist.
    • To prevent the registration of accounts with certain email addresses, please make your request at the email blacklist.


    Keyboard mashing filter?

    • Task: What is the filter supposed to do? To what pages and editors does it apply?

    The filter is intended to catch "keyboard spam" edits (things along the lines of "ajksljhgfhlasjaewzxcvo"). The way I believe this could be implemented is with a filter that catches strings of length 5 that contain only lowercase consonants (treating y as a vowel here); see the sketch below this request. For example, in the text given above, the substring "jkslj" would be caught and flagged. It should apply only to mainspace edits and only to IPs, to avoid usernames triggering the filter. An exception would be needed for links. I don't know the limits of what regex can do, so I don't know whether this is possible, and I'm worried about edits in other language scripts tripping it up.

    • Reason: Why is the filter needed?

    This is a relatively common pattern of vandalism; the diffs below were collected over a single, non-cherry-picked hour.

    • Diffs: Diffs of sample edits/cases. If the diffs are revdelled, consider emailing their contents to the mailing list.

    [1][2][3]

    Wildfireupdateman :) (talk) 17:50, 13 January 2025 (UTC)[reply]
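
    A minimal sketch of the requested condition in AbuseFilter syntax (untested; the consonant class, the IP-only check via user_groups, and the link exception are all assumptions based on the description above):

    page_namespace == 0 &
    !("user" in user_groups) &                       /* anonymous editors only */
    added_lines rlike "[bcdfghjklmnpqrstvwxz]{5}" &  /* run of 5 lowercase consonants; y treated as a vowel */
    !(added_lines irlike "https?://")                /* crude exception for links */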

    Have you given some thought to compounds such as Knightsbridge and Catchphrase, names like Goldschmidt and Norbert Pfretzschner, technical articles like HTML color names (white is #FFFFFF; see also the hex codes for the color names Blanched almond, Gainsboro, Lemon chiffon, Navajo white, Pale turquoise, and Snow), the parenthetical phrase in the first line of The Adventures of Mr. Nicholas Wisdom, and non-English content (notably German compounds) such as Handschriftencensus (6), Selbstschutz (7), and Rechtschreibreform (7)? But I believe these examples are rare, and that there are no 8-letter examples, so you can probably whitelist all of these. There might be a portion of an article that covers keyboard spam with examples, and you might have to whitelist that, too. Mathglot (talk) 10:31, 14 January 2025 (UTC)[reply]
    I didn't think of those. It appears that, between those cases and the existing filter mentioned below, there are way too many exceptions for this to work properly. I'm going to retract this request, but I don't know how; can someone help out? Wildfireupdateman :) (talk) 20:16, 14 January 2025 (UTC)[reply]
    There IS a filter for this:
    • 135 (hist · log) ("Repeating characters")
    It works almost exactly as suggested, including the exception for links; the difference is that it looks for 9 characters, not 5.
    At any rate, perhaps the filter could be improved. For example, it didn't catch the second example because the edit changed a line starting with a pipe (|); why do we exclude edits that do that?
    That change was made here in 2012; it went from excluding edits that left a line like |- or |. in the article to excluding edits to any line starting with a pipe or an exclamation mark.
    The filter did not catch examples 1 and 3 because the aforementioned vowels broke up the runs before they reached 9 'repeating' characters. – 2804:F1...87:8192 (::/32) (talk) 15:32, 14 January 2025 (UTC)[reply]
    Alternate idea: since keyboard spam usually stays on the same keyboard row, could a filter that checks for repeated characters in the same row (usually the home row) be a thing? Chaotic Enby (talk · contribs) 17:50, 27 January 2025 (UTC)[reply]
    If that is the case, the length trigger would probably be ~7-8 or so, as there are sufficiently few words (typewriter, rupturewort) that would need to be implemented as exceptions. Wildfireupdateman :) (talk) 17:54, 27 January 2025 (UTC)[reply]
    Yep, that would be a more reasonable length trigger – 5 is too short, but 8 would likely still match most keymashes. Chaotic Enby (talk · contribs) 17:55, 27 January 2025 (UTC)[reply]
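    For illustration, a home-row variant might look something like this (a sketch only; the row character classes and the 8-character threshold are taken from the comments above, and keymashes spanning rows would be missed):
    added_lines rlike "([qwertyuiop]{8}|[asdfghjkl]{8}|[zxcvbnm]{8})"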
    I'm working on a major update to this filter. Daniel Quinlan (talk) 11:35, 28 February 2025 (UTC)[reply]

     You are invited to join the discussion at Wikipedia:Village pump (idea lab) § Would a filter to identify changes from "transgender" to man, boy, girl, female, woman be appropriate. Chaotic Enby (talk · contribs) 14:48, 9 February 2025 (UTC)[reply]

    Removing random characters from pages at a fast pace

    • Task: The filter will prevent (or, for now, maybe only log or warn) unregistered (and possibly non-autoconfirmed) users from rapidly removing random characters from pages for no reason. This could be done using the throttle function.
    • Reason: There has been an IP-hopping vandal doing this a lot recently, using proxies that have had to be blocked each time; a filter would avoid having to mass-rollback their edits and deal with the disruption every time.
    • Diffs: See Wikipedia:Administrators' noticeboard/Incidents#IP hopper making tons of useless edits for more details. Here are examples of some edits: here, here, here, and here.

    User3749 (talk) 07:21, 28 February 2025 (UTC)[reply]

    It is clear that this is an issue, but an edit filter should be careful to not also affect IPs and non-autoconfirmed users fixing typos, especially since some of the removals were not limited to a single character. Putting a rate limit of around two edits per minute might do it, although we should definitely test for false positives first, as this will affect a lot of new editors. Chaotic Enby (talk · contribs) 12:06, 28 February 2025 (UTC)[reply]
    I'll try to make some regex for this. Here is my first draft:
    [removed so we don't help the LTA]
    I made this pretty quickly, so it probably does not work as expected. It could serve as a template for tweaking the code further, though. – PharyngealImplosive7 (talk) 14:49, 28 February 2025 (UTC)[reply]
    Thanks a lot! The main issue I'm seeing is that two of the example edits aren't limited to removing 5 characters (this one and this one), and I'm genuinely wondering how to catch them without throwing too big of a net around good-faith edits. Chaotic Enby (talk · contribs) 16:11, 28 February 2025 (UTC)[reply]
    Yeah, I don't know how exactly to catch some of the larger edits without catching a bunch of FPs. Consequently, I think that any filter of this type will have a lot of false negatives. – PharyngealImplosive7 (talk) 17:31, 28 February 2025 (UTC)[reply]
    I'm really not sure of the efficacy of a filter with such a tight edit_delta tolerance; I think it's likely a vandal would simply find the limit and stay just outside it. This would then result in a cat-and-mouse game whilst still having to balance false negatives and false positives every time a change is made. This could be improved by making the filter private, but I still think it'd be fairly easy to find the limit. FozzieHey (talk) 18:58, 28 February 2025 (UTC)[reply]
    Quick notice: Supposedly, the same vandal has switched up their method. They're still using proxies; however, they're now adding characters instead of removing them. See Special:Contributions/2.86.162.27. / RemoveRedSky [talk] [gb] 17:34, 28 February 2025 (UTC)[reply]
    I think we could then make the filter look for both rapid removals and insignificant additions, using throttle again, but I'm not sure whether FPs would be an issue in that case. User3749 (talk) 18:50, 28 February 2025 (UTC)[reply]
    I missed this discussion, but Special:AbuseFilter/1345 was created for this vandal. Sam Walton (talk) 08:27, 1 March 2025 (UTC)[reply]
    Any further conversation should continue on the mailing list, as we're dealing with an LTA who already has a private filter. – PharyngealImplosive7 (talk) 16:23, 1 March 2025 (UTC)[reply]

    Flag links generated by LLMs

    • Task: Flag links generated by ChatGPT and other LLMs, through the ?utm_source parameter
    • Reason: Additions of LLM-generated content can contain citations that do not actually support the text.
    • Diffs: Special:Diff/1271820600 (mentioned in the linked discussion), this search brings up a lot more including in high-profile articles

    Following a discussion at Wikipedia talk:Large language models#LLM-generated content, a suggestion was brought up, namely an edit filter detecting ?utm_source=chatgpt.com in links. That parameter is appended to a URL when it is copied from ChatGPT (for example, https://en.wikipedia.org/wiki/Wikipedia:Edit_filter/Requested?utm_source=chatgpt.com points to the same place as https://en.wikipedia.org/wiki/Wikipedia:Edit_filter/Requested, but indicates the source of the link as being ChatGPT).

    I suggested the following simple filter:

    page_namespace == 0 &
    added_lines rlike "utm_source=chatgpt\.com"
    

    Another user (@Z. Patterson) proposed a more advanced filter that would detect other LLMs in URLs, but exclude some situations to avoid false positives, based on 1045 (hist · log):

    equals_to_any(page_namespace, 0, 10, 118) & 
    (
        llmurl := "\b(chatgpt|copilot\.microsoft|gemini\.google|groq|)\.\w{2,3}\b";
        added_lines irlike (llmurl) &
        !(removed_lines irlike (llmurl)) &
        !(summary irlike  "^(?:revert|restore|rv|undid)|AFCH|speedy deletion|reFill") &
        !(added_lines irlike "\{\{(db[\-\|]|delete\||sd\||speedy deletion|(subst:)?copyvio|copypaste|close paraphrasing)|\.pdf")
    )
    

    Chaotic Enby (talk · contribs) 20:06, 28 February 2025 (UTC)[reply]

    Pinging users who participated in the previous discussion: @Alaexis @Phlsph7 @Photos of Japan @PPelberg (WMF) @1AmNobody24 @Chipmunkdavis Chaotic Enby (talk · contribs) 20:08, 28 February 2025 (UTC)[reply]
    Sounds like a sensible idea. To be clear, are you proposing to just tag these edits, or to eventually warn as well? I think it'd be a good idea to warn, as similar filters for citations do. There is the risk of false positives for editors who research via LLMs but do check the source content, so a good evaluation period would be useful. I think we'd also want to put in an extendedconfirmed exemption like in filter 1057 (hist · log). FozzieHey (talk) 22:13, 28 February 2025 (UTC)[reply]
    I'd agree that warning would be helpful – I don't think it hurts to give a reminder to editors who do check source content that they're on the right track. Regarding an extended-confirmed exemption, I don't think it should be present: some additions like this one do come from extended-confirmed users, and it could be useful to remind them to check the generated sources. Since it is just a visual warning and logging, rather than any kind of action being taken, I would say it's appropriate to have it show up for all users. Chaotic Enby (talk · contribs) 22:29, 28 February 2025 (UTC)[reply]
    I guess it's whether we treat the warning as a "warning, you probably shouldn't do this" or a gentle reminder like you say, which would also influence how we draft the warning template. Arguably citing Wikipedia is worse (and I can't think of any valid reasons as to why you would need to, outside of some very niche articles about Wikipedia), and an extendedconfirmed exemption is present there. FozzieHey (talk) 22:40, 28 February 2025 (UTC)[reply]
    I agree that we should warn users, as we do for self-published sources. It will give them time to think about what they are entering and if it is legitimate. It should deter most instances of citing LLMs. Z. Patterson (talk) 04:36, 1 March 2025 (UTC)[reply]
    The filter idea seems good, whether it should be attached to a warning or other action is a later discussion. I'm not sure how much analysis has been done. CMD (talk) 07:53, 1 March 2025 (UTC)[reply]
    This sounds like a sensible filter to start log-only for testing, see how it goes, and then perhaps upgrade to tagging if we don't have too many false positives. However, I just tested the filter suggested by Z. Patterson and it is matching any edit which adds a URL - could you double check the regex? Sam Walton (talk) 08:25, 1 March 2025 (UTC)[reply]
    I'm guessing it might be because the (chatgpt|copilot\.microsoft|gemini\.google|groq|) part ends with |), which includes the empty string as an option; removing that pipe and changing it to (chatgpt|copilot\.microsoft|gemini\.google|groq) might fix it. Chaotic Enby (talk · contribs) 12:31, 1 March 2025 (UTC)[reply]
    @Samwalton9 and Chaotic Enby: Yes, I had intended to include only URLs that have LLMs. I also suggest adding claude\.ai to the filter so it catches instances of citing Claude. Z. Patterson (talk) 12:49, 1 March 2025 (UTC)[reply]
    {{tq|sounds like a sensible filter to start log-only for testing, see how it goes, and then perhaps upgrade to tagging if we don't have too many false positives.}}
    +1, @Samwalton9!
    Thinking a bit ahead about the question @FozzieHey posed above, is anyone here holding an idea in mind for when/how people might be inserting links of this sort? E.g. might you imagine them to be pasting these links into Citoid? Might you imagine them to be pasting these links directly into articles? Something else?
    I ask the above with two thoughts in mind:
    1. Might the kind of feedback that the filter y'all are shaping here is intended to deliver be well suited for an Edit Check?
    2. When might people attempting to insert links be open to receiving feedback about them?
    This all of course assumes the filter ends up demonstrating a low enough false positive rate for us (collectively) to consider it reliable.
    And hey, thank you for inviting me into this conversation, @Chaotic Enby. PPelberg (WMF) (talk) 22:31, 3 March 2025 (UTC)[reply]
    Sounds like a good idea. In the regular expression you're using, should it be "groq" or "grok"? Or both? Alaexis¿question? 18:25, 1 March 2025 (UTC)[reply]
    Groq appears to also exist, but I think Grok was intended. Chaotic Enby (talk · contribs) 18:45, 1 March 2025 (UTC)[reply]
    @Alaexis and Chaotic Enby: I intended for both Groq and Grok to be included. Originally, I thought about Groq, but I would also like to include Grok. Z. Patterson (talk) 19:22, 1 March 2025 (UTC)[reply]
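    Putting those fixes together, the variable assignment might become (a sketch only; the trailing empty alternative is removed, and claude and grok are added per the comments above):
    llmurl := "\b(chatgpt|claude|copilot\.microsoft|gemini\.google|groq|grok)\.\w{2,3}\b";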
    Trialling log-only at Special:AbuseFilter/1346. Further refinement welcome, I just used the suggestion above. Sam Walton (talk) 22:00, 1 March 2025 (UTC)[reply]
    Thanks! Looking at the first two hits:
    • Special:Diff/1278344988 does make use of a link with the utm_source=chatgpt.com parameter. The source does seem to be consistent with the claim (a sports team being relegated), although it does not state it explicitly (the source only gives tournament results). I might be missing something, as the whole website is in Icelandic.
    • Special:Diff/1278344163 also uses such a link. The claim it is attached to is very promotional, and, while the source does support a small bit of it, it doesn't even make sense for the rest of the claim, which discusses events taking place since the source's publication.
    Chaotic Enby (talk · contribs) 22:24, 1 March 2025 (UTC)[reply]
    Another random comment: Putting the content through gptzero.me suggests that the second hit is likely AI-generated and the first isn't. (As an aside, I've thought about making a tool that automatically scans all of Wikipedia (or maybe even most Wikimedia projects) to check for potential AI-generated content. However, there is a lot of text on Wikipedia, and not a lot of AI detection tools that can handle such a volume of content, so I'm not sure whether this idea is actually doable or not.) Duckmather (talk) 01:34, 2 March 2025 (UTC)[reply]
    A caution with that: apparently a lot of LLMs used Wikipedia articles as part of their training data, so AI detectors will turn up a lot of false positives on articles that predate an LLM's training cutoff, or so I have read in discussions, at least. - The Bushranger One ping only 05:59, 4 March 2025 (UTC)[reply]
    @Chaotic Enby the filter seems to be working well with just over 40 hits so far. How useful are you (and anyone else here) finding it? Would tagging edits be helpful? Sam Walton (talk) 08:37, 4 March 2025 (UTC)[reply]
    Looking at a few edits, the filter is definitely working well and catches a lot of questionable edits. Tagging could be helpful, although I believe warning, to remind editors to verify their sources, might be more productive than having someone else double-check behind them. Also noting that a lot of the edits are to drafts, which is not surprising, but users do have a lot more latitude there. Chaotic Enby (talk · contribs) 12:35, 4 March 2025 (UTC)[reply]
    Noting here that the filter flags edits from ALL users, including bots, so we might want to exclude extended confirmed users, sysops and bots per WP:EF/TP. Codename Noreste (talk) 21:07, 4 March 2025 (UTC)[reply]
    Not sure if we should exclude extended-confirmed users, per my comments earlier. Regarding bots, I'm not opposed to excluding them, as I don't see in which cases they would add LLM-generated URLs to begin with. Chaotic Enby (talk · contribs) 21:24, 4 March 2025 (UTC)[reply]
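    For reference, an exclusion along those lines could be expressed roughly as follows (a sketch; whether extendedconfirmed belongs in the list is the open question above):
    !contains_any(user_groups, "bot", "sysop", "extendedconfirmed")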
    I was curious, so I looked into what part of ChatGPT actually generates links with that kind of URL. Notably, asking ChatGPT to write an article for you doesn't produce links like that (for me). What does create them is its web-search tool, which writes a summary of the search topic but also includes a list of links and inline citations. That summary with citations isn't in a particularly friendly format for pasting directly into Wikipedia, though someone willing to go through and convert all the external links into citations could probably make it work.
    As such, I suspect that this filter is mostly catching the LLM equivalent of people who googled for citations; it's just that Google search doesn't stick a recognizable URL parameter onto all the links you follow, so we can't detect those.
    It's probably a good warning sign: someone who uses one of these links is at higher risk of having also copied in whatever ChatGPT wrote about the topic, or of having trusted ChatGPT about it without reading the source themselves. That said, it's not actually a dispositive sign of malfeasance. Escalating to a "maybe double-check your sources, we know they came from an LLM" warning sounds reasonable enough, but outright blocking such edits feels a step too far. DLynch (WMF) (talk) 03:07, 5 March 2025 (UTC)[reply]
    Thanks for the investigation! Have you seen phab:T387903? I'm planning to check other LLMs to see if they have similar behaviors. Chaotic Enby (talk · contribs) 07:16, 5 March 2025 (UTC)[reply]

    Prevent other languages on Wikipedia

    • Task: Disallow, or tag as potential vandalism, any edit to articlespace that adds symbols associated with other languages (Russian, Turkish, Arabic, Chinese, etc.) outside of quotation marks.
    • Reason: Recently there has been a user going around putting small amounts of Russian text in articles, and this is part of a wider problem of people who don't speak English coming here and trying to publish in their own language on the encyclopedia.
    • Diffs: I can't find the diffs, but this is an issue on Wikipedia; I saw it while going through recent changes.

    135.180.130.195 (talk) 06:16, 4 March 2025 (UTC)[reply]

    It would really be helpful to have some diffs demonstrating the disruptive edits. There are a number of reasons for non-English text to be included in articles, so I'm initially not sure how we'd avoid false positives here. Sam Walton (talk) 08:34, 4 March 2025 (UTC)[reply]
    Symbols from other languages that are outside quotation marks are pretty common in enwiki. Many of them, but presumably not all, will be in templates like Lang and Langx. Sean.hoyland (talk) 08:45, 4 March 2025 (UTC)[reply]
    Not to forget references, which can include titles, publishers, and authors in other languages. Nobody (talk) 08:48, 4 March 2025 (UTC)[reply]
    @135.180.130.195, Samwalton9, Sean.hoyland, and 1AmNobody24: I think that if Wikipedia were to implement such a filter, it would result in false positives, as templates such as Template:Nihongo, Template:Nihongo foot, and Template:Nihongo krt use foreign languages, and we would need to make sure to catch only instances outside of quotation marks, <blockquote> tags, and <ref> tags. The English Wikipedia often cites foreign-language sources and must be able to include foreign-language information when it is used as a source. Also, as many names of people are not in English, a filter could produce a large number of false positives. In addition, we have language-specific notice templates that we use for non-English contributions, such as those available in Category:Non-English user warning templates. We could, instead, potentially ask @NaomiAmethyst, Rich Smith, and DamianZaremba: to look into training User:ClueBot NG, as ClueBot NG is capable of machine learning, whereas edit filters are not. Otherwise, we, as editors, will need to be vigilant about finding illegitimately placed non-English text and telling said users to either contribute in English or edit on the Wikipedia in their own language. Z. Patterson (talk) 00:49, 5 March 2025 (UTC)[reply]
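    To illustrate the false-positive problem: even a naive condition like the following (a sketch limited to Cyrillic; the character range is an assumption) would match legitimate uses in Lang/Langx templates, references, quotations, and proper names unless each case were carved out by hand:
    page_namespace == 0 &
    added_lines rlike "[\x{0400}-\x{04FF}]"    /* any Cyrillic character */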

    Add Daily Express into filter 869

    George Ho (talk) 13:53, 4 March 2025 (UTC)[reply]

    To the \.co\.uk part of the filter, we can add express, and to the \.com part of the filter we can add the-express. – PharyngealImplosive7 (talk) 17:40, 4 March 2025 (UTC)[reply]
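    Illustratively, if the filter's alternations have the usual shape, the change would look like this (hypothetical fragments only, not the actual filter 869 text; dailymail and example-site are placeholder entries):
    "(dailymail|express)\.co\.uk"        /* 'express' added to the .co.uk alternation */
    "(example-site|the-express)\.com"    /* 'the-express' added to the .com alternation */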

    Brief description of filter

    • Task: What is the filter supposed to do? To what pages and editors does it apply?
    • Reason: Why is the filter needed?
    • Diffs: Diffs of sample edits/cases. If the diffs are revdelled, consider emailing their contents to the mailing list.

    187.249.110.48 (talk) 02:32, 6 March 2025 (UTC)[reply]
