AI images

Guideline against use of AI images in BLPs and medical articles?

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The questions asked in this RfC's title are fairly easy to answer. The BLP half was already answered in a related thread (EDIT: see addendum below), which became subthread § BLPs and was closed by Ganesha811: There is clear consensus against using AI-generated imagery to depict BLP subjects. Marginal cases (such as major AI enhancement or where an AI-generated image of a living person is itself notable) can be worked out on a case-by-case basis. The discussion of the medical side in this thread shows comparable opinions, although in smaller numbers. To the extent medical images were discussed as their own category, it's fair to say there's general opposition to using AI-generated ones, although there might be greater room for exceptions, especially for very simple images. In both cases, "AI-generated image" should be read to mean one wholly created by generative AI, not a human-created image that has been modified with AI tools (which may still be problematic, but does not carry the same presumed disfavor).
But neither of those questions is really what this RfC came to be about. A majority of comments in the main thread were about whether to fully ban AI images from Wikipedia. And at face value, the answer to that question is "yes except when the image is the subject of discussion", with about 75% supporting either that or a stricter ban. However, this was not a very well-attended thread, relative to the sweeping scope of that proposal. It was never listed on WP:CENT and its title was never amended to reflect the broader question being discussed.
So I'm going to handle this in a somewhat unusual way, but one that I think best balances the strength of support with the limited quorum. I am relisting this RfC and listing it on CENT. See § Relist with broader question: Ban all AI images? below. -- Tamzin[cetacean needed] (they|xe|🤷) 06:25, 28 February 2025 (UTC)[reply]
Addendum: This edit to WP:BLP made me realize that I erred in treating this RfC's BLP question as congruent with § BLPs'. That discussion concerned images of living persons. This discussion was a bit more ambiguous: Chaotic Enby and several others spoke of AI illustrations of BLPs (so the same question as the other discussion), while a few spoke more broadly of all images in BLPs. Now, a majority of people wanted something broader than either of those, which is why I've relisted the RfC to discuss a (near-)blanket ban; but to the extent that editors discussed BLPs as a separate class from other articles, I don't see that anyone made a case for why images of things other than living people in BLPs should be treated differently than similar images in non-BLPs. I'm not saying that a case couldn't be made, but nobody made it. There is consensus to ban AI illustrations of living people, and there was strong-support-but-insufficient-quorum to (mostly) ban AI images in general, but there was not a consensus regarding that particular middle ground between those two points. So I will be reverting that addition to WP:BLP. Of course this may well become moot if the (near-)blanket ban passes. -- Tamzin[cetacean needed] (they|xe|🤷)

I have recently seen AI-generated images be added to illustrate both BLPs (e.g. Laurence Boccolini, now removed) and medical articles (e.g. Legionella#Mechanism). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform as to what that person actually looks like, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?

To clarify, I am not including potentially relevant AI-generated images that only happen to include a living person (such as in Springfield pet-eating hoax), but exclusively those used to illustrate a living person in a WP:BLP context. Chaotic Enby (talk · contribs) 12:11, 30 December 2024 (UTC)[reply]

What about all biographies, including those of dead people? The lead image shouldn't be AI-generated for any biography. - Sebbog13 (talk) 12:17, 30 December 2024 (UTC)[reply]
Same with animals, organisms, etc. - Sebbog13 (talk) 12:20, 30 December 2024 (UTC)[reply]
I personally am strongly against using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. jolielover♥talk 12:28, 30 December 2024 (UTC)[reply]
I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – Joe (talk) 12:38, 30 December 2024 (UTC)[reply]
There hasn't been a full discussion yet, and we have a list of uses at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts, but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. Chaotic Enby (talk · contribs) 12:44, 30 December 2024 (UTC)[reply]
Discussions are going on at Wikipedia_talk:Biographies_of_living_persons#Proposed_addition_to_BLP_guidelines and somewhat at Wikipedia_talk:No_original_research#Editor-created_images_based_on_text_descriptions. I recommend workshopping an RfC question (or questions) then starting an RfC. Some1 (talk) 13:03, 30 December 2024 (UTC)[reply]
Oh, didn't catch the previous discussions! I'll take a look at them, thanks! Chaotic Enby (talk · contribs) 14:45, 30 December 2024 (UTC)[reply]
There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in Simon Ekpa. CMD (talk) 15:00, 30 December 2024 (UTC)[reply]
While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --Nat Gertler (talk) 16:04, 30 December 2024 (UTC)[reply]
For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. Thryduulf (talk) 17:45, 30 December 2024 (UTC)[reply]
The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. Chaotic Enby (talk · contribs) 20:00, 30 December 2024 (UTC)[reply]
We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio. -- Nat Gertler (talk) 20:54, 30 December 2024 (UTC)[reply]
I wouldn't call it an upscale given whatever was done appears to have removed detail, but we use that image specifically because it is the edited image which was sent to VRT. CMD (talk) 10:15, 31 December 2024 (UTC)[reply]
Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools have been implemented in a lot of photo editing software, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). Photos of Japan (talk) 18:18, 30 December 2024 (UTC)[reply]
Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. Chaotic Enby (talk · contribs) 20:04, 30 December 2024 (UTC)[reply]
I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. Horse Eye's Back (talk) 18:40, 30 December 2024 (UTC)[reply]
For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule.
I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- Colin°Talk 19:12, 30 December 2024 (UTC)[reply]
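(To make the distinction Colin draws concrete: classical, non-generative sharpening is fully deterministic. Below is a minimal Pillow sketch with hypothetical file names; UnsharpMask only amplifies contrast that is already present in the captured pixels, whereas the generative features described above invent plausible detail.)

```python
# Classical unsharp masking: subtract a blurred copy from the original and
# add the scaled difference back. No new detail is synthesized.
from PIL import Image, ImageFilter

img = Image.open("bird_photo.jpg")  # hypothetical input file
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("bird_photo_sharpened.jpg")
```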
I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. Chaotic Enby (talk · contribs) 20:03, 30 December 2024 (UTC)[reply]
Regarding some sort of brightline ban on the use of any such image in anything article medical related: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as this one (as used in the "medical" article Fluconazole) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — xaosflux Talk 19:26, 30 December 2024 (UTC)[reply]
I believe that the appropriateness of AI images depends on how they are used. In BLP and medical articles, such images are inappropriate, but it would also be inappropriate to ban them completely across the site. By the same logic, if you want a full ban of AI, you are banning fire just because people can get burned, without considering cooking. JekyllTheFabulous (talk) 13:33, 31 December 2024 (UTC)[reply]
Support total ban: this creates a rights issue which is unacceptable on Wikipedia, everything else aside. It is not yet known where AI images trained on stolen content will fall legally, and that presents a problem for Wikipedia using them. Warrenᚋᚐᚊᚔ 15:27, 8 February 2025 (UTC)[reply]
AI-generated, medical-related image. No idea if this is accurate, but if it is, I don't see what the problem would be compared to if this were made with ink and paper. — xaosflux Talk 00:13, 31 December 2024 (UTC)[reply]
I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. Di (they-them) (talk) 20:46, 30 December 2024 (UTC)[reply]
AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. Photos of Japan (talk) 21:46, 30 December 2024 (UTC)[reply]
AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. Some1 (talk) 00:05, 31 December 2024 (UTC)[reply]
I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. Chaotic Enby (talk · contribs) 00:31, 31 December 2024 (UTC)[reply]
AI-generated images should always say "AI-generated image of [X]" in the image caption. No misleading readers that way. Some1 (talk) 00:36, 31 December 2024 (UTC)[reply]
Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. Chaotic Enby (talk · contribs) 00:40, 31 December 2024 (UTC)[reply]
"always end up with 'no consensus' and no guidelines on use at all, even if most people are against it" Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. Some1 (talk) 02:28, 31 December 2024 (UTC)[reply]
Of interest perhaps is this 2023 NOR noticeboard discussion on the use of drawn cartoon images in BLPs. Zaathras (talk) 22:38, 30 December 2024 (UTC)[reply]
We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites.
That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. Gnomingstuff (talk) 23:31, 30 December 2024 (UTC)[reply]
Do you really mean to ban single images showing the way birds use their wings?
Why wouldn't we want "fake Photoshop composites"? A composite photo can be very useful. I'd be sad if we banned c:Category:Chronophotographic photomontages. WhatamIdoing (talk) 06:40, 31 December 2024 (UTC)[reply]
Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. Gnomingstuff (talk) 20:20, 31 December 2024 (UTC)[reply]
Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge; at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, see is what the software outputs. Simonm223 (talk) 18:03, 15 January 2025 (UTC)[reply]
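(For readers unfamiliar with the mechanism described in the comment above, here is a loose, schematic sketch of a reverse-diffusion sampling loop. The denoiser, schedule, and update rule are simplified placeholders for illustration, not any real library's API.)

```python
import numpy as np

def sample_image(denoiser, prompt_embedding, steps=50, shape=(64, 64, 3), seed=0):
    """Schematic reverse diffusion: start from noise, repeatedly remove
    whatever the trained network predicts is noise, conditioned on the prompt."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # the "pattern of semi-random static"
    for t in reversed(range(1, steps + 1)):
        predicted_noise = denoiser(x, t, prompt_embedding)  # model's guess
        x = x - predicted_noise / steps  # peel a fraction of the noise away
        if t > 1:
            # Fresh noise keeps sampling stochastic, which is why one prompt
            # can yield many different images.
            x = x + rng.standard_normal(shape) / steps
    return x  # the array the model judges most consistent with the prompt
```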
"Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop": others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. Thryduulf (talk) 18:45, 15 January 2025 (UTC)[reply]
I don't think any guideline, let alone policy, would be beneficial; indeed, on balance one is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not:
  1. Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use.
  2. Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects, one criterion we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated.
The potential harm I mentioned above is twofold, firstly Wikipedia is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. Thryduulf (talk) 00:52, 31 December 2024 (UTC)[reply]
I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys the idea of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)[reply]
"A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article." That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and WP:IAR still does (and should) apply in edge cases.
"The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been." In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
"Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware." In that case, it's mostly because of the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". Chaotic Enby (talk · contribs) 11:13, 31 December 2024 (UTC)[reply]
Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored every time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate) existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image is the best option and as we obviously should be using the best option in all cases we would need to ignore the guideline against using AI images.
"AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated." The key words here are "supposed to be" and "shouldn't"; editors absolutely will speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that.
Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. Thryduulf (talk) 11:43, 31 December 2024 (UTC)[reply]
For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.
Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. Chaotic Enby (talk · contribs) 11:49, 31 December 2024 (UTC)[reply]
"the guideline is mostly to take care of the 'prompt fed in model' BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)." There are only two possible scenarios regarding verifiability:
  1. The image is an accurate representation and we can verify that (e.g. by reference to non-free photos).
    • Verifiability is no barrier to using the image, whether it is AI generated or not.
    • If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not.
  2. The image is either not an accurate representation, or we cannot verify whether it is or is not an accurate representation
    • The only reasons we should ever use the image are:
      • It has been the subject of notable commentary and we are presenting it in that context.
      • The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo)
    This is already policy, whether the image is AI generated or not is completely irrelevant.
You will note that in no circumstance is it relevant whether the image is AI generated or not. Thryduulf (talk) 13:27, 31 December 2024 (UTC)[reply]
In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.
In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. Chaotic Enby (talk · contribs) 13:52, 31 December 2024 (UTC)[reply]
If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image is misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. Thryduulf (talk) 15:04, 31 December 2024 (UTC)[reply]
"AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated."
I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. Gnomingstuff (talk) 20:35, 31 December 2024 (UTC)[reply]
Yes, but that's a Commons thing. A guideline on English Wikipedia shouldn't decide what is to be done on Commons. Chaotic Enby (talk · contribs) 20:37, 31 December 2024 (UTC)[reply]
I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. Gnomingstuff (talk) 20:45, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated images on Wikipedia. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also WP:OR scraped from who knows what and where. Use only reliable material from reliable sources. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. :bloodofox: (talk) 03:12, 31 December 2024 (UTC)[reply]
    Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)[reply]
    Prompt-generated images are unquestionably a violation of WP:OR and WP:SYNTH: type in your description and you get an image scraping who knows what from who knows where, often Wikipedia. Wikipedia isn't an WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)[reply]
    "Unquestionably"? Let me question that, @Bloodofox. ;-)
    If an editor were to use an AI-based image-generating service and the prompt is something like this:
    "I want a stacked bar chart that shows the number of games won and lost by FC Bayern Munich each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is:
    • 2014–15: played 34 games, won 25, tied 4, lost 5
    • 2015–16: played 34 games, won 28, tied 4, lost 2
    • 2016–17: played 34 games, won 25, tied 7, lost 2
    • 2017–18: played 34 games, won 27, tied 3, lost 4
    • 2018–19: played 34 games, won 24, tied 6, lost 4
    • 2019–20: played 34 games, won 26, tied 4, lost 4
    • 2020–21: played 34 games, won 24, tied 6, lost 4
    • 2021–22: played 34 games, won 24, tied 5, lost 5
    • 2022–23: played 34 games, won 21, tied 8, lost 5
    • 2023–24: played 34 games, won 23, tied 3, lost 8"
    I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into an image generator, get the same thing, and upload that?
    We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. WhatamIdoing (talk) 01:58, 2 January 2025 (UTC)[reply]
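(For illustration, a minimal sketch of the deterministic route WhatamIdoing contrasts with an image generator: the same data and team colors from the prompt above, fed to matplotlib. The output is fully determined by the input; nothing is synthesized.)

```python
import matplotlib.pyplot as plt

seasons = ["2014-15", "2015-16", "2016-17", "2017-18", "2018-19",
           "2019-20", "2020-21", "2021-22", "2022-23", "2023-24"]
won  = [25, 28, 25, 27, 24, 26, 24, 24, 21, 23]
tied = [4, 4, 7, 3, 6, 4, 6, 5, 8, 3]
lost = [5, 2, 2, 4, 4, 4, 4, 5, 5, 8]

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(seasons, won, color="#DC052D", label="Won")
ax.bar(seasons, tied, bottom=won, color="#0066B2", label="Tied")
ax.bar(seasons, lost, bottom=[w + t for w, t in zip(won, tied)],
       color="#000000", label="Lost")
ax.set_ylabel("Games")
ax.set_title("FC Bayern Munich results by season")
ax.legend()
ax.tick_params(axis="x", rotation=45)
fig.tight_layout()
fig.savefig("bayern_results.png")
```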
    Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)[reply]
    "We're discussing generating images of people, places, and objects here." The proposal contains no such limitation. "And in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH." Do you have a citation for that? Other people have explained better than I can how it is not necessarily true, and certainly not unquestionable. Thryduulf (talk) 03:14, 2 January 2025 (UTC)[reply]
    As you're well aware, these images are produced by scraping and synthesizing material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredibly unethical, I believe it is intensely detrimental to Wikipedia, which is already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)[reply]
    So you think the lead image at Gisèle Pelicot is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown" because the images the artist looked at are not disclosed.
    A lot of my concern about blanket statements is the principle that what's sauce for the goose is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too.
    (Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.) WhatamIdoing (talk) 06:47, 2 January 2025 (UTC)[reply]
    Review WP:SYNTH; your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editorial retention continues to drop. :bloodofox: (talk) 09:33, 2 January 2025 (UTC)[reply]
    Please scroll down below SYNTH to the next section titled "What is not original research" which begins with WP:OI, our policies on how images relate to OR. OR (including SYNTH) only applies to images with regards to if they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original depiction of something, so long as the idea of that thing is not original. Photos of Japan (talk) 09:55, 2 January 2025 (UTC)[reply]
    Yes, which explicitly states:
    It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.
    Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but this as it stands clearly also falls under WP:SYNTH: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. :bloodofox: (talk) 10:07, 2 January 2025 (UTC)[reply]
    "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments"
    We are not talking about original images created by Wikipedians. This isn't splitting hairs, the image itself is not created by an editor. Warrenᚋᚐᚊᚔ 15:41, 8 February 2025 (UTC)[reply]
    The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. Remsense ‥  07:00, 31 December 2024 (UTC)[reply]
    100 dots: 99 chocolate-colored dots and 1 baseball-shaped dot
    @Bloodofox, here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate?
    I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get genuinely looks like those objects, I'm not actually going to worry about it. WhatamIdoing (talk) 06:57, 31 December 2024 (UTC)[reply]
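(A minimal sketch of how the same 1% illustration could be produced deterministically with matplotlib rather than with emojis or an image generator; the placement of the odd dot out and the exact colors are arbitrary choices for illustration.)

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))
for i in range(10):          # 10-by-10 grid = 100 dots
    for j in range(10):
        odd_one_out = (i, j) == (4, 6)  # the lone "baseball" dot, 1 of 100
        ax.plot(j, i, "o", markersize=14,
                color="white" if odd_one_out else "#7B3F00",
                markeredgecolor="#7B3F00")
ax.set_xlim(-1, 10)
ax.set_ylim(-1, 10)
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("one_percent.png")
```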
    As you know, Wikipedia has the unique factor of being entirely volunteer-run. Wikipedia has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in areas where most Wikipedia editors do exist. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
    In addition, the Wikimedia Foundation's harebrained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors.
    Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI.
    As a long-time editor who has frequently stumbled upon intense WP:PROFRINGE content, fended off armies of outside actors looking to shape the site into their ideological image (and who have sent me more than a few death threats), and who has identified large amounts of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it need be said that the use of generative AI for content is especially dangerous because of its capabilities of fooling Wikipedia readers and Wikipedia editors alike.
    Wikipedia is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines.
    A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage. :bloodofox: (talk) 07:40, 31 December 2024 (UTC)[reply]
    A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate usages for AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Wikipedia articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, and ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Wikipedia editor could train an AI to convert their voice into Wikipedia-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. Photos of Japan (talk) 08:26, 31 December 2024 (UTC)[reply]
    I think this is the least convincing one I've seen here yet: It contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'.
    As a translator myself, I can only say: Oh please. Generative AI is notoriously terrible at translating and that's not likely to change. And I mean ever, beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine translated material must be thoroughly checked and modified by, yes, human translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Wikipedia itself).
    I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the reality is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Wikipedia.
    Either you, a human being, can contribute to the project or you can't. Slapping a bunch of machine-generated (generative AI) visuals and text onto the site (much of it ultimately coming from Wikipedia in the first place!) isn't some kind of human substitute; it's just machine-regurgitated slop and is not helping the project.
    If people can't be confident that Wikipedia is made by humans, for humans, the project is finally on its way out. :bloodofox: (talk) 09:55, 31 December 2024 (UTC)[reply]
    I don't know how up to date you are on the current state of translation, but:
    In a previous State of the industry report for freelance translators, the word on TMs and CAT tools was to take them as "a given." A high percentage of translators use at least one CAT tool, and reports on the increased productivity and efficiency that can accompany their use are solid enough to indicate that, unless the kind of translation work you do by its very nature excludes the use of a CAT tool, you should be using one.
    Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.
    88% of respondents use at least one CAT tool for at least some of their translation tasks.
    Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.
    Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. Photos of Japan (talk) 17:26, 31 December 2024 (UTC)[reply]
    You're barking up the wrong tree with the pro-generative AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators — and anything that can be "written" — with machine-generated text. I'm also keenly aware that generative AI is absolutely horrible at translation and all of it must be thoroughly checked by humans, as you would know if you were a translator yourself. :bloodofox: (talk) 22:20, 31 December 2024 (UTC)[reply]
    "all machine translated material must be thoroughly checked and modified by, yes, human translators"
    You are just agreeing with me here.
    "if you’re just trying to convey factual information in another language that machine translation engines handle well, AI/MT with a human reviewer can be a great option. -American Translation Society
    There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. Photos of Japan (talk) 06:48, 1 January 2025 (UTC)[reply]
    And any translator who wants to use generative AI to attempt to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. :bloodofox: (talk) 11:09, 1 January 2025 (UTC)[reply]
    I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Wikipedia article?" The question here is not "Shall we put AI-generating buttons on Wikipedia's own website?" WhatamIdoing (talk) 02:27, 2 January 2025 (UTC)[reply]
    I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are growing fewer. We need to be spending our time improving the site rather than opening the gate as wide as possible for a flood of AI-generated/rendered garbage. The site has enough problems that compound every day rather than having to fend off users armed with hoax machines at every corner. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)[reply]
    Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is not "nonsense"?
    I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that List of scholarly publishing stings will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...).
    But I'm not worried about a Wikipedia editor saying "Hey AI, give me a diagram of a swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of Template:Wikipedia editor graph (100 per month), feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. WhatamIdoing (talk) 07:09, 2 January 2025 (UTC)[reply]
    Translators are not using generative AI for translation; the applicability of LLMs to regular translation is still in its infancy, and regardless such tools will not implement any generative faculties in their output, since that is the exact opposite of what translation is supposed to do. JoelleJay (talk) 02:57, 2 January 2025 (UTC)[reply]
    "Translators are not using generative AI for translation": this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. Thryduulf (talk) 03:06, 2 January 2025 (UTC)[reply]
    Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered) stuff here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)[reply]
  • Ban AI-generated from all articles, AI anything from BLP and medical articles is the position that seems like it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. Remsense ‥  06:53, 31 December 2024 (UTC)[reply]
    @Remsense, please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? WhatamIdoing (talk) 07:00, 31 December 2024 (UTC)[reply]
    I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. Remsense ‥  07:02, 31 December 2024 (UTC)[reply]
    A quick web search indicates that there are generative AI programs that create SVG files. WhatamIdoing (talk) 07:16, 31 December 2024 (UTC)[reply]
    Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) Remsense ‥  07:18, 31 December 2024 (UTC)[reply]
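(A minimal sketch of the structural checkability Remsense is asking for: unlike raster output, an SVG is a plain-text XML document whose every element can be inspected and corrected by hand. The file name is hypothetical.)

```python
import xml.etree.ElementTree as ET

# Walk a generated SVG and report exactly what was drawn, element by element.
tree = ET.parse("generated_diagram.svg")  # hypothetical AI-generated SVG
for elem in tree.getroot().iter():
    tag = elem.tag.split("}")[-1]  # strip the XML namespace prefix
    if tag in ("circle", "rect", "path", "line", "text"):
        print(tag, dict(elem.attrib))
```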
    Like everyone said, there should be a de facto ban on using AI images in Wikipedia articles. They are effectively fake images pretending to be real, so they are out of step with the values of Wikipedia.--♦IanMacM♦ (talk to me) 08:20, 31 December 2024 (UTC)[reply]
    Except, not everybody has said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. Thryduulf (talk) 10:24, 31 December 2024 (UTC)[reply]
    @Remsense, exactly how much "ability to check what the thing is doing" do you need, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{pd-algorithm}} instead of {{cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? WhatamIdoing (talk) 02:33, 2 January 2025 (UTC)[reply]
    The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. Remsense ‥  04:43, 2 January 2025 (UTC)[reply]
    How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? WhatamIdoing (talk) 07:13, 2 January 2025 (UTC)[reply]
    There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in File:Pinwheel scheduling.svg (from Pinwheel scheduling) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid?
    I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —David Eppstein (talk) 01:15, 1 January 2025 (UTC)[reply]
    I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of Obesity) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. WhatamIdoing (talk) 07:35, 2 January 2025 (UTC)[reply]
  • I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, artificial intelligence art or Théâtre D'opéra Spatial.—S Marshall T/C 11:21, 31 December 2024 (UTC)[reply]
    Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in Springfield pet-eating hoax), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate Donald Trump himself, which is what my proposal would recommend against. Chaotic Enby (talk · contribs) 11:32, 31 December 2024 (UTC)[reply]
    That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—S Marshall T/C 11:34, 31 December 2024 (UTC)[reply]
    Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. Chaotic Enby (talk · contribs) 11:43, 31 December 2024 (UTC)[reply]
  • Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking about generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. Masem (t) 15:12, 31 December 2024 (UTC)[reply]
  • Support total ban of AI imagery - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Wikipedia will be increasingly respected for holding a hard line against synthetic imagery. Carrite (talk) 15:39, 31 December 2024 (UTC)[reply]
    For both issues, AI vs not AI is irrelevant. For copyright: if the image is a copyvio we can't use it regardless of whether it is AI or not; if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not. For more detail see the extensive discussion above, which you've either not read or ignored. Thryduulf (talk) 16:34, 31 December 2024 (UTC)[reply]
  • Yes, we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture does not depict the real person because it is quite simply fake.
  • Even worse would be using AI to develop medical images in articles in any way. The possibility for error there is unacceptable. Yes, humans make errors too, but there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. Cremastra 🎄 u — c 🎄 20:08, 31 December 2024 (UTC)[reply]
    It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion, but rather are responding with what appear to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Wikipedia is not going to be taken over by AI, AI is not out to subvert Wikipedia, and we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)[reply]
    So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis.
    I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). Cremastra 🎄 u — c 🎄 21:02, 31 December 2024 (UTC)[reply]
    Determining what benefits any image brings to Wikipedia can only be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot.
    The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things any image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. Thryduulf (talk) 21:43, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated text or images in articles, except in contexts where the AI-generated content is itself the subject of discussion (in a specific or general sense). Generative AI is fundamentally at odds with Wikipedia's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. pythoncoder (talk | contribs) 21:34, 31 December 2024 (UTC)[reply]
  • Support blanket ban on AI-generated images except in ABOUTSELF contexts. This is especially a problem given the preeminence Google gives to Wikipedia images in its image search. JoelleJay (talk) 22:49, 31 December 2024 (UTC)[reply]
  • Ban across the board, except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. Seraphimblade Talk to me 00:29, 1 January 2025 (UTC)[reply]
  • Oppose blanket bans that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)[reply]
Lachlan Macquarie?
  • Oppose blanket bans AI is just a new buzzword; for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly, because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now (right). This purports to be a particular person ("The Father of Australia") but, if you check the image description, you find that it may have been his brother, and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. Andrew🐉(talk) 08:03, 1 January 2025 (UTC)[reply]
    So, you expect the AI, notoriously trained on Wikipedia (and whatever else is floating around on the internet), to correct Wikipedia where humans have failed... using the data it scraped from Wikipedia (and who knows where else)? :bloodofox: (talk) 11:12, 1 January 2025 (UTC)[reply]
    I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible.
The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology

To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps:

  1. Gathering information on the Opie portrait: This included details about its history, provenance, and any available information on its cost.
  2. Reviewing scholarly articles and publications: This step focused on finding academic discussions specifically addressing the attribution of the portrait to John Opie.
  3. Collecting expert opinions: Statements and opinions from art experts and historians were gathered to understand the range of perspectives on the certainty of the attribution.
  4. Examining historical documents and records: This involved searching for any records that could shed light on the portrait's origins and authenticity, such as Macquarie's personal journals or contemporary accounts.
  5. Exploring scientific and technical analyses: Information was sought on any scientific or technical analyses conducted on the portrait, such as pigment analysis or canvas dating, to determine its authenticity.
  6. Comparing the portrait to other Opie works: This step involved analyzing the style and technique of the Opie portrait in comparison to other known portraits by Opie to identify similarities and differences.
  • It was quite transparent in listing and citing the sources that it used for its analysis. These included the Wikipedia image but if one didn't want that included, it would be easy to exclude it.
    So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Wikipedia. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist.
    Andrew🐉(talk) 09:09, 2 January 2025 (UTC)[reply]
    They don't have to be black boxes but they are by design: they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Wikipedia is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). :bloodofox: (talk) 09:39, 2 January 2025 (UTC)[reply]
    While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. Chaotic Enby (talk · contribs) 17:40, 2 January 2025 (UTC)[reply]
    Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. Andrew🐉(talk) 17:28, 4 January 2025 (UTC)[reply]
  • Oppose blanket ban: best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what exactly would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. MichaelMaggs (talk) 12:52, 1 January 2025 (UTC)[reply]
    I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on AI-generated images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. Chaotic Enby (talk · contribs) 12:58, 1 January 2025 (UTC)[reply]
    That just show how ill-defined the whole area is. It seem you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. MichaelMaggs (talk) 13:15, 1 January 2025 (UTC)[reply]
    I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being generated by AI (like the Laurence Boccolini example below) and an image being altered or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. Chaotic Enby (talk · contribs) 15:24, 1 January 2025 (UTC)[reply]
  • Oppose as unenforceable. I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI because they don't like the results to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Wikipedia. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. lethargilistic (talk) 15:41, 1 January 2025 (UTC)[reply]
    And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. MichaelMaggs (talk) 17:39, 1 January 2025 (UTC)[reply]
    Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way, it can be challenged on that basis. MichaelMaggs (talk) 17:58, 1 January 2025 (UTC)[reply]
    As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you are talking about editing an existing image (as you say, if it changes the image), while I am talking about creating an image ex nihilo, which is what "generating" means. Chaotic Enby (talk · contribs) 18:05, 1 January 2025 (UTC)[reply]
    I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. MichaelMaggs (talk) 18:16, 1 January 2025 (UTC)[reply]
    Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. Darkfrog24 (talk) 22:51, 3 January 2025 (UTC)[reply]
  • Support blanket ban because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output that has already been generated might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? XOR'easter (talk) 23:30, 1 January 2025 (UTC)[reply]
  • Support blanket ban - Primarily because of the "poisoning the well"/"dead internet" issues created by it. FOARP (talk) 14:30, 2 January 2025 (UTC)[reply]
  • Support a blanket ban to assure some control over AI-creep in Wikipedia. And per discussion. Randy Kryn (talk) 10:50, 3 January 2025 (UTC)[reply]
  • Support that WP:POLICY applies to images: images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on WP:OR and WP:V by using images instead of text? Shooterwalker (talk) 17:04, 3 January 2025 (UTC)[reply]
    As an aside on this: in some cases Commons is being treated as a way of side-stepping WP:NOR and other restrictions. Stuff that would get deleted if it were written content on WP gets into WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (e.g. the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. FOARP (talk) 10:43, 4 January 2025 (UTC)[reply]
  • Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in The Dress, and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]
  • First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Wikipedia, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Wikipedia editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. isaacl (talk) 17:59, 4 January 2025 (UTC)[reply]
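    (A concrete aside on the "ordinary math skills" point above: the following is a minimal sketch, assuming Python with matplotlib, of how the 99-plus-1 dots figure can be produced deterministically, so its accuracy is verifiable from the code itself; the file name is illustrative only.)

```python
# The "99 identical dots plus 1 different" figure drawn deterministically:
# any editor can verify the counts by reading the code instead of trusting
# a generator's output.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(5, 5))
for i in range(100):
    x, y = i % 10, i // 10                      # lay the dots out on a 10 x 10 grid
    ax.plot(x, y, "o", markersize=12,
            color="red" if i == 0 else "gray")  # exactly one dot differs
ax.set_axis_off()
fig.savefig("one_percent.svg")                  # vector output, easy to inspect
```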
  • Support Blanket Ban on AI generated imagery per most of the discussion above. It's a very slippery slope. I might consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -Ad Orientem (talk) 02:45, 5 January 2025 (UTC)[reply]
  • Oppose blanket ban It is far too early to take an absolutist position, particularly when the potential is enormous. Wikipedia is already an image desert, and to reject something that is only at the cusp of development is unwise. scope_creepTalk 20:11, 5 January 2025 (UTC)[reply]
  • Support blanket ban on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. LEPRICAVARK (talk) 22:44, 5 January 2025 (UTC)[reply]
  • Support blanket ban as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). Alanscottwalker (talk) 15:32, 8 January 2025 (UTC)[reply]
Which parts of this photo are real?
  • Support indefinite blanket ban except ABOUTSELF and simple abstract examples (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of the first video of a horse in gallop, in 1878. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon[1][2]. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Wikipedia should act to limit its exposure to this kind of technology as far as is feasible. Daß Wölf 20:57, 9 January 2025 (UTC)[reply]
  • Support at least some sort of recommendation against the use of AI-generated imagery in non-AI contexts, except obviously where the topic of the article is specifically related to AI-generated imagery (Generative artificial intelligence, Springfield pet-eating hoax, AI slop, etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see WP:AIIMAGE) fail WP:IMAGERELEVANCE (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay WP:PORTRAIT, and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits, which in its (imo well argued) view have no legitimate encyclopedic function whatsoever. Cakelot1 ☞️ talk 14:36, 14 January 2025 (UTC)[reply]
    Anything that fails WP:IMAGERELEVANCE can be, should be, and is, excluded from use already, likewise any images which have no legitimate encyclopedic function whatsoever. This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles, we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use is relevant. Thryduulf (talk) 14:45, 14 January 2025 (UTC)[reply]
    That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. WhatamIdoing (talk) 23:24, 14 January 2025 (UTC)[reply]
    Except that is both not true and irrelevant. Some AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. Thryduulf (talk) 13:43, 15 January 2025 (UTC)[reply]
    Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea?
    "Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? WhatamIdoing (talk) 17:50, 15 January 2025 (UTC)[reply]
    Criteria (b) and (c) were not part of the statement I was responding to, and make it a very significantly different assertion. I will assume that you are not making motte-and-bailey arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome.
    Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. Thryduulf (talk) 18:56, 15 January 2025 (UTC)[reply]
  • Oppose blanket ban, only AI-generated BLP portraits should be prohibited. I propose that misleading, inaccurate or abusive AI-generated images should be removed manually. In particular, if an image is AI-generated, it should be clear to readers that it's not meant to be an authentic photo. Editors can also be more demanding toward AI-generated images, and remove them more readily if they don't provide much value or are not relevant. This could be encouraged by a guideline. But the blanket ban seems too radical. There isn't always a clear boundary between what is AI-generated and what isn't, for example if an LLM helps generate an SVG or a graph, or if AI is used to edit a photo more or less significantly. Some AI-generated images have also received media coverage, and are thus relatively legitimate to use. Moreover, the amount of added AI-generated images hasn't been overwhelming; there hasn't been much abuse relative to how easy it is to generate images with AI. And we should keep in mind that technology will keep improving, along with the accuracy and quality of images. Alenoach (talk) 01:16, 4 February 2025 (UTC)[reply]
    I propose that misleading, inaccurate or abusive AI-generated images should be removed manually you don't need to propose this, because all misleading, inaccurate or abusive images can (and should) already be removed from articles (and, if appropriate, nominated for deletion) manually by any editor as part of the normal editing process. Thryduulf (talk) 01:41, 4 February 2025 (UTC)[reply]
  • This was archived despite significant participation on the topic of whether AI-generated images should be used at all on Wikipedia. I believe a consensus has been/can be achieved here and should be closed, so I have unarchived it. JoelleJay (talk) 17:37, 2 February 2025 (UTC)[reply]
  • This discussion is titled Guideline against use of AI images in BLPs and medical articles?, but people are using it to support or oppose a blanket ban on all AI-generated imagery. Such a ban will never pass. I think editors need to be more specific about the type of AI-generated images they want banned (or conversely, what they find acceptable). The community clearly doesn't want AI-generated images depicting living people (see the RfC below if you haven't). What about AI-generated images depicting dead people? Famous landmarks? Landscapes? Different dinosaurs? Etc. Some1 (talk) 03:59, 4 February 2025 (UTC)[reply]
    I don't think there should be any subjects listed as specifically allowed or disallowed. Simply, if the image meets all the same requirements as a non-AI image (i.e. acceptable copyright status, accurate, good quality, encyclopaedically relevant, better than all the alternatives including no image) then it should be used without restriction. Where an image doesn't meet those requirements (for whatever reason or reasons) then it shouldn't be used. Whether it is AI-generated should remain completely irrelevant. If we extend the BLP prohibition (which is based entirely on ill-defined disapproval) we will further harm the encyclopaedia by preventing the use of the best image for a given situation just because some people vocally dislike AI imagery. Thryduulf (talk) 04:54, 4 February 2025 (UTC)[reply]
  • Support total ban of AI images in all articles on the wiki, with the only exception being instances where it is relevant to the article (i.e. Donald Trump shared an AI image, Xi Jinping used an AI image as propaganda, etc.). We are an encyclopedia, not a repository to promote your AI "art" (which in many cases has inaccuracies, and is not to be used as a depiction anywhere). Plasticwonder (talk) 18:59, 4 February 2025 (UTC)[reply]
    @Plasticwonder You've just described how things work currently: Wikipedia is not a repository to promote anything. If an image is not relevant to the article it shouldn't be used, whether it is AI-generated or not is irrelevant. If an image is inaccurate, it should only be used in an article if the inaccuracies are encyclopaedically relevant to and discussed in that article (e.g. an article or section about a manipulated image). Again, whether it is AI-generated or not is irrelevant. If an image is relevant to the article, accurate, has an acceptable copyright status, is of good quality, etc. then it should be considered for use in the article alongside all the other images that meet the same criteria, and the best one used (or no image used if none of the available images are good enough). Whether any of the images are AI-generated or not is irrelevant. Thryduulf (talk) 02:47, 5 February 2025 (UTC)[reply]
    The community regularly finds consensus that particular sources have such a demonstrably poor track record for reliability overall that they should never be cited outside ABOUTSELF, even when they contain verifiably accurate and encyclopedic information. So the provenance of content absolutely is relevant and in fact more often than not supersedes all other considerations. This discussion is exactly like anything on RSN where we have decided to blanket ban a source, so I'm baffled why you continue to act as if added content is only ever evaluated case-by-case. JoelleJay (talk) 18:07, 5 February 2025 (UTC)[reply]
    We aren't dealing with a single source producing a single type of content that has a consistent track record such that it can be meaningfully evaluated as a whole. AI is a collection of multiple, widely different technologies that produces a vast array of different content types. Banning "AI" would be closer to banning magazines than to deprecating the Daily Mail. Also, when we blanket ban a source we do so based on evidence of repeated and significant problems with a defined source's reliability that mean a content assessment will always end up reaching the same conclusion, not vague prejudices about a huge category of tools. The two are not comparable and trying to equate them is not something that contributes anything useful to the discussion. Thryduulf (talk) 19:18, 5 February 2025 (UTC)[reply]
    AI-generated imagery is being treated as a singular entity by plenty of organizations and publishers (e.g. Nature) and in numerous legal cases, there is no reason to specify any one program when the underlying problems of IP, inaccuracy, bias, etc. plague all of them. JoelleJay (talk) 00:49, 6 February 2025 (UTC)[reply]
    Specifying any one programme would be just as wrong as specifying "AI", just as banning editors using iphones but allowing editors using Android phones would be. None of the other organisations you cite are writing an encyclopaedia, their use cases are different to ours. IP is irrelevant to this discussion - if an image is not Free we cannot use it whether we want to or not. Inaccuracy and bias are attributes of individual images, and apply equally to images not created by AI tools - as explained in detail multiple times in the multiple discussions by multiple people. If you want to convince me otherwise you have to actually engage with the arguments actually made rather than with vague generalisations, strawmen and irrelevances. Thryduulf (talk) 04:56, 6 February 2025 (UTC)[reply]
    You say IP is irrelevant, but do AIs generate Free images or not? You mentioned "an image generator trained exclusively on public domain and cc0 images", which is a nice idea, but wouldn't it be crippled for lack of training data? I found one [3]. The quality looks about as expected. Also, crucially, non-mainstream models are limited to those with the technical wherewithal to run them. This means that in terms of policy covering Wikipedia, 99 out of 100 times we will be dealing with ChatGPT, Gemini, Claude, Midjourney (etc) output. Or are you saying that it looks like law is arranging itself such that tools trained on non-free images can be treated as free, so we don't need to worry about it? Emberfiend (talk) 10:12, 7 February 2025 (UTC)[reply]
    What I'm saying is that IP is irrelevant to this discussion. As long as there is one or more AI-generated image that is Free and/or which we can use under fair use then we can potentially use AI-generated images in articles. If a given image is not Free, regardless of why it is not, and fair use does not apply to that image then whether we want to use the image or not is irrelevant, because we can't use it. Thryduulf (talk) 13:18, 7 February 2025 (UTC)[reply]
    As long as there is one or more AI-generated image that is Free
    All AI-generated imagery itself falls out of copyright as it isn't created by an individual and can be freely used, regardless of claims of an alleged rights holder. Wikipedia has a long and storied history of picking this fight. Whether the image was legally generated or results from massive IP theft depends both on the AI used and on as-yet-unsettled jurisprudence. Medical journals across the board ban AI imagery as far as I know; I think it's a bad idea to pretend WP:MEDRS doesn't apply here. The images are not created by individuals, are indifferent to technical accuracy (see the lung example by @Xaosflux above), are often instantly obvious as AI to SMEs (S22 post-processing in the moon picture from @Daß Wölf), and in general fail any kind of argument for WP:VERIFY I'm aware of. Warrenᚋᚐᚊᚔ 13:35, 7 February 2025 (UTC)[reply]
    We've been through this multiple times already: whether an image is accurate and/or verifiable is a property exclusively of the individual image, not of the tool used to create it. We don't ban photoshopped images even though Photoshop can produce images that are inaccurate, etc., because when dealing with non-AI images almost everybody is rational and makes decisions based on the evidence. I don't understand why the same is not possible when it comes to AI images. Thryduulf (talk) 20:23, 7 February 2025 (UTC)[reply]
    But 99% of AI images are going to be bad, so it is far simpler to just say "AI images are generally unreliable and should be presumed such; exceptions can be determined on a case to case basis" rather than unparsimoniously force everyone to determine whether or not a specific AI image should be removed in this particular circumstance, which is inefficient and promotes costly discussion which would allow the pro-AI people to beat their point endlessly. Cremastra (talk) 20:49, 7 February 2025 (UTC)[reply]
    But 99% of AI images are going to be bad do you have any actual evidence that anywhere close to that many images that people might want to use are unsuitable? There is so much FUD being thrown around in these discussions that I'm skeptical. Thryduulf (talk) 21:32, 7 February 2025 (UTC)[reply]
    WP:AIIMAGE is the best we have at the moment. Cremastra (talk) 21:35, 7 February 2025 (UTC)[reply]
    Warren, your comments are self-contradictory. Here, you say that AI images are public domain because they can't be copyrighted. Above, you say that AI images should be banned because they might be derivative works. One of These Things (Is Not Like the Others).
    I think a slightly more nuanced argument is in order, including the idea that some images are too simple to qualify for copyright protection no matter who makes them: a professional artist, a child with a crayon, or an AI tool. WhatamIdoing (talk) 21:32, 12 February 2025 (UTC)[reply]
  • Support total ban of AI-generated images in most contexts, with the only exceptions (with clear in-line disclosure) being for when specific AI-generated images themselves, or the general concept of AI-generated images, are the focus of the topic or section. This also wouldn't cover the use of AI tools to touch up existing images, although any such use of a tool by an editor who isn't also listed as the image's creator or source would obviously have to be disclosed (as is true for any other image editing, since without that disclosure it no longer reflects the listed source). If the gray area between "created" and "touched up" becomes problematic we can hammer it out later, but I think that's unlikely - in practice they're very different tools; people know the difference between using Adobe's touch-up tools and tossing a sketch into Stable Diffusion. Regarding the core question, just looking at Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts shows how far from usable most of these images are and how much time and effort is being wasted cleaning up after the people who keep adding them. The generators have fundamental problems with biases and quality, which often lead to subtle problems when it comes to depicting subjects accurately; because these problems are so pervasive, and because the tools can be used to produce a firehose of content at a rate users have trouble keeping up with, it's unreasonable to expect editors to judge each one individually. Some people have expressed concern that this might be hard to enforce; but we have numerous policies that require time and effort to enforce (e.g. WP:CIVILPOV, WP:COI, the offsite provisions of WP:CANVASS, WP:MEAT, etc.) - and the biggest concern would be people who flood the wiki with AI-generated images repeatedly, who are generally going to be easy to catch due to the limitations of existing models. --Aquillion (talk) 13:41, 8 February 2025 (UTC)[reply]
  • Ban per my comments in Wikipedia:Village_pump_(policy)#BLPs, which is now a subsection of this? Seems like a duplication of effort. Zaathras (talk) 15:36, 8 February 2025 (UTC)[reply]
  • Oppose ban AI can be used for manipulation and other nefarious purposes. It can also be used to make work significantly easier. People can use computers to do the exact same kinds of things without AI. AI is already used in filters on many of our photos (even if it isn't openly stated). AI is a tool. The use of that tool is at issue, not the tool itself. There's no reason to have an outright ban. Ban obviously fake photos and manipulated photos, of course. Blanket ban? No. Buffs (talk) 16:19, 18 February 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Relist with broader question: Ban all AI images? (!vote here)

Should AI-generated images be banned from use in articles? (Where "AI-generated" means wholly created by generative AI, not a human-created image that has been modified with AI tools.) 06:25, 28 February 2025 (UTC)

  • I am procedurally relisting this RfC with a broader question and a WP:CENT listing, per my closing statement above. This is a continuation of that discussion. There are already 30ish !votes on this question above; please read those before !voting/closing (and no need to re-!vote if you already have). As I note above, currently consensus strongly favors a ban except for when the image is the subject of discussion. -- Tamzin[cetacean needed] (they|xe|🤷) 06:25, 28 February 2025 (UTC)[reply]
  • I don't think I can support a blanket ban like this. If the image is accurate, I don't care whether it was drawn by hand, by computer program (like most graphs and charts), or by gen AI. Looking over the previous discussion, xaosflux's example is an awfully inaccurate depiction of the lungs, but WhatamIdoing gave several examples where using AI makes sense and is not problematic. As long as nothing in the image is wrong, i.e. false, unverifiable, inaccurate, then I am okay with it. Toadspike [Talk] 08:22, 28 February 2025 (UTC)[reply]
    Having read over some of the below discussion, my opinion is unchanged, but I would support requiring clear declaration of all images made using generative AI. Toadspike [Talk] 10:55, 12 March 2025 (UTC)[reply]
  • i support an almost blanket ban. i mean like a "blanket but some bugs nibbled a couple holes into it" ban. which is to say, i think ai generated images should only really be allowed in the context of ai generated images (ai being used somewhere and sources going "yeah that's ai lol", blps where the subject uses ai to represent themself (though an image of them would still be better imo), articles about ai stuff, and at least one more case i'm forgetting at the moment). this is to say, never as representative of anything except ai content itself, unless explicitly used for other stuff by the subject of an article that happens to house an ai generated image (like willy's chocolate experience and its unfulfilled promise of sugary booba) consarn (prison phone) (crime record) 11:13, 28 February 2025 (UTC)[reply]
  • Oppose a total ban. I support a ban on AI-generated photos, but would give more flexibility to AI-generated illustrations (particularly those in the baseball example in this RfC). I support a policy requiring clear identification of images, ideally including a template parameter that indicates AI-generated content. Dw31415 (talk) 13:44, 28 February 2025 (UTC)[reply]
  • Support Just an open ended invitation to OR, circularity, etcetera, same problems as with AI generated text. Selfstudier (talk) 13:51, 28 February 2025 (UTC)[reply]
  • Oppose -- You can (to my understanding, as someone who does not actively use AI) prompt for something like a Venn diagram where one section is marked "spacemen", another "cowboys", and the intersection "Maurice", and get a result that's not particularly different than if you had used some standard graphics software to create it... and such a diagram would be visually verifiable as accurate to the intent. There's no real reason in such a case to keep people from using whichever tool they're comfortable with. I support a fairly broad AI image ban, but not a complete one. -- Nat Gertler (talk) 18:01, 28 February 2025 (UTC)[reply]
    Result of a Stable Diffusion prompt asking for a simple Venn diagram
    I tried entering the exact prompt you described on Stable Diffusion (specifically stable-diffusion-xl-1024-v1-0). Of the four images the model suggested, none looked close to the intended Venn diagram, and none had readable text, with this one being the closest approximation. In general, AI image models are trained to generate more "artistic" images, and do not work well on (even simple) precise diagrams like this one. Chaotic Enby (talk · contribs) 18:24, 28 February 2025 (UTC)[reply]
    i say this knowing the improbability of it going anywhere or being understood...
    wait, it's all venn diagrams? consarn (prison phone) (crime record) 18:33, 28 February 2025 (UTC)[reply]
    AI-generated diagram from Claude using the prompt 'a Venn diagram where one section is marked "spacemen", another "cowboys", and the intersection "Maurice"'
    Here's what Claude created: garbage. JoelleJay (talk) 18:41, 28 February 2025 (UTC)[reply]
    Well, at least that one is an actual Venn diagram. It does make sense that a model capable of generating SVG code would do better at this job. Chaotic Enby (talk · contribs) 18:44, 28 February 2025 (UTC)[reply]
    Just a heads up: the images on the right are not of good quality and read like strawmen, even if they were not intended to demonstrate the unsuitability of AI visuals – people using AI images, or opposing this censorship, would not use these either.
    Please do not make decisions based on the few AI images you may have seen in your personal experience and/or the few (bad) examples in this thread.
    Some somewhat better examples are near the bottom. There are many positive use-cases of AI images – for example illustrating a visual art style for which there is no or just one free media example, visualizing an event according to descriptions, or giving a visual example of some scifi/fantasy concept. It's usually not useful for factual or scientific concepts, but can sometimes be useful for other purposes. Even if you think it's near impossible for any AI image to be useful even for articles about fictional concepts, these can simply be removed (as has happened so far, and as with Photoshop images) without the need for an indiscriminate ban. Prototyperspective (talk) 11:09, 24 March 2025 (UTC)[reply]
    @Prototyperspective: Please stop bludgeoning the discussion. You’ve been badgering a disproportionate number of ban supporters with these exact same points (“this is censorship”, “those images are bad examples”, “what about Photoshop”, “what about fantasy/scifi”) over and over again. You’ve made your points already, and continuing to beat them into the ground like this is not having the positive effect you want it to. pythoncoder (talk | contribs) 17:11, 24 March 2025 (UTC)[reply]
    Okay.
    I was just trying to get the remaining voters' points addressed, or to ask about the rationale behind votes that had no explanation. "This is censorship" was none of my points btw, and I think I only mentioned examples being bad once, in the comment above. Prototyperspective (talk) 17:18, 24 March 2025 (UTC)[reply]
    Okay, you are indeed right, that is a failure (if an amusing one in this case) on that particular platform. But the problem is easily detectable at a glance, and this would not get placed. If someone better at prompting than I used an AI platform better at this than SD, and got something visually verifiable as accurate, I see no reason why it should not be used in favor of what might be a visually identical image generated by some other system that the editor may not be comfortable with/cognizant of. The image would be simple enough that it would not be a copyright concern, and abstract enough that it would not be an accurate-depiction-of-reality concern. But perhaps I am putting too much on theoretical AI to come. -- Nat Gertler (talk) 18:45, 28 February 2025 (UTC)[reply]
    What level of complexity is the cutoff, then? For any diagram that is simple enough for anyone to validate, there will be a) numerous other free online non-AI-generative tools you could use instead, and/or b) numerous examples already in existence you could use instead. I would also question the encyclopedic value of such simple diagrams... JoelleJay (talk) 18:25, 28 February 2025 (UTC)[reply]
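    (For reference on the non-generative tooling point above: the following is a minimal sketch, assuming the third-party matplotlib-venn package, of the spacemen/cowboys/Maurice diagram discussed in this sub-thread, built deterministically rather than from a prompt; names and the output file are illustrative only.)

```python
# The same Venn diagram built from explicit instructions, so its correctness
# is verifiable from the code rather than from a model's output.
import matplotlib.pyplot as plt
from matplotlib_venn import venn2

v = venn2(subsets=(1, 1, 1), set_labels=("spacemen", "cowboys"))
v.get_label_by_id("10").set_text("")         # clear the default subset counts
v.get_label_by_id("01").set_text("")
v.get_label_by_id("11").set_text("Maurice")  # label the intersection
plt.savefig("venn.svg")                      # SVG output stays editable
```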
  • Oppose This is like trying to ban all photoshopped images. The technology is here, and in use. There are many use cases for it that should not be limited. GeogSage (⚔Chat?⚔) 18:44, 28 February 2025 (UTC)[reply]
    This argument is absolutely daft. As stated I can tease out only two interpretations, differing as to whether genAI is a uniquely destructive technology in terms of enforceability of existing site policy.
    If it is not, the technology is here to violate many of our content policies as you imply. Thus, your argument applies equally to all copyvio, and really to all tools one can use to violate site policy. This ignores that we could ban Photoshop just like we ban copy-pasting sources unattributed, and it is a matter of enforcement. We don't because there are valid uses to Photoshop. You've obscured the point.
    Alternatively, we assume this technology is uniquely insidious and disruptive to the extent where our core content policies must be seen as moribund, in a fait accompli. In this case, I'm not willing to give up just because you are. Remsense ‥  19:08, 28 February 2025 (UTC)[reply]
    An image is an image; we could ask a user to give the image generator and prompt, but I oppose any blanket ban. I'm not "giving up" in a fight against AI, because personally, I want to explore how we can step on the gas and use AI to improve the project without broad limits. I'm not fighting against its use, and I believe such fights are standing in the way of progress. GeogSage (⚔Chat?⚔) 19:15, 28 February 2025 (UTC)[reply]
  • Support but needs narrower wording Especially ban them for anything which appears to be a photograph. An image which is a photograph provides information: what a real-world view of a real-world object or scene looks like. And providing information is the role of Wikipedia. A "photograph type" AI image does not provide that information; it's just showing you something that that mysterious AI black box invented. Even worse, it can provide false visual information. The next case might be one of presenting information. For example, making a graph from some numerical data. If it is a straightforward presentation of data, you don't need AI to create it. If you allow AI to create it, you are looking at a creation of a mysterious black box. In other words, synthesis by an unknowable, unaccountable black box. Next we must realize that the Wiki meaning of "image" is different than the common real-world meaning of image. The common real-world meaning is something like a photograph, or something which appears to be a photograph. The Wiki meaning is any file that displays as a two-dimensional pattern, or the resultant displayed two-dimensional pattern. So per the common real-world meaning, a map or a diagram or the Mona Lisa or some other created or creative work is not commonly called an image. We actually allow a little bit of wp:synthesis / wp:or in certain wp:images (e.g. diagrams, maps) but we shouldn't allow the same synthesis by an unknowable, unaccountable, unquestionable black box. North8000 (talk) 19:48, 28 February 2025 (UTC)[reply]
    I can get behind this, since AI-generated photograph-like images (except where the subject of discussion) are inherently not accurate. Toadspike [Talk] 12:19, 2 March 2025 (UTC)[reply]
    I think the more crucial point is not that they are inherently inaccurate (indeed, by some metrics in some circumstances, they are fully representative of that which they were meant to emulate). Rather, I think the critical issue is that they tend to be inherently misleading. A photo-realistic-appearing image will very often seem to be a genuine candid photo--and the rate of detection of manufactured images will only decline, moving forward--and thus we risk misleading the reader into perceiving a photo-like AI image as a real-world representation, which problematically raises their perception that it is completely accurate and representative of the subject it is meant to illustrate. That's a problem for a large number of reasons. An AI-generated drawing, on the other hand, is immediately, implicitly, and fundamentally parsed as a facsimile, which may or may not be accurate along particular criteria but is never perceived as a perfect representation, as is indeed the nature of all generative art, human or AI. Thus, for the most part, we can judge the suitability of a non-photo-realistic image based on how useful and illustrative it is just by reference to its practical features (i.e. using existing policies and common sense), without much or any reference to its human vs. AI provenance. SnowRise let's rap 11:20, 6 March 2025 (UTC)[reply]
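    (A brief aside on North8000's graph-from-data point above: the following is a minimal sketch, assuming Python with matplotlib and entirely made-up example numbers, of a straightforward chart produced with no generative step between the data and the rendered image.)

```python
# A plain chart from numerical data: every mark on the figure is traceable
# to an input number, with no black-box synthesis in between.
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023]   # hypothetical data, for illustration only
values = [12, 15, 11, 18]

plt.bar([str(y) for y in years], values, color="steelblue")
plt.ylabel("Value")                # placeholder axis label
plt.savefig("chart.svg")
```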
  • Support blanket ban, with as only obvious exception when the subject is tangentially related to AI pictures itself (as pointed out above several times). This technology presents absolutely no usefulness at all to build an encyclopedia, and instead represents an existential threat to accuracy in our articles. Pretty general illustrations are helpful for a PowerPoint presentation, not to display visual information relevant to an encyclopedic article. I think the blanket ban itself is necessary because any specific line would be completely unenforceable, evolving to be permanently crippled by exceptions, and resulting in case by case examination, which would be a nightmare and a time sink. We do not need this, and we should not burden ourselves with this for no meaningful gain whatsoever. Choucas0 🐦‍⬛💬 ⸱ 📋 21:55, 28 February 2025 (UTC)[reply]
  • Oppose blanket ban mostly per xaosflux above. For BLPs I think a ban is fully appropriate because of the risk of misleading the reader that an AI-generated image is a real photo of a living person. For other photographs I'm divided: in general I think we should not be using AI images if the veracity of the photo is relevant, but in limited cases (like the example of the flight cycle of a bird above) I could see photo-like AI images being appropriate. For most other uses including some medical uses I think AI images are fine: the example xaosflux gave of a diagram of a molecule is I think particularly relevant. If the AI generated image is accurate (and to be clear, this should be verified) I don't really see any reason for preferring manual image generation over AI image generation in this case.
One particular argument that I think should be addressed separately: the copyright issues could be relevant in the future but I think that we should not try to be a judicial crystal ball. In theory any number of other major changes to copyright could occur but we don't normally worry about, say, Congress putting public domain images back under copyright. Loki (talk) 22:40, 28 February 2025 (UTC)[reply]
What diagram of a molecule? The only examples I see from xaosflux are the error-riddled diagrams of alveolar epithelium (why are all Lp depicted outside of the macrophages?) and lungs (which, btw, now appear on the first page of google image results on the topic!). In my experience molecular models are particularly inapt for AI generation as bond lengths, orders, and other spatial relationships between atoms are extremely important but errors in them are also highly unlikely to be noticed by anyone. I asked Claude to create a model of naphthalene and it gave me a nonsensical disjointed structure with only single bonds. Even generative AI software designed for drug discovery is problematic. I'm sure what seem like simple diagrams in other topics are actually a lot more complex if you talk to someone with domain expertise.
Inaccurate AI-generated model of naphthalene from Claude
JoelleJay (talk) 20:03, 1 March 2025 (UTC)[reply]
100% this. As someone with domain expertise in chemical structures, I think the problems faced are intractable and likely to be similar for any scientific/technical field. In this case, it boils down to two basic facts of LLMs that are caused by their training: they create pictures meant to look pleasant first and accurate second, and they are trained on the totality of available resources, which is largely wrong. For chemical structures it is even worse, because every possible representation of every possible structure is always going to be wrong in some way, as they are always approximations. Very often only an SME will be able to discern which approximation can safely be made, and which characteristic should be emphasized; and this is a decision made not even case by case per structure, but per structure in the context of its mention. As I said before, I can easily see these unsolvable issues replicated in the overwhelming majority of domains. Failing to pass a blanket ban now would mean hundreds of discussions to carve out exceptions and specifics, which is the nightmare scenario I mentioned in my !vote and that I really wish we could avoid. Choucas0 🐦‍⬛💬 ⸱ 📋 13:32, 2 March 2025 (UTC)[reply]
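(For comparison with the Claude naphthalene attempt above: the following is a minimal sketch, assuming the RDKit cheminformatics library, in which the depiction is derived from the molecular graph itself, so bond orders and ring fusion cannot be hallucinated; the output file name is illustrative only.)

```python
# A deterministic alternative for structure diagrams: RDKit renders the
# molecule from its chemical description, not from pixel statistics.
from rdkit import Chem
from rdkit.Chem import Draw

mol = Chem.MolFromSmiles("c1ccc2ccccc2c1")  # aromatic SMILES for naphthalene
Draw.MolToFile(mol, "naphthalene.png", size=(300, 300))
```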
  • Support ban of any AI-generated image likely to be perceived by a reader as human-generated per JoelleJay and Remsense and isaacl and others. User generated imagery (as opposed to imagery sourced from reliable sources) already comes with risks of OR and verifiability issues. AI-generation makes it easier to create that kind of problem and harder to detect it. Photorealistic AI-generated imagery is always a problem except the ABOUTSELF case and cases where the specific image in question has been discussed in reliable sources. The use of AI to generate diagrams/illustrate abstract concepts may become acceptable as the state of the art improves (the current state of the art seems far worse than non-generative tools), but such content should be clearly marked as AI-generated and it should be acceptable to remove such content if it is not clearly verifiable as accurate. -- LWG talk 05:07, 1 March 2025 (UTC)[reply]
  • Support blanket banning, except when discussing the image itself (Crungus/Loab/Shrimp Jesus.) I just do not think it is a good idea to trust AI with presenting factual information, regardless of its possible improvement in the future. Wikipedia is probably the worst place on the Internet to implement AI-generated content, and it runs the risk of tanking our credibility. wikidoozy (talk▮contribs)⫸ 20:13, 1 March 2025 (UTC)[reply]
  • Support, with an exception for contexts where the image being AI-generated is important to the article. Even ignoring copyvio issues, it's hard to close that can of ethical worms once it's opened. ViridianPenguin and wikidoozy also say it best. User:Bluethricecreamman (Talk·Contribs) 22:05, 1 March 2025 (UTC)[reply]
  • Oppose blanket ban. Many of the potential issues with AI-generated images could also present themselves in human-created images, and our image use and core content policies would apply the same in any case. In particular, copyright-infringing images and blatantly false images are already eligible for speedy deletion under WP:G12/WP:F9 and WP:G3 respectively, and I support the enacted blanket ban on AI-generated images that purportedly depict living people, as there is no guarantee of accuracy and therefore of compliance with WP:BLP. On the other hand, AI could theoretically be used to generate sophisticated graphics – especially if no human graphic designer is up to the task – and is also present in some image editing tools (which certainly would not make sense to blanket ban). Complex/Rational 00:14, 2 March 2025 (UTC)[reply]
    @ComplexRational: Just a clerical note as one of the two partial-closers here: The blanket ban on AI-generated images that purportedly depict living people already exists, as enacted below in § BLPs. -- Tamzin[cetacean needed] (they|xe|🤷) 00:20, 2 March 2025 (UTC)[reply]
    @Tamzin: Noted and comment amended – that's what happens when I don't read the entire page. Thanks, Complex/Rational 00:27, 2 March 2025 (UTC)[reply]
  • Oppose: AI is just a tool for making images, with advantages and disadvantages like any other tool. Yes, an AI image may be misleading, just like a hand drawn picture, a picture created manually using software (like Windows' Paint)... and yes, even photos. Just ask Nikolai Yezhov. Cambalachero (talk) 17:07, 2 March 2025 (UTC)[reply]
  • Oppose. If an AI-generated image satisfies our general requirements for images, including attribution etc., I am not convinced that the mere fact an "AI" tool was used to generate it is, by itself, grounds for a ban. A ban would also contribute a lot of unnecessary friction and policing to our project, which I think is a distraction from our main mission. Tom (LT) (talk) 07:05, 1 March 2025 (UTC)[reply]
  • Support blanket ban If an article describes the shape and color of a bacterial species under a particular stain, is it fine to generate an AI image that is accurate to the written parameters? To me, the answer is still no because the AI image is a guess, whereas the article needs a real-world photograph to capture the nuances. The baseball diagram is not compelling because such figures can always be created with manual control using software or code. For concepts that cannot be photographed or diagrammed, such as the apocalypse, I think we remain best served by human art, rather than AI synthesis of human art, because the former represents the actual cultural attitudes, while the latter is a machine's attempt to satisfy the prompt with potential copyright issues. Andrew Davidson highlights smartphones' widespread use of automatic AI photo enhancement, but per Chaotic Enby, it has been clear from the start that we are discussing images generated with text prompts. Since 2004, WP:HIIQ has encouraged improving images by editing their brightness, contrast, etc., such that even if Photoshop switches from consistent algorithms to AI for its editing tools, photo editing will remain allowed as long as the underlying content remains accurate. ViridianPenguin🐧 (💬) 21:10, 1 March 2025 (UTC)[reply]
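(To illustrate the point above about manual control: a minimal sketch, assuming matplotlib and an illustrative output filename, that draws the infield diamond from the published field dimensions (90 ft basepaths, pitcher's rubber 60.5 ft from home plate), so every element is placed deliberately rather than guessed by a model:)

import matplotlib.pyplot as plt

# Infield: 90 ft basepaths rotated 45 degrees, home plate at the origin
bases_x = [0, 63.64, 0, -63.64, 0]    # home, first, second, third, home
bases_y = [0, 63.64, 127.28, 63.64, 0]

fig, ax = plt.subplots()
ax.plot(bases_x, bases_y, "k-")       # baselines
ax.plot(0, 60.5, "ko")                # pitcher's rubber, 60.5 ft from home
ax.set_aspect("equal")
ax.axis("off")
fig.savefig("diamond.svg")

(Every coordinate is checkable against the rulebook; nothing depends on what a model happened to be trained on.)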
    When an AI-generated image cannot be distinguished from a non-AI generated one (e.g. in the diagram example you give) where is the benefit in banning the AI-generated one? Thryduulf (talk) 22:01, 1 March 2025 (UTC)[reply]
    Of course there is no direct benefit to banning the AI baseball diagram indistinguishable from its manually created counterpart, but if the rule is "AI images are allowed outside of BLPs if they are accurate," then returning to my microbiology example, the uploader will naturally defend their AI image as consistent with the text, unaware that the subject potentially looks different due to nuances not covered in the text of reliable sources. Thus, I am arguing that despite the convenience of using AI to create some figures, we should forgo all AI images (except those used to describe AI) because the ease of image creation is less important than our commitment to accuracy. ViridianPenguin🐧 (💬) 18:51, 3 March 2025 (UTC)[reply]
    "unaware that the subject potentially looks different due to nuances not covered in the text of reliable sources. [...] the ease of image creation is less important than our commitment to accuracy" – the content in reliable sources is how we judge what is and what is not accurate, nuances included. Everything else is the OR you seek to avoid by banning AI. Thryduulf (talk) 18:55, 3 March 2025 (UTC)[reply]
    Any non-trivial (but note this is not the same as "non-subtle") "nuances" introduced by generative AI are OR deviations from RS and therefore should disqualify the image automatically. JoelleJay (talk) 21:11, 3 March 2025 (UTC)[reply]
    Any non-trivial "nuances" introduced by a human (whether subtle or otherwise) are OR deviations from RS and therefore should disqualify the image automatically. This is not evidence of any need or benefit to treating AI images differently to human images. Thryduulf (talk) 21:32, 3 March 2025 (UTC)[reply]
    The probability that an AI-generated image has such nuances ranges from "high" to "guaranteed" depending on complexity. JoelleJay (talk) 21:36, 3 March 2025 (UTC)[reply]
    Citation needed. In reality it depends on the skills and knowledge of the prompter, the exact AI used, and the quality of any fact checking. It is equally possible for a human generated image to have such nuances, especially with an unskilled creator. In other words you have to check the accuracy of every image regardless of the source, so once again the technology used is not relevant to whether the image is accurate. Thryduulf (talk) 23:26, 3 March 2025 (UTC)[reply]
    What "unskilled creator" would even attempt to draw a highly technical image, let alone create one that is as plausible-looking and professional as those from AI??
    You think people currently patrolling articles are applying the same level of scrutiny to every diagram they come across, with no assumptions about accuracy based on appearance of professionalism; and that the existing pool of SMEs is sufficient to handle orders of magnitude higher volumes of professional-looking contributions from orders of magnitude less-qualified contributors? You think it's ok for any unevaluated images to remain in mainspace indefinitely? JoelleJay (talk) 00:32, 4 March 2025 (UTC)[reply]
    "You think it's ok for any unevaluated images to remain in mainspace indefinitely?" No. I'm saying that all images should be evaluated, regardless of the source. Just because something looks "professional" does not mean it is any more or less likely to be correct than something that doesn't. I'm only semi-skilled when it comes to making diagrams, but I can produce something that looks professional based on my understanding of the sources without using AI. I know my understanding of the sources in subjects like chemistry would be woefully insufficient for it to be a good use of my time to even attempt that, and the existence of AI does not change that. The danger comes from those who think they understand the subject but actually don't - and they are equally likely to make a diagram with or without AI. Thryduulf (talk) 15:36, 4 March 2025 (UTC)[reply]
    "and they are equally likely to make a diagram with or without AI." Are you joking?! Literally in this discussion we have the professional-looking examples from xaosflux that the uploader absolutely would not have been able to create without AI. JoelleJay (talk) 17:33, 4 March 2025 (UTC)[reply]
    Yet again you have missed my point completely (maybe next time will be the occasion you read what I actually wrote rather than something else). Just because someone can use AI to make a diagram does not mean they will use AI to make a diagram, and the existence of AI does not make it more likely that someone will create a diagram they are unable to fact check. With the exception of generating images for the purposes of discussions we don't have any evidence of masses of inaccurate diagrams being created by anybody, using any technology. Thryduulf (talk) 20:05, 4 March 2025 (UTC)[reply]
    "Just because someone can use AI to make a diagram does not mean they will use AI to make a diagram, and the existence of AI does not make it more likely that someone will create a diagram they are unable to fact check." Again, we have evidence in this discussion of random users creating inaccurate AI-generated images that were professional-looking enough that e.g. xaosflux used them as examples of "potentially unproblematic" AI-generated images. Those (again, very inaccurate to SMEs) images are now on the first page of Google image results on the topic! JoelleJay (talk) 21:13, 4 March 2025 (UTC)[reply]
    This is a great argument for having some sort of policy or guideline that governs the use of these images, but it is not a very good argument for indiscriminately banning them all. Most people cannot safely bungee jump, or juggle knives, or do backflips on a trapeze, et cetera -- but we do not call the police and have circus performers arrested for doing so. jp×g🗯️ 03:50, 5 March 2025 (UTC)[reply]
    In this analogy, are the circus performers supposed to be the SMEs who would be qualified as prompt engineers for generative AI of complex, technical images...? If so, that is going to be a vanishingly small pool of people in general, let alone WP editors, and given some of the literature I've cited even they will have difficulty prompting an accurate graphic. The topics where new images will be most likely to persist in mainspace are also going to be exponentially harder to prompt engineer (few existing freely-licensed images, niche expertise necessary to assess accuracy). And this is all ignoring the fact that generative AI makes it 10000x easier for people with zero expertise to create plausible-looking graphics, as we've already seen here. To continue the analogy, this would be like letting randos loose in a knife-juggling simulator without telling them their mistakes still have real-life consequences.
    I personally do not want more AI-generated images showing up early in Google image search results, especially for technical topics that I'm not familiar with. Allowing them on WP—and any lesser "case-by-case", "leave it to the experts/consensus to evaluate" policy option is functionally equivalent to this—is exactly how that happens and will significantly accelerate the inevitable AI-ouroborosing. JoelleJay (talk) 04:42, 5 March 2025 (UTC)[reply]
    By this reasoning, original research itself would be allowed as long as it leads to conclusions consistent with reliable sources. WP:No original research is instead a blanket prohibition because of the assumption that the process of original research is likely to introduce unsupported ideas, analogous to AI image generation being likely to introduce errors. JoelleJay offers well-sourced arguments below that AI images are likely to proliferate in technical topics where the systems are least accurate. Non-experts are unlikely to interpret these AI images with sufficient suspicion, leaving them ill-served until we can depend on experts to expend time on discerning faults. ViridianPenguin🐧 (💬) 02:37, 4 March 2025 (UTC)[reply]
    Eh? That's not even remotely close to what I said. Accuracy is determined solely by how closely what we are evaluating matches what is found in reliable sources. Any deviation from reliable sources is equally inaccurate regardless of how and why that deviation came about. Thryduulf (talk) 15:42, 4 March 2025 (UTC)[reply]
    "File:Artist’s impression of the magnetar in the star cluster Westerlund 1", used in Magnetar.
    Artist's impression of a stellar quake, currently in Neutron star.
    Who cares if it is made by a neural network? We have a gigantic number of images, currently illustrating articles, that are artists' impressions. This means that someone made a guess, and it's not certifiable to reality. Do you think that File:Artist’s impression of the magnetar in the star cluster Westerlund 1.jpg (currently in Magnetar) is a photo? It is not. Nobody has ever been this close to a magnetar: you would die instantly. It's a guess.
    I object to special pleading — principles must possess some kind of basic logic. Here is the situation proposed:
    • ✓ Our image of a quark star is some guy's drawing who's never seen one (made with charcoal on vellum).
    • ✓ Our image of a quark star is some guy's drawing who's never seen one (made with a computer).
    • ✗ Our image of a quark star is some guy's drawing who's never seen one (made with a computer using a freaky-style neural network).
    This just doesn't make any sense.
    It may be the case that many images generated by neural networks are crap, or wrong, but many images are wrong crap, period. There is already a rule against images being wrong crap. This is not an argument for creating a new rule banning the ones that aren't wrong crap, for a reason unrelated to them being wrong crap. jp×g🗯️ 03:43, 5 March 2025 (UTC)[reply]
    So you really cannot comprehend how two images reliably published by professionals in the European Southern Observatory and NASA ought to be treated differently from a hypothetical unpublished AI-generated depiction from who knows where? JoelleJay (talk) 23:34, 5 March 2025 (UTC)[reply]
    You think that using a specific computer program to draw a star causes someone to be from NASA? jp×g🗯️ 08:01, 6 March 2025 (UTC)[reply]
    What? I think images solely created by NASA are probably from people affiliated with NASA...? JoelleJay (talk) 18:01, 6 March 2025 (UTC)[reply]
    You’re comparing an artist’s image published and endorsed by the European Southern Observatory (an organization with substantial subject-matter expertise and credibility) with random AI-generated images, produced by anyone, that will rely solely on Wikipedia editors (few of whom will be SMEs) for validation. There will be a vast proliferation of the latter if it’s not checked.
    Maybe we can add an exception for AI-generated images published by reliable sources (and explicitly noted as such), but still prohibit the vast majority of AI-generated images. 4300streetcar (talk) 18:03, 6 March 2025 (UTC)[reply]
  • Support a near-blanket ban. I see utility for AI images to illustrate related topics, for example Artificial intelligence art, but outside of that use case I am convinced by other participants that there is virtually no encyclopedic use for AI-generated images, and significant downside risk from proliferation of AI-generated images in articles. Dclemens1971 (talk) 02:37, 2 March 2025 (UTC)[reply]
  • This is a topic that requires a lot more nuance than the comments in this discussion are generally giving it. Here is a list of my issues with some of the major arguments I have seen in this discussion:
    • "We need to ban AI images because AI is an immoral technology that's destroying the environment": If that's your standard for banning a method for creating images, I have bad news about your camera.
    • "We need to ban AI images because tech companies are integrating more and more AI into everything and we need to Take A Stand against it now or Wikipedia will be overrun": All of the comments I have seen about this are really vague about what this hypothetical future where overenthusiastic tech companies cramming AI features into every conceivable product regardless of their utility harming Wikipedia actually entails. Online search algorithms starting to heavily deweight non-AI images or something similar to that would be a problem for Wikipedia whether or not we ban AI images, and some company trying to cram the site with promotional images generated using its particular AI service would be dealt with like any other promotional spam in a world without a ban on AI images.
    • "AI images will always be worse than human-made ones for depicting the totality of the popular cultural image of more abstract concepts": This statement relies on a prediction that no AI image will ever attain the cultural relevance that paintings or photographs have, and I do not think this technology has existed long enough for it to make sense to make that kind of prediction. There have already been multiple popular memes centered around images generated by AI, and it is entirely plausible that some future online movement or subculture will become closely associated with a particular AI image in such a way that the image is the most logical way to depict that movement, even if it doesn't strictly meet ABOUTSELF criteria, the same way specific paintings can become associated with specific religious stories or concepts to the point they make sense as the image for that article without the article being specifically about the painting. This argument also seems to be implying that Wikipedia editors are specifically designing images for articles to encapsulate the full cultural conception of a concept, which does not really happen that commonly. The one given example of Apocalypse specifically uses a pre-existing Eastern Orthodox painting depicting the concept, which would probably wildly clash with, for example, Calvinist ideas of how that topic should be shown in art. An AI model probably would not be better at depicting this concept than the current image, but it's a weird example to use if you want to demonstrate the necessity of human creativity.
    • "It will save a lot of time on pointless discussions about whether a specific AI picture could be considered valid for a particular instance if we just banned all of them": There are a few problems with this one. There are a lot of things that almost only ever show up in edits that make articles worse. If Wikipedia outright banned the word "fucking" and made technical changes to prevent anyone from publishing any edits with that word in it, it would save a lot of editor time cleaning up vandalism, fixing unencyclopedic prose, and dealing with long and arduous discussions about whether somebody's conduct was uncivil. However, since there are situations like quotes and articles on profanity where "fucking" is the actual best word that could be used, so that rule doesn't exist, and there are long discussions to deal with the edge cases. Nearly all of the examples of bad images that supposedly justify a blanket ban are things that would already be immediately removed for other policy reasons, and anyone seriously arguing that those particular images of a Venn diagram or napthalene absolutely needed to be in the articles would probably waste just as much time in that argument with or without one extra policy they're blatantly violating. Even the less obviously problematic ones like the lung infection or Haitian presidents would be obvious to people familiar with the topic, presumably the primary demographic editing the affected artices, and the offending images could be easily removed with a single post saying "that is not the correct anatomy of a lung" or "that is not a uniform that was ever used in Haiti". While there probably would be some actually time-consuming discussions where editors have legitimate policy-based disagreements on whether a particular AI image is appropriate for the article, this proposal would only actually save time in a scenario where a bright-line rule with obvious boundaries replaces the current system, but this is not a proposal for a rule like that. Moving a gray area to the less permissive end of a spectrum does not necessarily decrease the size of that gray area. Pretty much every supporter for a blanket ban also supports ABOUTSELF exceptions, which means we will be having multiple long discussions over whether or not an AI image qualifies for inclusion on ABOUTSELF guidelines, not to mention all the discussions of instances where it is legitimately unclear whether or not an image was generated by an AI.
    • The line between an image edited by AI and an image entirely generated by AI is a lot blurrier than people in this discussion seem to realize: The loophole I am about to mention is a problem with even the strictest versions of the ban that are being seriously considered. Let's say, for example, I really don't like the infobox image on Pachirisu for some reason. So, I decide to take a picture of a squirrel in my backyard, use one of those tools that edits part of an image with AI to highlight the squirrel and tell it to replace the squirrel with a Pachirisu, and upload the new AI picture of a Pachirisu. Since most of the image was not generated by AI, this does not fall afoul of any AI image generation bans, and since free pictures of a subject are preferred to non-free pictures of a subject and current jurisprudence says that all AI images are uncopyrightable, my new image bumps the old Pachirisu image from the article. This particular extreme example would probably get thrown out under IAR if nothing else, but it would take very careful wording to separate someone intentionally creating a Pachirisu out of a squirrel, and someone's phone accidentally turning a white circular light into the moon, and someone removing a distracting piece of paper they could've removed with other tools without violating policy. A policy strictly separating AI generation from AI editing that accounts for that contingency would probably need to go into difficult-to-answer if not unknowable questions about the intent of the person making the picture. It's inconvenient, but we may need to seriously consider banning people from using Samsung phones and other cameras which incorporate involuntary AI editing when we need to make sure a picture is accurate to reality. On the opposite end of the spectrum, a lot of recent advances in the study of protein folding have been made using similar neural network technology to the technology used in AI image generation models. Any image of a protein whose structure was discovered using AI tools could be considered an image made by generative AI under some wordings of the blanket ban, since the basis of the image was created by AI and any human-made parts of the illustration were added afterwards.
    • "Since AI images are currently considered to be uncopyrightable, Wikipedia can use them without worrying about non-free image restrictions": While Wikipedia may not have a contingency plan for if every world government suddenly decides that the public domain shouldn't exist anymore, that is because the public domain is well-established and its existence seems stable, and that is not the case for AI images. There are several lawsuits and numerous popular political campaigns currently trying to change the legal status of neural network-created content, and while none of them may end up succeding, their success is still a realistic possibility. It will probably be years before the copyright status of AI is settled legally in a stable way, and if it is decided in the end that neural network training sets need to follow copyright law, the longer Wikipedia spends allowing AI-generated content to be posted freely to the website, the more difficult it will be to excise all of the AI-generated content that now suddenly violates copyright rules. I have seen a few arguments in this discussion surrounding the topic on whether or not copyright law should apply to neural network training sets, but when it comes to copyright rules, Wikipedia does not follow what we think the law should be, Wikipedia follows what the law is, and when the law cannot decide what the law is, we should err on the side of caution.
    • "We can only allow AI models based on public domain/copyleft sources to avoid copyright issues": This is impractical for multiple reasons. We would need to go through every image in the dataset used by the neural network, if that dataset is even available, and make sure every single image in the dataset follows Wikipedia guidelines. Any updates to the dataset of a neural network will need to be closely monitored to check if any unusable images are added and any images making it unusable are removed, and the exact time any images made using the neural network were generated relative to changes to the dataset will need to be known to an unrealistic degree if those images are used on Wikipedia. Furthermore, this strategy relies on editors being able to identify the exact neural network a specific AI image came from, something that is not always possible.
  • Due to these factors, I think that AI images should be treated as non-free images until their legal status stabilizes. If consensus is against this choice, my second choice would be to oppose a blanket ban, but support some official guidance about the risks of inaccuracies in AI images and how an AI image is not necessarily better than no image. I generally agree with the comments stating that in most encyclopedic articles about concrete things, accuracy is the most important consideration in image choice, and that the current capabilities of AI image generation mean it can almost never attain this goal. I also agree that AI images are generally considered unprofessional in our current cultural climate, and this makes them unsuitable for a lot of the more abstract use cases they could have. However, due to the many reasons I have listed above, I do not think a blanket ban for reasons other than copyright is the most effective solution to these problems. 161.130.168.80 (talk) 07:22, 3 March 2025 (UTC)[reply]
    Responding to each of your points: First, this discussion begins with the clarification that "'AI-generated' means wholly created by generative AI, not a human-created image that has been modified with AI tools," so your reference to AI in cameras is irrelevant. Second, you mistake the argument that tech companies are recklessly pushing AI images for a claim that a company like OpenAI would personally mass-produce AI images for use on Wikipedia, when the discussion instead considers whether the widespread marketing of AI services will encourage users to upload slop. Third, your rebuttal only furthers my claim that the apocalypse article would benefit from further uploads of apocalyptic art across cultures but be harmed by an AI image condensing that creativity into an image that represents no one. Fourth, your analogy to the encyclopedic use of "fucking" in quotes or articles on profanity only justifies the use of AI images in articles on AI, which is why, as you acknowledge, the "blanket" ban would not actually block all AI images. Fifth, WP:NFCC1 already would not value an AI image of Pachirisu edited to make this Pokémon appear as if inhabiting a real backyard above the actual sprite in an article describing its design. As for your reference to protein folding studies supported by AI, I again refer you to the opening clarification that such research figures are not being debated here. However, I agree that the unclear copyright status of AI images means the WP:precautionary principle applies. Microsoft's Copilot Copyright Commitment does not extend to images created with DALL-E, and as far as I know, no other AI image service is offering legal protection if AI images are found to violate the copyright of images in their training data. ViridianPenguin🐧 (💬) 19:56, 3 March 2025 (UTC)[reply]
The idea that AI-generated illustrations can just be assessed for accuracy the same way we do for user-created images assumes that a) the approach to evaluating AI-generated and human-created images would/could be the same; b) AI-generated images would be introduced at the same volumes and in the same topics as we currently have for human-created images; c) AI-generated images can be presumed to be as accurate as those generated by humans. These are faulty assumptions.
a) It is far easier and quicker to reject an image that looks amateurish than one that looks professional. A hand-drawn or MS Paint diagram can be axed by anyone, regardless of subject familiarity. Before AI, amateurs also did not make professional-looking contributions in general, let alone in subjects they were completely unfamiliar with. Patrolling editors are therefore conditioned to assume that professional-looking contributions are most likely from professionals and warrant less scrutiny. But generative AI is very good at simulating the appearance of professional illustrations, and these can be generated from prompts by people with zero background in the subject. This means AI-generated submissions in technical subjects can easily bypass error detection by non-experts, and even experts will be less thorough if they assume the contributor is also an expert.
b) AI generation opens the door to image creation on any topic by anybody. It is very likely that these contributions will concentrate in the topics that don't already have images—which would include a lot of articles in technical subjects where the dearth of images is directly due to a dearth of experts.
c) Here is what the scientific literature has to say about the accuracy of AI-generated science illustrations (emphases mine):
The Promise and Pitfalls of AI-Generated Anatomical Images: Evaluating Midjourney for Aesthetic Surgery Applications

Results: All of [the] images produced by Midjourney exhibited significant inaccuracies and lacked correct anatomical representation. While they displayed high visual impact, their unsuitability for medical training and scientific publications became evident.

Conclusions: The implications of these findings are multifaceted. Primarily, the images' inaccuracies render them ineffective for training, leading to potential misconceptions. Additionally, their lack of anatomical correctness limits their applicability in scientific articles.


Evaluating AI-powered text-to-image generators for anatomical illustration: A comparative study

In this study, the authors input the prompt "detailed and accurate anatomy illustration of the human [skull or heart])" and "drawing of human brain" into [DALL-E, Stable Diffusion, and Craiyon V3]. [...] The authors' evaluation revealed that none of the three generators were able to produce illustrations that met the criteria of being both detailed and accurate. [...] Foramina, such as the mental and supraorbital foramina, were frequently omitted, and suture lines were inaccurately represented. The illustrations of the heart failed to indicate proper coronary artery origins, and the branching of the aorta and pulmonary trunk was often incorrect. Brain illustrations lacked accurate gyri and sulci depiction, and the relationship between the cerebellum and temporal lobes remained unclear.


A comparative analysis of text-to-image generative AI models in scientific contexts: a case study on nuclear power

We explored 20 AI-powered text-to-image generators and compared their individual performances on general and scientific nuclear-related prompts. Of these models, DALL-E, DreamStudio, and Craiyon demonstrated promising performance in generating relevant images from general-level text related to nuclear topics. However, these models fall short in three crucial ways: (1) they fail to accurately represent technical details of energy systems; (2) they reproduce existing biases surrounding gender and work in the energy sector; [...]


Evaluating the Accuracy of Artificial Intelligence (AI)-Generated Illustrations for Laser-Assisted In Situ Keratomileusis (LASIK), Photorefractive Keratectomy (PRK), and Small Incision Lenticule Extraction (SMILE)

This study highlights the inaccuracy of AI-generated images in illustrating corneal refractive procedures such as LASIK, PRK, and SMILE. Although the OpenAI platform can create images recognizable as eyes, they lack educational value. AI excels in quickly generating creative, vibrant images, but accurate medical illustration remains a significant challenge. While AI performs well with text-based actions, its capability to produce precise medical images needs substantial improvement.

Artificial intelligence in plastic surgery: Implications and limitations of text-to-image models for clinical practice

Despite the promising applications, text-to-image models currently face significant limitations, especially concerning their use in generating anatomic images for clinical purposes. These AI models are not yet equipped to handle the complexity and precision required for accurate anatomical representations

Art or Artifact: Evaluating the Accuracy, Appeal, and Educational Value of AI-Generated Imagery in DALL·E 3 for Illustrating Congenital Heart Diseases

Most AI-generated cardiac images were rated poorly as follows: 80.8% of images were rated as anatomically incorrect or fabricated, 85.2% rated to have incorrect text labels, 78.1% rated as not usable for medical education. [...] While experts have identified errors and questioned the utility of AI-generated images, non-experts like medical students and nurses found them more favorable.

JoelleJay (talk) 00:14, 4 March 2025 (UTC)[reply]
They generated text labels using the model? It really just sounds like they blindly typed 'draw a medical diagram' into the proompting box, hit "run" and got a bunch of garbage. Is this a joke paper or something?
I would not put much stock in someone "evaluating" the performance of an axe by holding the blade, hitting a tree with the handle, and writing down "0.0%", regardless of how many digits were after the decimal. jp×g🗯️ 04:00, 5 March 2025 (UTC)[reply]
The anatomical accuracy was the more important parameter being rated, and was explicitly evaluated separately from the text accuracy.[4] They also scored whether the image could be suitable after modification. They rated 2.5% of the 110 images as being anatomically accurate, and 1% as having accurate labels. 75.5% of the minority of images that received a score of "accurate" came from trainees (who made up less than 1/3 of the evaluators). The paper says they ran pilot tests of prompts like “Draw an accurate illustration of [congenitally corrected transposition of great arteries] like those in the Congenital Heart Disease: A Diagrammatic Atlas by Mullins and Mayer”, and then iteratively enhanced the prompts. JoelleJay (talk) 05:48, 5 March 2025 (UTC)[reply]
Single-shot generation (e.g. a 100% generated image with no inpainting, img2img, referencing, etc) is just a dumb, bad, and unskilled use of the tool. I don't think an intelligent person familiar with the tool would use it this way for this task. I don't think it demonstrates the (exceedingly difficult-to-prove) idea that the tool is incapable of being used effectively. jp×g🗯️ 08:09, 6 March 2025 (UTC)[reply]
The images were good enough that 25% and 30% were rated as accurate or medium-accurate by medical trainees and nurses, respectively (compared to 8% by cardiology experts). And anyway, this RfC is about 100% AI-generated images. What advanced tools do you think the editors uploading these unpublished AI-generated images are using? JoelleJay (talk) 18:38, 6 March 2025 (UTC)[reply]
More excerpts from the last study:

Very few of the images (2.5%) were considered anatomically accurate, while the majority (80.8%) were assessed as fabricated. In the evaluation of images’ text label, 85.2% were rated as useless, versus only 1.2% were considered useful.

Medical students, interns, and residents were significantly more likely to perceive the images as anatomically accurate, find the illustrative text useful, and consider the images both usable for medical education and visually appealing compared to other evaluators (p-value < 0.001). Nurses found the images notably more attractive and useful for medical education, and they also rated the accompanying text as highly useful, compared to other groups of evaluators (p-value < 0.001) as shown in Table 1. Conversely, the cardiology experts were significantly more inclined to perceive the images as (inaccurate, not attractive, not for medical education and their illustrative text being not useful) compared to the other evaluators.

Most AI-TIG images were rated poor regarding anatomical accuracy, illustrative text usefulness and usability for medical education (1–3%). However, generally the images were perceived as attractive in 15–22%.

This means that even people who have substantial familiarity with the subject, such as medical students/residents, are not equipped to evaluate the accuracy of AI-generated images of cardiac anatomy. Why would we ever trust random wikipedia editors on this?? JoelleJay (talk) 18:04, 4 March 2025 (UTC)[reply]
I would agree with most of the things the IP has said here, except for the idea of preëmptively treating things as copyrighted when there's so far been nothing to indicate this claim holds water. Some people filed some lawsuits — so what? People file lawsuits about all kinds of stuff. I could file a lawsuit claiming that every photo of a salt shaker on the Internet is illegal because I call dibs. Who cares?
Would we just shut down Wikipedia entirely if some schmo at Britannica said "this is illegal because, uh, I mean it has to be illegal because like, um, you know, it's just not fair"?
Rewarding litigious media conglomerates with the provisional assumption that they're right about everything is just not a good way to run a free-content encyclopedia. jp×g🗯️ 02:40, 5 March 2025 (UTC)[reply]
  • Oppose a total ban. Oppose a very broad ban. I do support a ban on photorealistic AI images of individual, identifiable people (living and dead). But if someone wanted to create an image that represents the same concept of Central obesity as File:Obesity6.JPG but didn't use a real photo of a specific person, then it doesn't really matter to me whether they do that with Photoshop, with AI, or with an ink pen and a piece of paper. Ditto for images of everyday objects (especially if they're non-photorealistic). What matters is that it really does look like the thing that is being illustrated. Also, I think that Commons should continue to require AI images to be tagged as such. WhatamIdoing (talk) 20:49, 3 March 2025 (UTC)[reply]
  • Oppose: Banning all AI images sitewide really doesn't make sense. Articles like Artificial intelligence art and Dead internet theory use AI-generated images to give context to the topic overall. I think MOS:IMAGEREL and WP:NOR should apply; if it's not relevant to the article, of no educational value, not covered in RS, and/or original research, remove it. Point blank. End of story. 🌙Eclipse (she/they/all neostalk • edits) 01:37, 4 March 2025 (UTC)[reply]
    @LunaEclipse, considering almost every ban supporter has also stated ABOUTSELF AI images would obviously be exempt, do you have an opinion on the use of AI-generated images that aren't in that category? JoelleJay (talk) 01:51, 4 March 2025 (UTC)[reply]
    JoelleJay, if it doesn't serve an encyclopedic purpose, then yes, get rid of them. — 🌙Eclipse (she/they/all neostalk • edits) 19:48, 8 March 2025 (UTC)[reply]
    And keep them away from medical and science articles. That's a disaster waiting to happen. — 🌙Eclipse (she/they/all neostalk • edits) 19:49, 8 March 2025 (UTC)[reply]
  • Support for a ban except where used to demonstrate relevant context per the examples provided by LunaEclipse. Articles concerning real things ought to have real images, articles concerning fictional things ought to have images from the applicable work(s) of fiction. CR (how's my driving? call 0865 88318) 01:46, 4 March 2025 (UTC)[reply]
  • Oppose a total ban, particularly and especially for trivial explanatory images and diagrams that clearly illustrate a concept or idea, as opposed to images of real and specific objects, where "object" is defined as "a real specific thing that exists in the world (including people, animals, etc)". Please note especially my use of "specific". While it is obviously not necessary in the following example, I would not generally object to an AI generated image of a generic chair for an article on chairs-- the article is not about a specific chair, but chairs in general. I would support a preference for non-AI generated images of generic objects, but in the absence of a better image, I see no reason that AI can't illustrate various things where an illustration of the general idea would be helpful to understand what the article is talking about. Fieari (talk) 07:04, 4 March 2025 (UTC)[reply]
    @Fieari, do you oppose banning AI-generated images of more complicated objects? If so, where would the cutoff be for complexity? JoelleJay (talk) 17:40, 4 March 2025 (UTC)[reply]
    I would oppose banning AI-generated images of more complicated "generic" objects (like my chair example, except obviously something a bit more complicated than a chair), with the caveat that more complicated objects have details that may be relevant that an AI image generator might miss. It would need to be up to the editors to absolutely ensure that the AI generated image covers the complexity well enough to serve as a useful illustration. This may be difficult, this may not even be possible depending on the image generator used and the skill of the prompter, but I don't want to ban it outright, merely say that caution and care should be taken. Fieari (talk) 23:51, 4 March 2025 (UTC)[reply]
  • Support blanket ban, except where the article is actually discussing the image (per Wikidoozy). I also support any lesser ban in preference to an outcome of no consensus. Stifle (talk) 15:52, 4 March 2025 (UTC)[reply]
  • Oppose blanket ban but I'd support a guideline (not policy) against them. An image is either good or bad, useful or not, free to use or not. It does not matter if it is drawn by hand or AI-generated. As with the BLP images discussed above, if there is no free image of XX, how could an AI generate one...? If there is a free image, we want that one (how much it can be "enhanced" by AI is another discussion). - Nabla (talk) 18:05, 4 March 2025 (UTC)[reply]
  • Oppose blanket ban. We need something a lot more specific and structured than broad strokes. Thanks, L3X1 ◊distænt write◊ 18:44, 4 March 2025 (UTC)[reply]
  • Support blanket ban on generative AI images with common-sense exceptions (discussion of image in question, etc.). Because of the possibility/likelihood of copyrighted materials being used to train LLMs, images created from those training sets are likely to fall afoul of our mission to provide a free-content encyclopedia, regardless of particular court rulings on whether the image itself is copyrightable. Since it's impractical to make that determination for each questionable image, a full ban is all that makes sense here. --SarekOfVulcan (talk) 18:52, 4 March 2025 (UTC) Moved from discussion sectionJoelleJay (talk) 19:05, 4 March 2025 (UTC)[reply]
  • Support ban in articles, with exception for articles about AI Allowing AI images will be shooting our reliability in the foot. Readers trust us as one of the last bastions of reliable knowledge on the internet--a position it took us 20 years to build. Why would we wipe that away in one fell swoop by allowing AI images? Nobody I know trusts AI images. When I see an AI image, I immediately discount everything about it and assume it is wrong, incorrect, fake, misinformation, or otherwise being used for malicious intent. AI is theft. It steals copyrighted works and spits them back out but worse. Let's also remember that we are the training base for AI, and we risk contamination by allowing AI proliferation. Anytime we use an AI image, it needs to be clearly labelled in the caption and all AI images in the database must be tagged accordingly. We should allow AI images only where the subject requires an AI image because the subject is itself about or related to AI. I don't think minor photographic touchups like the auto-blemish feature on most phones, or even some of the newer tools in Photoshop, count as AI. I'm talking about the generative prompt engines where you just type a series of words and it spits something back at you. Our readers expect real information, not slop, even if it's well intended. CaptainEek Edits Ho Cap'n!⚓ 19:01, 4 March 2025 (UTC) Moved from discussion sectionJoelleJay (talk) 19:05, 4 March 2025 (UTC)[reply]
    I am strongly in agreement that such images must be tagged and noted accordingly, although I do not think most of the claims here are true. jp×g🗯️ 08:28, 6 March 2025 (UTC)[reply]
  • Support blanket ban on images wholly created by generative AI, until such a time as we can be sure the input to said AI was not in violation of copyright. We are currently incapable of finding specific copyvios in AI-generated content, but by its very nature it is a composite of the AI's training dataset, and there is considerable evidence that large portions of the internet were fed to the major AI tools in violation of copyright. This isn't inherent to the software - the models could have been trained on PD datasets - but they were not. Support some common-sense exceptions such as discussions of AI images themselves. Vanamonde93 (talk) 19:07, 4 March 2025 (UTC)[reply]
  • (edit conflict) Support except in ABOUTSELF fashion: there is no reason to use AI images in article space except when the image itself is the subject of commentary, or the article is about the program that generated the image. AI art can't even generate realistic photos of real-world locations. A low-effort AI photo causes huge problems for the quality of images on Wikipedia (added after it was cut off for some reason). For all other uses that could be filled by AI, human photographers, including amateurs using only their cell phones, beat AI almost every time. Aasim (話すはなす) 19:16, 4 March 2025 (UTC)[reply]
  • Oppose blanket ban. AI is just a tool, and it's not going to go away. People still need to be responsible for the content they upload, and if they're using AI tools, it's their job to check the correctness of what the tool generated, just like it's their job to check machine translations of text, or that Photoshop didn't introduce erroneous artifacts. In the case of an AI-generated image, we should require that this be disclosed, what model was used, and what prompt was given to the model. We don't require this for photographs because at this point we all basically understand how photography works (although, with modern cameras, we basically get that level of detail anyway in the EXIF data). But AI is a new technology that we're all working to figure out (and keep up with as it evolves), so giving us more information about where the image came from makes sense. RoySmith (talk) 19:08, 4 March 2025 (UTC) Moved from discussion section CaptainEek Edits Ho Cap'n!⚓ 19:30, 4 March 2025 (UTC)[reply]
    PS, the problem of unknown provenance vis-a-vis copyright is a valid issue; of all the issues raised here, that's the one that concerns me the most. RoySmith (talk) 23:15, 4 March 2025 (UTC)[reply]
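(On provenance: a minimal sketch, assuming the Pillow library and a hypothetical photo.jpg, that dumps whatever EXIF tags the file carries. Camera make, model, and capture time are typically present in a genuine photograph and absent from a prompt-generated image:)

from PIL import Image, ExifTags

# Read whatever EXIF metadata the file carries (usually empty for AI output)
exif = Image.open("photo.jpg").getexif()
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag IDs to names
    print(f"{name}: {value}")

(EXIF is trivially stripped or forged, so absence is a hint about provenance, not proof.)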
  • Support. AI is trained on stolen copyrighted material. It is the absolute antithesis of our ethos. — Preceding unsigned comment added by JzG (talk • contribs) 19:36, 4 March 2025 (UTC)[reply]
    So are our editors. Who cares? jp×g🗯️ 02:34, 5 March 2025 (UTC)[reply]
  • Support for a ban except where used to demonstrate relevant context per Eclipse. The idea that AI images are in some way otherwise necessary is irrational in my eyes. — ♠ Ixtal ( T / C ) Non nobis solum 19:38, 4 March 2025 (UTC)[reply]
  • Oppose blanket ban. This can be decided at the article level. AI probably isn't useful for the vast majority of articles, but I could see the possibility it would be useful in some. A blanket ban means that even if a dozen editors at an article think a particular AI image would be helpful, policy disallows it. Valereee (talk) 20:42, 4 March 2025 (UTC)[reply]
  • Support blanket ban save for the obvious exception of articles about AI. There are too many issues - copyright, stolen images in training sets, plain old inaccuracy, etc - for AI-generated images to be useful. Singular exceptions can be considered when there is clear consensus, but the general policy should be to disallow them unless a specific exemption is granted. Pi.1415926535 (talk) 21:52, 4 March 2025 (UTC)[reply]
  • Support ban, allowing exceptions where specific consensus allows. Some of the opposition to a ban is from people who appear to think that AI-generated images are, if not beneficial, at least not harmful. Others who oppose the ban are citing specific edge cases, imagining places where AI-generated images may be useful, or predicting a future in which AI-generated images may become useful. I don't see any reason why these exceptions couldn't be dealt with as exceptions. Want an AI-generated image in an article about AI-generated images? This obvious use case will sail through a consensus discussion. Something less obvious? Maybe it won't. But I strongly feel, for many of the reasons listed by ban supporters above, which I don't feel the need to repeat, that we should be starting from the presumption of no AI-generated images allowed. Also, they should be very, very clearly labelled as such, if and when we do use them. -- asilvering (talk) 22:06, 4 March 2025 (UTC)[reply]
  • Support due to copyright concerns. Compassionate727 (T·C) 22:29, 4 March 2025 (UTC)[reply]
  • Support at least for the time being. I do have a concern that banning AI images might encourage users to lie about whether images are AI-generated, but users are already pretty bad about disclosing that. If the ban has unintended consequences, it can be revisited later. Apocheir (talk) 23:34, 4 March 2025 (UTC)[reply]
  • Support mostly blanket ban I see a use for AI images in articles about a specific AI-generated image or articles on AI images and engines. Currently, that's about as far as I'll go. Perhaps a skilled prompter can produce an accurate image, but then there's the issue of verification, of *confirming* it's an accurate representation. The field remains fraught at this time and results remain unpredictable in many cases. Cheers Mark Ironie (talk) 23:30, 4 March 2025 (UTC)[reply]
An inaccurate chemical diagram of benzene, drawn entirely by a person.
  • Oppose a blanket ban — so far I have failed to see any compelling argument in favor of a ban. Of the comments here that support it, a great number seem to be tangentially related commentary on social issues; many are just bare claims which are either false or completely unproven. jp×g🗯️ 03:06, 5 March 2025 (UTC)[reply]
Here are some things I would support, and in the past have supported:
  • Mandatory disclosure of AI-generated material.
  • Mandatory disclosure of AI-generated material, enforceable by it being against the rules.
  • Mandatory disclosure of AI-generated material, enforceable (as with copyvio) by speedy deletion of noncompliant material.
  • Mandatory disclosure of AI-generated material, enforceable (as with spam) by immediate deletion of noncompliant material and aggressive blocking of people who refuse to comply.
  • Mandatory disclosure of AI-generated material, as defined by some straightforward criterion (e.g. "was it used at all?")
  • Mandatory disclosure of AI-generated material, as defined by some complicated and nuanced criterion (e.g. a case-by-case determination of which tools should require disclosure)
  • Mandatory disclosure of AI-generated material via edit summaries.
  • Mandatory disclosure of AI-generated material via userpage or talk page disclosures.
  • Mandatory disclosure of AI-generated material via some yet-to-be-ascertained method.
I do not support banning all of it, because nobody can come up with a reason why this is necessary. We should not be in the business of dictating the contents of articles via sitewide policy unless it is exigently necessary. Copyright violations are banned by sitewide policy because they create legal issues for the project, not because we feel icky about it. WP:BLP exists for legal and moral reasons, based on real effects that non-compliant content has (e.g. falsely accusing people of grievous misdeeds). Thinking it's cringe is not an exigent necessity that requires us to override the editorial discretion of our users and administrators on every individual article and image across the whole project. We are capable of deciding whether stuff is cringe on our own, by a process of looking at it. jp×g🗯️ 04:11, 5 March 2025 (UTC)[reply]
  • Support this proposal by jpxg and the reasoning behind it.
· · · Peter Southwood (talk): 09:41, 5 March 2025 (UTC)[reply]
Dismissing arguments about reliability, accuracy, OR, volume and editor workload, etc. as just complaints about being "cringe"—an argument literally no one has made—is pretty ABF. JoelleJay (talk) 23:17, 5 March 2025 (UTC)[reply]
That is obviously not what I said. jp×g🗯️ 01:37, 6 March 2025 (UTC)[reply]
I think that the current situation is basically fine: there are very few images of this nature being used in Wikipedia articles.
There are relatively few instances in which this type of image is appropriate, and usually something else fits the bill better -- so they are not used very often.
The proposal being made here is that we replace the current situation, where editors use their heads and think about whether an image is appropriate and policy-compliant... with a new system.
Why must this be done? What emergency is going to happen if we simply let editors decide for themselves?
Well, these programs have existed for several years, allowing great volumes of images to be created for little to no cost. For several years there has been no restriction whatsoever on their use here, apart from "people will remove it if they think it's bad".
So where is the emergency?
We have not had an unparalleled avalanche of slop, that's subsumed our encyclopedia and ruined everything forever, like people are predicting here. If the current rules inexorably cause this to happen, then okay -- DALL-E 2 and Midjourney have been publicly available since 2022. What is the explanation for why the slopocalypse hasn't happened? Where's the slopocalypse?
I do not favor putting these images in every article. I do not think they are appropriate in many places. In fact, I have been criticized for favoring policy that is too biased against AI content (e.g. mandatory disclosure and possibly labeling).
But to ban them outright is just not a good idea. There is no urgent crisis that demands immediate action. The existing system is not incapable of dealing with the issue. Why do we need to create a sweeping new policy that institutes an unprecedented absurdity where we mandate which specific computer programs are allowed in the creation of otherwise acceptable images?
Because that's what this applies to! It does not apply to generated images that violate copyright. Those are already against copyright policy. It does not apply to generated images that are incorrect. Those are already against verifiability policy. It only applies -- it only changes the situation of what's allowed -- for images that would otherwise be acceptable according to our policies.
It feels like it's "doing something" about a hot-button subject, but it is the wrong war, at the wrong time, with the wrong weapons. jp×g🗯️ 08:48, 6 March 2025 (UTC)[reply]
  • Oppose a blanket ban, would support having guidelines around their usage. AI images are still an emerging thing, and they seem to generally be ineligible for copyright (no problems having them on Commons), so they could be very useful if used carefully, and it should be clearly indicated on the file pages where they are AI-generated. A blanket ban is completely unnecessary at this point. Thanks. Mike Peel (talk) 13:38, 5 March 2025 (UTC)[reply]
  • Support restrictions on their use, but not a blanket ban. I'll rehash the comments I made on Wikipedia talk:Computer-generated content a month ago: there are certainly issues with using AI to generate images of things that are underrepresented in photographs, such as depictions of lesser-known cultures. AI generators are biased by what is and isn't present in their training data, and so it's wholly inappropriate to get an AI to generate its depiction of a lesser-known culture and use that in a Wikipedia article. The same could be argued of human artists, sure, but AI is not a better solution to this, particularly given its propensity to hallucinate or get things completely wrong. I'm not on board with allowing people to use whatever image generator they want, using any prompt they want to achieve it, to generate images for our articles. However, I'm not in full support of a blanket ban because the technology could potentially be used by researchers and scientists to create images of long-extinct animals based on data they collect, which may be a valid use case; this is a far cry from using a public image generator like DALL-E, or (God forbid) someone creating their own model without disclosing what data they used to train it, which smells of WP:OR. In this case, I'd support using AI generated images if they are also used and supported by a reliable source, and not simply generated with a random Hugging Face model. —k6ka 🍁 (Talk · Contributions) 15:00, 5 March 2025 (UTC)[reply]
  • Support near blanket ban - with the obvious exceptions of articles about AI and AI-generated images as a phenomenon, and possibly other narrow exceptions. I wanted to echo Bloodofox and CaptainEek's concerns that widespread use of AI-generated images on Wikipedia would badly damage Wikipedia's credibility and trust as a reliable source of information on the internet, given widespread perception (at least in the US) of AI-generated images as "fake" and low-value. Perhaps we can revisit in a few years (when maybe culture has shifted and people accept AI more broadly), but right now I think your average reader would not take well to regularly seeing AI-generated images in articles. Seeing an AI-generated image would likely cast serious aspersions upon the quality and trustworthiness of the rest of the article, if not the project as a whole. 4300streetcar (talk) 15:48, 5 March 2025 (UTC)[reply]
  • Support near blanket ban with AI-related articles a possible exception. My main concern is that AI image generator developers have blatantly used non-free images without explicit permission to train their models. The generated work is not fully original, and the generated images often contain recognizable pieces of the original works, yet it is impossible to credit all the original authors. Even if the AI were trained solely on CC media, it would still normally be impossible to give proper credit. Fruit from a poisoned tree in my view. Jason Quinn (talk) 17:06, 5 March 2025 (UTC)[reply]
  • Oppose ban per WhatamIdoing and JPxG. Ajpolino (talk) 19:41, 5 March 2025 (UTC)[reply]
  • Strong support of blanket ban - Unless the AI image itself is the subject of the article's content, such images have no place in any Wikipedia article. AI-generated content has an atrocious error rate, and AI-generated images are absolutely not an exception to this. History with AI-generated content has shown that when a person is not responsible for generating the content, and it is generated with little thought or effort on a person's part, there is less concern for its accuracy and more concern for quantity and the amount of time saved by generating the images. Having no image is much better than having inaccurate images, and these low-effort AI-generated images are a race for quantity over quality with little to no concern for their accuracy in most cases. That some images produced this way may be beneficial and checked thoroughly for accuracy would be the exception by far, and not worth the time it would take for editors to discuss all of the images created this way just to find the very few that might be beneficial -- and they are beneficial only if you ignore Wikipedia's purpose, which is to create a free encyclopedia.
The non-free content criteria, for example, help to minimize legal exposure by limiting the amount of non-free content. This is still very new technology, and legal questions around using AI to generate images are far from settled; several lawsuits that explore this question are still ongoing.[5][6] Many of the comments opposing a blanket ban equate using AI-generated images with using Photoshop, or suggest that it is the same as a drawing, just on a computer. It is not. Those methods involve a person intentionally adding or omitting every single element in the resulting image, even if tools are being used to assist. With AI-generated images, the reverse is true. The AI uses its own data (typically copyrighted works scraped from elsewhere online) to generate images that it guesses will best match the input given to it by a person, but that person does not control all, or even most, of the aspects of what is generated. This is why AI-generated images cannot receive copyright protection[7] - the creative human element is lacking - and it is inaccurate to conflate a creative work that can be subject to copyright with a machine-generated image that cannot. Given Wikipedia's mission to promote free content, it would behoove us to refrain from allowing problematic, error-prone new technologies with unclear legal considerations to be used on our project. Let us collectively look before we leap. - Aoidh (talk) 22:21, 5 March 2025 (UTC)[reply]
This claim is objectively false. Models do not, and cannot, "use their own data" — they cannot contain any images from their training sets. People were saying this in 2023, and what I said then was this:
If you want a reason why they don't contain "fragments of source images", I can give one: it is physically impossible. I can't speak to what goes on with closed-source models, but checkpoints for publicly available models are a few billion bytes of neuron weights (e.g. Stable Diffusion XL 1.0 is 6.94 GB). The datasets these models are trained on constitute a few billion entire images (LAION-5B is 5.85 billion images). I would like to see someone explain how images -- fragment or otherwise -- are being compressed to a size of one byte.
One byte is eight bits: the binary representation of the number 255 by itself takes one full byte (11111111).
A single colored pixel (i.e. yellow-green, #9ACD32) is a triplet of three bytes (10011010, 11001101, 00110010).
The smallest file on Wikimedia Commons, a transparent 1x1 GIF, is 26 bytes. This 186 x 200 photograph of an avocado (as a JPEG -- a highly optimized, lossily compressed file format) is eleven thousand bytes. Even if we disregard the extensive literature concerning how neural networks (and the subset of generative models that create images like these) work, it is not mathematically possible to store training images in the models. It is not possible to store fragments of training images in the models. Existing copyright law does not provide any mechanism by which a single pixel of a copyrighted image (or less than one pixel) can constitute infringement. jp×g🗯️ 01:48, 6 March 2025 (UTC)[reply]
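For anyone wanting to verify the arithmetic above, here is a minimal sketch that divides the quoted checkpoint size by the quoted dataset size. Only the two figures quoted in the comment are used; the script itself is illustrative and not part of the original discussion:

    # Back-of-the-envelope check: bytes of model weights per training image,
    # using the checkpoint and dataset sizes quoted above.
    checkpoint_bytes = 6.94e9   # Stable Diffusion XL 1.0 checkpoint (~6.94 GB)
    training_images = 5.85e9    # LAION-5B training set (~5.85 billion images)

    bytes_per_image = checkpoint_bytes / training_images
    print(f"{bytes_per_image:.2f} bytes of weights per training image")
    # Prints roughly 1.19 -- about one byte per image, i.e. less than a
    # single 3-byte RGB pixel, as the comment above notes.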
I am aware of how these models store their information: they train on data that is then stored as a complex series of parameters and weights that allow the model to generate or interpret visual content based on learned patterns and associations. However, saying that this is not "using their own data" is inaccurate, and your reply is aimed at an argument that I am not presenting. What I am saying is that because this technology is so new, there are unsettled legal questions as to the images generated and the process used to create them, and their ineligibility for copyright highlights how drastically different these images are from someone creating an image themselves. I'm not saying the data that is used stores portions of images, and that's irrelevant to the concern. - Aoidh (talk) 04:04, 6 March 2025 (UTC)[reply]
"AI-generated content has an atrocious error rate" – And? Human paintings also have a high error rate. This just means many of them are not suitable for illustrating articles and either aren't added to them or get removed if they are. This seems to be a common misconception in this thread. "not worth the time it would take for editors to discuss all of the images created this way" – Not many of these have been added so far. The time required would only increase if you facilitate people not disclosing that they used AI in their production workflow. "conflate a creative work that can be subject to copyright with a machine-generated image that cannot" – Good, then, that it's not conflated. "This is why AI-generated images cannot receive copyright protection" – That's amazing and a great boost to the commons / public domain, perfectly in line with open content projects like Wikimedia's – a mission or principle that it should at the least not actively work against. "to refrain from allowing problematic, error-prone new technologies" – Humans using Photoshop and drawings are also error-prone, so to be consistent with that rationale, those should also be banned. "beneficial only if you ignore Wikipedia's purpose, which is to create a free encyclopedia" – That is unexplained and false. Prototyperspective (talk) 14:14, 24 March 2025 (UTC)[reply]
  • Oppose a blanket ban. I see very little difference between user-created imagery and AI-created imagery. Both have possible accuracy or copyright issues, but we should handle those on a case-by-case basis. Anne drew (talk · contribs) 22:48, 5 March 2025 (UTC)[reply]
  • Support near blanket ban with commonsense exception, i.e. for articles dealing with the topic or in instances where the image itself might be either the topic or integrally related to the subject of the article. FWIW I would support a similar ban on user created art and similar images, but that is a topic for another discussion. -Ad Orientem (talk) 00:09, 6 March 2025 (UTC)[reply]
  • Oppose. The broad generalizations made in support do not justify a universal or near-universal ban. Editors in individual cases can determine the quality and accuracy of content such as images, and whether a particular image is a net benefit to an article compared to alternative images or to no image. Adumbrativus (talk) 02:03, 6 March 2025 (UTC)[reply]
  • Strong support with possible exceptions similar to Ad Orientem and others; an article about AI should be able to have examples if they are WP:DUE, or if an article has content which mentions AI-generated images. I think additionally, we could decide to allow AI-generated images that are purely schematic and not attempting to look like a human-created image. They should clearly not look like a human-created image, should be confirmed to be correct, and should be speedily deleted if they are not. An AI-generated Venn diagram or a flow chart? Maybe! But an AI image that tries to look like a photo/painting/drawing, absolutely not. And if we can't find the consensus to allow those acceptable usages (I think the first is a stronger allow than the second), I'd prefer a full ban for now and creating carve-outs later. Skynxnex (talk) 04:42, 6 March 2025 (UTC)[reply]
  • Support ban except in the obvious cases as described elsewhere (eg articles about AI/AI products). As an alternative to an outright ban, we must at least clearly mark AI-generated images as such. And by 'clearly' I mean right there in the image caption, not on the file page or buried in the description. In addition to the accuracy/original research concerns described by others above, I have a different concern. I think AI-generated images do not communicate the values and strengths of Wikipedia to readers. What I mean by this is that one of the main selling points of Wikipedia as a trusted resource is that it was created by other real humans, in dialogue and review with one another to work towards the most reliable and accurate overview of a topic. I think AI-generated images communicate the opposite - a laziness or lack of interest in content created with human intent. "If this image in the article is AI-generated, is the text?" "Did anyone review this?" "Is this website just a ChatGPT-style amalgam of words and sources?" I think we would do ourselves a great disservice by giving readers these doubts. Sam Walton (talk) 09:25, 6 March 2025 (UTC)[reply]

Random break

I've thrown in this break so we don't have to edit the whole section just to vote or add a comment. Nyttend (talk) 04:16, 11 March 2025 (UTC)[reply]

  • Very strong opposition to an over-simplified blanket ban. I would say I generally land on one of the extreme ends of the AI safety question in general, in that I am firmly of the belief that the near-term and catastrophic consequences of unrestrained bleed of AI into our social processes are much larger than most people are even in a position to understand, so far. And yet even I find the suggested course of a broad ban here to be a poorly considered and overly panicky suggestion of a solution. There are just far, far, far too many benign and helpful uses of potential AI work product. For every hallucinated stat or non-representative image (which, along with every other class of problem image discussed here thus far, can be checked through more targeted rules), there's an equal or greater potential for a well-curated, well-supported graphic added by an editor with physical accessibility issues or just a shortfall in technical know-how. Or a short animation that enhances the description of a complicated physical topic that was created by feeding in more piecemeal imagery. Or any of a large number of innocuous and perfectly policy-compliant and project-goals-oriented educational materials. Any long- (or even middle-)term solution to handling the question of AI content is going to have to be more nuanced than a phobic renunciation of anything that bears the mark of an AI.
    (I mean, don't get me wrong, I think the Butlerian Jihad may be just around the corner, but until the long-term consequences of the mechanized extraction of value of people by other people reach their accelerated conclusion in the AI era, with every kind of abusive technology imaginable, and our social systems melt under the pressure into a cataclysm which we will only possibly escape through a revolution in human perspective and behaviour... we should probably not just toss out the actually innocent and useful products of AI technologies on the way there.) SnowRise let's rap 09:16, 6 March 2025 (UTC)[reply]
  • Oppose ban. For one thing, it's almost unworkable or will shortly become so, as there's not any reliable way to detect whether some images were produced using AI tools or not. But more widely than that, a blanket ban like that would be shooting ourselves in the foot. AI tools are just tools; we might as well ban images that have been generated using any other type of software and insist that all diagrams are produced using paper and pencil. A more nuanced approach - for example banning the use of AI tools that are known to violate copyright - might be acceptable, but again would be difficult/impossible to enforce. Some of those supporting a ban have done so on the basis of the quality of AI images (famous cases such as pictures of hands with the wrong numbers of fingers etc.) but that's not a reason to ban all AI images; it just means we should pay attention to the quality of the images we're using, which applies equally to non-AI-produced images. WaggersTALK 10:04, 6 March 2025 (UTC)[reply]
    > might as well ban images that have been generated using any other type of software
    Most non-AI tools do not synthesize photorealistic content from whole cloth, and the ones that do (via 3D modeling and rendering, highly skilled sketching or painting, etc.) require a significant amount of human skill and time to use. Since it ends up being a significant investment of effort, good-faith creators are motivated to ensure the content is accurate. Even then, photorealistic 3D-rendered content is fairly rare on Wikipedia. Even non-photorealistic paintings and drawings are relatively rare, except on historical articles that pre-date widespread photography, or as ViridianPenguin points out, to illustrate things like the Apocalypse that fundamentally cannot be photographed.
    Yes, photographs can be selectively and misleadingly composed, or deceptively edited. However, the vast majority of photographs are neither, and there's a general belief in society that photographs can be presumed to portray real objects. They may have inaccurate captions or not be from a good angle, but the photograph itself is of something real. AI-generated images are inherently fake (or "synthetic" if I want to use a less-charged word) and generate substantial doubts about accuracy, and as a result do not carry a presumption that they accurately depict real objects. This makes photorealistic AI-generated images presumptively unsuitable for an encyclopedia, except when used to illustrate AI phenomena itself or possibly other narrow exceptions.
    Yes, inaccurate diagrams can be created using existing non-AI software, but require users to manually input what the diagram says. Since this takes time and effort, in most cases we can presume the diagram to have been created in good faith by users with subject matter expertise. AI-generated diagrams do not carry those presumptions - someone with no subject matter expertise can easily ask an AI image-generator to create a nice-looking diagram that ends up being completely wrong.
    Even if AI tools can be carefully used by skilled prompt engineers who also happen to have subject matter expertise, it is difficult to evaluate the level of care that was put into the generation and review of an image. How can I tell if an AI-generated image or diagram was carefully prompt engineered and reviewed by an expert, or was lazily generated by someone with little expertise typing 3 words into an image-gen?
    All of this is at the *editor* level. At the *reader* level I think widespread AI use would be a disaster for Wikipedia's reputation and trustworthiness. AI is heavily associated with slop, and I would personally think lower of any publication (be it a traditional encyclopedia, a newspaper, or anything else that purports to be a credible source of information) that uses AI to illustrate real things (other than when AI itself is the subject). 4300streetcar (talk) 20:00, 6 March 2025 (UTC)[reply]
    I guess to put it more succinctly, the problem with AI:
    • Photography: very low barrier to entry, but depicts real objects
    • Deceptively edited or composed photos (fake): high barrier to entry to pull off convincingly, and rarely done as a result
    • Photorealistic 3D rendering: high barrier to entry
    • Manually creating good-looking diagrams: moderate-to-high barrier to entry
    • AI: very low barrier to entry, but easily generates inaccurate or misleading images
    4300streetcar (talk) 20:16, 6 March 2025 (UTC)[reply]
  • Oppose ban, per many of the other arguments. If an image otherwise meets our standards, it is fine. Jeepday (talk) 16:29, 6 March 2025 (UTC)[reply]
  • Oppose blanket ban. There is no need to hamper Wikipedia's ability to grow further by cutting out another tool. However, I will say that the points on copyright and accuracy are valid, and I have yet to see an AI image that is very "encyclopedic" in my opinion. But limiting our options by jumping the gun now would not help build a reliable encyclopedia in the future, so we should wait a bit to see if AI images can be reliably used without problems of copyright or quality. tytech038 (talk) 16:44, 6 March 2025 (UTC)[reply]
  • Oppose blanket ban, per the above oppose arguments, esp. Snow Rise. Also, we should be wanting more young editors to join to keep us relevant. Increasingly I'm finding Zoomers consider working without AI akin to going into a fight with one arm tied behind their back. FeydHuxtable (talk) 19:14, 6 March 2025 (UTC)[reply]
  • Support ban unless it is ABOUTSELF or published by a reliable secondary source. The issue with LLM-generated images is that they inherently constitute original research and synthesis when they are generated by Wikipedia users. However, if they are published by a reliable source and can be used without copyright restrictions, I don't see any major issue with their usage (AS LONG AS THEY ARE LABELED "AI-generated"). And for the oppose arguments stating that we should simply treat AI-generated images with the same standards as any others: should the WP community really have to deal with such a major time sink? The amount of effort and time it would take to sort out the inevitable wave of AI slop - whether from good faith users, vandals, or profit-driven corporations - simply doesn't seem worth it. 296cherry (talk) 03:00, 7 March 2025 (UTC)[reply]
    Images are an exception to our original research policy. See WP:OI. Anne drew (talk · contribs) 20:16, 8 March 2025 (UTC)[reply]
    The referenced guidance doesn't say that images are an exception. It clarifies that not all original images are original research. isaacl (talk) 05:54, 9 March 2025 (UTC)[reply]
    There has not been an "inevitable wave of AI slop" in all this time, and there is no reason to think there will be. The amount of time it takes to draw a good-quality image by hand or in Krita/Photoshop is too large for people to actually do it to visualize, for example, some concept in art/fiction (plus the quality may be low) – this is one example where, if anything, AI improves time-efficiency. As for images generated by Wikipedia users: they don't necessarily add these to any articles, and there are also free AI images on Commons created by AI-prompt engineers (sometimes called AI artists) or people who aren't Wikipedians.
    I'm certainly one of the three people who spent the most time and effort on identifying and organizing AI images on Commons, and can tell you on that basis that there aren't that many and that it was quite simple to categorize them all by AI software used, by what they depict (including genre), and by whether they include misgeneration. AI images could certainly be labelled as AI-made. Moreover, there is lots of non-AI art by Wikipedians / Commons users that helps illustrate subjects (examples: Gandalf, Character race) and there is no good reason to ban just one manufacturing-method but not others. Prototyperspective (talk) 10:41, 17 March 2025 (UTC)[reply]
    @Prototyperspective wrote, "... by AI-prompt engineers (sometimes called AI artists)" and then three sentences later wrote, "... and there is no good reason to ban just one manufacturing-method ..." So, "AI artists" "manufacture" art.
    An automotive corporation's robot-manufactured car door is not by an "automotive artist" because of its use of a "manufacturing-method."
    Ooligan (talk) 23:44, 17 March 2025 (UTC)[reply]
    It's not one or the other. Where is your point? It doesn't matter to what I said what you call them. Call them 'plumbers' or 'engineers' or 'evil people I really hate'; it doesn't matter. I've been working a lot to get non-AI media onto Commons, being nearly the only one over there who imports images from many scientific studies, so please don't distract from the subject. The subject is whether or not to censor a broad-purpose useful tool, and you addressed zero of my points. Prototyperspective (talk) 23:48, 17 March 2025 (UTC)[reply]
    What percentage of "scientific studies" use AI images? @Prototyperspective -- Ooligan (talk) 01:07, 18 March 2025 (UTC)[reply]
    Hopefully nearly none. I don't know why you're asking; probably you misunderstood something I said. Prototyperspective (talk) 10:46, 18 March 2025 (UTC)[reply]
  • Support ban with exceptions: aboutself, relevant AI images (re)published by RS, yield to local consensus, AI images outside mainspace. I also support adding a simple, standardized AI image disclosure to image captions per SamWalton. I don't agree that the scope of the problem hasn't justified taking action yet. My single biggest concern with AI images is pseudohistory; there have been a lot of ancient gods and legendary figures given AI depictions on Wikipedia, and we might have no way to know if they are even close to what the ancients thought of them as. On one occasion, someone used AI to generate images of museum artifacts depicting an ancient god, and added them to the article as though they were genuine items. Outside of the excepted uses, AI images have so far been near-universally unacceptable; only one example that doesn't fall under the exceptions (Artificial_planet#In_fiction_and_popular_culture) is still in use (see talk). I can't think of an acceptable general use for AI images (I tried, and even experimented with adding one to an article before), and given the data it seems that nobody else can either. If something good comes up, we should discuss that locally and work out whether to use it. I would oppose a total blanket ban that did not have exceptions for aboutself and images sourced from RS, due to the value of these images. 3df (talk) 09:32, 7 March 2025 (UTC)[reply]
  • Support blanket ban. Just to back up: Wikipedia values text above pictures; we can have an article without pictures but not without words (and we are very willing to go without pictures). The issues of non-free content (non-free is more than copyright), accuracy, and the informed reader are too important to allow the free and easy introduction of these images. That said, I would also allow a consensus-substitution option (in non-BLP/medical matters) that in each case requires an affirmative consensus that all the issues are properly addressed in a particular case and that the image is highly useful and disclosed to the reader. What seems apparent is that the opposes are basically saying 'there is a narrow use' -- so we need a general ban and a way to appropriately and methodically identify the narrow use cases, if they exist. Alanscottwalker (talk) 15:01, 7 March 2025 (UTC)[reply]
  • Strong support for blanket ban. AI images hallucinate, and are in no way a substitute for legitimate human-involved images UNLESS the AI image in question is the topic of the article (like if a politician shared a conspiracy theory using a DALL-E image). The chances of misinformation and synthesis are too high, not to mention many AI images just look tacky -- too tacky for Wikipedia at least. Plasticwonder (talk) 19:29, 8 March 2025 (UTC)[reply]
    But AI-generated images are "human involved images". A human writes the prompt, evaluates the output for accuracy and quality, and decides whether to add it to the article. Nobody is suggesting AI agents should be adding images to articles by themselves. Anne drew (talk · contribs) 20:16, 8 March 2025 (UTC)[reply]
    I hear this, but the page Wikipedia:WikiProject_AI_Cleanup/AI_images_in_non-AI_contexts is full of examples of cases where a human wrote a prompt and decided to add the result to an article without adequately evaluating it for accuracy and quality, and has few to no examples of AI-generated imagery being used appropriately (except for the ABOUTSELF exceptions that almost everyone here has endorsed). It would strengthen the "AI isn't the problem, users are" position if we could point to some examples of positive use of AI images on the wiki. Otherwise we are weighing a hypothetical benefit against a real documented cost. -- LWG talk 20:44, 8 March 2025 (UTC)[reply]
    How would I come up with such a list? High quality AI-generated images are nearly indistinguishable from manually created images by definition. There is Commons:Category:AI-generated images, but that mostly contains the slop that was easily identifiable as AI generated. Anne drew (talk · contribs) 21:07, 8 March 2025 (UTC)[reply]
    If there are in fact editors who are using AI to produce high-quality, vetted images, I would hope at least some of them are labeling their images as such, and we ought to be able to point to those images as examples. One of the most universally-agreed upon principles in this chat is that AI use should at minimum be disclosed (this isn't specific to AI: for example if a photo has been retouched in photoshop and reuploaded to commons it would be normal to explain that in the image description). -- LWG talk 21:30, 8 March 2025 (UTC)[reply]
    Do we have any examples of well-engineered AI images that could fool us? The AI-generated images online that I have seen contain fatal errors like buildings morphing oddly or that weird glossy effect. If those are the best we have, it's not exactly crazy to assume that we could, decently reliably, identify the vast majority of AI images. Though, I may also be an old man, trusting images that I really ought not to. ✶Quxyz 23:35, 8 March 2025 (UTC)[reply]
    Literally seconds on Google found no shortage of images that I couldn't tell were AI generated (other than, in some cases, due to the subject matter, e.g. a caveman taking a selfie isn't going to fool anybody no matter how good the rendering is). Thryduulf (talk) 01:19, 9 March 2025 (UTC)[reply]
    I went through AI images in non-AI contexts and I do have to say that a few may be confusable. However, an expert in the field may (hopefully) be able to identify them as odd. Most depictions of microbes fall under this category. Some images look realistic at a glance, but fall apart upon inspection, usually because a pattern is wonky. Portraits are harder to identify, though I do think some of them use techniques that are not from the time period (though I am not an expert, so I am not really sure). The photos that would fool me, I believe, are closer to the organic and realistic side. Id est, there is little fantasy and they usually depict humans. ✶Quxyz 13:18, 9 March 2025 (UTC)[reply]
    This only applies to images that a) have not been checked and adjusted by humans to fix any possible issues and, b) more importantly, are meant to illustrate something factual rather than e.g. sci-fi concepts, fantasy genres, etc. What you said is why I think we shouldn't have LLM texts, since they can be false while sounding plausible. But your point does not apply to AI images in total. Prototyperspective (talk) 10:27, 17 March 2025 (UTC)[reply]
    You’ve been bringing up this “fantasy/scifi” argument repeatedly, but it feels like a deflection from a deeper problem that you address later in your comment, while coming to an incorrect conclusion. AI-generated images are just as capable of “be[ing] false while sounding plausible”, as they too are prone to hallucinations (e.g. wrong numbers of fingers, nonsensical text). pythoncoder (talk | contribs) 22:58, 17 March 2025 (UTC)[reply]
    That they are prone to it doesn't mean every image has these issues. Just like not every image made with Photoshop is good and should be included in every Wikipedia article, AI images too often aren't good to include and/or of low quality. Furthermore, you now seem to understand better why prompt engineering is a skill. Prototyperspective (talk) 23:24, 17 March 2025 (UTC)[reply]
    I do worry that AI will still add details that, while not central to the image, still partially compromise some of its value. With Photoshop, the artist has basically complete control over every detail of the image. Anything that the artist does not change is still likely to be true or blank. I look at some of these AI prompts and I feel like a lot of it is throwing in keywords and hoping a good output arises. Also, it fills in everything with detail. If the AI does not know a detail, it will still fill it in with hallucinations. ✶Quxyz 16:12, 18 March 2025 (UTC)[reply]
    What you describe means using these tools to produce a good result can be difficult, but it doesn't mean every result is necessarily bad. It's very simple, so I wonder why many here keep having that misconception. Also, you can edit AI images in Photoshop. Often the result isn't that bad and just needs something removed or fixed. Usually people revise an image over multiple prompts, using a prior result as an input image to then change some part of it, or use an AI tool to remove things -- but again, it can also be edited with GIMP/Photoshop/what have you. "I look at some of these AI prompts and I feel like a lot of it is throwing in keywords and hoping a good output arises" 1. Many use it that way, but that doesn't mean all do. 2. That doesn't mean the result is always bad. 3. It appears to you that way because you may not have advanced experience with using these tools. That you get some impression or feel some way doesn't mean it is that way. It just shows how hard it can be to get a good result, while at the same time one can quite easily get a good result with two words if it should just show some animal (such images would not be useful). Here you can see some examples for styles to use or combine, and that is just the style, and just a subset of these. Here is another subset (see section "Materials"). The style may be the easiest part.
    In short, it's good to worry about it but editors who watch articles can make the decision; images made using AI aren't always bad and sometimes potentially useful. Artists may have complete control (actually most don't) over an image but they can just as well add or screw up details that compromise its value. Those images are simply not used or removed. Same for images made using this method. Prototyperspective (talk) 16:59, 18 March 2025 (UTC)[reply]
    You can't just say 🌈prompt engineering🌈 and have all of my fears go away. A lot of species lack images. For example, on Liatris tenuifolia I had to rely on a top-down view of the flowers. I do not believe there is an available side-profile image. To prompt engineer an image would require ridiculous attention to detail like ensuring the correct environment, right scale, colour, leaf shape, et cetera. That et cetera makes me worry for the veracity of AI images. This also applies to events, geographical formations, people, and other non-general topics. ✶Quxyz 17:12, 18 March 2025 (UTC)[reply]
    "to [create] an image would require ridiculous attention to detail like ensuring the correct environment, right scale, colour, leaf shape, et cetera." Everything you say is equally true of human-created images, and is not an argument against using AI but an argument for verifying all images before adding them to an article. Thryduulf (talk) 17:22, 18 March 2025 (UTC)[reply]
    Outside of identification, little work is needed on the end of an artist for an encyclopedic-level photograph. If I were creating a drawing of it, it would require about as much, if not more, work than an AI image. ✶Quxyz 17:35, 18 March 2025 (UTC)[reply]
    If you read my comment as "🌈prompt engineering🌈" you seem to have trouble with reading, and I recommend you please read it again. "A lot of species lack images" I was saying that AI images are not suitable for illustrating species. Once again, please read more carefully. Prototyperspective (talk) 17:30, 18 March 2025 (UTC)[reply]
    I made that point to address a larger trend that I feel like may happen: most articles without photographs or other human-generated images may have AI images attached. That's why I also made the effort to eliminate the vast majority of articles from having AI imagery. Also, I have been looking over this discussion and if large amounts of people responding to you are having issues understanding your comments, maybe you want to look over them for issues. ✶Quxyz 17:42, 18 March 2025 (UTC)[reply]
    It's something similar to fearmongering. It hasn't happened in all this time and there is no reason to think it will. Moreover, having an AI image attached when no free media image is available, and the image is of good quality and helpful, can in many cases be a good thing. Number counts of people don't convince me; reading and understanding what I wrote and making good arguments that also address my points do. Prototyperspective (talk) 17:49, 18 March 2025 (UTC)[reply]
    @Prototyperspective- you wrote, "Moreover, having an AI image attached when no free media image is available and the image is of good quality and helpful in many cases can be a good thing."
    Ooligan (talk) 01:23, 24 March 2025 (UTC)[reply]
    I'm not sure what your point is? Those images seem perfectly appropriate for the setting. They seem unlikely to be useful on Wikipedia, but (a) they aren't on Wikipedia, and (b) one AI image not being appropriate in one context says absolutely nothing (positive or negative) about other AI images or other contexts. Thryduulf (talk) 01:30, 24 March 2025 (UTC)[reply]
    Yes.
    [Image caption: Highly used image misrepresenting AI images' quality]
    There are less controversial subjects, but that one is based on a) descriptions by witnesses (which sound absurd), b) drawings by witnesses (both linked there), and c) cultural depictions in, for example, fiction books. There would be no good illustration / image to use otherwise, and the most-used AI image example (old, low prompt skills – see on right) actually shows a cow getting abducted by a UFO, just at humorously bad quality. AI can do much better, as can be seen there. What's your point, even? You do not address any point. Prototyperspective (talk) 11:01, 24 March 2025 (UTC)[reply]
    Thank you for your reply to my question. -- Ooligan (talk) 07:20, 25 March 2025 (UTC)[reply]
  • Support, except for when articles involve the topic of AI generation like Misinformation about the 2024 Atlantic hurricane season and a couple other articles related to the 2024 POTUS election. ✶Quxyz 23:23, 8 March 2025 (UTC)[reply]
  • Support ban except when the image itself is the subject of discussion in the article, per my previous comments on the subject. 01:48, 9 March 2025 (UTC) — Preceding unsigned comment added by Cremastra (talk • contribs)
  • Heavily restrict - to be acceptable, an AI image must be labeled as such, must be created by someone with the SME to in principle create the same image at the same level of detail using traditional means, and should be disfavored over traditional media of comparable quality due to hallucinations. Obviously be more permissive in articles about AI or where an AI image is relevant to the article subject, and more strict in sensitive areas (BLPs, medical, etc). If this is most cleanly enforced by a blanket ban, then so be it. An example of an AI image I would be fine with is if I asked a program to draw me a diagram of a curve with tangent, secant, and normal lines on the curve labeled - I have the SME to generate that diagram in MS paint, and to check for hallucinations in that AI image, so the AI is plausibly a timesaver there. Tazerdadog (talk) 22:30, 9 March 2025 (UTC)[reply]
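As a concrete point of comparison for the diagram Tazerdadog describes, here is a minimal non-AI sketch in Python with matplotlib. The specific curve (f(x) = x^2) and the chosen points are assumptions made for illustration, not anything specified in the comment above:

    import numpy as np
    import matplotlib.pyplot as plt

    # Illustrative curve: f(x) = x^2, with derivative f'(x) = 2x.
    f = lambda x: x ** 2
    df = lambda x: 2 * x

    x = np.linspace(-1, 3, 200)
    a, b = 1.0, 2.5  # point of tangency, and a second point for the secant

    fig, ax = plt.subplots()
    ax.plot(x, f(x), label="f(x) = x^2")
    ax.plot(x, f(a) + df(a) * (x - a), "--", label="tangent at x=1")
    ax.plot(x, f(a) + (f(b) - f(a)) / (b - a) * (x - a), ":", label="secant through x=1, x=2.5")
    ax.plot(x, f(a) - (x - a) / df(a), "-.", label="normal at x=1")  # normal slope = -1/f'(a)
    ax.set_ylim(-2, 9)
    ax.legend()
    fig.savefig("tangent_secant_normal.png")

Because every line in such a diagram comes from an explicit formula, the image is checkable for hallucinations in exactly the way the comment describes.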
    "should be disfavored over traditional media of comparable quality due to hallucinations" I don't understand your point here. If an image has hallucinations, it isn't of comparable quality, and obviously we should never prefer an inferior image regardless of the source of either. If the image is of comparable quality, then it doesn't have hallucinations, and quality can't be a reason to favour one image over another when both are the same quality. Thryduulf (talk) 22:47, 9 March 2025 (UTC)[reply]
    To my knowledge, AI images are basically entirely hallucinations, in a form that a human can recognize. It's the same premise as LLMs being just fancy autocorrect that can make decent paragraphs instead of the mumbo jumbo that your phone's autocorrect would make. ✶Quxyz 16:20, 18 March 2025 (UTC)[reply]
  • Oppose. We should permit AI-generated illustrations, at least. Let's say I'm a scientist with very poor image-creating skills, so I use an AI model to generate a diagram of something in my field. If the diagram is accurate, who cares whether I made it myself or used an AI model to create it? Either way, as the uploader I'm responsible for any content errors. Aside from limited exceptions, e.g. an AI-generated "photo" to illustrate an article about AI image generation, we should prohibit AI-generated files that look like real-life images (photos, X-rays, scanning-electron-microscope images, etc.), but something that's not meant to represent something visible-at-any-scale isn't fundamentally problematic. For example, if someone created a new version of WP:Wikipe-tan, it wouldn't matter if the artwork were done primarily by a human or a computer. Nyttend (talk) 20:33, 10 March 2025 (UTC)[reply]
    In the case of WP:Wikipe-tan obviously it would be fine since that is not encyclopedia content and the OR and verifiability standards don't apply. I think a concern is that "a scientist with poor image-creating skills uses an AI model to generate a diagram of something in his/her field" potentially has issues with WP:OR, if the only way to verify the accuracy of the image is to be a subject-matter expert and it can't be verified through external sources. -- LWG talk 21:29, 10 March 2025 (UTC)[reply]
    "if the only way to verify the accuracy of the image is to be a subject-matter expert and it can't be verified through external sources" Firstly, that's not stated or implied by Nyttend's comment, nor is it something that differs between AI-generated and human-generated images, making it yet another completely irrelevant argument. Thryduulf (talk) 22:09, 10 March 2025 (UTC)[reply]
    There's a Wikipe-tan image at the top of the Moe anthropomorphism article; someone could create a new image in this style and place it there. As long as it matches the style, it shouldn't matter whether the image is created by a human or by an AI model. My point is that we shouldn't have blanket restrictions on using AI-generated images of artwork. And as far as the scientist: imagine I'm expanding an article with information from a systematic review, and I want to include a graph of some of the review's most important information, so there's no difficulty with sourcing or relevance. If an AI model can create a better graph than I can with Windows Paint, why shouldn't I be allowed to use the AI image? (As far as I understand, copyright wouldn't be an issue with images not created by a human.) All that should matter is the quality of this specific image (e.g. graphical quality and accuracy), so if you'd accept my Paint graph, there's no reason you shouldn't accept my AI graph. Nyttend (talk) 00:26, 11 March 2025 (UTC)[reply]
    I agree with this in principle, though I'd feel better about it if I had ever seen AI-generated graphs that were of acceptable quality. But that goes into the question of whether/how AI should be used right now, which is a different question than whether/how it should be permitted wiki-wide. As Thryduulf has rightly said many times, the quality of AI continues to improve, so even if all AI-generated content is trash now we don't need to assume it always will be. My personal position (leaving aside the copyright and attribution questions that are above my paygrade) is that AI-generated images should only be used in contexts where there is no danger of the average reader misunderstanding the nature of the image. That rules out photorealistic images except in ABOUTSELF contexts, but potentially leaves space open for charts and diagrams. For charts and diagrams, I agree with Thryduulf that AI-generated images should be subject to the same standard of verifiability as human-created images, except that we should acknowledge that AI-generated images require a different type of scrutiny, because the clues that cue us in to potentially inaccurate content are different in AI-generated images, and AI is prone to different types of errors.
    Wikipe-tan specifically is a weird case because it is a product of the Wiki community, so any image made by a Wikipedian is just as legitimate as another. But I think it would be less acceptable for someone to AI-generate an image of a character like Winnie the Pooh or Māui, and totally unacceptable for someone to generate an image of George Washington or Alexander the Great and insert it in an article where it might not be clear to a reader that it was AI-generated. -- LWG talk 01:06, 11 March 2025 (UTC)[reply]
    ...What kind of scientist would create a graph in Windows Paint? Everyone has access to Google Sheets and other free graphing apps, which require the same amount of input by the user as they would need for an AI tool (or are you suggesting that you feed the review itself to the AI and have it spit out a graph??). The people who actually have sufficient subject matter expertise to create such a graph can at the very least use R (which is also free), if not proprietary graphing software. Sorry, but this "scientist" situation just does not exist. JoelleJay (talk) 02:29, 11 March 2025 (UTC)[reply]
    Agreed, and this is why I don’t have a problem with generally banning AI images. If the image is a simple diagram, then AI isn’t needed; any basic graphic design tool would work. If the image is complex and/or realistic, AI would be misleading and thus shouldn’t be used. 296cherry (talk) 03:10, 11 March 2025 (UTC)[reply]
    So every single subject-matter expert also knows how to use graphing software? Go find me a study that says this. I'm personally not very skilful in graphing software; if I were going to create a chart with this kind of data in one of my fields of expertise (historic preservation and religious history), I'd have to use Paint or go to WP:GL/I. Nyttend (talk) 04:03, 11 March 2025 (UTC)[reply]
    Yes, I am 100% certain every scientist who is an expert in a field that uses graphs knows how to use at least Google Sheets to produce a graph... No one gets a PhD and numerous publications in experimental science without learning how to visualize data. If someone is not competent enough to create a chart in a topic where such charts are standard then they don't have the necessary expertise to evaluate the accuracy of that chart. I would also question how they would even use AI for that purpose since the data entry would presumably be similar. JoelleJay (talk) 19:50, 11 March 2025 (UTC)[reply]
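To make the point about free charting tools concrete, here is a minimal sketch in Python with matplotlib (matplotlib is my choice for illustration, not a tool named in the discussion; the data values are made up):

    import matplotlib.pyplot as plt

    # Made-up example data, standing in for figures taken from a source.
    items = ["Item A", "Item B", "Item C"]
    values = [42, 17, 8]

    fig, ax = plt.subplots()
    ax.bar(items, values)
    ax.set_ylabel("Value (units as given by the source)")
    ax.set_title("Example chart from manually entered data")
    fig.savefig("example_chart.png")

Because every number is entered by hand, the chart renders exactly the entered data; there is nothing for a generator to hallucinate.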
    But "it's not needed" is not a good reason for a ban. Can we write articles about politics without using The New York Times as a source? Of course. That wouldn't mean that declaring TNYT an unreliable source that should not be used would be a good idea because of that. Cambalachero (talk) 04:07, 11 March 2025 (UTC)[reply]
    That isn’t the main reason. The main reason is that AI has the ability to produce obscure errors on charts and diagrams that may only be noticeable to subject-matter experts. Anyone can create a chart with an AI image generator, and as long as some editors think it’s “good enough”, it could be added to an article in spite of misinformation. And I already know I’m going to get a reply saying: “but human-made images can have errors, too”. The difference is that the quantity of these problematic images is several orders of magnitude lower than the amount that could flood the site if AI generation was permitted. Is slightly-easier diagram production worth the time sink the community would have to deal with? 296cherry (talk) 20:15, 11 March 2025 (UTC)[reply]
    The problem with this "open the floodgates" line of argument is that AI image generators have been around for a couple of years already and we simply have not been inundated with low-quality generation, despite there being absolutely no policy against it beyond those that apply to all images. AI technology is improving, and people are becoming more knowledgeable about what to look out for so the risks of something bad getting added to articles is actually decreasing. Thryduulf (talk) 20:49, 11 March 2025 (UTC)[reply]
    "that's not stated or implied by Nyttend's comment, nor is it something that differs between AI-generated and human-generated images" That is fair, and I apologize if I read something into the comment that wasn't there. My understanding is that those supporting a restriction of AI images have raised the concern that AI images might contain errors that aren't apparent to people who aren't specialized experts, and it appeared that the response to that concern was to say that we should accept AI images if the people uploading them are experts, which sounded like WP:OR to me. I don't necessarily disagree with you that the same issue arises with human-generated images, but the claim I am seeing people make here (which I am not in a position to judge) is that the kinds of errors AI typically introduces are more difficult for lay editors to detect than the types of errors humans typically introduce, which I think is a fair claim that at least deserves to be taken seriously. -- LWG talk 01:16, 11 March 2025 (UTC)[reply]
    Let's take a random subject, advanced physics, for an example. I can't evaluate anyone's information of any sort in an article about string theory, and I assume the typical person also can't, so most of the solid work on string theory articles will be done by subject-matter experts because they're the only ones who understand it at all. If a source makes some obvious statements (e.g. "string type A is twice the length of string type B, which is three times the length of string type C"), anyone with graphical skills could upload an image comparing the three string lengths, but most of us won't be able to understand sources properly and shouldn't go uploading diagrams/graphs/etc. of data if we don't understand it properly. Obviously we can't vet individual editors' qualifications, so we have to AGF as far as diagram uploads: we trust that the uploader knows his field, but of course the diagrams are subject to challenge on factual grounds. This is true regardless of how the image was generated. Imagine that you have an AI model with great graphics skills, and you tell it "Provide me a bar chart showing [item A] with [status B], [item C] with [status D], etc." If it does a great job of this, why should we care how the uploader produced the image? Nyttend (talk) 04:16, 11 March 2025 (UTC)[reply]
    I agree with that imaginary scenario. If an AI exists that takes data entry from a human and consistently produces charts that accurately reflect that data, I have no objection to using that AI to make images here. I don't know why an editor would do that when reliable, user-friendly data-to-chart generators already exist. My personal position is that we should ban images likely to be confused as non-AI generated (for example photorealistic images) but that other uses of AI may be acceptable if their AI-generated nature is disclosed. For charts specifically, the conversation upthread seems to indicate that the current state of the art is pretty bad, so an editor choosing to generate charts with such an inferior tool when superior tools exist raises concerns for me about their competence, and I am very concerned about the consequences of practices like asking an AI to read an academic paper and summarize the results in a chart (I have colleagues in my non-wiki world that have tried those kinds of uses with spectacularly bad results). -- LWG talk 20:00, 12 March 2025 (UTC)[reply]
    If the contention is mostly with diagrams, then perhaps AI-generated explanatory diagrams and charts can be permitted, but AI-generated photorealistic images could still be subject to a blanket ban except when used to illustrate AI itself. 4300streetcar (talk) 17:00, 11 March 2025 (UTC)[reply]
  • Support except where the image itself is the subject of commentary. Concerns with accuracy, copyright, and credibility. Nikkimaria (talk) 00:15, 11 March 2025 (UTC)[reply]
    1) What matters is whether the resulting image is accurate, not whether there is a chance it will be inaccurate. Images made or edited with Krita/Photoshop can also be inaccurate. Inaccurate ones are simply not used, or removed. And there are many subjects, like sci-fi tropes, where 'accuracy' means something different than in many other kinds of articles. 2) Copyright is dealt with on Commons, and there is no copyright issue, since machines can learn from public images no less than humans can learn from them or be inspired by them when browsing the Web or going to an art exhibition. 3) I don't know what you mean by credibility, but a site that claims to be a defender of free speech and open culture / open content, yet outright, patronizingly, and indiscriminately bans a general-purpose tool that gives HD-image-making capability to the people, loses all credibility in my mind. Prototyperspective (talk) 10:58, 18 March 2025 (UTC)[reply]
  • Oppose as written, support heavily discouraging usage. I personally don't see any scenario where wholly AI-generated images should be used on the encyclopedia; however, I do recognize that there might be edge cases that we have not considered where an AI image would be the only suitable image. (Note: My opinion was sought on this issue IRL by an off-wiki individual, which made me stumble on the RFC; not sure if that counts as being canvassed -- mentioning for transparency.) Sohom (talk) 16:37, 11 March 2025 (UTC)[reply]
  • Oppose blanket ban, just as I'd opposed a ban on user-created illustrations, photoshopped images, images made using software that isn't technically "AI", etc. There aren't many cases where I think an AI-generated image would be helpful, just like I think there aren't all that many cases for other kinds of illustrations, but no to a full ban. As others have stated, and as I've said elsewhere, I don't think we should have AI-created images that implicitly or explicitly purport to be photos. I don't think we should have anything-created images that do that, other than images created with cameras. And where precision is important, we'd need to carefully consider the tool (certainly midjourney, dall-e, etc. are not precise). Here's an analogy, though, for just one set of use cases: we currently have an AFAIK uncontroversial project where users created illustrations of dinosaurs based on textual descriptions in scientific literature, and those images are then submitted for review. Where review, and comparison against the sources, is concerned, why is my pencil drawing inherently good for text-to-image conversion while absolutely every tool with "AI" in its branding is somehow inadequate? I'm not saying there are many use cases for this, and I'm not saying we should prefer AI to humans for illustrations -- the standards just don't seem consistent. Most importantly, assuming this doesn't close with consensus to implement, maybe we can stop with these "ban [technology X]?" proposals for a little while and focus on incremental/narrower frames? — Rhododendrites talk \\ 17:30, 11 March 2025 (UTC)[reply]
    absolutely every tool with "AI" in its branding This question is limited exclusively to wholly AI-generated images.
    Some differences between a person hand-drawing a dinosaur from text descriptions and AI-generation are that every detail in the former has human intent that can be explained, and the types of errors are predictable. AI-generated art cannot explain the reasoning behind any single detail, and the errors it introduces are not similarly predictable. This significantly alters the approach to evaluating its accuracy. JoelleJay (talk) 20:05, 11 March 2025 (UTC)[reply]
  • Strong support for blanket ban for all the good reasons noted above, as well as the common-sense exceptions. AI is the shiny new toy that kids of all ages want to play with. But please allow this old man to remind you all that AI is not human: it has no soul, no conscience, and it does not care about truth or honesty. It is simply a tool, like a hammer or saw or monkey wrench - it can be used for any and all purposes, good or evil. Now is the time to take a stand for honesty and integrity and HUMAN judgement, before Wikipedia is overrun with fake images of all kinds. The machine does not care if it is right or wrong - only humans do. Tread carefully, my friends. Textorus (talk) 17:58, 11 March 2025 (UTC)[reply]
    With the exception of the bizarre references to a soul, that all reads like a reason to oppose a blanket ban. AI is indeed simply a tool that can be used for good or bad, we already have policies and practices to deal with the bad so why ban the good? Thryduulf (talk) 18:25, 11 March 2025 (UTC)[reply]
  • Oppose blanket ban / Support limited ban Emir of Wikipedia (talk) 20:04, 11 March 2025 (UTC) (please Reply to icon mention me on reply; thanks!)[reply]
  • Support Blanket Ban - with a carve-out for articles about AI images. Per Nikkimaria. Carrite (talk) 16:35, 12 March 2025 (UTC)[reply]
  • Support ban of any AI-generated image likely to be perceived by a reader as human-generated as per LWG (see [8]) NikolaiVektovich (talk) 17:41, 13 March 2025 (UTC)[reply]
    They can get a caption that clarifies it's AI-generated. Prototyperspective (talk) 10:50, 18 March 2025 (UTC)[reply]
    If we make a requirement that AI-generated images be clearly marked as such with a caption, then that would satisfy my concerns and I would consider an outright ban unnecessary at that point. -- LWG talk 18:49, 18 March 2025 (UTC)[reply]
  • Oppose. AI is a tool. Its output should be judged under the same criteria as human-produced images. The argument that it currently is not the best at producing accurate diagrams is one that will probably become less accurate with time as the technology improves. –yutsi (talk) 18:54, 13 March 2025 (UTC)[reply]
  • Oppose blanket ban - I think it's a tad too soon for us as a community to make a call on this with such authority one way or another. The technology is too new; we are just too used to technology moving at the pace that it does. All things being equal, I'd prefer editors argue the reliability/usefulness of a given image on a case-by-case basis - regardless of origin - over arguing and demanding deletion purely on the basis of its origin. I am unopposed to a policy that requires such images be tagged in some manner. —Sirdog (talk) 22:52, 14 March 2025 (UTC)[reply]
  • Support heavy restrictions as per Dclemens1971: AI images should be limited to articles about AI, clearly labelled as such, and used in a manner that avoids readers confusing them for non-AI images. Cortador (talk) 12:18, 16 March 2025 (UTC)[reply]
    @Cortador: Do we also use images made or edited with Photoshop only in Photoshop-related articles but not in articles about let's say modern digital art genres or articles about subjects in fantasy art? --Prototyperspective (talk) 17:58, 16 March 2025 (UTC)[reply]
    Can we not do whataboutism please? There have already been many comments on this page about why Photoshop and AI are different. pythoncoder (talk | contribs) 03:23, 17 March 2025 (UTC)[reply]
    That they're different in some ways doesn't mean they're different fundamentally or sufficiently to censor this particular toolset.
    It's not whataboutism, it's bias against a particular tool people don't like, e.g. because they think that, unlike humans who look at public art, these models "steal" the images they are trained on when a machine 'looks' at them. My comment points out this hypocrisy and is a more than valid question / objection to the vote above. Prototyperspective (talk) 10:25, 17 March 2025 (UTC)[reply]
    I don’t think making accusations of “bias”, “censorship”, and “discrimination” is the winning argument that you think it is. pythoncoder (talk | contribs) 02:57, 18 March 2025 (UTC)[reply]
    Thanks for your comment, please reread my comment if you think that was my argument. You did not address any of my points but demonstrated you dismiss them as mere accusations for correct terms used as part of 1) my arguments and 2) my questions (mainly why works that were produced using this particular widely-used widely-available toolset should be censored). Also relevant to what you said are WP:NOTCENSORED and WP:RIGHTGREATWRONGS. Prototyperspective (talk) 14:06, 18 March 2025 (UTC)[reply]
    I just read both of the pages you linked and I don’t see how either of them are relevant to this RfC aside from having shortcuts that look relevant. In the future, please read before you link. pythoncoder (talk | contribs) 16:34, 18 March 2025 (UTC)[reply]
    Wikipedia may contain content that some readers consider objectionable or offensive—even exceedingly so. […] Some articles may include images, text, or links which are relevant to the topic but that some people find objectionable. While we can record the righting of great wrongs, we can't actually "ride the crest of the wave" ourselves. It's clearly relevant. And in relation to both of these policies, note also that no existing policy would support censoring AI images: one would have to create an entirely new policy, based on a few (likely offended) users, about censoring this particular tool, just like one would have to create a new policy to censor anything else (such as illustrations not approved by a verified artist or images produced using Photoshop). It would go against the open content and free speech principle of Wikipedia. And it would also go against the goal We will innovate in different content formats, develop new software functionalities for Wikimedia projects […] Build the necessary technology to make free knowledge content accessible in various formats. Support more diverse modes of consumption and contribution to our projects (e.g. text, audio, visual, video, geospatial, etc.). […]. Prototyperspective (talk) 17:45, 18 March 2025 (UTC)[reply]
    1) The “movement strategy recommendations” are not policies and are generally not taken seriously by Wikipedia editors (for example, those recommendations have previously called for WP to loosen notability requirements, a suggestion which was promptly laughed off by the community). 2) Your RGW quote could apply equally well to allowing AI images. 3) Here’s one more link for you: WP:NOTFREESPEECH. “Wikipedia is free and open, but restricts both freedom and openness where they interfere with creating an encyclopedia.” In this case, AI slop interferes with creating an encyclopedia — see the comments elsewhere from users who have been on the front lines of AI cleanup. pythoncoder (talk | contribs) 18:12, 18 March 2025 (UTC)[reply]
    We have a handful of anecdotes from some people who have been dealing with some bad examples of AI, but not a single scrap of evidence that existing policies are unable to deal with those bad examples - just vague fearmongering about one potential future in which things they don't like come to pass, even though there is absolutely no indication this is likely (and indeed evidence that it is unlikely). Even if there was some indication that some change was necessary (and as yet there is exactly none), there is no reasoning given that something as massively encompassing as "ban all AI images" is a necessary and proportionate response to that. Thryduulf (talk) 18:37, 18 March 2025 (UTC)[reply]
    Feel free to start a discussion about edited pictures. Cortador (talk) 06:30, 18 March 2025 (UTC)[reply]
Scifi concept visualization (cooking robot)
Nearly the only example of art helping to explain the genre 'science fantasy' (for many subjects there are few free files)
  • Very strong oppose for more or less the same reason we don't ban images made with Photoshop either. There are many positive use-cases of AI images – for example some art style for which there is no or just one free media example, or for visualizing some event according to descriptions, or some visual example of an Internet meme, or some scifi / fantasy concept for illustration, etc etc. Even if you can't think of any AI image that would be useful in your mind, things shouldn't be outright fully banned if you can't imagine in your head any positive use case in advance – or, even worse, opposed just because you don't like the technology. There is no requirement that people be able to conceive of valid applications in advance, nor that we ban everything made with a novel useful tool if they can't do so. I'm a bit shocked by how much luddism / technophobia rather than rational considerate decision-making pervades Wikipedia. If there is some outright ban I'll really reconsider if I should spend my time and energy on this site that could just make thoughtless biased knee-jerk reactions because some users are offended by this or that. This needs to be decided on a case-by-case basis. Also many people are not aware just how many subjects there are without any or sufficiently many good free media examples. AI image tools are tools by which we the people finally have a way to visualize with good quality anything we have in mind; it's a great boon to the public domain / open content. It doesn't mean everybody is good at it (prompt engineering can be tricky depending on the case) and that we should unmindfully embed AI images in many places, but there are cases where these can be helpful to the reader as much or maybe nearly as much as images made with other tools like Photoshop/Krita. At least they could be. Since many people jump to close-minded emotions-fueled conclusions based on what they personally have been exposed to (again, lots of AI art is bad and most not useful) and basically all images in this thread further such simplistic views of "all AI images = bad", I embedded two examples on the right that aren't as bad and that visualize some hypothetical technology and a genre where there is virtually no free media available. If you care so much about what tools have been used to produce the image, I'd be okay with it if it's required to specify that in the caption for AI images. Please do not discriminate against a particular toolset. --Prototyperspective (talk) 17:58, 16 March 2025 (UTC)[reply]
    Why on earth would we need a picture of a cooking robot? And what is it cooking? A pile of breadcrumbs and gold dust, lightly drizzled with carbonated piss? I mean seriously, this is your good example? – Joe (talk) 14:22, 18 March 2025 (UTC)[reply]
    As mentioned below there are actual cooking robots in operation, so an article may want a picture (though as discussed the AI-generated image does not resemble real cooking robots and would be misleading). The "piss" comment is also needlessly provocative. 4300streetcar (talk) 16:24, 18 March 2025 (UTC)[reply]
    Just a brief clarification: it would, if anything, go into a potential section about humanoid robots as cooking robots, if there is such a section (as an illustration labeled as AI-made). Example of a humanoid cooking robot. I'm not saying it should be there.
    Also some scifi stories may be about humanoid cooking robots. Other examples may be better – for scifi concepts like domestic humanoid robots, an example further from actually researched subjects would be better: nobody would complain that an AI image of a wormhole portal is unrealistic, because those don't exist even as prototypes, and what is meant to be illustrated is not the research subject but the pop-culture portrayal of wormhole portals or whatever. Prototyperspective (talk) 17:09, 18 March 2025 (UTC)[reply]
    There are actual cooking robots in actual restaurant operation (e.g. Cala in Paris [9], which makes pasta, Sweetgreen [10], and formerly Spyce in Boston [11]) as well as in development (e.g. Dexai's Alfred [12]). None of these are humanoid, nor do they resemble the AI-generated impression of a cooking robot, and using a fake AI-generated impression here would be misleading to readers as to what cooking robots look like in the real world. While well-intentioned, this is another example of why AI-generated images should not be allowed, because it allows people who lack expertise to easily generate misleading images of a subject.
    I cannot speak as much for art movements, but given art movements are inherently human-centered activities, using an AI-generated impression may still not be particularly suitable either, and there is inherent value in showing human-generated art. It would also potentially be offensive to artists themselves to use an AI-generated image to illustrate an art movement, given the use of artists' work to train AI models without compensation has been broadly controversial. 4300streetcar (talk) 16:06, 18 March 2025 (UTC)[reply]
  • Support. We're supposed to be writing a free content, factual encyclopaedia. There is no excuse for presenting our readers with computer-generated drivel that abuses free content with no regard for facts (and does more than its fair share to burn the atmosphere along the way). Human-made art, whether illustrations or photoshopped photos or whatever, is not comparable. – Joe (talk) 18:01, 17 March 2025 (UTC)[reply]
    1) it does not abuse free content; just as humans can learn from public art that is posted online or in exhibitions, machines can also learn from or be inspired by visual content
    2) the regard for facts may matter for factual articles but e.g. fantasy concepts are not about facts, and additionally this argument comes from a misconception of how these tools are used: one doesn't just enter "palantir" and then go with the first result, but uses these tools to produce the kind of image one has in mind, without inaccuracies/issues
    3) we shouldn't censor particular tools because editors want to WP:RIGHTGREATWRONGS, but if you care about the environment so much I suggest you focus on things that have an actual impact – one 50 km car trip will emit more greenhouse gases than your annual AI art tool use, so I suggest you avoid that instead of distracting from what is actually effective, and there are even suggestions that AI art may reduce emissions (The carbon emissions of writing and illustrating are lower for AI than for humans, Nature).
    4) Of course it is comparable – prompt engineering is a skill that humans exercise, as are the ideation and creativity when it comes to the concept/ideas and the selection+adjustment of the images. Prototyperspective (talk) 18:49, 17 March 2025 (UTC)[reply]
    I've produced my fair share of free content in my time, here and elsewhere, and like thousands of others whose work has been exploited,[13][14][15][16][17][18] I absolutely consider corporate giants using it to create monetised bullshit-generators that not only fail to acknowledge or give back to the communities that they rely on but actively sabotage them to be an abuse of it. This is the real world, not an Isaac Asimov novel; these machines cannot "learn" or be "inspired". They are commercial algorithms that swallow up vast quantities of unwittingly-provided human creative labour to regurgitate it for lazy and talentless people.
    I don't know why I'm even bothering responding to an argument that begins with "if you care about the environment so much", but I don't have a car, so... next tip? Contrary to popular opinion, WP:GREATWRONGS—which concerns what we write in articles—was never meant to mean that we should pay no attention to ethics or the outside world when deciding how to run our own community. The catastrophic climate and environmental impacts of generative AI are well-established. A 50 km car trip will at least get someone from A to B. What does poisoning the atmosphere this way get us? Thoughtless and shitty illustrations of things that don't even particularly need to be illustrated?
    Calling "prompt engineering" a "skill" is a dead giveaway that you don't have any actual skills. – Joe (talk) 14:10, 18 March 2025 (UTC)[reply]
    Calling "prompt engineering" a "skill" is a dead giveaway that you don't have any actual skills. We do not need yet more personal attacks in this discussion, I strongly encourage you to withdraw it. While doing so it would be better if you actually attempted to refute the arguments made rather focusing on ad hominems and "evil corporations are out to destroy free culture" nonsense. Prompt engineering is a skill - it may not be one you see value in, but that is irrelevant. You can demonstrate this very easily yourself by going to any free AI online image generator and spending a few minutes experimenting with a single concept, say a person walking on Mars. You will quickly see that better prompts produce better images. Thryduulf (talk) 14:23, 18 March 2025 (UTC)[reply]
    Asking someone to "actually [attempt] to refute the arguments" in the same sentence as you dismiss their argument as "nonsense" is quite ironic. – Joe (talk) 15:37, 18 March 2025 (UTC)[reply]
    Then try actually reading what I wrote. Your arguments are a mixture of nonsense (the bit about evil corporations) and trivially refutable prejudice. What I'm asking you to attempt to refute is the arguments you are responding to. Thryduulf (talk) 17:21, 18 March 2025 (UTC)[reply]
    The top of this thread is my response to this RfC. The remainder is you and prototyper dismissing and/or belittling everything I've said, just like you've bludgeoned the rest of the discussion. After all your years at this I can't imagine what you hope to achieve or why you'd expect me to engage further. – Joe (talk) 19:22, 18 March 2025 (UTC)[reply]
    What I hope to achieve is people supporting the ban actually listening to the arguments against and responding with something other than ad hominems, personal attacks, FUD, nonsense or irrelevancies. I live in hope. Thryduulf (talk) 19:51, 18 March 2025 (UTC)[reply]
    First of all please see WP:RIGHTGREATWRONGS. I'm not saying the examples are very good, just better than the prior examples and they could be improved (for example by editing them in Photoshop). And it can for example help make an article more interesting and understandable to readers where having some visual example helps.
    I absolutely consider corporate giants using it to create monetised bullshit-generators Thanks for your opinion but I only use Stable Diffusion which is open source and free to use. I don't have money to commission some artwork and don't have the time to spend years learning to paint just to make a few images. I'm not lazy; I don't have the time to learn art for years, or don't prioritize it as much as you, since I use my time for other things like contributing to Commons and Wikipedia. These novel tools give the people the power to visualize anything they have in mind and this broad-purpose ability can be used for all sorts of constructive things. It democratizes art-making, and it also helps people learn art-making skills, since those are very useful for improving an AI image and one doesn't have to start from nothing (which is motivating). What you think of AI companies is irrelevant to whether this production method should be censored or not. next tip? Also avoid flights and meat. but I don't have a car, so... […] was never meant to mean that we should pay no attention to ethics And I don't have a car either but in all likelihood the majority of people expressing resentment against the use of AI do have a car even if it's not all of them so it's worth mentioning more effective ways.
    It's unethical to silence people and their works when they don't have the money or long-won skills to produce manual traditional art. It gives many more people a voice and form of expression and expands the bounds of art as new ideas, styles, etc. are visualized/developed. The catastrophic climate and environmental impacts of generative AI are well-established That is a false statement. The current electricity needs are minuscule in the larger order of things, and we're not talking about all generative AI here but AI image creation tools. If you paint an image for a long time on some touch drawing pad or on your computer, that needs electricity too, as do any car trips and whatnot for getting materials, going to courses, or managing the production, etc. For example, I use AI art tools to visualize climate change mitigation concepts and basically support movements for sustainability. These tools can be used for any purpose, not just the random examples you have come across. Visualizing a concept like an art genre (or providing an example for a style) and making an open knowledge site more appealing to readers are good causes. In any case, that can and should simply be decided on a case-by-case basis instead of patronizingly making this decision for all articles.
    Calling "prompt engineering" a "skill" is a dead giveaway that you don't have any actual skills. And prompt engineering is a skill because one needs to basically trick AIs to visualize things one has in mind since they don't actually understand what one is saying. these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society ~[19] The key is figuring out how humans and machines can best work together, resulting in humans’ abilities being multiplied, rather than divided, by machines’ capabilities ~[20] AI substantially enhances human creative productivity by 25% and increases the value as measured by the likelihood of receiving a favorite per view by 50% over time ~[21]
    Anadol likens his AI algorithms to a “thinking brush.” The important thing, he says, is “designing the brush.” “Some people believe it’s a case of ‘Hey, here’s the data, here’s AI, voilà!’” he says. “But it’s actually more challenging when you start to have some control over the system instead of having something imposed on you. That’s where the true challenge of art creation comes in.” […] Human-machine loops are not new in art. Artists have always used technology to do things they could not do themselves or simply to see what would happen. […] Both artists were involved in a kind of human-machine feedback loop. ~[22]
    This paper investigates prompt engineering as a novel creative skill for creating AI art […] This is in line with our hypothesis that prompt engineering is a new type of skill that is non-intuitive and must first be acquired ~[23] It's irrelevant to whether or not it's a skill. Use of these already widely used helpful tools will only increase, and a ban would censor a large segment of visual works/expressions that in many cases could be useful, just as there are cases where images made with the not-so-new-anymore tool of computer image editors like Photoshop are useful even when they usually aren't. Prototyperspective (talk) 14:52, 18 March 2025 (UTC)[reply]
    Green apple: human drawing vs AI-generated
    Human-made art, whether illustrations or photoshopped photos or whatever, is not comparable Hypothetically, if there were no free images of a green apple to use for a Wikipedia article and we had to choose between a human-made drawing of a green apple vs an AI-generated image of one, I'd choose the latter. Some1 (talk) 03:00, 19 March 2025 (UTC)[reply]
    I'm not necessarily disputing your point but I, with somewhat poor art skills, could likely paint a far better apple than that. This argument somewhat feels like a strawman. ✶Quxyz 15:26, 19 March 2025 (UTC)[reply]
    Or... one of us could take a photo of an apple? Come on, if this tool is so useful to us, it should not be so hard to come up with a single example of a place it would be useful that isn't completely ridiculous and contrived. – Joe (talk) 20:44, 25 March 2025 (UTC)[reply]
    Several such examples have been shared in this thread multiple times. Including sci fi concepts, proteins and art styles. In a rational world it would be literally any topic, because we'd only care about whether the specific image was the best image for the given usage without regard to anything else. Indeed I've still not seen any justification (other than "I don't like it" or "other people don't like it") for forcing the use of an inferior image where an AI image happens to be the best one. Thryduulf (talk) 21:13, 25 March 2025 (UTC)[reply]
    At the end of the day AI images are fake, machine-generated syntheses of what something is supposed to look like, based on its training data. For tangible, real objects, photographs are preferable when available, because photographs are directly derived from the real thing, and encyclopedia readers will understand that they are in some sense, more "real" and not synthesized. For art styles and intangible concepts - human artwork is preferable when available, because art is inherently a human phenomenon - as @ViridianPenguin said, "For concepts that cannot be photographed or diagrammed, such as the apocalypse, I think we remain best served by human art, rather than AI synthesis of human art, because the former represents the actual cultural attitudes, while the latter is a machine's attempt to satisfy the prompt". For scientific representations like proteins, there is a great risk of inaccuracy that editors may fail to catch, as @JoelleJay has pointed out and backed with data and examples several times. 4300streetcar (talk) 21:31, 25 March 2025 (UTC)[reply]
    Those are fine opinions, and some of the points you make mean that AI images are unlikely to be the best available in some uses. That's fine, nobody is arguing that AI images should be used when they are inferior. What that doesn't justify is prohibiting AI images when they are the best available. Thryduulf (talk) 21:35, 25 March 2025 (UTC)[reply]
    Even in the best-case scenario for AI, where there is no freely available image for an article, and an accurate AI-generated image can be generated for it, I would still lean towards not allowing the AI-generated image. The trade-off is between the additional educational value of having an image for a particular article versus the risk to the project's general reputation and trustworthiness among readers. With current cultural attitudes towards AI (at least in the US), its close association with deepfakes and misinformation, and its reputation for inaccuracy and concerns over how the training data was acquired and used, even for accurate images I believe the trade-off is in favor of not using the image on an encyclopedia. Maybe it will change in a few years when AI systems possibly gain a better reputation, but I don't think we should currently permit it. 4300streetcar (talk) 21:53, 25 March 2025 (UTC)[reply]
    No one should be adding AI-generated images of proteins to articles unless they've already been published in HQRS. Most of the AlphaFold protein structures have been from predictive models, not actually generative, anyway; AF3 only came out recently. JoelleJay (talk) 23:46, 25 March 2025 (UTC)[reply]
    This proposal would ban the use of all AI images, even those published in a reliable journal, regardless of the type of AI used to generate them. It's also worth mentioning that nobody should be adding random incorrect images to any article, regardless of the article and regardless of the type of image. By and large they don't, but when they do existing policies and processes have proven perfectly effective at dealing with the issue, including since AI images became commonplace. Thryduulf (talk) 01:10, 26 March 2025 (UTC)[reply]
    Where "AI-generated" means wholly created by generative AI. JoelleJay (talk) 02:55, 26 March 2025 (UTC)[reply]
  • Support near blanket ban with a few possible exceptions mentioned above: 1) very simple diagrams; 2) illustrations to showcase AI art / capabilities themselves; 3) re-using AI imagery previously created elsewhere if it has some encyclopedic significance, like fake news or Taylor Swift deepfake pornography controversy. Brandmeistertalk 22:07, 17 March 2025 (UTC)[reply]
    Why censor it away for all other cases in advance indiscriminately rather than allowing editors to decide this on a case-by-case basis? Prototyperspective (talk) 10:49, 18 March 2025 (UTC)[reply]
    I don't think a case-by-case judgement is appropriate here, other than the few exceptions mentioned above. Brandmeistertalk 22:40, 18 March 2025 (UTC)[reply]
    Why? A case-by-case examination is required of every image anyway to determine whether it is both accurate and the best for the specific use intended. If this proposal passes it would also be required to examine every image to determine whether it is AI-generated or not (which is not black-and-white, or always easy to determine) so it wouldn't save any time. Thryduulf (talk) 23:00, 18 March 2025 (UTC)[reply]
    AI is known for generating factual errors and inaccuracies (perhaps the most notorious being gibberish unreadable inscriptions), and a case-by-case examination would consume time that could be useful elsewhere, except for the instances I mentioned above. Brandmeistertalk 15:10, 19 March 2025 (UTC)[reply]
    The point is that every image needs a case-by-case examination regardless of the source, the only extra time being consumed is determining whether an image is AI-generated or not which is something that is completely irrelevant to whether it is sufficiently accurate. Thryduulf (talk) 15:29, 19 March 2025 (UTC)[reply]
    The point is that I and others aren't seeing encyclopedic merit in a general allowance of AI images except in the aforementioned instances. At least at this stage of AI technology. Brandmeistertalk 18:50, 19 March 2025 (UTC)[reply]
    But other than vague vibes, unsourced claims, personal anecdotes, general dislike of the technology and (in some cases) ad hominems, you and the others have yet to give any actual reasons why the existing policies are not sufficient to prevent the project being harmed by bad AI images (and no evidence at all that good AI images would harm the project), nor any other reason why more restriction on AI images specifically would be beneficial let alone necessary. If you want to stop AI images being used whenever they are better than the alternatives (which this proposal and any similar restriction would do) you need to justify why forcing Wikipedia to use inferior images would benefit the project. Thryduulf (talk) 19:47, 19 March 2025 (UTC)[reply]
    Prefacing this by saying that I don't personally think all AI images should be banned (only those likely to be confused as non-AI images by readers), I feel that your repeated "anti-AI people are incoherent fear-mongers" rhetoric is unhelpful. There have been many valid concerns raised in this discussion by those supporting a ban. The most relevant in my opinion:
    • 1) Despite a massive amount of discussion, I have seen few to no non-hypothetical examples of a case where an AI-generated image was added to an article and the article was better for it (outside the ABOUTSELF exceptions pretty much everyone agrees on).
    • 2) Wikipedia:WikiProject_AI_Cleanup/AI_images_in_non-AI_contexts is full of many real, non-hypothetical examples of AI-generated images that were added to articles where the article was worse for it.
    • 3) AI-generated images containing serious inaccuracies that originated here on the wiki have already climbed to high positions in Google search results for various queries.
    So what I am seeing here is that at the current state of the art, AI images are objectively a net negative to the Wiki. Cleaning up the images at Wikipedia:WikiProject_AI_Cleanup/AI_images_in_non-AI_contexts has already wasted the time of myself and others, time that could have been spent contributing to other areas. AI-generated misinformation has already been presented to our readers, and some of that misinformation is already embedded in search results in a way that we can't easily reverse. On the other hand, I haven't yet seen you or anyone else in this discussion present any non-hypothetical examples of cases where AI-generated images have improved the wiki. Currently, the way I see it, if we ban AI images we will immediately gain a tangible benefit (as bad images like those at Wikipedia:WikiProject_AI_Cleanup/AI_images_in_non-AI_contexts can be uncontroversially cleaned up, and users will be discouraged from adding more), at the cost of losing a hypothetical future benefit (the positive uses of AI that may become possible as the technology improves, but that no one has yet presented any non-hypothetical examples of). We can discuss our assessment of the relative cost/benefit here, but let's not pretend there aren't coherent arguments to be made for both sides. -- LWG talk 20:47, 19 March 2025 (UTC)[reply]
    King Tutankhamun brought to life using AI
    1) That is in part because linking further examples here is not a smart thing to do when this discussion is full of anti-AI editors who would like to indiscriminately remove all cases of AI images. Another part is that few users so far make use of these tools and there are few people in the world that license their works under a Commons-compatible license and at the same time fill any of the gaps they likely don't know about (I do know some gaps). Thirdly, they are more extensively featured in other language Wikipedias so people here may not notice.
    2) And? It just shows how it's not a problem. They are dealt with quickly and conveniently and it's just a handful of cases. If more users who know Wikipedia's visual gaps would use these tools then you'd see a few more useful cases. But again, there are already quite a few of them but you don't see these listed there. Even if that wasn't the case, if a broad-purpose tool has not been useful so far, that's no reason to ban it.
    3) Have not seen that. We're not responsible for Google, nor should we make decisions based on its hiccups, but I've never noticed even just one such case.
    4) Banning them will not save your time since people would still add them. Additionally, it would increase the time since people would not disclose they used AI. More importantly, the work that goes into purging the AI images is very little and people who watch the articles could nearly just as well remove them themselves.
    That there's a project monitoring this and that just a few images have been added so far further shows there's no need to ban. When you're speaking of time spent on this, I've only noticed a few users, iirc Trade and Belbury, who alongside me frequently identified and categorized images into the AI category on Commons that this whole project largely relies upon. If AI images were banned, it would not save any time – people would still try to add them (like other bad images at times) and editors would still categorize them on Commons (doesn't take much time either as there aren't that many AI files). One reason why fewer AI images are used constructively in English Wikipedia is because here, unlike in other projects and languages, editors use "fair use" images that are not free media / open content, which then means the article doesn't necessarily need a further image (example: Palantír). Sure, there are coherent arguments for both sides. One thing I have to admit is that contributors do not make constructive use of it as often as I had thought earlier. However, that can change; I do have some positive uses (not on ENWP so far; I gave examples and types of applications above), and it is no reason to ban. It has been a net benefit for Wikimedia and the free/open content ecosystem; various things could be considered 'not a net benefit' for ENWP but are still not banned. Prototyperspective (talk) 00:28, 20 March 2025 (UTC)[reply]
    These are good points and are among the reasons I personally think AI should be prohibited when its AI nature is likely to be unclear to readers, and should otherwise be permitted and clearly marked in captions (which I think is also your position more or less). This sort of response is much more helpful than accusing the other side of incoherent fearmongering. To a couple of your points:
    1) I understand the reluctance to "out" AI content in a highly visible context like this one while the discussion is still ongoing. Unfortunately that reluctance still leaves people like me skeptical of the usefulness of AI.
    3) Other examples have been mentioned here and in earlier AI discussions, but one I saw recently was that a Google image search for "gorontalo sultanate" or "sultan amai" still brings up an AI render of a generic man that was on the article Gorontalo Sultanate for a while before being removed.
    I disagree with the perspective that we should ignore what search engines/etc do with our content - since Wikipedia has such a major input to the tools most people use to get quick information, I think we need to be more careful with our content than we were in the past. -- LWG talk 00:56, 20 March 2025 (UTC)[reply]
  • Oppose blanket ban - Support mandatory labelling: I can't think of any place I would use an AI generated image outside of the context of articles about AI, but that doesn't mean it can't possibly ever be useful. "No ban" is just continuing the current status quo, which has not resulted in large amounts of AI slop all over the place. I see no reason to ban it pre-emptively for a problem which is currently only hypothetical. That said, it seems reasonable to want all AI images clearly labeled as such, to avoid passing them off as something else. If AI is a good fit for an illustration then it should be a good fit when labeled. Tioseafj (talk) 04:48, 19 March 2025 (UTC)[reply]
  • Support blanket ban except on related topics Keep AI out of Wikipedia. I do not believe an AI generated image will be useful, and even though many articles need images, AI is NOT a solution. AI could lead to tonnes of fake slop, and AI images should NOT be perceived as reality. Additionally, if an AI image is added to anything other than AI related topics, the user should be given a permanent ban. The fact that we have not issued a complete ban on AI photos outside of AI art related topics is disturbing. Thehistorianisaac (talk) 07:12, 20 March 2025 (UTC)[reply]
    AI images should NOT be perceived as reality They can be labelled, e.g. via the caption. Also nobody takes a painting or an image that looks like it to be reality. I do not believe an AI generated image will be useful On what basis? Have you done lots of research and built expertise in what its current and future potential applications are? What's even the rationale? How do you know you personally would be able to know whether it could be useful? AI is NOT a solution. Why would that always be the case? For example, it allows people who haven't spent many years learning to illustrate to close some gaps of visual examples, so why would the way a good result is achieved always be critical? AI could lead to tonnes of fake slop 1. It has not so far. 2. There is no reason to think that will change. 3. These images will simply be removed like e.g. a low-quality Photoshop image or a bad hand-drawing added by a user. [that] we have not issued a complete ban on AI photos outside of AI art related topics is disturbing Why? Why should Wikipedia censor a particular general-purpose toolset/manufacturing method that could be used for very many very different applications? (Many of which you probably haven't considered.) To me it's very disturbing that many editors are suddenly revealing themselves as so censorious and calling for oppressive measures. These tools take the power to create high-quality digital imagery from a few privileged people and give it to the masses, and their use will only increase (e.g. as part of sophisticated artistic workflows (example)) and their quality will improve – censoring this means spreading the stifling effect of censorship through all domains of society to privilege the few who have the ability to commission or produce entirely manual art + super-old works where copyright expired. Inappropriate AI images can, as has been done before, simply be removed, and I've not seen even one good reason for why banning them would be needed or bring any notable benefit (except for appealing to users with the techno-phobic personal opinion that everything AI is basically intrinsically evil). Prototyperspective (talk) 15:00, 20 March 2025 (UTC)[reply]
    This feels like a textbook example of WP:BLUDGEON to me. While Isaac’s suggestion of permabans for people who post AI images outside of WP:ABOUTSELF contexts seems excessive to me, it doesn’t justify you posting a wall of text that demands he answer multiple open-ended questions that I’m pretty sure have already been answered multiple times in this discussion. Also, putting the word “censor” in your comment as many times as possible does not make it more persuasive — in fact, it has quite the opposite effect (see WP:OWB #1 for evidence that repeatedly complaining about censorship is viewed negatively by many Wikipedians). pythoncoder (talk | contribs) 17:24, 20 March 2025 (UTC)[reply]
    If just one person supporting the ban on AI would actually answer the questions and provide evidence (not anecdotes, not vague feelings) to justify their position then the questions wouldn't need to be repeatedly asked. Thryduulf (talk) 18:50, 20 March 2025 (UTC)[reply]
    I linked six studies analyzing the accuracy of AI-generated scientific diagrams that demonstrated they almost universally contain significant inaccuracies, and that sometimes these inaccuracies are subtle enough that only senior experts in the relevant field—not even people highly trained in the broader field—can notice them. These are not inaccuracies that someone manually creating these diagrams would make, both because they would require intentional injection of false imagery and because people who are not experts in these subjects would never be putting in massive amounts of effort to create such images manually anyway (and especially not ones that have such a veneer of professional quality). GenAI now allows random undergrads to conjure grossly inaccurate but professional-appearing graphics in complex topics that are then rocketed to the first page of Google search results, which is a far more indelible and prominent mark than any words they might add.
    I've also linked a study by Getty (who has its own proprietary GenAI platform and content for purchase, so is hardly anti-AI) saying 87% of consumers agreed it is important that an image be authentic. Why would we want to permit something that we know would significantly damage readers' trust in us? JoelleJay (talk) 01:03, 22 March 2025 (UTC)[reply]
    JoelleJay Getty does have its own small platform, but it has a $1B/y conflict of interest: much of its business is in maintaining an oligopoly over stock images, and it is directly threatened by Gen AI tools. Their surveys should be assumed to be biased in favor of the market they have successfully cornered. – SJ + 22:05, 22 March 2025 (UTC)[reply]
    They still have a substantial stock of AI-generated images they sell. JoelleJay (talk) 23:55, 22 March 2025 (UTC)[reply]
    And how is that relevant to either Sj's point or anything else in this discussion? Thryduulf (talk) 00:21, 23 March 2025 (UTC)[reply]
    ....because Getty has actual data demonstrating that people don't want to see AI-generated images even in commercials, and you've been complaining that no one has presented evidence other than "vibes" that people dislike AI-generated images? JoelleJay (talk) 15:12, 23 March 2025 (UTC)[reply]
    I was referring to your immediately prior comment, but while what you posted is data it isn't really useful data for this discussion. Specifically, the source is a company that makes its money selling mostly human-generated stock photos; the survey data is up to three years old and from several sources, two of which are unavoidably biased towards human images given that very nearly all of Getty's images are human-generated (the bias or otherwise of the other sources is not stated); and the conclusions are mostly things nobody is arguing, such as "accuracy is important", mixed in with other conclusions like "it's difficult to tell AI from non-AI" and "AI is a tool" that match what those opposing blanket bans are saying, and things irrelevant to this discussion such as what matters in marketing campaigns. Thryduulf (talk) 20:58, 23 March 2025 (UTC)[reply]
    Grossly and sufficiently inaccurate images of any kind have been and continue to be removed.
    Also, this is Wikipedia, not Google. It damages trust to ban a novel tool – and to announce we can't deal with AI, encouraging people not to disclose they used it – when we can deal with these images. The poll you linked has a quite different context. Like I said below, the best argument for banning is that there are many people who hate on everything anyhow linked to AI. That's not a good argument; we don't need to appease some crowd who dislike a particular novel tool for no good reason and who aren't that many readers anyway. Overall, if the image is not portrayed as something that it isn't (e.g. it is labelled as AI-made), it makes articles more engaging, informative and interesting when there are otherwise no or few illustrations but just loads of black-and-white text, which is something that turns away lots of readers and often makes it more difficult to quickly grok a subject. This is not a place people go to watch advertising, which is what that poll is about; they go here to understand things and/or read/learn about interesting things, where often some illustration helps. Good that at least you admit that it gives access to the useful power to create professional-appearing graphics for both complex topics and simple topics. Prototyperspective (talk) 01:44, 22 March 2025 (UTC)[reply]
    All images with inaccuracies (being distinct from differences in level of detail) should be removed, regardless of how frank they are. GenAI introduces fabrications and omissions that are both significant enough for seasoned experts to consider them intractably inaccurate, and subtle enough that people who have 8+ years of training in a broader topic or even the same niche topic may not detect them. This is part of why major science journals are banning GenAI images outright.
    People come here with the expectation that graphics giving the appearance of being professionally-created actually were professionally created, because until GenAI that expectation was virtually always correct. I cannot imagine that the people being polled (by, again, a company that actively promotes their GenAI and tried to spin their data as positively as possible) have different, let alone higher, standards for authenticity in advertising compared to what is supposed to be a reference work. And if graphical representations of a complex concept don't already exist in the real world, it's already OR to include a user-created one regardless of provenance. JoelleJay (talk) 04:35, 22 March 2025 (UTC)[reply]
    > It damages trust to ban a novel tool
    When said novel tool is widely viewed by the public with suspicion, is notorious for inaccuracies and hallucinations, and inherently synthesizes images that are not real, I would argue that its widespread use in the current cultural context would be far more damaging to trust in Wikipedia than banning its use. The public does not expect Wikipedia to use the newest and most novel technologies, but does expect to see human-generated images (ideally photographs) that have been vetted and reviewed by humans. Perhaps a ban can be revisited in a few years when public perceptions of AI have changed.
    I can imagine a far more negative reaction to say, a news article with the headline "Wikipedia permits use of AI-generated content" than a news article that says "Wikipedia severely restricts use of AI-generated content". 4300streetcar (talk) 05:10, 22 March 2025 (UTC)[reply]
    The public does not expect Wikipedia to use the newest and most novel technologies, but does expect to see human-generated images (ideally photographs) that has been vetted and reviewed by humans. evidence please. The public certainly expects to see accurate images, but nobody is proposing to use inaccurate ones, and any that do get added to articles can be, and are, removed under current policies and guidelines regardless of whether they are AI-generated or not.
    People come here with the expectation that graphics giving the appearance of being professionally-created actually were professionally created, because until GenAI that expectation was virtually always correct. another claim that needs evidence. There are many, many graphics on Wikipedia created by amateurs without the use of AI that are of equal quality to those created by professionals. I also have not seen any evidence that the majority of readers care about whether an image is or is not AI-generated.
    if graphical representations of a complex concept don't already exist in the real world, it's already OR to include a user-created one regardless of provenance. We don't need new policies for AI-generated OR images when OR images can be removed under current policies regardless of provenance. Thryduulf (talk) 10:16, 22 March 2025 (UTC)[reply]
  • Support blanket ban with exceptions for articles about AI and images that are themselves worth discussing. They are in a copyright grey area in a lot of jurisdictions, which could conflict with Wikipedia's goal of being a free encyclopedia. Also, I am quite good at noticing when an image is AI-generated by its general vibe and I don't think I am alone in this. People are also becoming increasingly aware of the issues posed by AI. If readers start noticing AI images everywhere, even if the images are (seemingly) accurate, many will be concerned about its accuracy, and by extension the text and sourcing accuracy. QwertyForest (talk) 16:50, 21 March 2025 (UTC)[reply]
    We don't work on "vibes" we work on evidence. Do you have any actual evidence to back up any of your assertions? Thryduulf (talk) 16:55, 21 March 2025 (UTC)[reply]
    The vibes are relevant when it comes to our readers and it does not negatively affect the encyclopedia. This is why we don't have silly animated banners everywhere, or go out of our way to pick the most offensive images possible when a less offensive one is just as informative. I do not see AI images doing anything that a human cannot do just as well or better.
    I have noticed that you have commented on several support !votes to complain about them. This could start to come off as bludgeoning. If you have !voted, the closing admin will see it. Your opinion, my opinion and everyone else's opinions will be judged on their merits. I'd also advise that you choose your edit summaries carefully. This [24], while probably not outright uncivil, is probably pushing it. QwertyForest (talk) 20:31, 21 March 2025 (UTC)[reply]
    Your opinion, my opinion and everyone else's opinions will be judged on their merits. which is exactly why I'm trying to get those who support this proposal to actually justify their opinions with something resembling hard evidence. Factual edit summaries, such as stating someone has missed the point when they have missed the point, are not uncivil. Thryduulf (talk) 21:13, 21 March 2025 (UTC)[reply]
  • Support ban. Zero encyclopedic value. These models owe their existence to troves of stolen and uncredited content. Not only are these images a slap in the face to the principles of the free knowledge movement, they are by their very nature violations of copyright and impossible to attribute. James (talk/contribs) 10:03, 22 March 2025 (UTC)[reply]
    1) Illustrations can have encyclopedic value. Not all and maybe not many, but it's possible – for example some art style for which there is no or just one free media example, or for visualizing some event according to descriptions, or some visual example of an Internet meme, or some scifi / fantasy concept for illustration, etc etc. Do you really think it's much better to only read text about a visual art style instead of having some example(s)?
    2) They don't owe their existence "to troves of stolen and uncredited content" – it's not stealing to look at and learn from or be inspired by public art, just as you can learn from artworks in a public exhibition, a film on TV or art posted on the Web. AI image generators are impossible without learning from lots of data.
    3) It's a great boon to the public domain – finally so many gaps in free media can be closed by giving people the ability to create high-quality visualizations/illustrations for anything they can imagine if they can prompt well enough. This brings tremendous benefit to the open culture & free knowledge movement. For one example, consider that there are only at most 200 or so modern quality digital artworks that are licensed under CCBY.
    4) "they are by their very nature violations of copyright" is false and nothing but some false claims by a few offended artists who despise rather than embrace novel tools and production methods supports this. Prototyperspective (talk) 12:38, 22 March 2025 (UTC)[reply]
    This claim is completely false. jp×g🗯️ 20:21, 24 March 2025 (UTC)[reply]
  • Strongly Oppose a ban, per Rhododendrites and others. This RFC needs more detail and nuance to be useful; "AI" is not a single tool and there are misunderstandings throughout the discussion around what tools and uses are covered. There are categories where we certainly do want AI generated images where available. (protein structures!) Many arguments for a ban apply equally to Photoshop (the subject of similar debates back in the day).
    Support robust labelling. We already prefer real images to illustrations or artistic renderings where available. We should deepen our requirements for including process-provenance in image descriptions. Where there are extenuating sensitivities (health, current events, BLPs), we can have other restrictions. See Discussion for more. – SJ + 21:54, 22 March 2025 (UTC)[reply]
  • Support "ban of any AI-generated image likely to be perceived by a reader as human-generated" and in BLPs. I'd support a moratorium on their use in scientific legal articles, especially where the technology is not quite there yet. Bearian (talk) 00:29, 23 March 2025 (UTC)[reply]
  • Support - I'll be honest, before I came to this RFC, I thought AI generated images were banned by proxy of them being user generated. I'd assumed that images in articles, much like the text, had to be taken/cited from reliable sources. With that said, I'd support a ban on all AI generated imagery barring obvious ABOUTSELF such as the page for AI slop Vəssel [talk to mə] 13:38, 24 March 2025 (UTC)[reply]
    Why? Prototyperspective (talk) 13:46, 24 March 2025 (UTC)[reply]
    The verifiability of the content within images is indeed based on their use by reliable sources. I think many of the objections being raised are covered by Wikipedia's verifiability policy. isaacl (talk) 14:59, 24 March 2025 (UTC)[reply]
  • Support blanket ban, with the understanding that exceptions can be made. I don't support an absolute ban on AI generated images, for what should be fairly obvious reasons. There are articles which can benefit from them, such as Propaganda, AI slop, Text-to-image model and Artificial intelligence art. In addition, I can see numerous cases where a particular AI generated image might have some encyclopedic use, which can be determined on a case-by-case basis. Basically, what I don't want to see is any editor who has a valid reason to insert an AI generated image needing to appeal to WP:IAR to justify it, but the general understanding being that, without a compelling reason why, AI images are not to be used.
    My reasoning here is that AI images are completely lacking in intentionality. The engine being used to generate them doesn't understand the purpose for which the image will be used and doesn't take into account factors that may be obvious to any human creator, but which aren't explicit. When I have created images for WP, I have done so in the full knowledge that the image I am creating is supposed to be informative first, clear and easy to 'read' second, and pleasing to the eye third. When a system such as Stable Diffusion generates an image, it makes it pleasing to the eye first, and accurate to the prompt second, and that's it. And that second aspect is only done to the limits of the system, which are much more restrictive and less capable of expansion than any human artist's limits. ᛗᛁᛟᛚᚾᛁᚱPants Tell me all about it. 15:29, 24 March 2025 (UTC)[reply]
    That is, we don't have an AI that uses an AI art tool to create an image, selects the first result, and then inserts it into an article. Instead, there is a human using an AI art tool in a workflow that consists of hacking/engineering/using it to produce something looking like what s/he intends it to show, a human who decides the image is suitable for and useful in an article and adds it there, and humans who read or watch the article who decide whether or not to remove it from there.
    There is some nuance, but there already needs to be a compelling reason for any media to be added to an article regardless of its production method; and the rationale and quality baseline are already higher for AI images, since these are monitored so much and disliked by some editors. When I have created images for WP We need more free media illustrators – there are very few within and outside the community – not cut down on some tools that, for example, could be used to illustrate a visual art style/theme based on training on thousands of works in that style with zero or one freely licensed example visuals.
    What you described is the reason why using AI image tools is not as simple as people often think it is. It is the result that matters – if of low quality, irrelevant, or inaccurate, people remove it. Prototyperspective (talk) 16:44, 24 March 2025 (UTC)[reply]
    I don't think you really understand my point. It isn't that AI images are worthless, it's that AI systems produce images which lack intentionality.
    This means that, unless the editor planning to add it is informed enough on the subject being depicted to spot errors, has a good understanding of what makes an image read clearly to the viewer and spends enough time poring over the image to verify that it's useable, it's very likely to introduce inaccurate information.
    With a human artist, it's easy to verify whether or not an image is error-free, thanks to the research the artist will naturally engage in before beginning the image; the artist will already have some knowledge of and experience with making images that read clearly, and you've got at least two sets of eyes on it before it gets added.
    The AI doesn't know that it's making an image for an encyclopedia. An artist does, and if you think that makes no difference, then I'm afraid you don't know how artwork is made. ᛗᛁᛟᛚᚾᛁᚱPants Tell me all about it. 17:43, 24 March 2025 (UTC)[reply]
    I don't know: that sounds as much a philosophical argument as a practical one. AI might lack 'intentionality', but many have deep capacities for heuristic recognition of associations between inputs and reliably useful outputs, including in image generation. For example, many AI can reliably take a small number of key frames and produce an animation from them that is more or less qualitatively indistinct from what a human would produce. That's just one of a basically endless open class of set cases that would be in no way problematic for our uses here. And coming at the issue from the other side, you also make a lot of assumptions about your hypothetical human agent that are more idealized than guaranteed (or in some cases, arguably even likely). At the same time, you are also I think underestimating the ability of the individual editors and the community at large to distinguish between problematic versus beneficial content and restrict uses where a conceptual element makes AI problematic. SnowRise let's rap 06:09, 27 March 2025 (UTC)[reply]
    I don't know: that sounds as much a philosophical argument as a practical one.
    I outlined some (but not all) of the practical concerns in my two comments above, but here's a real example. At Talk:Grey alien, there's a discussion about the image used to illustrate the article. Currently (and historically, for some time now), it has been a quick drawing that I threw together for the purpose of giving a nice, relatively sterile illustration of what, exactly, the article is about. I drew it rather quickly, but I did so using the article itself to inform my choices.
    Another user briefly exchanged that for a 'higher quality' AI generated image. (I will grant that the AI image does indeed demonstrate more detail and drama than mine, though that is hardly the only metric of quality). However, there are a number of problems with that image.
    1. The aliens aren't even gray. Because AI only knows that "gray alien" is a term used to refer to depictions of a certain kind of creature.
    2. The image is too dark, and not clear enough to read easily.
    3. The image only shows their heads, and it shows multiple heads.
    4. A lack of nostrils, despite nostrils being in the vast majority of depictions.
    5. Extensive wrinkles, which are almost never included in descriptions of the subject.
    6. Prominent skeletal structures, despite most descriptions of them noting the distinct lack of prominent skeletal structures.
    7. Venous and sinew lines that are essentially pulled from the aether. (since it could be said to be pulled from the AI's nonexistent ass, perhaps 'assther' is the proper word?)
    8. Prominent brow ridges.
    9. Splotchy skin (visible if you adjust the lightness of the image), as opposed to the monochrome skin they're usually described as having.
    10. A strange, swirling pattern reminiscent of scarification on the front alien's right breast.
    I could probably go on, but you get the point, I'm sure.
    For example, many AI can reliably take a small number of key frames and produce an animation from them that is more or less qualitatively indistinguishable from what a human would produce. That's just one of a basically endless open class of cases that would be in no way problematic for our uses here.
    I, too, can imagine a functionally infinite number of 'acceptable' uses, which is why I phrased my !vote the way I did. I don't think an editor should need to appeal to WP:IAR to justify using one, but rather to simply establish a consensus that the image in question is suitable. But I'm wary enough of the problems I outlined to believe that such discussions should start from the assumption that it's not acceptable, and then attempt to disprove that. Because, as I stated above, AI artwork lacks that intentionality.
    WRT your specific example, you're actually referring to a process which is far more algorithmic than most might expect. Given a series of keyframes, the intervening frames can be extrapolated mathematically. Indeed, many animation applications have had similar functionality built in since long before Stable Diffusion first made headlines. AI's ability to accomplish that is kind of a meaningless point: you don't need either an artist or an AI to accomplish that. You just need some math.
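    (For illustration, a minimal sketch of that purely mathematical in-betweening, assuming keyframes are simple (time, value) pairs; the function names here are hypothetical and not any particular animation package's API:)

        def lerp(a, b, t):
            # Linear interpolation between values a and b at fraction t in [0, 1].
            return a + (b - a) * t

        def inbetween(keyframes, frame_time):
            # Compute an intermediate value from (time, value) keyframes.
            # Plain math -- the same 'tweening' built into animation software
            # long before generative AI existed.
            keyframes = sorted(keyframes)
            for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
                if t0 <= frame_time <= t1:
                    return lerp(v0, v1, (frame_time - t0) / (t1 - t0))
            raise ValueError("frame_time outside keyframe range")

        # A property keyframed from 0.0 to 10.0 over one second:
        print(inbetween([(0.0, 0.0), (1.0, 10.0)], 0.25))  # 2.5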
    And coming at the issue from the other side, you also make a lot of assumptions about your hypothetical human agent that are more idealized than guaranteed (or in some cases, arguably even likely).
    Having actually produced multiple illustrations specifically for WP articles and having participated in discussions where other editors were producing such illustrations, I would hardly characterize my description of the process as 'assumptions'. 'Speaking from experience' is a more accurate phrase that comes to mind.
    At the same time, you are also I think underestimating the ability of the individual editors and the community at large to distinguish between problematic versus beneficial content and restrict uses where a conceptual element makes AI problematic.
    I mean, an editor looked at the AI image in the example I gave above and said "Yeah, that's good enough." I think you might be overestimating the ability of a single editor to make a determination as to whether a given image is usable. Assuming we were to engage in an iterative process of generating new AI images in response to a discussion, the number of images generated approaches infinity; there's no guarantee the AI will ever produce a suitable image, regardless of the specificity of the input string or the seed chosen. ᛗᛁᛟᛚᚾᛁᚱPants Tell me all about it. 20:30, 27 March 2025 (UTC)[reply]
  • Support blanket ban (with obvious WP:IAR exceptions; for example, obviously an AI image could be used to illustrate what an AI image itself looks like). I'm assuming we're talking about a regular old image in a regular old article. The AI generated image is a clear break in the chain of trust of WP:V / WP:RS. I don't doubt that the training data put into these models is reliably correctly labeled. In other words, if we were to apply Wikipedia's image standards, I'm sure that the vast majority of images and corresponding labels could pass WP:RS and we could use them in articles (if the copyright was amenable). The problem is within the AI model itself. There is insufficient reliability that the outputted image for label "X" actually corresponds to the relevant features/details that were present on images of "X" in the training set. It will certainly look similar but there's no guarantee that any encyclopedic value is present. Surely, an advanced user of an AI image generator who also has advanced knowledge of the subject matter could tweak and re-prompt the image generator until all the relevant details are correct. This is possible, but not likely. We certainly should not presume this to have occurred. Given that Wikipedia editors are not, ourselves, subject matter experts, this generation is WP:OR. We could republish an AI image, if a WP:RS / WP:SME made a specific claim to its accuracy in all details, but that's overwhelmingly not the case. In short, a general rule should be put in place against AI generated images, especially images generated by Wikipedia editors ourselves. AI generated images within otherwise WP:RS should be viewed with skepticism but not blanket banned. Leijurv (talk) 18:22, 24 March 2025 (UTC)[reply]
  • Support blanket ban with WP:IAR exceptions. The law is a blunt instrument, and sometimes it has to be. A simple rule ("no") is better here than a complicated one with 16 paragraphs of "Well consider such-and-so etc." IMO. How to handle IAR exceptions, I dunno – on individual article talk pages, I guess (although lots of people will reject an IAR argument out of hand). Maybe there could be someplace to apply for an exception – maybe any admin could grant an exception or something.
    And yes, a person can make an inaccurate drawing. Yes, a person can use Photoshop to make an inaccurate image. Yes, a person can take a photograph of something and say it's something else. But these all require manual labor, so people have not been flooding us with them and they aren't going to, so it's not a problem. If it becomes a problem, we will indeed talk about banning hand drawings or whatever. The AI image situation is just different. Finally, there is a moral component. 99+% of editors believe (wrongly) they are excused from the moral world when they sit down at a typewriter, so that's hardly worth mentioning. Herostratus (talk) 01:45, 25 March 2025 (UTC)[reply]
  • Support blanket ban with the only exception being articles related to AI, in which case an AI-generated image serves an illustrative purpose. Dehumanization and copyright issues aside – since most AI content is based on stolen material – there's no reason to use it when human-created content can accomplish the same goal. Paprikaiser (talk) 21:20, 25 March 2025 (UTC)[reply]
    I agreed to stop commenting, but I'll have to make one more if I may. People make absurd or false claims and nobody points it out. I'll keep it short; it's a pain to see the Wikipedia process fail so hard when decisions are made not on reasoning and nuanced deliberation but on false claims, feelings, and vote counts.
    - Paprikaiser: AI images are not based on stolen material; machines can learn from / look at public art just like humans can. There's no "dehumanization". Regarding there's no reason to use it when human-created content can accomplish the same goal: the main point is that suitable AI images can be used when there is no human-created content available.
    - Leijurv: regarding This is possible, but not likely. We certainly should not presume this to have occurred – good, then, that it isn't presumed.
    - Herostratus: no complex rules have been needed so far for AI images. They aren't a problem so far. It's a solution looking for a problem, and it would only get complicated with exceptions if they are blanket censored. Regarding people have [..] been flooding us with these: that is false. Images get added and removed all the time (that just isn't tracked in any one place); only an extremely small number of AI images have been added across ~7 million articles. Prototyperspective (talk) 23:50, 25 March 2025 (UTC)[reply]
    @Prototyperspective: Good then that it isn't presumed ?? what?? To reiterate, I'm saying an advanced user of an AI image generator who also has advanced knowledge of the subject matter could tweak and re-prompt the image generator until all the relevant details are correct but that we should not presume that that happened. This is obviously true: not all AI generated images were made by a subject matter expert. Therefore, AI generated images should NOT be presumed to pass WP:V. So, AI generated images are unverifiable (unless they came from a WP:RS, but we're assuming that's not the case). WP:V is a core pillar, therefore unverifiable images should be banned. Leijurv (talk) 03:18, 26 March 2025 (UTC)[reply]
    Existing policy already requires content to be verifiable, which includes images. You started your earlier comment supporting a blanket ban but then ended by saying images from reliable sources shouldn't be blanket banned. Images created by users that aren't directly verifiable from appropriate sources are already subject to the original research policy. (A common example of acceptable user-generated images is graphs whose data points are appropriately cited.) isaacl (talk) 05:51, 26 March 2025 (UTC)[reply]
    I allege that AI images currently function as a backdoor, or workaround, to WP:V. Or in other words, that there's a "blind spot" in the policy. I agree that content has to be verifiable, and I argue that AI images are not verifiable, so I don't see where that disagrees with my claims. Images published in WP:RS are verifiable to that source, so I also don't see any contradiction there? Specifically, AI image generators allow editors to make images that superficially look plausible / accurate, but may be missing key features. It's easy to believe that what the editor typed into the AI image generator was indeed a prompt related to the article, but the AI image generator itself is not reliable enough that we can say its outputted images have encyclopedic value (i.e. that all their relevant details are correct). It's misleading to have images that look right to the untrained eye but really aren't reliably correct in potentially important details. By your graph analogy, I would indeed support a blanket ban on something like "AI Excel" if the charts it produced only superficially looked like the inputted data - because it breaks the chain of verifiability. But regular charts made with regular Excel (not AI Excel) don't break the chain of verifiability because we reasonably trust that the program faithfully represented the inputted data in the outputted chart - not just superficially / to the untrained eye, but precisely. Leijurv (talk) 06:15, 26 March 2025 (UTC)[reply]
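    (For illustration, a minimal sketch of the deterministic data-to-chart pipeline contrasted above, using matplotlib with made-up example data: every rendered point is exactly an input datum, which is what keeps the chart directly verifiable against a cited table.)

        import matplotlib.pyplot as plt

        # Data points as they would appear in a cited source table (hypothetical values).
        years = [2020, 2021, 2022, 2023]
        values = [3.1, 4.0, 4.8, 5.5]

        fig, ax = plt.subplots()
        ax.plot(years, values, marker="o")  # rule-based mapping of data to pixels
        ax.set_xlabel("Year")
        ax.set_ylabel("Value (units per the cited source)")
        fig.savefig("chart.png")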
    Personally I don't feel there is a workaround. The original research policy applies to images that are not directly verifiable to the untrained eye. isaacl (talk) 06:47, 26 March 2025 (UTC)[reply]
    For AI images that have no clear reliable source attesting to their validity, so in other words AI images that were probably generated by the editor themselves, I agree with you that WP:NOR should ban them. We should make that clear with an explicit policy statement. Hence why I supported this RfC. Leijurv (talk) 06:57, 26 March 2025 (UTC)[reply]
    Regarding Personally I don't feel there is a workaround: I think your comparison to graphs is actually evidence of a workaround. Per my previous explanation, it served to launder the reliability of something like Excel (which we agree is a reliable translator from data to a visual chart) into the reliability of an AI image generator, which is almost unbelievably misleading and unreliable as to whether key details of its output accurately represent what they superficially appear to be. That's the workaround: coasting off the assumption that the tool made a reasonable output given a reasonable input. Leijurv (talk) 07:00, 26 March 2025 (UTC)[reply]
    I don't see why the policy on original research should be ignored for the images you are considering to be misleading and unreliable. By default, images have to be considered in the context of their publisher. Images being published by their creator on Commons or English Wikipedia are by definition not subject to the type of editorial control expected from a reliable source. isaacl (talk) 16:21, 26 March 2025 (UTC)[reply]
    Hm? I don't follow – in what way do you think I'm ignoring WP:NOR? If anything, I'd imagine I sound like a WP:NOR hardliner... I agree that images published by their creator on e.g. Commons are by default WP:OR and should only be allowed into articles if they originate from a WP:RS or if they are "obviously" correct, such as a real-life photo that is plainly of the subject (or, in your words, directly verifiable to the untrained eye, which I agree with). The problem is that AI images may appear to be correct to the untrained eye but get important/encyclopedic details wrong; they therefore constitute a policy blind spot that only gets worse as the generators get "better", which is why I supported this RfC. Leijurv (talk) 17:15, 26 March 2025 (UTC)[reply]
    To clarify, as we both agree the policy on original research applies, I don't see a policy oversight. Enforcing this policy appropriately deals with images that are not directly verifiable to their source data by the untrained eye. isaacl (talk) 18:09, 26 March 2025 (UTC)[reply]
    And that doesn't require any policy specific to AI images. Thryduulf (talk) 18:26, 26 March 2025 (UTC)[reply]
    In your view, how often are AI-generated images directly verifiable to their source data by the untrained eye? Never, rarely, generally, presumptively, or always? My position is "never" for AI-generated images, because the generators are machines essentially tuned to deceive the untrained eye. Leijurv (talk) 19:35, 26 March 2025 (UTC)[reply]
    Based on all the evidence I've seen, AI-generated images are directly verifiable to their source data by the untrained eye enough of the time that banning them is not justifiable on those grounds. It's also worth noting that I've seen exactly zero evidence that AI images which are not sufficiently verifiable are added to Wikipedia articles any more frequently than non-AI images which are not sufficiently verifiable to the same standards, and exactly zero evidence that AI images are being added to articles in sufficient numbers that our existing processes can't cope – even with AI images being subject to much more scrutiny than non-AI images. Thryduulf (talk) 19:42, 26 March 2025 (UTC)[reply]
    Okay. I think we have found the crux of the disagreement. It's about whether AI-generated images are reliably directly verifiable to their source data by the untrained eye. I think they frequently, deceptively "look right" but get details wrong, whereas you think this doesn't happen much / isn't really a problem. Other places in this thread have already dug into specific examples to dispute how frequently such a thing may or may not happen, so I'd rather not do so here. Leijurv (talk) 19:48, 26 March 2025 (UTC)[reply]
    To be clear, I don't think enough of the images that are added to Wikipedia articles are problematic (in this or another way) that there is any need to ban AI images specifically. The accuracy, etc of images that are not added to articles is completely irrelevant. Thryduulf (talk) 19:56, 26 March 2025 (UTC)[reply]
    WP:OI directly states that "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments" 4300streetcar (talk) 22:23, 26 March 2025 (UTC)[reply]
    In other words, the content in the image must be verifiable through citation to reliable sources, in order to demonstrate that the content accurately depicts published ideas or arguments. If the verification isn't straightforwardly direct, then the image enters the area of original research. isaacl (talk) 22:32, 26 March 2025 (UTC)[reply]
    Keep reading the next few sentences of WP:OI Leijurv (talk) 23:18, 26 March 2025 (UTC)[reply]
    If I may answer the user's question, regarding we should not presume that that happened: it is not presumed. Like any other image that is not the subject of the article, it is not presumed to be suitable. If it's unsuitable – such as low-quality or not helpful – it will be removed, like images a user or anybody else made using Photoshop. Also, WP:V is not about images. For example, lots of illustrations made by Wikimedians are used, as are many other kinds of images that do not appear in reliable sources. People imagine all sorts of things in this thread that just aren't true. Also see Wikipedia:Image use policy. Policy even says: there may be relatively few images available for use on Wikipedia. Editors are therefore encouraged to upload their own images, releasing them under appropriate Creative Commons licenses or other free licenses. Prototyperspective (talk) 12:57, 26 March 2025 (UTC)[reply]
    I don't think illustrations / diagrams made by Wikimedians should be permitted by default. But let's say that they are permitted. I would still argue my point above, which is that even if we allow charts made in "Excel", we should not allow charts made in "AI Excel", because we know for a fact that AI tools generate images that look about right but miss important details. Your argument is that the "AI Excel" charts will be removed on a case-by-case basis because they'll be noticed to be low-quality or not helpful. My argument is that "AI Excel" charts should never be considered reliable, because these AI tools are experts at making things that look generally right (but are subtly wrong); therefore you cannot actually tell at a glance whether the details are right. An AI image could be considered reliable only if its creator were a subject matter expert on both the topic being illustrated and the AI image generator, and Wikipedians are generally neither, making it WP:OR. I argued, and you agreed, that we can't presume that has happened (that the AI tool was used in the very specific, very narrow way that could possibly be reliable). This directly leads to the conclusion that these images are not reliable, hence why I supported this RfC. Leijurv (talk) 16:05, 26 March 2025 (UTC)[reply]
    1. We're not talking about "AI Excel charts".
    2. See earlier points – e.g., that these tools are prone to producing flaws doesn't mean all images made with them are flawed. 3. This doesn't lead to that conclusion. Just like (a) illustrations made by Wikimedians and (b) illustrations not in reliable sources aren't banned and are widely used throughout WP, images made using AI should also not be banned; they are dealt with effectively by policy as it stands.
    Also, you seem to have ignored nearly my entire comment – including repeating things I addressed in it and linking to the policy page from which I quoted – but I won't repeat it and will hopefully now refrain from further discussion here. Prototyperspective (talk) 17:31, 26 March 2025 (UTC)[reply]
    1. I sorta think "AI Excel" is really a great argument, because we can all plainly agree that if you asked ChatGPT to make a chart from a table, it might look right, but you couldn't reliably conclude all the series were plotted right. (Sure, it would work if it wrote a program to make the chart, but this is about AI-generated images.) 2. I do believe that essentially all images made with these tools are flawed, at some level of "flawed". 3. Illustrations made by Wikimedians / not found in reliable sources should be banned by default, with an exception for the case where the illustration is plainly correct / verifiable to an untrained eye. AI images do not meet that standard because of how misleading / superficial they can be, a problem that's only getting worse as they advance. 4. I think I did answer pretty comprehensively. I believe that if AI images made by Wikipedians were allowed, it would be tantamount to "presuming" that said image generation strategy is reliable, but said images are not reliable, therefore AI images should not be allowed. For Photoshop, human intentionality went into the edit, so the editor who did the photoshopping could describe what edits they made from what source material, and we could reasonably trust (and somewhat verify) that that's true, because the Photoshop program itself is reliable. Whereas if you used AI, that may hallucinate details into the image with no explanation or warrant of reliability. Leijurv (talk) 17:52, 26 March 2025 (UTC)[reply]
    Regarding that may hallucinate details into the image with no explanation or warrant of reliability: the user can pick one that does not have such an issue, or reprompt, or edit it to remove these. Images (incl. charts & illustrations) not in reliable sources are the most common types of images on WP and aren't banned. Paintings can be very misleading / superficial. Intentionality can also go into AI images. Neither a paintbrush nor Photoshop has a warrant of reliability.
    "AI Excel charts" is a bad argument because we're not discussing AI chart images but all kinds of AI images / applications: all AI charts and all AI images about anything data-related could be removed while other AI images stay. It's pointless to argue/reason with people here anyway, and I give up. Prototyperspective (talk) 18:04, 26 March 2025 (UTC)[reply]
    The toolchain used to modify an image (whether it includes Photoshop, GIMP, Pixelmator, or something else) isn't judged for the degree of fidelity it preserves to the original photo, which after all is dependent on how the software is used (for instance, generative AI is now a key feature of Photoshop). The verifiability of the resulting output is judged based on whether it remains directly verifiable to the cited reliable sources. isaacl (talk) 18:28, 26 March 2025 (UTC)[reply]
    Tools should absolutely be judged for the degree of fidelity they preserve to the original photo. If CropTool distorted images the way generative AI creates subtly distorted images, then we would want to go back and reassess all of its outputs, and ban its usage in the future. Regarding whether it remains directly verifiable: this makes no sense to me – I see no conceivable way to "verify" an image that was AI-generated by a Wikipedian. There is no citation to check. All that remains is looking at it with the "untrained eye" to see if it looks about right. That process is deceptive, given the nature of AI images looking correct at a glance but being subtly wrong in the details. Leijurv (talk) 19:32, 26 March 2025 (UTC)[reply]
    Yes, as we have been discussing already, images generated by an editor generally fail the original research policy.(*) They aren't verifiable, and can already be rejected under this strong policy that doesn't care if the editor hand-placed each pixel or used tools to generate them.
    (*) As discussed, graphs are the most common outlier to this general situation: they are generated from data points and can be verified using basic skills. Minor photo-editing can also be considered to be verifiable to the original source image through basic skills. The original source image must be verifiable through its publisher. isaacl (talk) 21:47, 26 March 2025 (UTC)[reply]
    Regarding the user can pick one that does not have such an issue or reprompt or edit it to remove these: it's absolutely true that this could happen. Hence my original statement, which I think is extremely clear and correct: Surely, an advanced user of an AI image generator who also has advanced knowledge of the subject matter could tweak and re-prompt the image generator until all the relevant details are correct. This is possible, but not likely. We certainly should not presume this to have occurred. This is unlikely, therefore it directly follows that AI-generated images are unreliable. Regarding aren't banned: this is a misleading statement, because you're glossing over the exception for the case where the illustration is plainly correct / verifiable to an untrained eye. They are only allowed when they are verifiable to an untrained eye. Illustrations certainly don't get a blanket exemption from WP:V/WP:NOR. Paintings are clearly paintings to the untrained eye. But imagine if someone painted an image that looked so incredibly lifelike that it appeared to be a photograph, and yet important details of that painting were invented by the painter and thus unreliable/incorrect; such an image should not be allowed on Wikipedia, since it's WP:OR. Photoshop and paintbrushes do have a warrant of reliability in their fundamental function, in that painting applies the color you intended it to. By analogy, Excel is reliable in that its charts reliably represent the input data. AI image generators do not have this; their output is deceptive to the untrained eye. As I said, if you made a painting that was similarly deceptive, it should not be allowed, for the same reason of WP:NOR. But that's pretty implausible, so I don't think paintings should be blanket banned. With AI image generators the situation is reversed: the default operation is very deceptive and unreliable, and while we can imagine a reliable use case, that scenario is uncommon and implausible. Leijurv (talk) 19:32, 26 March 2025 (UTC)[reply]
  • Support except when the subject of the article is AI-generated images. AI-generated images are neither photographs, nor diagrams created by an intelligent human being. Even illustrations often have randomly generated inaccuracies. Rather than fight every one individually, better to ban them and have only deliberately and carefully created images as options. Mrfoogles (talk) 16:46, 26 March 2025 (UTC)[reply]
    Why do we need to "fight" images? There is no evidence of inaccurate AI images being added to articles or other pages more frequently than inaccurate non-AI images, and existing policies and guidelines have proven sufficient to remove those without the need for any fighting. Thryduulf (talk) 16:52, 26 March 2025 (UTC)[reply]
  • Oppose. From a font design perspective. Ivan (talk) 18:55, 26 March 2025 (UTC)[reply]

Discussion: Ban all AI images

  • I'll repeat my comment from above that a blanket ban on all AI-generated images will never pass. For this RfC to yield any useful result (since I can see this RfC getting 100+ comments and ultimately ending in no consensus), editors will need to be specific about what they want to ban or any exceptions they might have. For example, are AI-generated images of deceased people acceptable? What about AI-generated images of historical landmarks or complex diagrams? What about those meeting ABOUTSELF or those that have been widely reported in RS? etc. Some1 (talk) 13:25, 28 February 2025 (UTC)[reply]
  • (edit conflict) Comment In all the extensive previous discussions, the only two reasons (other than IDONTLIKEIT) that anybody has been able to express for why AI images should not be used are that some of them are and/or might be inaccurate or misleading, and that some might be copyright violations. Existing policy already allows for copyright violations and misleading and/or inaccurate images to be removed (and in some cases deleted). Other than vague and unsubstantiated FUD about being overwhelmed, not a single person has yet managed to explain why the existing policy is inadequate. Anybody supporting a ban (limited or not) needs to do this. Additionally, it would be best if they also explained how they will deal with cases where AI authorship is unclear or disputed (noting this is irrelevant to the current policy). Thryduulf (talk) 13:55, 28 February 2025 (UTC)[reply]
    I think it’s a reasonable assumption that cases where authorship is unclear would be dealt with via community discussion, just like any other content dispute. I fail to see why it should be necessary to create additional bureaucratic requirements like that for the votes of people who disagree with your opinion to be considered. On the “what about existing policy” point, AI is being pushed heavily by Big Tech right now (see e.g. how YouTube is becoming overrun with AI slop), so even if existing policies are enough to deal with AI right now, it seems likely that AI will become a bigger problem on Wikipedia in the future. It’s better to have this discussion now to get out ahead of this new technology and revise policy later than to just not have our policies address AI images. pythoncoder (talk | contribs) 18:03, 28 February 2025 (UTC)[reply]
  • As I discussed previously, I think the key issue is that Wikipedia editors shouldn't be the ones to validate the reliability and accuracy of generated images. Editors should rely on the editorial control of reliable, independent, non-promotional sources, as is done with all content on Wikipedia that requires more than ordinary math skills or basic objective comparison to validate. I feel providing guidance on the basis of how reliability should be determined would better align with general Wikipedia principles, rather than a blanket directive based on mode of creation. isaacl (talk) 17:45, 28 February 2025 (UTC)[reply]
    @Isaacl, how does this standard align with our existing rules? It's not clear to me whether you're saying that we should follow the same rules (e.g., editors are expected to be able to identify which images in c:Category:Fraction 1/2 are suitable for illustrating the article about the fraction One half) or if you are saying that it's okay for them to use their basic skills for non-AI images but AI images require extra proof. WhatamIdoing (talk) 20:16, 3 March 2025 (UTC)[reply]
    I didn't say anything about AI images requiring extra proof. I said the same principles for all content on Wikipedia should be followed. Content created by editors should be directly traceable back to sources using basic skills. Looking at some examples of images, an editor could create a bar graph from a data table, and it can be straightforwardly verified with basic math skills. A map based on one from a source can also typically be compared with basic skills, though that may be less feasible with specialized maps. The veracity of an image of an event typically requires editorial judgement, and so English Wikipedia editors evaluate the editorial control exerted by the image's publisher. How an image is created doesn't change the responsibility for it to be verifiable through citation to appropriate sources. isaacl (talk) 23:15, 3 March 2025 (UTC)[reply]
  • I briefly want to reiterate what I've said above so many times: AI-generated images present significant threats to accuracy that go beyond what we already experience with user-generated images. With the latter, it is very unlikely that someone will be creating a complex depiction or diagram without having subject-matter expertise. Errors in their images can be traced back to human intent or human oversight, something that is not possible to do with the black box of AI-generation (as expanded on by e.g. @Remsense). However, with AI image-generation it is very possible for someone with no SME to create a plausible-looking image in minutes (see the example cited by xaosflux earlier), and even when someone has SME it is much easier to miss details when you are not the one adding them—this is part of why major scientific journals like Nature have wholesale banned AI-generated images. Allowing such images, regardless of their complexity, would result in a flood of unvalidated slop well beyond our maintenance capabilities. This in turn poses the additional problem of those images populating the first pages of Google search results and supplying the training corpus for other diffusion models.
    We owe it to our readers (who, according to research by Getty, overwhelmingly (98%) consider authentic images "pivotal in establishing trust") to provide accurate images. JoelleJay (talk) 18:08, 28 February 2025 (UTC)[reply]
    But how does banning AI images because they are AI achieve this aim, given that not all inaccurate images are AI-generated and not all AI-generated images are inaccurate? Thryduulf (talk) 18:25, 28 February 2025 (UTC)[reply]
    The vast majority of AI-generated images will have inaccuracies, but not ones that can necessarily be noticed as inaccuracies without subject matter expertise. This means that allowing AI images will either create a lot more work for specialized volunteers to verify each one, or lead to many inaccuracies not being caught.
    This isn't the case with current human-made images, as someone who is not an expert will likely not draw possibly misleading details, except if they have a clear intent to deceive (which we can easily deal with). In contrast, AI models are known for adding details and embellishments for the sake of it, adding plausibility at the sacrifice of accuracy. Chaotic Enby (talk · contribs) 18:31, 28 February 2025 (UTC)[reply]
    These gears were drawn by hand, but not by someone who knows how gears work.
    someone who is not an expert will likely not draw possibly misleading details: Um, dubious—discuss? At least with technical subjects, the omission of critical details and the addition of misleading ones happens all the time.
    If you know the subject, you won't accept a bad image no matter what its provenance is. If you don't know the subject, the thing you accept from AI might be no worse than the thing you would generate yourself. WhatamIdoing (talk) 20:27, 3 March 2025 (UTC)[reply]
    Someone who is not an expert should not be creating technical images that would require an expert to detect subtle errors. This should not be a controversial position! JoelleJay (talk) 21:15, 3 March 2025 (UTC)[reply]
    The same thing goes for photographs. Unless the photographer manually edits the photo, it is likely to depict something valid that exists or has existed in real life. The most I can see of a human error is accidentally misidentifying the image. However, AI can make subtle errors that are hard to stop, and they occur in the vast majority of images made. I notice it being particularly bad with architecture, as the lines that AI draws are heavily inconsistent where they ought to be consistent. ✶Quxyz 12:58, 9 March 2025 (UTC)[reply]
  • An originally-generated example of the "Shrimp Jesus" AI imagery that appeared on Facebook in 2024.
    One example I'd like to note is the attached image of "Shrimp Jesus." While it may appear odd or ridiculous to include, we include it on the page for the Dead Internet theory. As AI images become more prevalent in culture, I suspect we will need to have examples of them in the project. A blanket ban is not in line with this. It is impossible to foresee the future exceptions that will pop up like "Shrimp Jesus," and I would not want to impose some blanket bureaucratic policy ban that needs to be overcome every time we find a use that AI is appropriate for. GeogSage (⚔Chat?⚔) 18:48, 28 February 2025 (UTC)[reply]
    @GeogSage, almost everyone who opposes AI-generated images has noted that "ABOUTSELF" or notable published usage would be permitted. JoelleJay (talk) 18:51, 28 February 2025 (UTC)[reply]
    @JoelleJay some people have commented to that effect but I don't think it's most, and of those who have explicitly supported exceptions far from all of them have mentioned those particular ones. There is also no such exception included in the proposal so we cannot assume that those who do not explicitly comment about it would support (or oppose) such an exception. Thryduulf (talk) 18:55, 28 February 2025 (UTC)[reply]
    Enough people have noted this extremely obvious and straightforward exception that it would 100% be considered in any competent close. Conditional constraints not originally in the RfC question are allowed to be introduced by participants. JoelleJay (talk) 18:59, 28 February 2025 (UTC)[reply]
    @JoelleJay Fair, I wanted to note this specific example as part of the reason I'd oppose a ban. The reasons I would oppose banning a user generated image that is bad/inappropriate would apply to an AI generated one. If the AI image isn't appropriate, it being AI isn't the reason why, but something that would apply to a user generated drawing, or low quality graphic. GeogSage (⚔Chat?⚔) 19:01, 28 February 2025 (UTC)[reply]
    @GeogSage, the problem is that non-AI user-generated images have an author who is responsible for and can explain all details in their image. If they don't have subject-matter expertise, they also are very unlikely to attempt to create an image on a technical topic. With AI-generated images, subtle details can be (and usually are) introduced beyond what's in the prompt (that's a major point of transformer and diffusion models!), and these details may require SME to detect (and even SMEs might have trouble noticing small errors, especially when they are in non-intuitive places or wouldn't be made by humans). AI-generated images can be created in seconds, by editors with zero knowledge on the topic, which means a far higher load in images needing SME validation will happen. JoelleJay (talk) 19:25, 28 February 2025 (UTC)[reply]
    Not to toot my own horn, but I'm a bit of an SME on maps. My experience is that when I point out gigantic errors and ethical violations, it is largely ignored. I don't even bother pointing out minor errors. AI images in article space need to be scrutinized like any other image; the fact that they are AI should not really matter. Strict bans will only make AI images harder to detect, because users won't disclose them, and will lead to possible accusations of AI on images that are not AI. GeogSage (⚔Chat?⚔) 19:33, 28 February 2025 (UTC)[reply]
    I have not seen anyone actually argue against an exception for this class of images. Remsense ‥  18:53, 28 February 2025 (UTC)[reply]
    Not explicitly, but there have been multiple comments supporting a ban with no exceptions and there are no exceptions included in the proposal. Thryduulf (talk) 18:56, 28 February 2025 (UTC)[reply]
    I've previously had a very hardline position, but I think I've been compelled now that whether there is a specific guideline or not, the task of enforcing it would be largely the same and based on the same underlying content policies. Thus, I am not sure what a new statute could theoretically do for us other than handle a few people who would encounter and understand an express ban but not WP:NOR or WP:V generally. I think that's a pretty small class of editors, as most will not read any policy at all before uploading. Remsense ‥  19:13, 28 February 2025 (UTC)[reply]
    However, a ban would make it far easier for editors who are not SMEs to just reject a (plausible-looking) technical AI-generated image without needing to validate its accuracy. JoelleJay (talk) 19:27, 28 February 2025 (UTC)[reply]
    That is not a good thing in my opinion. Making it easy to reject stuff without checking its accuracy is not my idea of a good practice. As AI images progress in the next few years/decades, they will likely be integrated with digital cameras and basic image editing software. AI is here, just like the internet is here. I hope that we eventually make a wiki AI image generator that only uses images from the Commons, and have AI-powered bots proofing our articles, but that is likely a few years down the line at best. GeogSage (⚔Chat?⚔) 04:04, 1 March 2025 (UTC)[reply]
    this is specifically why i mentioned only using them in the context of ai (and avoiding generating something yourself for use here unless absolutely necessary). shrimp jesus is an ai trend, so it's fine and dandy
    ...i mean, the usage of the images in that specific context is fine and dandy, not the fact that the context exists in the first place. that is not fine or dandy. i just realized that image has 11 fingers, ow my art bones consarn (prison phone) (crime record) 18:56, 28 February 2025 (UTC)[reply]
  • What about the image in the Stable Diffusion article's infobox? We need nuance about in which areas readers expect fine details to be accurate and/or expect all photorealistic images to be photographs, so I was going to propose something along the lines of "Support a ban only for tangible nonfiction subjects". But then I thought of Architectural renderings and other things labelled "Artist's impression", and also saw the Venn diagram problem above.
    On one hand, it would be unreasonable to ban the image on the DALL-E page. On the other hand, narrower wording might have loopholes. Perhaps a rewording should have computer-related topics as the only exception, and on top of that, require clear labeling as AI-generated. 216.58.25.209 (talk) 20:14, 2 March 2025 (UTC)[reply]
    I think my suggestion of "no AI-generated imagery in any context where a reader is likely to think it is a human-generated image" covers most of those edge cases. If it's obvious that an image is AI-generated, whether because it is inherent to the article topic or because it is clearly labeled as AI-generated, then most of the potential harm will be mitigated. -- LWG talk 23:33, 2 March 2025 (UTC)[reply]
    I think this is correct: We need nuance about in which areas readers expect fine details to be accurate and/or expect all photorealistic images to be photographs. A hand-painted photorealistic oil painting by a professional artist can be inappropriate in some circumstances. Editors will always need to use their judgement to determine whether an image is just an illustration (a lower bar) or making a statement that something is definitely exactly as shown (a higher bar). WhatamIdoing (talk) 20:32, 3 March 2025 (UTC)[reply]
  • The proposition seems far too naive about photography and AI features. Photography is not "human-created" – a camera is a machine, and the extent to which its images are visually accurate is subject to numerous issues of optics and technology. So-called AI features now extend the range of manipulations available in commonplace consumer devices. These include blemish removal and other cosmetic features to make people look better. So, an indiscriminate witch hunt directed at some buzzword such as "AI" is not a good approach. What's needed for images is some guideline for what manipulations are or are not acceptable, regardless of the name given to the technology. Andrew🐉(talk) 22:43, 1 March 2025 (UTC)[reply]
    No one is talking about "AI editing features". The prompt literally says (Where "AI-generated" means wholly created by generative AI, not a human-created image that has been modified with AI tools.) You've also already !voted. JoelleJay (talk) 07:25, 2 March 2025 (UTC)[reply]
    I've moved this down to the discussion section and removed the boldfaced "oppose" since it is a duplicate !vote. -- Tamzin[cetacean needed] (they|xe|🤷) 02:51, 3 March 2025 (UTC)[reply]
  • I oppose this proposal. I will list my opinions on the points used to justify a complete ban.
    1. AI images may be inaccurate: That's more a problem of the prompter than of the AI itself. A general and vague prompt will generate a random image with the general and vague idea requested; the trick is to provide long and detailed prompts... and if it's still inaccurate, make it even more detailed. Like any technology, AI has a learning curve. Besides, it is a really new technology, growing each day, and problems like this inaccuracy in the images are being fixed as we speak. Should we ban now, and reconsider later? Not a great plan. Changing a consensus is a difficult thing to do, and those who support a ban now and don't use AI for other purposes elsewhere may fail to notice improvements if someone proposes a change at some point. So it may be better to treat inaccurate images just as we do right now, on a case-by-case basis. And note that the level of accuracy we need also depends on what it is we want to illustrate. Sometimes it's molecule structures, sometimes it's the way birds fly.
    2. AI images may be copyright violations: Let the WMF worry about that. Right now, under the current laws and in the current state of things, AI images are public domain and free to use here (and safer than fair-use images, in fact). The scenario may change, but to speculate that it will is just that, speculation. But what if it does change, and AI images suddenly become subject to copyright? Not a problem: all AI images are tagged with {{PD-algorithm}}, so a bot could delete the whole lot in minutes (see the sketch after this list). Also, remember that Commons has an official guideline on AI-generated media to prevent copyright misuse, such as an AI image of Fin Shepard and Captain America shaking hands.
    3. We can create images without using AI: Yes, of course... and we don't really need cars either; we can use carriages pulled by horses. Thing is, AI can be of immense help for Wikipedia if used right. The problem before AI was a thing was that the free license places huge limits on our ability to illustrate articles. Fair use is for very strict and limited cases (and not available in all WMF projects); public domain by age means incredibly dated images created in a cultural zeitgeist completely alien to the present day; freely licensed by the author requires locating an author who does so; and self-published photos and images require you to be there to take the photo or to have the skills to make an image from scratch. AI is, for the first time since I started editing Wikipedia almost two decades ago, a tool that can create free images on demand, specifically tailored to the situation I may need.
    4. AI is dangerous because [opinion about AI technology in general]: Even if it is not specifically disruptive, the spirit of Wikipedia:Do not disrupt Wikipedia to illustrate a point would suggest avoiding forming opinions about Wikipedia-related matters based on opinions about real-life matters. Cambalachero (talk) 02:39, 4 March 2025 (UTC) Moved duplicate !vote to discussion JoelleJay (talk) 22:20, 4 March 2025 (UTC)[reply]
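    (For illustration, a minimal sketch of the enumeration step such a cleanup bot would need, using the standard MediaWiki API. It only lists the tagged files; deleting them would additionally require an authorized bot account. That {{PD-algorithm}} populates a tracking category named "Category:PD-algorithm" on Commons is an assumption made for this example.)

        import requests

        API = "https://commons.wikimedia.org/w/api.php"

        def tagged_files(category="Category:PD-algorithm"):
            # Yield titles of files in the given tracking category,
            # following API continuation until the category is exhausted.
            params = {
                "action": "query",
                "list": "categorymembers",
                "cmtitle": category,  # assumed tracking category for {{PD-algorithm}}
                "cmtype": "file",
                "cmlimit": "max",
                "format": "json",
            }
            while True:
                data = requests.get(API, params=params).json()
                for member in data["query"]["categorymembers"]:
                    yield member["title"]
                if "continue" not in data:
                    break
                params.update(data["continue"])

        for title in tagged_files():
            print(title)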
  • Oppose for the same reasons I listed above. If I want to generate something like a diagram to depict an abstract concept and use an AI to draw the lines from a prompt vs by dragging virtual lines around my photo editor, why would this be a problem - so long as I'm actually reviewing it before I choose to publish the file. — xaosflux Talk 13:30, 5 March 2025 (UTC)[reply]
  • To those who oppose a blanket ban because AI is merely a tool – how is an average reader supposed to tell how much care went into an AI-generated image if the caption mentions only that it is AI-generated? Would they think it was created by an SME and heavily edited and reviewed by other SMEs for accuracy, or that it was created by someone with little subject knowledge simply typing a simple prompt into an image generator? Would your average reader presume it's inaccurate if they were merely told in the caption that it was AI-generated? If AI-generated images are presumptively inaccurate (or at least prompt substantial doubt about accuracy), isn't there a great risk that they would cause readers to doubt the rest of the project? 4300streetcar (talk) 18:31, 6 March 2025 (UTC)[reply]
    The question arises as well: how do you escape that presumption of inaccuracy from your average reader if they find out an image is AI-generated? Should all AI-generated images used here undergo review before they're published, and should AI-generated images directly mention in the caption that the image has been reviewed by real humans with SME? Are there other methods to assuage readers (who may not be familiar with Wikipedia policies) that the AI-generated image is accurate? 4300streetcar (talk) 18:33, 6 March 2025 (UTC)[reply]
    What makes you think the public does not doubt the rest of the project as is? Remember, Wikipedia is user-generated, and everybody knows that. Even citing sources, being neutral and all, the average Joe will always suspect that Wikipedia is written by kids who have no idea or by guys with hidden agendas, and take whatever is written here with a grain of salt. The presence or absence of AI images is unlikely to change either way the reputation that Wikipedia already has. Cambalachero (talk) 23:56, 14 March 2025 (UTC)[reply]
    The encyclopedia has built up quite a reputation since 2008. Its general reliability has been discussed by large media outlets. Though I do not believe that the image of Wikipedia is of much concern to Wikipedia outside of areas where the five pillars and public ideals overlap. ✶Quxyz 21:13, 15 March 2025 (UTC)[reply]
  • A thought: wherever we end up falling on the question of when and how AI use is permissible, there seems to be near universal agreement that we should set an expectation that when/if AI is used, its use should be disclosed, both for clarity in helping editors assess content and for transparency in helping readers understand what they are looking at. -- LWG talk 21:37, 10 March 2025 (UTC)[reply]
  • Lots of great points above. A few cases where I've seen AI to be helpful as a major part of generation:
    Specialized models - both trained from scratch and fine tuned.
    3D protein structure, as noted above. AlphaFold and Boltz generate excellent structures.
    Other systems where a model is trained specifically to produce outputs that can be compared to the work of subject experts, and benchmarked. Ideally the outputs that we integrate would already have been published elsewhere in a catalog, but we certainly want those on Commons and in articles, not rejected due to the tools used to make them. Plenty of these exist; the best ones are sufficiently reliable that they are used to populate databases of diagrams or visualizations used by professionals. Not largely used yet on the projects, but they could be (and when they are, may be suitable for a whole category of articles, as with Category:Proteins).
    Icons - for keys in diagrams.
    User-generated sketches or images - most people, including most artists who already provide illustrations, can do a better job with current state of the art tools, including fine tuning them on our own work so that the outputs are comparable in tone or style to what we would make with non-AI software.
    Illustrations of hypotheticals - in hypothetical past, present, and future scenarios. This is considered useful in the fields of illustrating maps, diagrams, &c. If WP/Commons is not an ideal primary point of publication, it would be interesting to compare this to guidance for other illustrations & options for flagging peer review of same.
– SJ + 07:31, 23 March 2025 (UTC)[reply]


BLPs

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Are AI-generated images (generated via text prompts, see also: text-to-image model) okay to use to depict BLP subjects? The Laurence Boccolini example was mentioned in the opening paragraph. The image was created using Grok / Aurora, a text-to-image model developed by xAI, to generate images... As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts.
AI-generated image of Laurence Boccolini
Some1 (talk) 12:34, 31 December 2024 (UTC)[reply]
AI-generated cartoon portrait of Germán Larrea Mota-Velasco

03:58, January 3, 2025: Note that these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the Germán Larrea Mota-Velasco example, which was generated using DALL-E, another text-to-image model).

Some1 (talk) 11:10, 3 January 2025 (UTC)[reply]

notified: Wikipedia talk:Biographies of living persons, Wikipedia talk:No original research, Wikipedia talk:Manual of Style/Images, Template:Centralized discussion -- Some1 (talk) 11:27, 2 January 2025 (UTC)[reply]

  • No. I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) Chaotic Enby (talk · contribs) 12:46, 31 December 2024 (UTC)[reply]
    That AI generated image looks like Dick Cheney wearing a Laurence Boccolini suit. ScottishFinnishRadish (talk) 12:50, 31 December 2024 (UTC)[reply]
    There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them unless they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. Thryduulf (talk) 16:45, 31 December 2024 (UTC)[reply]
  • No. Well, that was easy.
    They are fake images; they do not actually depict the person. They depict an AI-generated simulation of a person that may be inaccurate. Cremastra 🎄 u — c 🎄 20:00, 31 December 2024 (UTC)[reply]
    Even if the subject uses the image to identify themselves, the image is still fake. Cremastra (u — c) 19:17, 2 January 2025 (UTC)[reply]
  • No, with the caveat that it's mostly on the grounds that we don't have enough information, and when it comes to BLPs we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers, it would be fair to revisit any restrictions, but in this case I strongly believe that we should follow, not lead. Horse Eye's Back (talk) 20:37, 31 December 2024 (UTC)[reply]
  • No. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. pythoncoder (talk | contribs) 21:30, 31 December 2024 (UTC)[reply]
  • No except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -Kj cheetham (talk) 21:32, 31 December 2024 (UTC)[reply]
  • Yes, when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use any image to represent a BLP subject this is already policy. Thryduulf (talk) 21:46, 31 December 2024 (UTC)[reply]
    How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real Laurence Boccolini has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression.
    How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. Cremastra 🎄 u — c 🎄 21:54, 31 December 2024 (UTC)[reply]
    How well can we determine how accurate a representation it is? in exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation any image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. Thryduulf (talk) 23:54, 31 December 2024 (UTC)[reply]
    I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra 🎄 u — c 🎄 00:14, 1 January 2025 (UTC)[reply]
    I'm guessing your filter bubble doesn't include Facetune and their notorious Filter (social media)#Beauty filter problems. WhatamIdoing (talk) 02:46, 2 January 2025 (UTC)[reply]
    A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was Photoshopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a Getty poll, 87% of respondents want AI-generated art to at least be transparent, and 98% consider authentic images "pivotal in establishing trust".
    And even if you disagree with all that, can you not see the larger problem of AI images on Wikipedia getting propagated into generative AI corpora? JoelleJay (talk) 04:20, 2 January 2025 (UTC)[reply]
    I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so.
    I think we're Wikipedia:Here to build an encyclopedia, not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. WhatamIdoing (talk) 07:40, 2 January 2025 (UTC)[reply]
  • Absolutely no fake/AI images of people, photorealistic or otherwise. How is this even a question? These images are fake. Readers need to be able to trust Wikipedia, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. :bloodofox: (talk) 22:24, 31 December 2024 (UTC)[reply]
  • No except for edge cases (mostly, if the image itself is notable enough to go into the article). Gnomingstuff (talk) 22:31, 31 December 2024 (UTC)[reply]
  • Absolutely not, except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. JoelleJay (talk) 23:06, 31 December 2024 (UTC)[reply]
  • No with no exceptions. Carrite (talk) 23:54, 31 December 2024 (UTC)[reply]
  • No. We don't permit falsifications in BLPs. Seraphimblade Talk to me 00:30, 1 January 2025 (UTC)[reply]
    For the requested clarification by Some1, no AI-generated images (except when the image itself is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. Actual photographs of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is not an image of the person. Seraphimblade Talk to me 05:42, 3 January 2025 (UTC)[reply]
  • No, but with exceptions. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)[reply]
    Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —David Eppstein (talk) 05:41, 3 January 2025 (UTC)[reply]
  • No, and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than Frankenstein images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. Randy Kryn (talk) 01:34, 1 January 2025 (UTC)[reply]
    Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)[reply]
    Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – Blind Lemon Jefferson was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. WhatamIdoing (talk) 07:44, 2 January 2025 (UTC)[reply]
  • Yes, so long as it is an accurate representation. Hawkeye7 (discuss) 03:40, 1 January 2025 (UTC)[reply]
  • No, not for BLPs. Traumnovelle (talk) 04:15, 1 January 2025 (UTC)[reply]
  • No Not at all relevant for pictures of people, as the accuracy is not sufficient and the image can misrepresent them. Also (and I'm shocked that it seems no one has mentioned this), what about copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --SuperJew (talk) 07:10, 1 January 2025 (UTC)[reply]
    Under US law / the Copyright Office, machine-generated images, including those by AI, cannot be copyrighted. That also means that AI images aren't treated as derivative works.
    What is still under legal concern is whether the use of bodies of copyrighted works, without any approval or license from the copyright holders, to train AI models is fair use or not. There are multiple court cases where this is the primary challenge, and none have reached a decision yet. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with freely licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint. — Masem (t) 14:29, 1 January 2025 (UTC)[reply]
  • No, I'm in agreement with Seraphimblade here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI-generated photo as truth. Doawk7 (talk) 09:32, 1 January 2025 (UTC)[reply]
    So you just said a portrait can be used because Wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. Fyunck(click) (talk) 10:07, 2 January 2025 (UTC)[reply]
    To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person."
    However, I really want to stick to what you say at the end there: Heck, most AI looks closer to the real thing than any portrait.
    That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.

    Per the wording of the RfC of "depict BLP subjects," I don't think there would be any valid case to utilize AI images. I hold a strong No. Doawk7 (talk) 04:15, 3 January 2025 (UTC)[reply]
  • No. We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. Di (they-them) (talk) 19:33, 1 January 2025 (UTC)[reply]
    Gisèle Pelicot?
  • Maybe There was a prominent BLP image which we displayed on the main page recently. (right) This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the Laurence Boccolini composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. Andrew🐉(talk) 08:30, 1 January 2025 (UTC)[reply]
    Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. Cremastra (u — c) 14:18, 1 January 2025 (UTC)[reply]
    Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it says point-blank "AI-generated image." Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)[reply]
    Commons descriptions do not appear on our articles. CMD (talk) 10:28, 2 January 2025 (UTC)[reply]
    People taking a quick glance at an infobox image that looks pretty much like a photograph are not going to scrutinize commons tagging. Cremastra (u — c) 14:15, 2 January 2025 (UTC)[reply]
    Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing. — Masem (t) 14:33, 1 January 2025 (UTC)[reply]
    Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Wikipedia:WikiProject AI Cleanup/AI images in non-AI contexts for some examples). Cremastra (u — c) 14:37, 1 January 2025 (UTC)[reply]
    Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)[reply]
    Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis)
    ...human [illustration] is not going to change or distort a person's appearance in the same way an AI image would. [Drawings] done by a [competent] person who is paying attention to what they are doing [...] by [a] person who is aware, while they are making [the drawing], that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra (u — c) 20:56, 1 January 2025 (UTC)[reply]
    @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)[reply]
    I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above: "The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person." Cremastra (u — c) 00:16, 2 January 2025 (UTC)[reply]
    Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)[reply]
    I think all AI-generated images, except simple diagrams, as WhatamIdoing points out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (u — c) 02:30, 2 January 2025 (UTC)[reply]
    To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. Cremastra (u — c) 02:38, 2 January 2025 (UTC)[reply]
    Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? Thryduulf (talk) 02:58, 2 January 2025 (UTC)[reply]
    Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute.
    I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. Cremastra (u — c) 15:30, 2 January 2025 (UTC)[reply]
    Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a Borgesian exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, known paths. JoelleJay (talk) 04:44, 2 January 2025 (UTC)[reply]
    Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)[reply]
    If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)[reply]
    If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)[reply]
    The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)[reply]
    Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. WhatamIdoing (talk) 07:58, 2 January 2025 (UTC)[reply]
    And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex.
    And I don't want to count 100 dots either! JoelleJay (talk) 17:43, 2 January 2025 (UTC)[reply]
    Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. WhatamIdoing (talk) 07:44, 3 January 2025 (UTC)[reply]
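    (A minimal Python sketch of the reproducibility contrast JoelleJay describes above, assuming only numpy and hashlib; the render function, its grid size, and the seed are hypothetical stand-ins for a fixed-path renderer, not the API of any tool named in the thread. The point is only that a deterministic pipeline maps the same data to byte-identical output, so the figure can be re-derived from, and checked against, its raw data; a generative model's path from prompt to pixels offers no analogous guarantee.)

        import hashlib
        import numpy as np

        rng = np.random.default_rng(42)                 # fixed seed stands in for the raw data
        points = rng.uniform(0.0, 1.0, size=(18000, 2))

        def render(pts, size=512):
            """Rasterize points onto a size x size grid by a fixed, known rule."""
            canvas = np.zeros((size, size), dtype=np.uint8)
            idx = np.clip((pts * size).astype(int), 0, size - 1)
            canvas[idx[:, 1], idx[:, 0]] = 255          # same input -> same pixels, every run
            return canvas.tobytes()

        # Two independent renders of the same data hash identically, so anyone
        # holding the data can verify the figure (and, in principle, vice versa).
        assert hashlib.sha256(render(points)).digest() == hashlib.sha256(render(points)).digest()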
  • Comment: when you Google search someone (at least from the Chrome browser), often the link to the Wikipedia article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). Photos of Japan (talk) 09:39, 1 January 2025 (UTC)[reply]
    This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. Thryduulf (talk) 11:39, 1 January 2025 (UTC)[reply]
  • Already opposed a blanket ban: It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I expressed my views there. MichaelMaggs (talk)
    Some editors might oppose a blanket ban on all AI-generated images while, at the same time, being against using AI-generated images (created using text prompts/text-to-image models) to depict living people. Some1 (talk) 14:32, 1 January 2025 (UTC)[reply]
  • No For now at least, let's not let the problems of AI intrude into BLP articles, which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --Masem (t) 14:35, 1 January 2025 (UTC)[reply]
  • I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we prefer photos over paintings (if available). So… we should prefer photos over AI imagery.
    That said, AI imagery is getting good enough that it can be mistaken for a photo… so… if an AI-generated image is the only option (i.e. there is no photo available), then the caption should clearly indicate that we are using an AI-generated image. And that image should be replaced as soon as possible with an actual photograph. Blueboar (talk) 14:56, 1 January 2025 (UTC)[reply]
    The issue with the latter is that Wikipedia images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. Chaotic Enby (talk · contribs) 15:27, 1 January 2025 (UTC)[reply]
    We're here to build an encyclopedia, not to protect commercial search engine companies.
    I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have inaccurate AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image looks like whatever's being depicted. We are not necessarily warranting that the image was created through a specific process, but the image really does need to look like the subject. WhatamIdoing (talk) 03:12, 2 January 2025 (UTC)[reply]
    You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. Photos of Japan (talk) 17:38, 3 January 2025 (UTC)[reply]
    As another editor pointed out in their comment, there's the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet, especially on a site such as Wikipedia and especially on their own biography. WP:BLP says the bios must be written conservatively and with regard for the subject's privacy. Some1 (talk) 18:37, 3 January 2025 (UTC)[reply]
    "Once we can no longer tell the difference, what's the point in banning them?" Sounds like a wolf in sheep's clothing to me. Just because the surface appeal of fake pictures gets better doesn't mean we should let the horse in. Cremastra (u — c) 18:47, 3 January 2025 (UTC)[reply]
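    (A minimal Python sketch of the metadata heuristic Photos of Japan describes above, assuming Pillow is available; the flag_no_exif name and the "no Exif tags at all" threshold are illustrative assumptions rather than an established detection method, and missing metadata is only a weak signal, since Exif can also be stripped deliberately.)

        from PIL import Image

        def flag_no_exif(path):
            """Flag an upload that carries no Exif metadata whatsoever.

            Absence of camera make, model, and timestamp is consistent with
            (though not proof of) a generated rather than photographed image.
            """
            exif = Image.open(path).getexif()   # standard Pillow Exif accessor
            return len(exif) == 0               # no tags at all -> worth a closer look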
    If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. JoelleJay (talk) 04:48, 2 January 2025 (UTC)[reply]
    Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. Thryduulf (talk) 04:52, 2 January 2025 (UTC)[reply]
    But we can assume good faith that a human isn't blatantly copying something. We can't assume that from a generative model like Stable Diffusion, which has been shown to even copy the watermark from Getty's images. Photos of Japan (talk) 05:50, 2 January 2025 (UTC)[reply]
    Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. WhatamIdoing (talk) 07:48, 2 January 2025 (UTC)[reply]
  • Oppose. Yes. I echo my comments from the other day regarding BLP illustrations:

    What this conversation is really circling around is banning entire skillsets from contributing to Wikipedia merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Wikipedia. [...] Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.
    Additionally, [when I say "entire skillsets," I am not] referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Wikipedia is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.

    lethargilistic (talk) 15:41, 1 January 2025 (UTC)[reply]
    Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Wikipedia has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Wikipedia via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Wikipedia. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Wikipedia. lethargilistic (talk) 15:59, 1 January 2025 (UTC)[reply]
    By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)[reply]
    I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images will be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. lethargilistic (talk) 16:29, 1 January 2025 (UTC)[reply]
    Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear WP:OR and outright WP:SYNTH. There's no two ways about it. Articles do not require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. :bloodofox: (talk) 23:39, 1 January 2025 (UTC)[reply]
    I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)[reply]
    Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of WP:SYNTH: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
    A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Wikipedia:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that.
    Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the fake sources LLMs also love to "hallucinate"? :bloodofox: (talk) 03:37, 2 January 2025 (UTC)[reply]
    So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion: "Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources." My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Wikipedia. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review WP:NOTSYNTH because SYNTH is not a policy; NOR is the policy: "If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH." Additionally, not all synthesis is even SYNTH. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. lethargilistic (talk) 08:08, 2 January 2025 (UTC)[reply]
    "training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a human being. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as WP:SYNTH as it gets. Yikes. As for the attempts to pass of prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. :bloodofox: (talk) 09:44, 2 January 2025 (UTC)[reply]
    NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not about the content for you, NOR and SYNTH are irrelevant to your argument, which boils down to WP:IDONTLIKEIT and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. lethargilistic (talk) 09:52, 2 January 2025 (UTC)[reply]
    This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. :bloodofox: (talk) 09:59, 2 January 2025 (UTC)[reply]
  • Maybe: there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. Photos of Japan (talk) 18:14, 1 January 2025 (UTC)[reply]
    That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (see the NORN discussion), now they'll be tasked with dealing with AI-generated ones in BLP articles. Some1 (talk) 20:28, 1 January 2025 (UTC)[reply]
    It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. Thryduulf (talk) 22:14, 1 January 2025 (UTC)[reply]
    That is a good concern you brought up. There is a possibility of the spamming of low-quality AI-generated images which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time, though, that is only a possibility, not yet an actuality, and WP:CREEP states that new policies should address current problems rather than hypothetical concerns. Photos of Japan (talk) 22:16, 1 January 2025 (UTC)[reply]
  • Easy no for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. Cessaune [talk] 19:05, 1 January 2025 (UTC)[reply]
  • No obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. Zaathras (talk) 22:19, 1 January 2025 (UTC)[reply]
  • No to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. XOR'easter (talk) 23:36, 1 January 2025 (UTC)[reply]
    While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. Thryduulf (talk) 01:16, 2 January 2025 (UTC)[reply]
    The thing that amplifies the problem is necessarily a problem. XOR'easter (talk) 02:57, 2 January 2025 (UTC)[reply]
    That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. Thryduulf (talk) 03:04, 2 January 2025 (UTC)[reply]
  • No for all people, per Chaotic Enby. Nikkimaria (talk) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. Nikkimaria (talk) 04:00, 3 January 2025 (UTC)[reply]
  • No - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ("Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant" is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is).
  • If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. FOARP (talk) 14:39, 2 January 2025 (UTC)[reply]
    "we should be steering clear of copyvio" we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to this discussion.
    "if people upload faked images [...] the response should be as it is now" in other words you are saying that the problem is faked images, not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. Thryduulf (talk) 15:14, 2 January 2025 (UTC)[reply]
    The idea that current policies are entirely adequate is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". XOR'easter (talk) 18:36, 2 January 2025 (UTC)[reply]
    I rely on one of those rotating shaft warnings up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ScottishFinnishRadish (talk) 18:41, 2 January 2025 (UTC)[reply]
    "in other words you are saying that the problem is faked images not AI" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt.
    "at least some AI images are legally acceptable for us" - Until they decide which ones that isn't much help. FOARP (talk) 19:05, 2 January 2025 (UTC)[reply]
    Yes – what FOARP said. AI-generated images are fakes and are misleading. Cremastra (u — c) 19:15, 2 January 2025 (UTC)[reply]
    Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. Thryduulf (talk) 19:05, 2 January 2025 (UTC)[reply]
  • No! This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. JuxtaposedJacob (talk) | :) | he/him | 15:00, 2 January 2025 (UTC)[reply]
    Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. JuxtaposedJacob (talk) | :) | he/him | 15:40, 3 January 2025 (UTC)[reply]
  • No, unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Wikipedia, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ModernDayTrilobite (talk • contribs) 15:25, 2 January 2025 (UTC)[reply]
    To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict[ing] a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)[reply]
    If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like Al Capp. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Wikipedia. ModernDayTrilobite (talk • contribs) 19:13, 2 January 2025 (UTC)[reply]
  • No, with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative: if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of original research...) - Nabla (talk) 18:02, 2 January 2025 (UTC)
  • Maybe I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Wikipedia. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask Some1 to abort the RFC. You've asked people to give a yes/no vote on what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Wikipedia against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI-generated. Even today, your smartphone can create a group shot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought out -- Colin°Talk 18:17, 2 January 2025 (UTC)[reply]
  • No This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. Tepkunset (talk) 18:31, 2 January 2025 (UTC)[reply]
  • No. LLMs don't generate answers, they generate things that look like answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate things that look like photos. Using them on BLPs is unacceptable. DS (talk) 19:30, 2 January 2025 (UTC)[reply]
  • No. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). win8x (talk) 19:56, 2 January 2025 (UTC)[reply]
  • No. Uploading alone, although mostly a Commons issue, would already be a problem to me and may have personality rights issues. Illustrating an article with a fake photo (or drawing) of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Wikipedia link, without the disclaimer. ~ ToBeFree (talk) 23:54, 2 January 2025 (UTC)[reply]
  • I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ProcrastinatingReader (talk) 00:48, 3 January 2025 (UTC)[reply]
    A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo, looks like a photo but is not.
    DS (talk) 02:44, 3 January 2025 (UTC)[reply]
    Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. Thryduulf (talk) 02:55, 3 January 2025 (UTC)[reply]
    Not to mention, hyper-realism is a style an artist may use in virtually any medium. Colored pencils can be used to make extremely realistic portraits. If Wikipedia would accept an analog substitute like a painting, there's no reason Wikipedia shouldn't accept an equivalent painting made with digital tools, and there's no reason Wikipedia shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. lethargilistic (talk) 03:45, 3 January 2025 (UTC)[reply]
    For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, faked) photos of human article subjects are somehow a good thing, I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Wikipedia readers when they would encounter fake photos on our BLP articles especially. :bloodofox: (talk) 03:54, 3 January 2025 (UTC)[reply]
    Wikipedia's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, WP:AGF means assuming that people you disagree with are not trying to hurt Wikipedia. Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Wikipedia and why our opposition to these immediate proposals comes from a desire to prevent harm to Wikipedia. I suggest taking a break to reflect on that, matey. lethargilistic (talk) 04:09, 3 January 2025 (UTC)[reply]
    Look, I don't know if you've been living under a rock or what for the past few years but the reality is that people hate AI images and dumping a ton of AI/fake images on Wikipedia, a place people go for real information and often trust, inevitably leads to a huge trust issue, something Wikipedia is increasingly suffering from already. This is especially a problem when they're intended to represent living people (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. :bloodofox: (talk) 04:55, 3 January 2025 (UTC)[reply]
    Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Wikipedia when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. lethargilistic (talk) 06:10, 3 January 2025 (UTC)[reply]
    To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation (unlikely, given my lack of painting skills, but let's not get lost in the metaphor), but if my painting hasn't been discussed anywhere besides Wikipedia, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ModernDayTrilobite (talk • contribs) 05:57, 3 January 2025 (UTC)[reply]
    An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Wikipedia. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Wikipedia would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)[reply]
    Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…).
    These things are fakes. The analysis stops there. FOARP (talk) 10:48, 4 January 2025 (UTC)[reply]
    Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Wikipedia because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—the opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
    At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing SOLELY on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.
    Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been verifiability, not provenance or falsity. Sometimes, IMO, that has led to disaster and Wikipedia saying things I know to be factually untrue despite the contents of reliable sources. But that is the policy. We compare the contents of Wikipedia to reliable sources, and the contents of Wikipedia are considered verifiable if they cohere.
    I ask again: If Wikipedia's response to the creation of AI imaging tools is to crack down on all artistic contributions to Wikipedia (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to limit what humans can do on Wikipedia, what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? lethargilistic (talk) 23:31, 4 January 2025 (UTC)[reply]
    "Verifiable by comparing them to a reliable source" - comparing two images and saying that one looks like the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing.
    "Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake."" - Try presenting a paraphrasing as a quotation and see what happens.
    "Imagine it's 2002 and Wikipedia is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..." - This basically happened, and is the origin of WP:NOTGALLERY. Wikipedia is not a host for original works. FOARP (talk) 22:01, 6 January 2025 (UTC)[reply]
    Comparing two images and saying that one looks like the other is not "verifying" anything. Comparing text to text in a reliable source is literally the same thing.
    The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow more unverifiable simply because it is created in a lifelike style.
    Try presenting a paraphrasing as a quotation and see what happens. Besides what I just said, nobody is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.)
    This basically happened, and is the origin of WP:NOTGALLERY. That is not the same thing. User-generated images that illustrate the subject are not prohibited by WP:NOTGALLERY. Wikipedia is a host of encyclopedic content, and user-generated images can have encyclopedic content. lethargilistic (talk) 02:41, 7 January 2025 (UTC)[reply]
    Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. Cremastra (u — c) 02:44, 7 January 2025 (UTC)[reply]
    Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. lethargilistic (talk) 02:57, 7 January 2025 (UTC)[reply]
    So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still not an image of the person regardless of whether random Wikipedia editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. JoelleJay (talk) 22:52, 7 January 2025 (UTC)[reply]
    +1 to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's trying to depict the person. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. Cremastra (u — c) 23:18, 7 January 2025 (UTC)[reply]
    You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy.
    But to address your actual point: Any image—any photo—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery.
    Finally, a human being is responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—Is it an appropriate likeness? lethargilistic (talk) 10:20, 8 January 2025 (UTC)[reply]
    (Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Wikipedia image. Simple as. lethargilistic (talk) 10:32, 8 January 2025 (UTC)[reply]
    We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. isaacl (talk) 23:11, 7 January 2025 (UTC)[reply]
  • Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are not photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was added then removed from his article:
    AI-generated cartoon portrait of Germán Larrea Mota-Velasco by DALL-E
    Pinging people who !voted No above: User:Chaotic Enby, User:Cremastra, User:Horse Eye's Back, User:Pythoncoder, User:Kj cheetham, User:Bloodofox, User:Gnomingstuff, User:JoelleJay, User:Carrite, User:Seraphimblade, User:David Eppstein, User:Randy Kryn, User:Traumnovelle, User:SuperJew, User:Doawk7, User:Di (they-them), User:Masem, User:Cessaune, User:Zaathras, User:XOR'easter, User:Nikkimaria, User:FOARP, User:JuxtaposedJacob, User:ModernDayTrilobite, User:Nabla, User:Tepkunset, User:DragonflySixtyseven, User:Win8x, User:ToBeFree --- Some1 (talk) 03:55, 3 January 2025 (UTC)[reply]
    Still no. I thought I was clear on that, but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g., a musician uses AI to make an album cover.
    (this isn't even a good example, it looks more like Steve Bannon)
    Gnomingstuff (talk) 04:07, 3 January 2025 (UTC)[reply]
    Was I unclear? No to all of them. XOR'easter (talk) 04:13, 3 January 2025 (UTC)[reply]
    Still no, because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. pythoncoder (talk | contribs) 04:24, 3 January 2025 (UTC)[reply]
    I still think no. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair-use copyrighted image, but removed by one step. The image use policy prohibits us from using fair-use images for BLPs, so I don't think we should entertain this loophole. If we do end up allowing AI images in BLPs, that just undermines the rationale for not allowing fair use in the first place. Di (they-them) (talk) 04:40, 3 January 2025 (UTC)[reply]
    No, those are not okay, as this will just cause arguments from people saying a picture is obviously AI-generated and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. win8x (talk) 05:27, 3 January 2025 (UTC)[reply]
    No to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ModernDayTrilobite (talk • contribs) 05:44, 3 January 2025 (UTC)[reply]
    Thanks for the ping, yes I can, the answer is no. ~ ToBeFree (talk) 07:31, 3 January 2025 (UTC)[reply]
    No, and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. Randy Kryn (talk) 09:28, 3 January 2025 (UTC)[reply]
    The RfC question has not been changed; another editor was complaining that the RfC question did not make a distinction between photorealistic and non-photorealistic AI-generated images, so I had to add a note to the intro and ping the editors who'd !voted No to clarify things. It has only been 3 days; there's still 27 more days to go. Some1 (talk) 11:18, 3 January 2025 (UTC)[reply]
    Also answering No to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted, and the "30 days" is mostly indicative rather than an actual deadline for an RfC. Chaotic Enby (talk · contribs) 14:52, 3 January 2025 (UTC)[reply]
    The RfC question hasn't been changed; see my response to Zaathras below. Some1 (talk) 15:42, 3 January 2025 (UTC)[reply]
    No, that's an even worse approach. — Masem (t) 13:24, 3 January 2025 (UTC)[reply]
    No. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the subject, not by machines trying to simulate an image. Besides, the given example is horribly drawn. Cremastra (u — c) 15:03, 3 January 2025 (UTC)[reply]
    I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (And just to be clear, the AI-generated image of Germán Larrea Mota-Velasco is not recognizable as such.) Horse Eye's Back (talk) 16:06, 3 January 2025 (UTC)[reply]
    I said *NO*. FOARP (talk) 10:37, 4 January 2025 (UTC)[reply]
    No. As said above, having such images means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --SuperJew (talk) 01:12, 5 January 2025 (UTC)[reply]
    Still no. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. Tepkunset (talk) 20:43, 6 January 2025 (UTC)[reply]
  • Absolutely not. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Wikipedia is better than this. Retswerb (talk) 10:16, 3 January 2025 (UTC)[reply]
  • Comment: The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. Zaathras (talk) 14:33, 3 January 2025 (UTC)[reply]
    The RfC question hasn't been modified; I've only added a note (at 03:58, January 3, 2025) clarifying that these images can be either photorealistic or non-photorealistic in style. I pinged everyone who'd !voted No to make them aware. I could remove the note if people prefer that I do (but the original RfC question is exactly the same [25] as it is now, so I don't think the addition of the note makes a whole ton of difference). Some1 (talk) 15:29, 3 January 2025 (UTC)[reply]
  • No At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. pillowcrow 21:34, 3 January 2025 (UTC)[reply]
  • Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI, as the term is currently used, is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]
  • No. Wikipedia is made by and for humans. I don't want to become Google. Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. SWinxy (talk) 00:03, 4 January 2025 (UTC)[reply]
  • No. Generative AI may have its place, and it may even have a place on Wikipedia in some form, but that place isn't in BLPs. There's no reason to use fabricated images of someone over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles about people. Using nothing would genuinely be a better choice than generated images. SmittenGalaxy | talk! 01:07, 4 January 2025 (UTC)[reply]
  • No due to reasons of copyright (AI harvests copyrighted material) and verifiability. Gamaliel (talk) 18:12, 4 January 2025 (UTC)[reply]
  • No. Even if you are willing to ignore the inherently fraught nature of using AI-generated anything in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. Ithinkiplaygames (talk) 19:53, 4 January 2025 (UTC)[reply]
    "There's no guarantee the images will actually look like the person in question." There is no guarantee any image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used in BLP articles. Thryduulf (talk) 20:39, 4 January 2025 (UTC)[reply]
  • Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—S Marshall T/C 01:17, 5 January 2025 (UTC)[reply]
    This subsection is about purely AI-generated works, not about AI-enhanced ones. Chaotic Enby (talk · contribs) 01:23, 5 January 2025 (UTC)[reply]
  • No. Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the subject," - User:RossEvans19 (talk) 02:12, 5 January 2025 (UTC)[reply]
  • Yes, depending on the specific case. One can use drawings by artists, even caricatures. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider the numerous images of Jesus. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking of "medical articles"... One might actually use AI-generated images of certain biological objects like proteins or organelles. Of course, a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by AlphaFold would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. My very best wishes (talk) 02:50, 5 January 2025 (UTC)[reply]
    This is complicated, of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and cleverer than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. My very best wishes (talk) 03:19, 5 January 2025 (UTC)[reply]
  • No, I think there's legal and ethical issues here, especially with the current state of AI. Clovermoss🍀 (talk) 03:38, 5 January 2025 (UTC)[reply]
  • No: Obviously, we shouldn't be using AI images to represent anyone. Lazman321 (talk) 05:31, 5 January 2025 (UTC)[reply]
  • No. Too risky for BLPs. Besides, if people want AI-generated content over editor-made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity, and effort to give them our best, not a program's. Alanscottwalker (talk) 14:51, 5 January 2025 (UTC)[reply]
  • No. As AI's grasp on the Internet grows stronger and stronger, it's important that Wikipedia, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further blurring the boundaries between what's real and what's not. – zmbro (talk) (cont) 16:52, 5 January 2025 (UTC)[reply]
  • No, not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as this has already been going on for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. scope_creepTalk 20:19, 5 January 2025 (UTC)[reply]
  • No for natural images (e.g. photos of people). Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. spintheer (talk) 20:37, 5 January 2025 (UTC)[reply]
  • No I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. LEPRICAVARK (talk) 22:26, 5 January 2025 (UTC)[reply]
  • No. I understand the comparison to using illustrations in BLP articles, but in all honesty I've always viewed that as less preferable than no picture at all. Images of a person are typically presented in context, such as a performer on stage or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI-generated and isn't representative of anything the person specifically did, but is rather an approximation. Tpdwkouaa (talk) 06:50, 6 January 2025 (UTC)[reply]
  • No - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had significant coverage and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. Robert McClenon (talk) 02:31, 7 January 2025 (UTC)[reply]
    So you will be arguing for the removal of the lead images at Banksy, CGP Grey, etc. then? Thryduulf (talk) 06:10, 7 January 2025 (UTC)[reply]
    At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. DS (talk) 19:18, 7 January 2025 (UTC)[reply]
  • Strong no per bloodofox. —Nythar (💬-🍀) 03:32, 7 January 2025 (UTC)[reply]
  • No for AI-generated BLP images. Mrfoogles (talk) 21:40, 7 January 2025 (UTC)[reply]
  • No - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on the unattributed work of photographers who didn't release their work into the public domain. I don't care if it is an open legal loophole somewhere; IMO even doing away with the fair-use restriction on BLPs would be morally less wrong. I suspect the people on whose work the LLMs in question were trained would also take less offense at that option. Daß Wölf 23:25, 7 January 2025 (UTC)[reply]
  • No. WP:NFC says that "Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded", as is the case for almost all portraits of living people. While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if no images of the person were used, how on Earth would we trust the output?) RunningTiger123 (talk) 02:43, 8 January 2025 (UTC)[reply]
  • No, AI images should not be permitted on Wikipedia at all. Stifle (talk) 11:27, 8 January 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Expiration date?

"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)[reply]

  • No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. FOARP (talk) 05:27, 5 January 2025 (UTC)[reply]
  • An end date is a positive suggestion. Consensus systems like Wikipedia's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Wikipedia goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. lethargilistic (talk) 10:22, 5 January 2025 (UTC)[reply]
  • Agree with FOARP, no need for an end date. If something significantly changes (e.g. reliable sources/news outlets such as the New York Times, BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. Some1 (talk) 11:39, 5 January 2025 (UTC)[reply]
    Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Wikipedia should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. lethargilistic (talk) 03:07, 6 January 2025 (UTC)[reply]
  • WP:Consensus can change on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. CMD (talk) 03:15, 6 January 2025 (UTC)[reply]
    An arbitrary sunset date might reduce the number of discussions before then. With no date for re-visiting the subject, then why not next month? And every month after that, until the rules align with my personal preferences? An agreed-upon date could function as a way to discourage repetitive discussions. WhatamIdoing (talk) 03:25, 3 February 2025 (UTC)[reply]
    That's a decent summary of how RfA eventually got adjusted. CMD (talk) 03:30, 3 February 2025 (UTC)[reply]
    If you opened discussions every month about the same topic, with nothing having changed beforehand, they would quickly get closed, and at some point it would be considered disruptive editing. If anything, I don't think there's any evidence of this being a problem in need of a solution. Chaotic Enby (talk · contribs) 10:10, 3 February 2025 (UTC)[reply]
  • No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow the new practices. As for the technology, the situation seems to have been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, and many more from the past, but certainly not all retouched elements and generated photos available right now, even if there were a readily accessible tool or app that enabled ordinary people to reliably do so.
Throughout history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. Daß Wölf 22:17, 9 January 2025 (UTC)[reply]
  • This shouldn't need an expiration date, but in practice I think it is a good idea because this is a fast-changing field. Too many policies/guidelines become the way that we do things simply because that is the way we have done them in the past and any attempt to change them gets slapped down, or nobody can be bothered to try to change them. Phil Bridger (talk) 10:35, 3 February 2025 (UTC)[reply]
    Instead of having a strict expiration date, should we have something like "this consensus should be discussed again in X amount of time?" Even then, I'm not too sure whether technological improvements alone would mean that our policy on living people (mostly coming from ethical considerations) should expire – a more advanced AI shouldn't automatically bypass the ethical issues. Chaotic Enby (talk · contribs) 10:50, 3 February 2025 (UTC)[reply]
    Given there is no real agreement above about what exactly the issues are and why (it's mostly just a lot of people with similarly articulated vague fears), I don't think it is possible to say that they will or will not apply to a more advanced AI. Thryduulf (talk) 12:03, 3 February 2025 (UTC)[reply]
    In that case, if we're not even sure whether or not a more advanced AI will solve these issues, why pick an arbitrary expiration date and assume it does? I'm okay for discussing this again in the future, but automatically assuming that AI in a few years will have solved it and put an expiration date on the policy is not the way to go. Chaotic Enby (talk · contribs) 13:06, 3 February 2025 (UTC)[reply]
    I certainly don't see a more advanced AI as meaning that the policy should be abandoned. It would be equally possible to strengthen the policy. The only thing of which I am pretty sure is that the external forces will be different in a couple of years. Phil Bridger (talk) 14:19, 3 February 2025 (UTC)[reply]
    I do fully agree, which is why I think "we should rediscuss this in X years" is a more productive way to deal with it than "the policy will expire in X years", in order to not carry any presupposition about which direction the policy should take by then. Chaotic Enby (talk · contribs) 14:32, 3 February 2025 (UTC)[reply]
    I agree there shouldn't be any presupposition, but in practice just "we should rediscuss this" doesn't actually require a discussion, and the longer in the future it is from now, the greater the inertia to change will be (regardless of what direction that change should be in). Having the policy expire without an active consensus for it to continue does not presuppose that the current policy is the best of all the options available (for the reasons I explained at length in the discussion, it isn't even the best now, let alone in the future, but that's a different argument). Thryduulf (talk) 14:47, 3 February 2025 (UTC)[reply]
    That is indeed a good point, but having the policy expire does presuppose that having no policy will be a better option – and, even if you do believe that it is, that is clearly not the consensus of other editors in the discussion above. Chaotic Enby (talk · contribs) 15:09, 3 February 2025 (UTC)[reply]
    Regardless of your view of this or any other policy, I cannot agree that a policy continuing to exist when there is no consensus that it should continue to exist is of benefit to the project. This applies to our strongest, best-worded policies, like speedy deletion for copyright violations; to policies like this vaguely defined disapproval; and to everything in between. Thryduulf (talk) 02:11, 4 February 2025 (UTC)[reply]
    I completely agree that this is a good idea. I've noticed a huge influx of AI-related policy discussions. For me, it isn't so much that the tools people are using will get better and may eventually be OK, but that assumptions are being made about the tools to inform these decisions. When those assumptions are no longer true, should we make sure that the decisions hold on other grounds? Maybe "expiration date" is not the right word, but a date marker on policy changes born out of discussions would be a good signal that, "hey, this policy hasn't been revisited for over a year, and it's about something whose rate of change is much faster than one year." Zentavious (talk) 19:13, 5 February 2025 (UTC)[reply]
    Agree with the idea of a date marker! Better than making it automatically expire, while still pointing out that Wikipedia:Consensus can change especially in fast-moving fields. Chaotic Enby (talk · contribs) 19:32, 5 February 2025 (UTC)[reply]
    In practice, there might be a few technical blockers to implementing this. The one that stands out to me is that there aren't any rules in place (to my knowledge) linking policy edits back to policy discussions. Would it be a good idea to make a guideline requiring content changes to reference where consensus occurred?
    Separate from that guideline discussion, we could apply some kind of non-intrusive text to the subheads of policy pages to signify when consensus was last reached. Any thoughts? Zentavious (talk) 22:19, 6 February 2025 (UTC)[reply]
    I would support this for policies in general, but we'd have to figure out how to do it retroactively, what to prioritize, etc., because good god is there out of date stuff everywhere. (An analogous example is the RfC on WP:SPS, where the policy on whether websites are "self-published" quotes an Internet guide from literally 2000, and no one ever noticed until this year how wildly irrelevant it was to the internet beyond the early 2000s.) Gnomingstuff (talk) 07:36, 7 February 2025 (UTC)[reply]
    Potentially a good start would be to just implement it going forward. If there were actual indications on the page, as opposed to just the edit history, it would eventually become obvious which points are dated and which are not. That said, there could be recommendations for how to backfill. My open question is: when a content change is made as a result of a larger discussion, and small changes are made to that section over time on top of that, at what point does the link back to the discussion no longer make sense? This point makes me think the links would act more as a process marker (e.g., how did we get here) and less as a "this change is X years old" marker. Though one could infer the latter in some cases! Zentavious (talk) 15:32, 7 February 2025 (UTC)[reply]
I don't think a formal date to review the policy is necessary here. Normally I would, but this isn't a topic that people are going to need a reminder for (judging by the fact that we've already had like five discussions opened on AI in the past several months). Gnomingstuff (talk) 16:17, 6 February 2025 (UTC)[reply]
  • Comment: I wasn't able to participate in the RFC, and my comment is not related to the expiration date, but I wanted to note that generative artificial intelligence raises a lot of concerns, such as being incredibly environmentally costly, and I suppose it could take jobs away from certain artists. For biological and medical illustrations, there are usually specialized artists who work with subject-matter experts on their illustrations, which generative AI does not do. Also, generative AI currently has trouble rendering the correct number of human fingers. Wafflefrites (talk) 18:19, 7 February 2025 (UTC)[reply]
    A computer running Stable Diffusion on a 4090 uses less electricity than a hair dryer. jp×g🗯️ 19:40, 5 March 2025 (UTC)[reply]
  • strong oppose. i don't believe an expiration date is needed. if consensus looks like it might change, which i really don't believe could be the case when it comes to representing people with images that don't actually feature them (or even look like them in a lot of cases, that laurence boccolini impersonation looks more like me than her, which is saying a lot when i look nothing like that), someone will likely start a discussion about it. as is, though, i can still name at least 2 ways in which representing someone in a biography (be the subject living or the opposite of that) would be a bad, bad idea on so many levels that i honestly wouldn't be surprised if it became the one exception to pillar 5 consarn (speak evil) (see evil) 20:40, 10 February 2025 (UTC)[reply]
