Wikipedia talk:Help Project
| This page is not for seeking help or making test edits. It is solely for discussing the Help Project, which maintains the system of help pages for using and editing Wikipedia. For common questions about Wikipedia, see Help:Contents. To make test edits, please use the Sandbox. |
| This project page does not require a rating on Wikipedia's content assessment scale. It is of interest to the following WikiProjects: |
"Adding image" page is inadequate
I wanted to send a link to a help page to a person who asked how to embed an image from Commons into a Wikipedia article, and I found Help:Adding image, but the screenshot is outdated, and the instructions are confusing even for me as an experienced editor. It hasn't been updated since 2021 and would benefit from revision. Dreamyshade (talk) 02:58, 6 January 2026 (UTC)
- Yeah the help guidance is a bit of a mess.
- The current flow seems to be:
- Help:Contents ...
- Help:Menu ...
- Help:Menu/Images and media ...
- Help:Pictures (which is currently piped as How to insert and use pictures in Wikipedia articles in the previous step) ...
- From there, in the ℹ️ you can choose from:
- Help:Introduction to images with VisualEditor/3 (Inserting images) should probably say that you can insert the filename from Commons (without the File:) instead of general searching.
- Help:Pictures is a wikimarkup tutorial. It should be renamed to Help:Images and deal with VisualEditor also. The last two links in my bulleted flow should be simplified summaries of Help:Images.
- At some point we need to explain about fair use and why those images are not on Commons.
- I am not sure where Help:Adding image fits in and nothing from the Wikipedia: or Help: namespaces links to it. It should be deleted, or renamed and worked into the upload flow. Commander Keane (talk) 04:27, 6 January 2026 (UTC)
- Thanks for digging into this! Looks like Help:Adding image could be simply redirected to Help:Pictures. It also has an outdated sibling page, Help:Starting editing, that could be redirected to Help:Getting started or similar. Dreamyshade (talk) 04:42, 6 January 2026 (UTC)
- I marked Help:Starting editing and Help:Adding image as historical. They were made by Hannibal for outreach:Account Creation Improvement Project testing. Commander Keane (talk) 05:21, 6 January 2026 (UTC)
- Thanks, that sounds reasonable to me! Dreamyshade (talk) 03:02, 7 January 2026 (UTC)
Template:WikiEditor toolbar tab (edit | talk | history | links | watch | logs)
I've created a new template to better replicate WikiEditor toolbar on help pages. I switched several uses of Template:Menu icon to the new template.
Compare before: Advanced, and after: [new template rendering]. —andrybak (talk) 18:29, 31 January 2026 (UTC)
- Nicely done. — The Transhumanist 11:13, 24 February 2026 (UTC)
You are invited to join the discussion at Help talk:Introduction § Centralization of discussion of Help:Introduction. —andrybak (talk) 16:46, 1 February 2026 (UTC)
Planning help for AI workflows
(Speaking of LLMs/chatbots here.)
AI is advancing on the scene quickly, and there are no help pages (that I know of) on how to use AI to improve Wikipedia.
This is what the help system is for: to provide how-to instructions for editing and developing Wikipedia. AI is here, and no doubt editors are using AI apps for all kinds of things. So, it is probably a good idea that we have help pages to teach effective workflows and best practices.
I suggest we brainstorm a bunch of potential uses for AI on Wikipedia, and then choose one to write up a draft help page for, in a sandbox or on a subpage, as a start. This will give us a feel for what we're dealing with here. Also, that will provide plenty of time to gather feedback before the first such page goes live, if at all. Here are some ideas to get this brainstorming session started:
The help system has become far too verbose and needs simplifying. One potential use of AI apps would be to have one analyze a help page and suggest improvements, and then implement the suggestions that the editor specifically approves of. Draft of "Help:Editing help pages assisted by AI"
Another is to ask an AI to identify missing links from a list article. Draft of "Help:Editing lists assisted by AI"
Another is to have the AI identify gaps in an article, and then assist you in filling those gaps. Draft of "Help:Editing articles assisted by AI"
I look forward to your ideas. Sincerely, — The Transhumanist 11:45, 19 February 2026 (UTC)
- Thanks for the invite, User talk:The Transhumanist. I’m interested. SmokeyJoe (talk) 13:03, 19 February 2026 (UTC)
- Another potential use of LLMs would be to create a list of things that might be spelling/grammar/homophone/etc errors in articles based on the surrounding context. Similarly perhaps links to the wrong article. Obviously a human would need to verify these actually are errors before any changes are made, but it's hard to fix things you don't know need fixing. Thryduulf (talk) 13:48, 19 February 2026 (UTC)
- @Thryduulf: So, ask the AI "identify any links to the wrong article in the following article"? That seems worth testing. And "Identify all spelling, grammar, and homophone errors in the following article". After the instruction, you paste in the article, and press enter. Then you copy and paste its answer into the prompt box and delete the "corrections" that you don't want it to make, and tell it to "Implement these corrections on the article, and show me the revised article in MediaWiki wiki text in a code block." You then paste it into a Wikipedia edit window, and click "Show preview" and "Show changes", and touch it up, before clicking "Publish changes". — The Transhumanist 14:42, 19 February 2026 (UTC)
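For illustration, a minimal sketch of that round trip in Python. The ask_llm() wrapper is hypothetical, standing in for whatever chatbot the editor pastes prompts into, and the script assumes a locally saved copy of the article; nothing here edits Wikipedia directly.

```python
# Sketch of the review-then-apply round trip described above. ask_llm()
# is a hypothetical stand-in for the editor's chatbot of choice; replace
# the stub with a real prompt/response step.

def ask_llm(prompt: str) -> str:
    """Placeholder: in practice, paste `prompt` into a chatbot and return its reply."""
    return "(chatbot reply goes here)"

article_wikitext = open("article.txt", encoding="utf-8").read()

# Step 1: ask only for a list of suspected errors; nothing is changed yet.
findings = ask_llm(
    "Identify all spelling, grammar, and homophone errors, and any links "
    "that appear to point to the wrong article, in the following wikitext. "
    "List them; do not rewrite anything.\n\n" + article_wikitext
)

# Step 2: the human deletes unwanted "corrections" from the findings...
approved = findings  # edited by hand in practice

# Step 3: ...and asks for a revised version to review in the edit window
# with "Show preview" and "Show changes" before publishing.
revised = ask_llm(
    "Implement ONLY these corrections on the article below, and show the "
    "revised article in MediaWiki wikitext in a code block:\n\n"
    + approved + "\n\nARTICLE:\n" + article_wikitext
)
print(revised)
```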
- I don't know how the specifics would work, but something like that. Thryduulf (talk) 14:46, 19 February 2026 (UTC)
- Using LLMs as detectors flagging issues for human review seems fine to me; using them as content generators would definitely not be. In particular, using them as citation checkers to flag possibly-bogus cites, or to flag unreferenced statements that need cites. I'd be happy with using them to edit talk pages to add articles to Wikiprojects, though, as errors in that process would be harmless and self-correcting. — The Anome (talk) 14:25, 19 February 2026 (UTC)
- Thanks, The Transhumanist! I'm interested in this help page. Just let me know what I can contribute here. ROY is WAR Talk! 14:36, 19 February 2026 (UTC)
- @Royiswariii: We're in the brainstorming phase. You can contribute ideas. Like what kind of help page could we draft, that is, what AI tasks could we instruct Wikipedia editors how to do? After we've generated a bunch of ideas, we'll pick one and write a draft for it. — The Transhumanist 14:44, 19 February 2026 (UTC)
- @The Transhumanist: I'll read all conversation later. I'm busy right now. ROY is WAR Talk! 02:31, 20 February 2026 (UTC)
- Thanks for the courtesy ping! Regarding Wikipedia:WikiProject AI Tools, I will note that this project's scope explicitly excludes content generation, which echoes what The Anome said. I agree that we shouldn't encourage using AI for content generation, which such help pages will implicitly do. Chaotic Enby (talk · contribs) 14:38, 19 February 2026 (UTC)
- Actually, these help pages seem geared towards using general-purpose AI models on Wikipedia, rather than developing AI tools for Wikipedia specifically, so I'm not sure how necessary the AIT ping was. Chaotic Enby (talk · contribs) 14:39, 19 February 2026 (UTC)
- I should also mention that I'm looking at machine learning as a way to validate Wikidata coordinates for articles, so bad coordinates can be flagged for human review. — The Anome (talk) 14:40, 19 February 2026 (UTC)
- Yes, that definitely looks like a useful application! Chaotic Enby (talk · contribs) 14:41, 19 February 2026 (UTC)
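A toy sketch of how such a coordinate check might flag items for review; the reference points and distance threshold here are invented for illustration, and a real validator would use proper country geometries or a trained model.

```python
# Sketch: flag coordinates that fall implausibly far from a reference
# point for the item's stated country. The reference table and threshold
# are illustrative assumptions, not a real validation model.
from math import radians, sin, cos, asin, sqrt

COUNTRY_CENTROIDS = {  # hypothetical sample data (lat, lon)
    "France": (46.2, 2.2),
    "Yemen": (15.6, 48.0),
}

def km_between(a, b):
    """Great-circle distance via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def flag_for_review(item_coord, country, max_km=2000):
    """Return True if the coordinate deserves a human look."""
    centroid = COUNTRY_CENTROIDS.get(country)
    return centroid is None or km_between(item_coord, centroid) > max_km

print(flag_for_review((48.85, 2.35), "France"))   # False: plausible
print(flag_for_review((-48.85, 2.35), "France"))  # True: flipped latitude
```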
- AI is also being used for responses. I've encountered so many cases at AfD of editors, especially new editors, using AI to defend pending deletions. ROY is WAR Talk! 14:44, 19 February 2026 (UTC)
- Also adding a link to Wikipedia:Village pump (policy)/Replace NEWLLM's close, which found "consensus for better guidelines along the lines of and/or in the spirit of this draft" (referring to User:Qcne/LLMGuideline). This should be taken into account as a starting point for any such help pages. Chaotic Enby (talk · contribs) 14:42, 19 February 2026 (UTC)
- Well, right now, we're in brainstorming mode. Content generation is one potential set of tasks AI can do. But there are lots of others. Various analysis tasks. Providing suggestions on improving a page. And there are many types of pages and data to work on. Keep the ideas coming! Criteria on the help page itself are good: we'll definitely need to draft any page so that it conforms to existing policies and guidelines. Thank you for pointing that out. We'll keep an eye on that guideline draft. Good catch. — The Transhumanist 14:57, 19 February 2026 (UTC)
- About the help system becoming too verbose: what about moving unneeded pages to a new namespace called Archive? Wakelamp (talk) d[@-@]b 10:07, 20 February 2026 (UTC)
- We already have {{historical}} to tag these pages. Chaotic Enby (talk · contribs) 10:31, 20 February 2026 (UTC)
- We need to assume Wikipedia will exist; fixing help and processes will help.
- Suggest we do a few things for AI and for help on Wikipedia:
- Move this discussion to a series of pages within this project. We are skilled at creating and getting consensus on articles/talk. It sucks that all these ideas won't end up somewhere.
- Agree on our scope (maybe not mobile, and maybe only English), and how we will measure our success. It would be nice to do everything, but it would diffuse our limited resources.
- Create a very high-level hierarchy of Wikipedia processes, e.g. article creation, article quality check, ...
- List and then question our constraints: guidelines, MediaWiki software, WMF strategy.
- Then use a standard problem-solving process to find data to prioritise it, work out whether we can mistake-proof it, make it more obvious, or make it easier.
- Wakelamp (talk) d[@-@]b 12:55, 23 February 2026 (UTC)
P.S.: Here are a couple of tables summarizing the ideas contributed so far. They were AI-generated initially, with subsequent human edits and additions:
| Idea | Contributor | Details |
|---|---|---|
| Analyze help pages and implement approved suggestions | The Transhumanist | Simplify verbose help pages by having AI analyze and apply editor-approved changes. Draft: Draft of "Help:Editing help pages assisted by AI" |
| Identify missing links from list articles | The Transhumanist | AI detects missing wikilinks in lists. Draft: Draft of "Help:Editing lists assisted by AI" |
| Identify gaps in articles and assist filling them | The Transhumanist | AI finds content gaps and helps editors fill them. Draft: Draft of "Help:Editing articles assisted by AI" |
| Identify spelling/grammar/homophone errors | Thryduulf | AI flags potential errors based on context for human verification. |
| Identify wrong article links | Thryduulf | AI flags links pointing to potentially incorrect articles. |
| Flag possibly-bogus citations | The Anome | AI detects questionable citations for human review. |
| Flag unreferenced statements needing cites | The Anome | AI identifies statements without sources. |
| Edit talk pages to add articles to WikiProjects | The Anome | Automate harmless, self-correcting talk page additions. |
| Validate Wikidata coordinates | The Anome | Machine learning flags bad coordinates for review. |
| Observed AIs defending articles in AfD | Royiswariii | AI assists new editors in arguing against deletions at Articles for deletion. (Not appropriate.) |
| Providing suggestions on improving a page | The Transhumanist | Various analysis tasks including suggestions for page improvements. |
| Suggested Help Page Focus | Contributor | Details |
|---|---|---|
| Editing help pages assisted by AI | The Transhumanist | How to use AI to simplify and improve verbose help pages. |
| Editing lists assisted by AI | The Transhumanist | Workflows for AI-assisted list editing, e.g., missing links. |
| Editing articles assisted by AI | The Transhumanist | Using AI to identify gaps and assist content filling. |
| Identifying errors (spelling/grammar/links) | Thryduulf | Detector workflows for article issues. |
| Assist in finding sources | Kowal2701 | Ask it to search for sources on a topic |
Tables initially generated using perplexity.ai. Feel free to edit and expand them. — The Transhumanist 01:48, 20 February 2026 (UTC)
- Thank you. Another thought; our guidelines should state that using AI to create comments in talk page discussions should be explicitly disallowed unless it is flagged as AI-generated or AI-assisted. Your comment above would be fine because you have flagged it as such. Having talk page discussions with obvious bots is exhausting. — The Anome (talk) 15:40, 19 February 2026 (UTC)
- Agree. Also, that AI table is less than fully accurate, which is itself illustrative of the broader point – for instance, @Royiswariii (from what I understand) didn't suggest assisting new editors as a potential use case that should be encouraged, but as an example of why using AI to write comments shouldn't be encouraged. Chaotic Enby (talk · contribs) 16:02, 19 February 2026 (UTC)
- I added "Assist in finding sources", was unsure if that was included in the previous entries. Any help page on positive AI-use needs to explicitly say not to outsource your thinking to an LLM, and to treat all output critically (maybe w a link to critical thinking). I take issue w
Simplify verbose help pages by having AI analyze and apply editor-approved changes.
, the AI shouldn't be applying any changes itself. AlsoUsing AI to identify gaps and assist content filling.
raises alarms tbh. I like the general idea of using AI to flag issues Kowal2701 (talk, contribs) 16:55, 19 February 2026 (UTC)- Typically, the AI would apply changes off-Wikipedia, by generating a version based on a copy/paste. For that to become the article, an editor would have to copy/paste it over the current version of the article, which is allowed, as long as the editor proofreads and corrects/refines the AI version before doing so, or before clicking "Publish changes". See Wikipedia:Artificial intelligence#What is Wikipedia's AI policy? for clarification. — The Transhumanist 17:30, 19 February 2026 (UTC) @Kowal2701:
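One way to keep the human step honest in that copy/paste workflow is to diff the AI's revision against the current wikitext locally before opening the edit window. A minimal sketch using only the Python standard library; the sample strings are placeholders for locally saved copies.

```python
# Sketch: review an AI-revised draft against the current wikitext before
# pasting anything into the edit window. Standard library only.
import difflib

current = """The theatre opened in 1887.
It's principal architect was J. Smith.
""".splitlines(keepends=True)

revised = """The theatre opened in 1887.
Its principal architect was J. Smith.
""".splitlines(keepends=True)

# Print a unified diff so every AI change can be approved or rejected
# by hand before "Publish changes" is ever clicked.
for line in difflib.unified_diff(current, revised,
                                 fromfile="current wikitext",
                                 tofile="AI revision"):
    print(line, end="")
```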
- I will again point to this close, which, while it hasn't been formalized as policy yet, found consensus for guidelines stricter than allowing any proofread, copy-pasted content. Chaotic Enby (talk · contribs) 17:33, 19 February 2026 (UTC)
- And once a new guideline is ratified, any help page instructing editors will have to conform to it. In the meantime, the exercise of drafting instructions necessitates experimentation with the tools, which could provide insight valuable to further guideline development and discussion. Such experimentation would be in the context of exploring and testing workflows, which in turn will expose us to how effective or ineffective they are, which is something we really need to know in order to craft a truly productive policy or guideline in the first place. We can't be expected to write a how-to page without knowing the "how to". That means we'll have to get our hands dirty and try some things out. Sounds fun to me.
Preferably, we should start with simple use cases, before moving on to revising colossal articles. However, we've just started the brainstorming phase, and a fundamental element of brainstorming is not judging the ideas generated until after the brainstorming is over. To do so is to kill the creative process. Critiquing comes later. — The Transhumanist 18:29, 19 February 2026 (UTC)
- "And once a new guideline is ratified, any help page instructing editors will have to conform to it." Wikipedia is based on consensus; it isn't a legalistic process: the fact that we haven't "ratified" a guideline doesn't mean that it isn't going against consensus. Drafting help pages before we have any guidelines is putting the cart before the horse: advice pages follow best practice codified in existing guidelines, not the other way around. And yes, we can absolutely critique whether it is worth investing time in drafting such help pages to begin with: brainstorming doesn't make you immune to criticism, and the sunk cost fallacy is a real risk here. Regarding experimentation with the tools, well. We've had more than two years of dealing with AI cleanup already, and many earlier proposals, so we're not getting there from a blank slate. I don't think it's necessary to encourage people to experiment even more to reach these insights, especially given the issues with some of the current "experimenting". Chaotic Enby (talk · contribs) 19:35, 19 February 2026 (UTC)
- That's a case of "which came first, the chicken or the egg." We do have policies and guidelines, which cover editor contributions regardless of the tools they are using. And before we had those guidelines, we had... editing. And editors providing advice to each other on how to edit. Guidelines grew out of those learning experiences. And as you said, it's not a legalistic process. There's no requirement to write a guideline before you write a tip, or a set of tips, let alone a draft of such. For example, there's a tip on how to make redirect links turn green, another on how to turn disambiguation links orange, and so on. No guidelines covered those tools or their workflows explicitly before people started using them and publishing that the tools, and methods on how to use them, existed. Tools appear on the scene before complete guidelines on using them do. Before there was a full array of guidelines on how you could and could not use WP:AWB, there was WP:AWB, and editors sharing "here's how I did this with this nifty tool". More guidelines on conduct followed soon after. We're seeing the same thing play out with AI apps. Besides, we're talking about potential drafts here. Like in a sandbox. It's a mode of thought, and a very useful one. It's a variation of "what if?", applied to workflows. We're here to explore what works and what doesn't, and draft what we learn. We can expect a lot of "aha" moments. — The Transhumanist 02:54, 20 February 2026 (UTC)
- "There's no requirement to write a guideline before you write a tip, or a set of tips, let alone a draft of such." But there is a requirement that these reflect broader consensus. Where your analogy breaks down is that, for something like turning redirect links green, there isn't any reason to oppose these tips existing. Here, the worry is that these tips will functionally write policy (by implicitly encouraging specific uses of AI) even though consensus isn't there. "Before there was a full array of guidelines on how you could use and could not use WP:AWB, there was WP:AWB, and editors sharing 'here's how I did this with this nifty tool'." While AWB is obviously a positive example, we can also consider the implications of a hypothetical, say, "here's how I'm hiding my IP thanks to this nifty tool", which would have made it harder to shape our policy on open proxies. "We're here to explore what works, and what doesn't, and drafting what we learn." That is a noble goal, but it is inherently skewed in practice, as this discussion is starting from the presupposition that we have to draft something (rather than potentially learning that these help pages might be counterproductive). Chaotic Enby (talk · contribs) 08:46, 20 February 2026 (UTC)
- And that's why you and many others interested in AI on Wikipedia were invited to this discussion, so that we don't charge forth blindly into the night. Keep in mind that, at this point, it's just a discussion of potential things to write about. As it progresses, that list will be whittled down. But, along the way, we will increase the awareness of all participants here as to how AI can be and is being applied at and on Wikipedia, and what is most urgent or controversial, so that we can decide what to write about, if anything. Keep in mind that anybody can write a help page—this discussion will help prevent wasted effort on pages doomed to deletion. Sincerely, — The Transhumanist 09:08, 23 February 2026 (UTC)
- Technically that is currently allowed; in practice, enshrining that in guidance would never gain consensus. Kowal2701 (talk, contribs) 17:38, 19 February 2026 (UTC)
- And attendant help pages. Hence, this discussion. (See my comments above.) Sincerely, — The Transhumanist 09:08, 23 February 2026 (UTC)
- @The Transhumanist, I've had to reply to the Anome's comment here because you didn't sign your comment where you pasted the tables, which messes up the usual tools for managing discussions. Could you also add the names and versions of the models you used to generate the tables, per LLMDISCLOSE? Disclosure itself is required, so thank you for being upfront about it, but if you want to help others use AI responsibly, I'd strongly encourage demonstrating more transparency by naming the specific models. Over time, this practice can also help us evaluate and compare the different models for their trustworthiness in various tasks. ClaudineChionh (she/her · talk · email · global) 22:16, 19 February 2026 (UTC)
- Thank you for the feedback. It helps a lot. Signing posts. Missed that one. Got it. Disclosing the model: "perplexity.ai". — The Transhumanist 02:54, 20 February 2026 (UTC)
- There may be some complexities involved here. At least some client-facing chatbots keep LLMs in the background, behind the scenes, and don't necessarily inform their users as to which one they are using to respond to any particular prompt. Paid accounts make up only about 1% of those products' users in the freemium monetization model of the software market; while paid users may choose which LLM models they use, the other 99% cannot (and they number in the millions). Sincerely, — The Transhumanist 09:08, 23 February 2026 (UTC)
- @ScottishFinnishRadish may be interested in a discussion like this. Izno (talk) 17:53, 19 February 2026 (UTC)
- Any such page would need to be very clear on what can and cannot be used for AI. I (and I suspect most of the community) would oppose anything that implies it is acceptable to use an LLM to generate content for Wikipedia in mainspace or talk pages; we have enough trouble with the lack of consensus in the community regarding what our attitude towards LLMs is to deal with problematic LLM-assisted edits, and a page that to the uninitiated looks like "use LLMs to improve these articles!" would, despite best intentions, likely become rapidly counterproductive.
- Referencing your list below, I would strongly oppose using LLMs to develop drafts even in sandboxes, merging articles, proofreading them (we know that even translations by LLMs and/or Grammarly introduce a large number of defects), modifying content in any automated manner, and analysis/summaries of articles that are to be solely relied upon by a human without further review to make edits.
- On the other hand, using AI to flag things that humans could further review is a potential use case, and in my opinion, a necessary use-case if we stand any chance of fighting the slop that comes in. I am currently working on a tone/source verification detector using LLMs to detect AI-generated edits. If such a tool were to reach some degree of maturity we could turn it into an edit filter which could log all the edits it flags and enable much less time-consuming human review of possible AI edits. We already have some use of machine-learning technology in the form of Cluebot and this can and should be further extended to catch the new types of spam in this new era. Fermiboson (talk) 20:11, 19 February 2026 (UTC)
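A toy sketch of that detector-not-generator pattern; the phrase list is an invented stand-in for a real trained tone model, and flagged edits are only logged for human review, never changed.

```python
# Sketch: score an edit for AI-typical phrasing and queue it for human
# review. The telltale list and threshold are illustrative assumptions,
# not a real classifier.
AI_TELLTALES = [
    "stands as a testament",
    "rich cultural heritage",
    "in the ever-evolving landscape",
    "plays a vital role",
]

def flag_edit(diff_text: str, threshold: int = 2) -> bool:
    """Return True if the edit should be queued for human review."""
    hits = sum(phrase in diff_text.lower() for phrase in AI_TELLTALES)
    return hits >= threshold

review_queue = []

def on_edit(title: str, diff_text: str) -> None:
    # Log only; a human decides what, if anything, to do about it.
    if flag_edit(diff_text):
        review_queue.append(title)

on_edit("Example article",
        "The city stands as a testament to its rich cultural heritage.")
print(review_queue)  # ['Example article']
```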
- We are rapidly approaching the threshold where the AI is better on most tasks than even the very good humans. I now use AI for my job on a regular basis, and it is definitely better than me in many aspects already (for the avoidance of doubt, I am very senior, in both senses, and was nicknamed by my colleagues "Commander Data" during one of my job stints). Ignoring this fact, now obvious to anyone with access to a good LLM (free models don't count at all!), does us no good, at best postponing the inevitable by a year or so. See an eloquent (although definitely TLDR) explanation at https://archive.ph/2026.02.11-185837/https://shumer.dev/something-big-is-happening . While not mine, this text precisely summarizes my experience: the top models suddenly turned out to be very useful about half a year ago. Викидим (talk) 20:50, 19 February 2026 (UTC)
- For some topics AI overviews are already better than Wikipedia articles (some time ago I compared the quality of the Economy of Yemen article on Wikipedia and on a recently launched competitor encyclopedia - Wikipedia:Village_pump_(miscellaneous)/Archive_86#Article_quality_-_wake_up_call? and it's not an isolated case).
- However there is no chance that the community would agree to allow LLM-generated content, especially for new editors. Being cynical, one can also say that it doesn't matter much since LLM usage (assuming minimal competence and access to the latest models) can't be reliably detected.
- So I'd suggest to focus on non-generative use cases. Alaexis¿question? 22:11, 19 February 2026 (UTC)
- Going through the LLM deliberations here (not just on this page), I am surprised how the community focuses on the tools themselves, not the problems. AI poses just three issues, IMHO: (1) the sheer volume of text that needs to be eventually proofread by humans; (2) a wide disparity in quality between the tools: unless one uses the top-end (hundreds of USD per month), brand-new (months old) models, results would be inferior, and at the current point in time, frequently unacceptable; and (3) source selection that for WP:NPV reasons still needs human input. The crossover (AI better than an inexperienced person), in my opinion, happened last fall. Addressing #1 requires per-person limits on contributions (different for different editors, similar to credit-card limits); #2 requires mandating vetted tools (I can vouch for Google Gemini Pro 3.0 in Ultra mode). AI is rapidly getting better on #3, so I think that mandating inline citations might be enough, as LLMs will match non-professionals in source selection in a few months. Once we accept #1-#3 as true issues, procedures become all-important; this is the WP:MOS of a new, scary, but inescapable era. We will still need more eyeballs, but this might be solved by a more prominent "I found an error" button, à la Grokipedia where "Suggest edit" is one click away. Викидим (talk) 23:16, 19 February 2026 (UTC)
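For concreteness, a toy sketch of what such a per-person contribution limit might look like; the trust levels and byte budgets are invented for illustration only.

```python
# Sketch of the "per-person contribution limit" idea: a daily byte budget
# per editor, varied by trust level, like a credit-card limit. The numbers
# are illustrative assumptions.
from collections import defaultdict

DAILY_LIMIT = {"new": 20_000, "confirmed": 100_000, "extended": 500_000}
used_today = defaultdict(int)  # editor -> bytes added so far today

def may_save(editor: str, trust: str, bytes_added: int) -> bool:
    """Allow the save only while the editor is under their daily budget."""
    if used_today[editor] + bytes_added > DAILY_LIMIT[trust]:
        return False  # over budget: hold for review instead of saving
    used_today[editor] += bytes_added
    return True

print(may_save("NewEditor1", "new", 15_000))  # True
print(may_save("NewEditor1", "new", 8_000))   # False: 23,000 > 20,000
```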
- I tried to float this approach in general and rate limits in particular (great minds and all that) and the reception was decidedly mixed. Alaexis¿question? 23:24, 19 February 2026 (UTC)
- One specific problem I have with rate limits is that a long-standing understanding of Wikipedia researchers is that highly productive editors will often start with a burst of intense activity (memorably: "Wikipedians are born, not made", 2009). I suspect that anything that makes it less likely for those editors to start editing in the way they want to edit will be particularly punishing to the long-term health of the project. Instead of rate limits, I favor more intelligent scrutiny on new editor's contributions (which LLMs are well-positioned to help us do by focusing editor attention on the most problematic cases first). Suriname0 (talk) 17:52, 23 February 2026 (UTC)
- @Викидим: "Going through the LLM deliberations here (not just on this page), I am surprised how the community focuses on tools themselves, not the problems." This focus is due in part to the tool itself being the problem: AIs are generators of synthetic data, which, when used to train AIs, may lead to model collapse. We are at risk of a similar problem when humans are trained on synthetic data, that is, on AI's hallucinations and other errors, which can lead to a cascade of maladaptive behaviors, including misperceptions, erroneous beliefs, ill-informed decisions, mistakes, and operational failures, resulting in conflicts, accidents, or worse. Considering that Wikipedia is part of the AI ecosystem, if that ecosystem collapses, so would Wikipedia—the whole thing is a positive feedback loop, in which errors introduced into Wikipedia by AI feed those AIs via training, which in turn spew even worse garbage into Wikipedia, in a repetitive cycle. It appears to be an existential threat to Wikipedia and its mission. However, in the wider scope of things, it's but a ripple, and AI is being positioned to transcend or eclipse everything, depending how you look at it, making Wikipedia's fate insignificant by comparison. The leapfrogging of Wikipedia by superior tech may happen in as few as 18 months from now, but it will not likely take more than a few years. For Wikipedia to avoid being leapfrogged into obsolescence, it will have to leapfrog itself, which will require a much different conversation than the one we are having here. — The Transhumanist 13:35, 23 February 2026 (UTC)
- I don't see how putting more AI into Wikipedia will reduce the effects of AI on Wikipedia? Also, I feel like modern AI companies are probably trying to find other forms of real-world data to train their newer models; Wikipedia has had issues with AI probably ever since 2024. Emily * Emi-Is-Annoyed (message me!) 13:52, 23 February 2026 (UTC)
- That's not the goal here. It's unlikely we can stem the tide of editor-assisted AI input into Wikipedia. But we may be able to mitigate the problems caused by it, by providing proper training on what tools to use and how to use them. See the AI Tools Project, and AI cleanup, for examples. — The Transhumanist 14:03, 23 February 2026 (UTC)
- The term 'AI workflow' kinda suggests that it involves editing using AI on Wikipedia, but is this really going to fix things? I don't see a world where this can really help combat AI, aside from maybe automating some counter-AI detection? Emily * Emi-Is-Annoyed (message me!) 14:07, 23 February 2026 (UTC)
- There are ways to contribute to Wikipedia using AI that are beneficial to the project, and there are ways that are harmful to the project. We cannot expect new editors to know which methods are beneficial and which are harmful if we don't give them guidance. Not everybody will follow the guidance of course, but if some editors who would unknowingly contribute in a harmful way are instead directed to the beneficial route then that's a net win for the project. Thryduulf (talk) 14:42, 23 February 2026 (UTC)
- "counter-AI detection"—good idea. I'll add that to the brainstorming list! — The Transhumanist 22:40, 23 February 2026 (UTC)
- To clarify, the AI Tools project is about designing tools (such as Wikipedia scripts) using AI for non-generative purposes, it isn't for training users on how to contribute using LLMs. Chaotic Enby (talk · contribs) 14:17, 23 February 2026 (UTC)
- In that case, I did have a couple of ideas that could specifically help new users and people who aren't as fluent in English edit things.
- Maybe a sentiment analysis model stored in-browser could be an easy way of helping people keep a WP:NPOV tone? (A sketch follows below.)
- Also, maybe an existing AI detection model could be fine-tuned on specifically all the Wikipedia articles that have been cleaned of AI in the past? That way it might make labeling an article as possibly being AI or not more reliable. Emily * Emi-Is-Annoyed (message me!) 15:22, 23 February 2026 (UTC)
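A rough sketch of the first idea, assuming the Hugging Face transformers package; the default sentiment model here is only a stand-in for a purpose-trained neutral-tone classifier, and it runs locally rather than in-browser.

```python
# Sketch: flag strongly loaded sentences for the editor before saving.
# Assumes the transformers package; the default sentiment model is a
# stand-in for a purpose-trained NPOV/tone classifier, and the cutoff
# is an arbitrary heuristic.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def loaded_sentences(text: str, cutoff: float = 0.99):
    """Return sentences whose sentiment score is extreme enough to review."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    results = classifier(sentences)
    return [s for s, r in zip(sentences, results) if r["score"] >= cutoff]

draft = ("The founder was a visionary genius. "
         "The company was registered in 1998.")
for sentence in loaded_sentences(draft):
    print("Review tone:", sentence)
```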
- Regarding that first one, you'll be happy to know that WMF developers are working on exactly that! It's called Tone Check and is a good example of a positive use case of LLMs on Wikipedia! Chaotic Enby (talk · contribs) 18:32, 23 February 2026 (UTC)
- Awesome! I'd love to use something like this. Emily * Emi-Is-Annoyed (message me!) 06:50, 24 February 2026 (UTC)
- Regarding that first one, you'll be happy to know that WMF developers are working on exactly that! It's called Tone Check and is a good example of a positive use case of LLMs on Wikipedia! Chaotic Enby (talk · contribs) 18:32, 23 February 2026 (UTC)
- "It's unlikely we can stem the tide of editor-assisted AI input into Wikipedia": words that conveniently only ever seem to be said by people who have no interest in trying. Athanelar (talk) 14:59, 23 February 2026 (UTC)
I'd argue strongly against allowing AI-generated drafts that can be cut-and-pasted into articles. Naive and/or newbie editors and bad-faith actors will just select all, copy, then paste, because anything else is simply too much effort. Regarding AI summaries having superhuman ability, yes, that is often the case, and it's amazing. But at the same time it is also the case that the AI will generate confident nonsense, or correct text intermingled with (sometimes horrifically bad) errors. We can't afford the risk. Not to mention that as one of the major sources of training data for LLMs, shoving LLM content into Wikipedia en masse risks model collapse in the LLM ecosystem. — The Anome (talk) 19:51, 22 February 2026 (UTC)
Getting back to brainstorming...
Getting back to brainstorming, some possible use cases for AI-assisted editing include:
- Working on a page in a sandbox
- Developing userscripts
- Writing userscripts
- Adding comments to userscripts to make them intelligible
- Writing userscript instructions
- Developing templates
- Writing template instructions
- Comparing lists (sets) (see the sketch after this list)
- Removing duplicates
- Maintaining navigation footers
- Adding missing key topics
- Restructuring
- Removing links that don't belong
- Reducing bloat (removing stubs and obscure topics)
- Improving WikiProject pages
- Merging articles
- Proofreading articles
- Proofreading AI drafts
- Error checking pages
- Error checking its own work (in a new session, to clear contextual memory and the bias that creates)
- Working on drafts in draft space
- Screening articles for AI problems, such as hallucinations and overconfident prose
- Can you think of any others? — The Transhumanist 19:06, 19 February 2026 (UTC)
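As an illustration of the "Comparing lists (sets)" and "Removing duplicates" items above, a minimal sketch with made-up article titles:

```python
# Sketch: compare two lists of article titles and report differences,
# keeping the original order when de-duplicating. Titles are invented.
footer = ["Paris", "Lyon", "Marseille", "Lyon"]
category = ["Paris", "Lyon", "Toulouse"]

deduped = list(dict.fromkeys(footer))          # order-preserving de-dupe
missing_from_footer = sorted(set(category) - set(footer))
not_in_category = sorted(set(footer) - set(category))

print(deduped)               # ['Paris', 'Lyon', 'Marseille']
print(missing_from_footer)   # ['Toulouse']
print(not_in_category)       # ['Marseille']
```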
- Formatting references, including creation of {{cite}} templates (see the sketch below)
- Generating suggestions for WikiProject templates
- Generating suggestions for infoboxes
- Translation from non-English Wikipedias. I can't really classify this as "content generation". We consistently, and IMHO correctly, assume (for the purposes of WP:COPYVIO) that the translation is not a creation of new text. Why are we then shy to acknowledge the same if an LLM was involved?
The last two will definitely generate controversy, so perhaps they should not be included in the list.
- Викидим (talk) 20:21, 19 February 2026 (UTC)
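To illustrate the reference-formatting item above, a small sketch that assembles a {{cite web}} string from metadata; the metadata values are made up, and a real tool would extract them from the cited page itself.

```python
# Sketch: build a {{cite web}} template string from a metadata dict.
# The field names follow the template's documented parameters; the
# sample values are invented.
def cite_web(meta: dict) -> str:
    """Build a {{cite web}} string from a metadata dict."""
    order = ["url", "title", "last", "first", "date",
             "website", "access-date"]
    parts = [f"{k}={meta[k]}" for k in order if meta.get(k)]
    return "{{cite web |" + " |".join(parts) + "}}"

print(cite_web({
    "url": "https://example.org/report",
    "title": "Example report",
    "date": "2026-01-15",
    "website": "Example.org",
    "access-date": "2026-02-19",
}))
# {{cite web |url=https://example.org/report |title=Example report ...}}
```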
- Regarding that last one, we have the recently enacted guideline Wikipedia:LLM-assisted translation, so I could see us writing a help page using that guideline as a starting point. We should be careful to not give the impression that we are encouraging editors to use LLMs when translating, but just providing help on how to proceed if they choose to do so. Chaotic Enby (talk · contribs) 20:30, 19 February 2026 (UTC)
- I oppose creating help pages that encourage editors to use AI. The strength of Wikipedia, and how we will compete with other knowledge websites such as Google's AI summaries, is that our content is written and curated by humans. AI is good at certain things (image generation, for example), and OK at certain things (certain use cases in programming, for example), and bad at certain things (writing Wikipedia articles, where the amount of effort needed to double check the AI by reading and summarizing sources is equal to or greater the amount of effort of just writing the article from scratch). –Novem Linguae (talk) 20:55, 19 February 2026 (UTC)
- Btw idk if people know but the German Wikipedia largely banned LLM-use a few days ago, stressing "written by humans for humans". The process was very German Kowal2701 (talk, contribs) 21:27, 19 February 2026 (UTC)
- They've made an exception for LLM-powered tools (see the third bullet point). Alaexis¿question? 22:14, 19 February 2026 (UTC)
- As long as translation is allowed (and it is), I am OK there (my German is quite weak; AI is definitely doing a better job at it). Generally, I think that the Wikipedias that accept a no-LLM rule will quickly lose to Grokipedia and the similar ones that will follow, and it would be remarkably foolish for us to join these guys on the way to the rocky bottom. I had just used Wikipedia and Grokipedia alongside each other to sort out the links to the ambiguous term encyclopedist and discovered to my dismay that for obscure Encyclopedists (lesser than d'Alembert) Grokipedia is already better than us as a casual reference: the style is uniform, the ordering of facts in the lead by importance is IMHO way better, etc. It seems that the real situation with LLMs is way different from the one most people in German Wikipedia were led to believe: cutting-edge AI actually does do better than a skilled human on many writing tasks. See the link in my message in the previous section for the actual experience of a professional who does a quite complex job creating new formal texts that have to be just right (software), as opposed to our IMHO easier task of summarizing existing knowledge with occasional imperfections being OK. Викидим (talk) 22:34, 19 February 2026 (UTC)
- Have you read Grokipedia recently? Much of its output is confident error-riddled bullshit. Not to mention the clear baked-in right-wing political bias; we bend over backwards to avoid bias to either the right or left and Wikipedia is better for it. Grokipedia is essentially a propaganda device at this point. — The Anome (talk) 19:54, 22 February 2026 (UTC)
- (1) Yes, as I have stated repeatedly on this page, I actually used Grokipedia recently, and found it "useful". (2) "Political bias" (mostly) does not apply when discussing the 18th-century Encyclopedists. Most articles are non-political by nature. (3) I agree that the quality of fully-developed Wikipedia articles is much better than an average article in Grokipedia. The problem is, not many of our articles are fully-developed. Викидим (talk) 20:14, 22 February 2026 (UTC)
- If LLMs get to the point where they can write encyclopedia articles way better than humans, then we can have another discussion and consensus can change. But right now, they're god awful at it, and we need PAGs which help us address disruption. It may be that an 'LLM ban' only lasts a few years, who knows Kowal2701 (talk, contribs) 22:52, 19 February 2026 (UTC)
- Why do you think that a good LLM is god-awful at writing an encyclopedia? Let's take my Encyclopedists example, I had randomly chosen Pierre Tarin (no special selection, just mid-list with name completely unrecognizable to me). Compare our entry Pierre Tarin to https://grokipedia.com/page/pierre_tarin , and AI had clearly won. Why argue that we can do better when in many cases we clearly don't? Викидим (talk) 23:36, 19 February 2026 (UTC)
- because I see LLM-written articles every day? I don't have time to analyse that article (a fair few of its sources seem dodgy), but presenting Grokipedia as something we should aim to become, I can't take seriously. Would result in the destruction of this community, effectively everything we've built, and loss of product differentiation. Once a community loses its ethos it disintegrates, people aren’t going to want to spend their spare time babysitting bots Kowal2701 (talk, contribs) 00:13, 20 February 2026 (UTC)
- You may have misunderstood me. I did not hold the Grokipedia article as a shining example. Neither did I suggest to abolish the human factor ("community", "product differentiation"), or tried to disparage our article (it is a very decent stub with no problems). However, from the point of view of an unbiased reader, AI here wins - and by a wide margin.
- Now, if my shiny AI tool allows me to write texts of pretty much the same quality as I would have done without it, but 10-50 times faster, why prohibit its use by everyone (me included) on the basis that some newbie editor (not me!) can misuse their years-old clunky software (not mine!) and produce total junk? The sane approach IMHO would be to restrict this newbie, or their tools, or their methods, not me. Викидим (talk) 00:32, 20 February 2026 (UTC)
- Apologies. I’ve seen some support for a user right? Kowal2701 (talk, contribs) 01:01, 20 February 2026 (UTC)
- Yes, I am all for vetting the editors, and LLM tools, and rate-limiting. On top of it, we IMHO need approved methods, the latter, as I understand, is exactly what is being addressed on this page. Part of the process should be mandating detailed disclosure ("this is what had been prompted"), this has a side benefit of slowing the process a bit. Викидим (talk) 01:17, 20 February 2026 (UTC)
- If you're saying AI tools write articles at least as well as humans, and you're saying this how we should write articles on Wikipedia, then you are advocating for Wikipedia to be something that is not meaningfully different from Grokipedia. Thebiguglyalien (talk) 🛸 01:17, 20 February 2026 (UTC)
- It does not feel to me like that. The process is pretty much the same as when I write an article without AI assistance, roughly:
- I search for the sources (now with the help of AI)
- Collect the texts of the sources and read them
- Decide on the source (in some cases, more than one) that will define the subject (and, ideally, the plan) of the article
- Make a first draft summarizing this source
- Decide on another source, add section(s) based on it
- Repeat #5
- Check the source references (I make mistakes there more often than I would like to)
- With AI, the process feels the same, with #4 and #5 being much faster at the expense of somewhat monotonous writing (but with fewer typos). At this point, I do not see a way to completely outsource article creation to AI (Grokipedia apparently takes our text as ground truth and accepts human input, thus circumventing the obstacles). Then again, in a few years the situation might change (only half a year ago I would not have suggested using AI for piece-wise content creation; now it is ready IMHO). Викидим (talk) 01:38, 20 February 2026 (UTC)
- I feel that I need to stress that this process will never yield a WP:good article that requires immersion in the sources as a whole with the WP:TOC and use of sources scheduled in advance. This sounds to me too much like a job to be fun (for the avoidance of doubt, I have fun at my real job). My kudos go to the great editors who create GAs/FAs here. Викидим (talk) 01:49, 20 February 2026 (UTC)
- Finally, I would quote Matt Shumer: "The technology works ... the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours ... the people who will come out of this best are the ones who start engaging now." Викидим (talk) 01:58, 20 February 2026 (UTC)
- I looked at the Grokipedia article. It's the usual mess of editorializing, promotional content, rampant AI-isms, and a suspicious amount of glazing of how "empirical" everything is compared to unspecified alternatives and how Latin is the foundation of knowledge. I also did a spot check for source-to-text integrity, at least the material not paywalled:
- The following promotional slop, "With no recorded major controversies attached to his work, his legacy rested on the enduring utility of his texts and Encyclopédie entries in shaping contemporary anatomical thought", is cited to a Christie's book auction with one or two sentences about who he was.
- "Growing up in the rural setting of 18th-century Courtenay, he was exposed to regional medical practices that shaped his early path toward a career in medicine, amid the broader intellectual currents of the Enlightenment filtering into provincial France" is cited to a blog post that only says Tarin grew up there, then built a chapel there.
- Amusingly, though Grokipedia has a system prompt to not cite Wikipedia, it gets around that by citing a machine translation of Wikipedia. (You will not be surprised to find that this doesn't back up the claim either.)
- Gnomingstuff (talk) 03:58, 20 February 2026 (UTC)
- Yes, they are sloppy with sourcing. Yes, they are using our text to connect to reality, whether they tell the world about it or not. But ... they have many useful details in this article we miss, their article is full-fledged, ours is a stub.
- An average reader who is OK with Christie's catalog as a source will take their article over ours 10 times out of 10. That's how the discussion started: I was looking for information on who was one of the Encyclopedists and who wasn't. I needed to sort about 100 articles (many could have been rejected by dates alone). At first, I was checking our articles, but quickly switched to Grokipedia for convenience (consistent layout of facts). I am not a fan of Grokipedia, but this sounded like a warning to me - and I shared the message here (after first listing the observation while discussing the progress on Talk:Encyclopedist#Encyclopedists & Encyclopaedists). Somehow, here the messenger got blamed... (for the avoidance of doubt, I am OK with that). Викидим (talk) 04:24, 20 February 2026 (UTC)
- The problem is that without actual citations, there's no indication that the information actually is true -- and almost all of it is so heavily slathered with promotional slop and synthesis that it's hard to call it useful either. It can't keep its story straight -- do his texts have "enduring utility", considering that "his early death curtailed broader influence"? For all its unending glazing, it is bad at identifying what people actually did outside vague abstractions. This source, which is heavily cited, says that one of his books "contains the first illustration of the nervous system." That seems very noteworthy! Yet Grokipedia does not mention this anywhere. (Really an apt illustration of this part of WP:AISIGNS: "[LLMs tend] to omit specific, unusual, nuanced facts (which are statistically rare) and replace them with more generic, positive descriptions (which are statistically common). Thus the highly specific 'inventor of the first train-coupling device' might become 'a revolutionary titan of industry.'") And of course, since it's Grok, it uses every opportunity it can to get on its soapbox: "This popularization effort contributed to secular medical reforms by challenging Galenic traditions and religious dogma in favor of rational, evidence-based practices, aligning with the Enlightenment's broader push for scientific progress in health and healing." Three guesses as to whether the source says anything like this. Gnomingstuff (talk) 06:18, 20 February 2026 (UTC)
- The irony here is that the soapbox statement is factually correct AFAIK: with the harpsichord brain theory, Tarin et al. essentially declared that the notion of soul is redundant ("As for the mutual commerce of the soul and the body, it is not only the most inconceivable thing in the world, but even the most useless to the physician." "Quant au commerce mutuel de l'âme & du corps, c'est non-seulement la chose du monde la plus inconcevable, mais même la plus inutile au médecin."). Once again, I am not trying to defend the Grok wording or sourcing; I am here just to remind that Wikipedia won over the old-fashioned encyclopedias not by being more accurate, but by being more useful. Викидим (talk) 21:12, 20 February 2026 (UTC)
- Some context is necessary on this. Grokipedia has a system prompt that gives it a political slant, and the text usually announces it by claiming that its point of view is "empirical" or "evidence-based" or something similar, while the opposing point of view is "subjective" or "narrative-based" or whatever. It does this basically all the fucking time, to the point where the adjective becomes meaningless. (Data on this verbal tic: User:Gnomingstuff/AI experiment/Grokipedia) It also does this regardless of the source; I've seen articles where the "empirical data" is literally just a Quora post.
- In this article, the "other side" is religion, I guess, but more commonly it's anything Grok perceives as "woke." Here's an almost-too-"perfect" example from the "Activism" article:
Empirical studies on political intolerance further highlight asymmetries: left-wing groups often express greater prejudice toward conservative activists, driven by perceived threats from ideological outgroups, exacerbating the underrepresentation of right-leaning mobilization.
Gnomingstuff (talk) 03:58, 21 February 2026 (UTC)- I understand your reasoning and share your understanding of Grokipedia bias. As a practical person, however, I would always prefer a fact with a spin and without proper sources to no fact at all (I naively assume that my brain can undo the spin and quickly grasp the veracity of the source, but facts themselves require much more work to find on my own). I suspect that many people share my attitude ("deliver me the goods now!"). For the avoidance of doubt, I do not suggest us to lower our standards, having just gone through reading a whole series of Miss Venezuela articles as part of WP:NPP (Miss Venezuela 2017 is quite representative). With these articles, AI clearly can do way better than us (so far), somehow, nobody seems to be bothered.
- (offtopic, but related) I do not believe in the very existence of NPOV, as evaluation of many facts of life greatly depends on the culture and pragmatic interests of a person. In my life experience, the "neutral" position varies greatly with the mother tongue and social class. Викидим (talk) 08:45, 21 February 2026 (UTC)
- because I see LLM-written articles every day? I don't have time to analyse that article (a fair few of its sources seem dodgy), but I can't take seriously presenting Grokipedia as something we should aim to become. It would result in the destruction of this community, effectively everything we've built, and a loss of product differentiation. Once a community loses its ethos it disintegrates; people aren't going to want to spend their spare time babysitting bots. Kowal2701 (talk, contribs) 00:13, 20 February 2026 (UTC)
- Why do you think that a good LLM is god-awful at writing an encyclopedia? Let's take my Encyclopedists example: I had randomly chosen Pierre Tarin (no special selection, just mid-list, with a name completely unrecognizable to me). Compare our entry Pierre Tarin to https://grokipedia.com/page/pierre_tarin , and AI has clearly won. Why argue that we can do better when in many cases we clearly don't? Викидим (talk) 23:36, 19 February 2026 (UTC)
Generally, I think that the Wikipedias that accept no-LLM rule will quickly lose to Grokipedia and the similar ones that will follow
- We will 'lose' only those people who think the objective of Wikipedia is to create the most comprehensive encyclopedia possible as fast as possible and are willing to throw out the very spirit of human collaboration and creativity in the process. Good riddance. The Butlerian jihad can't come soon enough. Athanelar (talk) 02:05, 20 February 2026 (UTC)
- You may have misunderstood me. I did not say "lose" editors; I meant losing the reader base and relevancy. As technology evolves, encyclopedias must evolve, too (like Britannica) - or wither on the vine (like Collier's). My position IMHO reflects the WP:PILLARS, where the editors are only mentioned sparingly and mostly to suggest refraining from showing their personality:
Editors' personal experiences, interpretations, or opinions do not belong on Wikipedia
All editors freely license their work to the public, and no editor owns an article
Wikipedia's editors should treat each other with respect and civility ...
- "Spirit" in 5P is related to the rules:
spirit matter more than literal wording
. I am all for creativity, but the bulk of our work is in retelling someone else's creative thoughts. Викидим (talk) 04:09, 20 February 2026 (UTC)- Yes, I was also referring to the reader base. I won't shed any tears if the kinds of people who need ChatGPT to remind them to breathe no longer consider Wikipedia useful because it's not full of meandering, dubiously-sourced AI equivocations. If people want answers from AI, they can ask their favourite AI. I don't see why we have to fundamentally undermine the ethos and utility of our project just to placate those people, or to conform with some vision of the 'future' beinf artificially advertised and foisted on us by tech CEOs.
- Gnomingstuff has eloquently pointed out above that an article you cited as an example we should be aspiring towards is nothing more than a wall of the usual flowery AI nothingprose, complete with citations that are actually nearly useless in telling you anything about the topic. You have fallen for the most surface-level trick; you saw that the article is bigger and apparently equated that with better; but everybody knows AI can bloviate for as long as you want it to about any topic. That doesn't mean the information it presents is actually any more useful or comprehensive.
- You've indicated that you "don't like Grokipedia"; why, then, do you insist Wikipedia should follow in its footsteps? Athanelar (talk) 09:35, 20 February 2026 (UTC)
- You may have misunderstood me. I was not suggesting that we copy Grokipedia processes here. There were two separate statements made by me:
- any Wikipedia that totally rejects the use of AI will quickly become noncompetitive and will be displaced by AI encyclopedias, in the same way that any software development company rejecting the use of AI in development will meet a swift demise in the next few years. This comment was made due to an apparent decision by the German Wikipedia to quit AI cold turkey. IMHO we would be remarkably foolish to follow in their footsteps;
- the idea that AI-generated articles are all useless crap is wrong. Here is where the comparison of an article in Grokipedia to ours came in. To an "average reader" who cares about facts, not ways of sourcing them, our article is less useful, as it contains far fewer facts. I am not impressed with Grokipedia sourcing either, so I am not going to defend it or suggest we adopt it. But the problem is in the open: a very large group of articles in Grokipedia feels much more complete, up-to-date, and thus much more useful to the reader. This is a problem IMHO. I was not confused; instead, I had a problem to solve here (sorting preexisting links to a newly created redirect) and was amazed to find that Grokipedia was easier to use to establish (trivial) facts.
- (offtopic) Unlike many editors here, I do not believe that top-end LLMs habitually lie - simply because in my job I type "design X" as a prompt daily (X is reasonably complex), and then iteratively coax the AI to deliver a thing that actually works (in my line of work, correctness can be established more or less definitively). This is exactly how I work with a human engineer, only the AI is many times faster (about an order of magnitude). My common sense suggests to me that writing an encyclopedia is not that special, and a "correct" (in a looser sense) text can be created in a few iterations, too. Викидим (talk) 11:27, 20 February 2026 (UTC)
To an "average reader" who cares about facts, not ways of sourcing them, our article is less useful, as it contains much less facts
What do you suggest we do about that, though? We obviously can't take the Grokipedia approach of cramming our articles full of dubiously verifiable facts. The entire point is that volume of content is not an indicator of a quality encyclopedia. It would obviously be absurd to suggest we should compromise our rigorous standards for sourcing just to satisfy some lowest common denominator reader who doesn't care whether the facts they're reading are actually verifiable. I don't see how "Grokipedia has a far higher volume of text but it's basically all unverifiable" is any kind of 'win'. Athanelar (talk) 12:28, 20 February 2026 (UTC)
What do you suggest we do about that, though?
Within this page, I suggest (1) accepting the presence of AI as inevitable. If one feels that LLM is evil, it does not matter for the goals of this page, IMHO - we have experience regulating inescapable evil elsewhere (e.g., WP:PE and WP:COI in general) - and therefore (2) concentrating on setting up best practices for using LLMs, and not on a total ban on article text generation (I naturally support a near-total ban on AI on the talk pages, excluding translation). In other words, restrict problematic people and practices, not tools. I have listed the problems here already (high rate of output, unvetted tools and users, NPOV needing human input). Викидим (talk) 23:55, 20 February 2026 (UTC)
- We still ban undisclosed paid editing, even though it's inevitable. We ban sockpuppetry even though it's inevitable. I'm not sure what makes AI such a unique case that we can only negotiate its use rather than restricting it.
- It is not at all unreasonable to suggest that a jackhammer is a tool that might not belong in a china shop. Something being a tool does not make it morally neutral, functionally neutral, or appropriate in every situation. If the tool is nigh-universally causing a detriment to the project and its volunteers, then the "problematic practice" is using that tool, and the "problematic people" are the people using that tool. Athanelar (talk) 10:08, 21 February 2026 (UTC)
- IMHO, a proper analogy for WP:UPE is WP:LLMDISCLOSE; I would not expect the latter to be controversial. Sockpuppetry is very different from the use of LLMs: it does not help us create an encyclopedia at all; the balance is a clear negative. Paid editors, on the other hand, do help to improve the encyclopedia, so our approach is "allow - mandate disclosure - watch like a hawk". The latter way looks reasonable enough to apply to LLMs: a pragmatic goal is usually unifying. Викидим (talk) 18:43, 21 February 2026 (UTC)
- I've already explained above how the Grokipedia article may feel much more complete because it contains more text, but (at the risk of sounding like Grok) facts are not feelings, length is not comprehensiveness, and platitudinous slop is neither fact nor deep wisdom. Gnomingstuff (talk) 17:31, 20 February 2026 (UTC)
- The above two comments by Athanelar and Gnomingstuff say more or less what I would have said. Stepwise Continuous Dysfunction (talk) 22:35, 20 February 2026 (UTC)
- @Викидим:
Generally, I think that the Wikipedias that accept no-LLM rule will quickly lose to Grokipedia and the similar ones that will follow
. Probably not. Those will be in the same boat as Wikipedia, sort of. Grok's backer has deep pockets, and so he can replace Grok when instant overview sites reduce its traffic along with Wikipedia's. But that's what's coming: the generation of tech that will replace conventional search will also replace conventional one-page-per-topic websites as well, or at least reduce their viewing traffic to a trickle, with a simple prompt box on a single page, which will be able to produce almost anything in any media format. What is the solution for Wikipedia? That's actually two questions in one: the first pertaining to the Wikipedia knowledge site(s), and the second to the Wikipedia community. I'd say the community will fade away, as its traffic dwindles, regardless of what LLM rule it adopts. As for its knowledge network, it, or a successor, could still be around if the Wikimedia Foundation develops an AI prompt box app of its own—or an app with whatever UI is in vogue at the time, whether it's an ear piece with bone conduction mic, or a neural chip. But technology is accelerating, and its backers aren't going to wait for Wikipedia to catch up; the competition is so far ahead that it looks unlikely that Wikipedia (and its sister projects) will ever catch up. We've probably waited too long. So, what are we doing here on this page? We're supporting the project through to the end of its product cycle. — The Transhumanist 08:27, 21 February 2026 (UTC)
- I agree with your very thoughtful message 100%. In my corner of the universe, however, the humans already feel to a large extent like a test team for the AI: somehow, connection to reality is harder to automate than the pure thinking part. Thus my position of "leave the writing part to AI, let humans guide and curate" might postpone the inevitable. I would not discount Musk that easily; he might intend Grokipedia to become precisely the answer box you have described for the more conservative part of the population. Викидим (talk) 09:05, 21 February 2026 (UTC)
- @Викидим: Like I said, he (Mu$k) can replace Grok. Can Wikipedia adapt in that way? Probably not. Meanwhile, AI development progresses at rates faster than possible for humans alone. We're all a test team for AI now, especially power users. While the most creative cybernerds push the limits of what they can do with online AIs, all that data is being recorded and analyzed to become the basis of further refinements to the AIs. AI development is being driven by AI harvesting of human AI usage data, among other things. Can we postpone the inevitable (the swallowing of this niche by fully automated competitors)? That would be like postponing a tsunami. — The Transhumanist 12:34, 21 February 2026 (UTC)
- Btw idk if people know but the German Wikipedia largely banned LLM-use a few days ago, stressing "written by humans for humans". The process was very German Kowal2701 (talk, contribs) 21:27, 19 February 2026 (UTC)
- @The Transhumanist, I'd start with the pain points of new editors. They are the ones who most need help and who struggle the most now. I answer their questions all the time, and I'd say that the most common topics are the WP:N/WP:RS policies, editing mechanics, and the draft-to-publication pipeline. AI tools can help a lot, since right now our processes are slow, the editor is antiquated, and the policies are perceived as arcane.
- Here are some examples
- Policy
- RS proofreading - are the sources reliable? There are some borderline cases, but a simple tool can flag social media, non-independent sources, promotional pages, and wikis, which account for the overwhelming majority of problematic sources (a rough sketch of this kind of flagging appears after this comment). The point is to add some friction. There are cases when such sources are legitimate, and we want users to think when they use such sources. It is also better UX-wise, since they get feedback immediately and not weeks later.
- Notability check - again, many drafts are obviously non-notable.
- Policy chatbot - for asking questions about our policies and guidelines.
- Editing
- Creating tables, adding references, etc. Can be useful for everyone.
- Draft-to-publication (WP:AFC backlog is 2k+ articles and getting a review can take a while)
- Checking references (I dabbled in it)
- Checking for promotional/non-encyclopaedic language
- Policy
- AI can also be used for vandalism prevention but I suppose it's out of scope of this discussion. Alaexis¿question? 22:57, 19 February 2026 (UTC)
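To make the flagging idea above concrete, here is a minimal sketch in Python, assuming a hand-maintained deny-list of domains. `FLAGGED_DOMAINS` and `flag_sources` are hypothetical names, not an existing tool, and a real implementation would be driven by WP:RSP data rather than a hard-coded dict:

```python
# Minimal sketch of an RS-flagging helper (hypothetical, not an existing tool).
# It only adds friction: flagged sources still need a human decision.
import re
from urllib.parse import urlparse

# Illustrative deny-list; a real tool would draw on WP:RSP and per-project lists.
FLAGGED_DOMAINS = {
    "twitter.com": "social media",
    "x.com": "social media",
    "facebook.com": "social media",
    "fandom.com": "open wiki",
    "prnewswire.com": "press release / promotional",
}

def flag_sources(wikitext):
    """Return (url, reason) pairs for URLs whose domain is on the deny-list."""
    flags = []
    for url in re.findall(r"https?://[^\s|\]}<>]+", wikitext):
        host = urlparse(url).netloc.lower()
        host = host[4:] if host.startswith("www.") else host
        for domain, reason in FLAGGED_DOMAINS.items():
            if host == domain or host.endswith("." + domain):
                flags.append((url, reason))
    return flags

draft = '<ref>https://www.facebook.com/somepage</ref> <ref>https://doi.org/10.1000/x</ref>'
for url, reason in flag_sources(draft):
    print(f"Please double-check this source ({reason}): {url}")
```

Note that nothing here blocks an edit; the output is a prompt for the contributor to think again, which matches the "add some friction" goal above.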
- I'm open to AI use for tools. Things that flag issues for editors or automate routine tasks. We already have variations of these with bots and userscripts, and AI could make them more powerful. I am against any LLM-generated content touching mainspace, and there should be a question of whether WP:CIR applies for any editor who adds it. We probably should not be creating help pages in this case, as the only people who should be using AI on Wikipedia are those who are already experienced in developing tools or bots. My question to those who are okay with LLM content on Wikipedia: why are you still here? If you want Wikipedia to resemble ChatGPT or Grokipedia, then go use ChatGPT or Grokipedia. Thebiguglyalien (talk) 🛸 23:37, 19 February 2026 (UTC)
- I cannot answer for everybody supporting the use of AI for content creation, but to me AI is just a writing tool, like a word processor, that speeds up my writing process. One can type junk text on a computer keyboard or generate it with AI; the latter is just faster. There is less skill involved in using WP:visual editor (VE) than in editing the plain text, so VE contributions, in my experience, are more often problematic; we are still OK with VE because it is more efficient for many people. Similarly, I can create an OK article (I don't do the good ones) by typing it letter-by-letter or by interacting with AI; the latter is just faster. In other words, these problems exist between the chair and the keyboard: the issue we have with some edits is not with AI per se, but with some editors, their particular tools, and their particular processes. This page seems to address the "processes" aspect of the issues, thus I am posting here, simply trying to help. Викидим (talk) 01:00, 20 February 2026 (UTC)
- Now that I've had some time to digest this discussion, I've started organising some of my thoughts (which I've been meaning to do for a while, so thanks The Transhumanist for the motivation, and apologies for the mini essay). I'd describe my position on LLMs these days as informed skepticism rather than outright doomerism, and I think there can be a role for carefully supervised LLM use in various kinds of knowledge work. I pay for AI use through Kagi, a search engine which provides access to various premium LLMs; for API use with OpenAI and Anthropic; and for GitHub Copilot.
I'll start with the content problems that many others have already observed, especially when reviewing new articles and drafts. WP:AISIGNS systematically documents what many of us can intuitively recognise because we've seen so much of it. I suspect that the majority of the easily-identified offenders are using free or cheap LLMs with minimal customisation, because we're seeing the same kinds of fluff in drafts and literally the same arguments and denials in the AfD and ANI discussions that follow. There is just so much of it these days – more mass-produced drivel comes in than human reviewers can handle. I strongly oppose any effort to encourage or facilitate using LLMs to assist article creation or editing, or to overturn WP:NEWLLM or WP:AITALK. Any tool intended for identifying or fixing potential errors should be supervised by a human (not fully automated), as is the case for anti-vandalism tools and the like.
Now, on to some of the ways I've used LLMs on Wikipedia. I've been refining the custom instructions for my review-and-research assistant to work alongside other automated tools.
- Identify possible LLM text: I don't systematically check the well-known LLM detectors, but if something smells off I'll ask the assistant to evaluate it as part of a broader review.
- Check sources referenced in an article or draft:
- Do the sources exist? Sohom_Datta's link dispenser is good for this (it even has an LLM detection mode) but sometimes doesn't pick up URLs that are not correctly formatted as references. An LLM can grab any URL it finds and try to extract the content if it hasn't been blocked (a bare-bones sketch of this existence check follows this comment).
- I've been trying out User:Alaexis/AI Source Verification (with nods to Polygnotus and Phlsph7) for verifying text-source integrity.
- Find sources for an initial research phase: a search is conducted with Kagi (which I find has higher-quality results than Google or DuckDuckGo/Bing) and the custom instructions point to the RS guidelines to filter the results, but sometimes the assistant still suggests non-RS sources and needs to be corrected.
- Translate questions and answers: sometimes a non-English speaker wanders on to an ENWP help page or talk page and Kagi Translate can help me point them in the right direction.
- Summarise discussions: We love to talk about Wikipedia on Wikipedia, and being in Australia, I sometimes find that nearly a hundred comments have accumulated in one discussion overnight. An LLM-generated summary can help me decide whether reading a specific discussion in detail is the best use of my time.
- For a concrete demonstration, my German isn't good enough to follow policy discussions without my attention wandering, so I asked Claude Opus to summarise the German discussion referenced earlier and identify possible takeaways for us here on ENWP: chat thread.
In addition, I sometimes use GitHub Copilot to help me understand scripts and tools written by others (only those that already have code hosted on GitHub), for example to check my understanding of how a feature has been implemented. This can help me write bug reports and pull requests.
I really want to emphasise that all of these tasks are ones that I can already do because I'm a (hopefully trusted) editor and patroller who has also done postgraduate research, worked in libraries and archives, dabbled in JavaScript, and has access to multilingual dictionaries. I only delegate grunt work to a machine because there aren't enough hours in a day – much like how a researcher might have delegated my undergraduate self to go to the library and bring back books on ${TOPIC}, which they would then read themselves. There are many editors here whom I would trust to use LLMs for specific tasks with competence and care, and a far greater number whom I do not know well enough to evaluate. I don’t know whether this means I'm arguing for a new user right, but at the minimum I would be very reluctant to see any guideline or help page that might encourage newcomers to try LLM-assisted tasks. ClaudineChionh (she/her · talk · email · global) 09:02, 20 February 2026 (UTC)
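As an aside, the mechanical half of the "do the sources exist?" check above doesn't even need an LLM. A bare-bones sketch (illustrative only; the link dispenser mentioned above does this far better, handling redirects, rate limits, and bot blocks) might look like:

```python
# Rough sketch: pull URLs out of wikitext and see whether each one still resolves.
import re
import urllib.request

def check_urls(wikitext, timeout=10.0):
    """Map each URL found in the wikitext to 'ok' or an error description."""
    results = {}
    for url in sorted(set(re.findall(r"https?://[^\s|\]}<>]+", wikitext))):
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "citation-check-sketch/0.1"},
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = "ok" if resp.status < 400 else f"HTTP {resp.status}"
        except Exception as exc:  # dead link, block, or network failure
            results[url] = f"error: {exc}"
    return results
```

A URL that resolves is of course no guarantee that it supports the text citing it; that is where the human (or a supervised text-source integrity check) comes in.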
I could trivially generate a Wikipedia-like pseudo-encyclopedia with a budget of a few million dollars for LLM inference. I mean in days; I could vibe-code the scripting in a few hours, and the rest would be limited only by inference speeds. Apart from that, the rest of the cost would be mere static file hosting and CDN fees. It would be absolute bullshit. There will be mock-Wikipedias for every possible bias and lunatic viewpoint, drowning out all reason. If LLM generation of content becomes the norm here, it will be the end of Wikipedia, as there will be nothing to differentiate it from the "shittipedias"; Wikipedia's core value (even for LLM operators!) is that it is "an encyclopedia written by humans for humans" (and, as it happens, for LLM training also). — The Anome (talk) 20:00, 22 February 2026 (UTC)
- I work with AI on a daily basis. When I prompt Google Gemini 3.0 Pro to "Implement [thing A] in [language B]" (the actual prompt is much longer, naturally), after a few iterations the result is usable code that passes formal verification. On top of that, I truly do not know language B, and cannot write texts in it myself. So I know for sure that the result of AI work is not absolute bullshit and matches the work of a human (measured formally, again). So I tend to think that when the same person asks the same tool to "based on [these texts], create wikitext formatted for English Wikipedia" (again, the actual prompts are longer), after a few iterations the text would resemble what the same person could write, just achieved much faster. Summarizing knowledge with "soft" criteria of correctness, when the person in charge actually knows the output language, looks to me like a simpler task than creating new stuff that has to be formally correct while being unable to read the output. Now, if I try to use the free version of the same Gemini for the same "thing A" task, then yes, I would get an unfixable wave of text that does not work. I would speculate that discounting AI output out of hand generally has three roots: (1, primary) using bad tools. In my experience, everything released before the fall of 2025 did not come close to human performance. As of today, only the top models of each manufacturer perform adequately. These models are not cheap; the going cost is a few hundred US dollars per month. IMHO, this performance will eventually come down to the free tier, but it will take a couple of years. (2) Bad prompts. A short prompt, like "write a Wikipedia article about X", allows AI to synthesize, which it will. (3) A firm belief on the part of the user that nothing good will come out of an AI effort. This skepticism makes people use inferior, but easily available, tools, and half-baked prompts. Викидим (talk) 21:36, 22 February 2026 (UTC)
- I have used AI agents to do similar work. I currently use Claude Opus 4.6, currently Anthropic's top-tier LLM agent, on the Max plan on a daily basis to develop software for PCB layout and routing, known to be a hard task. In order to generate a usable design, the layout produced by Claude's software has to pass a series of rule checkers that check the design is actually valid in terms of both simple connectivity and other electrical/geometric rules. It works amazingly well, except when it doesn't, and the rule checker catches that, so no harm is done, and I get Claude to improve the software to deal with the failing case, so it gets better over time. AI + formal verification is absolutely the right approach; this is a classic inverse problem. However, this level of formal verification is not available for writing encyclopedia articles, so I still wouldn't trust it to write an encyclopedia article. — The Anome (talk) 21:55, 22 February 2026 (UTC)
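Both comments above describe the same generate-then-verify loop. A schematic of it, with frankly hypothetical helpers (`llm_generate` stands in for whatever model API is used, and `run_checks` for the formal verifier or, in the wikitext case, automated source checks plus human review; neither is a real vendor API):

```python
# Schematic of the iterate-until-verified workflow described above.
# llm_generate() and run_checks() are hypothetical placeholders, not real APIs.

def llm_generate(prompt):
    raise NotImplementedError("stand-in for a real model call")

def run_checks(draft):
    raise NotImplementedError("stand-in for a verifier; return [] on success")

def iterate_until_verified(task, max_rounds=5):
    prompt = task
    for _ in range(max_rounds):
        draft = llm_generate(prompt)
        problems = run_checks(draft)  # empty list means all checks passed
        if not problems:
            return draft
        # Feed the specific failures back, as one would with a human engineer.
        prompt = task + "\n\nThe previous attempt failed these checks:\n" + "\n".join(problems)
    raise RuntimeError("no verified result within the iteration budget")
```

The whole disagreement in this thread is, in effect, about how trustworthy `run_checks` can be made for encyclopedia prose, where no formal verifier exists.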
- Thank you for a detailed post. It seems like we have observed the same set of facts: modern AI generally gets things right, and the things can be set truly right by subsequent verification. We apparently disagree on whether the ability of humans to drive verification is good enough to compensate for AI flaws. I think that the benefits of the ability to actually read the output outweigh the natural human shortcomings in verification, so an acceptable article written by AI is possible. My understanding of your position is that "[n]aive and/or newbie editors and bad-faith actors" cannot be expected to do a proper job. Since these are not mutually exclusive positions, we can both be right. With that, bowing out. Викидим (talk) 22:27, 22 February 2026 (UTC)
- For an illustration of AI being able to verify article texts automatically, see Talk:Miss Venezuela 2017#Possible errors. Miss Venezuela 2017 is a brand-new article (1/23/2026) that is practically unsourced; I am 100% sure that AI would have done better here. Викидим (talk) 22:44, 22 February 2026 (UTC)
- @Викидим: On this, we are in total agreement. Using LLMs to check citations against article text is precisely one of the uses I had in mind. When it gets it right, it's useful. When it gets it wrong (with, I hope, a << 1% error rate), the errors are harmless, apart from the inconvenience of other editors wasting their time checking on its erroneous checks. There is some false positive error rate below which this becomes a net gain for Wikipedia. I'm guessing it's around 0.1%. — The Anome (talk) 09:28, 23 February 2026 (UTC)
- What I was trying to demonstrate is that a modern LLM probably does not do much worse on fact checking than a human. So if we add verification to the workflow (using a tool similar to Earwig Copyvio, so that we use a vetted tool for that), the problems of invented facts and wrong references will hopefully go away. Викидим (talk) 11:06, 23 February 2026 (UTC)
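The break-even false-positive rate The Anome mentions can be put into rough numbers. Every figure below is a made-up assumption, included purely to show the shape of the calculation:

```python
# Back-of-envelope for the break-even false-positive rate (all numbers assumed).
citations_checked = 10_000
true_error_rate = 0.02            # assumed share of citations that are actually bad
minutes_saved_per_catch = 15.0    # assumed editor time saved per genuine catch
minutes_wasted_per_false_flag = 10.0

minutes_saved = citations_checked * true_error_rate * minutes_saved_per_catch
# Net gain requires: false_positive_rate * citations_checked * waste < minutes_saved
break_even_fp_rate = minutes_saved / (minutes_wasted_per_false_flag * citations_checked)
print(f"break-even false-positive rate ~ {break_even_fp_rate:.1%}")  # ~ 3.0% here
```

Under these illustrative assumptions the tolerable rate is much higher than 0.1%; the real question is how well the assumed error rates and time costs hold up in practice.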
- I suggest reading https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipedia-editing-what-we-learned-in-2025/ WhatamIdoing (talk) 05:55, 23 February 2026 (UTC)
- Honestly, a really big issue that I have with AI is not how 'correct' it may be or how well it can summarize its own training data, but the fact that AI simply can't do things that humans can choose to do, like question actual information. When I ask AI to generate an article on my own work or some niche piece of software, it almost always prefers primary sources or takes things extremely literally, greatly reducing the actual substance of the work even if it's 'accurate' at a glance.
- AI can take the words of a paper that has possible conflicts of interest and run that as fact, which (hopefully) a human would notice and avoid. AI takes what I've said on some borderline unfinished things and somehow interprets that as being something fully feature-complete with an active community around it. It even somehow thinks that one of my apps is simultaneously capable of advanced rendering (unfiltered 2D spritesheets only), and incapable of system interaction (despite letting you have scripts with full system access, for better or worse). This isn't as horrendous with other models (I used GPT-5.2), but many still make similar errors.
- A good editor would probably find a variety of non-primary sources, whereas AI just looks for the most information (and we can't truly know how LLMs choose a source). While I think that using AI for basic sanity-checking and highlighting possible mistakes is a good thing for those who want it, using AI for more than that risks greatly degrading the quality of articles - and more importantly, the quality of research - on Wikipedia. If these tools go beyond that on Wikipedia, it will definitely embolden people to put in less effort, because that option's existence implicitly devalues the importance of human work when a 'railroad the article outline and do research' button is right there on the official site.
- Emily * Emi-Is-Annoyed (message me!) 10:31, 23 February 2026 (UTC)
- I would expect a good editor to provide the LLM with the text of sources, and not let it go and find its own - or even fetch the text on its own. An unskilled editor would most likely not research the sources and would use whatever is easily available, regardless of their use or non-use of LLMs. This is the paradox I observe: the community is concerned with the possibility of unskilled editors using LLMs, but measures the potential poor outcomes against a bar set at the level of experienced editors. Викидим (talk) 10:58, 23 February 2026 (UTC)
- But, again, using an LLM on sources you found won't suddenly make it 'critically think'.
- If you offload that to AI, you will not be able to ensure that you haven't missed important information. Worst case, you avoid information that could greatly change a perspective on the topic because AI decided to summarize away crucial information. In the best case, you end up reading roughly the same amount of text while questioning whether you've 'missed out' on something.
- Wikipedia is really not a race to create content. Perhaps Grokipedia fits that better, but the focus should be on providing accurate and considered information, not on making as much content as possible anyway.
- I'm sure some skilled editors have their own system in place that uses AI to automate parts of their workflow, and that's something done under their own judgement. I'm saying that there shouldn't be that kind of tool dangled in front of all editors, who don't have as long an opportunity/experience to consider the implications of this approach.
- Either way, I think my points are sensible in the context I'm trying to bring up. I don't want to add too much bloat to this topic/start arguing, so I'll leave it mostly here. Emily * Emi-Is-Annoyed (message me!) 11:53, 23 February 2026 (UTC)
Brainstorming, continued...
[edit]@Alaexis, Athanelar, Викидим, Chaotic Enby, ClaudineChionh, Emi-Is-Annoyed, Fermiboson, Gnomingstuff, Izno, Kowal2701, Novem Linguae, Royiswariii, ScottishFinnishRadish, SmokeyJoe, Stepwise Continuous Dysfunction, Suriname0, The Anome, Thebiguglyalien, and Thryduulf: — The Transhumanist 03:08, 24 February 2026 (UTC)
@Wakelamp and WhatamIdoing: — The Transhumanist 03:09, 24 February 2026 (UTC)
A lot of productive input so far, spanning the gamut of potential AI activity and areas for help to focus on or not. I've endeavored to gather it all in tables for easy digesting and review, and to add whatever else I could think of.
Here are the previous tables edited, updated, and expanded by hand; plus a third table...
In the tables below, "AI" refers to large language models (LLMs) and chatbots. Not to be confused with refined AI-powered spellcheckers and grammar-correcting programs.
| AI use case ideas | Details |
|---|---|
| AI use for tools | Things that flag issues for editors or automate routine tasks. We already have variations of these with bots and userscripts, and AI could make them more powerful. |
| Analyze help pages and implement approved suggestions | Simplify verbose help pages by having AI analyze and apply editor-approved changes. |
| Any offline activity that stays offline | |
| Edit talk pages to add articles to WikiProjects | Automate harmless, self-correcting talk page additions. |
| Improving WikiProject pages | |
| Proofreading and copy editing AI drafts | Spotting errors and issues, creating a new draft with problems fixed. |
| Translating questions and answers | Sometimes a non-English speaker wanders on to an English Wikipedia help page or talk page, and a translation tool (chatbot, Kagi Translate, etc.) can help point them in the right direction. |
| Translation from non-English Wikipedias | We could write a help page using the Wikipedia:LLM-assisted translation guideline as a starting point, being careful to not give the impression that we are encouraging editors to use LLMs when translating, but just providing help on how to proceed if they choose to do so. |
| Validating Wikidata coordinates | How you can help fix bad coordinates flagged by a machine learning tool |
| As a WP:WIKISHIELD | such as for vandalism prevention |
| Working on a page in a sandbox | |
| Research and browsing assistance | |
| As a policy chatbot | for asking it questions about our policies and guidelines. |
| Asking an AI how to do something on Wikipedia | The chatbot engages in basic off-Wikipedia Q & A, providing procedures and instructions for Wikipedia activity |
| Find sources for an initial research phase | a search is conducted, such as with Kagi (typically higher-quality results than Google or DuckDuckGo/Bing) and the custom instructions point to the RS guidelines to filter the results, but sometimes the assistant still suggests non-RS sources and needs to be corrected. |
| Summarise discussions | We love to talk about Wikipedia on Wikipedia—a hundred comments can accumulate in one discussion overnight. An LLM-generated summary can help decide whether reading a specific discussion in detail is the best use of one's time |
| Generating suggestions | |
| Generating suggestions for infoboxes | |
| Generating suggestions for WikiProject templates | |
| Providing suggestions on improving a page | Various analysis tasks including suggestions for page improvements. |
| Identifying | |
| AI content detection | Running an article or other page through an AI, to find suspected LLM-generated content, before working on the page. |
| Checking references | Check sources referenced in an article or draft: Do the sources exist? Sohom_Datta's link dispenser is good for this (it even has an LLM detection mode) but sometimes doesn't pick up URLs that are not correctly formatted as references. An LLM can grab any URL it finds and try to extract the content if it hasn't been blocked. See also: User:Alaexis/AI Source Verification. |
| Checking for promotional/non-encyclopaedic language | |
| Comparing lists (sets) | |
| Comparing pages to find contradictions | |
| Diagnosing WikiProject pages | |
| Error checking pages | |
| Error checking its own work (in a new session, to clear contextual memory and the bias that creates) | |
| Flag possibly-bogus citations | AI detects questionable citations for human review. |
| Flag unreferenced statements needing cites | AI identifies statements without sources. |
| Identifying gaps in articles and assist filling them | AI finds content gaps and helps editors fill them. |
| Identifying missing links from list articles | AI detects missing wikilinks in lists. |
| Identifying spelling/grammar/homophone errors | AI flags potential errors based on context for human verification. |
| Identifying wrong article links | AI flags links pointing to potentially incorrect articles. |
| Notability checking | Many drafts are obviously non-notable. |
| Proofreading articles | Spotting errors and issues, suggesting corrections. |
| Screening articles for AI problems, such as hallucinations and overconfident prose | |
| Developing userscripts | |
| Writing userscript sourcecode | |
| Adding comments to userscripts to make them intelligible | |
| Writing userscript instructions | |
| Developing templates | |
| Writing template instructions | |
| Maintaining navigation footers | |
| Adding missing key topics | |
| Restructuring | |
| Removing links that don't belong | |
| Reducing bloat (removing stubs and obscure topics) | |
| Working on policy | |
| RS proofreading | Are the sources reliable? There are some borderline cases, but a simple tool can flag social media, non-independent sources, promotional pages, and wikis, which account for the overwhelming majority of problematic sources. The point is to add some friction. There are cases when such sources are legitimate, and we want users to think when they use such sources. It is also better UX-wise, since they get feedback immediately and not weeks later. |
| Editing encyclopedia content | |
| Creating tables | which can be tedious by hand; adding references, etc. |
| Draft-to-publication | (WP:AFC backlog is 2k+ articles and getting a review can take a while) |
| Editing | That is, the user copying/pasting AI-generated text into Wikipedia articles |
| Formatting references, including creation of {{cite}} templates | |
| Merging articles | |
| Removing duplicates | Deleting duplicate list items, duplicate paragraphs, etc. |
| Updating WikiProject pages | |
| Working on drafts in draft space | |
| Potential Help Page Focus (alphabetical) |
|---|
| Articles |
| Categories |
| Content forks (the unacceptable kind, per WP:BADFORK) |
| Contradictions |
| Coordinates |
| Discussions |
| Dynamic help pages (refdesk, help desk, etc.) |
| Editor tools |
| Errors |
| External links sections |
| FAQs |
| Further reading sections |
| Gaps |
| Help pages |
| Introductory pages |
| Lists |
| Navigation aids |
| Notes |
| Poor quality writing |
| Problems |
| References sections |
| Reports |
| Rules |
| Scripts |
| Script comments (to help programmers understand what the code is doing) |
| Script documentation (often missing altogether) |
| See also sections |
| Sources |
| Tags |
| Tables |
| Templates |
| Tutorials |
| Vandalism |
| Warnings, caveats, risks |
| WikiProject pages |
| Danger | Description | Prevention |
|---|---|---|
| AI data pollution | Low quality, misleading, hallucinated, or otherwise erroneous content generated by AI. | Don't copy/paste AI-generated content into Wikipedia. Write your contributions yourself according to Wikipedia's content policies and style guidelines. |
| AI-Wikipedia degenerative feedback loop | Repetitive cycle of AI being trained on AI-polluted Wikipedia which is then polluted even worse by the pollution-degraded AI. | Don't copy/paste AI-generated content into Wikipedia. Write your contributions yourself according to Wikipedia's content policies and style guidelines. |
| Model collapse | Wikipedia and AI training data becoming so polluted by AI errors as to be rendered unacceptably unreliable, in turn making Wikipedia and the AI models trained upon it useless | Don't copy/paste AI-generated content into Wikipedia. Write your contributions yourself according to Wikipedia's content policies and style guidelines. |
| Diminished user agency in discussions, which may amplify the AI data pollution problem presented above | Using AI to speak for you on talk pages, making you the meat puppet of an AI that is error-prone, and biased in undisclosed ways—a form of sockpuppetry. This endangers the decision making and consensus building process on Wikipedia, which can result in editing rules and content decisions that can screw up Wikipedia even more. | Write your talk page messages yourself, marking AI support text (like tables, and examples of AI text) as AI-generated, to avoid confusion and to help others be on their guard for AI data errors. |
The tables above include ideas pulled from the discussion text, and additional brainstormed items.
If the tables prompt you to think of further ideas or comments, please share. Sincerely, — The Transhumanist 02:24, 24 February 2026 (UTC)
- Please don't ping me unless you have to. Gnomingstuff (talk) 04:29, 24 February 2026 (UTC)
- I found ChatGPT to be highly useful in generating or modifying database queries (quarry.wikimedia.org), editing existing MediaWiki table code (like dropping some columns), and editing template usage with parameters ending with numerals (like Template:Mapframe); a small sketch of the table-column case follows the next comment. Arjunaraoc (talk) 08:44, 24 February 2026 (UTC)
- Yeah, AI is much more of a net positive for programming questions than it is for encyclopedia writing. Programmers, including myself, use it all the time. Whatever AI suggests for programming is quick to check (run the code and see if it works), unlike encyclopedia writing (read sources thoroughly trying to catch hallucinations). –Novem Linguae (talk) 02:48, 25 February 2026 (UTC)
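For the table-editing chore mentioned above, the transformation is mechanical enough to sketch without any AI at all. This toy version is hypothetical and assumes the common inline cell format (`||` and `!!` separators); tables with one cell per line or with cell attributes would need more care:

```python
# Toy sketch: drop one column from MediaWiki table markup that uses the
# inline cell format ("| a || b || c"). Hypothetical helper, not a real tool.
def drop_column(table_text, col):
    out = []
    for line in table_text.splitlines():
        if line.startswith("!") and "!!" in line:    # header row
            cells = line.lstrip("!").split("!!")
            del cells[col]
            out.append("!" + "!!".join(cells))
        elif line.startswith("|") and "||" in line:  # data row
            cells = line.lstrip("|").split("||")
            del cells[col]
            out.append("|" + "||".join(cells))
        else:                                        # {| , |- , |} pass through
            out.append(line)
    return "\n".join(out)

table = '{| class="wikitable"\n! A !! B !! C\n|-\n| 1 || 2 || 3\n|}'
print(drop_column(table, 1))  # removes column B
```

This also illustrates the broader point: for edits this mechanical, a reviewed script is easier to trust than a chatbot, because its behavior is fixed and inspectable.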
- Sorry, but I feel like this entire process has just been talking to a brick wall. Most of the things listed in your tables above were raised once, received strong opposition, which was then summarily ignored. Virtually every single item in your "editing encyclopaedia content" section has been rebutted by multiple members of the community, with detailed examples of how they don't work. While AI tools will certainly have their place on Wikipedia in some of the roles you have mentioned, may I suggest spending a week or two around the AfC queue and familiarising yourself with the way AI tools are usually currently used on Wikipedia before proceeding with what is frankly not that much better than a wall of text.
- To take the example of paid editing raised earlier in the discussion, while paid editing done properly can be constructive, we take no issue with strongly discouraging paid editing on every page we could possibly fit such a thing into. I see no fundamental reason, again with the current way the tools are used on Wikipedia (and not some hypothetical proper way it could be used if everyone was magically better informed) why we should not do the same for AI, rather than pretending to be neutral on the issue because there is a nonzero probability it could be done properly. Fermiboson (talk) 11:55, 24 February 2026 (UTC)
- Seconding Fermiboson's comments. This is not a productive way of brainstorming without listening to any feedback that may be negative in nature. Chaotic Enby (talk · contribs) 12:20, 24 February 2026 (UTC)
- Yeah, +2 to that. I'm now regretting talking about how I've used AI and I'm frankly getting really fed up with this approach of taking a superficial look at our comments, chucking the positive ones in a summary table and calling it a day. @The Transhumanist you're not the first editor to do this and you won't be the last, but you've simply cherry-picked my positive examples without engaging at all with the warnings and caveats that bookended my comments. You have to read all of the text, not just the bullet points! After seeing these tables, to put it bluntly, I don't trust you to carefully digest a long discussion like this on your own, let alone to critically evaluate a machine-generated summary. ClaudineChionh (she/her · talk · email · global) 13:22, 24 February 2026 (UTC)
- No need to worry. The idea is to pick one option among the ideas generated to collaboratively write a help page on. There are two approaches going on at the same time here. Some of us are still in the gathering-ideas phase, and are reserving our thoughts on the best choice until after that step in the brainstorming process is completed. Others, like yourself, have weighed in early. The process is far from complete. Nobody has started writing the help page, so you have nothing to worry about. Also, what good would it serve anybody if everybody writing it were angry at each other? Keep in mind, it's going to be a draft, which will never get published if collaboration isn't successful. The goal is to find an option that we all more or less agree with. Note that the tables are not finished yet, as I barely got started filling in the details column—and even though I spent much of the night on this, I pledge to continue working on it as time allows, which may be a day or two, because this has turned into one big long wall of text. Also, the tables aren't just for selection; they are an aid for future reference as well. In that regard, this has been one very productive discussion, and quite a learning experience. I'd appreciate it if you exercised some patience and assumed good faith. — The Transhumanist 17:27, 24 February 2026 (UTC)
- Apologies if I have not been clear enough. I oppose any creation of help pages to guide new editors in using LLMs to assist their editing, as this creates the impression that this practice is accepted by the community when it is not, and implicitly encourages new editors to do so. If you want to write such a help page, given the official-looking nature of such pages, you will have to start an RfC to gain community consensus on what you want to do - not just what you put on the page but whether to do it in the first place. Fermiboson (talk) 22:01, 24 February 2026 (UTC)
- I'll echo Fermiboson – there is no consensus yet to write these help pages at all. If you go ahead with your plan for these help pages without establishing consensus first, you risk wasting a lot of time and effort and possibly tanking your reputation in the community. ClaudineChionh (she/her · talk · email · global) 22:36, 24 February 2026 (UTC)
- I oppose this effort for the reasons articulated by Novem Linguae. Wikipedia's value is as a human-written source of knowledge. Anyone choosing to read our articles 5 years from now will be making a conscious decision to seek out human-written content. If people want to create content with LLMs they should go do it somewhere else. There is no shortage of options. NicheSports (talk) 00:10, 25 February 2026 (UTC)
Fermiboson, apology accepted. I think I understand the situation now: the community is at a stalemate in policy discussions on LLMs. They have neither been banned nor accepted outright. Then I come along, figuring that if LLMs aren't disallowed, and people are using them, there should be guidance on how to do so, and that there must be something that could be agreed upon to write about. But any established help guidance on LLMs could shatter the status quo in the direction of official acceptance, thwart the effort to ban or severely limit LLMs, and potentially open the floodgates of unskilled LLM users further than they already are, jeopardizing what you clearly see as a defense of the integrity of Wikipedia's human-authored data. Therefore, any movement either way on the policy standoff should be through official proposals or policy drafting. That is, be part of the mainstream LLM debate, not some discussion of help instructions on a backwater talk page. No wonder you were getting noticeably agitated. Upon reflection, I have to agree with you. Discussing chatbot help is not helpful in the current environment, as it is merely an extension of the politically sensitive LLM situation. My apologies to you. Sincerely, — The Transhumanist 06:16, 25 February 2026 (UTC)
P.S.: I'm fine with the closing/archiving of this discussion.
How does this work in practice?
[edit]I post this as a courtesy message, after pondering what just happened above, so you are aware of the questions an analysis of that discussion may raise...
I think I spotted a danger to the anti-LLM mission: the assumptions made in the discussion above might put the anti-LLM guards of Wikipedia off their guard. Since you have apparently come to the conclusion that a community consensus (or specific lack thereof) has been established, it may help to be aware of these questions so that you can take whatever measures are needed to protect your mission (and Wikipedia).
After rereading the discussion above, it looks like it has resulted in a site-wide prohibition on LLM help pages, and the assertion was made that in order to create such a page, someone would have to post an RfC first.
It is pretty shocking that such a decision, which appears to be on the order of a behavioral sanction, guideline, or policy, could be made on a backwater talk page, in answer to an invitation to a page-writing collaboration.
So, I ask the question: "How would that work in practice?"
It seems inevitable that somebody, say "John Doe" or "Jane Doe", will someday create an LLM help page. You will then be faced with a physical page in defiance of your decision above. What must you do in order to get rid of that page?
Will that person be considered by the community to be in violation of a prohibition, guideline, or policy?
Could the page be speedy deleted, or would it have to go through the normal channels of deletion?
Given the state of the stand-off that currently exists on the issue of banning LLM use on Wikipedia, it is not certain that an MfD on such a page would result in "Delete".
Therefore, it would behoove you to take whatever actions are necessary to prevent that from happening, such as starting an RfC and/or a draft of a guideline or policy, or a discussion to add a passage to an existing guideline or policy.
Otherwise, you may find yourself in a situation where you are pointing to the discussion above to justify preventing someone in the future from creating an LLM help page, and the community may not agree that a precedent was well established.
Just a heads up.
Sincerely, — The Transhumanist 22:58, 25 February 2026 (UTC)
- The issue isn't necessarily LLMs, but the problem that:
- Any AI usage on Wikipedia has the potential to become problematic.
- You're making an article without having consensus in favor of LLMs (meaning people will disagree with your actions).
- There still haven't been enough use cases brought up in which running LLMs on Wikipedia would make a net positive change, or that couldn't be done with human-made scripts/specialized AI models.
- I don't know what you mean by 'anti-LLM mission', but the worry is more generally about new users treating AI as a crutch instead of as a potentially dangerous tool (especially if you let it create page content like some of the suggestions listed).
- Emily * Emi-Is-Annoyed (message me!) 07:19, 26 February 2026 (UTC)
- @Emi-Is-Annoyed"There still hasn't been enough use-cases brought up in which running LLMs on Wikipedia".
- @The Transhumanist "I suggest we brainstorm a bunch of potential uses for AI on Wikipedia, and then choose one to write up a draft help page for, in a sandbox or on a subpage, as a start."
- Brainstorming within an agreed structure is a good way to find the use cases. And the draft help page could be on MediaWiki, so not visible to editors. Wakelamp (talk) d[@-@]b 10:22, 26 February 2026 (UTC)
- This was about them wanting to potentially publish a help page against consensus. That's not the same as drafting/brainstorming. And people have suggested great uses for AI in this thread, and they're just not done with LLMs. Emily * Emi-Is-Annoyed (message me!) 11:36, 26 February 2026 (UTC)
- It was about choosing a topic to write a help page draft on, and not without consensus (that's why the help and AI projects were both invited). I am in support of a sandbox/brainstorming effort, but the opposition is claiming that one needs to establish consensus formally before moving forward on the creation of any draft (and sandboxes are a form of drafting space). Keep in mind that since any draft can potentially be published, to prevent that from happening, strategically, you have to prevent the drafts. And so, what you have left is discussion on which LLM uses fall under AI tools and which do not. What use cases are missing from Table 1 above, and are they AI tool concepts? — The Transhumanist 04:07, 27 February 2026 (UTC)
- No one is claiming that consensus is needed to start drafting anything, but that publishing the draft may go against consensus, and that, in general, being less than receptive to feedback (including negative feedback) makes for a less collaborative drafting process. Saying that
to prevent that from happening, strategically, you have to prevent the drafts
is assuming nefarious motives that go far against WP:AGF. Chaotic Enby (talk · contribs) 12:59, 27 February 2026 (UTC)
- It's not nefarious; preventing something you think is bad is common sense, and nobody is accusing anyone or holding it against anyone; I'm just pointing out the strategic factors here. No criticism intended. Though we do need to clarify what is allowed and what isn't. — The Transhumanist 23:44, 27 February 2026 (UTC)
- Somebody did claim above that consensus is needed before proceeding with writing any such thing:
If you want to write such a help page, given the official-looking nature of such pages, you will have to start an RfC to gain community consensus on what you want to do - not just what you put on the page but whether to do it in the first place.
And another editor seconded it. That makes it pretty clear they don't want it written at all. — The Transhumanist 23:54, 27 February 2026 (UTC)
- Yes, that is for writing the help page itself (as they say
given the official-looking nature of such pages
), presumably not for a draft. Chaotic Enby (talk · contribs) 00:09, 28 February 2026 (UTC)
- @Chaotic Enby: The proposal was to create a draft:
I suggest we brainstorm a bunch of potential uses for AI on Wikipedia, and then choose one to write up a draft help page for, in a sandbox or on a subpage, as a start.
And that proposal was opposed, with phrases such as "oppose any creation" and "oppose this effort". I think they made their opposition very clear. If you still doubt this interpretation, you could ask them to clarify their positions. — The Transhumanist 01:57, 28 February 2026 (UTC)
- It's unlikely folks would get blocked for making such content in good faith. If it's added to an existing page, it might get reverted. If a brand new page is created, it would probably go to WP:MFD. I don't think a "topic ban" is the right term, since a topic ban is usually a sanction applied at a noticeboard to a specific user. If you're reading the above discussion correctly, what we have here is a talk page consensus. Defying a talk page consensus is not good, but would probably not result in a block on the first offense. –Novem Linguae (talk) 23:03, 25 February 2026 (UTC)
- I've pinged @Moxy:, as they are the most prolific help page editor and may help provide perspective on this issue. — The Transhumanist 23:19, 25 February 2026 (UTC)
- I'm really not sure what is being asked here. But we have WP:G15... As for creating pages to help with using LLMs... I don't see how you can stop a community member from making an essay. Moxy🍁 00:43, 26 February 2026 (UTC)
- I don't think this discussion established any new precedent (certainly not a binding community-wide consensus), but, even if it did, it isn't a topic ban any more than, say, the fact that we can't write a help page on making convincing-sounding hoaxes. Such a page would likely go to MfD (this discussion certainly doesn't establish a speedy deletion criterion!), and I can expect this discussion to be brought up in the arguments there. What is more certain is that we can't mark a page as authoritative without explicit consensus: even without considering this discussion, it is the responsibility of whoever writes such a page to get consensus for it, rather than that of the people opposing it to get consensus against it. Regarding whether such an MfD would close as "Delete": maybe, maybe not (it might also close as "Userfy", or be tagged as an essay, or kept), although that is still crystal-balling, and we shouldn't change opinions now based on our guesses of whether a future MfD might line up with it. Chaotic Enby (talk · contribs) 23:19, 25 February 2026 (UTC)
- I was using "topic ban" generically, not in the Wikipedia jargon sense. And generically, it definitely fits. — The Transhumanist 01:58, 1 March 2026 (UTC)
Artificial intelligence resources
Here's a new page; feel free to improve it...
Sincerely, — The Transhumanist 10:11, 24 February 2026 (UTC)
- Thanks a lot for putting that page together! Really helpful! Chaotic Enby (talk · contribs) 12:39, 24 February 2026 (UTC)
- You are welcome. — The Transhumanist 06:45, 25 February 2026 (UTC)
- @Transhumanist: This is a great resource. I tried to start a private conversation on your talk page, and couldn't figure out how to create a new topic. Where did the new topic tab go? In any case, you'll be interested in this. An editor started going through some articles tagged for being overly promotional and adding lots of LLM text. The content was written fairly well, and grammatically correct, but the tone was off. Nonetheless, most non-editor readers wouldn't know it was written by an LLM. The kicker was the poor/non-existent sourcing, which most readers also won't notice, and the speed with which the content was created: Special:Contributions/Neepn3r. The user has been warned and many of the changes were reverted. Next time we may not be so lucky to spot this. I've played around with LLMs to create draft content, and the machines aren't good at discerning which sources are good or not. One example: they pull bad and outdated info that sits in press-release archives for old discontinued products, treating it as current. The tools aren't there yet. But that will surely be fixed before the bots can have a conversation to build consensus and pass the Turing test. Once that happens, look out. STEMinfo (talk) 01:43, 27 February 2026 (UTC)
- @STEMinfo: Thanks for the heads up. (Keep in mind that conversations on user talk pages are not private.) It sounds like the user interface may have been covered up by the skin you have set in preferences, so I've added a big button for creating a new thread, for convenience. You're right that chatbots, especially their free tiers, can't handle some article-writing tasks yet. But there are quite a few tasks that they can handle pretty well. (See the tables above.) Even for those, and for the new ones they will be able to do in the near future, we'll have to be especially careful to watch for biases. — The Transhumanist 01:11, 28 February 2026 (UTC)
Rewrote Wikipedia Maintenance section on AI
See: Wikipedia:Maintenance#Artificial intelligence. — The Transhumanist 10:11, 24 February 2026 (UTC)
- I personally disagree with that rewrite, as it fundamentally changes the meaning of the section to be a lot more promotional than it previously was. For example, "have become useful for writing" becoming "have exploded onto the scene and can write material much faster than humans can", or the "(yet)" added to "they cannot write as well as humans (yet)" (which, while it may be accurate, is still promotional and might be seen as crystal-balling). Chaotic Enby (talk · contribs) 12:29, 24 February 2026 (UTC)
- Promotional: heck yes. For recruiting. Though feel free to revert or revise it in any way you see fit. I'm easy. I was going for characterizing the scope of the problem, to emphasize how badly maintenance is needed there. No worries. — The Transhumanist 15:57, 24 February 2026 (UTC)
- I think the rewrite looks OK. Considering that you are pro-AI, I think you did a good job of keeping the existing anti-AI tone. I think AI did explode onto the scene, so I find that statement accurate. (It spawned a noticeboard, a maintenance tag, an AfC decline reason, and a WikiProject. That's a pretty big impact.) –Novem Linguae (talk) 02:56, 25 February 2026 (UTC)
- I think I'm less pro-AI than I was before. Having an oasis of human-generated, or at least human-curated, knowledge is starting to look appealing. Now I'm curious as to what I'll find if I follow Fermiboson's suggestion of hanging out at AfC for a week. — The Transhumanist 06:33, 25 February 2026 (UTC)
Wikimedia AI strategy
Related to the above discussions. WMF and en WP seem to have different thoughts on AI.
WMF High Level Strategy
The goal of the AI strategy lines up with the Brand Stewardship report (see figure 4) and the product and revenue dance partners document:
- More and younger volunteers (rich experiences such as Roblox)
- Verifiable
- Fund the future of ‘free.’ Product must drive revenue
WMF AI strategy
The WMF AI strategy is outlined in this article, detailed here, and in Y Combinator and Reddit discussions.
Implementation
- Workflow Automation - "Supporting Wikipedia’s moderators and patrollers with AI-assisted workflows that automate tedious tasks in support of knowledge integrity";
- AI help - "Giving Wikipedia’s editors time back by improving the discoverability of information on Wikipedia to leave more time for human deliberation, judgment, and consensus building"; (being discussed currently at Wikipedia talk:Help Project#Planning help for AI workflows above.)
- Automated translation - "Helping editors share local perspectives or context by automating the translation and adaptation of common topics"; (Similar to OKA)
- Guided editing - "Scaling the onboarding of new Wikipedia volunteers with guided mentorship".
- Mobile-first design
General Discussion
- Regardless of AI discussions,
Product must drive revenue
is not the mindset I would be expecting from a non-profit foundation, and I don't want to see this mindset guide decisions here on Wikipedia. Chaotic Enby (talk · contribs) 13:02, 27 February 2026 (UTC)
- @Chaotic Enby: It evolved out of the situation in which Big Tech's crawling of Wikimedia servers was getting expensive for Wikimedia (many third-party apps now search Wikipedia in real time, which generates a great deal of traffic, and serving that traffic costs money). And since AI apps display the data without sending user traffic (potential donors) back to Wikipedia, Wikimedia set up a high-traffic solution just for them in the form of enterprise access, through which the biggies like Microsoft, Google, OpenAI, Anthropic, etc. crawl Wikipedia (and its sister projects) on enterprise servers for a fee, to help them pay their fair share and so that it doesn't slow down the server banks the rest of us are on. This generates millions in revenue per year, but is still a relatively small percentage (13%?) of total Wikimedia fundraising. It is good strategy to look for ways to expand this type of revenue generation. It's strictly Wikimedia-level.
As for the Wikipedia level, I'm surprised the encyclopedia has remained neutral and independent for so long. I expected that corporations would have taken it over as a promotional platform 10 years ago, and am amazed that they have not. The potential problem is that once enough on-the-clock corporate employees become users, consensus would be expected to shift toward them. It hasn't happened yet, so, big sigh of relief. I suppose the issue is being rendered moot by AI apps, and will be completely moot when they've captured the bulk of reference/research/learning-support traffic on the Web, leaving Wikipedia with a trickle. That appears to be the trend and the most likely projection, considering Wikimedia's stance on AI. — The Transhumanist 02:09, 1 March 2026 (UTC)
@Wakelamp: It is not clear whether you have interpreted the Wikimedia plans in context...
Our prioritized strategy is to invest in AI to support editors in areas where AI can have a unique advantage over other technologies to solve problems of impact and to prioritize editors’ agency in interacting with AI. More specifically, we recommend investing in AI to support the editors as follows: Create more time for editing, human judgment, discussion, and consensus building. Editors spend a significant amount of time before they can edit Wikipedia. Part of this time is invested in finding the information they need for their editing, discussion, or decision making. AI excels at handling tasks such as information retrieval, translation, and pattern detection. By automating these repetitive tasks, AI frees up editors’ time to focus on areas of encyclopedic work that require human expertise: editing, discussions, consensus building, and making judgment calls in complex situations where the stakes are high and the impact is significant.
That sounds like it covers tools that they are going to provide, not help pages written by this department on chatbots in general. Once Wikimedia has provided the MediaWiki software with such features, or makes further AI tools available as separate apps, the corresponding pages on Wikipedia, such as Help:Searching, will get updated, or new pages will be created to explain the specific tools (wherever they happen to reside). Though the tools they alluded to in that paragraph seem pretty chatbotty; it would be awkward if they provided tools that the Wikipedia community rejected. By the way, what AI information retrieval tools do they have under development for us? — The Transhumanist 02:41, 1 March 2026 (UTC)
Workflow Automation Discussion
AI Help
Automated translation
Guided editing
Mobile-first design
Wakelamp (talk) d[@-@]b 14:59, 26 February 2026 (UTC)
- @Wakelamp: What percentage of Wikipedia traffic is from mobile devices? — The Transhumanist 01:53, 1 March 2026 (UTC)
