Wikipedia:Miscellany for deletion/User:BrandonXLF/ReferenceExpander

The following discussion is an archived debate of the proposed deletion of the miscellaneous page below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the page's talk page or in a deletion review). No further edits should be made to this page.

The result of the discussion was: keep. However, there is also consensus to disable the script until such time as the major bugs have been fixed and relative stability has been achieved. If this script is being widely used, and that use is causing significant damage to the project, then both the editors using the tool and the creator of the tool are to blame. Therefore, @BrandonXLF: please disable this script temporarily to work out the bugs. Then, consider opening up access to the script to a small number of users for additional testing before opening it up more widely. —⁠ScottyWong⁠— 06:04, 20 June 2023 (UTC)[reply]

User:BrandonXLF/ReferenceExpander (edit | talk | history | links | watch | logs)
User:BrandonXLF/ReferenceExpander.js (edit | talk | history | links | watch | logs)[a]

Careless use of this script has spawned multiple AN/ANI threads (1, 2, 3) and created a huge mess that has resulted in thousands of damaged citations that will likely take several months and hundreds of collective hours of work to clean up. I don't have any technical expertise so I'm not qualified to evaluate the code, but a cursory examination reveals that a very large portion (perhaps a majority) of edits made using this script have removed useful information from references or introduced errors. The author of the script, BrandonXLF, has added a disclaimer to the UI, but hasn't made any other changes to the script since he was made aware of the issues. The disclaimer hasn't been entirely effective at preventing misuse, as seen here. As a preventative measure, this script should be deleted or at least disabled until its functionality is improved. — SamX [talk · contribs] 20:03, 27 May 2023 (UTC) edited 03:56, 30 May 2023 (UTC)[reply]

  1. ^ A technical decision was made to nominate the documentation page. This discussion is intended to pertain to the JavaScript code. Folly Mox (talk) 13:51, 3 June 2023 (UTC)
  • @SamX Instead of outright disabling it, could we restrict it to more experienced editors? Maybe extended confirmed? BrandonXLF (talk) 20:30, 27 May 2023 (UTC)[reply]
    @BrandonXLF: Philoserf (talk · contribs) had made nearly 80,000 edits by the time he was blocked. Leomk0403 (talk · contribs) has made over 6,000 edits, and has introduced errors such as this one even after you added the disclaimer. I don't think such a solution would be effective. — SamX [talk · contribs] 20:45, 27 May 2023 (UTC)[reply]
    User:BrandonXLF how about instead of restricting the users, you restrict the input? Your script never removes information from a reference that is just a bare URL, so if it checked for that and skipped the reference entirely whenever there's anything else between the ref tags, I feel it would be safe for any editor to continue to run (a rough sketch of such a gate follows below this comment).
    I had planned on nominating it for deletion myself but didn't put together the nom first. Your script works well for a limited set of sources, adequately for a limited set of use cases, and damagingly for the remainder. I'm not going to enumerate the errors and suboptimal behaviours in this edit, but since we're here I'll prepare a categorized summary to be posted later.
    For now, at root, there are three fundamental design flaws:
    1. it doesn't check its input, causing information loss
    2. it doesn't check its output, leading to linking to 404 errors and usurped domains, as well as ridiculous suggestions like at Bryce Canyon National Park where it filled author fields with addresses and phone numbers
    3. and it doesn't process the pages it's served adequately, which means that even the references it doesn't damage are usually still incomplete but now in a cite web template.
    It seems to trust completely that all the information needed for the suggested reference can be obtained by naively processing metadata from a URL. Careful use leans its processing towards being a net positive, but I'd rather it did basic error checking than rely on every editor checking the output against the details of the source they're attempting to cite; that reliance risks another Philoserf and creates so much work for the editors using your script that they might as well fill in the templates manually.
    I am experiencing a lot of discomfort being so unkind to a stranger on the internet, and I've written my own share of bad code. Folly Mox (talk) 23:54, 27 May 2023 (UTC)[reply]
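    For illustration, the "bare URL only" gate suggested above might look something like the following. This is a minimal sketch in generic JavaScript; the function name and regex are illustrative assumptions, not code taken from ReferenceExpander.js.

```javascript
// Matches ref contents that are nothing but a bare URL, optionally wrapped in
// single square brackets ("[https://example.com]"), with surrounding whitespace.
const BARE_URL_ONLY = /^\s*\[?\s*(https?:\/\/[^\s\]]+)\s*\]?\s*$/i;

/**
 * Return the URL if the reference body is only a bare URL, otherwise null.
 * Anything else between the ref tags (templates, titles, archives, quotes)
 * would cause the script to skip the reference entirely.
 */
function getBareUrlOrSkip(refBody) {
    const match = BARE_URL_ONLY.exec(refBody);
    return match ? match[1] : null;
}

// Example: only the first of these would be expanded.
getBareUrlOrSkip('https://example.com/article');            // "https://example.com/article"
getBareUrlOrSkip('{{cite web |url=https://example.com}}');  // null – leave untouched
getBareUrlOrSkip('Smith 2001, p. 5. https://example.com');  // null – leave untouched
```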
    FWIW, "Your script never removes information from a reference that is just a bare URL" may not be true. I think if it arrives at the link and gets a 404 page or redirect it may sometimes replace the URL with nonsense. –jacobolus (t) 20:29, 1 June 2023 (UTC)[reply]
    I recently again found a conversion by the script of an https url to http while reviewing and repairing Smash Mouth (https://www.realrocknews.com became http://www.realrocknews.com). Beccaynr (talk) 15:13, 4 June 2023 (UTC)[reply]
    My browser performs the same redirect to an unencrypted connection, but point taken that if the fetched URL is not the same as the input URL, it's probably time to have a human QA it. Folly Mox (talk) 15:24, 4 June 2023 (UTC)[reply]
  • WP:IAR and delete/disable this ASAP. No offence to BrandonXLF, but there are a lot of editors (and people) who will not use tools carefully. Most semi-automated tools are either very difficult to screw up with in a way that causes a lot of damage without people noticing (WP:TWINKLE) or are gated behind human approval (WP:AWB or WP:HUGGLE). This tool, on the other hand, is gated behind nothing and makes it easy for people who are not careful to mess up a lot of sources without anyone noticing.
    At the very least, as an interim measure, I would like to see access limited by manual approval to a small group of editors while bugs in the tool are ironed out. Chess (talk) (please Reply to icon mention me on reply) 06:45, 28 May 2023 (UTC)[reply]
  • Change so that it does nothing if there's anything other than a bare url; failing that, delete. Releasing this out into the wild was irresponsible. —Alalch E. 18:21, 28 May 2023 (UTC)[reply]
  • Keep: Discuss modifications if need be. One can do far more damage with Twinkle et al. if they want to, but that won't be a reason to delete Twinkle for every user. CX Zoom[he/him] (let's talk • {CX}) 20:13, 28 May 2023 (UTC)[reply]
    • What systematic damage to the encyclopedia was ever done with Twinkle?—Alalch E. 20:31, 28 May 2023 (UTC)[reply]
    • That just sounds like What about X? for bots.... XOR'easter (talk) 14:16, 31 May 2023 (UTC)[reply]
      Firstly, it isn't a bot, it's a user script, and every script has potential for abuse. We shouldn't delete them because person X went rogue and abused the wonderful resources available to them. Instead of deleting these, one should explore ways to ensure that incompetent users aren't able to abuse it. There are several ways to do it: by creating whitelists of approved users (see WP:ANVDL; WP:AWB, the latter's whitelist also doubles for WP:JWB) or by creating blacklists barring those who are known to abuse it while allowing everyone else to use it. There might simply be other ways that just enhance the script so that, even in the absence of human review (which is expected, but oft ignored), the script itself recognises the issues and avoids them. CX Zoom[he/him] (let's talk • {CX}) 08:16, 16 June 2023 (UTC)[reply]
  • Comment - this looks like a potentially useful tool that is not ready for prime time. Deletion seems to be a rather drastic approach. Can this tool be improved so that what it produces can be relied upon? -- Whpq (talk) 17:32, 29 May 2023 (UTC)[reply]
    @Whpq: I'd support the script being kept if improvements are made that address the problems it's been causing, but its continued existence in its current state is a liability, and perhaps actively harmful to the project. There's definitely some urgency here given the potential for further misuse. I don't think deletion would be an ideal outcome, but it'd be a heck of a lot better than nothing. — SamX [talk · contribs] 18:02, 29 May 2023 (UTC)[reply]
    Can this not be removed from wherever it is being advertised, like Wikipedia:User scripts/List? And perhaps notify those using it to stop using it until the script is fully baked? -- Whpq (talk) 18:22, 29 May 2023 (UTC)[reply]
    That sounds like a good interim solution. — SamX [talk · contribs] 18:40, 29 May 2023 (UTC)[reply]
  • Weak Keep as a tool that needs fixing, as discussed here. Deleting is too drastic a measure when major repairs are instead needed. Robert McClenon (talk) 22:49, 29 May 2023 (UTC)[reply]
  • I've been struggling with this. I mentioned above how uncomfortable I am tearing apart someone's work, and I'm not a thorough thinker or categorizer. This script can be used for good, but it requires better safeguards internal to the code, not just trust that users will double-check its work rather than blindly accepting its suggestions.
    The problems all really come down to inadequate analysis. I have some suggestions roughly in order of importance:
  1. Trust the input over the served page. Run the "populate citation template" parsing function on the input data, and run it separately on the URL.
    1. Any newly populated fields obtained from the URL, add to the template created from the existing data.
    2. Any fields that are populated by both passes but differ between the two results, leave as they are in the original, but highlight the results from the URL for users to compare before they decide whether or not to commit the suggestion.
    3. If the count of alphanumeric characters in the populated fields of the template created from the input data is lower than the count of alphanumeric characters in the input data itself (excluding the URL), the parser has missed something and the suggestion should be discarded (see the sketch after this list). This check should handle things like bundled citations, quotes, and other notes that I'm not smart enough to have suggestions about.
      Trusting the manually entered information over information parsed from the URL with this or a similar process should by itself solve a large number of issues. Crucially, it would prevent the citation-wrecking error of processing a link to the root of a usurped domain, which destroys all the information in the reference, as well as other information-loss scenarios: the script failing to parse out an author, publication date, page number, etc. from the served page even though they were present in the original reference; changing references entirely because someone got an ISBN off by one; the script mistaking an author for a title, or a website for a title; and many other misparses.
  2. Stop removing archives. When a pair of ref tags contains a call to webarchive, incorporate that data into the new citation rather than discarding it. This is one of the more damaging errors, since it can make verification impossible without checking the article history if the link is dead, and takes a long time to repair manually since we have to go dig up an archive.
  3. If the input contains a citation template and any other information, pass along unchanged to the output any information outside the citation template, in the same location.
  4. Do basic error checking on the results. If the page title contains "404", "Page not found", "Request rejected", "Not anonymous", etc., skip the reference. If the author fields contain numeric strings, "Contact Us", "Uploading...", or (for example) more than five non-consecutive whitespace characters, skip the reference. String parsing for arbitrary information over the set of all webpages is not an easy task, and it's better to know your limits than assume that your parser is going to get it right every time.
  5. Discriminate between editors and authors in book results. Search for "ed." or "eds." or "edited by" in the served webpage.
  6. Incorporate functionality to handle chapter contributions by authors who are not listed amongst the main editors of books. I know page numbers for chapters will probably not be possible, since this is almost never included in the page results, but comparing the input to the suggested output should be a viable route to this functionality.
  7. Stop escaping special characters (particularly =, &, and ?) in URLs. I see your homebrew Citoid.js is responsible for this, and a bot (don't remember which) has been following behind edits made by your script and correcting this unwanted behaviour. User:Citation bot will correct this when run on pages following edits made using your script.
    11:31, 4 June 2023 (UTC): Updating this bit again to add that escaping % can easily break links containing non-ASCII glyphs.
  8. Look for special separator characters like dashes and pipes in the HTML title metadata, since they usually indicate a break between the actual title and the name of the website, series, or author, and any information after the separator almost certainly doesn't belong in the title= parameter.
  9. Any template calls in the input data that are not a citation template or webarchive template, or anything your script will be resolving, like Template:bare URL inline, pass along unchanged. This should stop the script from removing things like Template:pd-notice, which breaks attribution.
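For illustration, suggestions 1.3 and 4 might look roughly like the following in code. This is a rough sketch in generic JavaScript; all of the names are illustrative assumptions and none of this is taken from ReferenceExpander.js itself.

```javascript
// Suggestion 1.3: if the generated template carries fewer alphanumeric characters
// than the original reference text (URL excluded), the parser has dropped
// something and the suggestion should not be offered.
function countAlphanumeric(text) {
    return (text.match(/[a-z0-9]/gi) || []).length;
}

function suggestionLosesInformation(originalRefText, originalUrl, generatedTemplateText) {
    const originalWithoutUrl = originalRefText.replace(originalUrl, '');
    return countAlphanumeric(generatedTemplateText) < countAlphanumeric(originalWithoutUrl);
}

// Suggestion 4: basic sanity checks on what came back from the served page.
const BAD_TITLE_PATTERNS = [/\b404\b/i, /page not found/i, /request rejected/i, /not anonymous/i];
const BAD_AUTHOR_PATTERNS = [/^\d+$/, /contact us/i, /uploading/i];

function looksLikeErrorPage(fetchedTitle) {
    return BAD_TITLE_PATTERNS.some((re) => re.test(fetchedTitle));
}

function looksLikeJunkAuthor(authorValue) {
    const tooManyWhitespaceRuns = (authorValue.match(/\s+/g) || []).length > 5;
    return tooManyWhitespaceRuns || BAD_AUTHOR_PATTERNS.some((re) => re.test(authorValue));
}

// A suggestion would only be shown if it passes all of these, e.g.:
// if (suggestionLosesInformation(refText, url, newTemplate) ||
//     looksLikeErrorPage(fetchedTitle) ||
//     (fetchedAuthor && looksLikeJunkAuthor(fetchedAuthor))) { /* skip this reference */ }
```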
These changes should improve the script to the point where it no longer damages references. To improve it further, to the level where it genuinely can expand references such that a manual double check is probably not needed, I suggest the following changes:
  • In served webpages, look for boxes like "how to cite this page" or "download citation" (or "entry information", "datasheet citation", etc.) to make things easier.
  • Look harder for things like authors and publication dates, which are often at the top or bottom of the body text.
  • Check better to see if the citation is to a book or news article. I've only seen ReferenceExpander create Template:cite book or Template:cite journal when it's fed an ISBN or DOI or already properly formatted citation. In all other cases it generates Template:cite web, irrespective of the type of source, just because it happens to have a URL. (Edited to add that I have now seen the script produce Template:cite book on its own, although in two cases it should have been Template:cite magazine.)
  • Be more discriminating about populating the website= parameter. About nineteen times out of twenty, I see the script fill this in with whatever is between the https:// and the next forward slash character, which is trivially obvious from inspecting the URL and adds no value (see the sketch after this list).
  • This is a genuine nitpick, but it's been frustrating for me personally to go through all these diffs and see how many times the script has added the useless (on en-wiki) language=en field, while failing to add a language= parameter for foreign languages, like Malay, Russian, Icelandic, or Latin, all of which I've encountered during my repairs. language=en doesn't alter the appearance of the page on en-wiki, and so adds no benefit to the reader. (The script does sometimes add parameters identifying foreign language sources; its lapses in that regard are only frustrating in the context of how frequently it needlessly identifies sources as English.)
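A minimal sketch of the last two parameter heuristics above, again in generic JavaScript with hypothetical helper names rather than code from the script:

```javascript
/**
 * Drop a website= value that is just the hostname of the cited URL,
 * since it repeats information already visible in the link.
 */
function isRedundantWebsiteParam(websiteValue, citedUrl) {
    try {
        const host = new URL(citedUrl).hostname.replace(/^www\./, '').toLowerCase();
        return websiteValue.trim().toLowerCase().replace(/^www\./, '') === host;
    } catch (e) {
        return false; // unparsable URL – leave the parameter alone
    }
}

/**
 * Only emit language= when the detected language is not English,
 * since |language=en is a no-op on the English Wikipedia.
 */
function languageParamOrNothing(detectedLanguageCode) {
    return detectedLanguageCode && detectedLanguageCode.toLowerCase() !== 'en'
        ? `|language=${detectedLanguageCode}`
        : '';
}
```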
I've repaired some citations during the course of this cleanup that seemed impossible for even the most thoughtfully crafted algorithm, like one where the publication information on Google Books matched a different version than the preview pages, and having done some string parsing lo these many decades ago I understand it's a difficult task and I'm not expecting perfection. I'd rather see the script improved than thrown out, and it's true that we have User:Philoserf to blame for the vast, vast majority of bad edits facilitated by this script. But I think I said in an edit summary somewhere that ReferenceExpander gave Philoserf a lot of dumb suggestions.
We trusted Philoserf to make constructive edits, and Philoserf trusted ReferenceExpander to make good suggestions. The cleanup has already taken tens, maybe hundreds of volunteer hours, and it's unclear how far we've gotten, because the failure states are so varied that every repair is its own journey. Each reference usually takes me about five to ten minutes (I'm likely slower than most editors due to editing on mobile and also maybe my standards are too high?), and sometimes there are dozens of references in a diff. Right now I think the safest thing is to disable the script while BrandonXLF works on improvements, then maybe we can do a trial run like BRFAs until we're satisfied it can be used safely. Unfortunately if we can't get the script disabled by consensus or voluntarily by the maintainer, my second choice is delete. I'm pretty sure there are other scripts that have similar functionality (ReFill maybe?) without the dangers.
I intend to come back and add diffs to this.  Done Folly Mox (talk) 06:19, 30 May 2023 (UTC) Edited 09:54, 30 May 2023 (UTC). Diffed 31 May 2023[reply]
Peeking at the code again, it looks like the entire reference suggestion hinges on getCitoidRef, which relies on mediawiki's own Citoid.js, so I'm wondering if the bulleted suggestions above might be asking BrandonXLF to do the impossible, and maybe other citation creating scripts using Citoid will have the same error rate given an input URL. I'm also not sure if there's a way to call a Citoid function to create its JSON object or whatever it does based on the input text of an existing reference rather than a URL, but that's really what needs to happen to make sure this script doesn't lose information when altering references that include more than a bare URL. It looks like the fix might not be as simple as I conceived, as per usual in programming. Still planning on adding diffs. I have other things going on unfortunately. Folly Mox (talk) 18:08, 30 May 2023 (UTC)[reply]
I just repaired an edit in which the script ignored {{cbignore}} and removed it from the reference. I'm not sure if user scripts are supposed to follow {{cbignore}}, but I figured I'd bring it up here since it probably isn't desired behavior. — SamX [talk · contribs] 16:28, 31 May 2023 (UTC)[reply]
As far as I've been able to determine from my halfass code review, the entire core functionality goes like: extract first URL from input data → generate citation template based off of processing the URL and nothing else → suggest change. It doesn't look at anything. Folly Mox (talk) 16:44, 31 May 2023 (UTC)[reply]
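For illustration, the flow described above might look roughly like the following, assuming the standard Citoid REST endpoint exposed by the wiki. The comparison step noted in the comments is hypothetical; the names here are illustrative, not taken from ReferenceExpander.js.

```javascript
async function fetchCitoidCitation(url) {
    // Citoid REST endpoint; returns an array of citation objects for the URL.
    const endpoint = '/api/rest_v1/data/citation/mediawiki/' + encodeURIComponent(url);
    const response = await fetch(endpoint, { headers: { accept: 'application/json' } });
    if (!response.ok) {
        throw new Error('Citoid lookup failed: ' + response.status);
    }
    return (await response.json())[0];
}

async function suggestReplacement(refWikitext) {
    const urlMatch = refWikitext.match(/https?:\/\/[^\s|\]}<]+/);
    if (!urlMatch) {
        return null; // nothing to expand
    }
    const citation = await fetchCitoidCitation(urlMatch[0]);

    // The missing step: compare what Citoid returned against the text already in
    // the reference, and bail out (or ask the user) when information would be lost.
    // As described above, the script effectively goes straight from `citation` to a
    // suggested {{cite web}} without looking back at refWikitext.
    return citation;
}
```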
  • Keep. Deleting the documentation page will not prevent the script from being (ab)used. Instead you should create an interface-protected edit request and ask intadmins to blank the .js page. Outright deletion of either page would be completely counterproductive because that would only make examining what went wrong or informing users of the disabling more difficult. Nardog (talk) 13:30, 30 May 2023 (UTC)[reply]
  • Keep the documentation page; no comment on the script itself. This documentation page is, at the very least, historically relevant. I don't know enough about the script's problems to determine whether it needs disabling or not. Skarmory (talk • contribs) 00:51, 31 May 2023 (UTC)[reply]
    Why are people commenting "keep the documentation page"? The nomination here is very obviously referring to the user script, not the documentation. The deletion template has been put on the documentation page because you cannot use wikitext templates in JavaScript pages. 163.1.15.238 (talk) 14:49, 31 May 2023 (UTC)[reply]
    Because the name and heading of this page also lack ".js". Besides, MfD is the wrong venue either way. Nardog (talk) 02:14, 1 June 2023 (UTC)[reply]
    I'm curious, what would be the proper venue? I chose MFD because it gets quite a bit of traffic, and I figure it's a good idea to get consensus before asking an intadmin to disable the script. — SamX [talk · contribs] 04:49, 1 June 2023 (UTC)[reply]
    I'd say WP:VPPR or WP:VPT. Nardog (talk) 04:55, 1 June 2023 (UTC)[reply]
    Thanks, I've posted a link to this discussion at VPT. — SamX [talk · contribs] 05:51, 1 June 2023 (UTC)[reply]
    "Because the name and heading of this page also lack '.js'." And? The name of the deletion discussion doesn't have to exactly match the name of the page being nominated for deletion. Wikipedia:Miscellany for deletion/Second batch of mass-created portals based on a single navbox wasn't a discussion about a page called "Second batch of mass-created portals based on a single navbox".
    "Besides, MfD is the wrong venue either way." How is this the wrong venue? Miscellany for deletion covers "Pages not covered by other XFD venues, including pages in these namespaces: Draft:, Help:, Portal:, MediaWiki:, Wikipedia: (including WikiProjects), User:, TimedText:, Gadget:, Gadget definition:, and the various Talk: namespaces" and "Any other page, that is not in article space, where there is dispute as to the correct XfD venue." This isn't a template, module, redirect, category, file or article so this would be the correct place to hold a deletion discussion. 163.1.15.238 (talk) 14:24, 1 June 2023 (UTC)[reply]
    That MfD lists the exact pages to be deleted. Only the doc page is listed at the top of this MfD with {{pagelinks}}, and the opening statement does not specify it is seeking to delete just the .js page. So the default assumption everyone has upon arriving here is that it is seeking to delete the doc page.
    Because deleting the .js page will only make it more difficult to see what happened with the script and is worse than blanking. Proposals to blank pages are not hosted by MfD AFAIK. Nardog (talk) 15:56, 1 June 2023 (UTC)[reply]
  • Disable until fixed – several problematic test cases should be gathered together and the script should be proven to treat them correctly (not remove any information, not create nonsense output, etc.) before being allowed to run as normal. Relying on users to manually check is not sufficient for this kind of semi-automated tool. If the tool cannot be fixed to consensus satisfaction then it should be deleted. –jacobolus (t) 05:41, 31 May 2023 (UTC)[reply]
  • Disable the bot — how is there even any debate about that at this point? All it takes is one editor who wants to rack up their numbers, and the bot will damage thousands of pages for them. XOR'easter (talk) 14:16, 31 May 2023 (UTC)[reply]
    @XOR'easter: it isn't a bot, it is a user script and cannot be used to edit thousands of pages unless the editor was to interactively edit every one of them, click 'expand references' to run the script and accept the changes without properly reviewing them. Curb Safe Charmer (talk) 13:39, 4 June 2023 (UTC)[reply]
    Since that is literally what happened and why days of volunteer time have gone into fixing hundreds of damaged pages with hundreds more still to go, what's your point? The damn thing shouldn't be usable. XOR'easter (talk) 14:57, 4 June 2023 (UTC)[reply]
    @XOR'easter, there are plenty of semi-automated scripts that have potential for abuse. I am sure this script could be improved. — Qwerfjkltalk 15:09, 4 June 2023 (UTC)[reply]
    Judging from the technical points raised in this discussion so far, the possibility of meaningfully improving it sounds actually rather dubious, as it might be built on flawed infrastructure. Even if it can be fixed, it shouldn't be open for general use until it is demonstrated to be safe. We have wasted too much time already cleaning up after its misfires. Nor can we trust that the errors it introduces will be caught by the community using regular methods. It's damaged FAs with thousands of watchers who all apparently saw the innocuous edit summaries and didn't stop to think that a "ReferenceExpander" edit that contracted the page might be a bad thing. We're not talking about a script with some hypothetical "potential for abuse". We're talking about a script that already has been a tool for abuse. XOR'easter (talk) 15:18, 4 June 2023 (UTC)[reply]
    @XOR'easter, any script only has potential for abuse until it's used for abuse (I can actually think of another script that could easily be abused, and has led to a thread or two at ANI). I am certain the script can be improved, and I could probably do so myself. That said, I agree there are problems with the script currently, and it may have to be disabled. — Qwerfjkltalk 15:28, 4 June 2023 (UTC)[reply]
  • Disable the script - it does not seem possible in a volunteer community to require fixes to be made, and the need for major fixes seems to support disabling this script until such fixes, assuming they are possible, are made. Based on my experience with cleanup of a few articles on the list of thousands of damaged citations, it appears this tool is not working well and is both capable of causing and has caused extensive disruption. Examples include Red herring (e.g. removal of the archived definition in the lead, removal of a quote in a cite, removal of what should have been a note); Henry David Thoreau (e.g. removal of an archive link, leaving what had become a blog post titled "How I Stopped Paying Taxes and Started Living My Values" instead of "Resistance to Civil Government" by Thoreau, and removal of what should have been a note; after making some repairs, I noticed the RefExpander changes had been reverted [1] with the edit summary "revert to version from 8 March; some of the references were mangled afterward, and the edits since seem mostly like partial cleanups. feel free to convert plain-wikimarkup references to citation templates, but be careful not to lose info in the process"); Greg Lake, where authors, quotes, the wikilink for a publication, and a book cite were among the losses; and Generation X, where cites stacked in single references were lost, along with some author information. Beccaynr (talk) 15:34, 31 May 2023 (UTC)[reply]
  • Disable the script. I have used it a few times, but I've been careful to check the results before saving. I hope that BrandonXLF will make the necessary repairs, since the idea of ReferenceExpander is attractive. Eastmain (talkcontribs) 06:03, 1 June 2023 (UTC)[reply]
  • Keep, but disable until fixed. This is a crude but very powerful script which needs to be used with care. Instead of being deleted, it needs some tweaks.
    I have been aware of its problems for about 2 years -- see User:BrownHairedGirl/No-reflinks_websites#Reference-filling_tools -- but I still use it in some circumstances, i.e. as a first pass on articles where all the refs are bare URLs.
    Its two issues are that a) some fills are crude; b) it overwrites existing fills. The first problem is no biggie: nearly all ref-filling tools require some work after use. The problem here is that this tool gives no warning about the need for manual review. The second issue is an inexcusable bug: the tool needs to preserve existing metadata, either by entirely skipping any ref which already has a template, or at least by only adding data rather than replacing it.
    So, what's needed here is:
    1/ a warning that all its changes need to be manually reviewed, which could be done by either or both of i) a preview-before-save and/or ii) an intrusive warning box before save, stressing that this is a first pass which probably has glitches and needs manual review;
    2/ the no-overwrite fix (a rough sketch of both follows below). --BrownHairedGirl (talk) • (contribs) 07:02, 1 June 2023 (UTC)[reply]
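    For illustration, both fixes might look roughly like the following. This is a minimal sketch assuming the oojs-ui-windows module for the confirmation dialog; the merge helper and its names are hypothetical, not code from the script.

```javascript
// Intrusive warning before save (fix 1/): a blocking confirmation dialog.
async function confirmBeforeSave() {
    await mw.loader.using('oojs-ui-windows');
    return OO.ui.confirm(
        'ReferenceExpander output is a first pass and often has glitches. ' +
        'Have you manually reviewed every changed reference against its source?'
    );
}

/**
 * No-overwrite merge (fix 2/): keep every parameter already present in the
 * existing citation template and only add parameters it does not have yet.
 */
function mergeWithoutOverwriting(existingParams, citoidParams) {
    const merged = Object.assign({}, existingParams);
    for (const [name, value] of Object.entries(citoidParams)) {
        if (!(name in merged) || merged[name].trim() === '') {
            merged[name] = value;
        }
    }
    return merged;
}
```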
  • Keep the page. Deleting the page wouldn't achieve much, and would remove documentation. I don't think this is the right place to discuss the performance of the script (not sure why people are calling it a bot). A better place would be Wikipedia:Village pump (technical). Users are responsible for what they do with a script, and now that Philoserf has been blocked, the main issue has been dealt with. BrandonXLF is a valued editor and technically competent so hopefully they will continue to enhance the script in some of the ways described above, though BrownHairedGirl is right that this is just one of a number of scripts that rely on Citoid, and Citoid is only as good as the underlying Zotero translators. There's a wider issue over Citoid ownership and lots on the backlog, including some of the issues described above. Curb Safe Charmer (talk) 12:02, 1 June 2023 (UTC)[reply]
    User:Curb Safe Charmer I think I was the one who initially mentioned MfD, at the AN thread about the cleanup effort. I wasn't aware of any established process for gaining consensus to disable a user script. I just spent about an hour cleaning up ReferenceExpander errors introduced by two users other than Philoserf, and would characterise the main issue as the script overwriting existing references. I also value BrandonXLF as a technically competent editor; this particular piece of code has some design flaws. Folly Mox (talk) 13:20, 1 June 2023 (UTC)[reply]
    It was fine to start this MfD, and blanking can also be a valid outcome of this discussion.—Alalch E. 23:56, 1 June 2023 (UTC)[reply]
  • There is no reason to delete, simply blanking will do. That means once a forked version of the script is repaired, it can be copied into the page and the users of the script would be affected the least. —TheDJ (talkcontribs) 13:21, 1 June 2023 (UTC)[reply]
  • Keep the page, but I'd recommend disabling the script until the issues are addressed. The problem is not the mere existence of the script itself - it's not against policy per se to expand references, and the script has been useful to me - but it's only a net positive to the encyclopedia if used correctly. Folly Mox makes some good points here, but deleting either the script or the documentation page is not going to solve the underlying issues, which are with how people use the script, not whether the tool is actively harming the encyclopedia. WP:IAR deletion, as has been suggested above, would be harmful to the encyclopedia in my opinion, since people would not be able to fork the script in the future, and since this script by itself is again not doing something harmful - it's just relying on Citoid to fill out references. – Epicgenius (talk) 13:59, 1 June 2023 (UTC)[reply]
    • The script is absolutely doing something harmful. Historical usage has shown that in average circumstances, i.e. in an average use case, the script has a high likelihood of damaging citations. This is due to its screamingly flawed design. The page that is nominated for deletion contains provenly dangerous code. I don't think that we should want people simply forking the script to make forks with various hypothetical improvements but with no promise of removing the underlying key problem with which the bulk of the risk is associated; in other words, if we want the code to remain accessible, an approximate solution should at least be identified first. For me, the solution is to limit the script to only work on references containing nothing but URLs.—Alalch E. 23:56, 1 June 2023 (UTC)[reply]
      "The script is absolutely doing something harmful"
      I strongly disagree on that point. When used correctly, it does help fill in bare-URL citations, which are formatted similarly to those automatically generated through VisualEditor. That in itself is not harmful. The problem with the script is that it also fills non-bare URLs (in fact, basically anything that has a URL outside of a reference template). This is exacerbated by the fact many users don't use it correctly, because they just use the script's default outputs rather than repairing metadata that they may have overwritten. The script actually used to be more flawed, in that it would automatically make changes without allowing users to modify the default outputs, but the current interface does allow users to override whatever the script generates. My ideal solution is actually the same as yours, which would be to only allow the script to format bare URLs, similar to what Wikipedia:ReFill does (although ReFill is much worse than ReferenceExpander for this purpose).
      "I don't think that we should want people simply forking the script to make forks with various hypothetical improvements but with no promise of removing the underlying key problem" - I don't see evidence that the improvements in question would be merely hypothetical. Deleting the script would not give anyone the chance to work on it at all, regardless of whether they actually solve the issues with the script. By contrast, if the script were merely disabled, it could be improved, even if only some of the issues with it are solved. – Epicgenius (talk) 14:17, 2 June 2023 (UTC)[reply]
  • Keep - I am unconvinced that deletion must be the solution. The script can be disabled and made unavailable while it is improved. -- Whpq (talk) 17:35, 1 June 2023 (UTC)[reply]
  • Comment – It might also be good to get a group of supporters who are willing to take responsibility for personally fixing any issues this script causes. I notice that the folks urging that it be deleted/disabled are the ones currently trying to clean up its mess, while the folks arguing it should be kept don't really have any skin in the game. –jacobolus (t) 20:37, 1 June 2023 (UTC)[reply]
  • Disable: blank without deleting the JS page until improvements are made and mass message current users of the script about the disabling until main issues are worked out and tested. For testing, something like ReferenceExpander-alpha.js could be used. Of the few I've helped clean up, the errors are difficult to fix and required a couple of hours' worth of searching digital archives. Sennecaster (Chat) 02:13, 2 June 2023 (UTC)[reply]
  • Comment: There seems to be some confusion here. The current page nominated for deletion is the documentation page. The script itself is at User:BrandonXLF/ReferenceExpander.js. Deleting the documentation will not affect the script at all. It will continue to function as before. Some comments above suggest that deleting the page nominated will disable the script, such as "The page that is nominated for deletion contains provenly dangerous code." On the contrary, the page nominated for deletion contains absolutely no code. It must be made clear what page is actually being discussed for deletion here. Is it the documentation page, or the actual .js page? No comment regarding merits of deletion/keeping. Mako001 (C)  (T)  🇺🇦 12:50, 3 June 2023 (UTC)[reply]
    @SamX@Folly Mox: I'm not too familiar with this stuff, but it might be better to close this as "Procedural keep" due to the fact that editors above aren't all discussing the same page, and many "Delete"-type votes are actually calling for the script to be disabled (but not deleted) and then improved, which isn't in the scope of MfD (which is for deletions, not disabling).
    At that point, there are two options:
    1. Get consensus at VPT to require the script to be modified to prevent anyone except BrandonXLF (or perhaps anyone listed on a fully-protected checkpage, which would be limited to users helping to fix the script) from using it. Blanking the .js page may not allow for the script to be fixed without forking it, which would not be ideal. Once the script is fixed, it would go back to VPT to get consensus for re-enabling.
    2. Open a new MfD for deleting (explicitly ruling out just "disabling") the .js page. (Presumably at Wikipedia:Miscellany for deletion/User:BrandonXLF/ReferenceExpander.js) [1] That said, an explicit "This is for deletion, not merely disabling" is going to make the consensus for deletion much clearer, and I think it may actually be against outright deletion.
    Consensus seems to exist in the discussion so far to disable the script in some way, so I guess that option 1 is probably worth trying. Mako001 (C)  (T)  🇺🇦 12:36, 4 June 2023 (UTC)[reply]
    I'm not very familiar with the deletion processes either, but I think any closer should be able to read the consensus here, which is actually rather clear if procedurally non-standard. This reminds me of the recent RfC on planet symbols in infoboxes at VPI, where everyone was arguing for the same thing, if you just looked past the first bolded word of their comment.
    Discussions don't always have to yield a consensus that's one of a constrained set of common options. I don't know that we need to have the discussion another time, especially as there's not really an established venue for this extremely uncommon process, and maybe one or two people here – myself not amongst them – are actually in favour of deleting over disabling.
    We don't even need to blank the .js page if this closes as "keep and disable": the code could be commented out with the addition of a few characters, such that it would be inoperable. I think my very first comment here was not even to disable it, but restrict its main loop to cases where the input was a bare URL (or equivalent). Folly Mox (talk) 13:19, 4 June 2023 (UTC)[reply]
    We're here, we're talking about it, the discussion has been advertised in multiple places, and Wikipedia is not a bureaucracy. Closing this discussion on "procedural" grounds just to restart it again somewhere else so that another page can be filled with "per my !vote in the MfD" comments would be an utter waste of time. XOR'easter (talk) 15:00, 4 June 2023 (UTC)[reply]
    @Mako001: What Folly Mox and XOR'easter said. I started this MFD because there was (and still is) an element of urgency, and I wasn't willing to risk waiting 2-3 weeks for consensus to develop at VPT to disable the script. I've since notified everyone who has the script installed (aside from a few accounts that are indefinitely blocked) of the script's issues, which perhaps alleviates that concern to some extent, although Philoserf has demonstrated that we can't trust that every user of the script will heed warnings about its problems. Either way, I don't see the point of restarting this discussion elsewhere on procedural grounds. I do think it might be worth discussing conditions for re-enabling the script in a way that satisfies the community's concerns, which would probably be a discussion best suited to VPT. — SamX [talk · contribs] 15:31, 4 June 2023 (UTC)[reply]
    We don't need to start over. This MfD can be closed as "disable the script". Blanking the .js page may not allow for the script to be fixed without forking it, which would not be ideal. But if you kept the script functioning, the only way to stop people from using it would be to edit its users' common.js, global.js, etc., which would need not only local but Meta intadmins. Forking would be far easier. Nardog (talk) 20:48, 4 June 2023 (UTC)[reply]
    @Folly Mox@XOR'easter@SamX@Nardog: Agreed, my suggestion of a new discussion was a stupid, bureaucratic, time-wasting idea. The consensus to disable the script is fairly clear here already; I think any closer would be able to read the discussion and see that. @SamX: Indeed, re-enabling should be requested at VPT. @Nardog: Folly Mox suggested that it was possible to disable without blanking, would that also work? @Folly Mox: Yep, I got the same number asking for outright deletion. The same number have !voted but not asked for deletion or disabling, but both said something to the effect of "problems can be fixed if they exist", without definitively saying that "problems exist, they must be fixed" as most others have.
    Do we know if any forks of this script already exist? Mako001 (C)  (T)  🇺🇦 03:19, 5 June 2023 (UTC)[reply]
    FWIW, @BrandonXLF: I think this is a promising script, the ability to accept or decline individual changes sets it apart from ReFill, where you get a "take it all or leave it all" approach. Mako001 (C)  (T)  🇺🇦 03:43, 5 June 2023 (UTC)[reply]
    The intention of this MfD, or at the very least my delete vote, is to disable/delete the script. Chess (talk) (please Reply to icon mention me on reply) 01:23, 6 June 2023 (UTC)[reply]
  • Keep: But disable the script, as this allows BrandonXLF to continue to work on the script and fix its issues. Lightoil (talk) 02:21, 7 June 2023 (UTC)[reply]
  • delete or change to severely limit its use cases. After fixing many of the errors generated by this script, I've found it basically never works well in any situation that is not just a bare URL, and even then it has an extremely high fail rate given the number of links that point to 404 pages. Much of the problem is a few overeager editors who sprayed and prayed with this tool and did not exercise due diligence in reviewing their own changes, operating essentially as incapable bots. Presumably there will be editors who will continue to behave this way in the future, and removing tools that don't have particularly high value even in the "right" hands seems like the right call to me. That being said, I am somewhat new and don't use any automated tools in my editing and don't have any practical experience using such things on WP. Gnisacc (talk) 04:01, 7 June 2023 (UTC)[reply]
  • information Note: Since this discussion appears to have stalled and the consensus seems pretty clear, I've requested that this discussion be closed at WP:IANB. — SamX [talk · contribs] 02:55, 11 June 2023 (UTC)[reply]

References

  1. ^ MfDs ending in ".js" have been made before, and nothing seemed to break.
The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the page's talk page or in a deletion review). No further edits should be made to this page.