Wikipedia:Village pump (proposals)




New ideas and proposals are discussed here.


Adding {{Redirect category shell}} to all single-redirect-template redirects

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.



Per a recent bot RfA, and Template:Redirect category shell/doc#Purpose, should {{Redirect category shell}} be added to all redirects which only contain a single redirect template? The template documentation explains how it is useful not only for single-{{R}} redirects, but also for 0-{{R}} redirects. Pinging the template's creator, Paine Ellsworth, for input.   ~ Tom.Reding (talkdgaf)  16:05, 17 June 2017 (UTC)[reply]

To clarify, this is specifically addressing the question "Should a bot do this task?", which is slightly different than whether it should be done. The estimate provided at the BRFA was that this would take 400,000 edits. ~ Rob13Talk 16:20, 17 June 2017 (UTC)[reply]
400,000 is a conservative estimate. I'll start another database scan with a much higher limit to get an accurate count.   ~ Tom.Reding (talkdgaf)  16:23, 17 June 2017 (UTC)[reply]
My guess is now ~3 million... AWB's database scanner runs out of memory if I set the result-limit to several million (approaching 1 GB it starts to freeze, emit errors, and misbehave; my system memory still has many GB free at that time). With a limit of 2M, it considers itself ~16.9% done with ~337k matches, corresponding to the saturation limit of 2M (337k/0.169 = ~2M), so I need to raise the limit. With a limit of 5M, it's ~11.7% complete with ~344k matches, or ~3M estimated total.   ~ Tom.Reding (talkdgaf)  17:40, 17 June 2017 (UTC)[reply]
I think the real question is "should this be done". If yes, then it makes no sense to do this manually rather than by bot. Since I'll evaluate the BRFA assuming there is consensus for this task, I'll recuse myself from giving my opinion here. One thing that should be done is to clearly explain exactly what the benefits of putting single templates within a shell are, and what is proposed to be done with uncategorized redirects. Headbomb {t · c · p · b} 16:37, 17 June 2017 (UTC)[reply]
@Headbomb: My point-of-reference is cosmetic-only edits. No-one thinks cleaning up whitespace is a negative, just that having a bot perform high volumes of such edits is a negative. This task has the same potential, in my opinion, although I'm pretty neutral on it for the moment. ~ Rob13Talk 18:20, 17 June 2017 (UTC)[reply]
(edit conflict) I can give input on the basic question of whether or not to use the Rcat shell on singularly categorized redirects, but I don't know enough about the needs to speak to whether or not a bot should take up the task.
  1. Rcat shell may be used to fulfill its primary function, which is to "standardize redirect templates (rcats)". Its basic purpose is to simplify the process of tagging and categorizing redirects.
  2. The shell template is also used to hasten the learning curve for editors who get into categorizing redirects. No matter how many rcats are needed, from zero to an as yet unknown maximum (usually six to eight rcats), the "manifold sort" can be used to populate the Miscellaneous redirects category. That category is monitored, so editors can check back to see what rcats have been used, and they will know next time what to do with similar redirects.
  3. The shell has the ability to sense protected redirects, so no matter how many rcats are needed, from zero to the maximum, protection levels are automatically sensed, described, categorized and changed when appropriate.
For these reasons, it is considered useful to enclose one or more rcats within the Rcat shell, and to use the shell without rcats when unsure of the categorization needs.  Paine Ellsworth  put'r there  16:58, 17 June 2017 (UTC)[reply]
I would also like to note that if the bare Rcat shell is added without rcats by a bot to a lot of redirects, then the manifold sort category would become too unwieldy for just a few editors to efficiently administer. Please don't do that, because it would effectively defeat the purpose of the manifold sort.  Paine Ellsworth  put'r there  17:05, 17 June 2017 (UTC)[reply]
Please note, some exact examples of what the change will do are linked from Wikipedia:Bots/Requests for approval/TomBot. — xaosflux Talk 00:13, 19 June 2017 (UTC)[reply]

Will the bot add the rcat shell even if there are no existing rcats on the page? If it does, that would totally overwhelm the manifold sort. TheMagikCow (T) (C) 10:55, 19 June 2017 (UTC)[reply]

Nope; the title of this proposal and the BRfA explicitly & exclusively target single-redirect-template redirects.   ~ Tom.Reding (talkdgaf)  11:19, 19 June 2017 (UTC)[reply]
Oppose making millions of edits to pages hardly anyone ever sees, which don't change the functionality of the page one bit, but which may help other users to make somehow better redirect pages in some way, perhaps. Very unclear which actual issue this is trying to solve. Fram (talk) 12:40, 19 June 2017 (UTC)[reply]
The main issue, I believe (but I could be wrong), is that this will automatically include protection levels, improve sorting, and make things more standard. Tom has been doing this mind-numbingly boring task semi-automatically for a while, and doing this by bot makes it much easier to ignore on watchlists, both for people who ignore all bots and for those who'll ignore TomBot in particular (e.g. WP:HIDEBOTS). Headbomb {t · c · p · b} 14:24, 19 June 2017 (UTC)[reply]
Yes, Wikipedia has both sharpened and dulled various parts of my brain. On balance I suspect it has at least retained its curvature, but I could also be wrong.   ~ Tom.Reding (talkdgaf)  15:27, 19 June 2017 (UTC)[reply]
Support but only just. Adding rcat shell to single-rcat redirects does make a slight cosmetic change; one of the examples given at BRFA is [1] to [2]. Before the edit, the redirect had the text "from a fictional character" floating on the page with no context. Adding rcat shell encloses that text in another text box which adds the contextual information "this page is a redirect:". It's unlikely readers will notice that change, and most editors will already be aware that the page is a redirect, but this helps slightly. It also standardizes redirect categorization and adds the other technical functionality of the shell. I'm strongly against having a bot add the shell to non-categorized redirects; that's pointless busywork that just creates a huge maintenance chore. Ivanvector (Talk/Edits) 13:50, 19 June 2017 (UTC)[reply]
Is slightly modifying the redirect templates (e.g. R from fictional character) then not the much simpler solution than going to 3 million redirects to add that shell? Just change what, perhaps 100 templates, to make them clearer, and be done with it. Fram (talk) 14:26, 19 June 2017 (UTC)[reply]
Some of the benefits would likely carry over (like protection/categorization) easily. But the improved appearance and standardization might not, or might be hard to achieve. It's probably worth exploring the idea however. Headbomb {t · c · p · b} 14:39, 19 June 2017 (UTC)[reply]
From a standardization perspective, to eliminate creep/inconsistencies and maintenance/updating, it is best to have 1 wrapper template than multiple {{R}}s that "self-wrap". Furthermore, potentially-"naked" {{R}}s would have to detect whether they have or have not already been wrapped, to avoid unnecessary nesting. If that's easy/not prone to error, then that might be the best way forward. Is that the case, though?   ~ Tom.Reding (talkdgaf)  14:48, 19 June 2017 (UTC)[reply]
Perhaps the individual redirect templates can be turned into redirects (hah!) to the shell then? Although I don't really see the need for standardization of the look of pages hardly anyone ever looks at anyway: it is important that these are categorized, but apart from that one could eliminate the text from all of these and nothing really would be lost. Fram (talk) 15:04, 19 June 2017 (UTC)[reply]
Yes, if all {{R}}s call a module (I suspect it'd have to be a module rather than standard template code) that is able to perform the same duties as {{Redirect category shell}} and were able to sense whether or not the calling {{R}} is wrapped in {{Redirect category shell}}, that would be ideal. I certainly don't have the expertise to answer that question though, and someone should be summoned here that can.   ~ Tom.Reding (talkdgaf) 
Yeah, I'm not really convinced that the category shell is the best solution to whatever problem we're trying to solve, if the category templates themselves can just be modified to produce similar code. Like, the category shell has protection-level-sensing code, why can't all of the rcat templates have that, and do away with the shell? Since that seems so obvious to me but I'm not a coder, I've assumed that option has already been explored and ruled out, and having the template shell is the only (or most desirable) way to achieve the desired result. If that's not the case, then let's back up. Ivanvector (Talk/Edits) 17:09, 19 June 2017 (UTC)[reply]
  • After further thought, I must oppose (and recuse from handling any BAG aspect of the BRFA going forward). The examples given by Tom indicate that the only change being made here is a box that says "This is a redirect". Ivanvector rightly points out this adds context to the redirect page, which initially had me leaning toward support until I recalled that the only way you actually see a redirect page is by clicking the link at the top of the page you're redirected to that says "Redirected from X". That means by the time you've navigated to a redirect page, you would already know it's a redirect. Alternatively, you can navigate to redirect pages by clicking certain direct links that editors leave in discussions, but at that point you're almost definitely in project space and should know what a redirect looks like. Yes, the wrapper template handles protection, etc. That's a good rationale for adding an option to Twinkle to add the wrapper when protection is added to a redirect. Protecting a redirect is very rare (template-protection being the only semi-frequent case that comes to mind), so I don't see that as justifying three million edits. I do want to thank Tom.Reding for quickly bringing this to the village pump when requested, and I will be happy with the outcome of this discussion whatever it is. This is the correct and drama-free way to handle potentially controversial tasks. ~ Rob13Talk 16:17, 19 June 2017 (UTC)[reply]
  • Oppose as pointless per above. (although I wouldn't object to a bot adding it on all protected redirects). In addition to what BU Rob13 said about "Redirected from" links, all redirect pages have "Redirect page" at the top, just after "From Wikipedia, the free encyclopedia", so the template doesn't actually serve the purpose of notifying people it is a redirect. Pppery 11:50, 21 June 2017 (UTC)[reply]
  • Support, if the edits are made very gradually. The wrapper template seems like a Good Thing on balance. Enterprisey (talk!) 19:32, 21 June 2017 (UTC)[reply]
    @Enterprisey: How "gradual" - at 1 epm this would take ~7 months running non-stop. — xaosflux Talk 22:16, 27 June 2017 (UTC)[reply]
    We could cut that time in half by going at 2epm :) No deadline, and especially for this not-very-critical task. Enterprisey (talk!) 22:56, 27 June 2017 (UTC)[reply]
  • Oppose - adding 3 million entries to the revision table (by making 3 million edits) is not a trivial increase and should not be done without a compelling reason. Bloating the revision table makes almost every process on Wikipedia a tiny bit slower. Of course 3 million isn't a huge number in the grand scheme of things, but it adds up. Kaldari (talk) 22:46, 27 June 2017 (UTC)[reply]
  • Support. Ultimately, all redirects should contain at least one redirect category, to indicate why such a redirect should exist. bd2412 T 23:10, 27 June 2017 (UTC)[reply]
  • Oppose, essentially per BU Rob13. This strikes me as allocating a disproportionate amount of resources for marginal benefit. If we ever need to make 3 million edits to fix a problem, the problem should be well worth fixing, and this doesn't strike me as such a high priority issue. Mz7 (talk) 23:24, 27 June 2017 (UTC)[reply]
  • Strong Oppose - It is the choice of those categorizing a redirect whether or not to use {{Redirect category shell}}. For example, see MediaWiki talk:Move-redirect-text#Redr, where no consensus was gained to add the categorization template to MediaWiki:Move-redirect-text. If consensus is gained to add this to every redirect, I wouldn't oppose a bot doing it. — Godsy (TALKCONT) 23:46, 27 June 2017 (UTC)[reply]
WT:WikiProject Redirect has been notified of this discussion — Godsy (TALKCONT) 00:01, 28 June 2017 (UTC)[reply]
  • Oppose This would functionally disable WP:MOR for those editors who can't delete pages. Breaking an often-used function for minimal gain is not a good bot task. --AntiCompositeNumber (Ring me) 03:21, 28 June 2017 (UTC)[reply]
  • Oppose per AntiCompositeNumber and, to a lesser degree, Godsy. -- Tavix (talk) 22:59, 29 June 2017 (UTC)[reply]
  • Absolutely not. Heck, I'm not even convinced that the shell should be added when there's just two cats. But adding it with just one would absolutely be overkill. --IJBall (contribstalk) 04:37, 30 June 2017 (UTC)[reply]
  • Oppose, except for all protected redirects. Limit the scope to just those edits that produce some more tangible benefit. wbm1058 (talk) 12:01, 30 June 2017 (UTC)[reply]
  • Oppose for now, except for protected redirects. Redirect pages aren't usually read by much of anyone, so doing this for hundreds of thousands of rarely-used redirects serves little purpose. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 16:34, 30 June 2017 (UTC)[reply]
  • Oppose per ACN. Would support for redirects with two or more rcats, if that is not already done. feminist 15:43, 1 July 2017 (UTC)[reply]
Oppose for single-rcat redirects with no history, as it would interfere unnecessarily with page moves. עוד מישהו Od Mishehu 16:27, 5 July 2017 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Disable/opt-out option needed for sister projects search results

Resolved
 – The gadget is now available as "Do not show search results for sister projects on the search results page" via user preferences. --George Ho (talk) 13:52, 26 June 2017 (UTC); amended, 15:37, 26 June 2017 (UTC)[reply]

That about says it. It appears there have been RFCs & discussions about this new feature; I posted a query about it on VP:tech (see here) but apparently I was supposed to post here?... I don't need it in my editing: these sister project results visually clutter up the page, plus the results I have been getting are not useful. YMMV, etc., but I'd really like to be given the (checkbox) option to opt out. I think I am supposed to ping the folks from that discussion to this discussion, so here ya go: @BlackcurrantTea:, @Lugnuts: @George Ho:. Thanks, Shearonink (talk) 06:05, 18 June 2017 (UTC)[reply]

  • Comment - I added this discussion to Template:Centralized discussion because the search results update affects everyone, including unregistered editors (i.e. those using IP addresses) and registered editors. --George Ho (talk) 06:17, 18 June 2017 (UTC)[reply]
  • I note that there was a community RFC (on VPP I think) that supported the inclusion of sister project results on the search page. Are you saying you want to make it so they don't appear for you personally? — This, that and the other (talk) 07:02, 18 June 2017 (UTC)[reply]
YES. Don't want it, don't like it, glad to have the code-fix posted below, think this should be an opt-in/opt-out checkmark-box under preferences. Shearonink (talk) 18:25, 18 June 2017 (UTC)[reply]
That's what Shearonink is trying to say. Isn't that right? BTW, here's the RfC discussion. George Ho (talk) 07:22, 18 June 2017 (UTC)[reply]
  • Support Just as Shearonink said, "I don't need it in my editing, these sister project results visually clutter up the page." I edit Wikipedia, or (rarely) Wikidata, not the sister projects. If I'm looking for information and want to include Wikiquote, Wiktionary, or other projects, I'll use a search engine. If I search Wikipedia, then I want Wikipedia results. For example, when I'm fixing typos, seeing that someone spelled "television" as "televison" at Wikivoyage is neither interesting nor helpful. It's visual noise, and I don't want it. BlackcurrantTea (talk) 07:25, 18 June 2017 (UTC)[reply]
  • Support I don't care about the other projects and this is just clutter. Lugnuts Fire Walk with Me 07:44, 18 June 2017 (UTC)[reply]
YES. SO much this ^^^ Shearonink (talk) 18:25, 18 June 2017 (UTC)[reply]
  •  Already available, just add
    div#mw-interwiki-results { display: none !important }
    
    to your own common.css. By the way, I notice some scarce interaction with sister projects: maybe wait and see for a few days, you could find out that the sister projects contain something of interest for yourself too! --Nemo 09:29, 18 June 2017 (UTC)[reply]
The search results I have been getting are not relevant to my editing and, also, are useless to my editing at Wikipedia, which is what I am here to do. EDIT. End of story. Shearonink (talk) 18:25, 18 June 2017 (UTC)[reply]
I know how you feel about the updated search results, Shearonink. They can look awkward to or frustrate a user. Therefore, I won't object to having the option to disable/opt-out if a user wants those results disabled. Also, I'll respect your wishes and allow you to make decisions, especially when the "disable" option happens. However, with those search results, more likely some editors would come to those pages and improve existing pages or create new ones. Also, sometimes it helps editors not to create any more dubious articles in Wikipedia or pages that would have been meant for other sister projects. It even would prompt some or many editors to create fewer cross-wiki redirect pages than we have seen lately. Well, I created one page with a German-language title to redirect to a Wiktionary entry, but it got deleted (...because I was convinced into requesting deletion on it). Even when the results from those projects are disabled by user preference, any Wikipedia article may still likely be nominated for deletion, especially due to notability and/or content issues (like "Georgia for Georgians") or because it looks like either a definition page, a travel guide, a mere document, a page of a book, or a collection of quotes. Of course, I was discussing mainspace results. Unsure about what to do with results of other non-article namespaces from those projects. --George Ho (talk) 01:36, 20 June 2017 (UTC)[reply]
Bam! Thanks - that works a treat. Lugnuts Fire Walk with Me 17:51, 18 June 2017 (UTC)[reply]
Yes, thanks for posting the code-fix. I still think it would be great if this feature were available as a checkmark opt-in/opt-out on a registered editor's preferences. Shearonink (talk) 18:25, 18 June 2017 (UTC)[reply]
I agree. Thanks for the css fix; please make it a checkbox on the prefs page. BlackcurrantTea (talk) 03:58, 19 June 2017 (UTC)[reply]
  • Support - I can't think of any reason why we would not make this available as a configurable option in preferences. Ivanvector (Talk/Edits) 13:36, 19 June 2017 (UTC)[reply]
  • Support - It should be an option in preferences. — Godsy (TALKCONT) 00:27, 20 June 2017 (UTC)[reply]
  • Support opt-out, oppose disabling. I see how others could find this unnecessary and obtrusive, but as someone who occasionally translates stuff from other Wikipedias I find this quite useful. DaßWölf 02:43, 20 June 2017 (UTC)[reply]
I don't think anyone is suggesting disabling the feature (for everyone), but fwiw I would also oppose disabling. Ivanvector (Talk/Edits) 11:59, 20 June 2017 (UTC)[reply]
Nobody had specified it thus far, so I added that for clarity. DaßWölf 03:33, 26 June 2017 (UTC)[reply]
Oh gawd yes...was just re-reading through this section - Thank you Wölf & Ivanvector. I didn't want this feature deleted, just wanted to be able to turn it off. Thanks for clarifying my somewhat muddy syntax. Shearonink (talk) 15:27, 26 June 2017 (UTC)[reply]
  • Comment. It is easy to create a gadget. Ruslik_Zero 17:53, 20 June 2017 (UTC)[reply]
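(For reference, a CSS-only gadget of this kind needs just two pieces: a definition line in MediaWiki:Gadgets-definition and a stylesheet page. A minimal sketch follows; the gadget name is a hypothetical placeholder, not necessarily what was eventually deployed.)
    In MediaWiki:Gadgets-definition, under the "appearance" section:
        * noSisterSearch[ResourceLoader]|noSisterSearch.css
    In MediaWiki:Gadget-noSisterSearch.css:
        /* Hide the sister-project results column on the search results page */
        div#mw-interwiki-results { display: none !important }
The checkbox label shown under Preferences → Gadgets comes from the interface page MediaWiki:Gadget-noSisterSearch.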
  • Support opt out as a user preference. it shouldn't require figuring out more than that. So far, I too find it unhelpful, and I suspect so will many others as they come to notice it. I predict it will soon change to opt-in. DGG ( talk ) 22:12, 20 June 2017 (UTC)[reply]
  • Oppose - we already have way too many preferences. The sister project results don't interfere with using the search interface as normal and it's easy for anyone that doesn't like them to hide them using their User CSS. Kaldari (talk) 04:18, 21 June 2017 (UTC)[reply]
    • User CSS is relatively inaccessible. Not everyone knows how to use it. Opt-out tick box is much more accessible. Options should be accessible to all, particularly opt-outs. • • • Peter (Southwood) (talk): 14:39, 21 June 2017 (UTC)[reply]
    • Pinging Kaldari for response. George Ho (talk) 17:03, 21 June 2017 (UTC)[reply]
      • We know from Special:GadgetUsage that gadgets that do nothing but disable features are consistently among the least used gadgets. Adding lots of preferences just for the small number of people who don't like certain new features causes 2 problems: It makes the preferences interface overwhelming and less usable for everyone; It means that developers have to support more variations of the interface, slowing down future development. A good example of that is when we had 3 different versions of the WikiText editor depending on how many new features users wanted to opt out of. This required building 3 different versions of any feature for the editor, such as RefToolbar. (We finally removed one of the opt-outs from the prefs last month, which hardly anyone had been using anyway.) Kaldari (talk) 17:49, 21 June 2017 (UTC)[reply]
        • What "future development" do you mean, Kaldari? Do you mean the Foundation's search engine? If that's not it, may you please elaborate further? Thanks. --George Ho (talk) 02:35, 26 June 2017 (UTC)[reply]
          • @George Ho: I mean any future development related to the search page, including gadgets and WMF software. If they have to take into account multiple possible interfaces, it makes development more complicated and slower. Adding this one option may not seem like much, but every option must be multiplied by all the other options. So if I want to write a gadget that reformats the presentation of the search results, I need to accommodate all the possible skins, multiplied by whether or not they are using the new "advanced search" beta feature that is about to come out, multiplied by whether or not they are hiding the sister search results, etc. So this one extra option actually means that I have to accommodate 8 additional UI variations in my software (assuming we aren't even worrying about mobile). Kaldari (talk) 03:04, 26 June 2017 (UTC)[reply]
            • Well... Take your time, Kaldari. I believe you and other developers can find ways to make the future developments simpler and easier to use, no matter how long it takes. I'll explain more via email soon. --George Ho (talk) 04:56, 26 June 2017 (UTC)[reply]
  • I notified communities of just selected sister projects about this proposal. --George Ho (talk) 19:17, 22 June 2017 (UTC)[reply]
    Is it possible for the interwiki searches to be selective? I, for one, greatly value WP, Commons, and Wikispecies in my play/work at Wiktionary, but would find other projects less useful. DCDuring (talk) 19:30, 22 June 2017 (UTC)[reply]
    That would take an RfC discussion and a Phabricator task to do so, DCDuring. Right now, due to the results, we don't see multimedia (audio/visual) results from Commons here. --George Ho (talk) 19:38, 22 June 2017 (UTC)[reply]
    As two of the three projects I am interested in are not-yet/may-never-be included, my enthusiasm is a bit further diminished for the taxonomy-type contributing that I do most often. For other editing, WikiSource is helpful. I expect that those who contribute to en.wikt in languages other than English would value including pedia and wikt projects in their languages of interest. DCDuring (talk) 19:51, 22 June 2017 (UTC)[reply]
    I made a note at some other venues, including Wikibooks, saying that Wikibooks was included as part of search results by developers. However, they didn't realize that there was "no consensus" to include those results. Therefore, I filed a task at Phabricator. I also noted the error at WP:VPT#Requesting suppression on search result from Wikibooks. --George Ho (talk) 00:57, 23 June 2017 (UTC); modified, 15:25, 23 June 2017 (UTC)[reply]
    I saw that another proposal to include Wikibooks as part of search system is made at WP:VPP. --George Ho (talk) 22:09, 23 June 2017 (UTC)[reply]
  • Support Another configurable button (Opt in or out -either works) in preferences won't hurt anything. GenQuest "Talk to Me" 22:09, 22 June 2017 (UTC)[reply]
  • Lukewarm support for a selectable preference. Note that English Wikiquote has all of 30,000 pages, about .08% of English Wikipedia's total. I don't see those search results, at least, as being particularly disruptive to Wikipedia searches. bd2412 T 00:24, 24 June 2017 (UTC)[reply]
  • I have no issue having it as an opt-out capability within preferences, or as a gadget. For the proponents, you completely have the capacity to hide it through your Special:MyPage/common.css and I invite you to use that methodology as that is its purpose; even to add instructions to users on this. I totally dispute the myopic view that it should be automatically done, contend that those who propose such are very much in the "I don't like it" camp, and suggest that this is a key component of the WMF search team, and is purposefully added to improve the whole product that WMF produces. — billinghurst sDrewth 05:14, 24 June 2017 (UTC)[reply]
  • There is now a gadget available to do this. —TheDJ (talkcontribs) 08:33, 26 June 2017 (UTC)[reply]
In other words, TheDJ, there is the gadget saying, "Do not show search results for sister projects on the search results page", under the "Appearance" section of the "Gadgets" tab, right? Thanks. Pinging Shearonink, "This, that and the other", Lugnuts, Ivanvector, Godsy, DGG, Kaldari, GenQuest, BD2412, and billinghurst. Therefore, they can individually decide whether to click that via user preferences, and done. --George Ho (talk) 13:52, 26 June 2017 (UTC)[reply]
  • It does to me, GenQuest. I tested the option, and it still works. BTW, I wonder whether the "working" issue can continue at WP:VPT. --George Ho (talk) 19:00, 3 July 2017 (UTC)[reply]
  • The link doesn't work, GenQuest. I think you missed the "n" after "discussio" (typo). May you correct the link please? Thanks. --George Ho (talk) 21:13, 3 July 2017 (UTC)[reply]

Increasing notability requirements in light of plethora of articles created by Third/Second-worlders about local/regional "celebrities"

Hi,

I wanted to share a proposal that involves curtailing the simple notability checks with respect to "celebrities" in the developing nations.

Specifically, I find that the number of biographical pages made for "celebrities" from the developing nations, is extremely large.

  • For example, while I haven't counted, it would not surprise me if the number of pages created for Indian actors outnumbers the number of actors who have pages in Hollywood.

I understand that notability criteria must be objective and established so that all administrators/page-approvers can reasonably assess the importance, and make a decision.

However, with the incredibly large population of the aforementioned demographic (3 billion total), it is hard to see why they are deserving of pages on the en.wikipedia.org page, especially when many of the readers/users of en.wikipedia.org do not care about Bollywood.

Is there any way the criteria for notability for these "celebrities" can be applied to the in.wikipedia.org instead of en.wikipedia.org? I am growing tired of seeing these people curate pages for their heroes, where these heroes often have less notability/ability/skills than a first-worlder that is half their age.

Some standardisation and consideration needs to be given to my proposal, as the large population and Bollywood-fanfare has created a recipe of absolute disaster. There are indian politicians from local levels who have pages whereas the same criteria results in a first worlder's page getting rejected.

To be fair, I have also noticed that many of the pages that I feel are undeserving of an entry on en.wikipedia.org (but maybe not in.wikipedia.org), often involve sock puppetry and users using multiple accounts (over time) to first create, then slowly build the page without catching anyone's attention.

Please, let us have this discussion. PLEASE — Preceding unsigned comment added by 138.117.121.10 (talkcontribs) 23:32, 26 June 2017 (UTC)[reply]

Let's get a few things out of the way:
  • There are less than 2000 articles in the subcategories of Category:Actors in Hindi cinema‎. This actually seems like too few.
  • It absolutely does not matter how many readers/users of the English Wikipedia care about Bollywood. Only notability matters.
  • Notability guidelines already exist for entertainers (see Wikipedia:Notability_(people)#Entertainers).
  • I believe you misunderstand how "page approval" works - outside of Wikipedia:Pending changes, there is no page approval. All Wikipedians approve (by doing nothing) or disapprove (by fixing or nominating for deletion) articles they read, regardless of being an Administrator or not. In fact, you too can be a part of this process - if you see an article that you believe doesn't follow the notability guidelines you are free to propose its deletion.
  • in.wikipedia.org does not exist because there is no single "Indian" language, which you have definitely implied. How embarrassing.
So are you just proposing specific guidelines for developing-nation actors? That seems awfully specific. Brightgalrs (/braɪtˈɡæl.ərˌɛs/)[1] 20:34, 27 June 2017 (UTC)[reply]
  • I oppose limitations aimed at reducing the number of "third world" entertainers. Many entertainers from "first world" countries are only notable locally, but they get more press coverage because they come from places with more robust press outlets. The gravamen of this proposal seems to be to give equally situated entertainers short shrift because of the particular location from which their popularity arises. bd2412 T 20:49, 27 June 2017 (UTC)[reply]

Hi, thanks for your response.

First of all, no it's not just India, but rather all developing nations. For example, it seems as of late there is a booming interest in the Philippines for industries similar to India's Bollywood.

Secondly, I am not sure if using the category tool is appropriate here, as it would not surprise me if there are many more actors whose pages have not been put under the category.

I am not sure those who create the pages care (or, at the risk of being 'impolite': know) enough about categories as much as they do the page itself that can be found via searching the name on google or wikipedia.

Thirdly, when it comes to notability guidelines, how are those guidelines applied to developing nations? India is a great example of the issues I'm talking about, as they have for many years been masters of the media. Two examples:

  1. A great example of this media mastery can be illustrated by their farcical claim about a fighter jet, which embarrassingly had the Pakistani flag.[1]
  2. Another example is the purported photoshopping of Modi's visits to the 2015 Chennai floods, which again were taken down shortly after the gaffe was discovered.[2]

So you can see that exaggeration in Indian Media, whether it is on behalf of the Government or Actors/Politicians, is quite common.

  • While I may be at risk of "picking on India", surely these examples are not exclusive to them.

Aye, one could probably easily pick out similar instances and abuse of media by Najib Razak in Malaysia, or whoever-the-leader-is of the Philippines.

What I wanted to communicate, and what I hope the two instances above demonstrate, is that the notability criteria for developing nations must be elevated.

That is, I feel the notability criteria should involve first-world outlets recognising the developing-nation celebrity-hopeful, and even in those cases I think caution must be urged and multiple outlets should be reporting it.

Regarding the lack of an in.wikipedia.org, maybe that is something that needs to be discussed?? Surely a subdomain wouldn't kill User:Jimmy Wales?

I am not sure how you feel that my point on that matter should make me feel embarrassed, as I was simply alluding to the inappropriateness of foreign content on what-is-supposed-to-be the First World wikipedia.
  • For example, many of these actors' "art" is in a language that is not english (i.e. not the 'en' in en.wikipedia.org), so for them to have a page on the english wikipedia (for english speakers), when the audience cannot understand the foreign dialect, is not reasonable.

— Preceding unsigned comment added by 138.117.121.10 (talk) 20:55, 27 June 2017‎ (UTC)[reply]

Additionally, I noticed that User:bd2412 quickly jumped in before I could post my response. You claim first worlders are only notable locally, yet that's exactly what I stated is the case for India and other developing nations but on a larger scale.

  • In the developing nations, their typically-larger population densities actually exacerbate the effect you're suggesting (that I originally suggested).

I am not defending "first world" celebrities at all. Indeed I've found some myself on wikipedia who only have entries due to their paid media articles, which are then used as WP:RS to justify curation. I am against that, as well.

There is a fine line that needs to be walked when it comes to biographical page curation, and I think it needs to be more thoroughly discussed in light of no indian wikipedia (though one should think they should have their own, and in their own language, which I feel would help them much more than the english wikipedia)

— Preceding unsigned comment added by 138.117.121.10 (talk) 20:59, 27 June 2017‎ (UTC)[reply]

"no indian wikipedia (though one should think they should have their own, and in their own language, which I feel would help them much more than the english wikipedia)" Yeah um. There is no single Indian language. There are dozens. And good news: Every one of them has a Wikipedia! --Golbez (talk) 21:04, 27 June 2017 (UTC)[reply]
  • Support. Will help get rid of cruft. KMF (talk) 01:10, 3 July 2017 (UTC)[reply]

References

  1. ^ "Ministry's I-day video shows Chinese jet with Indian flag". The Hindu. 13 August 2016. {{cite web}}: Cite has empty unknown parameter: |1= (help)
  2. ^ "Narendra Modi 'photoshopped' image of Chennai floods goes viral". The Telegraph. 4 December 2015.

Request for new tool

I have no clue if this is practical; that's one reason I'm coming here, together with the fact that a single user's request is less significant than a request that got lots of support at WP:VP/Pr.

If an edit war occurs on a less-heavily-trafficked page, and nobody watching the page happens to report it, the chances of the war being caught are minimal, especially if no edit warrior wants to risk a boomerang by reporting at WP:ANEW. What if we had a tool that tracked pages with (1) lots of recent edits by multiple people, and (2) no recent edits to the talk page? It would still miss some edit-wars, and it would find a bunch of pages that aren't experiencing any conflicts, but I can imagine it being useful: people looking at the page's top hits would more easily find an ongoing edit war, thus making it easier for them to report it. Nyttend (talk) 00:01, 30 June 2017 (UTC)[reply]

I am incredulous that such a tool does not already exist, and that we still have to rely on vigilant third parties to even notice that an edit war is taking place anywhere. We should absolutely have a tool that monitors this automatically so that we can check those articles and see if such a war is taking place. 'Cause many times, it would be! KDS4444 (talk) 00:07, 30 June 2017 (UTC)[reply]
(edit conflict) Actually, this could be taken one step further: an admin bot that monitors pages being actively edited and checks for back-and-forth reverts. If it detects a certain threshold of warring it auto-protects the page with the minimum needed protection to stop all parties from warring. The affected parties are warned and the page protection lasts 48 hours. This is something I could write up. I have experience with advanced bots and could probably fashion up a good bot to do the job.—CYBERPOWER (Message) 00:10, 30 June 2017 (UTC)[reply]
That would make you my hero. That would be great!!! KDS4444 (talk) 00:12, 30 June 2017 (UTC)[reply]
I enjoy the idea. It's possible the cluebot also partly does some of this, it can report at AIV and appears to sometimes revert edits simply because they have recently been reverted already (in case it can serve as reference point). Of course, its focus is vandalism not warring. —PaleoNeonate - 00:17, 30 June 2017 (UTC)[reply]
I'm really concerned at the idea of an adminbot or anything else that actively does something. Imagine that an article is suddenly in the news, and lots of people start editing the article in a generally collaborative fashion without anybody having reason to go to talk — the page would be reported as edit-warring with my proposed tool, but while a human could easily see that it wasn't, a bot couldn't. What would such a bot do if some people were hitting "undo" or rollback, while others were merely going into the page history and reverting manually? Or what if some editors are warring while others are making productive edits, i.e. a couple of blocks are better than page protection? There are just too many factors for a bot to handle, but given the tool, a human could easily discriminate between war and no-war. Nyttend (talk) 00:40, 30 June 2017 (UTC)[reply]
Indeed there are many factors, and I have general ideas on how to account for them. This would be seriously tested before it actually performs any action. I operate IABot, in which the parser, which I designed, has to account for the human formatting factor. So complex operations and analysis is something I am capable of doing. The bot would cache recent revisions and compare new revisions to older revisions and look for matches to determine reverts. It would analyze the ratio of editors reverting to a preferred version. A many-to-one ratio would indicate users reverting possible vandalism. It would keep track of the reverting users and look for patterns spanning over the last 3 days of the article's history. It's certainly doable. The algorithms of how the bot handles the diplomacy can be figured out before the bot goes into development.—CYBERPOWER (Message) 00:47, 30 June 2017 (UTC)[reply]
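(A minimal sketch of the revert-detection idea described above, assuming per-revision content hashes, such as the SHA-1 values the MediaWiki API exposes, have already been fetched; the function name, window and thresholds are illustrative placeholders, not the actual bot's code.)

    from collections import Counter

    def detect_revert_war(revisions, window=20, threshold=4):
        """revisions: list of (user, content_hash) tuples, oldest first.
        A revert is counted when a revision's hash matches an earlier hash
        in the window, i.e. the page was restored to a prior state."""
        recent = revisions[-window:]
        seen = set()           # content hashes already observed
        reverters = Counter()  # user -> number of reverts made
        for user, content_hash in recent:
            if content_hash in seen:   # page returned to an earlier state
                reverters[user] += 1
            else:
                seen.add(content_hash)
        # Many editors restoring one version suggests vandalism cleanup;
        # a few editors trading reverts back and forth suggests a war.
        total = sum(reverters.values())
        return total >= threshold and len(reverters) <= 3, dict(reverters)

    # Example: two editors repeatedly restoring competing versions
    history = [("A", "v1"), ("B", "v2"), ("A", "v1"), ("B", "v2"), ("A", "v1")]
    print(detect_revert_war(history, threshold=3))  # (True, {'A': 2, 'B': 1})

Per-user revert counts over the last few days of a page's history could then feed a report rather than trigger any automatic action.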
We already have bots that automatically revert possible vandalism and test edits— while a certain degree of careful implementation is clearly necessary with this proposal, I think we'd be fools not to at least see if it can be done. We got the guy with the bot programming skills on board to create the program, let's see what it can do. If it messes up in a trial run, then we can either discard it or modify it. But what if it works?? If it works, then edit wars can no longer hope to go unnoticed. As I said before, I think this idea is one that Wikipedia is long overdue to bring into reality. We say edit warring is bad, we say there are consequences for doing it, but we have no tools to make sure it gets dealt with if it happens and no one is around to notice and report it. KDS4444 (talk) 01:11, 30 June 2017 (UTC)[reply]
I'd like to suggest we call it "Warbot"! KDS4444 (talk) 01:13, 30 June 2017 (UTC)[reply]
[edit conflict] Aside from my hesitation at using an adminbot for anything other than routine issues (e.g. updating DYK, blocking open proxies, adding {{Pp}} to full-protected pages, handling a one-time task approved by the community), I'm uncomfortable with this idea because of the response that it would be designed to make. What if you have a few editors making a mess amidst a lot of editors doing good work? How would the bot know that blocking is better than protecting? I see absolutely no reason for a bot to protect a page when blocking a miscreant or two is a much better response, and aside from open proxies, there's no way that a bot should be blocking anyone without constant human oversight of some sort — people rightly get upset when a misconfigured filter or a mis-understanding bot prevents a specific edit or wrongly interprets a good edit as vandalism, but imagine the effect if they get blocked by a bot, or if the page gets bot-protected in the middle of a good string of edits! Perfect is the enemy of good, but I believe that we're better off with the current system than with a bot doing this. Nyttend (talk) 01:19, 30 June 2017 (UTC)[reply]
I agree, automated reporting of possible edit wars would be a good addition, but an adminbot that auto-protects such pages could easily end up protecting a vandalised revision, or one with contentious unsourced statements on a BLP. Fixing this would be much harder than e.g. simply reverting Cluebot. DaßWölf 01:26, 30 June 2017 (UTC)[reply]
The bot could also just warn and report users, and not do anything at all to stop the warring. I'm certain a good adminbot could be designed. I do research the topics before designing, but it doesn't have to try and stop the edit war. Like I said designing algorithms for complex cases is something I'm pretty good at. Either way, I merely suggested it, as an option, and like I said in my RfA, any bot using admin permissions would be heavily scrutinized and tested before being allowed to execute any admin tasks.—CYBERPOWER (Message) 01:32, 30 June 2017 (UTC)[reply]
I like the idea of the bot if it only reports and doesn't take action. If it generated a report on a page we could watch (AN and ANI would not be good; it needs to be a dedicated page, like AN/Bot), then we just check when we see changes. This also gives us a good picture of what is going on wiki-wide and the page and archives would be full of interesting snapshots of editor activity. Dennis Brown - 01:40, 30 June 2017 (UTC)[reply]
I can go and explore making a bot that generates a reliable report. The idea of it being an adminbot can always be revisited in the future if it turns out the bot can be trusted with such a task.—CYBERPOWER (Message) 01:48, 30 June 2017 (UTC)[reply]
That it could only report at first also seems ideal to me. If it could also make use of the existing edit tagging system, that may also be a plus? This would allow a second stage to include reports at the abuselog if so (it's unclear to me at this point if only edit filters can currently tag edits though)... Another complication is that it might need to take into account common CIDR ranges (and "grow" the network that is to be reported). (Example of someone soapboxing multiple times from 171.78.0.0/16 at History of Earth.) I have some experience dealing with the latter type of scenario since I authored IDS software, but for efficiency at a large enough scale it might require patricia trees implemented in C... —PaleoNeonate - 02:17, 30 June 2017 (UTC)[reply]
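(On the CIDR point: for a per-page report, grouping anonymous editors into a covering range does not necessarily need specialised trie code; Python's standard ipaddress module can widen a handful of addresses into one network. A rough sketch, with the /16 cut-off chosen arbitrarily for illustration.)

    import ipaddress

    def common_range(ips, min_prefixlen=16):
        """Collapse IPv4 address strings into the smallest single network
        that contains them all; give up rather than widen past a /16."""
        nets = [ipaddress.ip_network(ip) for ip in ips]  # each becomes a /32
        merged = nets[0]
        while not all(n.subnet_of(merged) for n in nets):
            merged = merged.supernet()                   # widen by one bit
            if merged.prefixlen < min_prefixlen:
                return None                              # too scattered to group
        return merged

    print(common_range(["171.78.4.9", "171.78.200.1"]))  # 171.78.0.0/16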
Just as I wanted a tool to find possible EW situations, I think a reporting bot would be great. It wouldn't be "forced to choose" between blocking and protecting, and false positives could be handled with the undo button instead of forcing us to unblock someone or unprotect a page. Please just be careful to have the bot wait a while (a day or two?) after reporting a page before making the next report about the same page; it really wouldn't be pretty if the bot were determined to list something and got into an edit war with a human :-) Nyttend (talk) 03:11, 30 June 2017 (UTC)[reply]
PS, we have a bot subpage for AIV. Maybe we could create a bot subpage for AN3? We already handle edit-warring there anyway; I don't see how it would help if the bot reported at some other page. Nyttend (talk) 03:13, 30 June 2017 (UTC)[reply]
Go for it, we stand to gain more than lose. If it works well, we have a useful new tool, if it fails, we learn from it why it doesn't work. • • • Peter (Southwood) (talk): 16:16, 30 June 2017 (UTC)[reply]
I would suggest that the bot does not report any editors names. Just the page name would do. CambridgeBayWeather, Uqaqtuq (talk), Sunasuttuq 12:14, 1 July 2017 (UTC)[reply]
I would support a report-only bot for now, and the possibility of a protection bot can be decided later based on the false-positive rate of the report-only bot. To convince us that the protection bot is a good idea, it can estimate the likelihood of needing admin action, and we can then decide on a threshold for bot protection. עוד מישהו Od Mishehu 03:27, 2 July 2017 (UTC)[reply]
The WMF is currently working on a Community health initiative on English Wikipedia. They are investigating, and building tools for, exactly this kind of issue. I posted a link to this discussion on the Initiative's talk page. Alsee (talk) 21:57, 1 July 2017 (UTC)[reply]
Thanks, @User:Alsee, for bringing this discussion to our attention. The WMF's Anti-Harassment Tools team is currently exploring how the Edit filter (AbuseFilter) can be used to combat harassment and I proposed a filter for 3RR violations. (I acknowledge an edit war is not an act of harassment itself, but is often the impetus for harassing behavior.) In general I would agree that edit wars are too nuanced for a bot/filter to block or protect and I agree it would be more appropriate to gently remind (or educate, if they're a new user) of the edit-war policy. A log could also be useful. We'd have to add some new functionality to the Edit filter to get this to work, which would take a few months to accomplish if prioritized. @User:Cyberpower678 will likely be able to more quickly build a proof-of-concept log, and we can learn from what it shows us! Cyberpower — let us know in IRC if/how we can help! — Trevor Bolliger, WMF Product Manager 🗨 15:40, 3 July 2017 (UTC)[reply]
Thanks Trevor. Considering the intricacies this bot has to handle, and the complexity of human interaction, this bot is still being conceptualized before I start building it.—CYBERPOWER (Chat) 16:13, 3 July 2017 (UTC)[reply]

Add archives to ALL URLs, dead or alive

As many people are aware, InternetArchiveBot has been a major help in combating link rot in that it continuously and actively attempts to add archives to any URL that it sees as dead, or that is tagged as dead. This really makes sure all of our sources remain accessible and helps with verifiability. However, there are those moments when archives were not created in time and as such are unavailable when the original URL goes down. Having the bot add archives to every URL, dead or alive, not only ensures each source has an archive in case of link rot; for those URLs missing an archive, it lets us take the opportunity to archive them elsewhere before the link dies and add them, thus also letting IABot know that an archive exists, or vice versa.

So how would this work? Quite simple. Most sources use the CS1 and/or CS2 citation templates, which have the "deadurl" parameter. When set to "no", archive URLs are not made clickable on the rendered page, but when set to "yes" or when the parameter is omitted entirely, the title links to the archive version instead. This allows users that detect dead URLs to only have to flip the switch to enable the archived version. The bot will set deadurl=no on all cite templates it adds archives to whose URLs are still working, and of course it will set deadurl=yes on the dead ones as it does now. The side effect of this is that references which use bare external links rather than cite templates will have {{Webarchive}} attached to them if the bot can't convert the link to a cite template.

Thoughts?—CYBERPOWER (Chat) 18:22, 3 July 2017 (UTC)[reply]
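(To make the proposed edit concrete, this is roughly what the bot's change to a still-live citation would look like; the URL, archive snapshot and dates are placeholders, not output from the actual bot.)
    Before:
        {{cite web |title=Example source |url=http://example.org/page}}
    After (link still live, archive added pre-emptively):
        {{cite web |title=Example source |url=http://example.org/page |archive-url=https://web.archive.org/web/20170703000000/http://example.org/page |archive-date=2017-07-03 |deadurl=no}}
If the original link later dies, an editor or IABot only needs to flip |deadurl=no to |deadurl=yes.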

About cs1|2 and |dead-url=no: what you said is not quite true. The archive is clickable, but you don't get to it through the source's title. To get to the archived source, click the word 'Archived':
{{cite web |title=Title |url=http://en.wikipedia.org |archive-url=http://example.com |archive-date=2017-01-01 |dead-url=no}}
"Title". Archived from the original on 2017-01-01. {{cite web}}: Unknown parameter |dead-url= ignored (|url-status= suggested) (help)
There are cases that are unclickable but for that to happen, the value assigned to |dead-url= must be one of unfit, usurped, or the bot-only value bot: unknown.
{{cite web |title=Title |url=http://en.wikipedia.org |archive-url=http://example.com |archive-date=2017-01-01 |dead-url=unfit}}
"Title". Archived from the original on 2017-01-01. {{cite web}}: Unknown parameter |dead-url= ignored (|url-status= suggested) (help)
The default state of |dead-url= (when omitted or not assigned a value) is yes so adding |dead-url=yes may not be required.
Trappist the monk (talk) 18:41, 3 July 2017 (UTC)[reply]
  • Oppose – Systematic archiving would create needless bloat in articles, and would drown editors and readers alike. Sources that actually need archiving would also become harder to identify at a glance. See the disastrous effect when a well-meaning editor applied IABot-assisted archiving of hundreds of sources at Barack Obama and Donald Trump. — JFG talk 12:32, 7 July 2017 (UTC)[reply]

Preventing unreviewed articles from entering mainspace without approval/screening

Hi,

I've noticed that many articles are being inserted into mainspace without any review or quality control, and I find that this behaviour is rampant among specific groups.

I was hoping that admins would consider a mechanism in which new articles cannot be inserted into mainspace without approval, which I think would save a lot of headaches.

I do see that many users are able to use the sandbox and Draft: templates properly, but there are many more who ignore that process and simply insert it into mainspace.

If you think about it, the idea that someone can just insert a new page into mainspace without approval is sort of ridiculous, and I feel it is worthy of further discussion. — Preceding unsigned comment added by 218.61.3.229 (talk) 07:10, 4 July 2017 (UTC)[reply]

You may wish to read Wikipedia:Autoconfirmed_article_creation_trial and Wikipedia:WikiProject_Articles_for_creation/RfC_2013 for background on a similar proposal and the technical and administrative challenges this proposal would face. You can keep up with progress made on page curation at Wikipedia:Page_Curation/Suggested_improvements. Snuge purveyor (talk) 07:22, 4 July 2017 (UTC)[reply]

When will this stop?

It's clear that Wikipedia is now becoming a feeder system for second and third worlders, where the administrators of this site have now developed a system to absolve them of the trash that ends up on here.

I've seen how administrators paint the "page reviewer" approval as some stringent process, that ensures the user is active and able to contribute with responsibility.

Tell me then, oh mighty admins-who-bestow-page-reviewing-privileges, how is trash like the page linked in the section title worthy of wikipedia?

no sources, nothing. Just a bunch of hooey that User:Boleyn approved. Is he out of his mind? Or is my naive and simpleton view being averaged out by the reality of how wikipedia is used?

After seeing this egregious display of irresponsibility, it is hard for any user to have any confidence in the page reviewers because I suspect other pages (that likely lack notability/sources) have been approved in a similar manner.

Thank you User:Boleyn for showing us what wikipedia really is.

Damn shame these admins have nothing to do but defend, aid, and abet the developing nations whilst pooh-poohing concerns of the first world.

outrageous. — Preceding unsigned comment added by 198.217.126.151 (talkcontribs)

"becoming a feeder system for second and third worlders"? really? I agree that the best thing for everyone would be for you to give up on this site and not come back. --Floquenbeam (talk) 19:44, 6 July 2017 (UTC)[reply]
Someone got out of the wrong side of the bed.
You start by painting with a broad brush an entire process while purportedly linking to one single example. You couldn't even accomplish that, as the link in the section heading doesn't go to an article. Presumably you meant: Edayilakkad.
You claim it has no sources, yet it does. If you had claimed that some of the sources are blogs and therefore not acceptable as sources, your point would have more validity. Claiming it has no sources when it does means either that you don't know what you're talking about, or you are being sloppy.
You claim, I've seen how administrators paint the "page reviewer" approval as some stringent process,. I've never seen such a claim, can you link it?
You suggest that users are to make a blanket condemnation of all page reviewers because one page reviewer made one acceptance that you think is wanting. Seriously?--S Philbrick(Talk) 19:52, 6 July 2017 (UTC)[reply]
Boleyn is one of our best reviewers and one of our most active. I'd urge the user to sign in with their main account if they want to launch what is essentially a personal attack on them. I'd also recommend that more experienced editors request the user right and review even one or two pages a day. We need all the help we can get. TonyBallioni (talk) 19:56, 6 July 2017 (UTC)[reply]

Wow, I feel sorry for you carrying all that negativity. The island meets WP:GEOLAND, as most islands do. I'm guessing this isn't the real issue as you are clearly somewhat familiar with Wikipedia but didn't add your actual username. Perhaps get another hobby, which will make you happier. Sorry we are disappointing you so badly. Boleyn (talk) 20:17, 6 July 2017 (UTC)[reply]

As a totally uninvolved editor, not a high-flying reviewer or admin, might I just say that "some bigot"'s comments are ridiculous. The article could do with a lot of improving (copy editing, English, citations) but is clearly about a notable island with a population similar to a small hamlet in the west. See Coggeshall Hamlet for comparison, Edayilakkad is clearly a better stub. Keep going Boleyn and just ignore the knockers. Martin of Sheffield (talk) 20:27, 6 July 2017 (UTC)[reply]
  • @Boleyn:, @Martin of Sheffield: I'm sorry, I added the {{unsigned}} template, and said it was a comment by "some bigot" as a kind of meta commentary. I see now that I'm confusing people, so I fixed the template. The IP obviously doesn't think he's a bigot, I'm sure he is utterly convinced that what he's saying isn't ridiculous. Anyway, he's not responsible for not using an actual username. He is responsible for being a bigot. --Floquenbeam (talk) 20:33, 6 July 2017 (UTC)[reply]
  • Did you approve the island here or approve the article? Whatever quality standard we have, this article fails it. Maybe it passes WP:N - but that is far from implicit for every island (I'm just back from Finland - don't go down that "every island its own article" path). It certainly fails WP:RS and WP:V.
The OP is angry, and enough said about that. But we do have a real problem with geographical articles in India, where every minor school or web cafe seems to spawn an article on its locale, almost all terrible, unverifiable and far from notable. Andy Dingley (talk) 20:42, 6 July 2017 (UTC)[reply]
What sort of quality standard are we talking about? If this is "New page reviews" (not sure I have that exactly straight), then I found this description: New Page Review is essentially the first (and only) firewall against totally unwanted content and the place to broadly accept articles that may not be perfect but do not need to be deleted.
The article under discussion does appear to pass that very low bar. Oh, I'm not prejudging the outcome if it were brought to AfD on notability grounds, but I don't get the impression that that's the point. --Trovatore (talk) 20:48, 6 July 2017 (UTC)[reply]
WP has no quality standards. It has always been that way, and many consider that to be a bad thing. I certainly do.
What it does have, as a basic standard, is that all articles should be proof against CSD (not AfD - they're only required to be able to get past AfD after a week's work on them). This one isn't even that good.
More to the point here though, why was it marked as reviewed? It went from being a crap article, and quite obviously so, to being a crap article now marked as 'reviewed'. How does that help? It's not fit to pass any sort of review, quite possibly it can't ever (there's no sourcing to allow me to judge this). So why mark it as such? If necessary, at least leave it unreviewed. Andy Dingley (talk) 23:34, 6 July 2017 (UTC)[reply]
Well, it means that at least one person has looked at it and decided that it's not a candidate for speedy deletion. The advantage of that is that other reviewers can skip it, if all they're looking for is new articles that ought to be speedied, and assuming that they trust that the corpus of new-page reviewers is trustworthy, or at least trustworthy enough that they are more likely to find articles that ought to be speedied among unreviewed articles than among approved ones. --Trovatore (talk) 23:53, 6 July 2017 (UTC)[reply]
The disadvantage is that other reviewers will now skip it, thinking that it's adequate. This is anything but and I wouldn't oppose speedying it. I might even have speedied it myself. I might yet do so, and if it appears at AfD as it is at present, I'd seek to delete it. Andy Dingley (talk) 00:01, 7 July 2017 (UTC)[reply]
It seems quite clear from this discussion that, if you add a speedy tag, someone will remove it. You may as well skip the intermediary step and nominate it at AfD now. --Trovatore (talk) 00:08, 7 July 2017 (UTC)[reply]
I already have, Wikipedia:Articles for deletion/Edayilakkad Andy Dingley (talk) 00:13, 7 July 2017 (UTC)[reply]

Thanks for the comments, Floquenbeam, TonyBallioni and Martin of Sheffield. I've spent most of the day working really hard reviewing new pages, so it was quite dispiriting to see this thread, even if the IP's comments made little sense! We can't please everyone. Thanks again for making me feel OK about it, Boleyn (talk) 20:45, 6 July 2017 (UTC)[reply]

  • Perhaps worth noting that the OP IP has now been blocked for 6 months and won't be contributing further to this thread. ―Mandruss  20:47, 6 July 2017 (UTC)[reply]
  • Please note that, while it is unlikely that they will make any further posts to this thread, there is nothing stopping them from moving to a new IP and posting here or anywhere else on Wikipedia. MarnetteD|Talk 20:58, 6 July 2017 (UTC)[reply]
  • Once again, WP shoots the messenger. Andy Dingley (talk) 23:30, 6 July 2017 (UTC)[reply]
  • Andy, is it your view that the article should not have been "approved"? My understanding of "approved" is not any great compliment; basically it just means "doesn't qualify for speedy deletion". Do you disagree with that understanding, or do you think that it should have been speedied, and if so under what criterion? --Trovatore (talk) 23:33, 6 July 2017 (UTC)[reply]
  • You know, I just got done running the article through my usual copy edit process for newer articles. It was a disaster when I started, and I am afraid it is still in a horrible state. This is a case where the article probably should have been speedied, TNT'd, or at least moved back into Draftspace. GenQuest "Talk to Me" 00:26, 7 July 2017 (UTC)[reply]

The internet has a term for someone who lobs an inflammatory comment, then sits back to watch the fireworks rather than engaging in constructive discussion. Can anybody remember what it is? Shock Brigade Harvester Boris (talk) 00:55, 7 July 2017 (UTC)[reply]

"still not wrong". Andy Dingley (talk) 01:58, 7 July 2017 (UTC)[reply]

Move to draft as a better alternative to deletion

I would like to propose that we change the way we go about deleting articles. Our current setup is largely binary - keep or delete, with other options such as merging, redirecting, userfying, or moving to draft space being relatively little used. I believe that the process would be much less contentious if the default decision was between keeping or moving to draft, with outright deletion reserved for hoaxes, defamatory material, and the like. Pages moved to draft pursuant to such a process would be move-protected, so that they could not be moved back to mainspace without admin review. Otherwise, the usual rules for drafts would apply: interested editors would be able to continue making improvements to the article until they felt that it deserved review for restoration to mainspace, at which point it would be up to other editors to evaluate the draft and determine whether it was up to snuff. Drafts that went unedited for six months would be deemed abandoned and deleted.

The advantages of this approach, I believe, are that it would make deletion discussions less strident and less prone to gaming and sockpuppetry (why go to all that effort, after all, if the result of losing the discussion is that the article gets moved to draft for potential further improvement?); and, perhaps paradoxically, the elimination of more dross and cruft, because advocates will argue against deleting such material but more often than not will never bother to improve it once it has been moved to draft space. bd2412 T 21:37, 6 July 2017 (UTC)[reply]

The disadvantage I see is that this dramatically increases the amount of noise in the draft namespace, and the amount of material that needs to be policed for serious rules violations - a common and seldom-addressed issue with all policy proposals that seek to diminish the number of deletions in one way or another. I also don't think that people with high stakes in an article (i.e. those most likely to engage in disruptive behaviour) will accept anything less than mainspace. Jo-Jo Eumerus (talk, contributions) 22:38, 6 July 2017 (UTC)[reply]
This is already the practice at NPP for articles that have the possibility of being included in the encyclopedia but that are not currently ready for the article mainspace. There is currently discussion ongoing at WT:NPP as to the best practices for draftifying. I wouldn't support it being the default option at AfD, but for non-harmful new articles, I'm fine with it. TonyBallioni (talk) 22:41, 6 July 2017 (UTC)[reply]
I like the direction of the discussion - my proposal is basically to apply that kind of thinking to all pages nominated for deletion, not just new pages. In many cases, an old page nominated for deletion is just one that was missed when it was new. Of course, the bulk of deletion nominations are fairly new pages, so this would amount to a relatively small extension, if the NPP proposal continues getting fleshed out in that direction. bd2412 T 22:52, 6 July 2017 (UTC)[reply]
I support more use of userfy or move to draft if there's any potential. --S Philbrick(Talk) 01:13, 7 July 2017 (UTC)[reply]
I agree with Jo-Jo Eumerus: it would be a huge pile of stuff that ends up in search results, both locally and on Google; in effect, articles would never get deleted. Many automated processes would then churn through and maintain it, and manual processes would get overloaded and backlogged. Rather than making it the default, users can recommend a move to draft based on the details of the case. That happens sometimes, and there should be justification for it. -- GreenC 13:57, 7 July 2017 (UTC)[reply]
Can we find a way to ignore the automated processes and dispose of those not manually edited in the requisite time? bd2412 T 21:39, 7 July 2017 (UTC)[reply]

WikiSandy (Contextually Enhanced Search)

WikiSandy’s mission is to enhance the ease of knowledge discovery from Wikipedia and, where possible, to make it more efficient for users. WikiSandy’s “tMt” (take me there) links allow jumping directly to sentences of interest, no matter where in an article a sentence may be. We have received some very positive feedback; see Village pump (idea lab). Any help navigating the Village pump (proposals) process would be greatly appreciated. We propose to provide a Wikipedia search service that indexes Wikipedia data semantically, based on sentence structure (subject, subject complement, direct object, etc.) rather than just keywords and titles. It will recognize information that is not directly communicated by the author by relating acronyms, abbreviations, and compound nouns to the appropriate subject matter within an article. Results will be ordered and prioritized by the strength of the correlation between the search term and the sentences returned. Results will provide full sentences where feasible, with deep links to those sentences, making it possible for users to jump directly to the sentences of interest. Such a tool will improve the search experience within Wikipedia and increase the value of the Wikipedia data. What’s unique about WikiSandy?

WikiSandy is an en.wikipedia.org-specific experimental search engine that indexes Wikipedia data semantically, based on sentence structure (subject, subject complement, direct object, etc.) rather than just keywords.
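
To make the indexing idea concrete, here is a minimal sketch of how sentences could be indexed by grammatical role. This is purely illustrative: it is not WikiSandy's code, and the choice of spaCy, the function names, and the sample sentence are assumptions made for the example.

<syntaxhighlight lang="python">
# Illustrative sketch only (not WikiSandy's implementation): index sentences by
# the grammatical role of their words, so a search can match "president" used
# as a subject complement rather than as any keyword occurrence.
from collections import defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

ROLE_BY_DEP = {
    "nsubj": "subject", "nsubjpass": "subject",   # grammatical subject
    "dobj": "object",                             # direct object
    "attr": "complement", "acomp": "complement",  # subject complement
}

def index_article(title, text, index):
    """Add (role, lemma) -> [(title, sentence number, sentence text)] entries."""
    doc = nlp(text)
    for sent_no, sent in enumerate(doc.sents):
        for tok in sent:
            role = ROLE_BY_DEP.get(tok.dep_)
            if role:
                index[(role, tok.lemma_.lower())].append((title, sent_no, sent.text))

index = defaultdict(list)
index_article("Abraham Lincoln",
              "Abraham Lincoln was the 16th president of the United States.",
              index)
print(index[("complement", "president")])  # the full sentence, with position data for a deep link
</syntaxhighlight>

A query parsed the same way (for example, asking for sentences whose complement is "president") would look up the matching (role, lemma) keys and return full sentences, each carrying the position information needed for a "take me there" deep link.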

• It typically returns full sentences for the user to review.

• "Take me there" (tMt) links point to sentences in the actual source articles, allowing the user to jump directly to a given sentence no matter where it is in the article.

• WikiSandy answers follow-up questions while maintaining context. This means you can search for Abraham Lincoln, get results, and then ask, “What did he do?”, and get relevant results (see the sketch after this list).

• Click “aLike” to view other documents in which WikiSandy has discovered the same sentence. This shows you other articles on the same topic.

• Tell Me Something will bring you to random query results.

• Example Questions will give you some demo queries.

• View the User Guide for more details. Please feel free to take it for a test drive at http://www.wikisandy.org
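
The follow-up-question behaviour described above can be sketched just as roughly. Again, this is an assumption-laden illustration rather than WikiSandy's actual code; the SearchSession class and the pronoun list are invented for the example, and a production system would need real coreference resolution.

<syntaxhighlight lang="python">
# Illustrative sketch only: carry the previous query's subject into a follow-up
# question by substituting it for third-person pronouns.
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them", "his", "hers", "its", "their"}

class SearchSession:
    def __init__(self):
        self.last_subject = None  # e.g. "Abraham Lincoln"

    def resolve(self, query):
        """Return the query with pronouns replaced by the remembered subject."""
        words = query.split()
        pronoun_free = not any(w.lower().strip("?,.!") in PRONOUNS for w in words)
        if pronoun_free:
            # A pronoun-free query introduces a new subject to remember.
            self.last_subject = query
            return query
        if self.last_subject:
            words = [self.last_subject if w.lower().strip("?,.!") in PRONOUNS else w
                     for w in words]
        return " ".join(words)

session = SearchSession()
session.resolve("Abraham Lincoln")          # remembered as the current subject
print(session.resolve("What did he do?"))   # -> "What did Abraham Lincoln do?"
</syntaxhighlight>

This only shows the shape of the context a search session has to carry between queries; the actual resolution step in a deployed system would be considerably more involved.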