Wikipedia talk:Wikipedia Signpost/Single/2024-04-25
Comments
The following is an automatically-generated compilation of all talk pages for the Signpost issue dated 2024-04-25. For general Signpost discussion, see Wikipedia talk:Signpost.
In the media: Censorship and wikiwashing looming over RuWiki, edit wars over San Francisco politics and another wikirace on live TV (10,547 bytes · 💬)
Putin's Wiki-censor
Kind of funny to see the Russian government putting so much effort into their own propaganda Wikipedia knockoff. They don't like the facts, so they created their own EncyCopedia. Allan Nonymous (talk) 15:25, 26 April 2024 (UTC)
Linked to on 404 Media. Axem Titanium (talk) 18:58, 29 April 2024 (UTC)
Taylor Tomlinson hosts a wikirace on live TV – again!
- Imagine if the After Midnight staff found out about this shout-out for real, though... Live TV, here we come! : D --Oltrepier (talk) 17:13, 25 April 2024 (UTC)
- Taylor Tomlinson is great! I recommend that anyone check out her stand-up sets on YouTube if you haven't yet. Quercus solaris (talk) 22:56, 3 May 2024 (UTC)
In brief
- For the "King of Wikipedia" bit, there was the opportunity to write something like "the King of Rock and Roll, the king of the UK, and the King of the Jews." Would've been such a satisfying sentence! ~Maplestrip/Mable (chat) 14:19, 25 April 2024 (UTC)
- @Maplestrip: Well Cheesy Chuck seems to have 139, but I wouldn't want to write anything in poor taste. I suppose we could have worked something in about zombies seen in Michigan being all shook up, but I'll leave that for the NY Post, Fox and other Murdoch outlets. Murdoch only has a score of 75, so he wouldn't qualify as king of anything. Smallbones(smalltalk) 19:06, 25 April 2024 (UTC)
- Ok, I've been trying to parse this message all day. What is "Cheesy Chuck" and what do zombies have to do with anything? What's with the sudden mention of American news agencies? ~Maplestrip/Mable (chat) 11:12, 26 April 2024 (UTC)
- I'm fairly certain Cheesy Chuck refers to the king of England. Gråbergs Gråa Sång (talk) 12:16, 26 April 2024 (UTC)
- Mildly surprised Netanyahu the Younger's article was not redirected to his father's as a valid alternative to deletion. Rotideypoc41352 (talk · contribs) 19:42, 25 April 2024 (UTC)
- @Rotideypoc41352: You're correct. Haaretz stated it as
- "Hebrew Wikipedia Votes to Remove Entry on PM Netanyahu's Son Following His Request" (headline)
- "The community of Hebrew Wikipedia editors voted 75-54 to merge Avner Netanyahu's page into that of the Netanyahu family after the Israeli prime minister's son requested that his entry be removed earlier this month" (lede), so the lede is more technically correct and you are right. It's about the same IMHO because apparently there was almost nothing to merge - just something about what football team he cheers for. I do have problems getting around in Hebrew in translation and Haaretz tends to link to their own stories rather than Wikipedia. And of course HeWiki rules are a bit different and the way it's expressed in translation is slightly different. Sorry. Smallbones(smalltalk) 20:23, 25 April 2024 (UTC)
- There is no redirect at he:אבנר נתניהו, though. And you are right that since
HeWiki rules are a bit different and the way it's expressed in translation is slightly different
, the machine translations of the deletion discussion at he:ויקיפדיה:רשימת מועמדים למחיקה/אבנר נתניהו2 are of little help. Rotideypoc41352 (talk · contribs) 21:40, 25 April 2024 (UTC)
- I can't help musing "So, that's 2 articles about him in Haaretz... en-WP WP:N approaching..." Then again, IMO it makes a bit of sense to somewhat disregard such coverage for WP:N purposes. Gråbergs Gråa Sång (talk) 08:01, 26 April 2024 (UTC)
- So, nothing on Maher? Gråbergs Gråa Sång (talk) 05:35, 26 April 2024 (UTC)
- @Gråbergs Gråa Sång: The news came out very late in the writing process, but I'm pretty sure we're going to cover it soon. Oltrepier (talk) 07:27, 26 April 2024 (UTC)
- @Oltrepier: You have done a lot of great work for the Signpost in recent months, but let's avoid such misleading comments. (Gråbergs Gråa Sång commented earlier in the newsroom discussion, so they are probably well aware that this was not a mere timing issue.) Regards, HaeB (talk) 09:47, 26 April 2024 (UTC)
- I'd suggest using Avner Netanyahu --Piotr Konieczny aka Prokonsul Piotrus| reply here 22:57, 27 April 2024 (UTC)
- Regarding Baidu Baike closing their app, the report noted that "
The Beijing-based, Hong Kong-listed company will end support for the app on June 30 to focus on “better user experiences”, according to a statement published on the Baike app on Wednesday. Users can continue to access the service through a mini-program on the main Baidu app
". So they will only close the Baidu Baike app, rather than Baidu Baike the entire website. Thanks. --SCP-2000 03:29, 27 April 2024 (UTC)
- Yeah, I was kind of confused about this -- the source article didn't seem to make any distinction between a phone app and a website. jp×g🗯️ 20:46, 3 May 2024 (UTC)
Medeyko
I appreciate the brief summary, but I have more questions than answers. Was Medeyko always a Russian "agent", or was he "turned" at some point? Did he have a choice, or was he threatened? And was it inevitable that someone like this would end up running Wikimedia Russia? Further, why was the organization dissolved in December? Couldn't it be run from outside the country by expatriates? Frontline covered a similar issue last year, about journalists who were against the idea of operating outside of Russia, IIRC. Viriditas (talk) 21:15, 2 May 2024 (UTC)
- News organizations can operate outside of Russia, but an organization dedicated to organizing events inside Russia such as WMFRu definitely cannot, as that would prevent it from doing anything. Aaron Liu (talk) 21:34, 2 May 2024 (UTC)
- We will never know, but it looks like his primary interest in running Wikimedia.ru was to get financial benefits. When he saw an opportunity to get bigger financial benefits from the state, he immediately moved to grab it. I do not think he cared much about Wikipedia or the Wikipedia movement, at least not about the aspects which did not (indirectly) result in getting money from it. Ymblanter (talk) 05:59, 3 May 2024 (UTC)
Dean Preston
I took the liberty of fixing the supervisor's affiliation - while lowercase "socialist" would also be accurate, he's not a member of any organizations that would lead to his description as a plain, capitalized "Socialist". Abeg92contribs 21:06, 8 May 2024 (UTC)
On Commons, there is a discussion whether a file used by The Signpost is demeaning to the subject. Abzeronow (talk) 21:09, 10 June 2024 (UTC)
News and notes: A sigh of relief for open access as Italy makes a slight U-turn on their cultural heritage reproduction law (6,846 bytes · 💬)
- News has just come in on the Wikimedia-l mailing list that Persian Wikipedia has surpassed 1 million articles. --Andreas JN466 14:46, 25 April 2024 (UTC)
- Do you have any information on the reason for the global bans, especially for Mohsen Salek? Your links just take one to a log page. Was there a discussion about this? Or was it just a mysterious "office action" where no justification is provided? It just seems like this global ban might be based on political reasons, which would be a huge disappointment, I think. Liz Read! Talk! 17:56, 25 April 2024 (UTC)
- There was some speculation in the Signpost newsroom, but it didn't progress to the point where we thought it would be responsible to elaborate on what was published. ☆ Bri (talk) 18:37, 25 April 2024 (UTC)
- I met Mohsen at a conference — such a gregarious conversationalist and delightful person to have around! I'm very curious about what happened. I'll miss seeing him around. Crunchydillpickle🥒 (talk) 19:04, 25 April 2024 (UTC)
- I agree with Liz, this is very worrying. It reeks of political censorship. --NSH001 (talk) 20:50, 25 April 2024 (UTC)
- It's so weird there's no note on why, or on the lack of a reason, especially given the last Signpost issue's discussion of Wikipedia's transparency mentioning blocks as an example. //Replayful (talk | contribs) 21:17, 28 April 2024 (UTC)
- I asked on the Wikimedia-l mailing list on April 25 and have received no reply. Andreas JN466 17:41, 14 May 2024 (UTC)
- @Jayen466: I actually emailed them about the Mardetanha ban a while back, and got a response on April 11th. The WMF said that they could not share any details for privacy reasons. Not sure about the other ban, but they probably have the same reason for why you haven't received a response. QuicoleJR (talk) 15:20, 22 May 2024 (UTC)
- About the Graph Extension: I made the comment on the Telegram channel that this is one of the cases where the relationship between the volunteer community & the Foundation has not been successful. By this I am not blaming the Foundation (well, not having a prompt fix is arguably their fault), but that we volunteers have allowed ourselves to become helpless when faced with a problem like this. There was a time when, if a problem with the software presented itself, volunteers with programming experience &/or knowledge would simply come up with a patch themselves to fix the problem. But now we expect the Foundation to fix all of the problems we encounter. So what was preventing one or more people who were tired of the Foundation's lack of action from doing something? The source code is available for download; it could have been put on a test bed & one or more volunteers could have hacked out a patch, then submitted it to the Foundation. That patch then would have to be seriously evaluated -- rejecting it out of hand would be a bad look for the Foundation. Definitely it would require testing before being applied, which would lead to bugs being identified, & the submitters informed. That's part of the software development process. Worst case would be volunteers confirming the Foundation's statement that fixing this was a difficult task. Nevertheless, something would be happening, & we volunteers would feel empowered. In short, this issue about the Graphs extension is a sign that the volunteer community has adopted learned helplessness into our culture. And it is up to us to expunge this defect, because it does not critically affect the Foundation. --- llywrch (talk) 18:05, 26 April 2024 (UTC)
- We at Wiki Project Med have been working on integrating OWID for nearly a decade now. This initially began as a collaboration with WMF staff using the graph extension. When that failed, we duplicated OWID on the wmcloud.[2] And then imported that into a mediawiki.[3] To get it on WP we were told it needs to move to production servers and have a WMF team to manage it. We next looked at just bringing OWID in directly via a gadget after getting reader consent, which is now live in EU WP.[4] There is currently discussion regarding whether this will be permitted.[5] I guess we will see. But the community is working on various potential solutions. Doc James (talk · contribs · email) 22:29, 26 April 2024 (UTC)
- While I agree that learned helplessness is part of the problem here, I think it goes further than that. It looks like the articles were intentionally kept in a broken state in order to try to put pressure on WMF to implement a preferred solution. There are failures here on all sides. That said, I think the biggest problem is that once you dig, there is wide disagreement among the community about what "graphs" should be. It's hard to fix a problem when you can't even figure out what you are supposed to be fixing. Bawolff (talk) 02:11, 28 April 2024 (UTC)
Recent research: New survey of over 100,000 Wikipedia users (18,966 bytes · 💬)
Wikipedians are more careful than to believe in the results of convenience sampling. -SusanLesch (talk) 14:21, 25 April 2024 (UTC)
- Huh, can you explain in more detail why you characterize the sampling method used by this survey as "convenience sampling"? That term is most often used for methods that rely on a grossly unrepresentative population (say surveying a class of US college students for making conclusions about all humans). But "people who access the Wikipedia website within a given timespan" is a pretty reasonable proxy for "Wikipedia users" (in the general sense).
- For context: Recruitment of survey participants via banners or other kinds of messages on the Wikipedia website itself is kind of the state of the art in this area. (It has also been used in numerous editor and reader surveys conducted by the Wikimedia Foundation.) It e.g. forms the basis of many of the most-cited results on e.g. the gender gap among Wikipedia editors. Yes, it comes with various biases (which, as already indicated in the review, one can try to correct after the fact using various means, see e.g. our earlier coverage here of an important 2012 paper which did this regarding editors: "Survey participation bias analysis: More Wikipedia editors are female, married or parents than previously assumed", and the WMF's "Global Gender Differences in Wikipedia Readership" paper also listed in this issue). But so does any other method (door-knocking, cold-calling landline telephones, etc. - and regarding phone surveys, these biases have become much worse in the last decade or so, at least in the US, as political pollsters have found out).
- In sum, it's fine to call out specific potential biases in such surveys (e.g. I have been reminding people for over a decade now that - per the aforementioned 2012 paper - one of the best available estimates for the share of women editors in the US is 22.7% as of 2008, considerably higher than various other numbers floating around). But dismissing their results entirely strikes me as a nirvana fallacy.
- Regards, HaeB (talk) 19:25, 25 April 2024 (UTC) (Tilman)
- Hi, Mr./Dr. Bayer, thank you for your enthusiastic defense. Your sample size is admirable. Maybe our difficulty is in defining terms. I use the term convenience to describe samples created at the convenience of the researcher, to include self-selected participants. The latter is the problem here. I have no knowledge of statistics to share, only the admonition from a former professor that convenience surveys are the weakest sort. It's pretty simple: I never do surveys. My sister always does. The same caveat applied when Elon Musk asked whether he should step down as head of Twitter. His answer looks legitimate and scientific all the way down to one decimal point. I promise to read your article and all of its sources in detail (which I have not had a chance to do) after my editing chores are done. -SusanLesch (talk) 13:55, 26 April 2024 (UTC)
- I still sense a lot of confusion here.
Your sample size is admirable.
- Not sure what you mean by the possessive pronoun here, I was not involved at all with this survey.
Maybe our difficulty is in defining terms.
- If you were using the term "convenience sampling" in a different meaning than the established one, it would have been good to clarify that from the beginning.
to include self-selected participants
- It sounds like you are referring to the mundane fact that participation in the survey was voluntary, which is the case for almost all large-scale social science surveys (and even legally compulsory surveys like the US census have great trouble achieving a 100% response rate and avoiding undercounting). Again, while this might cause participation biases, these can be examined and to some extent handled (see above). It's not a valid reason for dismissing such empirical results out of hand.
- I am also very unclear about the relevance of your sister and Elon Musk to this conversation, except perhaps that the latter's social media use illustrates the dangers of shooting off snarky one-sentence remarks based on a very incomplete understanding of the topic being discussed. In any case, I appreciate your intention to now actually read the Signpost story that you have been commenting on.
- Regards, HaeB (talk) 21:00, 26 April 2024 (UTC)
- Mr./Dr. Bayer, I don't have your fancy vocabulary, nor am I being
snarky
(nor was Mr. Musk, who asked an honest question). This discussion has become so unpleasant that I no longer wish to read your sources' methodology. The sampling your article describes leads us away from high grade information. -SusanLesch (talk) 16:54, 27 April 2024 (UTC)
- It is great that we have some new good survey data about the community. It is ridiculous that they are not available under an open licence as open data, and that such a big survey was done without WMF cooperating with this and/or ensuring the data will be available. This is something for the mentioned white paper on best research practices to consider, actually. --Piotr Konieczny aka Prokonsul Piotrus| reply here 00:57, 26 April 2024 (UTC)
- I am a bit confused about what you are referring to.
It is ridiculous that they are not available under an open licence as open data
- the dataset is available (it's how I was able to create the graphs for this review, after all), and licensed under CC BY-SA 4.0.
such a big survey was done without WMF cooperating with this
- judging from the project's page on Meta-wiki, the team extensively cooperated with the Wikipedia communities where the survey was to be run (and also invited feedback from some WMF staff who had previously run related surveys). Plus they followed best practices by creating this public project page on Meta-wiki in the first place (actually on your own suggestion, it seems?), something even some WMF researchers occasionally forget, unfortunately. What's more, the team also notified the research community in advance on the Wiki-research-l mailing list.
- Regards, HaeB (talk) 03:46, 26 April 2024 (UTC)
- PS: Also keep in mind that the Wikimedia Foundation has so far not been releasing any datasets from its somewhat comparable "Community Insights" editor surveys. (At least that is my conclusion based on a cursory search and this FAQ item; CCing TAndic and KCVelaga to confirm.) So I am unsure why you are confident that a collaboration with WMF would have been
ensuring the data will be available
. - PPS: To clarify just in case, I entirely agree with you on the principle that (sanitized) replication data for such surveys should be made available as open data.
- Regards, HaeB (talk) 04:08, 26 April 2024 (UTC)
- @HaeB what you write in PPS is pretty much what I meant. Reading the Signpost article gave me the impression this is not the case here (
This dataset paper doesn't contain any results from the survey itself. And from the communications around it (including the project's page on Meta-wiki at Research:Surveying readers and contributors to Wikipedia) it is not clear whether and when the authors or others are planning to publish any analyses themselves. Hence we are taking a quick look ourselves at some topline results below (note: these are taken directly from the "filtered" dataset published by the authors, without any weighing by language or other debiasing efforts).
) I gather that something is available, but not as much as it should be. As for PS, yes, WMF is hardly a paragon of virtue in this regard either, and it is worth complaining about it too. WMF should be a paragon here, and should be both showcasing and enforcing best practices. Piotr Konieczny aka Prokonsul Piotrus| reply here 01:48, 28 April 2024 (UTC)
- Hi @HaeB, thanks for the ping and sharing analysis of this survey data! I'm confirming that we don't release the Community Insights data under open access, as the FAQ states, because we don't have the resourcing to do so (though we are open to working with Affiliates and Researchers under NDA).
- To shine a bit of a light on at least my understanding why we don't do this: typically the most interesting data in Community Insights is the demographic data, which also happens to be the most sensitive. As procedures for data re-identification have become more sophisticated (cf. Rocher et al. 2019), survey techniques used for decades for deidentification and anonymization have fallen behind (Evans et al. 2023). This becomes even more complex as lots of data about Wikimedians is open and queryable, and thus provides secondary datasets to potentially use for identification. One current approach to deidentification is differential privacy, which can increase plausible deniability about participation in a survey (ibid.) by shifting data around within the dataset, but this requires resourcing to do it right and increases confidence intervals, which then require larger sample sizes. However, active editors are a finite and relatively small population (compared to, say, the country-level populations for the European Social Survey or American Community Surveys), and with the tools we have to reach them while maintaining data integrity, increasing sample sizes is currently not possible. Another approach would be to do more heavy-handed data suppression and grouping (eg. the US Census ACS data suppression procedure, which suppresses 1-year sample data for any geographic or group with less than 65,000 eligible participants), which would cause discrepancies in independent analyses and remove most variables of research interest. While the Community Insights data may seem quite trivial from a US perspective, we also have to think about a whole world of laws and possibilities where it may not be trivial (“unknown unknowns”). In essence, before we release any data for open access, we want to be extra careful about the privacy implications of that data – because, once it’s out there, it’s out there forever.
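- (Editorial aside, for readers unfamiliar with the technique mentioned above: the "plausible deniability" idea behind differential privacy can be illustrated with randomized response, a classic survey-privacy method. The sketch below is purely illustrative — the function names, the 0.75 probability, and the data are invented for this example and are not WMF's actual procedure.)

```python
# Hypothetical illustration (not WMF's actual deidentification procedure):
# randomized response gives each respondent plausible deniability about a
# sensitive yes/no answer, while the aggregate rate stays recoverable.

import random

def randomized_response(truth, p_truth=0.75, rng=random):
    """Report the true answer with probability p_truth, otherwise a coin flip."""
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(reported, p_truth=0.75):
    """Invert the noise: E[reported] = p_truth * rate + (1 - p_truth) * 0.5."""
    observed = sum(reported) / len(reported)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# With many respondents the aggregate rate is recoverable even though no
# individual reported answer can be trusted.
rng = random.Random(0)
answers = [rng.random() < 0.3 for _ in range(100_000)]  # ~30% true rate
reported = [randomized_response(a, rng=rng) for a in answers]
```

As the comment in the sketch notes, the trade-off is exactly the one described above: the injected noise widens confidence intervals, which in turn demands larger samples.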
- Regarding the analyses above and questions others have had about the quality of the data, my instinct from a brief look at the age distribution (specifically the modal age categories of 18-24 and 65+) is that the survey attrition between who started and who got to the demographic questions at the end of a rather long survey may be biased towards people who have more time (in this case, potentially college students and retirees). Based on the CentralNotice banner being displayed to both logged in and logged out users, I would assume that the sample is primarily readers rather than contributors (there are more logged out than logged in people by a large margin), and the gender data more closely resembles that of the Wikimedia Readers Survey recently conducted by @YLiou (WMF). I’m less worried about the sampling bias (as it was technically randomly sampled after the initial 100% display, though different sampling rates and changes within wikis creates some complications for calculating sampling error estimates) than the non-response bias (different response rates from different types of users conditional on being sampled), which could both be introduced during the survey banner display and again in attrition on who is willing to respond to any individual item and go through completing the entire survey. Weights could help the data be more representative – I would at least consider applying weights to the wiki level based on the Wiki comparison data, and potentially by geographic data on Wiki Stats. All of that said, regardless of the limitation of whether the data should be used for population estimates, it can still be very useful for in-group analyses (eg. comparing demographics on a sentiment question) and it’s nice to see it published for open use. - TAndic (WMF) (talk) 15:09, 13 May 2024 (UTC)
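- (Editorial aside: the wiki-level weighting suggested above is post-stratification — respondents are re-weighted so each group's weighted share matches its known population share. A minimal sketch, with a made-up function and toy numbers that are not from the survey or from the Wiki comparison data:)

```python
# Hypothetical sketch of post-stratification weighting: respondents from
# over-represented groups (e.g. home wikis) are down-weighted, and those
# from under-represented groups are up-weighted, so that the weighted
# sample matches known population shares.

from collections import Counter

def poststratify(respondents, population_share):
    """Return one weight per respondent.

    respondents: list of group labels (e.g. home wiki), one per respondent.
    population_share: dict mapping group label -> known population share
                      (shares should sum to 1 over the groups present).
    """
    n = len(respondents)
    sample_share = {g: c / n for g, c in Counter(respondents).items()}
    # Weight = population share / sample share.
    return [population_share[g] / sample_share[g] for g in respondents]

# Toy example: 'en' makes up 80% of the sample but only 60% of the population.
sample = ["en"] * 8 + ["fr"] * 2
weights = poststratify(sample, {"en": 0.6, "fr": 0.4})
```

As noted above, weighting like this can reduce sampling imbalance between wikis, but it cannot by itself repair non-response bias within a group.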
It would be interesting (at least to me) to see the results/analyses of the following questions from the survey:
- As a reader, overall, how much time do you spend on Wikipedia searching for information, reading articles, etc., on average:
- Do/did you discuss Wikipedia with...
- Would you say that Wikipedia has ever contributed to changing your opinion on a political subject?
- Overall, in your opinion, for the following areas, does Wikipedia have a neutral presentation of the various admissible points of view?
- In your opinion, are the following statements about the development of Wikipedia true or false?
- If Wikipedia disappeared, it would be, for you:
- Do you know any Wikipedia contributors among your friends, family, or professional contacts?
- What "hinders you" from contributing, or contributing more?
Anyway, thanks for creating those graphs and sharing some of the topline results! Some1 (talk) 00:08, 27 April 2024 (UTC)
HaeB: The issue with the survey is that the sample is non-random, so the results cannot be relied upon. It is not impossible that the self-selected participants represent a valid sample of the population, but there is no assurance that this is so. Very often, such a sample turns out to be skewed. Chiswick Chap (talk) 11:31, 28 April 2024 (UTC)
HaeB: I've found at least two attempts to randomize content (which might be easier than randomizing users). That they exist suggests that the "state of the art" remains RCTs.
- The first is by Aaron Halfaker of Microsoft, formerly WMF.[1]
- The second I daresay had a hilarious hypothesis.[2]
References
- ^ Halfaker, A.; Kittur, A.; Kraut, R.; Riedl, J. (October 2009). A jury of your peers: quality, experience and ownership in Wikipedia. 5th International Symposium on Wikis and Open Collaboration. Association for Computing Machinery (ACM). pp. 1–10. doi:10.1145/1641309.1641332 – via Penn State.
For our analysis, we used a random sample of approximately 1.4 million revisions attributed to registered editors (with bots removed) as extracted from the January, 2008 database snapshot of the English version of Wikipedia made available by the Wikimedia Foundation.
- ^ Thompson, Neil; Hanley, Douglas (February 13, 2018). "Science Is Shaped by Wikipedia: Evidence From a Randomized Control Trial". MIT Sloan Research Paper No. 5238-17. Social Science Research Network. doi:10.2139/ssrn.3039505.
From 2013-2016 we ran an experiment to ascertain the causal impact of Wikipedia on academic science. We did this by having new Wikipedia articles on scientific topics written by PhD students from top universities who were studying those fields. Then, half the articles were randomized to be uploaded to Wikipedia, while the other half were not uploaded.
-SusanLesch (talk) 13:41, 4 May 2024 (UTC)
- The abstract says, "The survey includes 200 questions about..." and the instructions to respondents say, "It will take you from 10 to 20 minutes to complete it." The survey questions are in a form linked from the meta page but not easy to browse as they are locked in that interface. It is not clear to me which respondents got served which questions, but obviously a study design that expects 200 questions to be answered in 10 minutes needs explanation. There is no paper, so there is not currently a way to understand the methods, right? Bluerasberry (talk) 16:45, 30 April 2024 (UTC)
The survey questions are in a form linked from the meta page but not easy to browse
- but they are also reproduced in the codebook.
There is no paper
- excuse me? The dataset paper mentioned and cited in the review does discuss methods.
It is not clear to me which respondents got served which questions
- Tables 1 and 2 in the paper provide detailed information about how many respondents got how far in the survey in which language (we should be so lucky to get that level of detail in every report about a Wikipedia survey).- As for the duration, that's an interesting question, but honestly this wouldn't be the first survey to make over-optimistic promises about how long it takes to complete it.
- Regards, HaeB (talk) 13:45, 1 May 2024 (UTC)
Traffic report: O.J., cricket and a three body problem (0 bytes · 💬)
Wikipedia talk:Wikipedia Signpost/2024-04-25/Traffic report
WikiConference report: WikiConference North America 2023 in Toronto recap (1,241 bytes · 💬)
Great recap! Nice to see the gallery. Thanks! Crunchydillpickle🥒 (talk) 19:21, 25 April 2024 (UTC)
- I agree. I enjoyed the video interviews. I'm looking forward to attending WikiConference 2024. Fingers crossed! Ckoerner (talk) 21:48, 25 April 2024 (UTC)
Minor correction: We were all back in the building well before 11:45 ... since it was Remembrance Day in Canada, I very much recall observing the usual moment of silence at the 11th hour. Daniel Case (talk) 04:27, 27 April 2024 (UTC)
- A great conference, lots of excellent discussions and well organized. Thanks to the many good volunteers who put it together. The wait during the unexpected delay actually gave attendees a lot of extra conference time to further meet and discuss, not the best way to mix and mingle but some good things were accomplished during the time-out. Randy Kryn (talk) 13:19, 27 April 2024 (UTC)
WikiProject report: WikiProject Newspapers (Not WP:NOTNEWS) (619 bytes · 💬)
- Finding out the Wikipedia Library – which I've had access to for years – gives access to Newspapers.com is a literal godsend for future article research. Krisgabwoosh (talk) 16:47, 25 April 2024 (UTC)
- I've always loved the interviews with editors involved with a WikiProject. Thank you for resuming this informative feature! - kosboot (talk) 13:18, 1 May 2024 (UTC)