
Wikipedia:Articles for deletion/Center for AI Safety

The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.

The result was no consensus. A consensus isn't going to form here, and I would encourage editors to continue discussing a potential merger on the Talk. Star Mississippi 02:24, 3 August 2023 (UTC)[reply]

Center for AI Safety


No evidence of notability beyond a single announcement that AI has inherent existential risks, which hardly constitutes insight. Nevertheless, the only evidence here is of a one-man band, and the organisation fails WP:NORG as well as WP:GNG - and has to be considered independently of its director, whose work does not confer notability on the center they direct. Alexandermcnabb (talk) 04:42, 5 July 2023 (UTC)[reply]

  • The Boston Globe profiles the organization and its director's research in depth (excerpt):
Hendrycks and his collaborators have had some success developing an artificial conscience that could steer AIs toward moral behaviors. And in one paper, he explores the possibility of a “moral parliament” that would inject instantaneous ethics into the quick, weighty decisions that AIs will be making all the time. […] How, exactly, such a system would be implemented is unclear. And even if it were, it’s easy to imagine how it could err.
So in the meantime, the Center for AI Safety is pursuing more modest approaches. It’s providing high-octane computer resources to AI safety researchers. It has published an online course on the subject. And a group of philosophy professors is finishing up a months-long fellowship at the center.
CAIS hopes to run a seminar for lawyers and economists at the end of August — anything to get more people thinking about the risks of AI.
  • Fox News Tech covers the organization's paper "An Overview of Catastrophic AI Risks" (and in a separate article, covers the paper "Natural Selection Favors AIs Over Humans"):
Tech experts, Silicon Valley billionaires and everyday Americans have voiced their concerns that artificial intelligence could spiral out of control and lead to the downfall of humanity. Now, researchers at the Center for AI Safety have detailed exactly what "catastrophic" risks AI poses to the world.
"The world as we know it is not normal," researchers with the Center for AI Safety (CAIS) wrote in a recent paper titled "An Overview of Catastrophic AI Risks." […]
CAIS is a tech nonprofit that works to reduce "societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards," while also acknowledging artificial intelligence has the power to benefit the world. […]
The CAIS leaders behind the study, including the nonprofit’s director Dan Hendrycks, broke down four categories encapsulating the main sources of catastrophic AI risks, which include: malicious use, the AI race itself, organizational risks and rogue AIs.
  • "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
The Centre for AI Safety says it reduces risks from AI through research, field-building, and advocacy.
The AI research includes: identifying and removing dangerous behaviours; studying deceptive and unethical behaviour in it; training AI to behave morally; and improving its security and reliability.
The centre says it also grows the AI safety research field through funding, research infrastructure, and educational resources.
And it raises public awareness of AI risks and safety, provides technical expertise to inform policymaking and advises industry leaders on structures and practices to prioritise AI safety.
The Center for AI Safety divides the risks of AI into eight categories. Among the dangers it foresees are AI-designed chemical weapons, personalized disinformation campaigns, humans becoming completely dependent on machines and synthetic minds evolving past the point where humans can control them. […]
Tech writer Alex Kantrowitz noted on Twitter that the Center for AI Safety’s funding was opaque, speculating that the media campaign around the danger of AI might be linked to calls from AI executives for more regulation. In the past, social media companies such as Facebook used a similar playbook: ask for regulation, then get a seat at the table when the laws are written.

Relisted to generate a more thorough discussion and clearer consensus.
Relisting comment: Relisting to consider option of a Merge. If you have sources to share, please just provide a link to an article, do not reproduce the article within this discussion.
Please add new comments below this notice. Thanks, Liz Read! Talk! 05:36, 12 July 2023 (UTC)[reply]

  • Comment Most of the existing text was based on unsuitable sources, and the "mission is to reduce societal-scale risks from AI" line was copyvio. XOR'easter (talk) 23:06, 12 July 2023 (UTC)[reply]
Re "arXiv preprints are not reliable sources" – this is true, but some of the preprints removed were published in peer-reviewed machine learning conferences. I can replace the arXiv links with NeurIPS links later. Enervation (talk) 06:40, 13 July 2023 (UTC)[reply]
Even if they're peer-reviewed, they're primary sources. Wikipedia articles aren't CVs for individual researchers or coatracks for groups to hang all their publications. XOR'easter (talk) 20:01, 14 July 2023 (UTC)[reply]
It's good to distinguish primary and secondary sources, but I'm not sure that the page you linked, Wikipedia:No original research, states that peer-reviewed research as primary sources shouldn't be covered on Wikipedia. If an academic meets WP:ACADEMIC, should the Wikipedia page only include content that news media and textbooks have picked up on and omit any other research? In any case, some of the research was covered in secondary sources, and I've added this back in. Enervation (talk) 07:45, 17 July 2023 (UTC)[reply]
It would have no elevating effect on notability anyhow Graywalls (talk) 08:32, 26 July 2023 (UTC)[reply]
Note that the Boston Globe article is essentially a full-length organization profile – it's not a piece that reflects the author's opinion, despite being placed in the opinion section. This piece discusses a variety of topics in a fair amount of depth, including: an explanation of the papers "An Overview of Catastrophic AI Risks" and "Natural Selection Favors AIs Over Humans"; activities such as the compute cluster, online course, and fellowship for philosophy professors; and its statement on AI risk of extinction. Besides the Boston Globe piece, there are also many other articles that discuss the Center for AI Safety. I don't think Statement on AI risk of extinction would make the most sense as a redirect target, as much of the content on the Center for AI Safety page would not be in scope there. Enervation (talk) 21:50, 18 July 2023 (UTC)[reply]

Relisted to generate a more thorough discussion and clearer consensus.
Please add new comments below this notice. Thanks, Liz Read! Talk! 13:32, 19 July 2023 (UTC)[reply]

  • Keep - Improved substantially since nomination, and my personal preference is that these organizations are particularly important to have on Wikipedia. There are at least three major mainstream news stories with the organization's name in the headline. Sandizer (talk) 14:16, 24 July 2023 (UTC)[reply]
  • Redirect to Dan Hendrycks – the article at this point doesn't appear to meet WP:NCORP, and all the sources are clustered around one sentence, suggesting difficulty in getting in-depth coverage of the organization itself. This article, as well as the director's article, are both short, so a redirect is appropriate. Graywalls (talk) 05:31, 25 July 2023 (UTC)[reply]
    From my read of WP:NCORP (and the sources in question), it seems to favor keeping the article: "A company, corporation, organization, group, product, or service is presumed notable if it has been the subject of significant coverage in multiple reliable secondary sources that are independent of the subject." Enervation (talk) 06:28, 26 July 2023 (UTC)[reply]
    And significant coverage is coverage universally derived from a single activation and pronouncement? Best Alexandermcnabb (talk) 06:32, 26 July 2023 (UTC)[reply]
@Alexandermcnabb: Not sure if I understand your question. Graywalls (talk) 06:34, 26 July 2023 (UTC)[reply]
All of the RS sources are "Artificial intelligence warning over human extinction". That's one round of announcements/press release/pitching media on a single topic. Best Alexandermcnabb (talk) 06:52, 26 July 2023 (UTC)[reply]
announcements and PRs don't count towards notability at all, nor do interviews with the subject. A significant amount of coverage in a single piece on the company is expected. Graywalls (talk) 06:58, 26 July 2023 (UTC)[reply]
"announcements don't count towards notability at all, nor do interviews with the subject" I believe you're thinking of primary sources here – news coverage in reliable sources that substantively discusses the organization would definitely count. Enervation (talk) 13:24, 26 July 2023 (UTC)[reply]
@Enervation: A lot of the content is about Dan and what he's done, and what's specifically devoted to the company itself isn't all that thorough, so it makes sense to just talk about it on Dan's page as an alternative to deleting this article. Graywalls (talk) 06:34, 26 July 2023 (UTC)[reply]
  • Redirect to Statement on AI risk of extinction. The sources really only seem to cover the organization in the context of the statement. Steven Walling • talk 06:48, 26 July 2023 (UTC)[reply]
    There's not much overlap between the two articles as is, and they're quite different topics conceptually. For example, I'm not sure that it makes sense to cover the organization's activities on the page Statement on AI risk of extinction, and the wide array of responses to the statement would not be the focus of the article on the organization.
    (Not directed at you in particular) There appears to be some disagreement here about what counts as significant coverage, so it's worth noting the array of information covered in reliable sources:
    • The paper "An Overview of Catastrophic AI Risks", including a substantial summary and synthesis
    • The statement on AI risk of extinction and responses
    • Various other papers, such as "Natural Selection Favors AIs Over Humans" and "X-Risk Analysis for AI Research" (note that there is a full-length article dedicated to this)
    • The organization's compute cluster for AI safety researchers
    • The organization's online course
    • The fellowship for philosophy professors and planned seminar for lawyers and economists
    There are a number of reliable sources that have substantive coverage of the organization itself, even if we exclude discussion of the statement on AI risk. This coverage is much more in-depth than "brief mentions and routine announcements", and is more than enough to "make it possible to write more than a very brief, incomplete stub about the organization". Enervation (talk) 14:03, 26 July 2023 (UTC)[reply]

Relisted to generate a more thorough discussion and clearer consensus.
Relisting comment: Final relist as there is more than one Merge/Redirect target suggested here.
Please add new comments below this notice. Thanks, Liz Read! Talk! 14:08, 26 July 2023 (UTC)[reply]

Reply: it's a long article that is quite detailed, but there's very little coverage of the article subject, Center for AI Safety, and the value of this article for notability purposes is minimal. Graywalls (talk) 21:22, 1 August 2023 (UTC)[reply]
The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.