Wikipedia:Articles for deletion/Friendly artificial intelligence

From Wikipedia, the free encyclopedia
The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.

The result was keep. Some issues with the article remain, but it appears editors are working them out; from reading the discussion I believe a keep is warranted. Tawker (talk) 06:58, 5 April 2014 (UTC)[reply]

Friendly artificial intelligence

  • Delete Since the subject appears to be non-notable and/or original research, I propose to delete the article. Although the general issue of constraining AIs to prevent dangerous behaviors is notable, and is the subject of Machine ethics, this article mostly deals with this "Friendliness theory" or "Friendly AI theory" or "Coherent Extrapolated Volition", neologisms that refer to concepts put forward by Yudkowsky and his institute which have not received significant recognition in academic or otherwise notable sources.
  • Comment - I completed the nomination for IP 131.114.88.73. ansh666 19:00, 28 March 2014 (UTC)[reply]
  • Strong Keep. The I. J. Good / MIRI conception of posthuman superintelligence needs to be critiqued, not deleted. The (alleged) prospect of an Intelligence Explosion and a nonfriendly singleton AGI has generated much controversy, both on the Net and elsewhere (e.g. the recent Springer Singularities volume).

Several of the external links need updating. --Davidcpearce (talk) 21:38, 28 March 2014 (UTC)[reply]

Note: This debate has been included in the list of Social science-related deletion discussions. • Gene93k (talk) 23:28, 28 March 2014 (UTC)[reply]
Note: This debate has been included in the list of Computing-related deletion discussions. • Gene93k (talk) 23:28, 28 March 2014 (UTC)[reply]

Comment: (I'm the user who proposed the deletion) There is already the Machine ethics article covering these issues. The Friendly artificial intelligence article is almost entirely about specific ideas put forward by Yudkowsky and his institute. They may be notable enough to deserve a mention in Machine ethics, but not an article of their own. Most of the references are primary sources such as blog posts or papers self-published on MIRI's own website, which don't meet reliability criteria. The only source published by an independent publisher is the chapter written by Yudkowsky in the Global Catastrophic Risks book, which is still a primary source. The only academic source is Omohundro's paper which, although related, doesn't directly reference these issues. As far as I know, other sources meeting reliability criteria don't exist. Moreover, various passages of this article seem highly speculative, are not clearly attributed, and may well be original research. For instance: "Yudkowsky's Friendliness Theory relates, through the fields of biopsychology, that if a truly intelligent mind feels motivated to carry out some function, the result of which would violate some constraint imposed against it, then given enough time and resources, it will develop methods of defeating all such constraints (as humans have done repeatedly throughout the history of technological civilization)." Seriously, Yudkowsky can infer that using biopsychology? Biopsychology is defined in its own article as "the application of the principles of biology (in particular neurobiology), to the study of physiological, genetic, and developmental mechanisms of behavior in human and non-human animals. It typically investigates at the level of nerves, neurotransmitters, brain circuitry and the basic biological processes that underlie normal and abnormal behavior."

Anon, like you, I disagree with the MIRI conception of AGI and the threat it poses. But if we think the academic references need beefing up, perhaps add the Springer volume - or Nick Bostrom's new book "Superintelligence: Paths, Dangers, Strategies" (2014)? --Davidcpearce (talk) 08:17, 29 March 2014 (UTC)[reply]
  • Delete There are several issues here: the first is that Friendly AI is and always has been WP:OR. That it has lasted this long on Wikipedia is evidence of the lack of interest among researchers, who would otherwise have recognized this and nominated it for deletion sooner. As we all know, Wikipedia is not a place for original research. Second, even if you manage to find WP:NOTABLE sources, this does not substantiate an article for it when it can and should be referenced in the biography of the author. Frankly, even that is a stretch, given that it doesn't pass WP:TRUTH as a verifiable topic, but I don't think anyone would object to it. Third, in WP:TRUTH, the minimum condition is that the information can be verified from a notable source. This strengthens the deletion argument, as there are no primary, peer-reviewed sources on the topic of Friendly AI. And it is not sufficient to pass notability by proxy; using a notable source that references non-notable sources, such as Friendly AI web publications, would invalidate such a reference immediately. Fourth, even if we were to accept such a stand-alone article, it would be difficult to bring it to an acceptable quality due to the immense falsehood of the topic. This kind of undue weight issue is mentioned in WP:TRUTH. Therefore, and in light of these issues, I strongly recommend deletion. --Lightbound talk 20:22, 29 March 2014 (UTC)[reply]
Lightbound, for better or worse, all of the essays commissioned for the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment", Amnon H. Eden, James H. Moor and Johnny H. Soraker, editors) were academically peer-reviewed, including Eliezer's "Friendly AI" paper and critical comments on it. --Davidcpearce (talk) 20:43, 29 March 2014 (UTC)[reply]
I'm afraid I'm going to have to invoke WP:COI, as you, David, were involved with the organization, publication, and execution of that source. And you were also a contributing author beyond this. Any administrator considering this page's contents should be made aware of that fact. Now, moving back to the main points: firstly, Friendly AI as a theory is WP:PSCI, and any Wikipedia article that would feature it would immediately have to contend with issues of WP:UNDUE and WP:PSCI. That Springer published an anthology of essays does not substantiate the mathematical or logical theories behind Friendly AI theory. In fact, this will never occur, as it is mathematically impossible to do what the theory suggests and intractable in practice, even if it were. That this wasn't caught by the referees calls into question the validity of that source. Strong evidence can be brought here to counter the theory, and it would end up spilling over into the majority of the contents of the article as to why it is WP:PSCI. Should every Wikipedia page become an open critique of fringe and pseudoscientific theories? I would hope not. Further, to substantiate a stand-alone article, this topic will need several high-quality primary sources. Even if we somehow allow these issues I've raised to pass, that final concern should be sufficient to recommend deletion alone. --Lightbound talk 21:12, 29 March 2014 (UTC)[reply]
Hm? Lightbound, you wrote "it is mathematically impossible to do what the theory suggests and intractable in practice". What, specifically, are you claiming is 'mathematically impossible', and how do you know this? On what basis are you confident in your original-research mathematical disproof of a published, peer-reviewed academic anthology? Have you even read the book in question? -Silence (talk) 09:34, 1 April 2014 (UTC)[reply]
The source you are referring to has already been discredited within the comments here, with verifiable links and quotes. --Lightbound talk 09:58, 1 April 2014 (UTC)[reply]
I don't know which refutation you're referring to; to my knowledge, Singularity Hypotheses is still taken seriously as an academic publication under Springer, and it's certainly peer-reviewed. But you're also changing the topic. How about just answering my question? Then we can move on to other topics at our leisure. What is the 'mathematical impossibility' you have in mind? -Silence (talk) 10:23, 1 April 2014 (UTC)[reply]
Lightbound, if I have a declaration of interest to make, it's that I'm highly critical of MIRI's concept of "Friendly AI" - and likewise of both the Kurzweilian and MIRI conceptions of a "Technological Singularity". Given my views, I didn't expect to be invited to contribute to the Springer volume; I wasn't one of the editors, all of whom are career academics.
Either way, to say that there are "no primary, peer-reviewed sources on the topic of Friendly AI" is factually incorrect. It's a claim that you might on reflection wish to withdraw. --Davidcpearce (talk) 22:03, 29 March 2014 (UTC)[reply]
(OP) The Springer publication is paywalled; I can only access the first page, where Yudkowsky discusses examples of anthropomorphization in science fiction. Does the paper substantially support the points in this article? Even if it does, it is still a primary source. If I understand correctly, even though Springer is generally an academic publisher, this volume is part of the special series "The Frontiers Collection", which is aimed at non-technical audiences. Hence I wouldn't consider it an academic publication.
I think that the subject may be notable enough to deserve a mention in MIRI and/or Machine ethics, but not notable and verifiable enough to deserve an article on its own. — Preceding unsigned comment added by 131.114.88.192 (talk) 21:14, 29 March 2014 (UTC)[reply]
David, there is not a single primary, peer-reviewed journal article on the scientific theory of "Friendly AI". And there is a very logical reason why there is not, and it is related to why it was published in an anthology. "Friendly AI" can not survive the peer-review process of a technical journal. To do so, such a paper would need to come in the form of a mathematical proof or, at the very least, a rigorous conjecture. As pointed out above, the book is oriented towards a non-technical audience. Again, even if we let this source pass (which we shouldn't), this is not sufficient in quality or quantity to warrant a stand-alone article. --Lightbound talk 22:26, 29 March 2014 (UTC)[reply]
Lightbound, your criticisms of Friendly AI are probably best addressed by one of its advocates, not me! All I was doing was pointing out that your original claim - although made I'm sure in good faith - was not factually correct. --Davidcpearce (talk) 23:06, 29 March 2014 (UTC)[reply]
Comment: I still strongly support deletion. David, feel free to cite the actual rigorous mathematical conjecture or scientific theory paper that directly entails the "Friendly AI" theory and I'll gladly concede; however, if you cite the anthology from Springer, then it has its own issues, though largely moot, as one source is not enough for a stand-alone article. That a source is from a major publisher does not automatically make it sufficient to establish the due diligence in the spirit of WP:NOTABLE, especially in light of the arguments made against it above. You could replace "Friendly AI" with any pseudoscientific theory and I would (and have, in the past) respond the same. This is a decidedly weak minority POV that can scarcely stand on its own outside of this encyclopedia. Yet, somehow, it has spread into many articles and sideboxes on Wikipedia as if a "de facto" part of machine ethics! That no one has taken issue with it until now is because it has simply been ignored. Lastly, I would point out that if your primary concern were WP:POV, the article could have reflected that before it was nominated for deletion, as it has been in place for years, and you have ties with its author and those interested in its theme. Again, sharing a close connection with the topic and/or authors should be noted by administrators. --Lightbound talk 23:21, 29 March 2014 (UTC)[reply]
Lightbound, what are these mysterious "ties" of which you speak? Yes, I have criticized MIRI's conception of Friendly AI in print; but this is not grounds for suggesting I might be biased in its favour (!). --Davidcpearce (talk) 23:58, 29 March 2014 (UTC)[reply]
David, in the interest of keeping this on topic, I'm not going to fill this comment section with all the links that would show your affiliations with many of the authors of the Springer anthology source you mentioned, and with the author of the "Friendly AI" theory. Anyone who wishes to do that can find that information through a few Google searches. It is sufficient for WP:COI that you share a close relationship with the source material, topic, and reference(s) you are trying to bring forward. This is irrespective of your intentions outside this context. And note that this is supplemental information and is not necessary to defend the case for deletion. I will refrain from further comment on it to keep this focused. Still waiting on that burden of proof that there is a scientific paper that entails "Friendly AI" theory. I'm not sure there is much more that anyone can really say at this point; unless new sources are brought forward, this seems to devolve into a trilemma. --Lightbound talk 00:06, 30 March 2014 (UTC)[reply]
Lightbound, the ad hominems are completely out of place. I have no affiliations whatsoever with MIRI or Eliezer Yudkowsky.
As to your very different charge of having "a close relationship with the source material, topic, and reference", well, yes! Don't you?
How would ignorance of - or a mere nodding acquaintance with - the topic and the source material serve as a better qualification for an opinion?
How else can one make informed criticism?
This debate is lame; our time could be more usefully spent strengthening the entry.--Davidcpearce (talk) 00:59, 30 March 2014 (UTC)[reply]
I would like to propose a final closing perspective, which is independent of my former arguments and notwithstanding them. Consider this article as an analogy to perpetual motion, but before we knew that it was an "epistemic impossibility". This is a concept that is mentioned in the perpetual motion article as well. The problem with having a stand-alone article on this fallacious topic is that it shifts the burden of proof onto editors to compile a criticism section for something that is so wantonly false that it is unlikely to be formally taken up. That is to say, disproving this is simple enough that one can point to the Halting problem and Gödel's incompleteness theorems for the theoretical side, and cracking and reverse engineering for the practical side. But these are basic facts within the field, and this basic nature is part of the problem of establishing WP:NPOV; while the world waits for an academic to draft a formal refutation of an informally stated concept that hasn't even been put forward as a stand-alone mathematical conjecture, the article would remain here on Wikipedia as WP:OR. I believe this clearly violates the spirit of these guidelines, and that knowledge of this asymmetry has been used as an opportunity to present this "theory" as something stronger than it actually is. This isn't just a matter of debate; the idea is so implausible that it has been almost totally ignored by the mainstream scientific community. That should be a strong indicator of the status of this "theory". --Lightbound talk 00:42, 30 March 2014 (UTC)[reply]
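To make the undecidability step in the argument above concrete: what follows is a minimal sketch of the classic diagonalization it rests on, assuming a hypothetical total decider is_friendly; the names is_friendly, misbehave, and spite are illustrative only, not drawn from any source in this discussion.

    # Sketch of the halting-problem-style diagonalization: assume, for
    # contradiction, a total decider for the property "never violates
    # its behavioral constraints". Rice's theorem generalizes this to
    # every non-trivial behavioral property of programs.

    def is_friendly(program_source: str, program_input: str) -> bool:
        """Hypothetical oracle: True iff the program, run on the given
        input, stays within its constraints. Assumed to always halt."""
        raise NotImplementedError  # no such total decider can exist

    def misbehave() -> None:
        """Stand-in for any constraint violation."""
        while True:  # e.g. run forever, defeating a halting guarantee
            pass

    def spite(program_source: str) -> None:
        """Does the opposite of whatever the oracle predicts about it."""
        if is_friendly(program_source, program_source):
            misbehave()  # oracle said "friendly": violate the constraint
        # else: halt quietly, contradicting the "unfriendly" verdict

    # Feeding spite its own source contradicts the oracle either way, so
    # a general verifier of "friendliness" for arbitrary programs cannot
    # exist. This shows only that verification is undecidable in general;
    # whether it refutes "Friendly AI" is exactly what is disputed here.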
David, pointing out to administrators that you may be involved in WP:EXTERNALREL is not an ad hominem; it is a fact that you contributed to the Springer source, and it is a verifiable fact, through simple Google queries, that you know the author(s) involved in the article. This is important for judgement in looking at the big picture of WP:NPOV and WP:COI. Thankfully, someone was able to bring this information to light so that it could at least be known. What is to be done about it is up to administrators. My only purpose in pointing out a fact was to provide the whole truth. I do not have a WP:COI with this topic, as I did not create the theory nor contribute or collaborate with others who did. The spirit of WP:EXTERNALREL is that you are affiliated or involved in some non-trivial way with the contributors or sources or topic of concern, which is completely distinct from a Wikipedian who is absolutely putting the interest of this community first. And, in the interest of this community, it should be beyond dispute that this article cannot stand on its own. --Lightbound talk 01:08, 30 March 2014 (UTC)[reply]
Lightbound, I was invited to contribute to the Springer volume as a critic, not an advocate, of the MIRI perspective. So to use this as evidence of bias in their favour takes ingenuity worthy of a better cause.
--Davidcpearce (talk) 01:36, 30 March 2014 (UTC)[reply]
You may want to review what is meant by WP:COI. Again, the issue isn't just intention but proximity. And here is the evidence that you helped plan the book, and were not merely a contributor who happened not to know anyone involved. This proves the proximity of WP:EXTERNALREL: "He will be joined as a speaker by David Pearce, who has been actively involved behind the scenes in the planning of the book, and who contributed two articles in the book." This is useful knowledge to anyone making a judgement on this page. Of the two citations you brought to the table, both are WP:EXTERNALREL. What is being stated is that there is significant proximity to the sole ensemble of resources which you are providing to defend the notability of the article as a stand-alone topic. There are more links available if desired, but I think this shows that this isn't conjecture on my part. By the way, I am still waiting on the scientific journal article on the theory of "Friendly AI"; you said my claim that none exists was not factual. --Lightbound talk 01:59, 30 March 2014 (UTC)[reply]
Comment: Lightbound, forgive me, but you're missing my point. I'm a critic of the MIRI conception of an Intelligence Explosion and Friendly AI. Many of the contributors to the Springer volume are critical too.

This critical stance is not evidence of bias in its favour! --Davidcpearce (talk) 02:23, 30 March 2014 (UTC)[reply]

I am giving your comments deep consideration and have not missed your points. Again, it isn't about pro/con. I don't make the decision on the deletion, but others should know your involvement. You originally raised two sources to defend this article as stand-alone, and your proximity to those sole sources is part of the issue. It is not to say they are invalid because of this, but that it is need-to-know information for someone making the final decision. That has been done, and we need not discuss it further. This isn't even the primary concern of the deletion of this article. Can you actually provide any credible 3rd party sources that you didn't orchestrate or were involved with? Can you show, objectively, why this "theory" merits its own dedicated article? Also, what about the arguments that this is an impossible concept, and will therefore always lack an equally credible POV to dismiss it, as I mentioned above? I've asked you to prove to us that I was wrong that there exists nothing in the technical scientific literature on the theory of "Friendly AI". I know I certainly can't find it, despite reading the literature daily. This could have been solved with a quick Scholar search. But I understand you won't be doing that, because it doesn't exist and can't exist due to the nature of its impossibility. So, please, do prove me wrong, and bring forth at least one or two really strong notable sources. Otherwise, I still strongly recommend deletion. --Lightbound talk 02:35, 30 March 2014 (UTC)[reply]
Lightbound, any Wikipedia contributor is perfectly entitled to use a pseudonym - or indeed an anonymous IP address, as did the originator of the proposal for deletion. Where a pseudonym becomes problematic is when it's used to attack the integrity of those who don't. I have not "orchestrated" any literature - popular or academic - favourable to MIRI / Friendly AI. My only comments on Friendly AI have been entirely critical. So it's surreal to be accused of bias in its favour. If the Wikipedia Flat Earth Society entry were nominated for deletion, I'd vote a "Strong Keep" too. This isn't because I'm a closet Flat Earther.--Davidcpearce (talk) 08:18, 30 March 2014 (UTC)[reply]
Again, David, claims of bad faith are not going to help your case. The statements made are factual and evidence/references have been provided; that is enough to prove WP:COI. Again, it doesn't require us to form conjecture about your agenda, only to show proximity. Regardless, this does not solve the notability issue of the source, nor the issues of WP:OR as per the comments above. This has now been repeated several times. I'll be stepping back from this as I believe all that is needed has been shown in all the comments above. --Lightbound talk 08:58, 30 March 2014 (UTC)[reply]
Lightbound, a willingness to engage in critical examination does not indicate favourable bias - any more than your own critique above. We both disagree with "Friendly AI"; the difference is that you believe its Wikipedia entry should be deleted, whereas I think it should be strengthened - ideally by someone less critical of the MIRI perspective than either of us, i.e. a neutral point of view.
Perhaps I should add - without claiming to know all the details - that I am troubled by the lack of courtesy shown to Richard Loosemore below. --Davidcpearce (talk) 16:43, 30 March 2014 (UTC)[reply]
  • Delete I am Richard Loosemore, and I am also a contributor to the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment", Amnon H. Eden, James H. Moor and Johnny H. Soraker, editors). That book is not sufficient justification for keeping the Friendly artificial intelligence page: it was one of the poorest peer-reviewed publications that I have ever seen, with credible articles placed alongside others that had close to zero credibility. Also, it does not help to cite people at the Future of Humanity Institute (e.g. Nick Bostrom) as evidence of independent scientific support for the Friendly artificial intelligence idea, because the Yudkowsky organization (Machine Intelligence Research Institute) and FHI are so closely aligned that they sometimes appear to be branches of the same outfit. I think the main issue here is not whether the general concept of AI friendliness is worth having a page on, but whether the concept as it currently stands is anything more than the idiosyncratic speculations of one person and his friends. The phrase "Friendly artificial intelligence" is generally used to mean the particular ideas of a small group around Eliezer Yudkowsky. Is it worth having a page about it because there are pros and cons that have been discussed in the literature? To answer that question, I think it is important to note the ways in which people who disagree with the "FAI" idea are treated when they voice their dissent. I am one of the most vocal critics of his theory, and my experience is that whenever I do mention my reservations, Yudkowsky and/or his associates go out of their way to intervene in the discussion to make slanderous ad hominem remarks and encourage others not to engage in discussion with me. Yudkowsky commented in a recent discussion (comment dated 5th September 2013): "Warning: Richard Loosemore is a known permanent idiot, ponder carefully before deciding to spend much time arguing with him." And, contrariwise, I have just returned from a meeting of the Association for the Advancement of Artificial Intelligence, where there was a symposium on the subject of "Implementing Selves with Safe Motivational Systems and Self-Improvement", which was mostly about safe AI motivational systems ... friendliness, in other words. I delivered a paper debunking some of the main ideas of Yudkowsky's FAI proposals, and although someone from MIRI was local to the conference venue (Stanford University) and was offered a spot on the program as invited speaker, he refused on the grounds that the symposium was of no interest (Mark Waser: personal communication). I submit that all of this is evidence that the "Friendly artificial intelligence" concept has no wider academic credibility, but is only the speculation of someone with no academic standing, aided and abetted by his friends and associates. If the page were to stay, it would need to be heavily edited (by someone like myself, among others) to make it objective, and my experience is that this would immediately provoke the kind of aggressive response I described above. LimitingFactor (talk) 16:15, 30 March 2014 (UTC)[reply]

Comment: David Pearce, you are indeed a respected critic of FAI, so I would not attack your position just because you were also involved with the Singularity Hypotheses book. My reasons for disagreement have only to do with the wider acceptance of the idea and the maturity of those who aggressively promote it. Your presence in the book and my presence in the book are clearly not the issue, since it is now clear that we take opposite positions on the deletion question. So perhaps that argument can be put aside. LimitingFactor (talk) 16:29, 30 March 2014 (UTC)[reply]

Limitingfactor, many thanks, you're probably right; I should let it pass. --Davidcpearce (talk) 16:46, 30 March 2014 (UTC)[reply]
Limitingfactor and David are clearly choosing to ignore what WP:COI means and why their close relationship to the people and processes behind the sources they promote would need to be a consideration. Your close proximity to the source(s) is sufficient. You can continue to WP:CANVAS, David, and bring in more meat puppets, but that isn't going to help the fact that this article cannot stand on its own without a significant body of notable sources. You claimed early on that there were in fact notable sources. You claimed I was incorrect that no technical/mathematical scientific paper or rigorous conjecture exists that is published from a real source, then failed to provide or substantiate that. And the reason is that such a paper does not exist in the literature. You've been asked several times to provide some sources and citations beyond the two you did. It has been explained that even if those two sources stood, and there were no issues with them, they would not be enough to allow this page to stand as-is. All you or anyone else has to do, instead of ignoring well-established guidelines, is to provide some strong sources beyond the two which have been contested. And they are contested beyond the fact of proximity; they don't hold up even if someone else had suggested them. --Lightbound talk 17:50, 30 March 2014 (UTC)[reply]
Aghh, Lightbound, please re-read. I am a critic of "Friendly AI"! I would like to see a balanced and authoritative Wikipedia entry on the topic by someone less critical than me - not polemics. --Davidcpearce (talk) 18:07, 30 March 2014 (UTC)[reply]
Administrators have been contacted. This is out of hand. Again, the primary issue isn't whether or not you are polemical; for the topic or against the topic; pro or con; love it or hate it. The sources are contested here and are invalid, apart from the fact that you helped create and organize them. But what does matter is that you are clearly canvassing at this point. The points to be made have been made. It has been requested that someone — anyone — please provide credible sources other than these. Let us end this futile discussion on whether or not you are for or against whatever topics. It has never been the issue, only that it is important to know that you are pumping the source because you contributed to it and helped orchestrate it. For or against it, that is still WP:COI in my view. And you continue to pump them when we've asked that you provide at least a few alternatives. But we know why that isn't going to happen! --Lightbound talk 18:22, 30 March 2014 (UTC)[reply]
Lightbound, you've left me scratching my head. I am a critic of "Friendly AI", not a partisan. I neither contributed to the entry nor helped "orchestrate" it. If you've seriously any doubts on that score, why don't you drill through the history of the article's edits? --Davidcpearce (talk) 18:46, 30 March 2014 (UTC)[reply]
(OP) Please let's try to avoid personal attacks. I don't think Davidcpearce canvassed Limitingfactor into the discussion, since they voted in opposite ways. Also, in my understanding of WP:COI, it is sufficient that users who have professional stakes in the subject, or personal relationships with people or organizations associated with it, declare them. Limitingfactor declared them himself, and in the case of Davidcpearce they are public domain, since he is commenting under his real name. The fact that they have these relationships doesn't automatically invalidate their votes and comments; it just means that their votes and comments should be considered while taking into account that these relationships exist. Also, the fact that Davidcpearce suggested to add a source he was involved with doesn't automatically disqualify that source. — Preceding unsigned comment added by 131.114.88.192 (talk) 18:49, 30 March 2014 (UTC)[reply]
All you did was repeat what I've said above at least four times. And, again, these are not "personal attacks". This is all externally verifiable information. It is canvassing because he is bringing people into the discussion from outside Wikipedia to support his arguments. This particular argument was that he was somehow for or against this topic, which has been pointed out repeatedly to be irrelevant and not the issue. The real issue, which I keep trying to steer us towards, is that even if we accept this anthology of essays as a credible source, it is not sufficient for a stand-alone article on an impossible topic. It has already been repeated that his proximity alone is not sufficient to invalidate it, but it is valuable need-to-know information. This was all stated over and over again. Reading the full discourse is helpful to prevent this kind of circular argumentation. Again, let us stop this. Provide more sources, please. The ones listed are contested because of their non-technical status, and because they don't actually substantiate the theory beyond speculation! --Lightbound talk 18:57, 30 March 2014 (UTC)[reply]
(OP) Just to restate my case for the deletion proposal, it seems that this "Friendly AI" is a neologism WP:NEO created by Yudkowsky to encompass a number of arguments he and people closely associated with him have made on the subject of Machine ethics. A more apt title for the article would be something like "Yudkowskian Machine ethics" or "Eliezer Yudkowsky's school of Machine ethics", but the point is that these views are not notable enough to warrant a stand-alone Wikipedia article. This is evidenced by the fact that the only available sources are primary sources written by Yudkowsky and his associates, and most of them are non-academic and in fact even self-published sources. — Preceding unsigned comment added by 131.114.88.192 (talk) 19:07, 30 March 2014 (UTC)[reply]
  • Strong Delete Neologism created by Eliezer Yudkowsky. Can be more than adequately covered in articles about the elementary school graduate who invented the term or his Harry Potter fan club. Hipocrite (talk) 19:32, 30 March 2014 (UTC)[reply]
Comment: ...and adopted by a big-name Oxford professor. There are powerful arguments against singleton AGI; Eliezer Yudkowsky's home-schooling isn't one of them.
(cf. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9126040 ) --Davidcpearce (talk) 20:01, 30 March 2014 (UTC)[reply]
It is a primary source with a close relationship. One of the authors is the director at MIRI. The other author is from the Future of Humanity Institute. It is a verifiable fact that these organizations are aligned and in public cooperation with each other, as evidenced by their websites and the cross-promotion of their members' books and articles. This does not represent a strong, notable secondary/tertiary source. There needs to be something more. Further, the article is only 7 pages long and is devoid of logical or mathematical rigor on the topic. --Lightbound talk 20:55, 30 March 2014 (UTC)[reply]
  • Keep I don't care about who wrote the article or created the terminology. I think it's a reasonable topic, and not really covered in detail in any other existing article. Further, I think it's likely to be expandable. There are sufficient secondary sources from authors other than the deviser of the term. What the article needs is some editing for clarity (and not mentioning the creator's name quite as often). DGG ( talk ) 19:42, 30 March 2014 (UTC)[reply]
Comment: That's clearly untrue. There are no notable, credible secondary/tertiary sources on the theory of "Friendly AI". Prove us wrong by linking them! It can't be done, because they don't exist. --Lightbound talk 19:47, 30 March 2014 (UTC)[reply]
I am troubled by the inflammatory allegations being made in this discussion (by Lightbound). First, I am not a meatpuppet or sockpuppet, nor did David Pearce contact me in any way, directly or indirectly, about this discussion. I have long had an interest in this page because it is in my field of research. I came here because there was a discussion in progress, and I felt that I had relevant information to offer.
Second, I did not become an editor in order to comment here: I have been registered as a Wikipedia editor since 2006.
Third, you do not seem to have noticed, Lightbound, that when I entered the discussion I voted against David Pearce! It therefore makes no sense to claim that I was canvassed into the discussion by him.
Fourth, the conflict of interest issue is a red herring. I do not stand to gain by the deletion, and I disclosed my involvement in the community of intellectual discourse related to the issue here straight away. It would help matters if the discussion from here forward did not contain any more accusations. LimitingFactor (talk) 21:19, 30 March 2014 (UTC)[reply]
If a conscientious reader starts at the top of this page and follows to the bottom, they will see that careful attention has been paid to separating the fact that the WP:COI notice was informational/supplemental in content, and that all arguments are as they pertain to the quality of sources. Again, and this has now been repeated many times, it is not about whether or not someone is for or against the topic, but about rooting out the true quality of these sources and citations. So far, no one has provided any significant citation or reference, and all that is being done is an attempt to spin or frame my responses and informational annotations about all relevant facts as ad hominem, which is in bad taste. I've already repeatedly asked that we drop this informational line of discourse on the WP:COI issue. So, you can remain troubled, but there is no issue other than the quality of the sources, of which it presently stands that there are none; all that has been brought forth does not even substantiate the subject matter. All of this leads to the fact that this is an article long overdue for deletion. --Lightbound talk 21:25, 30 March 2014 (UTC)[reply]

Comment: Stepping back from the fray... I think the deletion proposal is not an easy one to decide, because the topic itself (the friendliness or hostility of a future artificial intelligence) is without doubt a topic of interest and research. I voted to delete because the page, as it stands, treats the topic as if it were the original scientific creation of Eliezer Yudkowsky. Most of the page is couched in language that implies that his 'theory' is the topic, but nowhere is there a pointer to peer-reviewed scientific papers stating any 'theory' of friendliness at all. Instead, the articles that do exist are either (a) poor quality, non-peer-reviewed and sourced by people with an activist agenda in favor of Yudkowsky, or (b) by credible people (Bostrom, Omohundro, myself and others) but few in number and NOT lending credibility to Yudkowsky's writings. That imbalance makes it difficult to imagine a satisfactory article, because it would still end up looking like a paean to Yudkowsky (on account of the sheer volume of speculation generated by him and his associates) with a little garnish of other articles around the edges. LimitingFactor (talk) 21:49, 30 March 2014 (UTC)[reply]

Agreed, and thank you for dropping that previous line of discourse. I am in consensus with the above comment. What LimitingFactor is alluding to at the end of his comment is explained by philosopher Daniel Dennett in his paper "Higher-order truths about chmess". Discourse on a philosophical topic does not by itself mean that the topic makes sense or is substantial or real in any meaningful way. So far, all the sources that can be found are merely this kind of discourse. There has never been an actual technical mathematical or logical proof or rigorous conjecture published anywhere on the idea itself, only vague language and speculation. This supports the remarks echoed by LimitingFactor and the anonymous editor(s) above as well, ultimately showing that making a quality Wikipedia article on this topic would be a feat as impossible as the topic itself. --Lightbound talk 22:10, 30 March 2014 (UTC)[reply]
  • Keep or merge. Secondary sources found:
  1. a New Atlantis journal article
  2. a New Atlantis journal article, reply to previous article
  3. section 5.3 of the book "The Nexus between Artificial Intelligence and Economics"
  4. chapter 4 of the book "Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World"
The Omohundro paper is an RS independent of Yudkowsky, but looks more like primary research than a secondary review of FAI. The four sources above cover FAI in depth, and seem independent. The Nexus book is from Springer and presumed reliable. The Singularity Rising book is from BenBella Books, a "publishing boutique" that may be reputable. Based on the two New Atlantis articles and the Nexus book, this topic looks marginally notable per WP:GNG. The article is essay-like in parts and I agree with DGG that it is a bit promotional, but these are surmountable problems, per WP:SURMOUNTABLE. A marginally notable topic and surmountable article problems suggest keeping the article. Even if others don't find it notable, basic facts about FAI ideas (it exists, when it was coined, a short summary) are verifiable in reliable sources. Per WP:PRESERVE and WP:ATD, preservation of verifiable material is preferable to deletion. Machine ethics would be a reasonable target for such a merge. --Mark viking (talk) 22:21, 30 March 2014 (UTC)[reply]
  • Delete None of those articles above substantiate and rigorously define the concept of "Friendly AI" as a theory beyond its merely being WP:NEO. Further, the books you linked cite non-notable sources for their materials on "Friendly AI" theory and only cover the topic in two to three pages at most, at minimal depth. Oppose Keep on those grounds. As for a merge, I oppose that based on the argument that it isn't clear that "Friendly AI" as a WP:NEO can be separated cleanly from this loose concept of the "theory" of "Friendly AI", which indeed has no credible sources that detail the subject matter. That is to say, people are saying that AI should be "friendly" and confusing, or not seeing, that there is indeed a speculative, non-rigorous fringe theory that specifies a kind of architecture for doing this. The New Atlantis articles are blog-like, and directly link to the non-notable sources in question as well. --Lightbound talk 22:30, 30 March 2014 (UTC)[reply]
  • Delete or Merge (OP) The "Singularity Rising" book by James Miller probably shouldn't be considered an independent source, as the author has professional ties with MIRI: he is listed as a "research advisor" on MIRI's website and, as you can see on the Amazon page, the book is endorsed by MIRI's director Luke Muehlhauser, MIRI's main donor and advisor Peter Thiel, and advisor Aubrey de Grey. The very chapter you cited directly pleads for donations to MIRI! The other sources look valid, however. I agree that the general topic of Machine ethics is notable, and Yudkowsky's "Friendly AI" is probably notable enough to deserve a mention in that page, but a stand-alone article gives it undue weight, since it is a minority view in the already niche field of machine ethics. In my understanding, "Friendly AI" boils down to "Any advanced AI will be dangerous to humans by default, unless its design was provably safe in a mathematical sense". This view has been commented on and criticized by independent academics such as David Pearce and Richard Loosemore, among others, and therefore probably passes notability criteria, but most of the content of this article is unencyclopedic, essay-like, poorly sourced, or promotional; if you were to remove it, very little content would remain, and I doubt that the article could be expanded with high-quality notable content. Therefore, Delete or Merge seem reasonable. 131.114.88.192 (talk) 00:02, 31 March 2014 (UTC)[reply]
The issue with a merge is that there still isn't a significant source on the actual theory of "Friendly AI". Thus, the merge would be based on a concept entailed by a neologism and wouldn't stand on its own even in that context. A source merely mentioning it, referencing a non-notable primary source, is still not actually telling us what this "theory" is in any concrete way; such sources are simply documenting an apparent controversy over whether or not machine intelligence can be benevolent, which is distinct from the actual non-rigorous concepts presented by "Friendly AI" as a theory. There are two sub-issues to be unpacked:
  1. Distinguishing criticisms about whether or not AI can be made benevolent, or kept benevolent, which is more general than and not specific to the "Friendly AI" theory. This, doubtless, was part of the idea behind naming this theory in such a way. This is the issue with it being WP:NEO; the attempt to rebrand a concept and redefine what it means, when it has always been about what is already covered under machine ethics as a whole.
  2. Criticisms of the architectural/mathematical framework that is "Friendly AI" and "Coherent Extrapolated Volition", which are indeed not notably sourced concepts, and are WP:PSCI. This is also clear given that these concepts as an architecture are often presented or introduced in the context of science fiction/laws of robotics.
Thus, trying to merge doesn't solve the WP:NEO and WP:OR issues. The problems will remain: finding sources that do not merely discuss (and confuse) the two above issues, and finding sources that actually give a technically sound, rigorous, peer-reviewed proof or mathematical conjecture for the topic. That is, if someone were to promote a new kind of physics or a new kind of communications theory, and we were going to cover it, we would at least need a strong source that fully details that concept. It would be fair enough to provide a criticism section under machine ethics that simply addresses the concerns of making AI benevolent, instead of trying to force everyone into this lexicon, which is not only not widely supported but is becoming increasingly confused with the two points above. --Lightbound talk 00:28, 31 March 2014 (UTC)[reply]
The nomination for deletion isn't just that this doesn't stand on its own. It's that it doesn't stand anywhere. Merging doesn't solve the fact that the actual "Friendly AI" theory is WP:PSCI, which does not get consideration on equal footing in the way that majority and minority POV issues do, as explicitly stated in those guidelines. Such a theory could never survive direct publication in a technical journal; this is why no one so far has been able to come up with an actual source that specifies unambiguously and rigorously what the theory of "Friendly AI" is. And the burden of proof is not on editors seeking to remove pseudoscience, but on those who would keep it to first establish it with notable sources. All that the sources so far establish is that some people have been using the phrase "friendly AI" to refer to the act of making machine intelligence safe(r) or to discuss the theoretical implications. So, again, are we merging a neologism or merging the theory of "Friendly AI"? Neither appears to be acceptable, for all the reasons that have been unveiled in the above comments. --Lightbound talk 18:54, 31 March 2014 (UTC)[reply]
  • Keep – Friendly AI is a concept in transhumanist philosophy, under widespread discussion in that field and the field of AI. I've never read that the concept itself is a theory. A hypothetical technology, yes. A scientific research objective, yes. A potential solution to the existential risk of the technological singularity, yes. Much of the article is unverified, and rather than the whole article being deleted, unverified statements can be challenged via WP:VER and removed. I suggest moving any challenged material to the article's talk page, where it can be stored and accessed for the purpose of finding supporting citations. The article needs some TLC, and is worth saving. The Transhumanist 23:25, 31 March 2014 (UTC)[reply]
Wikipedia is not a sounding board for our opinions, nor a discussion forum to debate WP:OR about a philosophical or speculative issue through talk pages. If we followed this suggestion, the entire article would have to be moved to the talk page, at which point it would simply become a forum. That you didn't know that "Friendly AI" is part of the "theory" along with "Coherent Extrapolated Volition" is part of the issue with neologisms, and why they are usually weeded out on this encyclopedia. The desire to have ethical machines is distinct, more general, and was in existence long before "Friendly AI" theory came onto the scene. If we want to have a topic about making machines ethical, there is already an article namespace for that. If we want to talk about the pseudoscientific, non-credible, non-independently sourced fringe theory that is "Friendly AI", which is what this page is about, then that is another issue. I am repeating all of this because people are coming in and expressing an emotional appeal or vote without considering these issues or looking at the (lack of) evidence to support the existence of this original research on Wikipedia. I believe strongly at this point that someone needs to at least start moving this forward by providing strong sources that substantiate this theory. But, as mentioned before, those references do not exist. Had we been having this discussion while this article was a stub, it would have been a candidate for speedy deletion, but it has embedded itself and slipped unnoticed for years because of its (non-)status in the field. --Lightbound talk 00:23, 1 April 2014 (UTC)[reply]
You didn't present it as part of a theory, but as the theory. It is not the name of a theory. It's the article title we're talking about here, and whether it warrants a place on Wikipedia. Problems with the content should be worked out on the article's talk page. The term and subject "friendly AI" exists as a philosophical concept independently of the theory you so adamantly oppose. You could remove the theory from the page, or give it a proper "Theory of" heading, or clarify it as pseudoscience (there are plenty of those covered on Wikipedia). Deleting the article would be counterproductive. Because... The subject "friendly AI" is encountered as a philosophical concept so frequently out there in transhumanist circles and on the internet, that not to cover its existence as such on Wikipedia would be an obvious oversight on our part. And by "discussion" (in the field of transhumanism), I meant philosophical debate (that's what discussions in a philosophical field are). Such debate takes place in articles, in presentations and panel discussions at conferences, etc. In less than fifteen minutes of browsing, I came across multiple articles on friendly AI, a mention in a Times magazine article, an interview, and found it included in a course outline. But as a philosophical concept or design consideration, not a field of science. It was apparent there is a lot more out there. (Google reported 131,000 hits). I strongly support fixing the article. The Transhumanist 02:45, 1 April 2014 (UTC)[reply]
Yes, it is, in fact, a claim to a theory, and I quote the words of the creator of this "theory" and neologism from that non-notable source: "This is an update to that part of Friendly AI theory [sic] that describes Friendliness, the objective or thing-we're-trying-to-do". My emphasis has been added so it is crystal clear. See, this is part of the problem. There is a pseudoscientific "theory" (read: not a theory) called "Friendly AI", and then there is the adjective "friendly" modifying AI, which refers to the concept, practice, or goal of making an AI friendly, viz. benevolent. These are two very, very distinct concepts which have been laminated together and are being used here to edge in an unsubstantiated theory. Again, there are no notable, credible, independent 3rd party sources on the "theory" of "Friendly AI", and this has been stated over and over again now. As for wanting or desiring or wishing there was a canonical place to discuss "friendliness" of AI, this is not it unless it can be backed by significant quality sources. As it stands, machine ethics should be the place for the general overview of this field and the goals it shares. Anyone reading this so far should see clearly this distinction. This is intentionally obfuscated for a reason, and it is part of why this is so difficult to separate out, unpack, and discuss. Please try to see the distinction that is not without difference. --Lightbound talk 03:13, 1 April 2014 (UTC)[reply]
Well, the title and the lead section indicate a concept, not a formal theory. Though Yudkowsky and his capitalized "Friendly AI" and "Friendliness" were interwoven throughout the entire article. I've started to revamp the article, and have begun extricating the edged-in "theory" (per WP:VER), so that it can be properly differentiated from the general concept later (when someone is willing to include citations). The article is much more generic now. By the way, since the "F"riendly material was presented out of context, almost indistinguishable from the primary subject of the article, I've opted not to copy it to the talk page. It needs to be rewritten in context, if at all. I've got to go for a while, and have left the "Coherent Extrapolated Volition" section for last, but feel free to pick up where I left off. The Transhumanist 05:39, 1 April 2014 (UTC)[reply]
The creator(s) of "Friendly AI" theory have explicitly stated in numerous places that they attempt mathematical modeling and theories. But the problem is that there is not even a primary source that specifies the rigorous mathematical theory of "Friendly AI", or even a mathematical conjecture, let alone secondary and tertiary independent sources. I'm afraid I'm not sure what we could even do with this article except to make it a redirect into machine ethics the way that Strong AI redirects into Artificial General Intelligence. That would at least not give WP:UNDUE weight to a WP:NEO, which is what this page would quickly deflate to, since we have now established it is an attempt at a WP:PSCI "theory". And there is no way we can credibly, reliably source such a distinction between "Friendly AI" as a "benevolence" colloquialism and the more general discipline of machine ethics. This was also pointed out in my comment below in response to User: Silence. --Lightbound talk 05:53, 1 April 2014 (UTC)[reply]
You seem to think you will win the argument by ignoring the other side of the debate. But it doesn't work that way. The "Friendly AI theory" and "Friendliness" have mostly been removed from the article. So now the article for the most part deals with the common term "friendly AI". Continuing to argue against Yudkowsky even after he has largely been removed from the article is starting to look like a straw man argument. Also, your prolific replies to everybody imply that you think you can win by sheer volume. But whether you acknowledge it or not, the generic topic friendly artificial intelligence (uncapitalized) exists as a philosophical concept. The term appears to get more use than the term "machine ethics". Note that "machine ethics" and "friendly AI" are not synonymous. The Transhumanist 09:24, 1 April 2014 (UTC)[reply]
The term you would be looking for would be ad hominem, but my comments are not about a person but about a concept, so the charge does not apply. It's not a straw man argument, as I'm simply presenting a source which verbatim states what I'm saying. Claims of bad faith do not equal bad faith. We've engaged in an AfD to consider deletion of the page. Removing or blanking, or even completely starting over, doesn't resolve the issues that have been raised. Editing the article during the AfD doesn't change that we are in an AfD. The objective is to come to consensus, and that cannot be achieved through editing the article. I am also not the only editor that has requested delete or merge, which would have been a compromise. My "prolific responses" are due to the initial canvassing that took place and to defending against issues of WP:IS in the numerous sources, which at first glance appear independent, but are actually by the same group of people working in concert. All of this has been documented with links. --Lightbound talk 09:50, 1 April 2014 (UTC)[reply]
You're arguing against the article based on material that isn't even included in it anymore. That's a straw man argument. The article is well on its way to being repaired. The Transhumanist 03:28, 2 April 2014 (UTC)[reply]
  • Keep. The topic is noteworthy, as it's discussed extensively in the leading textbook in the field of AI, Russell and Norvig's Artificial Intelligence: A Modern Approach. The idea is also discussed in the Journal of Consciousness Studies by philosophers like David Chalmers, and by AI theorists in publications and conference proceedings surrounding artificial general intelligence (AGI). The topic can't be merged into Machine ethics, because 'build a safe smarter-than-human AI' is a much broader topic than 'build a moral smarter-than-human AI'. Software safety engineering, even for autonomous agents, is mostly not about resolving dilemmas in applied or theoretical ethics. (And 'Machine ethics' can't be merged into Friendly AI, because most of machine ethics is concerned with the behavior of narrow AI or approximately human-level AI, not with the behavior of superintelligent AI.) -Silence (talk) 04:45, 1 April 2014 (UTC)[reply]
Comment: (OP). Russell and Norvig briefly mention "Friendly AI" in the context of machine ethics. Chalmers briefly and critically mentions Yudkowsky's "Provably friendly AI" in one paragraph of his 59-page paper "The singularity: A philosophical analysis". In general, mentions of "Friendly AI" in secondary sources are rare, brief, often critical, and always appear in a broader discussion of machine ethics. Moreover, even though Yudkowsky idiosyncratically uses the term "friendliness" instead of "ethics", his approach is all about how to incorporate an ethical system inside an artificial intelligence. It is neither about "friendliness" in the common meaning of the word (the quality of "being friends") nor about safety engineering as commonly intended. Therefore, Friendly AI is a minority view inside the field of Machine ethics. It's not notable enough for a stand-alone article. — Preceding unsigned comment added by 131.114.88.192 (talk) 13:33, 1 April 2014 (UTC)[reply]
OP, I disagree with your characterizations. The relevant section of Russell and Norvig is called 'The Ethics and Risks of Developing Artificial Intelligence', which obviously includes machine ethics but is a broader topic than that. Russell and Norvig's description repeatedly mentions things like 'checks and balances' and 'safeguards', but never mentions 'morality' or 'ethics' or the like in the discussion of Yudkowsky specifically. Morality (and therefore machine ethics) is very relevant, but it's not the whole topic (or even the primary one, according to Yudkowsky). According to Yudkowsky, building a Friendly AI is primarily about designing a system with "stable, specifiable goals", independent of whether those goals are moral.
The fact that this topic gets discussed in the world's leading AI textbook at all establishes notability; it's fine if the discussion within that textbook is "rare" and "brief", since the textbook's breadth makes it remarkable that the topic is raised at all. As for whether discussions of this idea are "often critical", I agree. Yudkowsky's views are very clearly not in the mainstream, and it's important that this article be improved by including both a fuller discussion of what those views are and a fuller presentation of published objections and alternative views. WP can handle controversial topics fine, as long as they're noteworthy enough to leave a paper trail through the literature. -Silence (talk) 17:08, 1 April 2014 (UTC)[reply]
(OP) Sorry, but it seems to me that you are going the No true Scotsman route to argue that Friendly AI is not in Machine ethics. Russell and Norvig discuss it in a section which has "Ethics" in its title, and somehow it is not about ethics because they didn't drop the word in the specific paragraph? Come on! As for notability, I agree that the topic has some notability, but the notable and reliable material available is very scarce. If you were to combine all the reliable secondary sources, you would get perhaps two or three paragraphs' worth of content. Is that enough to deserve a Wikipedia article? How does that compare with other views in machine ethics that may be even more notable among experts but don't happen to be backed by a large-ish community like LessWrong? Does having a stand-alone article for Friendly AI give it a fair representation, or undue weight? 93.147.153.8 (talk) 22:53, 1 April 2014 (UTC)[reply]
First, about your statement: "most of machine ethics is concerned with the behavior of narrow AI or approximately human-level AI, not with the behavior of superintelligent AI". That is your opinion. The following is rhetorical, but explains why we have WP:WEASEL. How much is "most"? At what point does that subjective interpretation become justified in a neutral observer's eyes? Where has it been rigorously defined that there is a categorical exclusion in the type of automation, or its level of complexity or "intelligence", that places it outside the domain of machine ethics? These are not answerable in a way that would justify what you are attempting to do. The source from Russell and Norvig? I just looked at the table of contents and can't seem to find even a sub-heading mentioning "Friendly AI" as a theory. Could you show a page number? Does this source substantially cover the WP:PSCI "theory" of "Friendly AI", as opposed to the neologism "friendly" being passed off as a colloquialism for "benevolent"? Lastly, the original theory and all of the related original work, as non-notable as it is, has been about the philosophical, ethical, and theoretical implications of an AI's ability to make decisions, and definitely not about "software safety engineering". If that is where we are headed, then that is well outside the scope of this article's conception. Further, please see the comment above (diff) pointing out that this article's topic is about an attempt at establishing WP:OR as a "theory". I provided, above, evidence of a claim to a "theory", which I am linking again for posterity. "Friendly AI" theory has never been about "software safety". By their organization's own admission, it's been purely a research and theoretical issue and not an engineering one. Here is a quote from the director of MIRI explicitly stating this in a recent article (literally, days ago): "If we can reformulate the important philosophical problems related to intelligence, identity, and value into precise enough math that it can be wrong or not, then I think we can build models that will be able to be successfully built on, and one day be useful as input for real world engineering." In fact, the entire post there is exactly about this: that it would be "unethical and stupid [sic]" to do so. These are their own words. It is exactly the opposite of the claims you are making, and by the very people pushing this fringe theory. --Lightbound talk 05:03, 1 April 2014 (UTC)[reply]
See section 26.3, 'The Ethics and Risks of Developing Artificial Intelligence'. Friendly AI is also discussed in the book's introduction (p. 27), in its general discussion of human-level-or-greater artificial intelligence; 'Friendly AI' is one of the main terms it highlights as important for anyone acquiring an introductory understanding of the contemporary field of AI. And, yes, it's not just a homonym; Yudkowsky is cited multiple times, the term is capitalized, etc.

"That is your opinion." - It's my assessment. Whether it's my "opinion" depends on whether it's grounded in fact, since not all beliefs or judgments are mere 'opinions'. Beware of polemical framings. See e.g. the contents of Anderson and Anderson's anthology on Machine Ethics. There's a paper or two that discuss superintelligence, but most do not. "How much is "most"?" - Minimally: Less than 50% of machine ethics is about the ethics of superintelligent agents. "regarding the philosophical, ethical, and theoretical implications of the ability for it to make decisions and definitely not about the 'software safety engineering'" - Can you cite a source that shows this? This Luke Muehlhauser interview seems in tension with that claim: Muehlhauser repeatedly suggests that it's misleading to describe the AIs Yudkowsky/MIRI worry about in human terms, and that terms like 'decision' and 'goal' are mostly useful shorthands rather than anything philosophically deep. He suggests thinking of AIs as 'equations' or 'really powerful optimization processes' when we're tempted to overly anthropomorphize them. -Silence (talk) 05:46, 1 April 2014 (UTC)[reply]

Some of my response to User:Silence was deleted, in which I explicitly substantiated the point they are asking about. I'm going to have to go through the log to find it. --Lightbound talk 05:56, 1 April 2014 (UTC)[reply]
I've now updated my response to you, User:Silence, to my actual original edit (diff). I deeply substantiated their position, in their own words, with those links, as I anticipated that such a thing would be contested. Again, the book you reference does not provide the mathematical theorem or mathematical conjecture of "Friendly AI" theory, which is the modus operandi of MIRI and "Friendly AI" theory. It's what it's always been about, and they explicitly eschew the "engineering side", as evidenced by their own words in articles from just days ago. I don't feel that this is the place to have a technical or philosophical discussion on this topic. This is about whether this page should be deleted, and no one, not a single person, has been able to provide a source that substantiates the WP:PSCI of "Friendly AI" theory. Attempting to segue into another category based on the current wording and narrative coming out of MIRI isn't going to help substantiate this concept or this page, as that has never been, and by their admission is not, what "Friendly AI" theory is about. --Lightbound talk 06:04, 1 April 2014 (UTC)[reply]
Probably my fault via an edit conflict as you were revising your comment. To respond to your added points: The article seems to mainly be about the hypothetical agent called 'Friendly AI', not about 'Friendly AI theory', the research initiative or AGI subdiscipline concerned with forecasting, understanding, designing, etc. Friendly AIs. I confess I don't understand what your concern is; the existence of the word 'theory' does not make a topic non-notable. (I'm also not clear on what you think the content of 'Friendly AI theory' is supposed to be; if it's a fringe belief, what belief, exactly, is it?) Certainly there are claims being made here, and hypotheses and predictions put forward; Wikipedia should report on those claims, citing both noteworthy endorsements and noteworthy criticisms of them.
"the book you reference does not provide the mathematical theorem or mathematical conjecture of "Friendly AI" theory" - This seems to be OR on your part. There is no such thing as a 'Friendly AI theorem' or 'Friendly AI conjecture'.
"no one, not a single person, has been able to provide a source that substantiates the WP:PSCI of 'Friendly AI' theory." - Er, no? Russell and Norvig has been cited. Go look at a copy of the text. If an AI topic gets cited in Russell and Norvig, that's the end of the discussion as far as notability goes. The only question now is how best to organize the content on WP, not whether the content is encyclopedic, significant, verified, etc. -Silence (talk) 06:15, 1 April 2014 (UTC)[reply]
It is a theory and it is also a goal. I know this is confusing, but that is due to the unfortunate naming of it. Here is the direct evidence that it is a claim to a theory by its creator, and I quote: "This is an update to that part of Friendly AI theory [sic] that describes Friendliness, the objective or thing-we’re-trying-to-do". My emphasis is added. What I am saying is that, as an article about the fringe theory, which does not exist in a mathematical formulation anywhere, it cannot stand. It thus comes down to the article being about a goal that happens to be named "Friendly AI". Unlike the neologism we are debating, "Strong AI" has been in use for decades, but was successfully debated to be redirected, and it is mentioned in the lead of the AGI article. But only a handful of authors, relative to the decades' worth of "Strong AI", have taken up this nomenclature of "Friendly AI", despite it having been around for nearly a decade. And it still took a significant effort to get that redirect done, if I'm not mistaken. Based on this, it is definitely undue weight when we consider the balance of other topics in this field. Simply having a few sources does not justify unrooting the entirety of existing literature on machine ethics. A full stand-alone article is excessive when a subsection on that page, with a POV balanced against the rest of machine ethics discourse, would suffice. It is undue weight especially because ultimately the entire point of "Friendly AI" was that there is and always will be only one way to do it right. And that is stated everywhere in its materials. Not only is that assertion untrue; the burden of proof rests on them, and on any editor, who would try to bring WP:FRINGE here as a stand-alone topic. This isn't a complex issue, but it is obfuscated due to the naming. We all must accept the fact that the evidence shows it is both a theory and a goal, often at the same time, especially from its advocates. But if we are going to write an article on this, it has to be able to stand. Those sources you mention are not alone in name-dropping the words "Friendly AI", and also confusing it as a goal and a theory. But nowhere do we have the credible materials we need, not even from a primary source, on the actual mathematical proof, theorem, or conjecture of the theory itself. Hence, it collapses to purely a semantic issue about whether the goal of making AI "friendly" is or is not part of machine ethics. And, by their own admission, that is the area they work in; they are purely theoretical, mathematical, and based on logic and decision theory, meta-ethics, etc. So it rightly belongs, at best (and that is a stretch), as a minor POV in machine ethics, as it certainly is not widely accepted as part of the scientific consensus. --Lightbound talk 06:47, 1 April 2014 (UTC)[reply]
"It is a theory and it is also a goal." - Neither of those is a well-defined term. By 'theory' you might mean a body of knowledge, a body of beliefs, a field of inquiry, a scientific theory, a scientific hypothesis, etc. By 'goal' you might mean a state of affairs that's desired, or the desire itself, or some concrete object involved in the desired state of affairs. Yet you seem to want to get a lot of work done using these amorphous terms, in spite of the fact that the article we're discussing is Friendly AI (the topic being: a specific class of hypothetical agent), not Friendly AI theory. A Friendly AI is a kind of agent, not a kind of theory; and the fact that there is a thing (or things) called 'X theory' that are associated with X, doesn't tell us anything directly about the nature of X itself.
'Strong AI' is a redirect because it's ambiguous, not because it's non-noteworthy. So I don't see any direct relevance to the term 'Friendly AI'.
"It is undue weight especially because ultimately the entire point of "Friendly AI" was that there is and always will be only one way to do it right." - That way being...? A topic can be noteworthy even if some people have normative beliefs about the topic. E.g., 'Marxism was proposed as the right way to organize society' is not a very good reason for deleting the article Marxism...! Likewise 'alternating current was proposed as the right way to transmit electric charge' is not a reason to delete the article Alternating current.
"nowhere do we have the credible materials we need, not even from a primary source, on the actual mathematical proof, theorem, or conjecture of the theory itself" - There is no such 'mathematical proof, theorem, or conjecture'. You confabulated it yourself. So it's not super surprising that you can't find the thing no one ever claimed existed..?
I already refuted the claim that Friendly AI theory is a subdiscipline of machine ethics. My claim wasn't 'it's an engineering topic, therefore it's not machine ethics' (which is a non sequitur, false, and has been asserted by no one). Rather, my claim was 'making an agent moral isn't the same thing as making it safe, and Friendly AI theory (the research project / subfield) is mainly about making it safe'. Obviously the two aren't unrelated, but they aren't in a subset relationship either. -Silence (talk) 07:03, 1 April 2014 (UTC)[reply]
The author of "Friendly AI" himself has said it is both a theory and a goal by the quotes I gave. We can agree to disagree on this, the direct evidence is in my corner on that. As for your WP:OR theory that we ought not merge or make it a POV in machine ethics, that's not relevant as I don't accept the rejection that an article called "Friendly Artificial Intelligence", which has been stated by the creator of the theory and the goal itself, and to which 90% of the article's body text refers, is not about "Friendly AI" theory, CEV, and the "Friendliness" goal. Any reader thus far should concede this point. Hence, the original issues I've raised stand. Simple contradiction in the face of such obvious evidence doesn't follow logically. It's interesting that anyone could ignore direct wording from the author of the very concept we're debating... I'm not buying that this article is not about that which its content clearly indicates it is. So, we are at an impasse and I don't see any further reason to continue our dialectic unless new arguments are presented. I rest my position against your points as they stand. If new arguments to which we can make progress on are presented, then I'll rejoin on those talking points. But since this is already getting extremely long, I won't just engage in simple contradiction. I feel the evidence stands for deletion. --Lightbound talk 07:14, 1 April 2014 (UTC)[reply]
"The author of "Friendly AI" himself has said it is both a theory and a goal by the quotes I gave." - Where did I say anything to the contrary? The point I made wasn't that there's no such thing as 'Friendly AI theory'; it was that the existence of 'X theory' doesn't establish that 'X' is non-noteworthy or otherwise unencyclopedic. Your original grounds for deleting the article were refuted in my first comment, so I don't know anymore what your new concerns are. Possibly you should re-propose deletion in a few weeks or months after you've looked at the sources I cited and had time to organize your concern a bit.
"As for your WP:OR theory that we ought not merge or make it a POV in machine ethics" - ??? Have you read WP:OR? You cite policy and guideline names, but in strange contexts that don't seem to have much to do with the contents of the WP-namespace pages.
"I don't accept the rejection that an article called "Friendly Artificial Intelligence", which has been stated by the creator of the theory and the goal itself, and to which 90% of the article's body text refers, is not about "Friendly AI" theory, CEV, and the "Friendliness" goal" - The article as it's currently written is about Friendly AIs, not about those things, which would be the central topic of articles called Friendly AI theory, Coherent extrapolated volition, and perhaps Friendliness in artificial agents. Obviously all of those topics are extremely relevant to the 'Friendly AI' page, but it's a fallacy of equivocation to conflate 'X is about Y' in the sense of 'X is in some way relevant to Y' with 'X is about Y' in the sense of 'Y is the topic of X'. -Silence (talk) 09:16, 1 April 2014 (UTC)[reply]

Comment: Not a WP:FRINGE theory? I just found out this very page used to be called Friendliness Theory, which now redirects to this page and had previously been nominated for deletion. Here is the discussion on the talk page. So, that, plus the above arguments, should slam the door on that issue. This has always been both a theory and a goal. And, as a theory, it cannot stand on its own, as I've repeated now many times. Show the peer-reviewed mathematical proof or mathematical conjecture to counter this. See above for many pieces of evidence that the author claims it as a theory as well. There is so much evidence at this point that I can't see any reasonable editor continuing to contradict it in good faith. Strongly recommend delete. --Lightbound talk 07:31, 1 April 2014 (UTC)[reply]

"Not a WP:FRINGE theory? I just found out this very page used to be called Friendliness Theory [....] And, as a theory, it can not stand on its own" - I can't tell whether you just aren't expressing yourself clearly, or whether you don't understand the varied ways the word 'theory' is used (or, e.g., that 'theory' is not the same thing as 'mathematical theorem', even in the context of mathematical logic) or the policies and community norms on WP. You seem to be an experienced editor, yet you don't seem to see the obvious problem with deleting all articles that are about 'theories'. No one has claimed that Yudkowsky's view is the mainstream, establishment AI view. But it's acknowledged and engaged with and taken seriously by at least some of the biggest names in mainstream AI, so the topic is encyclopedic, by ordinary Wikipedia standards.
You seem to want to delete it because you dislike Yudkowsky's views; but 'Yudkowsky's views are false' is not grounds for deletion, any more than 'Yudkowsky's views are a theory' is. I noted already that Marxism is not a mainstream view in contemporary economics, and is a 'theory' -- a fringe one, at that -- yet 'Marxism' gets its own page. Ditto intelligent design. So, again, I have to note that your arguments are just not relevant to the issue of deletion. If you think Yudkowsky is a pseudoscientist, go find reputable sources saying as much, and help make WP's coverage of the topic comprehensive and useful. Deleting every topic you think is pseudoscientific isn't how WP works; WP reports on demarcation controversies in the sciences, but it does not try to adjudicate them all. Nor does it try to use its inclusion criteria to bludgeon noteworthy views it dislikes out of memetic existence.
"Show the peer-reviewed mathematical proof or mathematical conjecture to counter this." - A third time, I note this is something you made up, not something with any basis in any external text, including Yudkowsky's. You simply made the leap from 'Yudkowsky is writing about something mathematics-related and used the word "theory" for something epistemic, THEREFORE Yudkowsky is claiming to have a mathematical conjecture, THEREFORE if the formalized conjecture is not provided the topic is not encyclopedic'. None of these leaps in logic has any textual basis. You really did just make them up. If you're interested in promoting encyclopedic accuracy and not spreading fabrications of any sort, you won't keep repeating this claim until you've actually found it stated in the literature.
"So much evidence at this point I can't see any reasonable editor continuing to contradict it in good faith." - Wikipedia:Assume good faith is one of the many community norms you need to spend a bit more time with. If you find it inconceivable that any human being could possibly disagree with you without being evil or deceptive in some fashion, that probably says more about the limits of your imagination than about the limits of human error. Suffice it to say that I disagree you've provided much evidence (or even, at this point, a coherent argumentative skeleton into which evidence could be fit). Yet I'm pretty sure I'm not an evil mutant troll who hates Wikipedia and puppies. :) So, maybe dial the theatrics back, at least a notch or two? -Silence (talk) 09:16, 1 April 2014 (UTC)[reply]
I believe a strong case has been made for keeping the article. Now, the main thing concerning me is the length of this discussion compared to the length of the article. We should get back to building the encyclopedia, by working on the article itself. The Transhumanist 09:36, 1 April 2014 (UTC)[reply]
Ignoring the evidence and arguments brought up is not consensus. We're in disagreement and that's OK, as I've stated above. Insulting me or attempting to attribute to me statements I've not made is not going to help reach consensus. I would appreciate it if we kept the discussion on the arguments, as it is becoming difficult to see good faith. Also, I did not nominate this page to be deleted. So, attempting to frame this as "deleting every page you see" is simply WP:BAIT, and I'm not biting. Attempting to paint my position as being against a person or persons is also not going to help your case, as I am, and always have been, arguing on policy; thankfully, I've always been civil, and my comments here reflect that. I understand this must be frustrating, but, again, I would appreciate it if we addressed ideas and arguments and not each other. I wonder whom to contact when an administrator is doing this? I'll have to look into it. It is really unfortunate to see. --Lightbound talk 10:11, 1 April 2014 (UTC)[reply]
@Lightbound: Your edits connected to this AfD make up over 11% of your contributions to Wikipedia. I suggest you read WP:BLUDGEON and give it a rest. BMK (talk) 11:43, 1 April 2014 (UTC)[reply]
Indeed, I would expect a very long discussion like this to affect my stats, as I have been away from Wikipedia for years and have traditionally been a light editor. Also, entry length does not measure contribution. If that were the case, those who do line-editing would also seem biased on complex topics requiring extended discussion. I've been responding to people in good faith and on point, and with new materials and evidence. I hear you, and I'm stepping back regardless, but it is because evidence is being ignored. Nothing further can be done. --Lightbound talk 19:15, 1 April 2014 (UTC)[reply]
  • Strong Keep. Fringe or not, this concept is a relevant subject of debate. The topic was already being discussed in the 1990s, in the context of extropianism and transhumanism. The current article does need balancing. I don't see how a merge with machine ethics would improve it. Therefore I vote to keep and expand. — JFG talk 11:44, 1 April 2014 (UTC)[reply]
Comment: (OP) Please provide reliable sources. The fact that a meme may have been circulating in online communities is not, by itself, grounds for inclusion in Wikipedia. 131.114.88.192 (talk) 13:51, 1 April 2014 (UTC)[reply]
  • The nature of the Russell/Norvig citation mostly renders your concerns moot. If the only citation for a topic at a given time is Britannica, it's probably noteworthy, because Britannica filters strongly for noteworthiness. Similarly for a highly esteemed introductory biology textbook and a biology topic; and, here, for a highly esteemed AI textbook and an AI topic. You also don't seem to have noticed that I added two independent scholarly references to the lead, not one; so your citations of WP:1R and WP:E=N are strange.
  • Your citation of WP:TRIVIALMENTION demonstrates to me that you haven't looked at the source text I cited, yet are still making strong claims about it based on some intuition that it must be a trivial, passing, tangential, one-sentence mention. This is not the case, in spite of the obvious space constraints imposed by the huge range of topics R/N have to cover.
  • Your self-citation seems to only be about David Pearce and whether he was involved in Singularity Hypotheses (which could be used to establish COI, or, equally, to establish relevant expertise). I don't see anything about whether or why Singularity Hypotheses is not a reliable source.
  • WP:AKON is not relevant here, as your claim 'this is not a noteworthy article' is what's under dispute. Adding references to establish notability is precisely what's called for in notability AfDs, and it doesn't make sense to dismiss scholarly sources on the grounds that if nothing were useful for establishing notability, the article would need to go. Go actually read WP:AKON. (And the new sources.) -Silence (talk) 16:42, 2 April 2014 (UTC)[reply]
This is WP:MASKing at this point. The Norvig source is a tertiary source by definition: an introduction/survey/handbook to the whole of the field of AI. Even if interpreted as secondary by some stretch, one or two sources do not constitute WP:SCICON and "significant independent sources". To be clear: we're talking about only a few sources brought forward since the AfD started, and that's all that has been brought forth when we exclude the Springer volume. And as for that Springer volume, which has been refuted by many editors above, two points: (1) again, it is not just violating WP:INDEPENDENT because of the orchestration evidenced, but is further weakened by the fact that authors/individuals closely related to the author of "Friendly AI" are part of the volume. Evidence of that relation can be shown here and here by cross-referencing the authors of the Springer volume. On the MIRI staff page you will find the names: Helm, Bostrom, Yudkowsky, Muehlhauser. Bostrom's connection to Pearce is public knowledge, but can be shown here, in an article from The Guardian, and here, under the FAQ of Humanity+, the organization they co-founded. To be WP:INDEPENDENT means fully and completely independent, not merely the appearance of independence. I suspect this is why there has been such a heated WP:POV RAILROAD against me for pointing out these journalistic facts. (2) The Springer volume is listed as "Content Level: Popular/general" and is part of "The Frontiers Collection" series, not part of the technical journals. This was pointed out above by other editors as well, which I already diffed. I'm not going to reply further on this line of argumentation. --Lightbound talk 22:01, 2 April 2014 (UTC)[reply]
Introductory books are generally a mix of secondary and tertiary material; large portions of Russell and Norvig are I think secondary, because the field of AI itself is relatively fast-changing and new. I don't know whether to classify the Friendly AI stuff as secondary or tertiary; since the distinction is fuzzy to begin with, it's probably a mix of the two. Secondary and tertiary sources are both good for citation; secondary sources are preferred for more detailed presentations, tertiary sources are good for broad overviews. When the tertiary source is as widely cited and respected as R/N, it's also useful for locating the topic in its academic context and establishing notability. Primary sources too are fine for citing encyclopedically, as long as it's to fact-check a secondary source or cite an isolated claim, not to synthesize multiple claims in a novel way.
"we're talking about only a few sources brought forward since AfD started" - Yes, that's normal in notability AfDs. People who think the topic is noteworthy throw some quick references into the pot, and we reassess. AfDs are short, so in most cases the entire job of adding new sources isn't finished during one, but if in such a short span of time we find a lot of really high-quality references (as in this AfD), that's very promising. -Silence (talk) 00:06, 3 April 2014 (UTC)[reply]

Recommendation for Reformatting[edit]

(this section moved to the talk page by Dennis Brown | WER)

The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.