User contributions for Vluczkow
A user with 73 edits. Account created on 21 June 2015.
6 April 2016
- 16:56, 6 April 2016 +12 N User:Vluczkow ←Created page with 'Placeholder.' current
3 April 2016
- 18:08, 3 April 2016 +68 Biotechnology risk →CRISPR
- 17:56, 3 April 2016 −2 Biotechnology risk →Genetically modified viruses
- 17:55, 3 April 2016 +53 Biotechnology risk →Engineered viruses: Added see also.
- 17:50, 3 April 2016 −15 m Biotechnology risk No edit summary
- 17:49, 3 April 2016 0 Biotechnology risk →Engineered Viruses
- 17:48, 3 April 2016 +11 Biotechnology risk →Viruses: Changed title to better reflect type of risk.
- 17:48, 3 April 2016 +21 m Biotechnology risk →CRISPR: Added reference to main CRISPR page.
- 17:47, 3 April 2016 −6 m Biotechnology risk →Viruses: Typos and grammar.
- 17:43, 3 April 2016 −463 Biotechnology risk No edit summary
- 17:38, 3 April 2016 +9 Biotechnology risk No edit summary
- 17:37, 3 April 2016 −24 Biotechnology risk Removed dead link.
- 17:35, 3 April 2016 −331 User:Vluczkow/sandbox →List of plans for World War III current
- 17:35, 3 April 2016 −30,674 User:Vluczkow/sandbox →List of plans for World War III
- 17:19, 3 April 2016 +31,594 N User:Vluczkow/sandbox ←Created page with '{{User sandbox}} <!-- EDIT BELOW THIS LINE --> ==List of plans for World War III== ===Operation Unthinkable=== {{main|Operation Unthinkable}} British Prime Mini...'
- 17:15, 3 April 2016 +323 User talk:Oshwah →Edits to World War III: new section
- 17:04, 3 April 2016 −725 World War III →Military plans: Removed another plan with no main article.
- 17:03, 3 April 2016 +37 World War III →See also
- 16:44, 3 April 2016 −4,562 World War III →Military plans: Started paring down the list. Removed all the sections without a main article.
- 16:38, 3 April 2016 0 m Future of Life Institute minor typo
- 16:14, 3 April 2016 −1 m OpenAI Minor fix
23 March 2016
- 02:53, 23 March 2016 −907 Existential risk from artificial general intelligence →Instrumental goal convergence: Would a superintelligence just ignore us?: Removed the paragraph. It does not pertain to instrumental convergence in particular. It clearly belongs in the skepticism section.
- 02:52, 23 March 2016 −1 Existential risk from artificial general intelligence →Difficulties of "fixing" goal specification after launch
3 March 2016
- 23:33, 3 March 2016 −1 Existential risk from artificial general intelligence No edit summary
- 23:32, 3 March 2016 +88 N Talk:Existential risk from advanced artificial intelligence Vluczkow moved page Talk:Existential risk from advanced artificial intelligence to Talk:Existential risk from artificial general intelligence: AGI is the proper technical term for >= human level AI. "Advanced artificial intelligence" is too amb...
- 23:32, 3 March 2016 0 m Talk:Existential risk from artificial general intelligence Vluczkow moved page Talk:Existential risk from advanced artificial intelligence to Talk:Existential risk from artificial general intelligence: AGI is the proper technical term for >= human level AI. "Advanced artificial intelligence" is too amb...
- 23:32, 3 March 2016 +83 N Existential risk from advanced artificial intelligence Vluczkow moved page Existential risk from advanced artificial intelligence to Existential risk from artificial general intelligence: AGI is the proper technical term for >= human level AI. "Advanced artificial intelligence" is too ambiguous - A...
- 23:32, 3 March 2016 0 m Existential risk from artificial general intelligence Vluczkow moved page Existential risk from advanced artificial intelligence to Existential risk from artificial general intelligence: AGI is the proper technical term for >= human level AI. "Advanced artificial intelligence" is too ambiguous - A...
- 23:11, 3 March 2016 +4 m Nuclear holocaust Grammatical edit.
20 February 2016
- 16:19, 20 February 2016 +181 Effective altruism →Meta: Added slightly more detail to this section.
- 16:08, 20 February 2016 −388 Existential risk from artificial general intelligence →Reactions: Removed indifference section. One quote does not merit its own section.
6 December 2015
- 18:38, 6 December 2015 +251 Talk:Nuclear holocaust →Adding discussion of actual effects of nuclear war: new section
1 November 2015
- 20:16, 1 November 2015 0 Existential risk from artificial general intelligence →Instrumental Convergence
- 20:15, 1 November 2015 −359 Existential risk from artificial general intelligence →Convergent goals: Several small changes designed to clarify the definition of instrumental convergence.
- 20:08, 1 November 2015 +4 Existential risk from artificial general intelligence →Misspecified goals
- 19:49, 1 November 2015 −72 AI takeover →Takeover scenarios in science fiction
- 19:47, 1 November 2015 +23 AI takeover →Existential risk of AI
- 19:46, 1 November 2015 +117 AI takeover No edit summary
- 19:43, 1 November 2015 −60 AI takeover No edit summary
- 19:39, 1 November 2015 −402 AI takeover Modified opening section to mention other scenarios.
- 19:32, 1 November 2015 +256 Ethics of artificial intelligence →Weaponization of artificial intelligence
- 19:16, 1 November 2015 −104 Existential risk from artificial general intelligence Removed line about weaponization of AI. It is raised as a concern, but rarely as an existential one.
- 19:14, 1 November 2015 −4,284 Existential risk from artificial general intelligence →Weaponization: Moved section to Ethics of artificial intelligence page. Weaponization is generally not considered an existential risk.
- 19:14, 1 November 2015 −21 m Ethics of artificial intelligence →Weaponization of artificial intelligence
- 19:13, 1 November 2015 +4,330 Ethics of artificial intelligence Moved Weaponization section here from the existential risks from advanced artificial intelligence page.
- 19:11, 1 November 2015 0 Ethics of artificial intelligence →Unintended consequences: Artificial intelligence is not a proper noun. Removed upper case.
- 19:03, 1 November 2015 0 Friendly artificial intelligence →Coherent Extrapolated Volition: Removed title capitalization.
- 18:49, 1 November 2015 −337 AI takeover →Existential risk of AI: Removed quote from Yudkowsky. Just doesn't add any information.