Talk:Geoffrey Hinton

From Wikipedia, the free encyclopedia

godfather joke[edit]

The first paragraph claims his work has gained him the nickname "godfather of neural networks". This must be a joke. What about the other, much older pioneers of neural networks? For example, Warren McCulloch was called the godfather of neural networks - see http://soma.berkeley.edu/books/MA/MassAction.html . And there are Teuvo Kohonen, Kunihiko Fukushima, Shun'ichi Amari, Paul Werbos, David E. Rumelhart, and others who may be more deserving of such a title. I checked the source - apparently it was Andrew Ng who called Hinton that in a Wired magazine article. But both Hinton and Ng are working for the same company, Google (next door to Wired magazine). This looks like a company's self-promotion in disguise. Should not appear in any biography. Putting things straight (talk) 16:15, 8 October 2013 (UTC)[reply]

I believe it's because Prof. Hinton also does an excellent Marlon Brando. – AndyFielding (talk) 15:46, 7 April 2023 (UTC)[reply]

For future use[edit]

My father was a Stalinist and sent me to a private Christian school where I was the only person to pray every morning. From a very young age I was convinced that many of the things that the teachers and other kids believed were just obvious nonsense. That's great training for a scientist and it transferred very well to artificial intelligence. But it was a nasty shock when I found out what Stalin actually did.

— Preceding unsigned comment added by 176.199.175.30 (talk) 23:53, 10 November 2014 (UTC)[reply]

location of birth[edit]

This article says he was born in Bristol: http://www.magazine.utoronto.ca/feature/getting-smarter-computer-science-professor-geoffrey-hinton-is-helping-to-build-a-new-generation-of-intelligent-machines/ — but the wiki article says London? — Preceding unsigned comment added by 82.2.180.6 (talk) 07:12, 23 June 2015 (UTC)[reply]

Why delete Alex Krizhevsky whose breakthrough made this possible? Other co-workers are also mentioned.[edit]

User Nelson: You deleted my text on Krizhevsky and others. One cannot give Hinton sole credit for the work of Alex Krizhevsky. In fact, Hinton was resistant to Krizhevsky's idea. Why delete Krizhevsky? Other co-workers are also mentioned. Same for David E. Rumelhart and others. I also added Seppo Linnainmaa, the inventor of backpropagation (1970):

While a professor at Carnegie Mellon University (1982–1987), Hinton, David E. Rumelhart, and Ronald J. Williams were among the first researchers to demonstrate the use of the back-propagation algorithm (also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970) for training multi-layer neural networks, which has since been widely used in practical applications.[1]

The dramatic image-recognition milestone of AlexNet, designed by his student Alex Krizhevsky[2] for the 2012 ImageNet challenge,[3] helped to revolutionize the field of computer vision. — Preceding unsigned comment added by Uf11 (talkcontribs) 19:51, 16 November 2018 (UTC)[reply]
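(Editorial aside: the "reverse mode of automatic differentiation" mentioned above is the same mechanism that underlies backpropagation in modern frameworks. The following is a purely illustrative sketch, not code from any of the papers discussed; the `Var` class and its methods are invented for this example.)

```python
import math

class Var:
    """Minimal scalar reverse-mode autodiff node (illustrative sketch)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def tanh(self):
        t = math.tanh(self.value)
        return Var(t, ((self, 1.0 - t * t),))

    def backward(self):
        # Topologically order the graph, then sweep it in reverse,
        # accumulating each parent's gradient via the chain rule.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, local in v.parents:
                p.grad += v.grad * local

# A single neuron: y = tanh(w*x + b)
w, x, b = Var(0.5), Var(2.0), Var(-0.3)
y = (w * x + b).tanh()
y.backward()  # w.grad, x.grad, b.grad now hold dy/dw, dy/dx, dy/db
```

One reverse sweep computes the gradient with respect to every input at a cost comparable to the forward pass, which is what made training multi-layer networks practical.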

References

  1. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986-10-09). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  2. ^ Dave Gershgorn (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz. Retrieved 5 October 2018.
  3. ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2012-12-03). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems 25. Curran Associates Inc.: 1097–1105.

Extra section for high-profile case of plagiarism in the backpropagation paper?[edit]

Hinton's backpropagation paper[1] (he was the second of three authors) did not mention Seppo Linnainmaa, the inventor of the method. This is actually Hinton's most highly cited paper, together with the Krizhevsky paper.[2] At the moment, the article mentions this high-profile case of plagiarism only in passing, although it probably deserves an extra section.

Uf11 (talk) — Preceding undated comment added 18:45, 14 December 2018 (UTC)[reply]

  1. ^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986-10-09). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
  2. ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2012-12-03). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems 25. Curran Associates Inc.: 1097–1105.
Do you have a source that suggests either that Seppo Linnainmaa is the inventor of backpropagation or that any part of this paper was plagiarized? The backpropagation article suggests that there is no single clear inventor, i.e., that the idea has been rediscovered and refined by many researchers. This is the story presented in a journal article[1] by neural network researcher and credit assignment enthusiast Jürgen Schmidhuber. In the absence of a source suggesting otherwise, it seems plausible that the failure to credit previous work reflects multiple discovery rather than plagiarism. 72.140.47.98 (talk) 00:39, 11 March 2019 (UTC)[reply]
We cannot look into the minds of people. But the facts are as follows. In a 2018 interview,[2] Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention." Your well-known 2015 reference[1] (as well as many other references) shows that backpropagation was invented much earlier. You deleted these important facts. I undid that. In fact, Hinton himself says in the same interview: "I have seen things in the press that say that I invented backpropagation, and that is completely wrong." Uf11 (talk) 15:44, 14 April 2019 (UTC)[reply]
I've attempted to make the article a little more unbiased, by noting that the Rumelhart paper was not the first paper to propose backpropagation in the introduction and moving the quote to the section that is actually about backprop. (It's a little weird to focus so much on backprop in the introduction, especially given that Hinton was a middle author on the paper and does not take credit for it himself.) If you do not agree with this, please discuss why here before changing it. 72.140.133.233 (talk) 14:35, 19 April 2019 (UTC)[reply]

Strange table in the article[edit]

Is there a reason for almost an entire section's worth of screen space to be devoted to some kind of strange table of Hinton's quotes during a TV interview paired with interpretation and rephrasing of his (fairly straightforward) words? jp×g 19:06, 8 April 2023 (UTC)[reply]

Came here just to see if my view on this interpretation table was already supported. It will be reduced to a proper paragraph and the table removed. JSory (talk) 15:21, 10 April 2023 (UTC)[reply]
I'm no Wikipedia expert, but the right-side column approaches original research (or some odd attempt at translating quotes into some manifesto-style language). Very odd table indeed. Douglaswyatt (talk) 20:52, 12 April 2023 (UTC)[reply]
Very odd and certainly unencyclopedic. I've been bold and removed it. Maybe someone can reword it into something that makes sense in an actual encyclopedia article although I doubt that's possible without it qualifying as OR. --Cyllel (talk) 14:27, 1 May 2023 (UTC)[reply]
To the creator of the table, if you want more context for why this isn't appropriate, see Wikipedia:Manual_of_Style/Tables#Prose. It's something really more suited to a (certain kind of) blog post than an encyclopedia article. I see no pressing reason that the information from the interview can't be communicated in conventional prose; the ideas are not at all complex or difficult to handle in a way that might justify a novel presentation like a table. --Cyllel (talk) 14:34, 1 May 2023 (UTC)[reply]
  1. ^ a b Schmidhuber, Jürgen (2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637.
  2. ^ Ford, Martin (2018). Architects of Intelligence: The truth about AI from the people building it. Packt Publishing. ISBN 978-1-78913-151-2.