
Artificial intelligence celebrity archives - Fei-Fei Li, Yann LeCun

Author: rykler

1 Fei-Fei Li

She is currently a professor of computer science at Stanford University, a member of the National Academy of Engineering, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), co-founder and chair of AI4ALL, and an independent director on Twitter's board. Her main research areas are machine learning, computer vision, and computational cognitive neuroscience. She has received the Sloan Research Fellowship in computer science and the Impact World Chinese Award, was named one of the "Top 100 Global Thinkers" of 2015, and was included in a 2017 list of 50 influential Chinese. Her 2015 TED talk, "How we're teaching computers to understand pictures," was a hit.


Most notable contribution: Construction of the ImageNet dataset

Fei-Fei Li's work on data has reshaped artificial intelligence research; in that sense, it can fairly be said to have "changed the world."
  • ImageNet:

The ImageNet project was initiated by Professor Fei-Fei Li in 2007. Her team spent two and a half years building a database of 15 million photos covering 22,000 object categories; in 2009 they published the paper "ImageNet: A Large-Scale Hierarchical Image Database" and made the dataset freely available. It made little splash at the time; even the simple notion that more data could improve an algorithm was met with considerable skepticism.

The big turning point for ImageNet was the ImageNet Challenge: Li convinced the organizers of the well-known image recognition competition PASCAL VOC to co-host the event with ImageNet. Although the PASCAL competition was high-profile and its dataset of high quality, it had very few categories, only 20, compared with ImageNet's 1,000. As the competition continued, the collaboration with PASCAL became the benchmark for how well image classification algorithms performed on the most complex image dataset of the time. The ImageNet Challenge ran from 2010 until 2017, after which the dataset has been hosted and maintained on Kaggle.

"People were surprised to find that models trained with ImageNet could be used as starters for other recognition tasks. You can train a model with ImageNet and then debug the model for other tasks, which is not only a breakthrough in neural networks, but also a major advance in recognition. ”

ImageNet genuinely changed how "data" is perceived in AI: it made people realize that datasets sit at the heart of AI research and are just as important as algorithms.
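As a concrete illustration of the "pretrain on ImageNet, then fine-tune" recipe described in the quote above, here is a minimal sketch using PyTorch and torchvision; the choice of ResNet-18, the 10-class target task, and the hyperparameters are illustrative assumptions, not details from the original projects.

```python
# Minimal sketch: reuse an ImageNet-pretrained backbone for a new task.
# Assumes PyTorch and torchvision (>= 0.13) are installed.
import torch
import torch.nn as nn
from torchvision import models

# Load a model whose weights were pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-way ImageNet classifier with a head for the new task.
num_classes = 10  # hypothetical downstream task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Freezing the backbone is just one variant; in practice people often unfreeze some or all layers and fine-tune them with a smaller learning rate.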

Reference: https://www.zhihu.com/question/30990652

2 Jia Deng

A PhD student of Fei-Fei Li's, with a bachelor's degree in computer science from Tsinghua University, Jia Deng is currently an assistant professor in the Department of Computer Science and Engineering at the University of Michigan. He is a recipient of the Yahoo ACE Award, the ICCV Marr Prize, and an ECCV Best Paper Award. He helped Fei-Fei Li run the ImageNet project and co-organized the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) from 2010 until 2017, and he was a main organizer of the BigVision workshops at NIPS 2012 and CVPR 2014.

Representative papers:

3 Geoffrey Everest Hinton

A Canadian cognitive psychologist and computer scientist, Hinton is known as the "father of neural networks," the "originator of deep learning," and the "godfather of artificial intelligence." He is currently a vice president and Engineering Fellow at Google, a professor emeritus at the University of Toronto, and chief scientific advisor of the Vector Institute. In 2018 he shared the Turing Award as one of the three pioneers of deep learning, and in 2017 he was named one of the 50 people who changed the global business landscape.

Main contributions: popularizing backpropagation for training multi-layer neural networks, co-inventing the Boltzmann machine, introducing layer-by-layer pre-training, which opened the era of deep learning, and proposing the capsule network.
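To make the first of these contributions concrete, here is a toy sketch of backpropagation through a two-layer network in plain NumPy, in the spirit of Rumelhart, Hinton, and Williams; the network sizes, data, and learning rate are arbitrary placeholders.

```python
# Toy backpropagation on a two-layer network (linear -> sigmoid -> linear).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # 4 samples, 3 features (made-up data)
y = rng.normal(size=(4, 1))   # regression targets

W1 = rng.normal(scale=0.1, size=(3, 5))
W2 = rng.normal(scale=0.1, size=(5, 1))
lr = 0.1

for step in range(100):
    # Forward pass.
    h = 1.0 / (1.0 + np.exp(-X @ W1))   # hidden layer with sigmoid
    y_hat = h @ W2
    loss = ((y_hat - y) ** 2).mean()

    # Backward pass: apply the chain rule layer by layer.
    d_yhat = 2 * (y_hat - y) / len(y)
    d_W2 = h.T @ d_yhat
    d_h = d_yhat @ W2.T
    d_W1 = X.T @ (d_h * h * (1 - h))    # sigmoid derivative is h * (1 - h)

    # Gradient descent update.
    W1 -= lr * d_W1
    W2 -= lr * d_W2
```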

Representative papers:

  1. Use of the backpropagation algorithm: Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Cognitive modeling, 1988, 5(3): 1.
  2. TDNNs open CNN-style speech recognition: Waibel A, Hanazawa T, Hinton G, et al. Phoneme recognition using time-delay neural networks[J]. Backpropagation: Theory, Architectures and Applications, 1995: 35-61.
  3. Learning of DBN networks: Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554.
  4. The beginning of deep learning: Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507.
  5. The dimensionality-reduction and visualization method t-SNE: Maaten L, Hinton G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(Nov): 2579-2605.
  6. The DBM model: Salakhutdinov R, Hinton G. Deep Boltzmann machines[C]//Artificial Intelligence and Statistics. 2009: 448-455.
  7. Use of the ReLU activation function: Nair V, Hinton G E. Rectified linear units improve restricted Boltzmann machines[C]//Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010: 807-814.
  8. Training of RBM models: Hinton G E. A practical guide to training restricted Boltzmann machines[M]//Neural Networks: Tricks of the Trade. Springer, Berlin, Heidelberg, 2012: 599-619.
  9. Deep learning speech recognition begins: Hinton G, Deng L, Yu D, et al. Deep neural networks for acoustic modeling in speech recognition[J]. IEEE Signal Processing Magazine, 2012, 29.
  10. Deep learning image recognition opens with AlexNet: Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems. 2012: 1097-1105.
  11. Research on weight initialization and momentum optimization: Sutskever I, Martens J, Dahl G, et al. On the importance of initialization and momentum in deep learning[C]//International Conference on Machine Learning. 2013: 1139-1147.
  12. The dropout method: Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
  13. The Big Three's review of deep learning: LeCun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
  14. The knowledge distillation algorithm (a loss sketch follows this list): Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015.
  15. The capsule network: Sabour S, Frosst N, Hinton G E. Dynamic routing between capsules[C]//Advances in Neural Information Processing Systems. 2017: 3856-3866.
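As promised at item 14, here is a hedged sketch of the distillation loss from "Distilling the knowledge in a neural network": the student is trained to match the teacher's temperature-softened output distribution. PyTorch is assumed, and the logits below are random stand-ins rather than outputs of real teacher and student models.

```python
# Sketch of the soft-target distillation loss (Hinton, Vinyals & Dean, 2015).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T, then take the KL
    # divergence; the T**2 factor keeps gradient magnitudes comparable
    # across temperatures, as noted in the paper.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * T**2

student_logits = torch.randn(8, 10, requires_grad=True)  # batch of 8, 10 classes
teacher_logits = torch.randn(8, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```

In the paper this soft-target term is combined with the ordinary cross-entropy on the true labels, weighted against each other; only the distillation term is shown here.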
Reference: https://www.sohu.com/a/328382912_120233360

4 Alex Krizhevsky

First author of the AlexNet paper and a PhD student of Hinton's at the University of Toronto.

(From left to right: Ilya Sutskever, Alex Krizhevsky, Geoffrey Hinton)

Representative papers:

5 Yann LeCun

A former postdoc of Hinton's and the father of convolutional neural networks (CNNs), LeCun is a tenured professor at New York University and the former head of Facebook AI Research (FAIR). He has served as a reviewer for IJCV, PAMI, and IEEE Transactions journals, and he founded the ICLR (International Conference on Learning Representations) conference, co-chairing it with Yoshua Bengio. In 2014 he received the IEEE Neural Networks Pioneer Award, and in 2018 he won the Turing Award.

Major contributions: developed LeNet-5 in 1998 and produced the classic MNIST dataset, which Hinton called "the fruit fly of machine learning."
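For concreteness, here is a LeNet-5-style network sketched in PyTorch for 28x28 MNIST digits. It follows the spirit of the 1998 architecture rather than reproducing it exactly (the original used 32x32 inputs and trainable subsampling layers).

```python
# A LeNet-5-style convolutional network for 28x28 grayscale digits.
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNet5()(torch.randn(1, 1, 28, 28))  # -> shape (1, 10)
```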

Representative papers:

  1. Backpropagation and neural networks for handwritten digit recognition: LeCun Y, Boser B, Denker J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4): 541-551.
  2. Early study of weight pruning: LeCun Y, Denker J S, Solla S A. Optimal brain damage[C]//Advances in Neural Information Processing Systems. 1990: 598-605.
  3. Siamese networks for signature verification: Bromley J, Guyon I, LeCun Y, et al. Signature verification using a "Siamese" time delay neural network[C]//Advances in Neural Information Processing Systems. 1994: 737-744.
  4. The LeNet-5 convolutional neural network: LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
  5. Theoretical analysis of max and average pooling (a small demo follows this list): Boureau Y L, Ponce J, LeCun Y. A theoretical analysis of feature pooling in visual recognition[C]//Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010: 111-118.
  6. The DropConnect method: Wan L, Zeiler M, Zhang S, et al. Regularization of neural networks using DropConnect[C]//International Conference on Machine Learning. 2013: 1058-1066.
  7. The OverFeat detection framework: Sermanet P, Eigen D, Zhang X, et al. OverFeat: Integrated recognition, localization and detection using convolutional networks[J]. arXiv preprint arXiv:1312.6229, 2013.
  8. CNNs for stereo matching: Zbontar J, LeCun Y. Computing the stereo matching cost with a convolutional neural network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1592-1599.
  9. The Big Three's review of deep learning: LeCun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
  10. EBGAN: Zhao J, Mathieu M, LeCun Y. Energy-based generative adversarial network[J]. arXiv preprint arXiv:1609.03126, 2016.
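As promised at item 5, here is a tiny demonstration of the max- versus average-pooling behavior that Boureau, Ponce, and LeCun analyze: on a sparse feature map, max pooling preserves a strong isolated activation while average pooling dilutes it. The 4x4 feature map below is a made-up example.

```python
# Max vs. average pooling on a sparse feature map.
import torch
import torch.nn as nn

fmap = torch.zeros(1, 1, 4, 4)
fmap[0, 0, 1, 2] = 5.0  # one strong, isolated feature response

max_pool = nn.MaxPool2d(kernel_size=2)
avg_pool = nn.AvgPool2d(kernel_size=2)

# Max pooling keeps the sparse peak intact (5.0 survives in its window);
# average pooling spreads it across the window (5.0 / 4 = 1.25).
print(max_pool(fmap))
print(avg_pool(fmap))
```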
Reference: https://www.sohu.com/a/328598636_120233360
