Neural Networks for Machine Learning
Geoffrey Hinton
Learn about artificial neural networks and how they're being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc. We'll emphasize both the basic algorithms and the practical tricks needed to get them to work well.
Announcements
Syllabus
The course syllabus will be finalized in the next few days. In the meantime, the topics of the first 9 lectures have been decided, and I have also included a list of topics that will be covered in the last 7 lectures.
Lecture 1: Introduction
Lecture 2: The Perceptron learning procedure
Lecture 3: The backpropagation learning procedure
Lecture 4: Learning feature vectors for words
Lecture 5: Object recognition with neural nets
Lecture 6: Optimization: How to make the learning go faster
Lecture 7: Recurrent neural networks and advanced optimization
Lecture 8: How to make neural networks generalize better
Lecture 9: Combining multiple neural networks to improve generalization
Topics to be covered in Lectures 10-16
Deep Autoencoders (including semantic hashing and image search with binary codes)
Hopfield Nets and Simulated Annealing
Boltzmann machines and the general learning algorithm
Restricted Boltzmann machines and contrastive divergence learning
Applications of Restricted Boltzmann machines to collaborative filtering and document modeling
Stacking restricted Boltzmann machines or shallow autoencoders to make deep nets
The wake-sleep algorithm and its contrastive version
Recent applications of generatively pre-trained deep nets
Deep Boltzmann machines and how to pre-train them
Modeling hierarchical structure with neural nets