CMPSCI 791DD, Fall 2005
Tuesday and Thursday, 9:30-11:00, CS 150

Instructor
Erik Learned-Miller
Prerequisites
Calculus and a course in probability. Machine learning is useful but not necessary. Computer vision, or at least some exposure to manipulating images on a computer and dealing with arrays of pixels, is expected. Ask me if you're not sure whether you're qualified.
Reading Materials
Information Theory, Inference, and Learning Algorithms by David MacKay. This book may be downloaded for free, but may not be printed; if you want a printed version, you will need to buy it.
Additional Links
11/10/05: Final exam solutions.
10/20/05: Answers to assignment 4, except for the Matlab part.
11/10/05: Cool web page on the vestibular system.
10/20/05: See Shaolei's homework 4 for good examples of Matlab code for homework 4.
Answers to assignment 4, Matlab part.
9/14/05: Unsupervised recognition of keyboard strokes from sound, with NO LABELLED TRAINING EXAMPLES!
9/14/05: Braitenberg Vehicles. These were pointed out by several people in Problem Set 1.
9/14/05: Sample solution to problem set 1.
9/14/05: Another sample solution to problem set 1.
9/12/05: Carl de Marcken's Ph.D. thesis on Unsupervised Language Acquisition. This is one of the best Ph.D. theses I've ever seen. Totally awesome! Go to this page, then scroll down for a link to his Ph.D. thesis.
Problem Sets
Problem sets are due at the beginning of class on the day indicated on the course web page. I will take off 50% for problem sets turned in after lecture starts, as I often want to talk about the solutions in class.
Description
This course explores what it means to "learn to see," but it is about more than just vision: it is really about a methodology for approaching artificial intelligence. While most of the examples I give in this course are motivated by vision problems, many of the concepts, techniques, and results apply equally well to speech recognition, robotics, and other areas of artificial intelligence. In this class, I try to move away from assumptions that might be made in a typical machine learning or artificial intelligence class but that don't really make sense for human beings. Here are some examples.

1) "Supervised learning" is a major component of traditional machine learning classes, but the problem with supervised learning, as applied to real-world artificial intelligence problems, is that it often doesn't make sense to assume one has dozens, hundreds, or thousands of training examples. If not, how do people (or animals) learn so effectively?

2) Modern inference algorithms often assume that underlying distributions are either of a simple parametric form (can you say Gaussian?) or take on just a few discrete values. Obviously, both of these assumptions are false, and human behavior suggests models much richer than these. How do we get around these assumptions? Where do our models come from?
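To make point 2 concrete, here is a minimal sketch (in Python rather than the course's Matlab; the bimodal "pixel intensity" data and the bandwidth choice are illustrative assumptions, not course material). A single Gaussian fit to bimodal data puts its peak between the two modes, exactly where almost no data lives, while a non-parametric kernel density estimate captures both modes:

```python
import math
import random

random.seed(0)

# Illustrative bimodal data: half the "pixel intensities" near 0.2, half near 0.8.
samples = [random.gauss(0.2, 0.05) for _ in range(500)] + \
          [random.gauss(0.8, 0.05) for _ in range(500)]

# Parametric model: a single Gaussian, summarized by just a mean and a std.
mu = sum(samples) / len(samples)
sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / len(samples))

def gaussian_pdf(x):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Non-parametric model: a kernel density estimate, one small Gaussian per sample.
h = 0.03  # kernel bandwidth (an assumed, hand-picked value)
def kde_pdf(x):
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / \
           (len(samples) * h * math.sqrt(2 * math.pi))

# The Gaussian fit is largest near the mean (about 0.5), where the data is
# sparse; the KDE assigns high density to the two real modes instead.
print(f"density at 0.5: Gaussian {gaussian_pdf(0.5):.3f}, KDE {kde_pdf(0.5):.3f}")
print(f"density at 0.2: Gaussian {gaussian_pdf(0.2):.3f}, KDE {kde_pdf(0.2):.3f}")
```

The parametric model is not wrong because it is Gaussian per se, but because two parameters cannot represent a two-mode world; the non-parametric estimate lets the data speak for itself.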
This course will contain a wide mix of topics. It will include discussions of general philosophical issues about approaching artificial intelligence. It will include the reading of original research in computer vision, and possibly in machine learning and other areas. It will also include at least two series of lectures on "technical subjects": the first on Information Theory, the second on Non-parametric Statistics. These are really almost the same thing!
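One way to see the connection between the two technical subjects is that information-theoretic quantities like entropy are defined in terms of a distribution, so estimating them from data requires a density estimate, and a non-parametric one if we refuse parametric assumptions. The sketch below (an illustrative Python example, not course code) estimates the differential entropy of samples with a histogram plug-in estimator and compares it to the closed-form entropy of a unit Gaussian, 0.5 * log(2*pi*e):

```python
import math
import random

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(20000)]

# Non-parametric plug-in estimate: build a histogram density, then compute
# differential entropy H ~= -sum_i p_i * log(p_i / bin_width).
lo, hi, bins = -5.0, 5.0, 100
width = (hi - lo) / bins
counts = [0] * bins
for x in samples:
    i = min(bins - 1, max(0, int((x - lo) / width)))
    counts[i] += 1

h_est = 0.0
for c in counts:
    if c:
        p = c / len(samples)
        h_est -= p * math.log(p / width)

# Closed-form differential entropy of a unit Gaussian: 0.5 * log(2*pi*e).
h_true = 0.5 * math.log(2 * math.pi * math.e)
print(f"histogram estimate {h_est:.3f} vs true Gaussian entropy {h_true:.3f}")
```

The estimator never assumes a parametric form; it recovers the Gaussian's entropy from the samples alone, which is why non-parametric statistics and information theory go hand in hand.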
Schedule