CSE 5525: Foundations of Spoken Language Processing
CSE 5539: Seminar in Artificial Intelligence
CSE 5914: Capstone: Knowledge-based systems
In teaching various AI classes, I have created a number of videos demonstrating several learning techniques covered in CSE 630/730; these are based on MATLAB code that I hacked together over the years. Since I've had a number of requests for the videos, I'm placing them here for the community to use. You are welcome to use or modify the videos as you see fit; I do ask that you credit me if you show them somewhere. (Keeping the title slide with my name in small print satisfies this.) The QuickTime versions are much smaller (and lower quality) than the MP4 versions. Warning: in general the video quality isn't very high because of the compression. I may get around to redoing them someday. Maybe....
Perceptron Learning Rule demo: This demo shows the PLR learning a separating hyperplane for two separable classes in 2-dimensional data. The thing to note is that at convergence, the red line (hypothesis) does not match the true function (blue dashed line) -- the end hypothesis depends on the training data. (Click the movie below, or download Quicktime or mp4 format.)
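For readers who want to play with the update themselves, here is a minimal Python/NumPy sketch of the perceptron learning rule on 2-D separable data. The "true" separator, the margin filter, and the data sizes are my illustrative choices, not the parameters from the original MATLAB demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "true" separating line w_true . x + b_true = 0 (illustrative choice)
w_true, b_true = np.array([1.0, -1.0]), 0.2
X = rng.uniform(-1, 1, size=(300, 2))
X = X[np.abs(X @ w_true + b_true) > 0.1]   # keep a margin so the PLR converges quickly
y = np.where(X @ w_true + b_true > 0, 1, -1)

# Perceptron learning rule: update weights only on misclassified points
w, b = np.zeros(2), 0.0
for epoch in range(1000):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:         # misclassified (or on the boundary)
            w += yi * xi                   # rotate the hypothesis toward the point
            b += yi
            errors += 1
    if errors == 0:                        # every point is on the correct side
        break

print("epochs used:", epoch + 1, "final w:", w, "b:", b)
```

As in the video, the final (w, b) separates the training data but generally does not match (w_true, b_true).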
K-means demo: This demo shows k-means clustering of 4 and 8 data clusters in 2 dimensions. I use this demo as a jumping-off point to show that random initialization can lead to "incorrect" clusters. The first demo has 4 relatively separate data clusters, and k-means comes up with a pretty good clustering. Each iteration first shows a mean assignment, and then the data points are colored according to the closest mean. The iterations continue until convergence. The second movie shows the example with 8 means and a "bad" initialization. After 11 shots at initialization, I did get one that worked (demo 3). Someone in the class usually picks up on the fact that the number of "clusters" and the number of means are matched in the video, which can be used to generate a discussion of different techniques for finding the number of means. (Click the movie below, or download Quicktime or mp4 format.)
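The assign/update loop shown in the video can be sketched in a few lines of Python/NumPy. The cluster centers, spreads, and the choice of initializing from random data points are illustrative assumptions; with a different seed you can reproduce the "bad initialization" behavior the second movie demonstrates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four well-separated 2-D clusters (a stand-in for the demo's data)
centers = np.array([[-3, -3], [-3, 3], [3, -3], [3, 3]], dtype=float)
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])

def kmeans(X, k, iters=100):
    # Random initialization: pick k data points as the starting means
    means = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assignment step: color each point by its closest mean
        d = np.linalg.norm(X[:, None] - means[None, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each mean to the centroid of its points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else means[j] for j in range(k)])
        if np.allclose(new, means):       # converged: means stopped moving
            break
        means = new
    return means, labels

means, labels = kmeans(X, k=4)
print(means)
```

Nothing here guarantees a "correct" clustering -- the result depends on which points the initialization happens to pick, which is exactly the discussion point in the demo.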
1-dimensional Gaussian EM demo: This demo shows learning of a 3-mixture of Gaussians over one-dimensional data. I particularly like this demo because it shows how the means and variances evolve over EM iterations. The means are represented by the center of the line in each graph; the variance is represented by the width of the line. The data were generated by three overlapping Gaussians, and in this case the EM algorithm does recover the means and variances of the original generating functions pretty accurately. (Click the movie below, or download Quicktime or mp4 format.)
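The E-step/M-step updates that the movie animates can be sketched as follows. This is a hedged re-implementation in Python/NumPy, not the demo's MATLAB code; the generating means, variances, and sample counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from three overlapping 1-D Gaussians (illustrative parameters)
true_mu, true_sigma = [-2.0, 0.0, 2.5], [0.7, 0.6, 0.8]
x = np.concatenate([rng.normal(m, s, 300) for m, s in zip(true_mu, true_sigma)])

K = 3
pi = np.full(K, 1 / K)                    # mixture weights
mu = rng.choice(x, K)                     # initialize means from data points
var = np.full(K, x.var())                 # start with the global variance

def gauss(x, mu, var):
    # 1-D Gaussian density, broadcast over components
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(200):
    # E-step: responsibility of each component for each point, shape (N, K)
    r = pi * gauss(x[:, None], mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted weight, mean, and variance updates
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu), np.sort(var))
```

With overlapping components, EM may or may not land exactly on the generating parameters -- the demo shows a run where it does.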
2-dimensional Gaussian EM demo: This is similar to the 1-d demo, but uses data that are 2-dimensional. I used diagonal covariance matrices here (I was too lazy to code up the full covariance version :-), but this also gives me the ability to talk about what happens if you use diagonal covariance matrices (notice the axis-parallel ellipsoids). This is a case where the EM algorithm does not converge to the same parameters that generated the data (pretty obvious in the movie). The particular algorithm I used here was to start with the global mean of the data, then split the Gaussian by cloning the mean and perturbing it slightly, and let EM converge. I then iteratively split each Gaussian, perturb, and let EM converge again. This is similar to some techniques used in speech recognition Gaussian modeling, but it isn't the only way to do this.
(Click the movie below, or download Quicktime or mp4 format.)
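The split-and-retrain scheme described above can be sketched like this: start from one diagonal-covariance Gaussian at the global mean, then repeatedly clone each mean, perturb it, and let EM reconverge, doubling the number of components each round. The data, perturbation scale, and variance floor are my illustrative assumptions, not values from the original demo.

```python
import numpy as np

rng = np.random.default_rng(3)

# 2-D data from four clusters (a stand-in for the demo's data)
centers = np.array([[-2, -2], [-2, 2], [2, -2], [2, 2]], dtype=float)
X = np.vstack([c + rng.normal(scale=0.6, size=(100, 2)) for c in centers])

def diag_gauss(X, mu, var):
    # Diagonal-covariance Gaussian density of each row of X under each component
    z = (X[:, None] - mu) ** 2 / var                  # (N, K, D)
    norm = np.sqrt((2 * np.pi) ** X.shape[1] * var.prod(axis=1))
    return np.exp(-0.5 * z.sum(axis=2)) / norm        # (N, K)

def em(X, pi, mu, var, iters=50):
    for _ in range(iters):
        r = pi * diag_gauss(X, mu, var)               # E-step responsibilities
        r /= r.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)                            # M-step, diagonal covariance
        pi = nk / len(X)
        mu = (r[..., None] * X[:, None]).sum(axis=0) / nk[:, None]
        var = (r[..., None] * (X[:, None] - mu) ** 2).sum(axis=0) / nk[:, None]
        var = np.maximum(var, 1e-4)                   # floor to avoid collapse
    return pi, mu, var

# One Gaussian at the global mean, then split twice: 1 -> 2 -> 4 components
pi = np.ones(1)
mu = X.mean(axis=0, keepdims=True)
var = X.var(axis=0, keepdims=True)
for _ in range(2):
    eps = rng.normal(scale=0.1, size=mu.shape)
    mu = np.vstack([mu + eps, mu - eps])              # clone and perturb each mean
    var = np.vstack([var, var])
    pi = np.concatenate([pi, pi]) / 2                 # split each weight in half
    pi, mu, var = em(X, pi, mu, var)

print(mu.shape)  # (4, 2) after two splits
```

Note the diagonal variances: each fitted component can only form an axis-parallel ellipsoid, which is the artifact pointed out in the movie.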
Geoff Zweig and I gave a tutorial at Interspeech 2010 on Conditional Random Fields and Direct Modeling. The PowerPoint slides and PDF slides from that tutorial are available.
Last modified: Wed Jul 6 12:02:40 EDT 2016