Comp. Sci. 311 Study Guide for the First Test, Fall 2008

As you know, the first test will be Tuesday Oct. 7 from 7:15 to 8:45 in the evening in Hasbrouck 134. For those interested, Creidieki will hold a review session that day at the usual class time and room.

The test is closed book and closed notes, with no calculators, computers, cell phones, etc. allowed. This is because I want you to understand the material beforehand. As promised, though, the test will include a statement of the master theorem.

I try hard to make my exams like my problem sets, only much easier, since the exam is closed book and the time limit is rather short.

To study for the test, I urge you to do the following:

  1. Go over the four homeworks and their model solutions, making sure that you now understand all of them!

  2. Go over your notes from the first seven lectures, and the corresponding readings. In particular, we studied a bunch of topics and algorithms that I expect you to be comfortable and familiar with. Below is a list of the topics and algorithms that I have in mind.

  3. Relax and get a good night's sleep before taking the test.

Main Topics Covered

  1. Asymptotic analysis: mostly big-oh, but also big-omega and big-theta.

  2. Number-theoretic algorithms, and a little bit of number theory concerning modular arithmetic and Fermat's Little Theorem.

  3. Divide and conquer algorithms: how this works; how to come up with the relevant recurrence equation; how to solve the recurrence equation using the master theorem. (A worked example appears right after this list.)

  4. Sorting algorithms and the Omega(n log n) lower bound for comparison sorts.
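
For example, here is the kind of recurrence work I have in mind, using the merge sort recurrence. (This is only a sketch in one common notation; use whatever statement of the master theorem appears on the test.)

    Merge sort makes a = 2 recursive calls on inputs of size n/b = n/2, plus
    Theta(n) work to split and merge:

        T(n) = 2 T(n/2) + Theta(n).

    Here n^(log_b a) = n^(log_2 2) = n, which matches the Theta(n) work per
    call, so the "balanced" case of the master theorem gives
    T(n) = Theta(n log n).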

Main Algorithms Studied

  1. Number-theoretic algorithms: Euclid's algorithm, modular exponentiation, fast primality testing using Fermat's Little Theorem, and RSA. (A short sketch of these appears after this list.)
  2. Merge sort, bucket sort, and radix sort. (Merge sort is sketched below.)
  3. Strassen's fast matrix multiplication algorithm and fast extended-precision integer multiplication. (The integer multiplication is sketched below.)
  4. We didn't cover universal hashing, but I will assume that your general knowledge about dictionaries from 187 includes the following two facts:
    1. There are good classes of hash functions that let us map about n items from a large key space into O(n) buckets, computing each hash in constant time, so that the expected length of each bucket is bounded by a constant, although the worst-case length is n. This gives a dictionary with average-case time O(1) per operation but worst-case time O(n) per operation. (A toy sketch of such a chained dictionary appears after this list.)
    2. We can also build balanced-tree dictionaries with O(log n) worst-case time per operation.
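
For those who find code easier to review than notes, here is a minimal Python sketch of the number-theoretic algorithms. This is my own illustration rather than the lecture pseudocode, and the Fermat test shown is the simple version, which Carmichael numbers can fool.

    def gcd(a, b):
        # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b); gcd(a, 0) = a.
        while b != 0:
            a, b = b, a % b
        return a

    def modexp(base, exponent, modulus):
        # Modular exponentiation by repeated squaring:
        # O(log exponent) multiplications instead of O(exponent).
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:                    # low bit set: fold base in
                result = (result * base) % modulus
            base = (base * base) % modulus      # square for the next bit
            exponent >>= 1
        return result

    def fermat_test(n, witnesses=(2, 3, 5, 7)):
        # Fermat's Little Theorem: if n is prime and gcd(a, n) = 1,
        # then a^(n-1) = 1 (mod n). A witness violating this proves n
        # composite; passing every witness only makes n "probably prime"
        # (Carmichael numbers such as 561 fool every coprime witness).
        if n < 2:
            return False
        for a in witnesses:
            if a % n == 0:
                continue
            if modexp(a, n - 1, n) != 1:
                return False
        return True

RSA then uses modexp for both directions: the ciphertext is modexp(m, e, n) and the plaintext comes back as modexp(c, d, n), where (e, n) is the public key and (d, n) the private key.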
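
Here is a matching sketch of merge sort, the algorithm behind the worked recurrence above:

    def merge_sort(xs):
        # Divide and conquer: T(n) = 2 T(n/2) + Theta(n) = Theta(n log n).
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

    def merge(left, right):
        # Merge two sorted lists in linear time.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out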
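
For fast extended-precision integer multiplication, I assume the usual Karatsuba-style divide and conquer, sketched here with Python integers standing in for multi-digit numbers. Three half-size products replace four, so T(n) = 3 T(n/2) + Theta(n), which the master theorem solves as Theta(n^(log_2 3)), about Theta(n^1.59).

    def karatsuba(x, y):
        # Split x = a*2^m + b and y = c*2^m + d; then
        #   x*y = ac*2^(2m) + ((a+b)(c+d) - ac - bd)*2^m + bd,
        # using only three recursive multiplications.
        # Assumes nonnegative integers.
        if x < 16 or y < 16:          # small enough: multiply directly
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        a, b = x >> m, x & ((1 << m) - 1)
        c, d = y >> m, y & ((1 << m) - 1)
        ac = karatsuba(a, c)
        bd = karatsuba(b, d)
        mid = karatsuba(a + b, c + d) - ac - bd
        return (ac << (2 * m)) + (mid << m) + bd

Strassen's algorithm has the same flavor: seven half-size matrix products instead of eight, giving Theta(n^(log_2 7)) by the same kind of recurrence.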
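
Finally, a toy illustration of the chaining fact in item 4, using Python's built-in hash as a stand-in for a well-chosen hash function:

    class ChainedDict:
        # About n keys hashed into O(n) buckets, each bucket a list
        # ("chain"). With a good hash function the chains stay constant
        # length on average, so operations run in O(1) expected time; in
        # the worst case all keys collide and operations take O(n).
        def __init__(self, num_buckets):
            self.buckets = [[] for _ in range(num_buckets)]

        def _chain(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def put(self, key, value):
            chain = self._chain(key)
            for i, (k, _) in enumerate(chain):
                if k == key:
                    chain[i] = (key, value)   # replace existing entry
                    return
            chain.append((key, value))

        def get(self, key):
            for k, v in self._chain(key):
                if k == key:
                    return v
            raise KeyError(key)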