As you know, the first test will be Tuesday Oct. 7 from 7:15 to 8:45
in the evening in Hasbrouck 134. For those interested, Creidieki will
hold a review session that day at the usual class time and room.
The test is closed book and closed notes, with no
calculators, computers, cell phones, etc. allowed. This is because
I want you to understand the material beforehand. As promised though,
the test will include
a statement of the master theorem.
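For reference, the common textbook (CLRS-style) form of the master theorem is given below; the exact wording on the test may differ in detail:

```latex
% Master theorem, standard form, for recurrences
\[
T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\; b > 1.
\]
\begin{enumerate}
  \item If $f(n) = O(n^{\log_b a - \varepsilon})$ for some $\varepsilon > 0$,
        then $T(n) = \Theta(n^{\log_b a})$.
  \item If $f(n) = \Theta(n^{\log_b a})$,
        then $T(n) = \Theta(n^{\log_b a} \log n)$.
  \item If $f(n) = \Omega(n^{\log_b a + \varepsilon})$ for some $\varepsilon > 0$,
        and $a\,f(n/b) \le c\,f(n)$ for some $c < 1$ and all sufficiently large $n$,
        then $T(n) = \Theta(f(n))$.
\end{enumerate}
```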
I try hard to make my exams like my problem sets, only much easier,
because the test is closed book and the time limit is rather short.
To study for the test, I urge you to do the following:
- Go over the four homeworks, and their model solutions, making sure that you now
understand all of these!
- Go over your notes from the first seven lectures, and the
corresponding readings. In
particular, we studied a bunch of topics and algorithms that I expect you to be
comfortable and familiar with. Below is a list of the topics and algorithms that I
have in mind.
- Relax and get a good night's sleep before taking the test.
Main Topics Covered
- Asymptotic analysis: mostly big-O, but also big-Omega and
big-Theta.
- Number Theoretic Algorithms, and a little bit of number theory
concerning modular arithmetic and Fermat's Little Theorem.
- Divide and Conquer Algorithms: how this works; how to come up with the relevant
recurrence equation; how to solve the recurrence equation using the master theorem.
- Sorting algorithms and the Omega(n log n) lower bound for comparison sorts.
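To make the divide-and-conquer topics concrete, here is a minimal merge sort sketch (my own illustration, not code from lecture). Its recurrence T(n) = 2T(n/2) + Theta(n) falls under case 2 of the master theorem, giving T(n) = Theta(n log n), which matches the comparison-sort lower bound:

```python
def merge(left, right):
    """Merge two sorted lists in Theta(len(left) + len(right)) time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # at most one of these
    out.extend(right[j:])  # two extends is nonempty
    return out

def merge_sort(a):
    """Divide in half, recurse on each half, merge the sorted halves."""
    if len(a) <= 1:                  # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # T(n/2)
    right = merge_sort(a[mid:])      # T(n/2)
    return merge(left, right)        # Theta(n) combine step

print(merge_sort([5, 2, 9, 1, 5, 6]))  # -> [1, 2, 5, 5, 6, 9]
```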
Main Algorithms Studied
- Number theoretic algorithms: Euclid's algorithm, Modular
exponentiation, Fast Primality testing using Fermat's Little
Theorem, and RSA.
- Merge sort, Bucket Sort, and Radix Sort.
- Strassen's Fast Matrix Multiplication Algorithm and Fast Extended
Precision Integer Multiplication.
- We didn't cover universal hashing, but I will assume that your general
knowledge about dictionaries from 187 includes the following two facts:
- There are good classes of hash functions, computable in constant
time, that map about n items from a large key space into O(n) buckets so
that, with high probability, the length of each bucket is
bounded by a constant, although the worst-case length is n. This
allows us to have a dictionary with average-case time O(1) per
operation, but worst-case time O(n) per operation.
- We can also build balanced tree dictionaries with O(log n) worst-case
time per operation.
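As a refresher on the number-theoretic algorithms listed above, here is a rough sketch in my own notation (these are the standard textbook versions; the details in lecture may have differed). Note that the Fermat test only says "probably prime," since Carmichael numbers can fool it:

```python
def gcd(a, b):
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def mod_exp(base, exp, m):
    """Modular exponentiation by repeated squaring: O(log exp) multiplications."""
    result = 1
    base %= m
    while exp > 0:
        if exp & 1:                        # this bit of exp is set
            result = (result * base) % m
        base = (base * base) % m           # square for the next bit
        exp >>= 1
    return result

def fermat_probably_prime(n, witnesses=(2, 3, 5, 7)):
    """Fermat test: if a^(n-1) mod n != 1 for some a, then n is composite.
    If every witness passes, n is only *probably* prime."""
    if n < 2:
        return False
    for a in witnesses:
        if a % n == 0:
            continue                       # skip witnesses divisible by n
        if mod_exp(a, n - 1, n) != 1:
            return False                   # definitely composite
    return True

print(gcd(252, 105))               # -> 21
print(mod_exp(7, 128, 13))         # -> 3
print(fermat_probably_prime(101))  # -> True (101 is prime)
print(fermat_probably_prime(100))  # -> False
```

RSA then builds directly on these pieces: keys come from modular inverses (via the extended form of Euclid's algorithm), and encryption/decryption are modular exponentiations.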
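To illustrate the two dictionary facts above, here is a toy chained hash table (a hypothetical sketch, not something we built in class). With a good hash function and enough buckets, chains stay short and operations average O(1); if all keys collide into one bucket, each operation degrades to a linear scan:

```python
class ChainedDict:
    """Toy hash table with separate chaining. Bucket count is fixed here
    for simplicity; a real table would resize to keep O(n) buckets."""

    def __init__(self, num_buckets=64):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # cost = length of this chain

    def get(self, key):
        for k, v in self._bucket(key):   # linear scan of one chain
            if k == key:
                return v
        raise KeyError(key)

d = ChainedDict()
for i in range(100):
    d.put(i, i * i)
print(d.get(7))  # -> 49
```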