## Lecture 3

1. What do virtual methods let you do? How would you make and use one?
• They let you tell an object to do something, but how it does it is up to the object.
• You make one by making a method in a base class/interface, and then overriding/implementing it in subclasses.
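For example, a minimal Java sketch (the `Animal`/`Dog` names are just for illustration — in Java, instance methods are virtual by default):

```java
// Base class declares the method.
class Animal {
    String speak() { return "..."; }
}

// Subclass overrides it; HOW it speaks is up to the object.
class Dog extends Animal {
    @Override
    String speak() { return "Woof"; }
}

class VirtualDemo {
    // The caller only knows it has an Animal; the actual object decides
    // which version of speak() runs.
    static String makeItSpeak(Animal a) {
        return a.speak();
    }
}
```

Calling `makeItSpeak(new Dog())` runs the `Dog` version, even though the parameter's declared type is `Animal`.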
2. What is polymorphism, and why is it a useful tool?
• Polymorphism of any kind is when you write one piece of code, but which of many behaviors actually happens is decided later (e.g. by the object’s run-time type).
• It’s useful for abstraction, mostly - it lets you say “what” to do without saying “how.”
3. In Java, you can use an Object variable to hold any type of value. But they introduced generics to solve some problems with that. What are some of the problems with using the Object variable approach that generics solve?
• The main one is being able to check for type correctness at compile time instead of at run time.
• They also allow you to put constraints on types, such as “this type must be comparable.”
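A small sketch of both points (the method names here are hypothetical, just to contrast the two approaches):

```java
import java.util.List;

class GenericsDemo {
    // With Object, the cast happens (and can fail) at RUN time.
    static String firstViaObject(List<Object> items) {
        return (String) items.get(0); // ClassCastException if it's not a String
    }

    // With generics, the compiler checks the element type at COMPILE time.
    static String firstViaGenerics(List<String> items) {
        return items.get(0); // no cast needed
    }

    // A bounded type parameter: T must be comparable to itself.
    static <T extends Comparable<T>> T max(List<T> items) {
        T best = items.get(0);
        for (T item : items)
            if (item.compareTo(best) > 0)
                best = item;
        return best;
    }
}
```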
4. ADT stands for “Abstract Data Type.” What kinds of things does an ADT specify? What kinds of things does it NOT specify?
• It specifies:
• What data a data type holds
• What things you can do to the data
• Contracts and invariants (e.g. you can’t pop from an empty stack)
• It does not specify:
• The actual code that implements these actions
• The actual underlying representation of the data (e.g. array vs. linked list)
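As an illustration of this split, here is one way it might look in Java (hypothetical names): the interface carries the operations and documented contracts, while an implementation supplies the code and the representation.

```java
// The interface pins down WHAT operations exist (and their types),
// but not HOW they work or what representation backs them.
interface IntStack {
    /** Adds x to the top. Afterwards, size() grows by 1. */
    void push(int x);

    /** Removes and returns the top item. Must not be called when empty. */
    int pop();

    /** Returns the number of items currently stored. */
    int size();
}

// One possible representation: a linked list of nodes. An array would do too,
// and callers of the interface could never tell the difference.
class LinkedIntStack implements IntStack {
    private static class Node { int value; Node next; }
    private Node top;
    private int count;

    public void push(int x) {
        Node n = new Node();
        n.value = x;
        n.next = top;
        top = n;
        count++;
    }

    public int pop() {
        if (top == null) throw new IllegalStateException("pop from empty stack");
        int v = top.value;
        top = top.next;
        count--;
        return v;
    }

    public int size() { return count; }
}
```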

## Lecture 4

1. When implementing an ADT, our programming languages can enforce some things (like method type signatures), but not other things. Give two examples of things that cannot be enforced by Java, but which are still important to know about an ADT.
• What the method should actually do to the object
• should the size change?
• should the item be in the data structure afterwards?
• etc.
• The conditions under which exceptions are thrown
• you can say “throws IllegalArgumentException” in the interface
• but you can’t say “throws IllegalArgumentException if size < 0” or something
• Other “contractual” things, like “should never return null” or whatever
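A small hypothetical example of the exception point: Java checks the signature and the `throws` clause, but the “if n is negative” condition can only live in the documentation.

```java
interface Bag {
    /**
     * Java enforces this signature and lets us list the exception,
     * but it cannot enforce WHEN it's thrown. The docs carry that part:
     * throws IllegalArgumentException if n is negative.
     */
    void addCopies(int n) throws IllegalArgumentException;
}

// Honoring the documented condition is entirely up to the implementor.
class CheckedBag implements Bag {
    private int count;

    public void addCopies(int n) {
        if (n < 0) throw new IllegalArgumentException("n must be >= 0");
        count += n;
    }

    int size() { return count; }
}
```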
2. For the resizable array implementation of Bag, what do we have to do when we run out of space in the array? What is bad about that?
• We have to make the array bigger, and copy all the items from the old array into the new one
• It takes time proportional to the number of items in the array.
• Another possible downside is that if you don’t make the array smaller again, you could end up with a big empty array and waste space.
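A sketch of the grow-and-copy step (the starting capacity and the doubling policy here are illustrative choices, not the only options):

```java
class ResizableIntBag {
    private int[] items = new int[4]; // arbitrary starting capacity
    private int size;

    void add(int x) {
        if (size == items.length)
            grow();
        items[size++] = x;
    }

    // The O(n) step: allocate a bigger array and copy every item over.
    private void grow() {
        int[] bigger = new int[items.length * 2]; // doubling keeps this rare
        for (int i = 0; i < size; i++)
            bigger[i] = items[i];
        items = bigger;
    }

    int size() { return size; }
    int capacity() { return items.length; }
}
```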
3. When designing an ADT, you get to choose what methods it will support. You could put a lot of methods or just a few. What’s the benefit of having a lot of methods? What’s a downside? (This is pretty open-ended.)
• Having lots of methods makes your interface more pleasant to use
• But having lots of methods also makes it harder for implementors
• Now they have a lot more work to do.
• Many people answered this in a more general sense, i.e. “why having lots of methods in your program is good/bad.” That’s true, but a bit off topic.

## Lecture 5

1. In an array, the items are in order and are indexed by a number. In a linked list, the items are still in order, but you cannot index them by number. How do you get to any item in a linked list?
• You must start at the head (the beginning) and follow the “next” links until you get to the item you want. (This is O(n).)
• Some people just described a linked list for this. Be sure you’re actually answering the question.
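For instance, a `get(index)` on a bare-bones linked list has to walk the links one at a time (class and field names here are illustrative):

```java
class IntList {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    Node head;

    // No random access: to reach index i, we must start at the head
    // and follow i "next" links. This is O(n).
    int get(int index) {
        Node cur = head;
        for (int i = 0; i < index; i++)
            cur = cur.next;
        return cur.value;
    }
}
```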
2. Let’s say you calculate that your algorithm’s runtime is “3n^2 + 9n + 24”. What is the big-O runtime of your algorithm? What two rules did you follow to get that answer?
• O(n^2).
• We ignore lower-order terms, so the 9n and 24 drop out.
• We ignore multiplicative constants, so 3n^2 becomes n^2.
3. Give the overall big-O runtime for this small procedure, and explain how you got that answer:
```java
void doStuff(int[] array) {
    for(int i = 0; i < array.length; i++) {
        System.out.println(array[i]);
    }

    if(array.length > 1) {
        array[0]++;
        array[1]++;
    }

    int count = 0;

    for(int i = 0; i < array.length; i++) {
        for(int j = 0; j < i; j++) {
            if(i != j && array[i] == array[j]) {
                count++;
            }
        }
    }
}
```

• The first for loop runs n times, and the code inside is O(1), so the for loop is O(n).
• The if statement runs once, and the code inside is O(1), so the if is O(1).
• The second for loop runs n times;
• Inside is a for loop which runs O(n) times;
• and inside that the code is O(1), so the inner for loop is O(n).
• Since the code inside is O(n), the second for loop is O(n^2).
• So the overall big-O runtime is O(n^2 + n + 1) = O(n^2).
• Not all nested loops are O(n^2) - keep this in mind. You must look at each loop and the code inside to be sure.
• Remember that sequential steps add, not multiply.

## Lecture 7

1. When you push items onto a stack, the items come out in reverse order.
2. A recursive algorithm has three characteristics: the base case says when to stop; the recursive case splits the problem into a smaller problem; and termination guarantees that the algorithm ends (doesn’t recurse infinitely).
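A tiny example showing all three characteristics (an illustrative function, not one from the lecture):

```java
class RecursionDemo {
    // Sums 1 + 2 + ... + n recursively.
    static int sumTo(int n) {
        if (n <= 0)              // base case: says when to stop
            return 0;
        return n + sumTo(n - 1); // recursive case: a strictly smaller problem
        // termination: n shrinks by 1 each call, so it must reach the base case
    }
}
```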
3. When parsing expressions, what determines the order in which we evaluate?
• ✗ Go left to right
• that’s silly!
• ✅ Operator precedence
• ✗ Parentheses first
• true, but only partly true.
• that’s why I said choose the BEST answer.
• ✗ Stack order
• this is meaningless.
4. Briefly describe the process of doing an amortized time analysis.
• You look at a sequence of operations.
• You figure out how long each operation takes.
• Then you sum them all up, and divide by the number of operations (n).
• not quite “the average of best and worst cases” - it’s more about algorithms whose per-operation runtimes fluctuate for various values of n.
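As a concrete (hypothetical) example, we can simulate n appends into a doubling array, sum up the total cost, and divide by n:

```java
class AmortizedDemo {
    // Simulates n appends into a doubling array and returns the total cost:
    // 1 step per append, plus 1 step per item copied whenever the array grows.
    static long totalCost(int n) {
        int capacity = 1, size = 0;
        long cost = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                cost += size;      // copy every existing item into a bigger array
                capacity *= 2;
            }
            cost += 1;             // the append itself
            size++;
        }
        return cost;
    }
}
```

For 1024 appends the total is 2047 steps, so the amortized cost per append is about 2, i.e. O(1) - even though a few individual appends cost O(n).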

## Lecture 9

1. For some collection of objects to be sortable in Java, they must implement the Comparable interface.
2. Selection sort works by finding the minimum value of the unsorted part, and putting it at the end of the sorted part.
3. Bubble sort works by finding inversions and swapping them.
4. Selection sort uses ____ steps and ____ comparisons.
• ✗ O(n), O(n^2)
• ✗ O(n), O(n)
• ✗ O(n^2), O(n)
• ✅ O(n^2), O(n^2)
5. Insertion sort and selection sort both have a “sorted” part and an “unsorted” part. If I wanted to find the 10 smallest values in an array, I could try doing 10 steps of each and then stopping and looking at the sorted part. Which algorithm will give me the 10 smallest values if I stop early? Why doesn’t the other one do that?
• selection sort.
• this is because on each step, selection sort finds the smallest value in the unsorted part.
• so after 10 iterations, it will have found the 10 smallest values.
• insertion sort does not work like this: it only looks at the first value in the unsorted part.
• so after 10 iterations, it will have sorted the first 10 items in the array.
• I never said anything about bubble sort… ;)
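A sketch of “stopping selection sort early” (a hypothetical helper; it copies the array instead of sorting in place, just to keep the test simple):

```java
class PartialSelection {
    // Runs only k passes of selection sort. Afterwards the first k slots
    // hold the k smallest values, in sorted order.
    static int[] smallestK(int[] a, int k) {
        int[] out = a.clone();
        for (int i = 0; i < k && i < out.length; i++) {
            // find the smallest value in the unsorted part...
            int min = i;
            for (int j = i + 1; j < out.length; j++)
                if (out[j] < out[min])
                    min = j;
            // ...and put it at the end of the sorted part
            int tmp = out[i]; out[i] = out[min]; out[min] = tmp;
        }
        return out;
    }
}
```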

## Lecture 10

1. When considering a recursive function, it’s helpful to draw a recursion tree and to analyze it by levels.
2. A stable sorting algorithm will not reorder elements that compare as equal.
3. In the worst case, mergesort takes O(n log n) time, but quicksort takes O(n^2).
4. Radix sort can be O(n) in the worst case, but it can only do this because it is no longer general - it can’t be used on any data, just certain kinds.
5. How we pick the pivot in quicksort is very important. Explain what can go wrong if we pick bad pivots, and describe one method of picking a good one.
• The worst possible pivot would be the biggest or smallest item in the array.
• If you picked that worst pivot every time, we’d end up with O(n) levels of recursion, and an O(n^2) overall runtime.
• Median-of-three is a good method: pick 3 values from the array and use the median.
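One possible sketch of median-of-three, sampling the first, middle, and last elements (which three positions to sample is a design choice; this is a common one):

```java
class PivotDemo {
    // Looks at the first, middle, and last values of a[lo..hi]
    // and returns whichever one is the median of the three.
    static int medianOfThree(int[] a, int lo, int hi) {
        int mid = (lo + hi) / 2;
        int x = a[lo], y = a[mid], z = a[hi];
        if ((x <= y && y <= z) || (z <= y && y <= x)) return y;
        if ((y <= x && x <= z) || (z <= x && x <= y)) return x;
        return z;
    }
}
```

On an already-sorted array this picks the middle value, which avoids the worst-case pivot that plain “pick the first element” would choose.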

## Lecture 12

1. In many cases, we want to access all the items of a collection, in some kind of order. This is called sequential access and Java uses iterators to do it.
2. A for-each (enhanced for) loop is how we can use one of those objects from the previous question.
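For example, implementing `Iterable` is what lets a class be used in that loop (a hypothetical `Range` class):

```java
import java.util.Iterator;

// Implementing Iterable<Integer> is what makes "for (int x : new Range(5))" legal.
class Range implements Iterable<Integer> {
    private final int limit;
    Range(int limit) { this.limit = limit; }

    // The iterator tracks the traversal state: where we are, and whether more remain.
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int next = 0;
            public boolean hasNext() { return next < limit; }
            public Integer next() { return next++; }
        };
    }
}
```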
3. The best-case height of a binary tree is ___, and the worst-case is ___.
• ✗ n, n
• ✗ n/2, n
• ✅ log(n), n
• worst case is basically a linked list (a “stick”)
• ✗ log(n), log(n)
4. Write a small pseudocode recursive function to find the minimum value in a binary tree. Don’t worry about writing types on the variables and don’t worry about null. Use the formatting bar to change the font to monospace.

```
int minimum(t) {
    // t's value is the current "winner"
    ret = t.value;

    // get the minimum of each subtree that exists,
    // and see if it "wins" (leaves stop the recursion here)
    if (t has a left child) {
        l = minimum(t.left);
        if (l < ret)
            ret = l;
    }

    if (t has a right child) {
        r = minimum(t.right);
        if (r < ret)
            ret = r;
    }

    return ret;
}
```