## Announcements

• There was a typo in the Driver.java I gave - new MovieShelf_Solution
• Should be new MovieShelf
• hate it when I miss that stuff
• You can reduce the exceptions thrown by a method when you implement/override it
• So you do not need to have throws SetFullException on your add method
• Also: if you haven’t seen section D.10 of the textbook (or don’t have the textbook):
• DON’T WRITE ALL YOUR METHODS BEFORE COMPILING
• You cannot! Write a program! Like this!
• Since you are implementing an interface, you are forced to implement all the methods, but…
• You can “fake out” the compiler by just putting empty methods
• Or putting a dummy return value in the methods that expect it
• Like return false; in the methods that return boolean
• Then you can test your code BEFORE writing it all.
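The "fake out the compiler" trick might look like this. (ShelfInterface and its methods here are made up for illustration; substitute whatever interface your assignment uses.)

```java
// Hypothetical interface, just for illustration.
interface ShelfInterface {
    boolean add(String title);
    void clear();
}

// Stub implementation: every method compiles, but none is written yet.
public class MovieShelfStub implements ShelfInterface {
    public boolean add(String title) {
        return false; // dummy return value so the compiler is happy
    }

    public void clear() {
        // empty body for now
    }

    public static void main(String[] args) {
        ShelfInterface shelf = new MovieShelfStub();
        System.out.println(shelf.add("Jaws")); // prints false until add is really written
    }
}
```

Now you can compile, run your tests, and fill in the real method bodies one at a time.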

## Stacks

• You are used to stacks of things in everyday life
• A stack of dishes in the cabinet (or the counter…)
• A stack of books on your desk
• A stack of Pringles in a can
• Let’s think about the properties of a stack in data structure-y terms
• How can I add things to the stack?
• By putting things on top.
• How can I remove things from the stack?
• By taking things off the top.
• Let’s restrict ourselves to this; you CAN take things out of the middle but it’s harder ok?
• Can I have duplicates?
• Sure, why not!
• Does the order that I add/remove things matter?
• Yes!
• Unlike Bag/Set, Stacks are ordered
• Can I see what is in the stack?
• Thiiiiiis is a little less clear
• For our purposes, we will say that we can only see what’s on top of the stack.
• So, a Pringles can. (Tall, narrow, and hard to see inside.)

• A stack holds:
• Any number of items, including duplicates
• In a particular order
• And at any time, only one item is visible on the “top” of the stack.
• You can do the following with a stack:
• Push: add something to the top of the stack.
• Pop: remove something from the top of the stack.
• Peek: look at the top of the stack, but don’t change the stack.
• The order of the items in the stack is determined by the order they were pushed.
• When you push an item, it “covers up” the previous item.
• When you pop, then, the items will come out in reverse order.
• We say it’s LIFO - “Last In, First Out”
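The LIFO behavior is easy to see with Java's built-in Deque used as a stack (a quick demo, not our own implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LifoDemo {
    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        stack.push("third"); // "third" now covers up "second"

        System.out.println(stack.peek()); // third (peek doesn't change the stack)
        System.out.println(stack.pop());  // third
        System.out.println(stack.pop());  // second
        System.out.println(stack.pop());  // first -- reverse of push order
    }
}
```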

## Turning that into a Java interface

• What methods do we need for a StackInterface<E>?
• void push(E) - no need to return anything.
• E pop() - pops and returns the top value.
• E peek() - returns the top value.
• What happens if you pop or peek an empty stack?
• This is called a stack underflow.
• We could return null or something
• But the tradition is that this is an error condition.
• In Java, we can throw an exception.
• Then there are methods that would be nice to have.
• boolean isEmpty() - as usual.
• void clear() - to remove all items, as usual.
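Putting those bullets together, the interface might be written like this (a sketch; your course's version may name things differently):

```java
// One possible StackInterface, collecting the methods discussed above.
public interface StackInterface<E> {
    void push(E newItem); // add newItem to the top of the stack
    E pop();              // remove and return the top item; error on an empty stack
    E peek();             // return the top item without removing it; error on empty
    boolean isEmpty();    // true if the stack has no items
    void clear();         // remove all items
}
```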

## Okay, what are stacks good for?

• A stack you probably interact with on a daily basis: your browser history.
• When you click a link, the previous page is pushed onto the history stack.
• When you hit the back button, the history stack is popped, and you go to that page.
• If you right click your back button, you can even see all the stack items.
• Another everyday stack: undo. Every time you type something or do something, the program pushes that action onto a stack
• You mess up, and hit ctrl+Z (or cmd+Z)
• That pops the most recent action from the stack and un-does it
• Every program you write uses a stack, secretly.
• It’s called the call stack
• It’s used to keep track of the functions that are “in progress” and your local variables.
• And one more very common/useful technical use…
• Parsing.

## Parsing

• Parsing is picking apart a string into some kind of structure.
• Your brain does this with language all the time.
• A very common parsing task in computing is parsing matched brackets.
• So like, (parentheses), [brackets], {braces}, and even <“angle brackets”>
• Or in HTML, <a> is an opening tag, and its closing tag is </a>. Same idea.
• We want to ensure that the brackets are matched (or “nested”) properly.
• So this is a correct nesting: (array[i] + 4) * 3
• But this is an incorrect nesting: (array[i + 4) * 3]
• And so is this: (something is missing
• A stack is perfect for this task.
• We scan the string from left to right.
• Whenever we see an opening bracket (like ([{<), we push it.
• Whenever we see a closing bracket (like )]}>)…
• We have to check if it’s the correct closing bracket.
• So we peek at the top bracket, and if it’s the wrong one, we stop and say “nope.”
• If it’s the right one, we pop it.
• What about if the stack is empty?
• That’s also a bad string. E.g. "this is bad)"
• When we get to the end of the string…
• Should we have anything left on the stack?
• No. Because if we did, it means we had an opening bracket without a matching closing bracket.
• So we only say the string is valid if the stack is empty at the end.
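The whole algorithm above fits in one method. This sketch uses Java's ArrayDeque as the stack; the method and class names are my own:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BracketChecker {
    // Returns true if every bracket in s is matched and properly nested.
    public static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '(': case '[': case '{': case '<':
                    stack.push(c); // opening bracket: push it
                    break;
                case ')': case ']': case '}': case '>':
                    // closing bracket: the top must be the matching opener
                    if (stack.isEmpty() || stack.pop() != opener(c)) {
                        return false; // underflow or wrong bracket: "nope"
                    }
                    break;
                default:
                    break; // ignore non-bracket characters
            }
        }
        return stack.isEmpty(); // leftover openers mean an unclosed bracket
    }

    private static char opener(char closer) {
        switch (closer) {
            case ')': return '(';
            case ']': return '[';
            case '}': return '{';
            default:  return '<';
        }
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("(array[i] + 4) * 3"));    // true
        System.out.println(isBalanced("(array[i + 4) * 3]"));    // false
        System.out.println(isBalanced("(something is missing")); // false
        System.out.println(isBalanced("this is bad)"));          // false
    }
}
```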

## Postfix expression notation (“reverse Polish notation” (RPN))

• If I write this expression: 4 + 6 ÷ 2
• What order do I do the operations in?
• What if I wanted to do the addition first?
• This is why we have parentheses: (4 + 6) ÷ 2
• The mathematician Jan Łukasiewicz invented prefix expression notation in 1924
• A way of unambiguously writing expressions without brackets/parentheses
• But we usually use postfix notation
• The above expressions would become, respectively:
• 6 2 ÷ 4 +
• 4 6 + 2 ÷
• This has interesting parallels to Japanese grammar (and other verb-final postpositional languages… Korean??)
• How to evaluate an RPN expression:
• When you see a value, push it.
• When you see an operator:
• pop the top two values
• perform the operation
• the top value becomes the second operand
• this is because the values come off in reverse order - LIFO!
• push the result.
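The evaluation steps above translate almost directly into code. A sketch, assuming the expression is space-separated and well-formed (no error handling), with / standing in for ÷:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RpnEvaluator {
    // Evaluates a space-separated postfix expression like "4 6 + 2 /".
    public static double evaluate(String expression) {
        Deque<Double> stack = new ArrayDeque<>();
        for (String token : expression.trim().split("\\s+")) {
            switch (token) {
                case "+": case "-": case "*": case "/":
                    double right = stack.pop(); // top value is the SECOND operand (LIFO!)
                    double left = stack.pop();
                    stack.push(apply(token, left, right)); // push the result
                    break;
                default:
                    stack.push(Double.parseDouble(token)); // a value: push it
            }
        }
        return stack.pop(); // the final result is the only thing left
    }

    private static double apply(String op, double a, double b) {
        switch (op) {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            default:  return a / b;
        }
    }

    public static void main(String[] args) {
        System.out.println(evaluate("6 2 / 4 +")); // 7.0  (4 + 6 / 2)
        System.out.println(evaluate("4 6 + 2 /")); // 5.0  ((4 + 6) / 2)
    }
}
```

Note the operand order: because the stack is LIFO, the first pop gives the right-hand operand. For + and * it wouldn't matter, but "6 2 /" must compute 6 / 2, not 2 / 6.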

## Stack implementations and analysis

• Well, implementing a stack shouldn’t be too hard…
• Or at least, very similar to our Bag implementations.
• Array implementation: just like before, let’s keep a size. Everything to the left of size is “on the stack.”
• The thing at the top of the stack is at size - 1.
• Push:
• check if we have enough space and double the array if not
• put the new value at the slot given by size
• increment size
• what is the runtime? does it depend on the number of items on the stack?
• well.. not directly. doubling the array takes time but we’ll talk about that
• Peek:
• check that size > 0
• return the value in the slot at size - 1
• what is the runtime?
• Pop:
• peek and store that value
• decrement size
• return the peeked value
• what is the runtime?
• Clear:
• in order to prevent memory leaks, we need to null out all the items on the stack
• then set size to 0
• what is the runtime?
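The array version, following the steps above. A sketch with names of my choosing; it nulls out popped slots so the garbage collector can reclaim them:

```java
import java.util.Arrays;
import java.util.EmptyStackException;

// Array-based stack: the top of the stack lives at index size - 1.
public class ArrayStack<E> {
    private Object[] items = new Object[4];
    private int size = 0;

    public void push(E newItem) {
        if (size == items.length) {
            items = Arrays.copyOf(items, items.length * 2); // double when out of space
        }
        items[size] = newItem; // put the new value at slot `size`...
        size++;                // ...then increment size
    }

    @SuppressWarnings("unchecked")
    public E peek() {
        if (size == 0) throw new EmptyStackException(); // stack underflow
        return (E) items[size - 1];                     // top is at size - 1
    }

    public E pop() {
        E top = peek();     // peek (also checks for underflow)
        size--;             // decrement size
        items[size] = null; // drop the stale reference (avoid a memory leak)
        return top;
    }

    public void clear() {
        Arrays.fill(items, 0, size, null); // null out items to prevent memory leaks
        size = 0;
    }

    public boolean isEmpty() { return size == 0; }
}
```

Push is O(1) amortized (more on that below), peek and pop are O(1), and clear is O(n) because of the nulling.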
• Linked implementation: arguably this is even more natural-feeling
• The head of the list is the top.
• It’s null if the stack is empty.
• Push:
• Make a new node that points to head, and make it the head.
• what is the runtime?
• Peek:
• Check that head != null
• Return the value stored in head
• what is the runtime?
• Pop:
• peek and store that value
• set head to head.next
• return the peeked value
• what is the runtime?
• Clear:
• this is even easier
• head = null
• the rest of the links will “float away” - no need to null them
• what is the runtime?
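And the linked version, again as a sketch. Every operation, including clear, is O(1):

```java
import java.util.EmptyStackException;

// Linked stack: the head of the list is the top of the stack.
public class LinkedStack<E> {
    private static class Node<E> {
        E data;
        Node<E> next;
        Node(E data, Node<E> next) { this.data = data; this.next = next; }
    }

    private Node<E> head = null; // null means the stack is empty

    public void push(E newItem) {
        head = new Node<>(newItem, head); // new node points to the old head
    }

    public E peek() {
        if (head == null) throw new EmptyStackException(); // underflow
        return head.data;
    }

    public E pop() {
        E top = peek();   // peek (also checks for underflow)
        head = head.next; // unlink the top node
        return top;
    }

    public void clear() {
        head = null; // the rest of the nodes "float away" for the GC
    }

    public boolean isEmpty() { return head == null; }
}
```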

## Amortized analysis

• “Amortized” means “when you spread it all out”
• Kind of like an average
• When you add something to our resizable-array stack, how long does it take?
• It could be constant time (put new item at end of bag, increase size)
• It could be linear time (must duplicate array, which is O(n) itself)
• So is this linear or constant time??
• It’s…. both?
• Well, what you have to do is sort of average out all the possibilities.
• You look at the behavior over a sequence of operations, then divide by the number of operations.
• If we did it in a dumb way and increased the array size by 1 every time…
• Then every add would be O(n).
• But since we double the size of the array each time we run out of space…
• Let’s make a table. If we start the capacity at 1…

| Step number | Capacity | Time to add item to array | Time to resize |
|-------------|----------|---------------------------|----------------|
| 1           | 1        | 1                         |                |
| 2           | 2        | 1                         | 1              |
| 3           | 4        | 1                         | 2              |
| 4           | 4        | 1                         |                |
| 5           | 8        | 1                         | 4              |
| 6           | 8        | 1                         |                |
| 7           | 8        | 1                         |                |
| 8           | 8        | 1                         |                |
| 9           | 16       | 1                         | 8              |
| 10          | 16       | 1                         |                |
| 11          | 16       | 1                         |                |
| 12          | 16       | 1                         |                |
| 13          | 16       | 1                         |                |
| 14          | 16       | 1                         |                |
| 15          | 16       | 1                         |                |
• The time to add an item to the array is constant.
• The time to resize is a little trickier…
• If you add n items, the biggest resize is going to take n steps.
• But (and this is where I got it wrong last time) the previous resizes will take time that is fractions of n.
• The second-largest resize will take n/2 steps.
• The third-largest, n/4… then n/8, n/16, n/32…
• …all the way down to 4, 2, 1.
• If we factor n out of this series, we get n(1 + 1/2 + 1/4 + 1/8 + 1/16 + ...)
• This series converges to 2, so for large n it’s 2n: linear.
• Another intuition for this: if we draw the time taken as “stacks of blocks” (or cups)…
• And then we knock the stacks over (maybe Harley can help with this)
• Now we have a layer of blocks only 1 tall across the entire range of n.
• So for n additions, it takes n steps to add an item, and 2n steps to resize.
• divide by n, and we get 3.
• and O(3) = O(1). constant time!
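You can check this claim empirically with a quick counting simulation. This assumes one "step" per item written, whether during a push or a resize copy, and uses n one past a power of two (the worst case, where the last doubling just happened):

```java
// Count total steps for n pushes into a doubling array, starting at capacity 1.
public class AmortizedCount {
    public static void main(String[] args) {
        int capacity = 1;
        long steps = 0;
        int n = (1 << 20) + 1; // one past a power of two: worst case for the ratio

        for (int size = 0; size < n; size++) {
            if (size == capacity) {
                steps += capacity; // copy `capacity` items into the doubled array
                capacity *= 2;
            }
            steps += 1; // write the new item
        }

        // Total steps stay under 3n, so each push is O(1) amortized.
        System.out.println((double) steps / n); // just under 3
    }
}
```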