Please read this page about taking my exams!
Exam format
- When/where
- During class, here, like normal
- ~70 minutes
- it is not going to be “too long to finish”
- no calculator
- Closed-note
- You may not have any notes, cheat sheets etc. to take the exam
- The open-note thing was just for when we were remote
- Length
- 3 sheets of paper, double-sided
- there are A Number of Questions, and I can't tell you how many, because it wouldn't be useful to know: the questions are all different kinds and sizes.
- But I will say that I tend to give many, small questions instead of a few huge ones.
- Topic point distribution
- More credit for earlier topics, less credit for more recent ones
- More credit for things I expect you to know because of your experience (labs, exercises)
- Kinds of questions
- Few or no multiple choice
- A few “pick n” (but not many)
- Some fill in the blanks
- mostly for vocabulary
- or things that I want you to be able to recognize, even if you don’t know the details
- Application questions about numbers and arithmetic (i.e. math problems, basically)
- Base conversion
- Interpreting patterns of bits in different ways (signed, unsigned, etc)
- Unsigned and signed (2’s complement) addition
- Several short answer questions
- again, read that page above about answering short answer questions!!
- No writing code from scratch, but:
- tracing (reading code and saying what it does)
- debugging (spot the mistake)
- interpreting asm as HLL code (identifying common asm patterns)
- fill in the blanks (e.g. picking right registers, right branch instructions)
- identifying loads and stores in HLL code
Topics
- Information and computation
- Information is, essentially, answers to questions
- As humans the way we learn information is through our senses
- Primarily sight and hearing, but also touch, taste, and smell
- The kinds of information we encounter on computers include speech, music, other sounds, images, videos, and text
- All of these kinds of information can be broken down into smaller pieces, ultimately just numbers
- Text is an array of characters, and each character can be represented as a number
- Sound is an array of air pressure measurements, and each can be represented as a number
- Images are 2-dimensional arrays of colors, and each color can be broken down into three intensities of red, green, and blue light (which correspond to the cone cells in our eyes), and those intensities are numbers
- Videos are a sequence of images paired with sound
- The simplest kind of information is the answer to the simplest kind of question: a yes/no question
- yes/no
- true/false
- 1/0
- it doesn’t matter what pair of symbols you use, it’s the same idea
- One of those “yes/no” answers is called a bit
- it’s the smallest “bit” of information possible
- this comes up in information theory in physics as well
- You can really answer any question by answering several yes/no questions
- e.g. “20 questions” game
- or repeatedly splitting a large space in half to narrow in on a very small region
- or doing a binary search on an ordered array
- e.g. “I’m thinking of a letter”, and you ask “is it in the first half of the alphabet?”
- With each additional bit, you double the number of possibilities
- with 1 bit, you have 2 possibilities (`1` and `0`)
- with 2 bits, you have 4 possibilities (`11`, `10`, `01`, and `00`)
- with 3 bits, you have 8 possibilities (`111`, `110`, `101`, etc.)
- …and so on
- therefore, with n bits, you can represent \(2^n\) possibilities
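The "splitting in half" idea can be sketched in a few lines of Python (the name `guess_letter` and the setup are made up for illustration): each "is it in the first half?" question yields one bit, and since \(26 \le 2^5 = 32\), 5 questions always suffice for the alphabet.

```python
def guess_letter(secret, letters="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    # repeatedly split the remaining range in half; each yes/no answer is 1 bit
    lo, hi = 0, len(letters)
    questions = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1
        if letters.index(secret) < mid:  # "is it in the first half?"
            hi = mid
        else:
            lo = mid
    return letters[lo], questions
```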
- Computation is a set of rules that turns one piece of information into another
- The simplest nontrivial computation on the simplest piece of information is:
- if the input is 0, output a 1
- if the input is 1, output a 0
- That operation is called NOT
- it is one of the basic Boolean logic operations
- Boolean functions
- A function takes one or more inputs, and applies some computation to them to produce a single output
- We learn about them in math class like \(f(x) = x^2 + 3x - 9\)
- This function operates over the set reals \(\mathbb{R}\) - its input is a single real and its output is a single real
- We notate this as \(f: \mathbb{R} \to \mathbb{R}\) (read “f maps from real to real”)
- In Java syntax (if we had a `real` type) it would be `real f(real x)` - takes one `real` argument and returns a `real`
- A Boolean function operates over the set of Booleans \(\mathbb{B}\)
- \(\mathbb{B}=\{T,F\}\) or \(\mathbb{B}=\{1,0\}\) - again, the symbols don’t matter
- The NOT function is notated \(NOT: \mathbb{B} \to \mathbb{B}\)
- Since the set of Booleans is finite (and very small), we can also represent a Boolean function with a truth table
- A truth table shows for every possible combination of inputs, what the output will be
- For example, the NOT function’s truth table looks like:
| A | Y |
| --- | --- |
| 0 | 1 |
| 1 | 0 |

- the other basic operations are AND and OR, which are both \((\mathbb{B} \times \mathbb{B}) \to \mathbb{B}\) (“map 2 Booleans to 1 Boolean”)
- AND (only outputs 1 when both inputs are 1, otherwise outputs 0)

| A | B | Y |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

- OR (outputs 1 when either or both inputs are 1; only outputs 0 when both inputs are 0)

| A | B | Y |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

- finally there is XOR, “exclusive OR”, which is sometimes considered a basic operation.
- it outputs a 1 when its inputs differ, so it’s “one or the other, but not both”:

| A | B | Y |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
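You can double-check these tables in code. A quick Python sketch using the bitwise operators `&`, `|`, and `^` on 0/1 values (the helper name `truth_table` is made up for illustration):

```python
AND = lambda a, b: a & b   # 1 only when both inputs are 1
OR  = lambda a, b: a | b   # 0 only when both inputs are 0
XOR = lambda a, b: a ^ b   # 1 when the inputs differ

def truth_table(f):
    # list every (A, B, Y) row, in the same order as the tables above
    return [(a, b, f(a, b)) for a in (0, 1) for b in (0, 1)]
```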
- Boolean function notations
- There are four(-ish) ways of notating Boolean functions:
- English: `(A AND B) OR NOT C`
- Logic: \((\text{A} \wedge \text{B}) \vee \neg \text{C}\)
- Engineering: \(\text{AB} + \overline{\text{C}}\)
- Programming: `(A && B) || !C`
- We will be using the engineering standard:
- AND is written as adjacent variables, like a multiplication: \(\text{AB}\)
- OR is written as an addition: \(\text{A} + \text{B}\)
- NOT is written as a bar or line over a variable: \(\overline{\text{A}}\)
- annoyingly, \(\overline{\text{AB}} \neq \bar{\text{A}}\bar{\text{B}}\)
- see the little gap between the lines in the second one?
- the first one is `NOT(A AND B)` and the second is `(NOT A) AND (NOT B)`
- for that reason, I recommend using parentheses when NOTing a term: \(\overline{(\text{AB})}\)
- annoyingly, \(\overline{\text{AB}} \neq \bar{\text{A}}\bar{\text{B}}\)
- Also XOR, if you need it, is written as a circled plus: \(\text{A} \oplus \text{B}\)
- The order of operations is NAO: NOT first, then AND, then OR
- I honestly don’t know where XOR fits in here, maybe it’s the same level as OR?
- I won’t put anything ambiguous like that on the exam
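As it happens, Python's `not`/`and`/`or` follow the same NOT-then-AND-then-OR precedence, so you can sanity-check how an unparenthesized expression groups (just a sketch for checking your reading, not exam notation):

```python
def unparenthesized(A, B, C):
    # groups as (A and B) or (not C): NOT binds tightest, then AND, then OR
    return A and B or not C

def fully_parenthesized(A, B, C):
    return (A and B) or (not C)
```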
- Gates
- a gate is a circuit component which implements one of the basic Boolean logic functions
- each gate is composed of multiple transistors, tiny electrically-controlled switches
- Really, gates are just another set of symbols for writing Boolean functions
- Positional number systems and bases
- in positional number systems, the position of a digit within the number has a meaning
- a number can be thought of as a polynomial where you multiply each digit by its place value
- e.g. 1234 = \(1 \times 10^3 + 2 \times 10^2 + 3 \times 10^1 + 4 \times 10^0\)
- for any base B,
- there are B digit symbols
- the place values are \(B^i\), starting with \(i=0\) on the right and increasing leftward
- for an n-digit integer in base B,
- you can represent \(B^n\) different values
- the largest representable value is \(B^n - 1\)
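The polynomial view turns directly into code. A sketch (the name `to_decimal` is made up for illustration) that evaluates a digit string in any base up to 16, using Horner's method, which gives the same result as summing digit × place value:

```python
def to_decimal(digits, base):
    # Horner's method: same result as summing digit * base**position
    value = 0
    for d in digits:
        value = value * base + int(d, base)  # int() also rejects out-of-range digits
    return value
```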
- Binary
- Binary is base 2, so there are 2 digit symbols and the place values are powers of 2: 1, 2, 4, 8, 16, 32, 64, 128, etc.
- All information on computers is represented and computed in binary
- many mathematical algorithms are simpler in binary than in other bases
- it’s also easier to make circuits that operate on binary
- and easy to make those binary circuits run fast
- (there were base-10 (decimal) computers, but they lost out to binary ones)
- Bits, bytes, nybbles, words
- one binary digit is a bit
- 8 bits is 1 byte (8b = 1B)
- 4 bits is 1 nybble - which corresponds to a single hex digit
- so 1 Byte is made of 2 nybbles, or 2 hex digits
- a word is the size of integer that a CPU was designed to work with, the “most comfortable” size
- e.g. a 32-bit CPU has 32-bit words - it can operate on 32-digit binary numbers
- in order of size, `bit < nybble < byte ≤ word`
- Conversion from binary to decimal
- polynomial representation offers simple algorithm: add up place values of 1 bits
- this is also true when we use signed numbers!!
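As a sketch (the name `bin_to_dec` is made up for illustration), "add up the place values of the 1 bits" looks like this; the `signed` flag implements the 2's-complement twist covered below, where the MSB's place value is negative:

```python
def bin_to_dec(bits, signed=False):
    # add the place value 2**i for every 1 bit, with the LSB at position 0
    value = sum(2**i for i, b in enumerate(reversed(bits)) if b == "1")
    if signed and bits[0] == "1":
        value -= 2 ** len(bits)  # net effect: the MSB's place value is negative
    return value
```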
- Conversion from decimal to binary
- method 1: “long division” method:
- You have to know the binary place values
- From MSB to LSB (left to right):
- If the place value fits into the remainder, put a `1` and subtract it off the remainder
- Otherwise put a `0`
- method 2: repeatedly divide by 2 until you get a quotient of 0, writing every remainder even if it’s a 0.
- the binary representation is the remainders read from bottom to top (the last remainder is the MSB).
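Method 2 is easy to express in code. A sketch (the name `dec_to_bin` is made up for illustration) that divides by 2 until the quotient is 0, then reads the remainders in reverse (last remainder first):

```python
def dec_to_bin(n):
    # repeatedly divide by 2; each remainder is one bit, LSB first
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))  # the last remainder is the MSB
```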
- Hexadecimal
- base 16 - 16 digit symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F)
- we use it as an auxiliary base
- it lets us represent binary numbers in a compact form
- it also has a nice relationship with binary, which exposes powers of 2 and their multiples in nice ways
- hex numbers are often written prefixed with `0x`, which came from C but now appears in most programming languages
- e.g. `0x1C34` is the hex number `1C34`
- it is not a multiplication, and the `0` is not one of the digits
- Conversion between hex and binary
- 4 bits = 1 hex digit (nybble)
- The table is simple - count up in binary from `0000` to `1111`, and count up in hex from `0` to `F` next to it.
- When going from binary to hexadecimal, group the bits into 4 starting from the right (LSB)
- add 0s to the left side as needed to make a group of 4 bits
- then each group of 4 bits is 1 hex digit
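The grouping procedure as a sketch (the name `bin_to_hex` is made up for illustration): pad on the left to a multiple of 4 bits, then map each group of 4 to one hex digit.

```python
def bin_to_hex(bits):
    # pad the left side so the length is a multiple of 4
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    # each 4-bit group (nybble) becomes one hex digit
    return "".join(format(int(bits[i:i + 4], 2), "X") for i in range(0, len(bits), 4))
```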
- Unsigned integers
- There are no negatives. It’s in the name: unsigned = NO SIGN.
- To convert to decimal, add up the place values for each 1 bit.
- You, the programmer, decide when an integer is unsigned.
- in languages that have unsigned ints, you choose them by choosing the type of your variable
- e.g. `unsigned int x;` in C
- Range of an n-bit unsigned number is \([0, 2^n-1]\)
- Sign-magnitude
- Is how we write numbers on paper. +123 and -123: same digits, different sign.
- It is NOT used for integers; it IS used for floating-point numbers
- The MSB is the sign, 0 for positive, 1 for negative, and is totally separate from the rest of the bits
- Downsides: two “versions” of 0 (+0 and -0); arithmetic is more complicated (special cases for addition and subtraction, just like we learn in school)
- To negate: just flip the sign bit. The rest of the digits are unchanged.
- 2’s complement integers
- The one and only system used to represent signed integers on computers today
- It works by making the MSB the negative version of its place value
- The MSB also represents the sign - 0 for positive, 1 for negative
- so the MSB is kind of pulling “double duty” in this system
- Range of an n-bit signed number is \([-2^{n-1}, +2^{n-1}-1]\)
- Remember that \(2^{n-1}\) is just “the next lower power”
- This representation is great because it makes arithmetic super simple, no special cases
- You can add any two numbers of any signs and it will Just Work (unless there’s overflow (“going off the ends of the number line segment”))
- Downside: there is one more negative number than positives, and it is A Bit Weird (it has no positive counterpart, so if you negate it, you get the same value back out).
- there is also a range tradeoff - since you are using up one bit to represent the sign, you can only represent half as many positive numbers as an unsigned number with the same number of bits
- To convert to decimal, you still add the place values for each 1 bit. e.g. `1001` is -8 + 1 = -7, NOT -1!!!!!!!!
- To negate: `-x == ~x + 1`, or, “NOT all the bits, then add 1.”
- The negative of a number is also called its “2’s complement.”
- Addition and subtraction
- Binary addition works just like in base 10, but you carry at 2 instead of 10.
- The same addition algorithm is used for both unsigned and signed integers.
- Remember, when adding 2’s complement numbers, nothing special happens. You just add the bits and you will get the correct value/sign at the end.
- Subtraction is defined in terms of addition: `x - y == x + (-y)` …and because of how 2’s complement works… `x + (-y) == x + ~y + 1`
- amazingly, this works for signed and unsigned subtraction! the 2’s complement of `x` (`~x + 1`) “behaves like” its negative, even in a number system that has no negative numbers. there are two ways around the number circle.
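A sketch of subtraction built from addition (the name `sub` and the 8-bit word size are illustrative): compute `x + ~y + 1`, with masking playing the role of the fixed word size.

```python
def sub(x, y, n=8):
    # x - y == x + ~y + 1, all kept within an n-bit word
    mask = (1 << n) - 1
    return (x + ((~y) & mask) + 1) & mask
```

`sub(3, 5)` gives `254`, which is -2 as an 8-bit signed pattern and 254 unsigned: the same bits, two ways around the number circle.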
- Comparison
- is done by using subtraction
- e.g. if `x < y`, then `x - y < 0`
- any comparison operation can be turned into a subtraction like this. summarized:

| If… | Then you know that… |
| --- | --- |
| `x - y < 0` | `x < y` |
| `x - y == 0` | `x == y` |
| `x - y > 0` | `x > y` |
- comparing the difference to 0 is far easier than arbitrary comparison
- e.g. asking if something is “less than 0” is the same as asking if it’s negative, and in signed numbers, the MSB tells you if it’s negative, so you only need to look at 1 bit
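A hedged sketch of that "just look at the MSB" trick (names illustrative; this simple version is only valid when `x - y` doesn't overflow the signed range - real hardware also consults its overflow flag):

```python
def less_than_signed(x, y, n=8):
    # subtract, then check the sign bit (MSB) of the difference
    mask = (1 << n) - 1
    diff = (x - y) & mask
    return bool(diff >> (n - 1))
```

e.g. `less_than_signed(250, 3)` is true, since `250` is the 8-bit pattern for -6.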
- Addition in circuitry
- a half adder can add 2 bits and produce a sum and carry
- a full adder can add 3 bits and produce a sum and carry
- it represents 1 “column” of a multi-bit addition
- we can build any circuit by:
- making a truth table that captures the logic we want
- turning that truth table into a boolean function
- turning that boolean function into a circuit
- Ripple carry
- Method of implementing multi-bit addition where the carry-out of each bit becomes the carry-in of the next higher bit (stack them)
- When the inputs change, it will produce invalid results for a while, because the carries must “ripple” from LSB to MSB
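Here is the full adder's Boolean function, plus a ripple-carry chain, as a Python sketch (the names and the LSB-first bit-list convention are my own for illustration):

```python
def full_adder(a, b, cin):
    # one column: sum = A xor B xor Cin; carry-out when at least 2 inputs are 1
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x_bits, y_bits):
    # bits are LSB first; each column's carry-out feeds the next carry-in
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry  # the final carry is the MSB carry-out
```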
- Overflow
- occurs when you attempt to compute a number that cannot be represented in the arithmetic system you’re working in (e.g. n-bit signed or unsigned integers)
- overflow is bad, it means you get the wrong answer and subsequent steps will give nonsense answers
- so it’s important to be able to detect when overflow occurs
- Detecting overflow
- AN OVERFLOW OCCURRED IF:

|  | Addition | Subtraction |
| --- | --- | --- |
| Unsigned | MSB carry out is 1 | MSB carry out is 0 (i.e. there is no carry out) |
| Signed | same sign inputs, different sign output | same as addition, but after negating the second input |

- For signed addition: you get an overflow only if you add two numbers of the same sign and get the opposite sign out (e.g. add two positives, get a negative)
- it’s totally possible to add two numbers of the same sign and not have overflow
- also if the inputs are opposite signs, then overflow is impossible.
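The addition column of that table can be sketched in code (the name `add_with_flags` and the 8-bit default are illustrative): compute the masked sum, then check the MSB carry-out for unsigned overflow and the sign pattern for signed overflow.

```python
def add_with_flags(x, y, n=8):
    mask = (1 << n) - 1
    total = x + y
    result = total & mask
    unsigned_overflow = bool(total >> n)  # MSB carry out is 1
    sign = 1 << (n - 1)
    # same sign inputs, different sign output
    signed_overflow = (x & sign) == (y & sign) and (result & sign) != (x & sign)
    return result, unsigned_overflow, signed_overflow
```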
- Multiplexers
- 2 data inputs, `A` and `B`; 1 control input `S` (for “Select”); 1 data output `Y`
- When S is 0, `Y = A` (and `B` is ignored)
- When S is 1, `Y = B` (and `A` is ignored)
- it’s like a hardware `if-else` where `S` is the condition, `B` is the “then/true” value, and `A` is the “else/false” value
- generalizes to more inputs - a 2-bit `S` lets you choose among 4 values `A, B, C, D`, etc.
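The mux's Boolean function is \(\text{Y} = \text{A}\overline{\text{S}} + \text{BS}\); as a Python sketch on 0/1 values:

```python
def mux(s, a, b):
    # Y = (A AND NOT S) OR (B AND S): S picks B when 1, A when 0
    return (a & (s ^ 1)) | (b & s)
```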
- Propagation delay
- Propagation delay is how long it takes for a signal to pass through some circuit
- Nothing moves infinitely fast in the real world, so there are limits on how quickly we can compute things
- This has some consequences:
- when you change inputs to a circuit, you must wait some time until the outputs become valid
- simpler and (physically) smaller circuits are faster!
- Combinational logic
- AND, OR, NOT, multiplexers, things built from them (e.g. adders) are all combinational
- combinational logic’s output depends only on current inputs
- for each combination of inputs there is exactly one possible output
- combinational logic can compute anything but it cannot remember anything
- that’s why we have…
- Sequential logic
- can remember things unlike combinational logic
- sequential logic’s output can depend on current and past inputs
- any memory (flip flop, register, RAM), or any circuit that contains any memory
- relies on the clock signal to tell the memory components when to update their contents
- Latches, flip-flops, and registers
- a latch is the simplest circuit that can remember 1 bit of information
- (there are actually several kinds of latches, but we only looked at the RS Latch)
- a flip-flop is a latch surrounded by some extra circuitry which:
- makes it more stable and less prone to oscillation
- makes it work with the clock signal
- may also give it a write enable input
- a flip-flop is a 1-bit register
- an n-bit register is n flip-flops
- in Logisim we use splitters to convert between multi-bit “wire bundles” and real-world 1-bit wires
- Clock signal
- signal that alternates between 0 and 1 in a steady rhythm, forever
- the clock never stops ticking
- conceptually quantizes time into discrete “chunks” called clock cycles
- used to tell sequential circuits when to move to the “next step”
- we’re using the rising edge of the clock signal (when it goes from 0 to 1) to synchronize things
- FSMs
- if we combine combinational logic with sequential logic, we can make finite state machines (FSMs)
- e.g. the simplest possible FSM is a flip-flop paired with a NOT - on each tick of the clock, the value in the flip-flop alternates between 0 and 1
- more generally:
- combinational logic computes
- sequential logic remembers outputs of those computations
- and then those remembered values are fed back into the combinational logic to compute the next value
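That simplest FSM (a flip-flop feeding a NOT gate back into itself) can be simulated in a few lines of Python (the class name is made up; `tick` stands in for a rising clock edge):

```python
class ToggleFSM:
    # a 1-bit register whose next-state input is NOT(its own output)
    def __init__(self):
        self.state = 0  # what the flip-flop currently remembers

    def tick(self):
        # on the rising edge, the flip-flop captures the NOT of its output
        self.state = 1 - self.state
        return self.state
```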