For Big O, Theta, Omega check this out:

If you like a song from an album, you may choose to listen to more songs from it. At what point do you decide to stop streaming individual songs and just buy the whole album?

An online algorithm does not know the future, only the past. An offline algorithm, by contrast, knows the whole sequence of requests in advance. Even so, the online algorithm's mistake is only a factor of 2: with the rule "after streaming 10 songs from the album, buy the album", you can only overpay compared to the offline optimum by a factor of 2. This is called being 2-competitive.

We will now show a 2-competitive algorithm for this problem. When the first request arrives, the algorithm has to make a decision without knowing anything about future requests. Cost(A, R) is the cost the online algorithm A incurs on the request sequence R.

An online algorithm A is c-competitive for some c factor if:

$$Cost(A, R) \le c * \; cost(OPT, R) + b,$$

for some constant $b \ge 0$.

The "b" is just an additive shift. Let's pretend it doesn't exist for now.

If c = 2, then the algorithm pays more than OPT by at most a factor of 2. So if OPT pays 50, the algorithm pays at most 100; if OPT pays 100, it pays at most 200.

How do we get such an algorithm?

If b = 0, then it is "strictly" c-competitive.

A single request is a pair (s, l), where s is the song and l is either 0 or 1: l = 1 means she wants to listen to the song, and l = 0 means she doesn't.

If the request is (s, 1), she can stream the song for x, or buy the whole album for 10x. If the request is (s, 0), she does nothing, since she doesn't want to listen to the song.

The following strategy, "Rent 10 times, then buy the album," is strictly 2-competitive. On the 11th request to listen, we're hedging against the future, so instead of streaming again we buy the album.

You cannot be worse than OPT by more than a factor of 2: by the time you buy, you've already paid 10x in streams and you pay another 10x for the album, for a total of 20x, while OPT pays at least 10x (it would just buy the album). And if there are 10 or fewer requests, you pay exactly what OPT pays. Wikipedia is the only good source I can find online about competitive algorithms.
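Here's a small sketch of that argument in Python (my own example, not from the slides — I'm assuming stream cost x = 1 and album cost 10x = 10):

```python
def online_cost(listens, stream_cost=1, album_cost=10):
    """Cost of the 'rent 10 times, then buy the album' strategy."""
    k = album_cost // stream_cost  # break-even point: 10 streams = 1 album
    if listens <= k:
        return listens * stream_cost          # never buy
    return k * stream_cost + album_cost        # 10 streams, then buy

def opt_cost(listens, stream_cost=1, album_cost=10):
    """The offline optimum knows the total number of listens in advance."""
    return min(listens * stream_cost, album_cost)

for n in [0, 5, 10, 11, 100]:
    print(n, online_cost(n), opt_cost(n))  # the ratio never exceeds 2
```

The worst case is exactly 11 listens: online pays 10 + 10 = 20 while OPT pays 10, which is where the factor of 2 comes from.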

In the next couple of slides he talks about what an array is, and then what linked lists are. I have another article on linked lists here:

Just to let you know, I'm skipping the slides on linked lists because of the blog post above. There's nothing in them that isn't in that post.

This is actually interesting. In Java, it's called a HashMap (most other languages call it a dictionary or map). It looks like this:

{"hello": 0, "goodbye": 1}

Where each key has a corresponding value associated with it. This slide says that you can order a dictionary by its keys.
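A quick sketch of what "ordered by its keys" means, in Python (my example, not from the slides):

```python
d = {"hello": 0, "goodbye": 1}

# Iterating over sorted(d) visits the keys in ascending order,
# giving an "ordered dictionary" view of the same data.
for key in sorted(d):
    print(key, d[key])
# goodbye 1
# hello 0
```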

Okay so searching with linked lists is kinda slow and sucky (read the blog post ‼‼‼😉), but we can instead store an ordered dictionary in an array sorted by ascending keys (he says non-decreasing keys, which also allows duplicates; for distinct keys it's the same as ascending....? Thanks for confusing the whole class 😅)

Uhhhhh I would rather die than look at binary search for the 59th time since I've been at uni but if you really want us to know it then sure lmao. He does a couple slides on how binary search works, but tbh you probably already know. If not, just google it. It's super easy.
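For completeness though, here's a minimal binary search sketch in Python (mine, not the slides'):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```

Each iteration halves the search range, which is why it runs in O(log n).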

This is also an introduction to recurrences, a popular topic that'll come up in the exam.
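For example (my summary of the standard result, not the slides'), binary search satisfies the recurrence

$$T(n) = T(n/2) + c,$$

for some constant c, since each step does constant work and recurses on half the array. Unrolling it gives $T(n) = T(n/2^k) + ck$, and after $k = \log_2 n$ halvings we're down to one element, so $T(n) = O(\log n)$.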

This is the depth of a node (its distance from the root), not the height of the tree (the longest root-to-leaf path). I got confused by this the first time around.

Sorry for not writing much, but I think the slides here are actually really good.

AVL trees are one of the tutorials, so this is important!!

This is O(log n) but he writes O(log m).

This is based on divide-and-conquer algorithms, which I have a blog post about here:

Ok so this one pisses me off because it's not explained 'nicely'. Basically, an inversion is a pair of numbers that are out of order, so the number of inversions for a given number is how many smaller numbers appear after it in the list. Let's see a simpler example.

10 2 1

10 has 2 inversions: 2 and 1 are smaller than 10, but they appear after 10. 2 has 1 inversion: 1 is smaller than 2, but it appears later in the list. All sorted lists have 0 inversions. We say the entire list has 3 inversions.

1 2 10

Has 0 inversions and is sorted.

The maximum number of inversions is $\frac{n(n-1)}{2}$ where n is how many numbers there are.
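Since this section is about divide and conquer, here's a sketch of counting inversions with a modified merge sort (my own code, assuming the standard approach rather than whatever is on the slides):

```python
def count_inversions(arr):
    """Return (sorted copy of arr, number of inversions), via merge sort."""
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])

    merged, i, j = [], 0, 0
    inversions = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # Everything left in `left` is bigger than right[j],
            # and comes before it: each remaining element is an inversion.
            inversions += len(left) - i
            merged.append(right[j])
            j += 1
    merged += left[i:] + right[j:]
    return merged, inversions

print(count_inversions([10, 2, 1])[1])  # 3
```

This runs in O(n log n), versus O(n²) for checking every pair directly.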

The best way to study the greedy method is to look at the fractional knapsack problem.

Sorted arrays can be used for this. The running time is O(n log n).

With fractional knapsack, calculate the ratio value/weight for each item and sort the items by this ratio. Then take items in decreasing order of ratio, adding each whole item until the next one no longer fits, and finally add as much of that item as the remaining capacity allows. This greedy choice always gives the optimal solution to this problem.
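Those steps can be sketched in Python like this (my example values, not from the slides):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Greedy by value/weight ratio."""
    total = 0.0
    # Sort by value-per-weight, best ratio first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or a fraction of it
        total += value * (take / weight)
        capacity -= take
    return total

# Ratios: 60/10 = 6, 100/20 = 5, 120/30 = 4. With capacity 50 we take the
# first two whole items and 20/30 of the third: 60 + 100 + 80 = 240.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

The sort dominates, so the running time is the O(n log n) mentioned above.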

So he proves it in the slides, but his handwriting.... I'm not including that here.