Week 12

I loved the lecture on memoization, mostly because it related to something I’ve been thinking about on the side.

Think back to Assignment 1. The four-stool solution is:

(1) move n-i cheeses to an intermediate stool using this method;

(2) move the remaining i cheeses to the destination stool using the 3-stool method, without touching the intermediate stool;

(3) move the n-i cheeses from the intermediate stool to the destination stool using this method.

And the question is: we're never actually told the value of i or how to calculate it. So how do we pick it?
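To make those three steps concrete, here's a minimal sketch of the method as code. The names are my own, and the split is passed in as a function i_of, since choosing it is exactly the open question; following the recurrence used throughout this post, i counts the cheeses handled by the 3-stool method:

```python
def hanoi3(n, src, dest, aux, moves):
    # classic 3-stool Towers of Hanoi: takes 2**n - 1 moves
    if n > 0:
        hanoi3(n - 1, src, aux, dest, moves)
        moves.append((src, dest))
        hanoi3(n - 1, aux, dest, src, moves)

def hanoi4(n, src, dest, aux1, aux2, i_of, moves):
    # four-stool method; i_of(n) picks how many cheeses the 3-stool step gets
    if n == 1:
        moves.append((src, dest))
        return
    i = i_of(n)
    hanoi4(n - i, src, aux1, dest, aux2, i_of, moves)  # park n-i on aux1
    hanoi3(i, src, dest, aux2, moves)                  # move i, avoiding aux1
    hanoi4(n - i, aux1, dest, src, aux2, i_of, moves)  # bring the n-i home

moves = []
hanoi4(3, 0, 3, 1, 2, lambda n: 2, moves)
print(len(moves))  # 5, matching 2 * 1 + (2**2 - 1)
```

The total move count works out to twice the four-stool cost of n-i, plus 2**i - 1 for the 3-stool step, which is exactly the quantity everything below tries to minimize over i.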

It took me a while to come up with an answer to this question, and even then I wasn't satisfied with it.

Solution number 1: write a helper function that finds the minimum number of moves for n cheeses; then write a second function that tries every option for an i value and picks the one that yields that minimum.

Code:

def moves_req(n: int) -> int:
    '''Return the min number of moves to move n cheeses on 4 stools.'''

    if n == 1:
        return 1
    return min(2 * moves_req(n - i) + 2 ** i - 1 for i in range(1, n))

def ideal_i(n: int) -> int:
    '''Return the value i that produces the min number of moves to
    move n cheeses on 4 stools.'''

    ideals = [1]
    while len(ideals) < n:

        # Build a dict mapping each possible i to the moves it requires.
        temp = {}
        for i in range(1, n):
            temp[i] = 2 * moves_req(n - i) + 2 ** i - 1

        # Pick the i that requires the fewest moves.
        ideals.append(min(temp, key=temp.get))

    return ideals[n-1]

One thing you'll notice is the redundancy: both moves_req and ideal_i try values of i and pick the one with the smallest result. So why not just call moves_req with n+1 and grab the i it picked for its call to itself at n?! The whole structure of ideal_i is a glorified version of what moves_req already does much more simply.

But anyway, the biggest problem I found was that this is massively slow. Try it yourself in a blank Python file. Call ideal_i(10). It returns. Call ideal_i(20). It returns, a little slower. Call ideal_i(25). Noticeably slow. Call ideal_i(30)… why don’t you go put on the kettle?

Luckily, for the purposes of the assignment there never seem to be that many cheeses, so I had no problems actually executing it for my final report. But still, I knew it had to be possible to do this faster…

My frustration built for a little while. One day — a couple of weeks later — I realized I had a sort of solution. Every time you call moves_req with n == 5, it calculates the answer for n == 4, n == 3… etc. So why not capture its solutions at each level and reuse them the next time? I had, of course, school and work to do, but I kept trying to write code in my mind all day. Each time I got a chance to try it at a computer, something went wrong.

Finally, while getting a ride home, I took out pen and paper and worked out a version that made sense to me. (Good thing we have all that practice from doing tests by hand, eh?) When I got home, I tried it out, and it worked! Here it is:

def ideal_i(n: int, ideals: list=[(1, 1)]):
    '''Return the tuple (ideal i, min moves) for n cheeses on 4 stools,
    caching results in the mutable default list ideals.'''

    while len(ideals) < n:
        options = {}
        for i in range(1, n):
            options[i] = 2 * ideal_i(n-i, ideals)[1] + 2**i - 1
        best = min(options, key=options.get)
        ideals.append((best, options[best]))
    return ideals[n-1]

As you can see, it’s very similar to the previous one, but there are a few significant differences. For one thing, I collapsed the two functions into one.

Now, this one function takes a second parameter called ideals. Note that it has an equals sign after it, which means that it has a default value. Any calls that actually specify a second parameter will use the specified value; any calls that only specify one parameter will use the default value. Very useful feature of parameters.
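There's a wrinkle with default values that matters here: when the default is a mutable object like a list, Python builds it once, when the def statement runs, and then reuses that same object on every call that leaves the argument out. A toy example (my own, not from the assignment):

```python
def remember(x, seen=[]):      # the default list is created once, at def time
    seen.append(x)
    return seen

print(remember('a'))           # ['a']
print(remember('b'))           # ['a', 'b'] -- the same list, still growing
print(remember('c', []))       # ['c'] -- passing a list explicitly bypasses it
```

That persistence is usually taught as a pitfall, but here it works as a free cache: every call without a second argument shares one ever-growing list of answers.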

So what is that parameter? It’s a list of tuples; the index corresponds to n (the first index is n == 1), and in each tuple, the first element is the ideal value of i for that n and the second element is the number of moves for that n.

Our strategy: given n, we check whether we already have a tuple for it in our list. If not, we extend the list. For each i we try, the moves for n-i are already sitting in a handy tuple in the list (the recursive call makes sure of that). We record both the best i and the moves it takes, and append that as a new tuple. Once our list of ideals is long enough to look up n, we return that entry.

So, great! This one runs at comfortable speeds up to a much larger n. Note that it actually has to be done in blocks of 800 or so, though… because it hits the maximum recursion depth. (Python remembers the value of ideals between calls, so as long as the shell doesn't restart, a cold call of ideal_i(1500) will fail, but a call of ideal_i(750) followed by a call of ideal_i(1500) will work.) So now I could calculate an i value for an n in the low thousands.
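That priming behaviour is easy to reproduce with a toy stand-in (my own function, not ideal_i itself; it recurses and caches in its mutable default the same way):

```python
import sys

def count_up(n, cache=[0]):
    # stand-in for ideal_i: recursive, caching results in its mutable default
    if len(cache) <= n:
        count_up(n - 1, cache)
        cache.append(cache[-1] + 1)
    return cache[n]

limit = sys.getrecursionlimit()               # commonly 1000
try:
    count_up(limit * 2)                       # one cold call: too deep
except RecursionError:
    print("RecursionError on the cold call")

for stop in range(limit // 2, limit * 2 + 1, limit // 2):
    count_up(stop)                            # prime the cache in blocks
print(count_up(limit * 2))                    # now the big call succeeds
```

Each primed block only has to recurse from where the cache left off, so no single call gets anywhere near the depth limit.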

I sent this to zingarod and he introduced me to dynamic programming via Wikipedia. That’s when I learned that I’d come up with something along the lines of memoization. Cool!

I put this to bed for a while, but one Saturday I woke up and decided to fiddle with it. And suddenly, I noticed a pattern. (Note: Due to a minor error in my code that I just noticed and fixed five minutes ago, the pattern I noticed was off. Everything I write after this point will use the correct pattern and code, since the methods are the same anyway, even though the actual history is slightly different.)

What was this pattern? Well, printing the i value for each n sequentially, starting at n == 1, we get…

1 1 2 2 2 3 3 3 3 4 4 4 4 4 ...

Wow! Pretty simple, eh? A run of length 2 where the value is 1; then a run of length 3 where the value is 2; then a run of length 4 where the value is 3 …

So I came up with this next solution, which has the advantage, in this case, of not being recursive:

def ideal_i(n: int):
    
    ideals, i, step = [], 1, 2
    
    while len(ideals) < n:
        ideals += [i] * step
        i += 1
        step += 1
    return ideals[n-1]

As you can see, it starts off with an empty ideals list. Then it appends values to that: first it appends [1] * 2 to it, then [2] * 3, etc, just as we saw above. Then it picks the correct value for n.

This is a good solution in some ways, and it connects to another concept in dynamic programming: naivety. A solution that actually calculates i for each n by testing the possible values is a "naive" one, meaning it proceeds as though we don't yet know the answer. But once we've seen the pattern, we do in fact know the answer, and can produce it by a much simpler, faster means.

Hence, I saved this in a file called Tour_radical.py. It just felt so much like cheating. After all, that pattern could technically not hold at, say, n == 10,000. We can’t be sure, for example, that the length of each run really increases by 1 instead of by 1.00001 or something, meaning that eventually our prediction of the length of a run would be off. And, of course, the naive version is too slow to test it and compare at every value. So I tested it up to 800 or so in a separate file using code like this:

all(ideal_i(x)[0] == ideal_i_2(x) for x in range(1, 800))

*Note: If you’re following along and you use this to compare all the versions later in the post, it will not always return True. This is because for some values of n, there are actually two values for i that yield the same number of moves required, and the algorithms might not choose the same one.
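Another way around the tie problem is to compare the resulting move counts rather than the i values themselves, since tied choices produce identical counts. Here's a self-contained sketch of that check (the function names are mine):

```python
def moves_naive(n, cache={1: 1}):
    # naive minimum over every possible split, memoized in the default dict
    if n not in cache:
        cache[n] = min(2 * moves_naive(n - i) + 2 ** i - 1
                       for i in range(1, n))
    return cache[n]

def ideal_i_pattern(n):
    # the run-length pattern: 1 1 2 2 2 3 3 3 3 ...
    i, pseudo_len = 1, 2
    while pseudo_len < n:
        i += 1
        pseudo_len += i + 1
    return i

# the pattern's choice of i should achieve the naive minimum at every n
for n in range(2, 300):
    i = ideal_i_pattern(n)
    assert 2 * moves_naive(n - i) + 2 ** i - 1 == moves_naive(n)
print("pattern matches the naive minimum up to n == 299")
```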

So that’s as far as we can test the pattern… The rest is based on faith.

But there was still one optimization I could make to the code. If you test it, you’ll find that it only works up to about n == 125,000,000, and that probably depends on your machine. What happens after that? Well, first I got Python aborting. Then, with smaller numbers, I started to get MemoryErrors.

That's because that list gets utterly massive at large n. I once tried writing it to a text file with a very large n, and the file came out over 200 MB. The TEXT FILE. DOT T X T. So, yeah. Not good.

Then I realized something more. Why do I have to keep an actual list of the values up to that point? I can calculate how long it would be if we did keep an actual list. So why don’t I create a variable to capture that hypothetical length, keep track of what the ideal i would be at each step, and return that value when the length is long enough?

def ideal_i(n: int):

    i, pseudo_len = 1, 2
    while pseudo_len < n:
        i += 1
        pseudo_len += i+1
    return i

This one is pretty self-explanatory.

And very quick. It's effectively instant up to about n == 1,000,000,000,000, and runs tolerably up to about n == 1,000,000,000,000,000. Its time is actually sublinear: pseudo_len grows by i + 1 on each pass, so after k passes it sits at roughly k²/2, which means the loop runs only about √(2n) times.

What does frustrate me, though, is that I’d probably be able to make it instant if I’d taken math further than grade 12… Maybe someone else can help out here? My suspicion is that with such an obvious pattern, there must be a way to predict in a single line, given a value n, how many times we would run the while loop if we did run it. And that number would, of course, be the same as our ideal i minus 1.

Oh well. I’ll have to be satisfied with this. After all, Anne will probably never be able to fit more than a quadrillion cheeses on a single stool, no matter how much she reinforces the stool’s legs.

But back up a second. I took this solution to zingarod, following up from my previous email, and he liked it. It clearly takes less time than other algorithms. But he invited me to his office to show me his own solution as well.

What was his? Well, it was basically my memoization one, except that instead of recursion he used iteration to extend the list of ideals. In terms of runtime they should be about equal (mine essentially iterates through calls to the function, since the loop condition, len(ideals) < n, stays updated), but remember that mine kept hitting the maximum recursion depth and needed to be "primed" in blocks of 800 or so. Switching from recursion to iteration fixes that.

Here it is, as best as I remember it and edited for uniformity with my style (I don’t have a copy of what he showed me in his office):

def ideal_i(n):
    
    ideals = [(0, 1), (1, 1), (1, 3)] # list of (i, min_moves)
  
    for value in range(3, n + 1):
        options = {}
        for i in range(1, n):
            if value - i >= 0:
                options[i] = 2 * ideals[value - i][1] + 2 ** i - 1
        best = min(options, key=options.get)
        ideals.append((best, options[best]))
    
    return ideals[n]

Now, here’s something to think about: Why might we use zingarod’s solution, which still only runs on an n in the low thousands, rather than my ultra-fast one?

The answer: Remember that ideals was a list of tuples in that version of the function. The second element in each tuple was the moves required. Keeping track of the moves required could be a very valuable behaviour, depending on what you need the function for. (You don’t technically need it for assignment 1, but that’s because we keep track of the required moves by actually making them. Speaking of which… tour_of_four_stools is only tolerably fast for up to about 120 cheeses anyway, so all this ideal i stuff is very much hypothetical!)

The other reason zingarod's solution is good is that it uses "naive" calculation: we can be sure that each i really is the one that requires the fewest moves, instead of having to take the pattern on faith.

But before I came to the end of my journey with this function, I realized there could be one more optimization to the code, and offered a hybrid solution to zingarod. We use a bit of naive calculation and a bit of the pattern.

Here's how it works. The original formula for the four-stool solution requires picking some value i where 0 < i < n. That means that for n == 1,000,000 we have to try 999,999 values of i before we find the one that generates the fewest moves. This is also what makes the algorithm take quadratic time: it goes once through every value up to n, and for each value it tries up to n possible values of i.

But in reality, we can make much, much better guesses right off the bat. Specifically, we observe from our pattern that each i value will either be (a) equal to the value preceding it, or (b) exactly one more than the value preceding it. That limits our choices at every value to only 2.

Which means it stays linear. Which means it is, as zingarod put it, “fast as heck”.

Here’s the implementation (his code, plus edits for uniformity with my style to make it easier to compare with the other versions):

def ideal_i(n):
    
    ideals = [(0, 1), (1, 1), (1, 3)] # list of (i, min_moves)
  
    for value in range(3, n + 1):
        options = {}
        anchor = ideals[value - 1][0]
        for i in (anchor, anchor + 1):
            if value - i >= 0:
                options[i] = 2 * ideals[value - i][1] + 2 ** i - 1
        best = min(options, key=options.get)
        ideals.append((best, options[best]))
    
    return ideals[n]

This can handle n up to about 1,000,000, which is still very impressive with half-naive calculations. Woo!

So that’s that. I had a lot of fun working on this problem! Moral: Think outside the box and try new things. Follow a lead, even weeks after the due date, if it really interests you. Always use your professor as a resource outside of class to further your understanding. And so on.

Welp… This is my last post on this here slog. And one of my last three pieces of schoolwork in university! I’ll miss y’all, and thanks for reading along. And good luck to you all in future years!

>>> BerkeyColortran.quit()

****

UPDATE: I worked out a constant-time solution.
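For what it's worth, here's one way such a constant-time version can look; this is a reconstruction from the pattern above, not necessarily the exact solution the update refers to. The run of each value i has length i + 1, so the answer for n is the smallest i with 2 + 3 + … + (i + 1) >= n, i.e. (i + 1)(i + 2)/2 - 1 >= n, and the quadratic formula gives that directly:

```python
import math

def ideal_i_const(n: int) -> int:
    # smallest i with 2 + 3 + ... + (i + 1) >= n, i.e. (i+1)(i+2)/2 - 1 >= n;
    # math.isqrt keeps the arithmetic exact even for astronomically large n
    i = (math.isqrt(8 * n + 9) - 3) // 2
    while (i + 1) * (i + 2) // 2 - 1 < n:   # nudge up if the floor fell short
        i += 1
    return i

print(ideal_i_const(10 ** 18))   # instant, no loop over n
```

A quick check confirms it agrees with the while-loop version for every small n, and because it uses integer square roots rather than floats, it stays exact far beyond a quadrillion cheeses.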
