Sunday 23 November 2014

The Liar's Paradox

My last post on the programs halt and bang concluded that bang(bang) halts if, and only if, it doesn't halt, which is clearly a contradiction. This occurs because the self-reference creates a paradox. A classic example of this is the statement:
This sentence is false
Because the sentence references itself, it is both true and false at the same time. This is called the Liar's Paradox, which gets its name from Psalm 116:11, in which David says:
I said in my alarm, "Every man is a liar!"
Again, however you read the statement, you get a contradiction, because the sentence is self-referential: the speaker is himself a man, so if every man is a liar, then his own statement is a lie.

I thought this was an interesting little complement to our class discussions on proving something uncomputable by trying to compute it and creating a contradiction.

The Halting Problem

Kurt Gödel, who first proved that some truths are unprovable, made logic seem more "illogical" for all subsequent students of mathematics. This concept has important implications for computer science because there are problems that no algorithm can solve. One of these is Alan Turing's Halting Problem, the first problem proved to be uncomputable. A computer program either halts (terminates, whether by returning an output or raising an error) or never halts (gets caught in an infinite loop), and it would be very useful for programmers to have a program that says whether any function, given any input, will halt or not. Unfortunately, the algorithm for such a program cannot be written.

def halt(f, i):
    '''Return True iff f(i) halts, otherwise return False.'''
    # *some algorithm here*
The proof for this is simple (but still confusing to wrap your head around) and uses a program, which we'll call bang, that halts for some inputs and not for others.

def bang(f):
    '''Return 0 if halt(f, f) is False, else don't halt.'''
    # if halt(f, f) is True, go into an infinite loop.
    while halt(f, f):
        pass
    # if halt(f, f) is False, then stop.
    return 0

So what happens if we run bang(bang)? There are two possibilities:

  1. bang halts, meaning halt(bang, bang) was False. BUT that means halt identified bang as a program that does not halt!
  2. bang does not halt, meaning halt(bang, bang) was True. BUT then halt identified bang as a program that does halt!
Thus, we have a proof by contradiction: bang halts if, and only if, it doesn't halt.
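Although a true halt can't be written, a step-budget approximation makes the idea concrete. The sketch below is my own illustration, not something from class (halt_within, loops_forever, and returns_quickly are invented names): it traces f(i) with Python's sys.settrace and gives up after a fixed number of steps, so a True answer is trustworthy, but False only means "didn't halt within the budget", not "never halts".

```python
import sys


class BudgetExceeded(Exception):
    pass


def halt_within(f, i, budget=10_000):
    """Return True if f(i) returns within `budget` traced steps,
    False if the budget runs out first. This only approximates
    halt: False means "didn't halt yet", not "never halts"."""
    steps = 0

    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > budget:
            raise BudgetExceeded
        return tracer

    sys.settrace(tracer)
    try:
        f(i)
        return True
    except BudgetExceeded:
        return False
    finally:
        sys.settrace(None)


def loops_forever(x):
    while True:
        pass


def returns_quickly(x):
    return x + 1
```

This gap is exactly why halt is uncomputable: no finite budget can tell "still running" apart from "never going to halt".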

So, if at this point you are completely confused, that's okay! I had to work through the explanation several times after class before it started to click. Then I sat my roommate down and took him through the proof. After explaining it to him several times, I felt like an expert.


Monday 17 November 2014

Problem Solving: The Paper Folding Problem

Weeks ago I introduced the Paper Folding Problem that we spent a class trying to solve. I'm going to walk you through the process that led me to a possible solution.

The Problem: If you fold a piece of paper in half (right over left), the crease or vertex will point downwards when the piece of paper is unfolded to its original orientation. Using the same technique to make multiple folds in the piece of paper creates a pattern of up and down creases. If you're given the number of folds, can you produce the pattern of creases?

The Plan: First I started folding a strip of paper and recording the patterns of creases after each fold using arrows to indicate the direction of the crease. (See picture).



I noticed a pattern forming... the middle crease created by the first fold always pointed down (blue arrows in the picture above) and there was a type of symmetry around this central crease. The creases to the left of the central crease were the opposite mirror image (pattern flipped horizontally and vertically) of the pattern of creases on the right. Also, the pattern of creases left of the central crease after n folds was the same as the whole pattern for n-1 folds (purple ovals in the picture above). Therefore, to determine the pattern for n folds, one could simply write the previous pattern (n-1 folds) + downwards arrow + opposing mirror image of the previous pattern (n-1 folds).
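That construction, previous pattern + central down crease + mirrored previous pattern, translates almost directly into Python. This is a sketch of my own (the name crease_pattern and the 'down'/'up' strings are just illustrative choices):

```python
def crease_pattern(n):
    """Creases from left to right after n right-over-left folds:
    previous pattern + central down crease + the previous pattern
    reversed and flipped (the 'opposite mirror image')."""
    if n == 1:
        return ['down']
    prev = crease_pattern(n - 1)
    flip = {'down': 'up', 'up': 'down'}
    mirrored = [flip[c] for c in reversed(prev)]
    return prev + ['down'] + mirrored
```

For example, crease_pattern(2) returns ['down', 'down', 'up'], and each extra fold doubles the crease count plus one, so n folds give 2^n - 1 creases.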

Looking back, I wondered if there was another way to build the pattern that might be easier to program. As I folded a new strip of paper, I wrote the number of the fold on the creases formed by that fold. I noticed that the creases from the new fold appeared on either side of, and between, the creases from the previous folds (see picture below). Also, these new creases go down, up, down, up...

Using this, it wouldn't be too difficult to create a program that takes a number of folds (n) and produces a list of ups and downs that represents the pattern of creases from left to right in a piece of paper folded n times. I would use recursion to produce the list for n folds from the list for n-1 folds, from the list for n-2 folds, and so on, until n=1 and the list is [down].
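The second observation, that each new fold's creases alternate down, up, down, ... and land before, between, and after the old creases, also gives an iterative version. Again a sketch of my own with invented names:

```python
def crease_pattern_interleave(n):
    """Build the crease pattern by weaving each new fold's
    creases (alternating down, up, down, ...) before, between,
    and after the creases left by the previous folds."""
    pattern = ['down']                  # one fold: a single down crease
    for _ in range(n - 1):
        woven = []
        for i, old in enumerate(pattern):
            woven.append('down' if i % 2 == 0 else 'up')  # new crease
            woven.append(old)                             # old crease
        # one final new crease after the last old one
        woven.append('down' if len(pattern) % 2 == 0 else 'up')
        pattern = woven
    return pattern
```

Both constructions describe the same folding process, so they should produce identical lists for every n.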

I'd be interested to hear if anyone worked out a different solution! Post a link to your solution in the comments.

Sunday 9 November 2014

"Ah ha!" Moments

The second term test happened this week. If you had asked me a week before the test how I thought it would go, I would have told you I was not expecting to do well, because proofs are the bane of my existence. But the day of the midterm I was feeling calm and confident because, in that last week before writing, everything related to proofs started to click. The assignment and tutorials were a major part of this "awakening". As I looked over slides from the lectures, steps that had completely stumped me before were making sense because I understood the reasoning behind each step.

So I was very pleased when the midterm went well! I understood what was being asked in each of the three proofs and I think I managed to prove (or disprove) them in a logical and clear manner. 

Now I'm hoping the same insight will happen with run time analysis! 

Monday 3 November 2014

Big-Oh and Big-Theta

Optimising the efficiency of algorithms is an important component of computer science, because how smoothly programs run and how much processing power they need depend on their run time. Run-time analysis is a useful way of describing the resource requirements of a particular algorithm. In class, we've been analysing the efficiency of functions on lists by determining their worst-case run time for input of size n.

Big-Oh notation expresses an upper bound on a function's run time, by overestimating the number of steps needed to produce the output. For example, the worst case when sequentially searching for an item in a list of n items is that the desired item is not in the list, but you only know this after looking at every item in the list. This requires n steps to compare the input to each item, plus 1 more step to return the output (False). Therefore, the run time of this search is linear, or O(n).
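As a sketch (my own code, not taken from class), the sequential search described above might look like this; the worst case walks the whole list:

```python
def linear_search(lst, target):
    """Sequential search: return True if target is in lst.
    Worst case (target absent): n comparisons, one per item,
    plus one final step to return False, so O(n)."""
    for item in lst:          # at most n comparisons
        if item == target:
            return True
    return False              # one more step after all n comparisons
```

On a list of n items the loop body runs at most n times, which is where the n + 1 step count comes from.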

Big-Omega notation expresses the lower bound of a function's run time, by underestimating the number of steps needed to produce the output.

Big-Theta notation combines the two: a function is Big-Theta of a growth rate when it is bounded both above (Big-Oh) and below (Big-Omega) by that rate, making Theta a tight bound. The upper and lower bounds often have the same type of run time (constant, linear, quadratic, etc.) for a function, just multiplied by different constants (c). Because Big-Oh is the asymptotic upper bound, its c is greater than the c for Big-Omega, the asymptotic lower bound, and whenever the two bounds share the same growth rate, the function is Big-Theta of that rate.
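To make the constants concrete, here is a small check of my own (search_steps is an invented name for the worst-case step count of the sequential search described earlier): n + 1 steps are squeezed between 1·n and 2·n for every n ≥ 1, so the step count is bounded above and below by constant multiples of n.

```python
def search_steps(n):
    """Worst-case step count for sequential search on a list
    of n items: n comparisons plus one step to return False."""
    return n + 1


# For every n >= 1:  1 * n <= search_steps(n) <= 2 * n,
# so the step count is sandwiched between constant multiples
# of n, i.e. the worst-case run time is linear.
bounded = all(n <= search_steps(n) <= 2 * n for n in range(1, 1000))
```

The two constants (1 and 2) differ, but both bounds grow linearly, which is exactly the situation described above.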

More about proofs

This post is a week late, my apologies!

My previous post dealt with proof structures and I'll pick up from there. While the proof structure is important for framing the problem and organising your approach, the guts of the proof are where the magic happens. And when we first started proofs in class, it felt like magic was involved. I found it difficult to follow Danny's thinking and keep up. It was like he was pulling the solution out of a hat. I started to feel anxious about proofs and I was worried I was falling behind. Luckily, working through proofs in the tutorials helped me understand why certain approaches make sense for a specific type of proof. But it was working on assignment 2 that really solidified my confidence that I can prove the types of statements we dealt with in class. I gave myself lots of time to do the assignment and made sure I understood each statement before starting the proof. Especially with the proofs about limits of the floor of x function, drawing the function helped my understanding.

I feel far less anxious about proofs than I did two weeks ago and, with midterm 2 coming up, I am confident that it will all be okay!