Tuesday 16 December 2014

Follow up to the Diagonals Problem

In my solution to the diagonals problem, I felt like I wasn't able to adequately explain how I got from the numbers to the final equations. That's because I'm not too sure myself; it was the product of trial and error, luck and a bit of inspiration.

In his post, Tim does a great job of showing the thinking that led him to the same formula I got. He also posted the code for a Python program that takes the dimensions of the grid as input and returns the number of squares the diagonal crosses. The trickiest part of the program was the helper function for determining the greatest common factor of m and n. I hadn't thought that far... I was hoping Python would have a built-in function for that!
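For anyone curious, here is a minimal sketch of what such a program could look like. This is my own reconstruction rather than Tim's actual code, with a hand-written helper for the greatest common factor (Euclid's algorithm):

def gcd(a, b):
   '''Return the greatest common factor of a and b (Euclid's algorithm).'''
   while b != 0:
      a, b = b, a % b
   return a

def squares_crossed(m, n):
   '''Return the number of squares the diagonal of an m x n grid crosses.'''
   f = gcd(m, n)
   return f * (m // f + n // f - 1)

print(squares_crossed(8, 6))   # 12, matching the 8 x 6 example in the post below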

This will be my last post for the course. It was interesting, challenging and I feel well prepared for CSC 236 next semester. I'm a bit embarrassed that I forgot how to take derivatives during the final, but I'm going to chalk that up to exam time stress.

Thanks for reading!


Sunday 14 December 2014

Problem Solving: Diagonals


Given a grid of squares with a certain number of rows (m) and columns (n), is it possible to determine the number of squares a line will pass through if it is drawn from the top left corner to the bottom right? This was a problem given to us in a problem-solving class, and I would like to discuss how I went about solving it.

The problem
  • Input: two natural numbers, m and n, that describe the number of rows and columns of the grid (m x n).
  • Output: the number of squares (s) that a diagonal crosses from the top left corner to the bottom right corner of an m x n grid.

The Plan: Draw some m x n grids and diagonal lines. Look for patterns and interesting cases. 


5 x 3, s = 7; 5 x 8, s = 12



First observations:
1) the order of m and n doesn't matter: diagonals through 8 x 1 and 1 x 8 sections both cross 8 squares (blue in the picture below).
2) when m = n, you have a perfect square and s = m = n. A diagonal through an 8 x 8 grid crosses 8 squares, because it goes perfectly from the top left to bottom right corner of each square (green in the picture below).

So for ease of computing s, I can arbitrarily assign m to be the smaller number, or the odd number, or the prime number: whatever makes writing a function easiest.

8 x 8, s = 8


I noticed that when m is divisible by n or vice versa, s = max(n, m). This still leaves a lot of combinations to solve, but it led me to notice that if m and n share a common factor, the line splits up into smaller, identical sections (e.g. 8 x 6 = 2(4 x 3)). This means you can multiply the number of squares the line passes through in the smaller section (4 x 3, s = 6) by the number of smaller sections (2) that make up the large section, giving 8 x 6, s = 2 * 6 = 12.


8 x 6 = 2(4 x 3)

Acknowledging when you're stuck: At this point I was a bit stuck. I knew how to determine s for certain specific cases, but there was still the problem of working out s when m and n only shared a common factor of 1. It was getting very complicated looking at grids with lines going all over the place. It was time for a new approach. I turned to just looking at the numbers to see if I could find a pattern. I created a table with values of n and m on each axis and filled in s values.

After staring at the numbers for a while and trying out different ways of getting from m and n to s, I came across the equation s = n + m - 1. This seemed to work for the cases I hadn't solved yet (i.e. when n and m only have a common factor of 1).

Looking back, I realized that a variation on this equation actually works for all cases, if the greatest common factor (F) of m and n is found. Then the equation can be written as:
s = F(n/F + m/F - 1)
where F is the greatest common factor of m and n. For example, for the 8 x 6 grid above, F = 2 and s = 2(4 + 3 - 1) = 12, which matches the count from splitting the grid into sections. In the end, it didn't matter whether n or m was larger, smaller, even or odd.

Sunday 23 November 2014

The Liar's Paradox

My last post, on the programs halt and bang, concluded that bang(bang) halts if, and only if, it doesn't halt, which is clearly a contradiction. This happens because the self-reference creates a paradox. A classic example of this is the statement:
This sentence is false
Because the sentence references itself, it is both true and false at the same time. This is called the Liar's Paradox, which gets its name from Psalm 116:11, in which David says:
I said in my alarm, "Every man is a liar!"
Again, however you read the statement, you get a contradiction, because the sentence is self-referential: the speaker is himself a man.

I thought this was an interesting little complement to our class discussions on proving something uncomputable by trying to compute it and creating a contradiction.

The Halting Problem

Kurt Godel, who first proved that some truths are unprovable, made logic seem more "illogical" for all subsequent students of mathematics. This concept has important implications for computer science, because there are problems that no algorithm can solve. One of these is Alan Turing's Halting Problem, the first problem proved to be uncomputable. Computer programs either halt (return an output, raise an error, or change a mutable variable) or they never halt (get caught in an infinite loop). It would be very useful for programmers to have a program that says whether any function, given any input, will halt or not. Unfortunately, the algorithm for such a program cannot be written.

def halt(f, i):
   '''Return True iff f(i) halts, otherwise return False.'''
   # some algorithm here -- the proof below shows no such algorithm can exist
   pass

The proof for this is simple (but still confusing to wrap your head around) and uses a program, which we'll call bang, that halts for some inputs and not for others.

def bang(f):
   '''Return 0 if halt(f, f) is False, else don't halt.'''
   # if halt(f, f) is True, go into an infinite loop.
   while halt(f, f):
      pass
   # if halt(f, f) is False, then stop.
   return 0

So what happens if we run bang(bang)? There are two possibilities:

  1. bang(bang) halts, meaning halt(bang, bang) returned False. BUT that means halt identified bang(bang) as a call that does not halt!
  2. bang(bang) does not halt, meaning halt(bang, bang) returned True. BUT then halt identified bang(bang) as a call that does halt!
Thus, we have a proof by contradiction: bang(bang) halts if, and only if, it doesn't halt. Since assuming halt exists leads to this contradiction, no such algorithm for halt can be written.

So, if at this point you are completely confused, that's okay! I had to work through the explanation several times after class before it started to click. Then I sat my roommate down and took him through the proof. After explaining it to him several times, I felt like an expert.


Monday 17 November 2014

Problem Solving: The Paper Folding Problem

Weeks ago I introduced the Paper Folding Problem that we spent a class trying to solve. I'm going to walk you through the process that led me to a possible solution.

The Problem: If you fold a piece of paper in half (right over left), the crease, or vertex, will point downwards when the paper is unfolded to its original orientation. Using the same technique to make multiple folds in the paper creates a pattern of up and down creases. Given the number of folds, can you produce the pattern of creases?

The Plan: First I started folding a strip of paper and recording the patterns of creases after each fold using arrows to indicate the direction of the crease. (See picture).



I noticed a pattern forming... the middle crease created by the first fold always pointed down (blue arrows in the picture above) and there was a type of symmetry around this central crease. The creases to the left of the central crease were the opposite mirror image (the pattern flipped horizontally and vertically) of the pattern of creases on the right. Also, the pattern of creases left of the central crease after n folds was the same as the whole pattern for n-1 folds (purple ovals in the picture above). Therefore, to determine the pattern for n folds, one can simply write the previous pattern (n-1 folds) + a downwards arrow + the opposing mirror image of the previous pattern (n-1 folds).

Looking back, I wondered if there was another way to build the pattern that might be easier to program. As I folded a new strip of paper, I wrote the number of the fold on the creases formed by that fold. I noticed that the creases from the new fold appeared on either side of, and between, the creases from the previous folds (see picture below). Also, these new creases go down, up, down, up...

Using this, it wouldn't be too difficult to create a program that takes a number of folds (n) and produces a list of ups and downs representing the pattern of creases, from left to right, in a piece of paper folded n times. I would use recursion to produce the list for n folds from the list for n-1 folds, which in turn comes from the list for n-2 folds, and so on, until n = 1 and the list is [down].
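Here is a rough sketch of that second approach in Python. The function name and the representation (a list of 'down'/'up' strings) are my own choices, and I've only checked it against the small cases from my folded strip:

def creases(n):
   '''Return the pattern of creases, read left to right, after n folds.'''
   if n == 1:
      return ['down']
   previous = creases(n - 1)
   pattern = []
   for i, old in enumerate(previous):
      # New creases alternate down, up, down, up... and sit on either side
      # of, and between, the creases from the previous folds.
      pattern.append('down' if i % 2 == 0 else 'up')
      pattern.append(old)
   pattern.append('down' if len(previous) % 2 == 0 else 'up')
   return pattern

print(creases(3))   # ['down', 'down', 'up', 'down', 'down', 'up', 'up']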

I'd be interested to hear if anyone worked out a different solution! Post a link to your solution in the comments.

Sunday 9 November 2014

"Ah ha!" Moments

The second term test happened this week. If you had asked me a week before the test how I thought it would go, I would have told you I was not expecting to do well, because proofs are the bane of my existence. But on the day of the midterm I was feeling calm and confident, because in that last week before writing, everything related to proofs started to click. The assignment and tutorials were a major part of this "awakening". As I looked over slides from the lectures, steps that had completely stumped me before were making sense because I understood the reasoning behind the text.

So I was very pleased when the midterm went well! I understood what was being asked in each of the three proofs and I think I managed to prove (or disprove) them in a logical and clear manner. 

Now I'm hoping the same insight will happen with run time analysis! 

Monday 3 November 2014

Big-Oh and Big-Theta

Optimising the efficiency of algorithms is an important part of computer science, because how smoothly programs run and how much processing power they need depend on their run time. Run time analysis is a useful way of describing the resource constraints of a particular algorithm. In class, we've been analysing the efficiency of functions that operate on lists by determining their worst-case run time for an input of size n.

Big-Oh notation expresses the upper bound of a function's run time, by overestimating the number of steps needed to produce the output. For example, the worst-case scenario when sequentially searching for an item in a list of n items is that the desired item is not in the list, but you only know this after looking at every item in the list. This requires n steps for comparing the input to each item in the list, plus 1 more step for returning the output (False). Therefore, the run time of this search is linear, or O(n).
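As a rough sketch (not the exact code from lecture), the search I have in mind looks something like this; in the worst case the loop makes all n comparisons before the final return executes:

def linear_search(items, target):
   '''Return True iff target is in items, checking one item at a time.'''
   for item in items:         # up to n comparisons in the worst case
      if item == target:
         return True
   return False               # 1 more step when target isn't in the list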

Big-Omega notation expresses the lower bound of a function's run time, by underestimating the number of steps needed to produce the output.

Big-Oh and Big-Omega seem to usually bound a function with the same type of run time (constant, linear, quadratic, etc.), just multiplied by different constants (c). Because Big-Oh is the asymptotic upper bound, its c is greater than the c for Big-Omega, the asymptotic lower bound. When the upper and lower bounds have the same type like this, the function's run time is Big-Theta of that type, which pins the run time down exactly (up to constants).

More about proofs

This post is a week late, my apologies!

My previous post dealt with proof structures and I'll pick up from there. While the proof structure is important for framing the problem and organising your approach, the guts of the proof are where the magic happens. And when we first started proofs in class, it felt like magic was involved. I found it difficult to follow Danny's thinking and keep up. It was like he was pulling the solution out of a hat. I started to feel anxious about proofs and I was worried I was falling behind. Luckily, working through proofs in the tutorials helped me understand why certain approaches make sense for a specific type of proof. But it was working on assignment 2 that really solidified my confidence that I can prove the types of statements we dealt with in class. I gave myself lots of time to do the assignment and made sure I understood each statement before starting the proof. Especially with the proofs about limits of the floor of x function, drawing the function helped my understanding.

I feel far less anxious about proofs than I did two weeks ago and, with midterm 2 coming up, I am confident that it will all be okay!


Sunday 19 October 2014

Proofs

Proofs have been an integral part of mathematics since the Ancient Greeks. They can be thought of as logical arguments for proving a mathematical statement based on assumptions and known relationships. A good proof should be elegant, easy to follow and should stand up to testing. This can be very intimidating for beginners (which is what we are in CSC165). But, if you know how to structure a proof, then there's always a place to start, and a good structure can facilitate ideas and insight. Since I am a visual thinker, I'm going to compare the structure of a proof to matryoshka dolls (cheesy, I know).


Another name for matryoshka dolls is Russian nesting dolls, because each smaller doll is contained in a bigger doll. The smallest doll is the essence of the proof, the part that requires insight and can be tricky. The outer dolls contain that insight and apply it to the domains where the relationship is true. I've illustrated a proof structure from last week's tutorial to show what I mean.

If you are comfortable creating the structure of a proof, then you can spend more energy on the little doll. More on how to go about proving the mathematical relationship contained by the small doll in my next post.

Sunday 28 September 2014

Paper Folding Problem

Firstly, apologies to my TA if every post he reads this week is about Friday's problem solving class, but it was the most exciting part of the course this week.

I won't explain the problem here, although it was a new one for me and interesting to solve. (Our group managed to come up with two solutions, so we were pretty chuffed with ourselves.) I want to mention what the exercise taught me instead. Danny gave us a series of steps to follow when solving a problem:

1. Understand the problem. Essentially, decide what the input is and what output should be generated.
2. Devise a plan. Or two or three if you can think of several ways of approaching the problem.
3. Carry out the plan. Having more than one plan can be beneficial for when you get stuck with one approach.
4. Look back. Have you followed the plan? Where has it got you? Have you discovered something helpful?
5. Acknowledge when, and how, you are stuck. This is the hardest step for me, because it's not always easy to know why you're stuck. It's important to solve this before moving on.

*Update (19 Oct 2014): I read in a fellow student's blog that these steps were devised by Polya.*

In general, I think I have intuitively followed these steps when problem-solving, but knowing the steps formalises the process. Recording what you're doing and what you plan to do helps with communication among team members, and with your future self if you need a break from the problem. We found that our original plan led to other ideas, and eventually each of us was working on a separate approach while still keeping the others informed of any breakthroughs.

There was surprisingly little frustration (maybe because the problem wasn't too difficult) because we kept making steady progress. I'm going to be applying these steps to my other classes and to life when I am stuck.



Friday 19 September 2014

Implications

This week in CSC165 we talked a lot about implications, which can be tricky to wrap your head around.  I try to review the lecture slides after class and test whether I can explain the concepts and/or answer the questions. If not, I look up the topic in the course notes and check the discussion board. It doesn't help that sometimes common language doesn't reflect logic. Consider the idiom:

When it rains, it pours.

The antecedent (P) is "it rains". The consequent (Q) is "it pours". P -> Q. Rain implies pouring, but pouring does not imply rain (even though experience tells us it should). The only way to prove this implication false is to find an example of a time when it rained (P is true) but did not pour (Q is false). Therefore, drizzly, gray days are the counter-example that makes "when it rains, it pours" false. Perhaps a more logical idiom would be:

When it pours, it rains.

Do you agree or disagree with my logic? Can you think of other idioms that are implications?

The thing I find most tricky about implications is deciding whether one statement implies the other, or vice versa. We've gone through some sentences in class as examples, and it was only once I heard the answer that it seemed obvious. Before hearing the answer, I could have reasoned it either way. So I need to get more practice and get comfortable with implications, because they are going to be showing up A LOT.
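One thing that has helped me keep the cases straight is writing the implication out as code. This is just a little check I wrote for myself (not anything from class): P -> Q is equivalent to (not P) or Q, so the only falsifying case is P true and Q false.

def implies(p, q):
   '''Return the truth value of P -> Q, using (not P) or Q.'''
   return (not p) or q

# The drizzly, gray day: it rains (P is True) but doesn't pour (Q is False).
print(implies(True, False))    # False -- the one counter-example case
print(implies(False, False))   # True -- vacuously true when it isn't raining
print(implies(True, True))     # True -- it rains and it pours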