::Welcome to my online journal where I record,
on a weekly basis, the progress, challenges,
and other interesting events that happen
to me in CSC236 this fall at UofT::



December 05, 2008

General Thoughts

Given what we had heard about this course from older students, my colleagues and I were really worried about how things would turn out in CSC236. After thirteen weeks, I can say that CSC236 was a great course; the material was neither too hard nor too easy, and the workload was a bit above average but fair. I think the solid background we have from CSC165, plus the enthusiasm and professionalism of professor Danny, played a key role in making things smooth for us.
I know we were not asked to do this, but I thought it would be useful to give some feedback on the course. I am using three of the questions professor Danny used when evaluating CSC207.

What are the top three things you like about CSC236?

  • The lectures: I found them very useful. Also, the black and green slides really managed to draw one's attention at 10 in the morning.
  • Generous office and TA hours: No matter how early my partner and I started to work on the assignments, there were always some bugs appearing at 7:30 pm the day the assignment was due. Knowing that there was a TA available for help till 8 pm gave us great confidence and relief.
  • Doing assignments in pairs: it is nice to work with other people. Besides, when you explain to your partner how you did a particular question, you get to understand it even better.

What are the top three things you hate about CSC236?

  • Emphasis on the use of expressions: Assignment 1, and the later assignments, kind of shocked us, because if we used expressions that were not clear enough for the grader, we would lose a significant amount of marks. I know I am probably wrong on this point, but this is what I think.
  • Having a test on the last day of school.
  • hmmmm.. I could not think of anything else I hate about CSC236

Did your first year here prepare you well?

  • More than well (in terms of the materials themselves and the general atmosphere of the university).

I would like to conclude my journal by saying thank you, professor Danny, and I really hope you teach us CSC263 in fall 2009.

Problem Solving Episode

***
I will be doing Question 3 of Assignment 2 using Polya's method, which we learned in CSC165.
***
QUESTION: Let G(n) be the number of ways a grasshopper can perform the climb when there are n stairs. Prove that for n >= 1, G(n) <= F(n), the nth Fibonacci number.


Step One: Understand the Problem
The main task in this question is to find a formula for G(n) and use it to prove that G(n) <= F(n) using complete induction. Since professor Danny does not require a closed formula for G(n), I will just try to find a recursive one.


Step Two: Devise a Plan
The first plan that came to my mind is to find G(n) for some small values of n, and then, by comparing these values with each other on one hand, and with n on the other, come up with a formula. After finding the formula, establish a predicate and a claim, then start the inductive proof.


Step Three: Carry out the Plan
Let's make a table for 1 <= n <= 8 and write down all the possible configurations of hops that the grasshopper can make to reach the nth step:





n | Possible Configurations | G(n)
1 | 1 | 1
2 | 11 | 1
3 | 111, 3 | 2
4 | 1111, 31, 13 | 3
5 | 11111, 311, 131, 113 | 4
6 | 111111, 3111, 1311, 1131, 1113, 33 | 6
7 | 1111111, 31111, 13111, 11311, 11131, 11113, 331, 313, 133 | 9
8 | 11111111, 311111, 131111, 113111, 111311, 111131, 111113, 3311, 3131, 3113, 1331, 1313, 1133 | 13
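
To double-check the table (especially the longer rows, where it is easy to miss or duplicate a configuration), here is a quick Python sketch of my own, not part of the assignment, that enumerates every hop configuration by brute force:

```python
def configurations(n):
    """Enumerate all ordered sequences of hops (each hop covers
    1 or 3 stairs) whose total is exactly n stairs."""
    results = []
    def extend(prefix, remaining):
        if remaining == 0:
            results.append(tuple(prefix))
            return
        for hop in (1, 3):
            if hop <= remaining:
                extend(prefix + [hop], remaining - hop)
    extend([], n)
    return results

print([len(configurations(n)) for n in range(1, 9)])
# [1, 1, 2, 3, 4, 6, 9, 13]
```

The output matches the G(n) column of the table.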

Looking at the previous table, we can tell that G(n) will depend on the addition and subtraction of other values of G(n), because G(n), like F(n), does not seem to increase fast (so we do not have exponential powers, for example). Other than this observation, there does not seem to be a clear pattern that can be used. For example, for 2 <= n <= 4, G(n) = G(n-1) + G(n-2); for 5 <= n <= 7, G(n) = G(n-1) + G(n-2) - 1. This does not seem to work; I need another plan. I know that the formula is there, but I cannot seem to derive it.
I will work on another plan tomorrow.

Step Two (b): Devise an alternative Plan
The previous plan did not work out well, so I had to come up with something else. This time, instead of finding a formula that works for small values of n and then generalizing it, I will try to come up with something directly general.


Step Three (b): Carry out the Plan
1) Suppose that the grasshopper wants to get to step m. Since it can hop 1 step, it could have arrived from step m-1; and since it can also hop 3 steps, it could have arrived from step m-3. Because G(n) counts ordered sequences of hops, every configuration reaching step m-1 gives a configuration reaching m (append a 1 to its end), and every configuration reaching step m-3 gives a configuration reaching m (append a 3 to its end). These two groups are disjoint, since they end in different hops, and together they cover all ways of reaching m.
Then G(m) = G(m-1) + G(m-3); and this formula actually works for the values of n in the table.
We have to make a special consideration for the values of n <= 3, since (n-1) and/or (n-3) might not be defined. Then:

G(n) = { 1                  if n = 1 or n = 2
       { 2                  if n = 3
       { G(n-1) + G(n-3)    if n > 3
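
As a sanity check (again just a sketch of mine, not part of the submitted proof), the piecewise definition translates directly into Python, and it reproduces the table:

```python
def G(n):
    """The recurrence derived above: G(1) = G(2) = 1, G(3) = 2,
    and G(n) = G(n-1) + G(n-3) for n > 3."""
    if n in (1, 2):
        return 1
    if n == 3:
        return 2
    return G(n - 1) + G(n - 3)

print([G(n) for n in range(1, 9)])
# [1, 1, 2, 3, 4, 6, 9, 13]
```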



2) Before we start the proof, we need to prove that the Fibonacci sequence is non-decreasing.
Predicate P(n): ∀ g ∈ ℕ, g < n ⟹ F(g) <= F(n)
Claim: ∀ n ∈ ℕ, P(n)
Use complete induction; then we need to prove the following:
∀ n ∈ ℕ, [P(0) ∧ P(1) ∧ ... ∧ P(n-1)] ⟹ P(n)
Base Cases:
When n = 0, there is no smaller natural number, so there is no g to check; hence P(0) is vacuously true.
When n = 1, the only smaller natural number is g = 0, and F(0) = 0 <= 1 = F(1). Hence, P(1) is true.
Induction Step:
Assume n ∈ ℕ
Assume P(0) ∧ P(1) ∧ ... ∧ P(n-1) # Induction Hypothesis
If n <= 1, then P(n) holds as shown in the base cases
Else if n > 1:
Then F(n) = F(n-1) + F(n-2) >= F(n-1) # since F(n-2) >= 0
Then F(n-1) <= F(n); and by P(n-1), F(g) <= F(n-1) <= F(n) for every g < n-1
Then F(g) <= F(n) for every g < n
Then P(n) holds
Then [P(0) ∧ P(1) ∧ ... ∧ P(n-1)] ⟹ P(n)
Then ∀ n ∈ ℕ, [P(0) ∧ P(1) ∧ ... ∧ P(n-1)] ⟹ P(n)
Conclude: ∀ n ∈ ℕ, P(n); that is, the Fibonacci sequence is non-decreasing


3) Now that we have a formula for G(n) and we know that the Fibonacci sequence is non-decreasing, we can establish a predicate and claim.
Predicate P(n): G(n) <= F(n)
Claim: ∀ n ∈ ℕ - {0}, P(n)
We will use complete induction. Then we need to prove the following:
∀ n ∈ ℕ - {0}, [P(1) ∧ P(2) ∧ ... ∧ P(n-1)] ⟹ P(n)
Base cases:
When n = 1, then G(1) = 1 <= 1 = F(1).
When n = 2, then G(2) = 1 <= 1 = F(2).
When n = 3, then G(3) = 2 <= 2 = F(3).
Then P(n) holds in all base cases.

Induction Step:

Assume n ∈ ℕ and n > 0
Assume P(1) ∧ P(2) ∧ ... ∧ P(n-1) # Induction Hypothesis
If n <= 3, then P(n) holds as shown in the base cases.
Else if n > 3:
Then G(n) = G(n-1) + G(n-3) <= F(n-1) + F(n-3) # By IH and the definition of G(n)
Then F(n-1) + F(n-3) <= F(n-1) + F(n-2) = F(n) # since the Fibonacci sequence is non-decreasing
Then G(n) <= F(n)
Then P(n) holds
Then ∀ n ∈ ℕ - {0}, [P(1) ∧ P(2) ∧ ... ∧ P(n-1)] ⟹ P(n)
Conclude: ∀ n ∈ ℕ - {0}, G(n) <= F(n)
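
With the claim proved, a numeric spot-check costs nothing. This little sketch (the helper F, with F(1) = F(2) = 1, is mine, not from the assignment) confirms G(n) <= F(n) for the first twenty values:

```python
def G(n):
    # Recurrence for the grasshopper counts, derived earlier.
    if n in (1, 2):
        return 1
    if n == 3:
        return 2
    return G(n - 1) + G(n - 3)

def F(n):
    # Fibonacci numbers with F(1) = F(2) = 1, computed iteratively.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

assert all(G(n) <= F(n) for n in range(1, 21))
print("G(n) <= F(n) holds for 1 <= n <= 20")
```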
Step Four: Look Back
So we have found a formula for G(n), and we explained how we derived it. We proved that the Fibonacci sequence is non-decreasing and used this property in our inductive proof. Then we can conclude that ∀ n ∈ ℕ - {0}, G(n) <= F(n).

December 01, 2008

Week Thirteen

Monday,
--------- Dec. 1, 2008

Today, we discussed "Pushdown Automata", the last topic in CSC236 this fall. We also saw how simple it is to express an FSA as a CFG: each production is either ε or a single terminal paired with one variable.

Last week, we discussed examples of non-regular languages. Since these languages cannot be expressed by ordinary FSAs, we can use PDAs to express them. We did an example of the language of strings with the same number of 0s and 1s. The idea is to use a stack to keep track of how many 0s have been seen so far and compare that to the number of 1s.
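
The stack idea can be sketched in a few lines of Python. This is a toy recognizer I wrote to convince myself, not the formal PDA from lecture, and the function name is mine:

```python
def same_zeros_ones(s):
    """Stack-based sketch of the PDA idea: the stack always holds
    copies of whichever symbol is currently in surplus. A matching
    opposite symbol cancels one off the stack; accept when the stack
    is empty at the end of the input."""
    stack = []
    for ch in s:
        if stack and stack[-1] != ch:
            stack.pop()       # the new symbol cancels a surplus one
        else:
            stack.append(ch)  # extend the current surplus
    return not stack

print(same_zeros_ones("0101"))  # True
print(same_zeros_ones("0011"))  # True
print(same_zeros_ones("0001"))  # False
```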


Wednesday,
--------- Dec. 3, 2008
I could not attend today's lecture because I had an architecture project due at 1 pm and I was working on it till 11 am (the usual scenario). I am really thankful that Professor Danny posted the exam review approach on the bulletin board. I hope this will compensate for the missed lecture.
By the way, I did not know that professor Danny's full name is Daniel. Very interesting :)


Friday,
--------- Dec. 5, 2008
We wrote test 3 today. Of course, today was the last day of school, so I had like 4 assignments due plus this test. I did not study, and I was not surprised to see that most students had not studied either. Anyway, I think the test was fair; I answered everything, so I hope I will get a good mark. I thought there would be a question about the pumping lemma and another about proving a DFSA invariant, but none of that showed up :)

November 26, 2008

Week Twelve

Monday.
--------- Nov. 24, 2008
I was really excited about today's lecture because it answered a question I have had in mind since the first time I learnt about languages denoted by regular expressions. The question is, or "was" actually: "are all languages regular? And if not, how can we prove it?"
The pumping lemma says that in any regular language, every long enough string contains a segment, usually called v, that can be copied 0 or more times with the resulting strings still in the language. What makes the pumping lemma really useful is that, through proof by contradiction, it can show that a language is not regular: if some string in the language has no segment that can be pumped, the language cannot be regular.
Professor Danny showed that the language of strings with the same number of 0s and 1s, and the language of strings of prime length, are not regular.
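
To see how the contradiction plays out concretely, here is a Python sketch I made of the first example. The pumping length p = 5 is hypothetical (a real proof works for an arbitrary p):

```python
def in_L(s):
    # Membership test for the lecture's example language:
    # strings with the same number of 0s and 1s.
    return s.count("0") == s.count("1")

p = 5                      # a hypothetical pumping length
w = "0" * p + "1" * p      # w is in L, and its first p symbols are all 0s
assert in_L(w)

# Every decomposition w = uvx with |uv| <= p and v nonempty puts v
# inside the leading 0s, so pumping v unbalances the counts.
for i in range(p + 1):
    for j in range(i + 1, p + 1):
        u, v, x = w[:i], w[i:j], w[j:]
        assert not in_L(u + v * 2 + x)
print("no decomposition survives pumping, so L cannot be regular")
```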



Assignment Three.
--------- Nov. 24, 2008
According to all my friends, this assignment was much easier than the previous two. The loop invariant and termination proof in question 1 were pretty straightforward. Question 2 was much trickier and longer, but it gave us good practice in proving the equivalence of regular expressions and languages. Question 3 was really easy; we even found three ways to solve it. One way was by structural induction (just assuming that a regular expression has no Kleene star and that it is either the concatenation or union of two other regular expressions). The other two ways were through contradiction. I am glad we had more than one approach, but we went with the first way because it is the shortest. Question 4 was fair. I kind of misunderstood the meaning of "represent the DFSA textually" and I ended up drawing it in python using arrows and dashes. Of course, the resulting machine was pretty nasty, but after I asked Danny, he explained that we needed to list the different transition functions instead. What a waste of three precious hours!


Wednesday.

--------- Nov. 26, 2008

So far, we have talked about alphabets and languages. Today we talked about grammars. I wonder what is coming next, maybe Shakespearean programming?
Anyway, we define a CFG as a tuple (V, Σ, P, S). So it is just like a DFSA but with a set of variables and productions rather than states and transitions.
Professor Danny said that CFGs are a powerful tool; at first I did not see why, but after using them to generate languages (and unions and concatenations of languages), my doubts instantly faded away.
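
To convince myself of that power, I sketched a CFG in Python as the tuple from lecture. The grammar here is a standard textbook example (not from our lecture notes) generating { 0^n 1^n : n >= 0 }, a language no FSA can accept:

```python
from collections import deque

# A CFG as the tuple (V, Sigma, P, S), sketched in Python.
V = {"S"}
Sigma = {"0", "1"}
P = {"S": ["0S1", ""]}   # S -> 0S1 | epsilon
start = "S"

def generate(max_len):
    """Breadth-first derivation: repeatedly expand the leftmost
    variable, collecting fully-terminal strings up to max_len."""
    out, queue = set(), deque([start])
    while queue:
        form = queue.popleft()
        if len(form) > max_len + 1:
            continue  # too long to ever shrink back under the cap
        var = next((c for c in form if c in V), None)
        if var is None:
            if len(form) <= max_len:
                out.add(form)
            continue
        i = form.index(var)
        for rhs in P[var]:
            queue.append(form[:i] + rhs + form[i + 1:])
    return out

print(sorted(generate(6), key=len))
# ['', '01', '0011', '000111']
```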



Friday.
--------- Nov. 28, 2008
Professor Danny ended Wednesday's lecture with a CFG that represents, as the lecture claims, binary strings with an equal number of 0s and 1s. We used simple induction to prove this. Just like the proof of merge sort, the proof was lengthy but not complicated; we needed to consider all possible cases.
The only thing that freaked me out was the use of the Intermediate Value Theorem. In the case when zeros(x) = ones(x) + 1 and x = 1y, we get zeros(y) = ones(y) + 2. There is no such antecedent in the induction hypothesis, so an IVT-style argument is used to show that some prefix of y has an equal number of 0s and 1s.
If I encountered such a proof in the exam, I know I would not be able to solve it. How did the people who solved this question ever think of using the IVT?

November 23, 2008

Week Eleven

Monday.
--------- Nov. 17, 2008
We returned to the DFSA that accepts strings that are multiples of 3 (when evaluated in base 2), which we designed last week. Because we are dealing with strings, we used structural induction. The proof was easy, though lengthy, because there were so many cases to consider.
We also talked about using ε transitions to concatenate or union two machines, and concluded the lecture with the definition of Nondeterministic Finite State Automata (NFSA).
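
The machine itself is tiny. Here is a Python sketch of the state-update idea as I understand it (the function name is mine, not official lecture code):

```python
def divisible_by_3(bits):
    """Sketch of the DFSA idea: the state is the value of the bits
    read so far, modulo 3. Reading bit b sends remainder r to
    (2*r + b) mod 3; the accepting state is r == 0."""
    r = 0
    for b in bits:
        r = (2 * r + int(b)) % 3
    return r == 0

# Exactly the multiples of 3 below 100 should be accepted.
assert all(divisible_by_3(format(n, "b")) == (n % 3 == 0) for n in range(100))
print("all 100 checks passed")
```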


Wednesday.
--------- Nov. 19, 2008
We finished off on Monday with the definition of NFSA. The main difference between a DFSA and an NFSA is that in the former, the current input symbol uniquely determines the next state of the automaton, while in the latter, a symbol (or even ε) may move the machine to one of several possible states. Hence, a string x inputted to an NFSA might result in several paths. Determining whether the NFSA accepts x comes down to finding at least one path that goes from the initial state to an acceptance state.
The bottom line of this lecture is that every NFSA has an equivalent DFSA! This can be done just by replacing the states of the DFSA with sets of NFSA states (easier said than done).
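
Here is how I picture the subset construction in Python. It is a sketch of mine with ε transitions omitted for brevity, and the example NFSA (accepting strings ending in "01") is hypothetical:

```python
from itertools import chain

def subset_construction(nfa_delta, start, accepts):
    """Subset-construction sketch: each DFSA state is a frozenset of
    NFSA states. nfa_delta maps (state, symbol) -> set of states."""
    symbols = {sym for (_, sym) in nfa_delta}
    start_set = frozenset([start])
    dfa, todo = {}, [start_set]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for sym in symbols:
            # The DFSA successor is the union of NFSA successors.
            T = frozenset(chain.from_iterable(
                nfa_delta.get((q, sym), ()) for q in S))
            dfa[S][sym] = T
            todo.append(T)
    dfa_accepts = {S for S in dfa if S & accepts}
    return dfa, start_set, dfa_accepts

# Hypothetical NFSA accepting binary strings that end in "01":
delta = {("q0", "0"): {"q0", "q1"}, ("q0", "1"): {"q0"},
         ("q1", "1"): {"q2"}}
dfa, s0, acc = subset_construction(delta, "q0", {"q2"})
print(len(dfa))  # 3 reachable DFSA states
```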
We concluded the lecture by discussing the Cartesian product of languages and FSAs. I modestly think that this tool will be very useful in and beyond this course. I am happy because we will practice it in assignment 3.


Friday.
--------- Nov. 21, 2008
So far, we have shown that every regular expression has an equivalent NFSA and every NFSA has an equivalent DFSA. We will close this circle by showing that every DFSA has an equivalent regular expression. So we start with a machine, add a new start state and a new acceptance state, then add an ε transition from the new start state to the original one, and another from the original acceptance states to the new one. Finally, we start removing states to get a machine that is one state smaller every time. You will end up with a very long expression going from the start state to the acceptance state.
We used this method with a DFSA that accepts strings with odd lengths and an even number of 0s. I think that the resulting regular expression was cool but long: (00+01(11)*10)*(1+01(11)*0)[0(11)*0(1+0(11)*10)(00+01(11)*10)(1+01(11)*0)]*
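
The product idea behind that machine fits in a few lines of Python. This is a sketch of my own that just tracks the two parities, not the full product-machine tuple:

```python
def accepts(s):
    """Product-machine sketch: one component tracks the parity of the
    length, the other the parity of the number of 0s. Accept when the
    length is odd and the number of 0s is even."""
    len_parity, zeros_parity = 0, 0
    for ch in s:
        len_parity ^= 1          # every symbol flips the length parity
        if ch == "0":
            zeros_parity ^= 1    # every 0 flips the zero-count parity
    return len_parity == 1 and zeros_parity == 0

print(accepts("1"))    # True: length 1 (odd), zero 0s (even)
print(accepts("001"))  # True: length 3, two 0s
print(accepts("01"))   # False: even length
```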