::Welcome to my online journal where I record,
on a weekly basis, the progress, challenges,
and other interesting events that happen
to me in CSC236 this fall at UofT::



November 26, 2008

Week Twelve

Monday.
--------- Nov. 24, 2008
I was really excited about today's lecture because it answered a question I have had in mind since the first time I learned about languages denoted by regular expressions. The question is, or "was" actually: are all languages regular? And if not, how can we prove that?
The pumping lemma is a powerful tool. It says that for any regular language L, every long enough string in L can be split so that a middle segment, usually called v, can be copied 0 or more times with the resulting strings still in L. What makes it really useful is proof by contradiction: if we can show that no such split of some long string survives pumping, then L cannot be regular. (Note the direction: the lemma can only show that a language is not regular, never that it is.)
Professor Danny showed how the language of strings with the same number of 0s and 1s, and the language of strings of prime length, are not regular.
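To convince myself, I tried the argument in code. This is my own sketch (not from lecture), using the related language {0^n 1^n}: a brute-force check that the string s = 0^p 1^p cannot be pumped, no matter how it is split.

```python
# Brute-force pumping check (my own sketch, not from lecture).
# If L = { 0^n 1^n } were regular, some split s = x y z with
# |xy| <= p and |y| >= 1 would keep x y^i z in L for every i.
# We show no split survives even for i = 0 and i = 2.

def in_L(s):
    """Membership test for L = { 0^n 1^n }."""
    n = len(s) // 2
    return s == "0" * n + "1" * n

def pumpable(s, p):
    """True if some valid split x, y, z survives pumping with i = 0 and i = 2."""
    for i in range(p + 1):             # |x| = i
        for j in range(i + 1, p + 1):  # |xy| = j <= p, so |y| >= 1
            x, y, z = s[:i], s[i:j], s[j:]
            if in_L(x + z) and in_L(x + y * 2 + z):
                return True
    return False

p = 7
s = "0" * p + "1" * p
print(pumpable(s, p))   # False: every y is all 0s, so pumping unbalances the counts
```

Since every candidate y lies in the first p characters, it contains only 0s, and pumping it breaks the balance of 0s and 1s, which is exactly the contradiction.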



Assignment Three.
--------- Nov. 24, 2008
According to all my friends, this assignment is much easier than the previous two. The loop invariant and termination proof in question 1 were pretty straightforward. Question 2 was much trickier and longer, but it gave us good practice in proving the equivalence of regular expressions and languages. Question 3 was really easy; we even found three ways to solve it. One way was by structural induction (just assuming that a regular expression has no Kleene star and that it is either the concatenation or union of two other regular expressions). The other two were through contradiction. I am glad we had more than one approach, but we went with the first because it is the shortest. Question 4 was fair. I kind of misunderstood the meaning of "represent the DFSA textually" and ended up drawing it in Python using arrows and dashes. Of course the resulting machine was pretty nasty, but when I asked Danny, he explained that we need to list the different transition functions instead. What a waste of three precious hours!
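For the record, here is my guess at what a "textual" DFSA could look like (not the official assignment answer): just list the five components (Q, Sigma, delta, s, F) as plain data. The toy machine below accepts binary strings with an even number of 0s.

```python
# A DFSA given textually as its five components (my own guess at the
# intended format, not the assignment's official answer).

Q     = {"even", "odd"}          # states: parity of 0s read so far
Sigma = {"0", "1"}               # alphabet
delta = {                        # transition function as a dict
    ("even", "0"): "odd",
    ("even", "1"): "even",
    ("odd",  "0"): "even",
    ("odd",  "1"): "odd",
}
s = "even"                       # start state
F = {"even"}                     # accepting states

def accepts(w):
    state = s
    for c in w:
        state = delta[(state, c)]
    return state in F

print(accepts("1001"))  # True: two 0s
print(accepts("10"))    # False: one 0
```

Listing delta as (state, symbol) -> state pairs is, I suspect, exactly the "mention the different transition functions" Danny meant.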


Wednesday.

--------- Nov. 26, 2008

So far, we have talked about alphabets and languages. Today we talked about grammars. I wonder what is coming next, maybe Shakespearean programming?
Anyway, we defined a CFG as a tuple (V, \Sigma, P, S). So it is just like a DFSA but with a set of variables and productions rather than states and transitions.
Professor Danny said that CFGs are a powerful tool; at first I did not see why. But after using them to generate languages (and unions and concatenations of languages), my doubts instantly faded away.
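Here is a minimal sketch of that power (my own toy example, not the lecture's): a CFG written as the tuple (V, \Sigma, P, S), plus a breadth-first generator that derives all its strings up to a length bound. The grammar S -> 0S1 | \epsilon generates {0^n 1^n}, a language the pumping lemma shows is not regular.

```python
# A CFG as (V, Sigma, P, S) with a breadth-first derivation generator.
# Toy grammar of my own choosing: S -> 0S1 | epsilon.
from collections import deque

V, Sigma, S = {"S"}, {"0", "1"}, "S"
P = {"S": ["0S1", ""]}            # productions; "" is the empty string

def generate(max_len):
    """All terminal strings of length <= max_len derivable from S."""
    seen, out = set(), set()
    queue = deque([S])
    while queue:
        form = queue.popleft()
        if form in seen:
            continue
        seen.add(form)
        if all(c in Sigma for c in form):      # no variables left
            out.add(form)
            continue
        i = next(i for i, c in enumerate(form) if c in V)  # leftmost variable
        for rhs in P[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            if sum(c in Sigma for c in new) <= max_len:
                queue.append(new)
    return sorted(out, key=len)

print(generate(4))   # ['', '01', '0011']
```

That a four-line grammar captures a non-regular language is, I think, exactly why Danny called CFGs powerful.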



Friday.
--------- Nov. 28, 2008
Professor Danny ended Wednesday's lecture with a CFG that generates, as the lecture claims, binary strings with an equal number of 0s and 1s. We used induction to prove this. Just like the proof of merge sort, the proof was lengthy but not complicated; we needed to consider all possible cases.
The only thing that freaked me out was the use of the Intermediate Value Theorem. In the case where zeros(x) = ones(x) + 1 and x = 1y, we get zeros(y) = ones(y) + 2. There is no such antecedent in the induction hypothesis, so we used a discrete version of the IVT to show that y can be split at a point where the counts of 0s and 1s line up with the hypothesis.
If I encountered such a proof on the exam, I know I would not be able to solve it. How did it ever occur to the people who solved this question to use the IVT?
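Writing the IVT step out as code made it click for me. This is my own sketch (toy string of my choosing): the difference d(k) = zeros(y[:k]) - ones(y[:k]) starts at 0, ends at 2 when zeros(y) = ones(y) + 2, and changes by exactly 1 at each character, so it must pass through 1 at some k. Splitting there gives a prefix u with zeros(u) = ones(u) + 1.

```python
# Discrete intermediate-value step: scan prefixes of y and split at the
# first point where zeros - ones hits the target difference.

def split_at_value(y, target=1):
    d = 0
    for k, c in enumerate(y, start=1):
        d += 1 if c == "0" else -1      # each character moves d by exactly 1
        if d == target:
            return y[:k], y[k:]         # (u, v) with zeros(u) - ones(u) == target
    return None

y = "10010100"                 # zeros = 5, ones = 3, so the difference is 2
print(split_at_value(y))       # ('100', '10100')
```

Since d moves in unit steps from 0 up to 2, it cannot skip the value 1, which is the whole IVT-style argument.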

November 23, 2008

Week Eleven

Monday.
--------- Nov. 17, 2008
We returned to the DFSA, designed last week, that accepts strings which are multiples of 3 when evaluated in base 2. Because we are dealing with strings, we used structural induction. The proof was easy though lengthy, because there were so many cases to consider.
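That machine, as I understood it, fits in a few lines of Python (my own sketch of it): the state is the value of the bits read so far, mod 3. Reading bit b updates the value n to 2n + b, so the transition is state -> (2*state + b) % 3, with state 0 both starting and accepting.

```python
# DFSA for binary strings whose value is a multiple of 3.
# States 0, 1, 2 track the value mod 3 of the bits read so far.

def multiple_of_3(bits):
    state = 0
    for b in bits:
        state = (2 * state + int(b)) % 3
    return state == 0

# Cross-check against ordinary arithmetic for small numbers.
for n in range(10):
    assert multiple_of_3(bin(n)[2:]) == (n % 3 == 0)
print("checked 0..9")
```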
We also talked about using \epsilon transitions to concatenate two machines or take their union, and concluded the lecture with the definition of Nondeterministic Finite State Automata (NFSA).


Wednesday.
--------- Nov. 19, 2008
We finished off on Monday with the definition of NFSA. The main difference between a DFSA and an NFSA is that in the former, the current input symbol uniquely determines the next state of the automaton, while in the latter, a symbol (or even \epsilon) may move the machine to one of several possible states. Hence, a string x fed to an NFSA might result in several paths. Whether the NFSA accepts x depends on finding at least one path that goes from the initial state to an acceptance state.
The bottom line of this lecture is that every NFSA has an equivalent DFSA! This can be done by letting each DFSA state be a set of NFSA states (easier said than done).
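Here is a minimal sketch of that subset construction (my own code, ignoring \epsilon transitions for brevity; the toy NFSA is mine too and accepts strings ending in "01"):

```python
# Subset construction: each DFSA state is a frozenset of NFSA states.
from collections import deque

nfsa_delta = {                       # (state, symbol) -> set of next states
    ("q0", "0"): {"q0", "q1"},
    ("q0", "1"): {"q0"},
    ("q1", "1"): {"q2"},
}
nfsa_start, nfsa_accept = "q0", {"q2"}

def subset_construct(delta, start, accept, alphabet=("0", "1")):
    d_start = frozenset({start})
    d_delta, d_accept = {}, set()
    queue, seen = deque([d_start]), {d_start}
    while queue:
        S = queue.popleft()
        if S & accept:               # accepting iff it contains an NFSA acceptor
            d_accept.add(S)
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            d_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return d_delta, d_start, d_accept

d_delta, d_start, d_accept = subset_construct(nfsa_delta, nfsa_start, nfsa_accept)

def accepts(w):
    S = d_start
    for c in w:
        S = d_delta[(S, c)]
    return S in d_accept

print(accepts("1101"))  # True: ends in "01"
print(accepts("110"))   # False
```

Only the reachable subsets get built, which is why the blow-up is often less scary in practice than the worst case suggests.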
We concluded the lecture by discussing the Cartesian product of languages and FSAs. I modestly think that this tool will be very useful in and beyond this course. I am happy because we will practice it in assignment 3.


Friday.
--------- Nov. 21, 2008
So far, we have shown that every regular expression has an equivalent NFSA and every NFSA has an equivalent DFSA. We will close this circle by showing that every DFSA has an equivalent regular expression. So we start with a machine, add a new start state and a new acceptance state, and add an \epsilon transition between the original and new start states, and between the original and new acceptance states. Finally, we start removing states, getting a machine that is one state smaller each time. We end up with a very long expression going from the start state to the acceptance state.
We used this method on a DFSA that accepts strings with odd length and an even number of 0s. I think the resulting regular expression was cool but long: (00+01(11)*10)*(1+01(11)*0)[0(11)*0(1+0(11)*10)(00+01(11)*10)(1+01(11)*0)]*

November 19, 2008

Week Ten

Monday,
---------- Nov. 10, 2008
We discussed different operations on regular expressions and how to use these rules to, I do not know if I can put it this way, "simplify" regular expressions. Then Professor Danny mentioned that finding the intersection of two languages (or regular expressions) can be really tricky, but to solve this issue we use FSAs.
A DFSA, Deterministic Finite State Automaton, is a machine designed to represent a regular expression. When given a string as input, it determines whether the string is denoted by the regular expression or not.
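The intersection trick Danny hinted at is the product construction, I believe; here is my own sketch of it (toy machines of my choosing): run two DFSAs in lockstep on paired states, accepting only when both components accept. The intersection below is "even number of 0s" AND "length divisible by 3".

```python
# Product construction for intersection (my own sketch, not from lecture).

def make_dfa(delta, start, accept):
    def run(w):
        state = start
        for c in w:
            state = delta[(state, c)]
        return state in accept
    return run

# DFA 1: parity of 0s (state 0 = even so far).
d1 = {(p, c): (p ^ 1 if c == "0" else p) for p in (0, 1) for c in "01"}
# DFA 2: length mod 3.
d2 = {(m, c): (m + 1) % 3 for m in (0, 1, 2) for c in "01"}

# Product machine: states are pairs, both components step together.
prod = {((p, m), c): (d1[(p, c)], d2[(m, c)])
        for p in (0, 1) for m in (0, 1, 2) for c in "01"}
both = make_dfa(prod, (0, 0), {(0, 0)})

print(both("010"))   # True: two 0s (even) and length 3
print(both("0100"))  # False: length 4
```

The product has |Q1| x |Q2| states, which is the Cartesian product mentioned in Wednesday's lecture.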


Wednesday,
---------- Nov. 12, 2008
We talked more about DFSAs today and how to represent them in terms of their states and transitions. There is not much to write about; DFSAs looked to me like code with if statements instead of states. The example we did in today's lecture was a really simple DFSA, but I know things will get much more complicated pretty soon.


Friday,
-------- Nov. 14, 2008
We learned how to design a DFSA today by establishing a state invariant which lists all states and explains each one of them.

November 03, 2008

Week Nine

Monday,
---------- Nov. 3, 2008
Today, we started chapter 7, which discusses strings and regular expressions. We began by defining alphabets, strings, and languages, and the various operations that can be applied to them. Everything sounded okay except for the Kleene star. At first I thought it was something like the Cartesian product of the language with itself, but it turned out to be infinite. I guess I can think of it as "the union of all finite concatenations of the language with itself"??
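To test that intuition, I sketched it in Python (my own toy example): L^k is L concatenated with itself k times, and L* is the union of all the L^k. Since L* is infinite, the code only collects strings up to a length bound.

```python
# Kleene star as "union of all finite concatenations of L with itself",
# truncated to strings of length <= max_len.

def concat(A, B):
    return {a + b for a in A for b in B}

def star_up_to(L, max_len):
    result, Lk = {""}, {""}          # L^0 = { epsilon }
    while True:
        Lk = {w for w in concat(Lk, L) if len(w) <= max_len}
        if not Lk or Lk <= result:   # no new short-enough strings remain
            break
        result |= Lk
    return result

L = {"ab", "c"}
print(sorted(star_up_to(L, 4)))
```

Each pass computes the next power L^k, so the loop really is building the union L^0 ∪ L^1 ∪ L^2 ∪ ... until the length bound cuts it off.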


Wednesday,
---------- Nov. 5, 2008
Today's lecture was about regular expressions. This term is familiar from CSC207. We have three main operations: "or" (I do not know why it is + rather than |), "concatenation", and again, the "Kleene star". A good thing to keep in mind when doing proofs about regular expressions is to use the inductive definition itself. Consider three base cases, \phi, \epsilon, and (a) for some a \in \Sigma, then use structural induction in the induction step.
Finally, we discussed how to prove that two languages are equivalent. It is a two-way proof. Prove the first is a subset of the second, then prove the second is a subset of the first, and conclude that they are equivalent.


Friday,
-------- Nov. 7, 2008
We had term test 2 today. The test was fair; I honestly expected something harder. There is one thing bugging me though. In the last question, I did not need to prove a loop invariant; I just showed that there is a decreasing non-empty sequence of natural numbers. I hope this is enough.