Monday.
--------- Nov. 24, 2008
I was really excited about today's lecture because it answered a question I have had in mind since the first time I learnt about languages denoted by regular expressions: are all languages regular? And if not, how can we prove that?
The pumping lemma is a powerful tool. It says that if a language L is regular, then every long enough string in L can be split into three pieces, with the middle piece (usually called v) copied 0 or more times while the resulting strings stay in L. That makes it really useful for proof by contradiction: if no split of some string survives pumping, the language cannot be regular. One thing worth remembering is that it only works in that direction; it can show that a language is not regular, but it can never show that a language is regular.
Professor Danny showed that the language of strings with equally many 0s and 1s, and the language of strings of prime length, are not regular.
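Here is how I would sketch that contradiction in Python. The language check and the brute-force loop are mine, not from lecture, but they follow the uvw split from the lemma:

```python
# Sketch: why L = { w : w has equally many 0s and 1s } is not regular.
# If L were regular, the pumping lemma would give a length p such that
# s = 0^p 1^p splits as uvw with |uv| <= p and |v| >= 1, and u v^i w
# stays in L for every i >= 0. But |uv| <= p forces v to sit inside the
# block of 0s, so pumping v changes the count of 0s and not of 1s.

def in_L(w):
    return w.count("0") == w.count("1")

p = 7  # any claimed pumping length fails the same way
s = "0" * p + "1" * p
assert in_L(s)

for uv_len in range(1, p + 1):          # |uv| <= p
    for v_len in range(1, uv_len + 1):  # |v| >= 1
        u = s[:uv_len - v_len]
        v = s[uv_len - v_len:uv_len]
        w = s[uv_len:]
        assert not in_L(u + v * 2 + w)  # pumping up adds only 0s

print("no split survives pumping, so L cannot be regular")
```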
Assignment Three.
--------- Nov. 24, 2008
According to all my friends, this assignment is much easier than the previous two. The loop invariant and termination proof in question 1 were pretty straightforward. Question 2 was much trickier and longer, but it gave us good practice in proving the equivalence of regular expressions and languages. Question 3 was really easy; we even found three ways to solve it. One way was by structural induction (just assuming that the regular expression has no Kleene star, so it is either the concatenation or union of two other regular expressions). The other two ways were through contradiction. I am glad we had more than one approach, but we went with the first way because it is the shortest. Question 4 was fair. I kind of misunderstood the meaning of "represent the DFSA textually" and ended up drawing it in Python using arrows and dashes. Of course the resulting machine was pretty nasty, but after I asked Danny, he explained that we needed to spell out the transition function instead. What a waste of three precious hours!
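For my own reference, here is what I think a textual representation looks like: the machine's components listed out instead of drawn. The example machine below is hypothetical (it is not the assignment's machine); it accepts binary strings with an even number of 1s:

```python
# A hypothetical DFSA written textually as its five components
# (states, alphabet, transition function, start state, accepting states)
# instead of a drawing with arrows and dashes.

states = {"even", "odd"}
alphabet = {"0", "1"}
delta = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
start = "even"
accepting = {"even"}

def accepts(w):
    q = start
    for c in w:
        q = delta[(q, c)]
    return q in accepting

print(accepts("1011"))  # three 1s, so this prints False
```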
Wednesday.
--------- Nov. 26, 2008
So far, we have talked about alphabets and languages. Today we talked about grammars. I wonder what is coming next, maybe Shakespearean programming?
Anyway, we define a CFG as a tuple (V, Σ, P, S). So it is just like a DFSA, but with a set of variables and productions rather than states and transitions.
Professor Danny said that CFGs are a powerful tool; I did not see why at first. But after using them to generate languages (and unions and concatenations of languages), my doubts instantly faded away.
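To convince myself, I tried encoding a CFG as its tuple in Python. The grammar and the brute-force generator below are my own sketch, not from lecture; the grammar generates { 0^n 1^n : n >= 0 }, which no DFSA can:

```python
# A CFG as a tuple (V, Sigma, P, S).  S -> 0 S 1 | epsilon
V = {"S"}
Sigma = {"0", "1"}
P = {"S": [("0", "S", "1"), ()]}
S = "S"

def generate(max_len):
    """All terminal strings of length <= max_len derivable from S,
    found by brute-force expansion of sentential forms."""
    results = set()
    frontier = [(S,)]
    while frontier:
        form = frontier.pop()
        if all(sym in Sigma for sym in form):
            if len(form) <= max_len:
                results.add("".join(form))
            continue
        # prune hopeless forms (safe here: each form has one variable,
        # and that variable can derive the empty string)
        if len(form) > max_len + 1:
            continue
        i = next(j for j, sym in enumerate(form) if sym in V)
        for rhs in P[form[i]]:
            frontier.append(form[:i] + rhs + form[i + 1:])
    return results

print(sorted(generate(4), key=len))  # ['', '01', '0011']
```

Union and concatenation then only need new start productions: S' -> S1 | S2 gives L1 ∪ L2, and S' -> S1 S2 gives L1 L2.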
Friday.
--------- Nov. 28, 2008
Professor Danny ended Wednesday's lecture with a CFG that generates, as the lecture claims, the binary strings with an equal number of 0s and 1s. We used simple induction to prove this. Just like the proof of merge sort, the proof was lengthy but not complicated; we needed to consider all possible cases.
The only thing that freaked me out was the use of the Intermediate Value Theorem. In the case where zeros(x) = ones(x) + 1 and x = 1y, we get zeros(y) = ones(y) + 2. There is no such antecedent in the induction hypothesis, so we use an IVT-style argument to show that some prefix of y has equally many 0s and 1s, which lets us split the string where the hypothesis applies.
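Writing it out helped me: the "IVT" here is really a discrete fact about the running balance zeros − ones, which moves in steps of ±1 and therefore cannot skip a value. The walk-through below is my own reconstruction of the step, not Danny's proof:

```python
# Discrete intermediate-value argument: scanning x left to right,
# balance(prefix) = zeros(prefix) - ones(prefix) changes by exactly
# +1 or -1 per character.  If x = 1y with zeros(x) = ones(x) + 1, the
# balance is -1 after the leading 1 and +1 at the end, so somewhere in
# between it must equal 0 -- and that prefix gives the needed split.

def balances(x):
    b, out = 0, [0]
    for c in x:
        b += 1 if c == "0" else -1
        out.append(b)
    return out

x = "1001010"  # zeros(x) = 4 = ones(x) + 1
bs = balances(x)
assert bs[1] == -1                  # after the leading 1
assert bs[-1] == 1                  # final balance
assert all(abs(a - b) == 1 for a, b in zip(bs, bs[1:]))  # steps of +-1
assert 0 in bs[2:-1]                # the crossing the argument needs
```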
If I encountered such a proof in the exam, I know I would not be able to solve it. How did it ever come up to the people who solved this question to use IVT?
1 comment:
Yes, CFGs look deceptively simple and not very powerful.