One of the things I’m least likely to be doing (at similar positions in the list to “watching American Idol” and “krunking”) is playing flash-style internet games on websites that specialize in banner ads. However, I recently ended up playing this game called Light-Bot, which is the first game I’ve seen (though I expect there are many others – like I said, I don’t do this much) to incorporate basic computer programming concepts.
There’s a robot placed on a grid, and the player has to write a program telling the robot what to do in order to accomplish the goal. The programming language consists of seven commands (go forward, turn right, jump, etc.), represented with pictures instead of words. The player drags the pictures onto a program grid (not the grid the robot is on), and they are executed in the order specified.
Two of the commands are subroutine calls – there are two extra program grids representing the subroutines. It’s even possible to get the robot into an infinite loop by having a subroutine call itself. Also, each portion of a program has a maximum length – twelve commands for the main body, and eight commands for each subroutine. Part of the challenge is utilizing the subroutines well enough to fit the program in the allotted space.
It occurred to me that the levels in the game would be interesting candidates for a genetic algorithm experiment. The lack of syntax (any random blending of two programs will always be syntactically valid) and the limited length of the programs would make it easy to write a simulator, and the problem space could be quite challenging (I gave up on level nine or so). A related problem would be determining if a particular level is solvable or not. I’ll put it on my list of programming projects.
UPDATE: So I ended up doing this. You can read more about it on this post.
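The core of such a simulator really is small. Here is a stripped-down sketch of a Light-Bot-style command interpreter in C++ (the names and representation are my own, not the game's, and jumping and light tiles are omitted), including the step budget you'd want, since programs can loop forever:

```cpp
#include <string>

// Commands are single characters: 'F' = forward, 'L' = turn left,
// 'R' = turn right, '1'/'2' = call subroutine one or two.
struct Robot {
    int x = 0, y = 0;
    int dir = 0;  // 0 = north, 1 = east, 2 = south, 3 = west
};

// Run one command string against the robot, dispatching subroutine calls
// recursively. The step budget guards against the infinite loops the game
// allows (a subroutine calling itself); returns false if it is exhausted.
bool run(const std::string& prog, const std::string& sub1,
         const std::string& sub2, Robot& r, int& steps) {
    static const int dx[] = {0, 1, 0, -1};
    static const int dy[] = {1, 0, -1, 0};
    for (char c : prog) {
        if (--steps < 0) return false;  // budget exhausted: likely an infinite loop
        switch (c) {
            case 'F': r.x += dx[r.dir]; r.y += dy[r.dir]; break;
            case 'L': r.dir = (r.dir + 3) % 4; break;
            case 'R': r.dir = (r.dir + 1) % 4; break;
            case '1': if (!run(sub1, sub1, sub2, r, steps)) return false; break;
            case '2': if (!run(sub2, sub1, sub2, r, steps)) return false; break;
        }
    }
    return true;
}
```

A genetic-algorithm fitness function would run a candidate program through something like this and score the resulting robot state; a program that blows the step budget just scores poorly.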
In most programming languages, if you want to generate some primes with a sieve, you first pick an upper limit, and then create an array and filter out the multiples of the lowest primes until you hit the square root of the upper limit. Using Clojure's infinite sequences, I can remove the upper limit and filter out all the multiples of each prime at each step. I won't make any claims about the efficiency of this method (it's probably pretty bad), but I think it's a mildly fun idea:
(defn sieve-seq [level]
  (loop [val (cons 2 (iterate #(+ % 2) 3))
         lev 2]
    (if (= level (- lev 1))
      val
      (let [p (nth val (- lev 1))]
        (recur (concat (take lev val)
                       (filter #(> (rem % p) 0) (drop lev val)))
               (inc lev))))))

(defn primes [level]
  (let [my-list (sieve-seq level)
        root (nth my-list level)
        limit (* root root)]
    (take-while #(< % limit) my-list)))
The first function returns an infinite sequence in which the specified number of primes have been applied as filters. The second function asks the first for such a sequence and extracts the prefix known to contain only primes:
user=> (primes 1)
(2 3 5 7)
user=> (primes 2)
(2 3 5 7 11 13 17 19 23)
user=> (primes 3)
(2 3 5 7 11 13 17 19 23 29 31 37 41 43 47)
user=> (primes 4)
(2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113)
user=> (primes 15)
(2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 509 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 661 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 829 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997 1009 1013 1019 1021 1031 1033 1039 1049 1051 1061 1063 1069 1087 1091 1093 1097 1103 1109 1117 1123 1129 1151 1153 1163 1171 1181 1187 1193 1201 1213 1217 1223 1229 1231 1237 1249 1259 1277 1279 1283 1289 1291 1297 1301 1303 1307 1319 1321 1327 1361 1367 1373 1381 1399 1409 1423 1427 1429 1433 1439 1447 1451 1453 1459 1471 1481 1483 1487 1489 1493 1499 1511 1523 1531 1543 1549 1553 1559 1567 1571 1579 1583 1597 1601 1607 1609 1613 1619 1621 1627 1637 1657 1663 1667 1669 1693 1697 1699 1709 1721 1723 1733 1741 1747 1753 1759 1777 1783 1787 1789 1801 1811 1823 1831 1847 1861 1867 1871 1873 1877 1879 1889 1901 1907 1913 1931 1933 1949 1951 1973 1979 1987 1993 1997 1999 2003 2011 2017 2027 2029 2039 2053 2063 2069 2081 2083 2087 2089 2099 2111 2113 2129 2131 2137 2141 2143 2153 2161 2179 2203 2207 2213 2221 2237 2239 2243 2251 2267 2269 2273 2281 2287 2293 2297 2309 2311 2333 2339 2341 2347 2351 2357 2371 2377 2381 2383 2389 2393 2399 2411 2417 2423 2437 2441 2447 2459 2467 2473 2477 2503 2521 2531 2539 2543 2549 2551 2557 2579 2591 2593 2609 2617 2621 2633 2647 2657 2659 2663 2671 2677 2683 2687 2689 2693 2699 2707 2711 2713 2719 2729 2731 2741 2749 2753 2767 2777 2789 2791 2797 2801 2803)
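For comparison, the conventional fixed-upper-limit approach described at the top looks something like this in C++ (a sketch, not tuned for performance):

```cpp
#include <vector>

// Classic bounded Sieve of Eratosthenes: pick an upper limit up front,
// cross off multiples of each prime up to sqrt(limit), and collect
// whatever survives. This is the approach the Clojure version avoids.
std::vector<int> bounded_sieve(int limit) {
    std::vector<bool> composite(limit + 1, false);
    for (int p = 2; p * p <= limit; ++p) {
        if (composite[p]) continue;          // p was already crossed off
        for (int m = p * p; m <= limit; m += p)
            composite[m] = true;             // cross off multiples of p
    }
    std::vector<int> primes;
    for (int n = 2; n <= limit; ++n)
        if (!composite[n]) primes.push_back(n);
    return primes;
}
```

For example, bounded_sieve(30) returns the ten primes up to 30. The upper limit has to be chosen before any work starts, which is exactly the constraint the lazy-sequence version removes.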
I usually don't pay much attention to operator precedence, thinking that it's either obvious, or it gets parentheses. And so recently I got bit in the face by it. I just spent a couple hours trying to find a bug in a set of classes with some nested data buffers and such, until I finally got to this method, which is supposed to extract a bit from an array of unsigned chars:
int DataBuffer::getBit(int index) {
    int whichByte = index / 8;
    int offset = index % 8;
    unsigned char theByte = this->get(whichByte);
    unsigned char mask = 1 << (7 - offset);
    if (theByte & mask == 0) return 0;
    return 1;
}

It seems the method was returning 1 at inappropriate times. The hardcore C++ operator precedence nerds will have immediately noticed that I should have placed some parentheses around my theByte & mask. Good work, all you hardcore C++ operator precedence nerds.
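The reason: in C++, == has higher precedence than &, so theByte & mask == 0 parses as theByte & (mask == 0). Since mask is never zero, mask == 0 is always 0, the whole expression is always 0, the condition is always false, and the method always returns 1. Here is the corrected logic as a standalone sketch (a free function with the buffer passed in directly, so it compiles without the rest of DataBuffer):

```cpp
// Extract bit `index` from an array of bytes, most significant bit first.
// This is a self-contained adaptation of the buggy method above, with the
// precedence bug fixed.
int getBit(const unsigned char* data, int index) {
    int whichByte = index / 8;
    int offset = index % 8;
    unsigned char theByte = data[whichByte];
    unsigned char mask = 1 << (7 - offset);
    // Parentheses are required: == binds tighter than &.
    if ((theByte & mask) == 0) return 0;
    return 1;
}
```

For instance, on the single byte 0xA0 (binary 10100000), bits 0 and 2 are set and everything else is clear.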
A friend of mine recently mentioned (do not read “endorsed” – I wouldn’t want anybody to get any false impressions about my anonymous friends) the idea that mathematical theorems are demonstrably, objectively true. It’s certainly a tempting conclusion to come to, especially if you’re the sort of person attracted to absolute truth, but in reality there isn’t anything absolutely true about math that’s of much interest. You can’t understand this, however, without looking deep into the bowels of mathematics to see what foundation it’s built on, so I’m going to try to do that for the uninitiated in an accessible way. To start with, I’m going to break mathematics up into three layers.
This is math as most people are familiar with it. The algebra and the geometry and the calculus and all that. It’s a bunch of properties about mathematical objects and rules for things you can do with them: solving for x, taking a derivative, finding the angles of a triangle.
Most people find it incredibly dull and forget anything they’re forced to learn very quickly unless it happens to be helpful to them in some way.
Mathematicians’ math is the foundation for Popular Math. Some people would argue that this is the kind of math that the word “math” should refer to, and that Popular Math is something else entirely.
What mathematicians do is very much about discovery. They look for patterns, try to determine rules, and invent new ways of thinking about things. It’s a much more open and free activity than Popular Math, but it also takes a lot of genius and creativity to come up with anything new. A different summary of a mathematician’s activity is that they try to come up with interesting questions, and then try to find answers to those questions:
Answers are called theorems, and they have to be formally proven before they will be accepted. Mathematicians try to make use of previously proven theorems mixed with insight and luck to prove something new and interesting, which itself becomes a theorem. All the results of mathematics boil down to a giant list of theorems.
This process of proving is (I suspect) what most people who argue for the absolute truth of mathematics are referring to. A mathematical proof is simply a list of statements, where each statement either follows logically from the previous one, or is itself some previously proven theorem, and the last statement is the answer to the original question. In the face of a correct proof, there’s not much that can be done except to acknowledge it. Answering the question of whether theorems reveal absolute truth, however, requires looking at the foundations of proof itself.
A logician is concerned with the actual process of proving something. I said earlier that a mathematical proof is simply a list of statements where each statement follows logically from the previous one, or is itself some previously proven theorem. This sentence reveals the two legs that a mathematical proof stands on.
The first leg is the logical connection from one statement to another. What does it mean for one statement to follow logically from another? Most people have strong intuitions about this, but mathematicians aren’t generally comfortable with relying on human intuition if they can help it. Therefore, they rely on a formal system of logic, which is basically a list of rules for how one logical statement can be derived from another. These rules are stated using letters to represent abstract logical statements, and say things such as: if A is true, and A implies B, then B must be true (the rule known as modus ponens).
The rules of logic are simple statements that most people would intuitively agree with. Whether or not the rules themselves are true is not something a mathematician could comment on – that question is left to the philosophers. Mathematicians have, however, been able to show that the common set of logical rules is consistent – that is, you can’t use them to come to two contradictory conclusions.
Once logic has been formalized, we have a method for creating statements from other statements, in such a predictable fashion that a computer can do it. Theoretically, the work of a mathematician could be done by a computer (realistically, computers aren’t any good at deciding which theorems are interesting, so the amount of help they can provide is limited), so within a mathematical proof, there shouldn’t be any doubt about whether a connection is logically valid – either you can make the connection using one of the logical rules, or you can’t make the connection.
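This mechanical character is exactly what proof assistants exploit. As an illustration (a sketch in Lean, one of several such systems, and not something from the original discussion), here are two tiny theorems stated and checked entirely by the computer:

```lean
-- Modus ponens: if P implies Q and P holds, then Q holds.
-- Lean's checker verifies every step; no intuition is involved.
theorem mp (P Q : Prop) (h : P → Q) (p : P) : Q := h p

-- Another tiny theorem: "and" commutes.
example (P Q : Prop) (h : P ∧ Q) : Q ∧ P := ⟨h.2, h.1⟩
```

If either proof term failed to match its stated theorem, the computer would reject it, which is precisely the sense in which a logically valid connection either can or can’t be made.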
The second leg of a mathematical proof is the statements that don’t follow from previous statements, but are themselves previously established theorems. The first statement of any proof must necessarily be one of this sort, because there are no previous statements for it to follow from. Of course, if a proof uses a theorem that is actually faulty (i.e., wasn’t correctly proven), then the proof itself will be faulty. But a more glaring issue is a recursive problem: if this theorem relies on some other theorem, what does that theorem rely on? Probably a third theorem, but we could keep asking the question, like a child endlessly asking why. Where did it all start? This is where we get the idea of an axiom.
A system like this has to start somewhere, and every mathematical system has underneath it a set of axioms. Axioms are logical statements just like any statement in a proof, except that the axioms don’t have proofs. They are simply assumed. The idea is that if we start with a handful of well chosen axioms, we should be able to prove all sorts of interesting things. The famous ancient mathematician Euclid realized this, and came up with five axioms for geometry, which say things like “There is only one line between two points” and “Circles can be drawn with any center and any radius”. These statements are the starting point for proving theorems within geometry.
Once again we can ask if the axioms are true, but once again the question is left to the philosophers. A natural question is if we can find a set of axioms that we’re intuitively comfortable with and that can serve as a foundation for all of mathematics. Maybe if we accomplished that, we could consider math to be “true enough”. Sure, it’s based on a few foundational concepts, but they’re concepts that nobody would argue with, and so at the very least math is the truest thing we can know. Unfortunately things get a little fuzzy from here.
A mathematical system (i.e., a set of axioms, and a set of rules for deriving theorems) has two fundamental properties: consistency and completeness. I mentioned consistency earlier when talking about the rules for logic. A system is consistent if it is impossible to prove two contradictory statements. If I can, using the axioms and the rules of logic, prove that something is true and then some other way prove that it is false, then the system is inconsistent. An inconsistent system is basically worthless, because one of the necessary rules of logic is that if a contradiction is possible, then anything is true. I had a professor once who wrote on the board “If I am the king of England and I am not the king of England, then I am the Pope” and said that it was a logically true statement. The idea is that if something is true and false, then we can use that to prove that any arbitrary unrelated thing is true. So in an inconsistent system, everything is true, which isn’t very interesting.
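The professor’s joke is an instance of what logicians call the principle of explosion, and it can be written out as a short derivation: from any contradiction, an arbitrary statement Q follows.

```latex
\begin{align*}
&1.\quad P \land \lnot P && \text{(the contradiction, assumed)}\\
&2.\quad P               && \text{(from 1)}\\
&3.\quad P \lor Q        && \text{(from 2; $Q$ can be anything, e.g.\ ``I am the Pope'')}\\
&4.\quad \lnot P         && \text{(from 1)}\\
&5.\quad Q               && \text{(from 3 and 4: $P$ is false, so $Q$ must be the true half)}
\end{align*}
```

With P as “I am the king of England” and Q as “I am the Pope”, this is exactly the sentence on the board.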
The second property is completeness. A system is complete if any statement that can be phrased within that system can be either proven or disproven. This is obviously desirable, because it means that every question has an answer. It also says something else about our axioms besides consistency, something related to usefulness. If I decide to invent a mathematical system where the only axiom is “ten is a number,” then it wouldn’t be of much use. We could prove a few things, such as “If an object is ten, then it is a number” and “Something that is not ten is not a number,” but that’s about it. If somebody asked a question like “is seven a number?” then our system could not answer. It would depend on whether or not seven is ten, and we don’t have any axioms that address that issue. So for our ideal set of axioms, we’d like them to be both consistent and complete.
Unfortunately this turns out to be impossible. In the last century, Gödel proved that no system expressive enough to contain arithmetic can be both consistent and complete. One way of looking at this is that, given some consistent mathematical system of sufficient complexity, there will always be some statement within that system that is true but unprovable.
So mathematicians play around with different sets of axioms, and use each for different things. The original geometric axioms of Euclid are just one way of doing geometry (it’s called Euclidean Geometry), and there are others (e.g., “what if lines weren’t really straight?”).
I’ve run through some pretty complex topics quickly, so if anything I said didn’t quite add up, leave a comment and I’ll try to clear things up.
Asking whether some statement is mathematically true is a bit like asking whether some action is legal: it depends on the context. In practice there are very good sets of axioms used in each field of mathematics which are either believed to be, or have been proven to be consistent, and result in all sorts of interesting theorems. For most of the history of mathematics, mathematicians didn’t question their axioms, and math has shown itself to be incredibly useful for all sorts of things. So we shouldn’t draw the conclusion that all of math is a lie. And usually we can get away with doing mathematical things without even thinking about the underlying system. But we can’t philosophize about the underlying truth without acknowledging that underneath everything else, math relies on human intuition, one way or another.
So if math isn’t necessarily absolutely true, why has it been so successful at describing the real world? Excellent question.
A couple days ago I saw a banner ad (I forget where) claiming that Obama was urging the nation to refinance their mortgages. Then later on Facebook I saw this:
Apparently the wonderful marketing people who create these works of art think that the critical, thoughtful folks who might be persuaded by ads with ambiguous images of CGI-people will want to do what Obama tells them to do, but only as reported by a banner ad.