...Prove Their Worth...

"Problems worthy of attack
prove their worth
by hitting back." - Piet Hein

A kind of running diary and rambling pieces on my struggles with assorted books, classes, and other things, as they happen. You must be pretty bored to be reading this...

Wednesday, July 31, 2002

This is the posting that's been sitting on my hard drive that I'd referred to last time, with a bit of a trim for sanity. Without much further ado...



Most people meet math in school. They are taught to add, to subtract, to multiply, to divide. They are forced to memorize tables of facts. No one denies this is useful - I suspect even the kids know it. But kids aren't stupid, and they know about calculators and computers, and they know that a calculator can do arithmetic faster and better than they can. Arithmetic is boring. It's just a bunch of arbitrary algorithms (as far as kids are concerned).



It doesn't get much better as kids advance through the system. Algebra, then trig, then calculus - all are presented as a bunch of techniques to memorize, formulas to use for 'plug and chug', and always, in the end, to reduce things to arithmetic and get some 'numbers' out of things. Any proofs given (and this happens rarely) are turgid, hairy things, with the teacher more or less quoting the book, not really understanding what each step is for, where it fits in the larger picture, where the approach of the proof fits...



What's even worse is that most teachers sound bored while teaching. It's genuinely rare to find a teacher who is so excited about both teaching and his material that it shows in his eyes, in the way he talks about it, etc. This is not surprising to me, because most teachers have to repeat the same material several times a day, year after year after year. How anyone manages to stay excited in such circumstances (and a few do!) is beyond my understanding. So I sympathize. But the problem is that it's hard to get someone to care about what you're talking about if you don't really care about it yourself anymore.



Math education sucks. People grow up to hate math, and they are right to do so - math, as they have seen it, is dead boring.



I disliked math in high school and college. To me, math was mostly a tool that was handy for some things, but it was a pretty boring tool, and one that often savaged me when I least expected it to (ex.: doing things like 2+2=5 for large values of 2 on SATs...). Occasionally, I'd see something 'neat' in math, but that was as far as it went. I didn't like proofs, and I sucked at constructing them. I just wanted the cookbook recipe, damn it.



But the core reason I didn't like math was deeper. It didn't seem real, and it didn't seem accessible. That is, I could never see where most of the math I was taught 'came from'! I knew, intellectually, that lots of dead white men (and a few women, and a few non-Europeans*) spent centuries developing it. I didn't see how. As far as I was concerned, it was all pulled out of someone's ass. That it worked and was handy was the only reason to learn it, as far as I was concerned. But it's hard to get excited about what you think is the output of someone's rear, and so I didn't.



I suspect this is one of the reasons so many people don't like math, apart from the tedium and the bad instruction.



As I've mentioned in a long-past entry, my opinion of math has since changed to 'strikingly engrossing and beautiful, oh, and also useful'. (Blame Needham's Visual Complex Analysis for that change of heart.) The big difference is that now I can feel (and not just know, in a vague intellectual sort of way) why, exactly, it's not just pulled out of someone's ass. I can do some of the proofs, and when I'm lucky, I know why each part of the proof is needed, and what happens when you take it out, and how else you can set up the proof, and what it means**. I can feel how various ideas are put together. I know where they come from, and when I don't (which is most of the time), I've some degree of confidence that I could figure it out, given time and some references.



Learning math (and related things) is to me a way of learning about how the world works. It's an amazingly elegant way of doing so, too. Here's a for-instance (one, sadly, that I suspect won't make any sense to anyone who doesn't already know about cross products and wedge products and wedgies and other things, but such is life). Everyone knows about cross products of vectors. They're fine and dandy, very nice for a lot of practical things, and so on. However, they a) have an ass-nasty definition involving crap with determinants of 3x3 matrices (whose ass was it pulled out of?), b) are limited to being defined on R^3, and c) require a right-hand rule convention. The second and third problems are the more serious ones. It'd be nice to figure out some sort of generalization of a cross-product, one that isn't just limited to R^3, but which does behave like a cross-product in R^3.



To do this, we need to figure out the essential features of the cross-product. It turns out there's really only one: antisymmetry. That is, v x w = - w x v, where v and w are vectors in R^3. So, let's define a 'wedge product' (aka 'exterior product'), and say that it has this property. That is, v /\ w = - w /\ v, where v, w are elements of a vector space V. An additional piece of info that we'll need to tuck into the definition is that v /\ w will lie in a new vector space, denoted /\V. Actually, we'll say that /\V is an algebra (giving it a bit more structure than a vector space, but whatever).



That is pretty much the only thing we need to figure out how to take wedge products. Say V is three-dimensional, and is spanned by dx, dy, dz. Then the wedge product of two vectors in V is: v /\ w = (v_x w_y - v_y w_x) dx /\ dy + (v_y w_z - v_z w_y) dy /\ dz + (v_z w_x - v_x w_z) dz /\ dx. The only thing needed to derive this is the idea that /\V is an algebra together with the anticommutativity of the wedge product! Also, notice that this already bears a suspicious resemblance to the cross product!
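
Since I can never quite trust my own algebra, here's a little Python doodle I wrote to check this. The function names and the numbers are all mine, not the book's - consider it a sketch, not gospel:

# Expand v /\ w in R^3 using only the algebra structure plus
# antisymmetry (dx/\dx = 0, dy/\dx = -dx/\dy, etc.), then compare
# with the usual determinant-style cross product.

def wedge(v, w):
    # Coefficients of v /\ w on the basis (dx/\dy, dy/\dz, dz/\dx).
    vx, vy, vz = v
    wx, wy, wz = w
    return (vx * wy - vy * wx,   # dx /\ dy term
            vy * wz - vz * wy,   # dy /\ dz term
            vz * wx - vx * wz)   # dz /\ dx term

def cross(v, w):
    # The classic cross product, for comparison.
    vx, vy, vz = v
    wx, wy, wz = w
    return (vy * wz - vz * wy,   # x component
            vz * wx - vx * wz,   # y component
            vx * wy - vy * wx)   # z component

v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(wedge(v, w))   # (-3.0, -3.0, 6.0)
print(cross(v, w))   # (-3.0, 6.0, -3.0)

Note the match-up: the dy/\dz coefficient is the x component of the cross product, dz/\dx is the y component, and dx/\dy is the z component. Same numbers, just living in a different space.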



What I think is cool here is just how little information was needed to get this fancy complicated thing (well, it only looks complicated - it's really not). And reading on in Baez and Muniain, I think I now grok where cross products come from. With more reading, I hope to be able to say that I grok stuff about exterior algebra with differential forms, and I'll know how to do 'cross products' on manifolds.



Anyways, I obviously think this stuff is interesting, and I've totally lost my thread of thought, and, err, I might as well post this shit now -- I don't think I'll be able to get it any more presentable.




* - Ok, preemptive strike. Yes, I know about the Arabs doing some things with algebra, and so on, and so forth. However, whether anyone likes it or not, the Europeans are responsible for almost all of 'modern' mathematics. That doesn't mean they are The Smartest People Ever, and Everyone Else is a Retard.


** - This is sort of a description of the 'nirvana state' - I don't actually grok things that deeply most of the time. Instead, most of the time I curse and despair of understanding anything, and then try again later, and sometimes I get it, and sometimes (most of the time) I don't. What keeps me going is that I do hope to eventually 'get it', and it's entertaining enough to keep my interest in the meantime.

Monday, July 29, 2002

A few days ago I read an interesting piece on wynk's site (::waves to wynk::) about math and math education and so on, and was inspired to write down some of my own thoughts on the subject. I got a few paragraphs in, but ran out of steam, and then got distracted, and now the stuff I half-wrote is sitting on my hard drive, staring at me, first forlornly, then seductively, then accusingly, and periodically sighing dramatically.



"Finish me!", it wails and/or thunders. (When it thunders, it does so in the style of Mortal Kombat - I've always loved those finishing moves...) One of these days, I'll get back to it, finish it and correct the many typos, and post it. One of these days...

It's been too bloody humid outside (even at night!) to go jogging the last few days. So no Mr. Thumpy sightings to report. Damn.



So, here's the promised math update. I've finished the chapter on vector fields (more or less -- I'll need to go back and do some more exercises to get reasonably limber). So now I'm on the chapter about differential forms. It's really very neat stuff. They're introduced by trying to get a coordinate-independent way of talking about gradients and directional derivatives. A problem we immediately run into is that getting a coordinate-independent formulation is easy, but only if we're willing to use the good ol' dot product of vectors. And it turns out that to define a dot product, we need to define an 'inner product' (because that's what a dot product really is), and to get that, we need a metric. This is a big no-no.

See, the thing is that this text is basically preparing the reader to eventually tackle general relativity at the end of the book, and general relativity can use lots of different metrics, and there ain't no 'best' one. So the idea is that it's best to avoid introducing metrics unless we really, really have to. And so Baez and Muniain then proceed to show how we can talk about directional derivatives and gradients and all that without coordinates or metrics. And that's where differential forms pop up. It's all fascinating stuff, and very well handled. Did I already mention this book has my recommendation to anyone curious about this stuff?



Anyway, I'm reading on, trying the occasional exercise, and I get to the part where the relationship between forms and vectors is elucidated. Say we've got a tangent vector space V. We can define a 'dual space', V*, by saying that it's the space of all linear functionals w: V -> R. That is, you feed a tangent vector to a 'dual', and it'll give you a number. (Secretly [and sloppily], 1-forms are just elements of V*, and they're also called 'cotangent vectors' (well, by sloppy people), fwiw.)



Notice I'm calling elements of V* 'w'. That's because Baez and Muniain do it that way.



Now, a vector field v on M gives a tangent vector at each point of M, and the vector space of all tangent vectors at a point p in M is called T_pM. Similarly, a 1-form on M gives a 'cotangent' vector at each point in M, and a cotangent vector w is the thing that takes vectors (members of T_pM) to R. The set of cotangent vectors at a point is called T*_pM. It turns out that T*_pM is dual to T_pM. Anyway, that's all preliminary stuff.



Quoting Baez and Muniain now:



More generally, if we have a linear map from one vector space to another,


f: V -> W


we automatically get a map from W* to V*, the dual of f, written


f*: W* -> V*


and defined by


(f*w)(v) = w(f(v)).


Thus the dual of a vector space is a contravariant sort of beast: linear maps between vector spaces give rise to maps between their duals that go 'backwards'.



Here's the problem: In the last equation in the quote, what, exactly, is w, and what, exactly, is v? As I said above, just a few lines before the material I quoted, w is an element of the dual space to V, that is, V*. But f* goes from W* to V*, so feeding it an element of V* doesn't make a lot of sense. On the other hand, what does v refer to? Well, v is often used as either a vector field or a tangent vector in the book so far, depending on context, and it's being fed to f, which takes elements of V and gives elements of W. Notice that v is the lowercase version of V (duh), and this seems a sensible convention - denote 'sets' by uppercase, elements of those sets by lowercase. Could this be what's being used? Hmm. It doesn't quite help us with w, though - it wouldn't make a lot of sense for w to be an element of W, because it's being fed to f*. Stuff that's fed to f* really ought to be elements of W*. So, let's go with that. w is an element of W*, and v is an element of V.



Let's work on the left-hand side first. f*w gives us something that lives in V*. Then we're feeding v to an element of V*, which by definition gives us a real number (remember, V* is dual to V). Hmm.



Now let's work the right-hand side. f(v) gives us something that lives in W. When we then feed that to w, an element of W*, we get a real number, again, by definition, just like above. Hmm.



So, at least we've got real numbers on both sides of the equation. There's a chance, then, that it's a real equation, and not something bogus like saying a vector is a banana. But, how can I tell if the two numbers are the same? Hmm.



Oh! The answer turns out to be that I've a banana for a brain, at best. The two real numbers are defined to be the same - hence the phrase 'and defined by' right before the equation in question. That is, f* is a function such that that equation is, err, an equation - the two sides are equal.
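
To convince myself this actually hangs together, I cooked up a quick numerical check in Python. Represent f by a matrix, and a covector w by a row of numbers (so w(u) is just the sum of w_i u_i); then f* works out to be representable by the transpose. That last bit is my own gloss, not something the book says here, so season to taste:

# Check (f*w)(v) = w(f(v)) with concrete numbers.
# f: V -> W is linear, here a 2x3 matrix acting on V = R^3.

def apply_matrix(A, v):
    # Compute A v for a matrix A given as a list of rows.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def pair(w, u):
    # Feed the vector u to the covector w, getting a number.
    return sum(wi * ui for wi, ui in zip(w, u))

A = [[1.0, 2.0, 3.0],          # f: R^3 -> R^2
     [4.0, 5.0, 6.0]]
At = list(map(list, zip(*A)))  # the transpose: plays the role of f*

w = [7.0, -1.0]                # w lives in W* = (R^2)*
v = [1.0, 0.5, -2.0]           # v lives in V = R^3

print(pair(apply_matrix(At, w), v))   # (f*w)(v)
print(pair(w, apply_matrix(A, v)))    # w(f(v))

Both lines print the same number (-22.5 with these values), because that's precisely what 'and defined by' forces.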



The thing I'm still not comfortable with is why the bloody hell a mapping between vector spaces gives rise to an opposite mapping between their dual spaces. I mean, Baez and Muniain say it does, but why? I'll bet the answer's in the text somewhere, but I haven't been able to grok the fullness of it yet. I suppose the Stranger In A Strange Land thing to do at this point would be to make hot passionate monkey-love* to the textbook, but a little whiny voice in my head tells me that this is unlikely to help with this specific grokking.



This all is a long way of making sure that the snippet I quoted makes sense, and that I've got some sort of handle on it. In the end, it turns out that there isn't any typo in there: I was simply being slug-brained and wasn't properly making sense of what was written. On the other hand, it would be nice if the text explicitly spelled out what v and w are, instead of leaving it as a sort of implicit exercise, I think. I'm slowly gathering material (that is, questions) for a sci.physics.research post, where I'll suggest this to Dr. Baez (who is a regular there), along with making sure that the way I've got it pictured now is sensible.



I've thought of something else. Look at the quote I give, and then compare its length with the length of the rest of this posting. This is an example of what reading math tends to be like for me (and perhaps other people too, but I don't really know anyone else interested in math that I've talked to about such things). Mathematical notation and arguments tend to be very 'dense', in the sense that each symbol and parenthesis and word is imbued with rather deep meaning, all referencing a large amount of material previously studied. When I'm reading it, I have to unpack these bundles of meaning in my head (and often also on a piece of scrap paper) to really have a chance of understanding the thrust of the arguments, and then carefully repack them to make sure my new internal picture of things corresponds to what's on the page. This posting is sort of this process written down (but only approximately, sadly - there's a lot more that had to be 'unpacked', but my fingers are tired). It's probably the basic reason why I read math (and other sufficiently technical stuff) a couple of orders of magnitude slower than fiction: the unpacking and repacking take a lot of time. This is also a reason why I can't really skim math.



I suspect this is also a reason why I would sometimes get in trouble in late high school and college math classes: I could profitably skim the 'math' texts used early on in my education, because they were relatively easy, but this was a bad habit, and a hard one to break (not realising it was a habit didn't help...). I often wonder how others cope with this sort of stuff, and whether some of the other bad math students (I know I can't be the only one!) have gotten in trouble with this sort of thing - that is, not really learning to read relatively high-brow technical material.



Hmm. Live and learn.



* - And we mustn't forget to soulfully holler "Thou art God!" at the proper moment in the proceedings. Wouldn't be a genuine 'Stranger' love session without it.




I've either found a substantial typo in Baez and Muniain, or I've gone completely loony, or cross-eyed, or all of the above. News at 11!



Or, err, actually, after I get back from the library (stupid due dates...).

Sunday, July 28, 2002

I got my daily allowance of humor this morning! I read a thread in a political discussion group I once frequented, which was started by the moderators to admonish people to behave themselves and not engage in ad hominem attacks - especially tit-for-tat ones. When a particularly egregious (vicious, really) example of such an attack was presented in that very thread, can you guess what happened? Shock of shocks, the moderators responded with tit-for-tat ad hominems (and other, more substantive measures). The hypoc^Hthalamus is amusing to observe...


I'll post more math/physics related stuff later today, perhaps.


edit: I've got the sneaking feeling I might end up regretting saying some of that, but fuck it. I am, however, going to shut up now.


edit again: Oh dear. I've made a rather stupid mistake, conflating two very recent threads relating to 'moderation' issues into one in my mind. The incident I obliquely reference above did not take place in the 'lay off the tit-for-tats' thread - it was in the other moderation thread. Terribly sorry about that. Doesn't change the thrust of my, err, for lack of a better word, rant, though.

I saw Austin Powers: Goldmember today. My god, what a piece of foul, unfunny, boring tripe. Mind you, I very much enjoyed the first two installments. But all this third one offers is poor repeats of the exact same jokes that were in the first two flicks. Oh, and it has appearances by celebrities. Wow. Like I give a flying wank about Travolta or Britney Spears or any of the other flaming retards that grace the screen with their presence for all of five seconds each. I can't see why anyone should, unless, of course, they worship said 'actors' to an improbable degree. I did laugh. Five or six or so times. One, count 'em, one of them was a genuine belly laugh. I kept waiting for it to get funnier, as many reviewers promised, but I'm sad to report it never did. Contrast that to the almost continuous hilarity of the first two flicks. I think someone ran out of inspiration, but was still dreaming of even bigger piles of soft, green, filthy lucre. I'm looking at you, Mike Myers.



In an act of disturbing sadomasochism, I finished a book called Cuba, by one Stephen Coonts. It's ostensibly about biological weapons being developed by Cuba, and, horror of horrors, they're pointed at the US. OH NOSE!!11! DOG SAVE TEH QUEN! It's also got a vaguely homo-erotic obsession with the main character's facial features (the nose in particular - I haven't the faintest idea why). This main character is an admiral in charge of a carrier battle group, who goes into battle personally, Star Trek style. The writing is worse than that found in a fifth-grade classroom. The characters are unintentionally hilarious cutouts. The plot is laughably thin, and even the bloody (not literally, though!) action scenes, which are the saving grace of 'products' in this genre, are fucking lame. Don't touch this festering literary puddle of vomit with a ten-foot rod, unless that's your bag, baby. Shagadelic. Yeah. Ahem. Moving on...



(Can you tell I've had an unpleasant week?)



Not all is gloom: I did see a good movie this weekend, that being The Bourne Identity. This was a second viewing (why is a long story). I liked it a lot the first time, and I've grown to like it even more after the second. Bourne Identity has all the basic ingredients of a good summer action movie: a fun (though fluffy - but not too fluffy) plot, good special effects/visuals, and lots of fun action. All of this was obvious after the first viewing, along with the fact that there was a certain 'je ne sais quoi' to the whole thing which made it stand out above most action flicks.



Well, with the second viewing, the 'je ne sais quoi' turned into a 'je sais', if you will. It's all the little touches that make this movie. The way Bourne looks lost sometimes. How he has to look at maps to figure out where stuff is. How his fighting skills come as a surprise to him. The way the Marines chasing him through the embassy are breathing hard from the exertion, and the fear. The way the camera lingers, occasionally, on the fallen, not out of some perverse voyeurism, but to show that people do get badly hurt. That little silent moment between Damon and Potente in the car after the chase, while they come to terms with just how close it all was. The little romance scene between them initially seems slightly awkward - but in a natural way. All these things add up, adding little shadings and colorings to the general structure of the action in the movie. In the end, I have to say The Bourne Identity is one of the best 'spy action' movies in years.



Oh, and Franka Potente is quite a sight - and that's aside from the fact that she's a damn good actress. I have got to see The Princess and the Warrior. Oh man. Perhaps I should rewatch Run Lola Run, too...



Thursday, July 25, 2002

Whew. No more sidebar revisions for today. Ideally, the book list font should be smaller, but I don't feel like doing the proper CSSing for it today.

So, I'm adding another link to the right side of the page. The link is to the journal of Charles Stross. I encountered Mr. Stross in several of Gardner Dozois's Year's Best Science Fiction collections, and greatly enjoyed all of his stories that I've had the luck to read. Now, I have found his journal. It kicks much arse. Look, for instance, at just some of the recent material: a long and absolutely fascinating report about his visit to a nuclear power plant, an interesting look at current EU vs. US power dynamics, and a provocative and educational "political metaphor for the day". There's more there. Very highly recommended reading.



Also, I must recommend his Hugo-shortlisted novelette Lobsters, about a venture altruist. Just start reading it. If you can read more than, say, three screenfuls, and then willingly stop, well, I'd be surprised. Try it!

I'm jumping ahead, because I'm too lazy to recount the explanation of why the definition of vector fields given last time actually deserves the name. Also, I apologize in advance for the incoherence of this post. That's the problem with writing about things I know damn well I don't understand -- I end up chasing my tail as if I was a dog and someone tied a piece of crispy bacon to it. Mmm.... Bacon.... Ahem. So, err, onward!



Say we've got two manifolds, and we call them M and N. Now, let's define a function f: N -> R. That means a function which eats points from N, and produces normal numbers (hence the 'R' part - that just refers to the 'real number line'). Also, let's define a function phi: M -> N. This is a function that takes points on M, and produces points on N.



Now, we can use the above information to get a real-valued function on M. We take a point on M, and feed it to phi. That gives us a point on N, which we can feed to f, which gives us a real number. Hurray. Summarizing, we just did 'f @ phi' (where the @ between f and phi signifies composition of functions, so it's read right to left). Now, we can define something called 'pulling back by phi', because we're pulling f back over from N to M using phi, and we can call phi^* a pullback (phi^* being defined by phi^* f = f @ phi - note the star goes upstairs for pullbacks, with the downstairs star reserved for pushforwards, which show up below).
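
In code, a pullback is almost embarrassingly simple - it's just composition. A toy sketch in Python, where all the particular functions are made up by me:

import math

def phi(t):
    # phi: M -> N, here wrapping the real line onto a circle in the plane.
    return (math.cos(t), math.sin(t))

def f(p):
    # f: N -> R, a real-valued function on the plane.
    x, y = p
    return x * x - y

def pullback(phi, f):
    # phi^* f = f @ phi, a real-valued function on M.
    return lambda t: f(phi(t))

g = pullback(phi, f)
print(g(0.0))   # f(phi(0)) = f((1, 0)) = 1.0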



Now, notice that something perverse is going on here: phi^* has to be on the 'other side' of f than phi. Because of this, real-valued functions on manifolds (which is what f was, remember) are called 'contravariant'.



"But wait!", you may be thinking. Could it be that we're stuck with this perverse backwardness simply because we asked for it? We defined f on N, and we've got a function phi that goes from M to N. So of course we had to go 'backwards' to get f to work on M! This is the line of thought I was preoccupied with while meditating upon a family-sized package of TP in the Chamber of Reflection.



To resolve this existential crisis, let's define f: M -> R, and now try to get a real valued function on N. If our concern above is justified, this shouldn't exhibit any 'backwardsness' (mind you, we're keeping phi defined just as it was before, from M -> N). So, off we go.



We need to somehow persuade f to eat points of N. It, at the moment, only eats points of M. Now, to go from M -> N, phi is just what we need. But we, unfortunately, need to go the other way - we need points of N disguised as points of M. To get that, we need the inverse of phi, that is, phi^-1. Uh-oh. We're getting backwardsy again. But, now we're in business, because f @ phi^-1 will do what we want: N -> R. So how the bloody hell do we define a pullback now? phi^* f = f @ phi^-1? I dunno, but probably something like (phi^-1)^* f = f @ phi^-1 is better.
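
Here's the same experiment as a scrap of Python, with a deliberately trivial (but invertible!) phi, just so I can see the backwardness with my own eyes. All names and numbers are mine:

def phi(t):       # phi: M -> N, an invertible map between two lines
    return 2.0 * t + 1.0

def phi_inv(s):   # phi^-1: N -> M
    return (s - 1.0) / 2.0

def f(t):         # f: M -> R
    return t * t

# The only way to get a real-valued function on N is f @ phi^-1,
# i.e. a pullback by phi^-1 - the backwardness is still there.
h = lambda s: f(phi_inv(s))
print(h(3.0))     # phi_inv(3) = 1, f(1) = 1.0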



Heh, well, if I haven't done anything stupid in the above, I just demonstrated that we still get the backwardness in this case. Hmm. Good. Moving on, then...



Not all things in life are contravariant, according to Baez and Muniain. Tangent vectors, for one. Say we again have two manifolds, M and N, and phi: M -> N. Now, for a point p in M, we've got a tangent vector v [- T_pM (the [- being the 'element of' symbol). Our aim is to transfer this tangent vector to N. That is, to get phi_* v [- T_phi(p)N. This ends up being called a pushforward.



At this point, I must confess to not actually understanding pushforwards. I don't quite understand the bloody definition of a pushforward. I can quote it, and even use it, to a degree, but I don't grok it. I don't understand why, exactly, aside from "it's nice that it be so", (phi @ y)'(t) = phi_* y'(t). (y(t) being a curve in M, that is, y: R -> M, and y'(t) being a tangent vector to that curve at the points it passes through.) I'm thoroughly confused, and it's all the more frustrating because even though I think I've got the 'gist' of things at this point, I don't feel I really understand them. Hopefully some more stewing and marinating will cure the confusion.
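
One thing that does seem to steady my stomach is checking the formula numerically. The picture I currently have - and I stress this is my picture, not a quote from the book - is that in coordinates, phi_* at a point is just the Jacobian matrix of phi there, so (phi @ y)'(t) = phi_* y'(t) is the good old chain rule in a trench coat. A Python sketch (everything in it is made up by me):

import math

def phi(p):
    # phi: R^2 -> R^2, some smooth map.
    x, y = p
    return (x * y, x + math.sin(y))

def curve(t):
    # y(t): R -> R^2, a curve in M.
    return (t * t, 3.0 * t)

def num_deriv(g, t, h=1e-6):
    # Numerical derivative of a curve g: R -> R^2.
    a, b = g(t - h), g(t + h)
    return tuple((bi - ai) / (2 * h) for ai, bi in zip(a, b))

def jacobian(f2, p, h=1e-6):
    # Numerical Jacobian of f2: R^2 -> R^2 at the point p.
    cols = []
    for i in range(2):
        qp, qm = list(p), list(p)
        qp[i] += h
        qm[i] -= h
        fp, fm = f2(tuple(qp)), f2(tuple(qm))
        cols.append(tuple((a - b) / (2 * h) for a, b in zip(fp, fm)))
    return [list(row) for row in zip(*cols)]  # rows = output components

t = 0.7
p = curve(t)                    # the point y(t)
v = num_deriv(curve, t)         # the tangent vector y'(t)
J = jacobian(phi, p)
pushed = tuple(sum(J[i][j] * v[j] for j in range(2)) for i in range(2))

print(num_deriv(lambda s: phi(curve(s)), t))  # (phi @ y)'(t)
print(pushed)                                 # phi_* y'(t), via the Jacobian

The two printed vectors agree (up to numerical noise), which at least tells me the definition isn't arbitrary - it's rigged precisely so the chain rule keeps working.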

Monday, July 22, 2002

Praised be the Glorious One. I saw Mr. Thumpy today. During the day! He was running along someone's lawn, complete with big ears (I'll bet he's not very fuel-efficient, what with those aerodynamics), tail, and hopping. I hope he's smart enough not to try crossing any roads...

Saturday, July 20, 2002

Continued from previous entry... Okay. I'm ready to continue talking about something I can barely grasp with my metaphorically soapy fingers. The next notion I'm going to need is that of a 'directional derivative'. First, a derivative is generally a fancy way of talking about 'how stuff changes'. Most people meet derivatives in the context of talking about what a function f(x) is doing - swooping up and down, playing dead and staying constant, whatever. People who take vector calculus (also known as multivariable calculus) end up messing around with functions that eat and shit vectors, as opposed to numbers.

If you plot a normal function that makes its living eating one normal number at a time on a piece of paper, it's just a curve. Everyone's seen those. If you plot a hairier function, such as one that works on more than one variable at a time (think f(x,y,z) or something), it'll look like a surface (if you're lucky) or something you can't even picture profitably, being stuck with an imagination that only works in 3D.

So, think of a function that, when drawn, looks like a normal surface. Like a rumpled (but not egregiously so) bed cover. Now, let's pick some point on it, and talk about what's going on in that area of the bed cover. Generally speaking, the bed cover will be doing different things in different directions from your chosen point. In one direction, it might be going higher, in another it might be kinda flat, in yet another it might be going down, whatever. A directional derivative makes all this precise. That is, it's a way of talking about how a function behaves in various directions. So, in my case, if I pick a point on my bed cover and look toward the window, it turns out the cover is rising in that direction, and fast, too - there's a good pillow under there! I just took a derivative of a function (my rumpled bed cover) in the direction of my window. In short, a directional derivative.

Whew, enough of that. (I launched into it because it's always good to make sure I can still explain the things I learned a long time ago in simple terms - it's a way of making sure I still get them.)
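
And since I claimed I can still do this stuff, here's the bed cover in Python. The function is completely made up by me, 'pillow' bump and all:

import math

def f(x, y):
    # Height of the bed cover: a gentle slope plus a pillow bump near (2, 3).
    return 0.1 * x + math.exp(-((x - 2.0)**2 + (y - 3.0)**2))

def directional_derivative(f, x, y, d, h=1e-6):
    # Rate of change of f at (x, y) in the (unit) direction d.
    dx, dy = d
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    return (f(x + h * dx, y + h * dy) - f(x - h * dx, y - h * dy)) / (2 * h)

# Stand at (1.5, 2.5); say the window is off in the (1, 1) direction.
print(directional_derivative(f, 1.5, 2.5, (1.0, 1.0)))  # positive: rising

A positive number pops out: the cover is indeed rising toward the 'window', pillow and all.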



Let's get back to talking about vector fields. Say we've got a vector field called v on some domain. Let's make the domain R^n. R^n is just normal, flat space, with a dimension of n. It being a domain means the vectors that make up v live on R^n. They're n-dimensional, if you will. Say further we've got a function f in there, too.



Now, we can always, if we want, take the directional derivative of f with respect to v - v's just a bunch of arrows, that is, directions! So say we do that, and let's call the result vf, just because we like to use notation as malicious as possible.



So let's write a formula for vf. (Sadly, I don't have time to explain every term after this point - I'll just trust that whoever's reading this has seen some calculus before.) We'll give a point x in R^n the coordinates (x^1, x^2, ..., x^n), and by Partial_u f we'll refer to the partial derivative of f with respect to x^u, where u can take on values from 1 to n. I'll also use the 'Einstein summation convention', because I'm a bastard, and it saves typing, and it's what my book uses, and the discussion here is paralleling it. The summation convention says that whenever we see something like x^u y_u - that is, the same index, here called u, popping up both upstairs and downstairs - we're meant to 'sum over u': that is, do the sum x^1 y_1 + ... + x^n y_n. (I'm using ^ to stuff things where exponents go, and _ to stuff things where subscripts go, if you were to deasciify everything.)



Ok, now, given all that nastiness, the formula for our directional derivative vf looks like this (v^u stands for the components of v):



vf = v^u Partial_u f
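
Here's that formula as a scrap of Python, mostly so I can check I believe it. The particular f and the numbers are mine, and the partials are done numerically:

import math

def f(x):
    # A smooth function on R^3 (made up).
    return x[0] * x[1] + math.sin(x[2])

def partial(f, x, u, h=1e-6):
    # Partial_u f at the point x, numerically.
    xp, xm = list(x), list(x)
    xp[u] += h
    xm[u] -= h
    return (f(xp) - f(xm)) / (2 * h)

def vf(v, f, x):
    # v^u Partial_u f - the repeated index u means 'sum over u'.
    return sum(v[u] * partial(f, x, u) for u in range(len(x)))

x = [1.0, 2.0, 0.5]
v = [0.3, -1.0, 2.0]   # the components v^u of v at x
print(vf(v, f, x))     # = 0.3*2.0 + (-1.0)*1.0 + 2.0*cos(0.5)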



Ok? If you know about partial derivatives, this shouldn't be news - it's just slightly different notation than what you probably used in calculus class. Now, watch out. f is on both sides of the equation. Let's just take it out, then.




v = v^u Partial_u



So now we're saying v's mission in life is to differentiate stuff. It's a 'linear combination of partial derivatives.' Now, this is weird. For one thing, the partial derivative is just hanging out there on the right with nothing to differentiate. But that's not so bad, because we can always stick a function back on there, and it'll have something to do again. What's weirder is we're saying a vector field v, and a directional derivative in the direction of v, are the same. This is sloppy: according to our current definitions of these terms, they're arguably closely related, but they are not the same.



Now, you're probably thinking: "DUDE. Step AWAY from the crack pipe. You can't do that!" And, strictly speaking, you're right. The above move (just ripping f out of there) was indeed 'illegal' and sloppy. However, it's suggestive, and what my book says is we can try using it as a guide for where we want to end up, and we can redo things in a legal way, and get to the same point. That is, we're going to redefine vector fields so they can work on manifolds.



We're going to start with the idea that 'vector fields' are entities whose "sole ambition in life is to differentiate functions." Then we're going to build an actual definition of such entities, and then we're going to be in a position to show that these still deserve to be called 'vector fields'.



So the way my book defines a vector field, v, at this point is that it's something that eats smooth (meaning, not too kinky, if you were to draw them) functions that are defined on a manifold, and produces smooth functions on that manifold. I've talked about manifolds in some old posting; you can go find it if you want to know what they kinda are (feel free to replace 'manifold' with 'R^n' for now if you don't know what a manifold is). Also, v has to have the following properties:




v(f + g) = v(f) + v(g)

v(a*f) = a*v(f)

v(f*g) = v(f)*g + f*v(g)




where f and g are smooth functions, and a is just a number. Now, the first two properties are just a way of saying v is 'linear'. The meaty property is the third one: it looks just like the product rule (aka the Leibniz rule) from good old calculus.
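
Just to reassure myself that such creatures exist, here's a quick Python sketch of a 'vector field' in this new sense - a machine that eats a smooth function and spits out another - along with a numerical spot-check of the Leibniz property. Everything here is my own toy construction, not the book's:

import math

def partial(f, x, u, h=1e-6):
    # Partial derivative of f with respect to x[u], numerically.
    xp, xm = list(x), list(x)
    xp[u] += h
    xm[u] -= h
    return (f(xp) - f(xm)) / (2 * h)

def make_vector_field(components):
    # Build v = v^u Partial_u from its component functions v^u(x).
    def v(f):
        def vf(x):
            return sum(components[u](x) * partial(f, x, u)
                       for u in range(len(x)))
        return vf
    return v

v = make_vector_field([lambda x: x[1], lambda x: -x[0]])

f = lambda x: x[0] ** 2 + x[1]
g = lambda x: math.exp(x[0] * x[1])

p = [0.4, 1.3]
print(v(lambda x: f(x) * g(x))(p))       # v(f*g)
print(v(f)(p) * g(p) + f(p) * v(g)(p))   # v(f)*g + f*v(g)

Both lines print the same number (up to numerical noise), just as the Leibniz rule demands.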



Whew. The above probably seems bizarrely abstract: we started with hairy balls, and now we're talking about Leibniz, and 'smooth functions', and manifolds, and other fancy shit. But the huge benefit we just got for our efforts is that in our new definition, coordinates don't show up at all! This is vitally important, because while coordinates are very useful for actual concrete calculations, the universe isn't drawn on engineering graph paper - coordinates are up to us. There are many different coordinate systems, and they're all arbitrary - there's no best one! So to talk about the true 'nature' of stuff, it's good to avoid using an arbitrary human thing like coordinates. And the new definition does that. And my fingers are tired. And the above was largely a recapitulation of a couple of pages of Baez and Muniain's far better explanations from memory. So I'm going to stop for now.

Word of warning: I don't expect this particular 'posting' to make sense to anyone. Basically, I need to try to write out my current impressions about some stuff I've been reading, to figure out what I do understand, and what I don't. So, err, without (much) further ado...



I'm still reading my books on real and complex analysis, but I've also got a new book called Gauge Fields, Knots, and Gravity, by John Baez and Javier Muniain. Very good book so far, very highly recommended, yada yada yada, more on such things some other time. I'm going to be talking about my attempt at digesting the chapter on vector fields.



The chapter starts off with a hilarious quote from Heaviside, where he heaps delightful scorn on opponents of vectors, and takes off from there.



To start with, think of a vector as everyone normally does: as an arrow. That is, a pointy thingy of a certain length and direction. Now, spiraling upward in abstraction at a dizzying rate (if you've never seen vectors before, you'd now talk about all the neat things about them and their uses), we talk about 'vector fields'. The way everyone normally conceives of a vector field is pretty simple. We have a 'field', in the sense I'm interested in, when we assign a number to every point in space. To be more careful, that's a scalar field. A vector field assigns an arrow to every point in space. If you want, think about hairy balls: each hair is a 'vector', and if a ball is really, really hairy, it'll have a hair coming out of every point on the ball. (Actually, you'll probably want to keep in mind that this makes the most sense if the hairs are really short - how the bloody hell would you define a 'direction' for a big, long, curly hair?)
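
If you want something more concrete than hairy balls, here's the world's smallest vector field in Python (my own example): a little whirlpool on the plane, where the arrow glued to the point (x, y) is (-y, x).

def v(x, y):
    # The arrow attached to the point (x, y).
    return (-y, x)

for x in (-1, 0, 1):
    for y in (-1, 0, 1):
        print((x, y), '->', v(x, y))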



Hmm. Ok, actually, I've got to go contemplate some of this stuff before going on.* More in a few minutes, hopefully.



Yes, I'm probably going to have to think about arrows and hairy balls and other things, but get your mind out of the gutter, damn it! I'm talking math here!

Friday, July 19, 2002

I've been silent for a while now. Main reason is, I haven't felt very motivated to write lately. Mind you, I've been keeping up my reading. I've read a couple of great fiction books (libraries are a wonderful thing), and I have another technical book now, with yet another in the mail from bn.com.



So. One of the fiction books is called The Sparrow, by Mary Doria Russell. On the surface, this is a tale of First Contact, if the Society of Jesus (aka the Jesuits) were the ones to send out the exploration ship. It's a great read, with jarring and well-executed ideas, lots of moral exploration, very good character work, and so on. This is part of the 'literary' subgenre of science fiction, and it is one of its best representatives. Unfortunately, it's got a few of the problems commonly found in the subgenre, such as highly improbable science and technology (ex.: an interstellar spaceship made from a large asteroid that flies the whole way to Alpha Centauri (and back!) under a continuous 1g acceleration - apparently using a mass driver for propulsion!) But technology isn't the point of the story, so that's really a quibble. Highly recommended.



The other book is called Schild's Ladder, by Greg Egan. I'm a couple of chapters from finishing it, but I can already easily say that it's his strongest novel yet (though I wouldn't be surprised if upon rereading Diaspora I'd make it a tie). Before I plunge into a mini-review of Schild's Ladder, I can't stop myself from raving about Greg Egan.



Mr. Egan is an Australian writer of hard science fiction, and is considered one of the strongest writers working in science fiction today. My opinion of him is rather higher, as I place him among my favorite writers period.



Now, 'hard science fiction' just means science fiction where the 'science' part sticks to plausible science, and the science is actually a significant part of the story. Greg Egan's work is, for lack of a better metaphor, diamond-hard. I'm aware of no one writing 'harder' SF than Egan. Egan is very well educated in mathematics and physics (for instance, he's collaborated with some full-time researchers on a bona-fide paper on loop quantum gravity), and is also well-versed in biology (I'm sure he knows about other things, too, but those are the ones I've seen in his writing).



The majority of his work harps on the themes of the nature of consciousness, the nature of physical laws and mathematics, AI, and so on. His books attacking these themes tend to be set in the Really Far Future, which is nonetheless plausible. The main characters are not even Homo sapiens in these books (for the most part), but they are both sympathetic and authentically different - as they damn well should be. For instance, in Diaspora, the main character is Yatima, a (way) superhuman AI, who happens to be a curious sort of orphan, and who also happens to be a mathematician. Yatima is, if memory serves, neuter - doesn't have a sex. As are many (but not by any means all) of the AIs. Who actually happen to be human descendants. The second half of Diaspora takes place in a five-dimensional universe. Yes, you get to actually try to visualize this, to play with it, to roll in it like a happy pig in pungent poo. Go read it. It will blow your mind. But anyway, moving on...



To get a taste of Egan's work, hit his website, and check out one of his short stories. I recommend starting with Border Guards (click 'complete text' to read the story). If the more technical parts of the tale read like gibberish, feel free to treat them like Star Trek technobabble (with the knowledge that actually, it's not technobabble).



So. Err. To finally get to Schild's Ladder. I haven't quite digested it yet (as I said, I haven't even finished it just yet!), but I can tell you it's about a physics experiment that goes wrong and results in the creation of what is apparently a more stable vacuum than, err, our vacuum, which then starts expanding at half the speed of light and gobbling things up. The book explores the various fun consequences. It's great. I'll probably say more once I'm done with the book and have had a chance to digest it a bit.



Enough for now, 'cause this is getting painfully long.




Sunday, July 14, 2002

I'm home again. Quebec is very beautiful. My French is terrible, but I hadn't had to use it for six years...

Wednesday, July 03, 2002

I'll be out of town for a week and a half starting July 4. I'm quite unlikely to have net access during that time, so no updates are likely. Heh, I suppose I've been slacking on the blogging front the last few days, but I've been running about like a ferret on crack doing various stuff, and haven't really had the time or the inspiration. Hopefully I'll have some updates when I return.