It's been too bloody humid outside (even at night!) to go jogging the last few days. So no Mr. Thumpy sightings to report. Damn.

So, here's the promised math update. I've finished the chapter on vector fields (more or less -- I'll need to go back and do some more exercises to get reasonably limber). So now I'm on the chapter about differential forms. It's really very neat stuff. They're introduced by trying to get a coordinate-independent way of talking about gradients and directional derivatives. A problem we immediately run into is that getting a coordinate-independent formulation is easy, but only if we're willing to use the good ol' dot product of vectors. And it turns out that to define a dot product, we need to define an 'inner product' (because that's what a dot product really is), and to get *that*, we need a metric. This is a big no-no. See, the thing is that this text is basically preparing the reader to eventually tackle general relativity at the end of the book, and general relativity can use *lots* of different metrics, and there ain't no 'best' one. So the idea is that it's best to avoid introducing metrics unless we really, really *have* to. And so Baez and Muniain then proceed to show how we can talk about directional derivatives and gradients and all that without coordinates or metrics. And that's where differential forms pop up. It's all fascinating stuff, and very well handled. Did I already mention this book has my recommendation to anyone curious about this stuff?
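To make the "no metric needed" point concrete to myself, here's a toy sketch (entirely my own, not from the book): the directional derivative of a function along a tangent vector can be computed as a plain difference quotient, with no dot product anywhere in sight.

```python
import numpy as np

def directional_derivative(f, p, v, h=1e-6):
    """Directional derivative of f at point p along tangent vector v.

    Computed as a symmetric difference quotient -- note that no dot
    product (and hence no metric) appears: all we use is the ability
    to move a little way along v starting from p.
    """
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

# A toy function on R^2: f(x, y) = x^2 + 3y
f = lambda p: p[0] ** 2 + 3 * p[1]
p = np.array([1.0, 2.0])   # a point of the manifold (here just R^2)
v = np.array([1.0, 1.0])   # a tangent vector at p

print(directional_derivative(f, p, v))  # -> approximately 5.0
```

Writing the *gradient* as a vector, by contrast, would force me to pick an inner product to convert "df" into a vector -- which is exactly the move the book is avoiding.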

Anyway, I'm reading on, trying the occasional exercise, and I get to the part where the relationship between forms and vectors is elucidated. Say we've got a tangent vector space V. We can define a 'dual space', V*, by saying that it's the space of all linear functionals w: V -> R. That is, you feed a tangent vector to a 'dual', and it'll give you a number. (Secretly [and sloppily], 1-forms are just elements of V*, and they're also called 'cotangent vectors' (well, by sloppy people), fwiw.)
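A quick toy picture of this (mine, not the book's; take V = R^3): a functional w just eats vectors and spits out numbers, and "linear" means it plays nicely with sums and scalar multiples.

```python
import numpy as np

# A linear functional w: V -> R on V = R^3, written without any
# reference to an inner product on V -- it's just a linear recipe.
def w(v):
    return 2.0 * v[0] - v[2]

u = np.array([1.0, 0.0, 4.0])
v = np.array([0.0, 1.0, 1.0])
a, b = 3.0, -2.0

# Linearity: w(a*u + b*v) = a*w(u) + b*w(v)
print(w(a * u + b * v))     # -> -4.0
print(a * w(u) + b * w(v))  # -> -4.0, the same number
```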

Notice I'm calling elements of V* 'w'. That's because Baez and Muniain do it that way.

Now, a vector field v on M gives a tangent vector at each point of M, and the vector space of all tangent vectors at a point p in M is called T_pM. Similarly, a 1-form on M gives a 'cotangent' vector at each point in M, and a cotangent vector w is the thing that takes vectors (members of T_pM) to R. The set of cotangent vectors at a point is called T*_pM. It turns out that T*_pM is dual to T_pM. Anyway, that's all preliminary stuff.
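Here's my own toy picture of a 1-form (with M = R^2, nothing from the book): at each point p it hands you a cotangent vector, i.e. a linear functional on T_pM.

```python
import numpy as np

# Toy 1-form on M = R^2: omega = y dx.  At the point p = (x0, y0) it
# gives the cotangent vector that sends a tangent vector v to y0 * v[0].
def omega(p):
    return lambda tangent: p[1] * tangent[0]

p = np.array([1.0, 2.0])    # a point of M
v = np.array([3.0, -7.0])   # a tangent vector in T_pM

print(omega(p)(v))          # -> 6.0, a plain real number
```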

Quoting Baez and Muniain now:

More generally, if we have a linear map from one vector space to another,

f: V -> W

we automatically get a map from W* to V*, the dual of f, written

f*: W* -> V*

and defined by

(f*w)(v) = w(f(v)).

Thus the dual of a vector space is a contravariant sort of beast: linear maps between vector spaces give rise to maps between their duals that go 'backwards'.

Here's the problem: In the last equation in the quote, what, exactly, is w, and what, exactly, is v? As I said above, just a few lines before the material I quoted, w is an element of the dual space to V, that is, V*. But, f* goes from W* to V*, so feeding it an element of V* doesn't make a lot of sense. On the other hand, what does v refer to? Well, v is often used as either a vector field, or a tangent vector in the book so far, depending on context, and it's being fed to f, which takes elements of V and gives elements of W. Notice that v is the lowercase version of V (duh), and this seems a sensible convention - denote 'sets' by uppercase, elements of those sets by lowercase. Could this be what's being used? Hmm. It doesn't quite help us with w, though - it wouldn't make a lot of sense for w to be an element of W, because it's being fed to f*. Stuff that's fed to f* really ought to be elements of W*. So, let's go with that. w is an element of W*, and v is an element of V.

Let's work on the left-hand side first. f*w gives us something that lives in V*. Then we're feeding v to an element of V*, which by definition gives us a real number (remember, V* is dual to V). Hmm.

Now let's work the right-hand side. f(v) gives us something that lives in W. When we then feed that to w, an element of W*, we get a real number, again, by definition, just like above. Hmm.

So, at least we've got real numbers on both sides of the equation. There's a chance, then, that it's a real equation, and not something bogus like saying a vector is a banana. But, how can I tell if the two numbers are the same? Hmm.

Oh! The answer turns out to be that I've a banana for a brain, at best. The two real numbers are *defined* to be the same - hence the phrase 'and defined by' right before the equation in question. That is, f* is a function such that that equation is, err, an equation - the two sides are equal.
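In fact, the definition can be taken completely literally: f*w is just "apply f, then apply w". A toy check in coordinates (my own example, with f as a matrix, V = R^3 and W = R^2):

```python
import numpy as np

# f: V -> W as a matrix (V = R^3, W = R^2).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])

def f(v):
    return A @ v                 # f(v) lives in W

def w(u):
    return 5.0 * u[0] - u[1]     # w: W -> R, an element of W*

def f_star(w):
    # The definition in the quote, taken literally: f*w is the
    # functional on V that first applies f, then applies w.
    return lambda v: w(f(v))

v = np.array([1.0, 1.0, 1.0])
print(f_star(w)(v))   # left-hand side: feed v to f*w, an element of V*
print(w(f(v)))        # right-hand side: the same real number, by definition
```

Both print the same number, and they can't not: the left-hand side is *defined* to be the right-hand side.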

The thing I'm still not comfortable with is *why* the bloody hell a mapping between vector spaces gives rise to an opposite mapping between their dual spaces. I mean, Baez and Muniain say it does, but *why*? I'll bet the answer's in the text somewhere, but I haven't been able to grok the fullness of it yet. I suppose the Stranger In A Strange Land thing to do at this point would be to make hot passionate monkey-love* to the textbook, but a little whiny voice in my head tells me that this is unlikely to help with this specific grokking.
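This doesn't answer the *why*, but the order reversal can at least be watched happening in coordinates (my toy example again, maps as matrices, covectors as row vectors): since the dual map acts by precomposition, pulling a covector back through a composite g o f means pulling it through g first, then through f -- backwards.

```python
import numpy as np

# f: R^2 -> R^3 and g: R^3 -> R^2 as matrices.
F = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
G = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])

w = np.array([1.0, -1.0])   # a covector on the target of g, as a row vector

# (g o f)* w: pull w back through the composite in one go.
lhs = w @ (G @ F)
# (f* o g*) w: pull w back through g first, *then* through f.
rhs = (w @ G) @ F

print(lhs, rhs)   # the same covector on R^2 -- the order got reversed
```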

This all is a long way of making sure that the snippet I quoted makes sense, and that I've got some sort of handle on it. In the end, it turns out that there isn't any typo in there: I was simply being slug-brained and wasn't properly making sense of what was written. On the other hand, it *would* be nice if the text explicitly spelled out what v and w are, instead of leaving it as a sort of implicit exercise, I think. I'm slowly gathering material (that is, questions) for a sci.physics.research post, where I'll suggest this to Dr. Baez (who is a regular there), along with making sure that the way I've got it pictured now is sensible.

I've thought of something else. Look at the quote I give, and then compare its length with the length of the rest of this posting. This is an example of what reading math tends to be like for me (and perhaps other people too, but I don't really know anyone else interested in math that I've talked to about such things). Mathematical notation and arguments tend to be very 'dense', in the sense that each symbol and parenthesis and word are imbued with rather deep meaning, all referencing a large amount of material previously studied. When I'm reading it, I have to unpack these bundles of meaning in my head (and often also on a piece of scrap paper) to really have a chance of understanding the thrust of the arguments, and then carefully repack them to make sure my new internal picture of things corresponds to what's on the page. This posting is sort of this process written down (but only approximately, sadly - there's a lot more that's had to be 'unpacked', but my fingers are tired). It's probably the basic reason why I read math (and other sufficiently technical stuff) a couple of orders of magnitude slower than fiction: the unpacking and repacking take a lot of time. This is also a reason why I can't really skim math.

I suspect this is also a reason why I would sometimes get in trouble in late high school and college math classes: I *could* profitably skim the 'math' texts used early on in my education, because they were relatively easy, but this was a bad habit, and a hard one to break (not realising it *was* a habit didn't help...). I often wonder how others cope with this sort of stuff, and whether some of the other bad math students (I know I can't be the only one!) have gotten in trouble with this sort of thing - that is, not *really* learning to read relatively high-brow technical material.

Hmm. Live and learn.

* - And we mustn't forget to soulfully holler "Thou art God!" at the proper moment in the proceedings. Wouldn't be a genuine 'Stranger' love session without it.
