
trying to learn differential geometry


Join me on my mission of trying to understand some of John M. Lee’s Introduction to Smooth Manifolds (Second Edition).

This is a loose collection of notes as I try to explain this to myself.

Motivation

We want a general, abstract way to talk about smooth $n$-dimensional manifolds, so we want to avoid talking about the external ambient space in which the manifold exists (the space it can be embedded or immersed in). Smooth manifolds are topological spaces that locally look like Euclidean space. They also have a smooth structure, which is an atlas of charts that are all smoothly compatible with each other, a chart being a way to assign coordinates to a local region on our manifold.

We want to do calculus on these manifolds and think about scalar fields, vector fields, and covector fields.

Tangent and Cotangent Space

Say $M$ is a smooth manifold and $p \in M$ is a point on it. Then we can define a tangent space $T_pM$ of vectors tangent to $M$ at $p$, where $T_pM$ is a vector space consisting of all directional derivatives evaluated at $p$. These vectors act on smooth scalar fields, which are just smooth functions $f : M \to \mathbb{R}$. A tangent vector with a base point at $p$ is only going to care about the local behavior of the function near $p$. Formally, $T_pM$ is the space of all linear functionals $v : C^\infty(M) \to \mathbb{R}$ that follow the product rule, such that for all $f, g \in C^\infty(M)$ and $v \in T_pM$:

$$v(fg) = f(p)\,v(g) + g(p)\,v(f)$$

These spaces have the same dimension as the manifold $M$, and there is a different tangent space for each point $p \in M$. Because these spaces are finite-dimensional vector spaces, they have a dual space with the same dimension, denoted $T_p^*M$. Elements of this dual space are covectors: linear functionals that act on vectors.
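As a quick sanity check (not from the book), here is a sketch in Python using sympy of a tangent vector on $\mathbb{R}^2$ acting as a derivation: a directional derivative at a point, which satisfies the product rule above. The base point and vector components are arbitrary choices.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y
g = sp.sin(x) + y

p = {x: 1, y: 2}   # base point p (arbitrary choice)
a, b = 3, -1       # components of the tangent vector v = a ∂x|_p + b ∂y|_p

def v(h):
    # directional derivative at p: v(h) = a ∂h/∂x + b ∂h/∂y, evaluated at p
    return (a * sp.diff(h, x) + b * sp.diff(h, y)).subs(p)

# product rule (Leibniz): v(fg) = f(p) v(g) + g(p) v(f)
lhs = v(f * g)
rhs = f.subs(p) * v(g) + g.subs(p) * v(f)
assert sp.simplify(lhs - rhs) == 0
```

Note that $v$ only ever looks at derivatives of its argument at $p$, which is the "only cares about local behavior" property.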

We can also join all these (co)tangent spaces together into (co)tangent bundles, the disjoint union of all the $T_pM$ or $T_p^*M$ spaces for all points $p \in M$:

$$TM = \bigsqcup_{p \in M} T_pM, \qquad T^*M = \bigsqcup_{p \in M} T_p^*M$$

Whereas $T_pM$ and $T_p^*M$ were $n$-dimensional spaces (assuming $M$ is $n$-dimensional), $TM$ and $T^*M$ are $2n$-dimensional spaces. These tangent bundles also come with canonical projection maps $\pi : TM \to M$ and $\pi : T^*M \to M$ that map every (co)vector to its base point. We can also be fancy and say that sections of $TM$ or $T^*M$ represent (co)vector fields on $M$. For the vector field case, a section is just a right-inverse $X$ of $\pi$ such that $\pi \circ X = \mathrm{Id}_M$, so $X$ must map each point $p$ to a tangent vector base-pointed at $p$. Similarly, a covector field is a right-inverse of $T^*M$’s projection map.
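A minimal sketch of the section idea, in plain Python: modeling $T\mathbb{R}^2$ as pairs (base point, components), a vector field is a map $X$ into that bundle whose composite with the projection is the identity. The specific rotational field is an arbitrary illustrative choice.

```python
# A vector field on R^2 as a section of TR^2 ≅ R^2 × R^2:
# X maps each point p to the pair (p, components of the tangent vector at p).
def X(p):
    px, py = p
    return (p, (-py, px))   # an arbitrary rotational vector field

def pi(tangent):            # canonical projection TR^2 → R^2
    base_point, _ = tangent
    return base_point

p = (1.0, 2.0)
assert pi(X(p)) == p        # section property: π ∘ X = Id
```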

Now we can consider how scalar, vector, and covector fields can act on each other:

  1. vector field ( scalar field ) = scalar field
  2. covector field ( vector field ) = scalar field

We can now take directional derivatives of scalar functions, but we still don’t have any way to talk about things like the gradient or curl. We can’t even take directional derivatives of vector-valued functions! It turns out that taking directional derivatives of vector fields is actually really hard (for reasons we’ll see later), so we’ll start with the gradient, but first we need some machinery.

Category Stuff

We can consider $T$ to be an endofunctor on the category of smooth manifolds (fun fact: it is a monad). The tangent bundle endofunctor takes manifolds $M$ to their tangent bundles $TM$, and it takes smooth maps $F : M \to N$ to their differentials $dF : TM \to TN$. Restricting to a point $p$, you can also consider $T_p$ to be a functor from the category of smooth manifolds to the category of vector spaces.

For all $v \in T_pM$ and all scalar fields $f \in C^\infty(N)$, we have

$$dF_p(v)(f) = v(f \circ F)$$

If $v$ is a tangent vector base-pointed at $p$, then $dF_p(v)$ must be a tangent vector base-pointed at $F(p)$. This is called a pushforward since we are pushing a tangent vector forward from the domain of $F$ to its codomain. Additionally, the functor property that $d(G \circ F) = dG \circ dF$ is just the chain rule, and $dF_p$ is a linear map as it inherits the linearity from the vector space structure of $T_pM$ and $T_{F(p)}N$.
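In coordinates, $dF_p$ is just the Jacobian of $F$ at $p$, so we can check the functor property numerically with sympy. The maps $F$ and $G$ below are arbitrary smooth maps $\mathbb{R}^2 \to \mathbb{R}^2$ chosen for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
u, v = sp.symbols('u v')

F = sp.Matrix([x * y, x + y])   # F : R^2 → R^2 (arbitrary example)
G = sp.Matrix([u**2, u - v])    # G : R^2 → R^2 (arbitrary example)

JF = F.jacobian([x, y])         # dF in coordinates
JG = G.jacobian([u, v])         # dG in coordinates

# compose G ∘ F and take its Jacobian directly
GoF = G.subs({u: F[0], v: F[1]}, simultaneous=True)
JGoF = GoF.jacobian([x, y])

# chain rule / functoriality: d(G∘F)_p = dG_{F(p)} ∘ dF_p
p = {x: 1, y: 2}
lhs = JGoF.subs(p)
rhs = JG.subs({u: F[0].subs(p), v: F[1].subs(p)}) * JF.subs(p)
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```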

We can also think of $F$ as inducing a pullback $F^*$ acting on scalar fields, where $F^*f = f \circ F$. This pullback is also linear because it inherits linearity from function composition, and we actually have a contravariant functor (aka a functor to the opposite category) from the category of manifolds and maps between them to the category of vector spaces and linear operators. Again, this is a contravariant functor because if $F : M \to N$, then $F^* : C^\infty(N) \to C^\infty(M)$.
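The contravariance is easy to see with plain Python closures: pulling back along a composite reverses the order of the pullbacks. The maps below are arbitrary examples on $\mathbb{R}$.

```python
def F(p):  # F : R → R (arbitrary example)
    return p + 1

def G(q):  # G : R → R (arbitrary example)
    return 2 * q

def pullback(phi):
    # phi ↦ phi*: the pullback takes a scalar field f to f ∘ phi
    return lambda f: (lambda p: f(phi(p)))

f = lambda r: r**2          # a scalar field on the codomain

GoF = lambda p: G(F(p))
for p in [-1.0, 0.0, 3.5]:
    # contravariance: (G ∘ F)* f = F*(G* f)
    assert pullback(GoF)(f)(p) == pullback(F)(pullback(G)(f))(p)
```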

Because $T_p^*M$ is dual to $T_pM$, a map $F : M \to N$ also defines a pullback $dF_p^* : T_{F(p)}^*N \to T_p^*M$, which is just the transpose of the differential. For all vectors $v \in T_pM$ and covectors $\omega \in T_{F(p)}^*N$, we have

$$dF_p^*(\omega)(v) = \omega(dF_p(v))$$

The fact that $T$ is a functor also gives us a nice interpretation of $\pi$ in general as a natural transformation $\pi : T \Rightarrow \mathrm{Id}$, since we have the commutative square:

$$\begin{array}{ccc}
TM & \xrightarrow{\;dF\;} & TN \\
{\scriptstyle\pi_M}\downarrow & & \downarrow{\scriptstyle\pi_N} \\
M & \xrightarrow{\;F\;} & N
\end{array}$$

The Gradient

The gradient comes out of an operation called the exterior derivative, and in the case of the gradient it takes a scalar field and returns a covector field. This operation is almost the exact same as the differential, and we actually write it the same way, but it has subtly different semantics. If we had some function $f : M \to \mathbb{R}$, then the gradient is a covector field $df$ that obeys

$$df_p(v) = v(f)$$

for all points $p \in M$ and tangent vectors $v \in T_pM$. This is basically just the directional derivative of $f$ along $v$. But doesn’t this contradict our original definition of the differential $df : TM \to T\mathbb{R}$? Remember that the condition for our original $df$ was that for all $g \in C^\infty(\mathbb{R})$, we have

$$df_p(v)(g) = v(g \circ f)$$

The covector definition is obtained by eliminating $g$, setting it to $\mathrm{Id}_{\mathbb{R}}$. In the covector interpretation, we have $df_p(v) = v(f)$, but this is exactly the same as our original interpretation (restricted to tangents at point $p$) since $v(\mathrm{Id}_{\mathbb{R}} \circ f) = v(f)$. So, if we call our covector definition $df$ and our original pushforward $f_*$, we can write

$$df_p(v) = (f_*)_p(v)(\mathrm{Id}_{\mathbb{R}})$$

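To see that $df_p(v) = v(f)$ really computes a directional derivative, here is a sketch with sympy: pair the components of $df$ with a tangent vector and compare against a finite-difference approximation. The function, point, and vector are arbitrary choices.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(y)       # arbitrary scalar field on R^2

# df in coordinates: components (∂f/∂x, ∂f/∂y) in the dual basis (dx, dy)
df = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

p = {x: 1.0, y: 0.0}           # base point
vvec = sp.Matrix([2, 5])       # tangent vector v = 2 ∂x|_p + 5 ∂y|_p
exact = float((df.T * vvec)[0].subs(p))   # df_p(v)

# numeric directional derivative: (f(p + hv) − f(p)) / h for small h
h = 1e-6
fp = float(f.subs(p))
fph = float(f.subs({x: 1.0 + h * 2, y: 0.0 + h * 5}))
approx = (fph - fp) / h
assert abs(exact - approx) < 1e-4
```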
Velocity Vectors

Let’s consider a concrete example of what $T_pM$ looks like. Let’s say we have $M = \mathbb{R}^n$, where $x = x^i e_i$ in Einstein summation notation for each coordinate $x^i$ of $x$. Then the tangent space $T_p\mathbb{R}^n$ is a vector space spanned by

$$\left.\frac{\partial}{\partial x^1}\right|_p, \;\dots,\; \left.\frac{\partial}{\partial x^n}\right|_p$$

where we can use the familiar notion of the partial derivative evaluated at a point, and we can view it as a derivation $C^\infty(\mathbb{R}^n) \to \mathbb{R}$. The full tangent vector bundle $T\mathbb{R}^n$ is isomorphic to $\mathbb{R}^n \times \mathbb{R}^n$, and we can write each element as a linear combination of the basis vectors of the corresponding $T_p\mathbb{R}^n$:

$$v = v^i \left.\frac{\partial}{\partial x^i}\right|_p$$

Another thing we can now do is compute the velocity vector of a curve $\gamma : I \to M$ (where $I \subseteq \mathbb{R}$) at some point $\gamma(t_0)$ for some $t_0 \in I$. We use the differential $d\gamma : TI \to TM$ (but we really only care about $d\gamma_{t_0}$), and since $I \subseteq \mathbb{R}$, elements of $T_{t_0}I$ are just some multiple of the basis vector $\left.\frac{d}{dt}\right|_{t_0}$. Thus, we write

$$\gamma'(t_0) = d\gamma_{t_0}\!\left(\left.\frac{d}{dt}\right|_{t_0}\right)$$

which can act on functions (scalar fields) $f$ like so:

$$\gamma'(t_0)(f) = d\gamma_{t_0}\!\left(\left.\frac{d}{dt}\right|_{t_0}\right)(f) = \left.\frac{d}{dt}\right|_{t_0}(f \circ \gamma)$$

Notice how, since $\left.\frac{d}{dt}\right|_{t_0}(f \circ \gamma) = (f \circ \gamma)'(t_0)$, our notation highlights the isomorphism between $T_{t_0}\mathbb{R}$ and $\mathbb{R}$, depending on whether we interpret $\gamma'(t_0)$ as a differential/tangent vector or a normal derivative.
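Here is a sketch with sympy checking the velocity computation on a concrete curve: applying $\gamma'(t_0)$ to a scalar field agrees with the ordinary derivative of $f \circ \gamma$. The curve and scalar field are arbitrary choices.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
gamma = (sp.cos(t), sp.sin(t))   # a curve γ : R → R^2 (arbitrary example)
f = x**2 + 3 * y                 # a scalar field on R^2 (arbitrary example)

t0 = sp.pi / 4

# components of γ'(t0): push d/dt|_{t0} forward through dγ
vx = sp.diff(gamma[0], t).subs(t, t0)
vy = sp.diff(gamma[1], t).subs(t, t0)

# γ'(t0) acting on f as a tangent vector: vx ∂f/∂x + vy ∂f/∂y at γ(t0)
at_p = {x: gamma[0].subs(t, t0), y: gamma[1].subs(t, t0)}
lhs = (vx * sp.diff(f, x) + vy * sp.diff(f, y)).subs(at_p)

# ordinary derivative of f ∘ γ at t0
rhs = sp.diff(f.subs({x: gamma[0], y: gamma[1]}, simultaneous=True), t).subs(t, t0)

assert sp.simplify(lhs - rhs) == 0
```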

The Lie Derivative

Ok, we are almost ready to take a directional derivative of a vector field. Why was it so hard anyways? Consider that the normal definition for the derivative of a vector field $Y$ along a curve $\gamma$ through $p = \gamma(0)$ would be

$$\lim_{t \to 0} \frac{Y_{\gamma(t)} - Y_p}{t}$$

We run into a problem because we have $Y_{\gamma(t)} \in T_{\gamma(t)}M$ but $Y_p \in T_pM$, so in the general manifold case we have no way of adding or subtracting them (they are completely different vector spaces)! The solution is to take the Lie derivative, which is to take the derivative of a vector field along another vector field (rather than a single tangent vector). We will use the vector field to define something called a flow, and use this flow to pushforward tangent vectors from $T_{\theta_t(p)}M$ to $T_pM$.

The flow is essentially the solution of the differential equation defined by a vector field. The flow of a vector field $X$ starting at $p$ is a curve $\theta^{(p)} : \mathbb{R} \to M$ with $\theta^{(p)}(0) = p$ such that $(\theta^{(p)})'(t) = X_{\theta^{(p)}(t)}$ for all $t$. We can define a flow starting at every point $p \in M$, and so we implicitly have a “time-evolution” map $\theta_t : M \to M$ that moves each point for $t$ time steps along the flow defined by $X$. This allows us to use the differential $d(\theta_{-t})$ to take a directional derivative of a vector field $Y$ with respect to $X$:

$$(\mathcal{L}_X Y)_p = \lim_{t \to 0} \frac{d(\theta_{-t})_{\theta_t(p)}\!\left(Y_{\theta_t(p)}\right) - Y_p}{t}$$

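As a concrete example of a flow (a sketch with sympy, not from the book): the rotational vector field $X(x, y) = (-y, x)$ on $\mathbb{R}^2$ has the rotations of the plane as its flow, and we can verify the flow equation symbolically.

```python
import sympy as sp

t = sp.symbols('t')
x0, y0 = sp.symbols('x0 y0')

# vector field X(x, y) = (−y, x) on R^2; its flow rotates points about the origin
def X(p):
    return (-p[1], p[0])

# candidate flow θ_t(x0, y0): rotation by angle t
theta_t = (x0 * sp.cos(t) - y0 * sp.sin(t),
           x0 * sp.sin(t) + y0 * sp.cos(t))

# flow equation: d/dt θ_t(p) = X(θ_t(p)), with initial condition θ_0(p) = p
for i in range(2):
    assert sp.simplify(sp.diff(theta_t[i], t) - X(theta_t)[i]) == 0
assert theta_t[0].subs(t, 0) == x0 and theta_t[1].subs(t, 0) == y0
```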
Yes, $\mathcal{L}_X Y$ is another vector field. This definition is obviously completely awful, but it turns out that for some mysterious, inscrutable reason, it is equivalent to the Lie bracket:

$$\mathcal{L}_X Y = [X, Y] = XY - YX$$

See theorem 9.38 on pg. 229, and please explain it to me if you understand. But how is the Lie bracket a vector field? Since vector fields take scalar fields to scalar fields, we are allowed to apply a second vector field. For example, $Y(f)$ is a scalar field, but the map $f \mapsto X(Y(f))$ is not a vector field, since it does not follow the product rule. The Lie bracket commutator combination thingy, however, does result in a vector field (it follows the product rule, and you can verify it just by expanding). Of course, by $[X, Y]$, I mean the vector field that takes $f$ to $X(Y(f)) - Y(X(f))$.
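You can do that expansion mechanically with sympy. The sketch below (with two arbitrary vector fields on $\mathbb{R}^2$) shows that $XY$ alone violates the product rule while the commutator $[X, Y]$ satisfies it:

```python
import sympy as sp

x, y = sp.symbols('x y')

# two arbitrary vector fields on R^2, given by their components
X = (y, x**2)      # X = y ∂x + x² ∂y
Y = (1, x * y)     # Y = ∂x + xy ∂y

def apply(V, f):
    # apply the vector field V to a scalar field f
    return V[0] * sp.diff(f, x) + V[1] * sp.diff(f, y)

def bracket(V, W, f):
    # [V, W] f = V(W f) − W(V f)
    return apply(V, apply(W, f)) - apply(W, apply(V, f))

f = x * y
g = sp.sin(x)

# XY alone violates the Leibniz rule...
xy_fg = apply(X, apply(Y, f * g))
xy_leibniz = f * apply(X, apply(Y, g)) + g * apply(X, apply(Y, f))
assert sp.expand(xy_fg - xy_leibniz) != 0

# ...but the bracket satisfies it: [X,Y](fg) = f [X,Y]g + g [X,Y]f
lhs = bracket(X, Y, f * g)
rhs = f * bracket(X, Y, g) + g * bracket(X, Y, f)
assert sp.expand(lhs - rhs) == 0
```

Expanding $XY(fg)$ shows the failure explicitly: the extra terms $X(f)Y(g) + X(g)Y(f)$ appear, and they cancel between $XY$ and $YX$ in the commutator.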

Lie Groups

A Lie group $G$ is a manifold that is also a group: it has smooth maps $m : G \times G \to G$ for group multiplication and $i : G \to G$ for inverses. Consequently, for all $g \in G$, we can define a smooth left-translation map $L_g : G \to G$ given by $L_g(h) = gh$. We can study vector fields on $G$, and there are special left-invariant vector fields that do not change after applying $d(L_g)$ for all $g \in G$. That is, $X$ is left-invariant if $d(L_g)(X_h) = X_{gh}$ for all $g, h \in G$. It turns out that these vector fields form a vector space, called the Lie algebra $\mathfrak{g}$ of $G$.

This vector space is isomorphic to $T_eG$, where $e$ is the identity of the group. This is because choosing the value of $X_e$ will automatically set $X_g = d(L_g)(X_e)$ and determine the entire vector field. See theorem 8.37 on pg. 191 for a proof that this is a bijective map between $T_eG$ and the space of left-invariant vector fields.

The exponential map $\exp : \mathfrak{g} \to G$ of a vector in $T_eG$ is then given by taking the flow from the identity along the corresponding left-invariant vector field for one time step.
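For matrix Lie groups, this exponential map coincides with the ordinary matrix exponential $\exp(A) = \sum_k A^k / k!$ (a standard fact, not proved here). A sketch with numpy: exponentiating an element of $\mathfrak{so}(2)$, the Lie algebra of the rotation group $SO(2)$, yields a rotation matrix.

```python
import numpy as np

def matrix_exp(A, terms=30):
    # truncated power series exp(A) ≈ Σ_{k<terms} A^k / k!
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

theta = 0.7                      # arbitrary angle
A = np.array([[0.0, -theta],
              [theta, 0.0]])     # element of the Lie algebra so(2)

R = matrix_exp(A)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)  # exp lands in the group SO(2)
```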

