2025/06/20

[DRAFT] From Inner and Outer Products to Generalized Stokes

When I first learned about vector spaces and their dot and cross products, I felt that something was missing from the explanations. I didn’t feel like I got a good intuition, and the cross product in particular felt mysterious and clunky; why did it have to give another vector, and why did it only seem to work in three dimensions? This blog post is an answer for past me. We’ll get our feet wet by building a better intuition for the dot product, then discover a generalization of the cross product that works in any dimension, not just 3D.

I assume basic familiarity with dot products and cross products; this post does not do a good job of explaining them if you’ve never seen them before!

The Inner Product

Let’s start with the inner/dot product, the more familiar of the two. You may have seen it defined as $\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + \dots$, but I want to approach it from the side of pure geometry first instead of using coordinates. Forget about writing vectors as lists of numbers for a second, and simply think of them as directions and magnitudes, or arrows in space. We will start from the definition that a dot product is the scalar value of projecting one vector onto the other and multiplying the resulting lengths:

$$\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\cos\theta$$
From here, let us consider the properties that this has. The definition of $\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\cos\theta$ suggests a few things:

$$\vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}, \qquad (c\vec{a}) \cdot \vec{b} = c\,(\vec{a} \cdot \vec{b}), \qquad \vec{a} \cdot \vec{a} = |\vec{a}|^2$$
By thinking about it for a bit longer, we also get that it should be distributive:

$$\vec{a} \cdot (\vec{b} + \vec{c}) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}$$
These properties of the dot product actually already imply the Pythagorean theorem! From our definition of the dot product, we know that $\vec{v} \cdot \vec{v} = |\vec{v}|^2$. We then want to write our vector $\vec{v}$ as a linear combination of two orthonormal basis vectors $\hat{x}$ and $\hat{y}$:

$$\vec{v} = a\hat{x} + b\hat{y}$$
Here, orthonormal just means that they are perpendicular, with $\hat{x} \cdot \hat{y} = 0$, and that they have a length of 1, with $\hat{x} \cdot \hat{x} = \hat{y} \cdot \hat{y} = 1$. If we just expand the dot product,

$$\begin{aligned}
|\vec{v}|^2 &= \vec{v} \cdot \vec{v} \\
&= (a\hat{x} + b\hat{y}) \cdot (a\hat{x} + b\hat{y}) \\
&= a^2\,(\hat{x} \cdot \hat{x}) + 2ab\,(\hat{x} \cdot \hat{y}) + b^2\,(\hat{y} \cdot \hat{y}) \\
&= a^2 + b^2
\end{aligned}$$
Hey, the 3rd line seems highly suggestive of the law of cosines… And indeed, if we draw a triangle of tip-to-tail vector addition with the vectors $\vec{a}$, $\vec{b}$, and $\vec{c} = \vec{a} + \vec{b}$, we can see that

$$|\vec{c}|^2 = |\vec{a}|^2 + |\vec{b}|^2 + 2\,\vec{a} \cdot \vec{b} = |\vec{a}|^2 + |\vec{b}|^2 + 2\,|\vec{a}|\,|\vec{b}|\cos\theta$$

Here $\theta$ is the angle between $\vec{a}$ and $\vec{b}$; the interior angle of the triangle is $\pi - \theta$, which recovers the usual minus sign in the law of cosines.
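These identities are easy to check numerically. Below is a quick sketch in plain Python (my own illustration, not from the post): it compares the coordinate formula for the dot product against the $|\vec{a}|\,|\vec{b}|\cos\theta$ definition, then checks the law-of-cosines identity for $\vec{c} = \vec{a} + \vec{b}$.

```python
import math

a = (3.0, 4.0)  # arbitrary example vectors
b = (1.0, 2.0)

def dot(u, v):
    """Coordinate formula for the dot product."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# Geometric definition: |a||b|cos(theta), with theta from the polar angles.
theta = math.atan2(a[1], a[0]) - math.atan2(b[1], b[0])
assert math.isclose(dot(a, b), norm(a) * norm(b) * math.cos(theta))

# Law of cosines in the form |a + b|^2 = |a|^2 + |b|^2 + 2 a.b:
c = (a[0] + b[0], a[1] + b[1])
assert math.isclose(dot(c, c), dot(a, a) + dot(b, b) + 2 * dot(a, b))
```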
The Outer Product

Now for the fun part! Once again, I will start from pure geometry without any reference to coordinates, but instead of starting directly from the cross product, let’s start with the wedge product $\vec{a} \wedge \vec{b}$. It’s a generalization of the cross product and it’s subtly different, but it fundamentally does the same thing, and it has the advantage of also working in 2D, where things are easier to explain. Whereas the inner product looks inwards and is about projecting vectors onto each other, the outer product extends outwards and is about the space spanned by the two vectors. We will define the outer product $\vec{a} \wedge \vec{b}$ as the parallelogram formed by the vectors $\vec{a}$ and $\vec{b}$:

Just trust that this thing “is a parallelogram” for now; we can still make precise mathematical statements with this. For example, we know the area of the parallelogram:

$$|\vec{a} \wedge \vec{b}| = |\vec{a}|\,|\vec{b}|\sin\theta$$
This gives us a nice symmetry with the case of the dot product, but beware! $\vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}$, but $\vec{a} \wedge \vec{b} \neq \vec{b} \wedge \vec{a}$. Instead of being commutative, the wedge product is anti-commutative, where $\vec{a} \wedge \vec{b} = -\vec{b} \wedge \vec{a}$. We have this sense of “negative area,” or maybe more accurately orientation, where our parallelogram can either be face-up (positive) or face-down (negative). Which side is positive and which is negative is just a matter of arbitrary convention (right hand rule!), but if you’ve studied cross products or matrix determinants before, this concept will be familiar.

Like the dot product, it turns out that the wedge product works nicely with scalar multiplication and is both left-distributive and right-distributive. You can draw the diagrams to convince yourself of this as an exercise (I’m too lazy to make another tikz diagram). Also, since the area of the parallelogram vanishes as the vectors become parallel (and $\sin\theta \to 0$), we have $\vec{a} \wedge \vec{a} = 0$.

Ok, that was a lot. Let’s work through an example of taking a wedge product in 2D: Let $\vec{a} = a_1\hat{x} + a_2\hat{y}$ and $\vec{b} = b_1\hat{x} + b_2\hat{y}$.

$$\begin{aligned}
\vec{a} \wedge \vec{b} &= (a_1\hat{x} + a_2\hat{y}) \wedge (b_1\hat{x} + b_2\hat{y}) \\
&= a_1 b_1\,(\hat{x} \wedge \hat{x}) + a_1 b_2\,(\hat{x} \wedge \hat{y}) + a_2 b_1\,(\hat{y} \wedge \hat{x}) + a_2 b_2\,(\hat{y} \wedge \hat{y}) \\
&= (a_1 b_2 - a_2 b_1)\,(\hat{x} \wedge \hat{y})
\end{aligned}$$
Hey! That’s the formula for the determinant of the 2x2 matrix $\begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}$ whose columns are $\vec{a}$ and $\vec{b}$. Neat! Also notice how the terms that go to 0 in this expansion are exactly the opposite of the terms that went to 0 when we did this with the inner product.
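To make this concrete, here’s a small Python sketch (my own illustration, not from the post) that computes the coefficient of $\hat{x} \wedge \hat{y}$ and checks the properties above: the determinant formula, anti-commutativity, and vanishing on parallel vectors.

```python
def wedge_2d(a, b):
    """Coefficient of xhat ^ yhat in the 2D wedge product a ^ b."""
    return a[0] * b[1] - a[1] * b[0]

a, b = (2.0, 1.0), (1.0, 3.0)

# Matches the 2x2 determinant with a and b as columns:
assert wedge_2d(a, b) == 2.0 * 3.0 - 1.0 * 1.0

# Anti-commutativity: swapping the arguments flips the sign.
assert wedge_2d(b, a) == -wedge_2d(a, b)

# A vector wedged with itself (or any parallel vector) vanishes.
assert wedge_2d(a, a) == 0.0
assert wedge_2d(a, (4.0, 2.0)) == 0.0  # (4, 2) is parallel to (2, 1)
```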

What we’ve discovered here is a bivector, aka a 2-form. It’s just an upgraded version of a vector (which we also call a 1-form), where instead of having an oriented 1D length, we have an oriented 2D area. Where vectors have a length and a direction, bivectors have an area and an orientation. While this orientation can only be one of two possibilities (like heads or tails) in a 2D vector space, in 3D you have the freedom to face any direction you want! This is how a bivector ties into what you already know about the cross product: the cross product $\vec{a} \times \vec{b}$ gives you a vector perpendicular to the parallelogram formed by $\vec{a}$ and $\vec{b}$, with magnitude equal to its area. All along, that vector was just acting as a proxy for the 2-form. Later, we will see how you can use the Hodge dual to go between the vector of the cross product and the 2-form of the wedge product.

Naturally, since we’ve started counting, we can keep doing wedge products and extend this to volumes, hypervolumes, etc. That number you see in front of “-form” is called the grade, and it tells you the dimension of the object you’re working with. Keep in mind the difference between the dimension of the object and the dimension of the vector space you’re working in; you can have a 2D sheet of paper in your 3D room. If we are careful to avoid the degenerate case where the product vanishes, in general we have a rule that

$$\operatorname{grade}(A \wedge B) = \operatorname{grade}(A) + \operatorname{grade}(B)$$
The degenerate case is analogous to having linearly dependent columns in a matrix, which makes the determinant go to 0 immediately. At this point, I will also make the claim without justification that the wedge product is associative. Also note that the wedge product with a scalar is just normal scalar multiplication.

The Hodge Dual

To fully understand the connection between the wedge product and the cross product, we must understand the Hodge dual. Let’s take the example of a 2D space and consider what objects ($k$-forms) we can build from the wedge product of vectors. Terminology note here: these are called blades if they can be written as the wedge product of some vectors directly, while $k$-forms more generally are linear combinations of blades.

| Grade | Name | Basis Blades | No. Basis Blades |
|-------|------|--------------|------------------|
| 0 | Scalar | $1$ | 1 |
| 1 | Vector | $\hat{x}$, $\hat{y}$ | 2 |
| 2 | Bivector | $\hat{x} \wedge \hat{y}$ | 1 |

The space of 1-forms (vectors) in 2D is spanned by $\hat{x}$ and $\hat{y}$, while the space of 2-forms (bivectors) is spanned by only the single element $\hat{x} \wedge \hat{y}$, since there’s only one real way that areas can be oriented. What does this look like in three dimensions?

| Grade | Name | Basis Blades | No. Basis Blades |
|-------|------|--------------|------------------|
| 0 | Scalar | $1$ | 1 |
| 1 | Vector | $\hat{x}$, $\hat{y}$, $\hat{z}$ | 3 |
| 2 | Bivector | $\hat{x} \wedge \hat{y}$, $\hat{y} \wedge \hat{z}$, $\hat{z} \wedge \hat{x}$ | 3 |
| 3 | Trivector | $\hat{x} \wedge \hat{y} \wedge \hat{z}$ | 1 |

Hey, it’s nice and symmetrical! It turns out we end up getting a Pascal’s triangle structure, as the number of basis blades of grade $k$ is equal to $\binom{n}{k}$, where $n$ is the dimension of the overall vector space. You can see that we are taking all $n$ of the basis vectors and choosing $k$ of them to include in each of our basis blades.
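This counting claim is easy to check mechanically. A sketch in Python (my own encoding, not from the post), where a grade-$k$ basis blade is represented as a sorted tuple of $k$ basis-vector indices:

```python
from itertools import combinations
from math import comb

def basis_blades(n, k):
    """All grade-k basis blades of an n-dimensional space, as index tuples."""
    return list(combinations(range(n), k))

# In 3D: 1 scalar, 3 vectors, 3 bivectors, 1 trivector.
counts_3d = [len(basis_blades(3, k)) for k in range(4)]
assert counts_3d == [1, 3, 3, 1]

# And the counts always match the binomial coefficient C(n, k).
assert all(len(basis_blades(n, k)) == comb(n, k)
           for n in range(6) for k in range(n + 1))
```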

The case of the trivector in 3D and the bivector in 2D are special, as there is only one basis element of that grade. We can pair them up with the scalars, and so we call them pseudoscalars. To match with the scalar unit (aka the number 1), we say that the pseudoscalar unit $\mathbb{1}$ is the result of taking the wedge product of every basis vector. In 2D, $\mathbb{1} = \hat{x} \wedge \hat{y}$, and in 3D, $\mathbb{1} = \hat{x} \wedge \hat{y} \wedge \hat{z}$.

In general, there is an easy mapping from $k$-forms to $(n-k)$-forms, and this mapping is the Hodge dual: the Hodge dual of $A$ is written $\star A$. This takes scalars to trivectors (and vice versa) and vectors to bivectors (and vice versa), and it is how you go from the cross product to the wedge product. We have (loosely) that $\vec{a} \times \vec{b} = \star(\vec{a} \wedge \vec{b})$.
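This loose relation can be verified coefficient-by-coefficient. A Python sketch (my own encoding, not from the post: bivector coefficients are stored in the order $\hat{y} \wedge \hat{z}$, $\hat{z} \wedge \hat{x}$, $\hat{x} \wedge \hat{y}$, so that taking the Hodge dual is just relabeling the same three numbers as vector components):

```python
def wedge_3d(a, b):
    """Bivector coefficients of a ^ b, on (yhat^zhat, zhat^xhat, xhat^yhat)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def hodge_bivector(B):
    """In 3D, star(yhat^zhat) = xhat, star(zhat^xhat) = yhat,
    star(xhat^yhat) = zhat: the coefficients are unchanged, only
    their meaning changes (bivector -> vector)."""
    return B

def cross(a, b):
    """The usual coordinate formula for the cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
assert hodge_bivector(wedge_3d(a, b)) == cross(a, b)
```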

More specifically, the Hodge dual of some unit blade $A$ is the unit blade $\star A$ such that

$$A \wedge \star A = \mathbb{1}$$

Remember that in 3D, the pseudoscalar unit is $\mathbb{1} = \hat{x} \wedge \hat{y} \wedge \hat{z}$. Ok, this is not the real definition of the Hodge star, but you can look on Wikipedia for the real version. The gist is that more generally, we have that $A \wedge \star B = \langle A, B \rangle\,\mathbb{1}$,¹ and the Hodge star is a linear operator. Because it’s a linear operator, we can just take the Hodge dual of the basis blades individually to get the overall Hodge dual of the object we’re working with. Let’s take a look at the Hodge dual in 3D:

$$\star 1 = \hat{x} \wedge \hat{y} \wedge \hat{z}, \qquad \star\hat{x} = \hat{y} \wedge \hat{z}, \qquad \star\hat{y} = \hat{z} \wedge \hat{x}, \qquad \star\hat{z} = \hat{x} \wedge \hat{y}$$

The dual of a vector is a bivector! But what about in 2D, where the dual of a vector is another vector?

$$\star\hat{x} = \hat{y}, \qquad \star\hat{y} = -\hat{x}$$

Huh! The Hodge dual is clearly not a self-inverse: if we apply it twice in this case, we pick up a minus sign, since $\star\star\hat{x} = \star\hat{y} = -\hat{x}$. But what did it do geometrically? This is a 90-degree counterclockwise rotation! We’ve moved the $\hat{x}$-axis to the $\hat{y}$-axis, and the $\hat{y}$-axis to the $-\hat{x}$-axis. More generally, we get a sense for the geometric meaning of the Hodge dual: the Hodge dual gives you the space perpendicular to what you have now. If you have a vector, you get the perpendicular bivector in 3D, and you literally get a perpendicular vector in 2D.
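In code, the 2D vector dual is just the familiar 90-degree rotation applied to the components. A minimal sketch (my own, not from the post):

```python
def hodge_vector_2d(v):
    """2D Hodge dual of a vector: star(xhat) = yhat, star(yhat) = -xhat,
    i.e. a 90-degree counterclockwise rotation of the components."""
    return (-v[1], v[0])

v = (3.0, 4.0)
once = hodge_vector_2d(v)
twice = hodge_vector_2d(once)

# Applying the dual twice picks up a minus sign: star(star(v)) = -v.
assert twice == (-3.0, -4.0)

# And star(v) really is perpendicular to v (their dot product is 0).
assert once[0] * v[0] + once[1] * v[1] == 0.0
```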

The Exterior Derivative

Now for some bonus content to make Dr. Adler’s MVC class make sense. Recall the gradient theorem, the divergence theorem, and Stokes’ theorem:

$$\int_{C} \nabla f \cdot d\vec{\ell} = f(\vec{b}) - f(\vec{a})$$

$$\iiint_{V} (\nabla \cdot \vec{F})\,dV = \oiint_{\partial V} \vec{F} \cdot d\vec{A}$$

$$\iint_{S} (\nabla \times \vec{F}) \cdot d\vec{A} = \oint_{\partial S} \vec{F} \cdot d\vec{\ell}$$
Now that we understand the wedge product, we can begin to understand generalized Stokes:

$$\int_{\Omega} d\omega = \int_{\partial \Omega} \omega$$
where the $d$ is the exterior derivative, defined as

$$d\omega = \sum_{i} \hat{x}_i \wedge \frac{\partial \omega}{\partial x_i}$$
We sum over all the (orthonormal) basis vectors $\hat{x}_i$. Note the difference between $x_i$, the coordinate for the $i$th basis vector, and $\hat{x}_i$, the basis vector itself. We can understand what the exterior derivative does with this table:

| Input to Exterior Derivative | Operation (Up to Hodge Dual) | Result |
|------------------------------|------------------------------|--------|
| Scalar Field | Gradient $\nabla f$ | Vector Field |
| Vector Field | Curl $\nabla \times \vec{F}$ | Bivector Field |
| Bivector Field | Divergence $\nabla \cdot \vec{F}$ | Pseudoscalar Field |
| Pseudoscalar Field | N/A | Just 0 |
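As a sanity check of these rows, here is a sketch of the exterior derivative in 2D using sympy (my own encoding, not a standard sympy API: a form is a dict from sorted tuples of basis indices to coefficient expressions):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def d(form):
    """Exterior derivative d(omega) = sum_i xhat_i ^ d(omega)/dx_i in 2D.
    A form maps sorted index tuples to coefficients: () is the scalar part,
    (0,) and (1,) the xhat/yhat parts, (0, 1) the xhat^yhat part."""
    out = {}
    for blade, coeff in form.items():
        for i, xi in enumerate(coords):
            if i in blade:
                continue  # xhat_i ^ xhat_i = 0
            new_blade = tuple(sorted((i,) + blade))
            # Each swap needed to sort xhat_i into place flips the sign.
            sign = (-1) ** sum(1 for j in blade if j < i)
            out[new_blade] = out.get(new_blade, 0) + sign * sp.diff(coeff, xi)
    return out

f = x**2 * y
grad_f = d({(): f})                   # scalar -> vector: the gradient
assert grad_f == {(0,): 2*x*y, (1,): x**2}

curl = d({(0,): -y, (1,): x})         # vector -> bivector: the (2D) curl
assert curl == {(0, 1): 2}

assert sp.simplify(d(grad_f)[(0, 1)]) == 0   # a preview: d(d(omega)) = 0
```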

The exterior derivative always increases the grade of the field by 1. You can verify algebraically that taking the exterior derivative is equivalent to these operations (maybe I’ll write up a full explanation of this later). Anyways, from the definition of the exterior derivative, we get the identity that

$$d(d\omega) = 0$$
For our derivation, let’s expand:

$$d(d\omega) = \sum_{j} \hat{x}_j \wedge \frac{\partial}{\partial x_j} \left( \sum_{i} \hat{x}_i \wedge \frac{\partial \omega}{\partial x_i} \right) = \sum_{i,j} \hat{x}_j \wedge \hat{x}_i \wedge \frac{\partial^2 \omega}{\partial x_j\,\partial x_i}$$
We see that the terms are $0$ when $i = j$ in the sum, from the property that $\hat{x}_i \wedge \hat{x}_i = 0$, and since $\hat{x}_j \wedge \hat{x}_i = -\hat{x}_i \wedge \hat{x}_j$ while $\frac{\partial^2 \omega}{\partial x_j\,\partial x_i} = \frac{\partial^2 \omega}{\partial x_i\,\partial x_j}$, all the remaining terms can be paired with an opposite that exactly cancels them out! While the wedge product anti-commutes, the partial derivative operators commute (Clairaut’s theorem). From this singular statement that $d(d\omega) = 0$, we get these two identities:

$$\nabla \times (\nabla f) = 0, \qquad \nabla \cdot (\nabla \times \vec{F}) = 0$$
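Both identities can be confirmed symbolically. A sketch with sympy (my own example fields, chosen arbitrarily):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def curl(F):
    P, Q, R = F
    return [sp.diff(R, y) - sp.diff(Q, z),
            sp.diff(P, z) - sp.diff(R, x),
            sp.diff(Q, x) - sp.diff(P, y)]

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

f = sp.sin(x * y) + z**3                   # arbitrary smooth scalar field
F = [x * y * z, sp.exp(y) * z, x**2 + y]   # arbitrary smooth vector field

# curl(grad f) = 0 and div(curl F) = 0, by Clairaut's theorem.
assert all(sp.simplify(c) == 0 for c in curl(grad(f)))
assert sp.simplify(div(curl(F))) == 0
```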

You might have noticed that to take a divergence, we must Hodge dual our vector field first to get a bivector field, then Hodge dual our result again to get back to a scalar from a pseudoscalar. Thus it is useful to define a codifferential operator:

$$\delta\omega = \star\, d\, {\star\,\omega}$$
Ok, this is actually wrong and will sometimes give you the wrong sign, but it’s close enough. The codifferential operator also has the property that $\delta(\delta\omega) = 0$, since the Hodge dual will (mostly) cancel with itself. With this, we can write the Laplacian as

$$\nabla^2 = -(d\delta + \delta d)$$
You can verify this algebraically again (but I don’t want to). The negative sign is from the detail of the real definition of the codifferential that I’m glossing over. I also probably want to go over the geometric connection for curl, div, grad more in this post.
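Sign subtleties aside, on a scalar field this combination reduces to the familiar divergence-of-gradient. A quick sympy check (my own example field, not from the post):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) * sp.sin(y) + x * z**2  # arbitrary smooth test field

# div(grad f), computed component by component...
grad_f = [sp.diff(f, v) for v in (x, y, z)]
div_grad_f = sum(sp.diff(g, v) for g, v in zip(grad_f, (x, y, z)))

# ...equals the Laplacian as a sum of second partials.
laplacian = sum(sp.diff(f, v, 2) for v in (x, y, z))
assert sp.simplify(div_grad_f - laplacian) == 0
```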

Footnotes

  1. Turns out the Hodge dual actually depends on the metric for the inner product. Huh, learned that while researching this.

