
# Notes for Differential Equations


## Chapter 1: Substitution

### 1st Order Linear

### Sigmoid
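The sigmoid is the solution curve of the logistic equation $y' = y(1 - y)$; assuming that is what this section refers to, a quick sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Logistic equation; with y(0) = 1/2 the solution is the standard sigmoid.
ode = sp.Eq(y(x).diff(x), y(x) * (1 - y(x)))
sol = sp.dsolve(ode, y(x), ics={y(0): sp.Rational(1, 2)})
print(sol)  # the standard sigmoid 1/(1 + exp(-x))
```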

### Existence & Uniqueness

For $y' = f(x, y)$ with initial condition $y(x_0) = y_0$, a solution MUST exist on some interval $I$ containing $x_0$ if $f$ is continuous on a rectangle $R$ containing $(x_0, y_0)$. ($I$ may be "narrower" than the rectangle if the solution curve happens to leave the rectangle in the middle.)

This solution MUST be unique on $I$ if $\frac{\partial f}{\partial y}$ (aka $f_y$) is also continuous on $R$.

Failing these tests does not imply nonexistence or non-uniqueness.
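A concrete check of the uniqueness half in sympy (the example $y' = 3y^{2/3}$ with $y(0) = 0$ is my own illustration, not from these notes): the right-hand side is continuous, but $f_y = 2y^{-1/3}$ is not continuous at $y = 0$, and indeed two different solutions pass through the origin.

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # x > 0 keeps the cube-root branch simple
y = sp.Function('y')

# y' = 3*y**(2/3): f is continuous, but f_y = 2*y**(-1/3) is not
# continuous at y = 0, so uniqueness through (0, 0) is not guaranteed.
ode = sp.Eq(y(x).diff(x), 3 * y(x)**sp.Rational(2, 3))

# Both the trivial solution and x**3 satisfy the ODE (and y(0) = 0).
trivial = sp.Eq(y(x), 0)
cubic = sp.Eq(y(x), x**3)
print(sp.checkodesol(ode, trivial), sp.checkodesol(ode, cubic))
```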

### Bernoulli Equations

For $y' + p(x)y = q(x)y^n$: divide by $y^n$ and then substitute $v = y^{1-n}$ (so $v' = (1-n)y^{-n}y'$). The equation becomes linear!
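A sketch of the mechanics in sympy, on a hypothetical example $y' + y = y^2$ (so $n = 2$, and the substitution $v = y^{-1}$ turns it into the linear equation $v' - v = -1$):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
v = sp.Function('v')

# Bernoulli example with n = 2: y' + y = y**2.
# Dividing by y**2 and substituting v = y**(1-2) = 1/y gives v' - v = -1.
bernoulli = sp.Eq(y(x).diff(x) + y(x), y(x)**2)
linear = sp.Eq(v(x).diff(x) - v(x), -1)

v_sol = sp.dsolve(linear, v(x))        # v = C1*exp(x) + 1
y_from_v = sp.Eq(y(x), 1 / v_sol.rhs)  # undo the substitution

# The recovered y solves the original Bernoulli equation.
print(sp.checkodesol(bernoulli, y_from_v))
```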

### Homogeneous Equations (1st order)

Homogeneous function:

$$f(tx, ty) = t^k f(x, y)$$

Is "zoom invariant" and said to be of degree $k$. The zoom operation changes scale by the same amount on all axes.

For differential equation:

$$M(x, y)\,dx + N(x, y)\,dy = 0$$

When $M$ and $N$ are homogeneous with the same degree, the differential equation can be rewritten into $y' = F(y/x)$ and solved with substitution $v = y/x$ (so $y = vx$ and $y' = v + xv'$):

$$v + x\frac{dv}{dx} = F(v)$$

Becomes a separable differential equation!
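As a worked check (my own example, not from the notes): $y' = (x + y)/x$ has numerator and denominator homogeneous of degree 1; $v = y/x$ gives $xv' = 1$, so $y = x\ln x + Cx$. Verifying with sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')
C = sp.symbols('C')

# y' = (x + y)/x: both M and N are homogeneous of degree 1.
# Substituting y = v*x gives v + x*v' = 1 + v, i.e. the separable
# equation x*v' = 1, so v = ln(x) + C and y = x*ln(x) + C*x.
ode = sp.Eq(y(x).diff(x), (x + y(x)) / x)
by_hand = sp.Eq(y(x), x * sp.log(x) + C * x)
print(sp.checkodesol(ode, by_hand))
```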

### Exact Diff Eq

Find a potential function $F(x, y)$ such that

$$dF = M(x, y)\,dx + N(x, y)\,dy = 0$$

Let $M = F_x$ and $N = F_y$. By Clairaut's theorem:

$$M_y = F_{xy} = F_{yx} = N_x$$

Because $M$ and $N$ are the result of taking partial derivatives, we can directly integrate (treating the other variable as a constant) to find the original potential function $F$.
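A minimal sympy sketch of the integration step, on an assumed example $(2xy)\,dx + (x^2 - 1)\,dy = 0$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Exactness test for (2*x*y) dx + (x**2 - 1) dy = 0
M = 2 * x * y
N = x**2 - 1
assert sp.diff(M, y) == sp.diff(N, x)  # Clairaut: M_y = N_x

# Integrate M in x, then add whatever y-only terms N still demands.
F = sp.integrate(M, x)                  # x**2*y, may be missing a g(y) term
g = sp.integrate(N - sp.diff(F, y), y)  # recovers g(y) = -y
F = F + g
print(F)  # potential: x**2*y - y
```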

## Chapter 2: SOLDEs

### 2nd Order Linear

Uniqueness: For $A(x)y'' + B(x)y' + C(x)y = F(x)$, the constraints $y(a) = b_0$ and $y'(a) = b_1$ specify a unique solution on an open interval $I$ containing $a$ if $A$, $B$, $C$, and $F$ are continuous on the interval (and $A \neq 0$ on that interval).

### Associated Homogeneous Equation

Superposition: The space of solutions to a homogeneous equation is spanned by any 2 linearly independent solutions. For higher orders, the dimension of the solution space equals the order of the equation.

Uniqueness implies that 2 linearly independent solutions must exist for any homogeneous equation, and this holds generally for higher orders. Another argument (for constant coefficients) is that there are that many roots (real or complex) of the characteristic equation by the fundamental theorem of algebra.

To solve, we factor a polynomial of the differential operator $D = \frac{d}{dx}$ (the "characteristic polynomial"):

$$ay'' + by' + cy = (aD^2 + bD + c)y = a(D - r_1)(D - r_2)y = 0$$

Each root $r_i$ contributes a basis solution $e^{r_i x}$.
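For example (assuming constant coefficients, as in $y'' - 3y' + 2y = 0$, my own example), the characteristic roots line up with the exponentials `dsolve` returns:

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

# y'' - 3y' + 2y = 0  ->  characteristic polynomial r**2 - 3r + 2
rts = sp.roots(r**2 - 3*r + 2)  # {root: multiplicity}
print(rts)

# Each simple root r gives a basis solution exp(r*x); dsolve agrees.
ode = sp.Eq(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), 0)
sol = sp.dsolve(ode, y(x))
print(sol)  # y(x) = C1*exp(x) + C2*exp(2*x)
```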

#### Duplicated & Imaginary Roots

Roots give the solutions as $x^k e^{rx}$, where $r$ is a root and $k$ is a nonnegative integer less than the multiplicity of the root. By the exponential shift theorem:

$$(D - r)^m \left( x^k e^{rx} \right) = e^{rx} D^m x^k$$

By induction, this will go to 0 for any $k < m$. Thus, a root of multiplicity $m$ will produce the solution

$$\left( c_0 + c_1 x + \cdots + c_{m-1} x^{m-1} \right) e^{rx}$$

Imaginary roots $a \pm bi$ come paired with their complex conjugate, and the solutions are $e^{(a+bi)x}$ and $e^{(a-bi)x}$:

$$y = C_1 e^{(a+bi)x} + C_2 e^{(a-bi)x} = e^{ax}\left( C_1 e^{ibx} + C_2 e^{-ibx} \right)$$

Because we know that $\cos$ is even and $\sin$ is odd and that $e^{i\theta} = \cos\theta + i\sin\theta$, we can rewrite as:

$$y = e^{ax}\left( (C_1 + C_2)\cos bx + i(C_1 - C_2)\sin bx \right)$$

Then, we can define new constants $c_1 = C_1 + C_2$ and $c_2 = i(C_1 - C_2)$, and this basis $\{e^{ax}\cos bx,\; e^{ax}\sin bx\}$ also spans the same space as the old one!
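A quick sympy check of the real basis on an assumed example, $y'' + 2y' + 5y = 0$ (characteristic roots $-1 \pm 2i$):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = sp.Function('y')

# y'' + 2y' + 5y = 0: roots are -1 +/- 2i, so the real basis
# is exp(-x)*cos(2x) and exp(-x)*sin(2x).
ode = sp.Eq(y(x).diff(x, 2) + 2*y(x).diff(x) + 5*y(x), 0)
real_form = sp.Eq(y(x), sp.exp(-x) * (c1*sp.cos(2*x) + c2*sp.sin(2*x)))
print(sp.checkodesol(ode, real_form))
```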

### General Solution

First, solve the associated homogeneous equation (giving the "complementary function" $y_c$) and add it to a particular solution $y_p$ of the inhomogeneous equation: $y = y_c + y_p$. This works because differentiation is linear.

### Wronskian

$$W(f_1, \ldots, f_n) = \det \begin{bmatrix} f_1 & \cdots & f_n \\ f_1' & \cdots & f_n' \\ \vdots & & \vdots \\ f_1^{(n-1)} & \cdots & f_n^{(n-1)} \end{bmatrix}$$

A nonzero Wronskian proves that all the functions are linearly independent; a Wronskian that is identically zero proves linear dependence only for analytic functions.
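Sketching both directions with sympy's built-in `wronskian` (the example pairs are mine):

```python
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')

# Independent pair -> nonzero Wronskian
W1 = sp.simplify(wronskian([sp.exp(x), sp.exp(2*x)], x))
print(W1)  # exp(3*x), never zero

# Dependent (and analytic) pair -> Wronskian identically zero
W2 = sp.simplify(wronskian([sp.sin(x), 2*sp.sin(x)], x))
print(W2)  # 0
```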

I guess differentiation is the simplest operator that works by iterating it? Because a simple multiplication wouldn't work: that produces linear dependence across rows. Does adding one work?

### Method of Undetermined Coefficients

For the inhomogeneous equation $L(D)y = f(x)$: if $f$ is made of $e^{rx}$, $\cos$, $\sin$, and polynomials, this method will work. That is, $f$ is a linear combination of terms of the form $x^m e^{ax}\cos(bx)$ (or the same with $\sin$).

Find another homogeneous equation $A(D)y = 0$ such that $f$ is a solution ($A(D)f = 0$). Then, applying this new operator to both sides produces a new homogeneous equation $A(D)L(D)y = 0$. The particular solution should be of the form of the difference between the solution to $A(D)L(D)y = 0$ and the solution to the original associated homogeneous equation $L(D)y = 0$, as those are the terms that don't go to 0 after $L(D)$. We can then solve for the coefficients such that $L(D)y_p = f$.

Or the shortcut method for finding the trial solution: For each term in $f$, take a linear combination of that term and all its derivatives. Multiply by $x^s$, where $s$ is the smallest nonnegative integer such that no term of the trial appears in the solution to the associated homogeneous equation. This should be equivalent to applying $A(D)$ and solving as described above (multiplying by $x^s$ accounts for $A(D)$ increasing the multiplicity of roots).
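A small sympy sketch of the shortcut on an assumed example, $y'' - y' = x$: the naive trial $ax + b$ collides with the constant homogeneous solution (root 0 of the characteristic polynomial), so it gets multiplied by $x$:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# y'' - y' = x.  Homogeneous solutions are {1, exp(x)}; the naive trial
# a*x + b contains a constant term, so multiply by x**1.
trial = a*x**2 + b*x
residual = sp.expand(trial.diff(x, 2) - trial.diff(x) - x)

# Set the coefficient of every power of x in the residual to zero.
coeffs = sp.solve(sp.Poly(residual, x).coeffs(), [a, b])
yp = trial.subs(coeffs)
print(coeffs, yp)  # a = -1/2, b = -1, so yp = -x**2/2 - x
```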

### Variation of Parameters

The general idea is to find functions $u_1, u_2, \ldots$ to serve as coefficients on the associated homogeneous solutions $y_1, y_2, \ldots$ such that $y_p = u_1 y_1 + u_2 y_2 + \cdots$ works out for the inhomogeneous equation.

Very cursed. Wikipedia explanation is actually very good and easy to understand: https://en.wikipedia.org/wiki/Variation_of_parameters#Description_of_method

The only tricky part is that they skip over a step to get to equation (vii): you must reorder the sum and factor out the $u_i$ to see that those terms go to 0 because each $y_i$ must satisfy the homogeneous equation (ii):

$$\sum_i u_i \left( y_i'' + p\,y_i' + q\,y_i \right) = 0$$

By plugging in to the original equation (with the imposed constraint $u_1' y_1 + u_2' y_2 = 0$):

$$u_1' y_1' + u_2' y_2' = f(x)$$

In the end, the system of equations that we must solve is

$$\begin{aligned} u_1' y_1 + u_2' y_2 &= 0 \\ u_1' y_1' + u_2' y_2' &= f(x) \end{aligned}$$

To solve for $u_1', u_2'$ we can use [[Cramer's Rule]]. For the 2nd-order case, the solution to $y'' + p(x)y' + q(x)y = f(x)$ is:

$$y_p = -y_1 \int \frac{y_2 f}{W}\,dx + y_2 \int \frac{y_1 f}{W}\,dx$$

Where $y_1$ and $y_2$ are two linearly independent solutions to the associated homogeneous equation, and $W$ is the Wronskian of $y_1$ and $y_2$.
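Plugging the 2nd-order formula into sympy for the classic example $y'' + y = \sec x$ (my choice, with $y_1 = \cos x$, $y_2 = \sin x$, $W = 1$):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + y = sec(x): take y1 = cos(x), y2 = sin(x) from the
# associated homogeneous equation y'' + y = 0.
y1, y2, f = sp.cos(x), sp.sin(x), 1 / sp.cos(x)
W = sp.simplify(sp.Matrix([[y1, y2], [y1.diff(x), y2.diff(x)]]).det())  # = 1

# y_p = -y1 * Int(y2*f/W) + y2 * Int(y1*f/W)
yp = sp.simplify(-y1 * sp.integrate(y2 * f / W, x)
                 + y2 * sp.integrate(y1 * f / W, x))
print(yp)  # = x*sin(x) + cos(x)*log(cos(x))

ode = sp.Eq(y(x).diff(x, 2) + y(x), 1 / sp.cos(x))
print(sp.checkodesol(ode, sp.Eq(y(x), yp)))
```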

How did people come up with this shit?? Matrix Form: https://math.stackexchange.com/questions/1215632/variation-of-parameters-why-do-we-assume-the-constraint-v-1-leftt-righty

## Chapter 4: Laplace

Dirac Delta PDF:

$$\delta(t - a) = 0 \text{ for } t \neq a, \qquad \int_{-\infty}^{\infty} \delta(t - a)\,dt = 1$$

Unit step (Heaviside theta) CDF:

$$u(t - a) = \int_{-\infty}^{t} \delta(\tau - a)\,d\tau = \begin{cases} 0 & t < a \\ 1 & t > a \end{cases}$$

(One-sided) Laplace transform is linear:

$$\mathcal{L}\{f\}(s) = \int_0^{\infty} e^{-st} f(t)\,dt, \qquad \mathcal{L}\{af + bg\} = a\mathcal{L}\{f\} + b\mathcal{L}\{g\}$$

The Laplace transform exists for all piecewise continuous functions of exponential order, and it also always shrinks to 0 in the s-domain:

$$\lim_{s \to \infty} F(s) = 0$$

Laplace transform of derivative (piecewise smooth with exponential order), by integration by parts (the boundary term at $\infty$ vanishes because $f$ has exponential order):

$$\mathcal{L}\{f'\}(s) = \int_0^{\infty} e^{-st} f'(t)\,dt = \left[ e^{-st} f(t) \right]_0^{\infty} + s \int_0^{\infty} e^{-st} f(t)\,dt = sF(s) - f(0)$$
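A spot check of the derivative rule with sympy, on the assumed test function $f(t) = te^{-t}$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# f(t) = t*exp(-t) is piecewise smooth with exponential order.
f = t * sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)  # 1/(s + 1)**2

# L{f'} should equal s*F(s) - f(0).
lhs = sp.laplace_transform(f.diff(t), t, s, noconds=True)
rhs = s * F - f.subs(t, 0)
print(sp.simplify(lhs - rhs))  # 0
```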

Convolution Theorem:

$$\mathcal{L}\{f * g\} = F(s)G(s), \qquad (f * g)(t) = \int_0^t f(\tau)\,g(t - \tau)\,d\tau$$

Convolution with the Dirac delta produces a shift (note: only shifts to the right, $a \geq 0$; otherwise the one-sided transform cuts off part of $f$):

$$\mathcal{L}\{\delta(t - a) * f\} = \mathcal{L}\{u(t - a)\,f(t - a)\} = e^{-as} F(s)$$

This is just multiplication by $e^{-as}$. Is this also the exponential shift theorem? It can also be seen using a plain old u-sub.

Polynomials (integration by parts peels off one power of $t$ at a time):

$$\mathcal{L}\{t^n\} = \frac{n}{s}\,\mathcal{L}\{t^{n-1}\} = \cdots = \frac{n!}{s^{n+1}}$$
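Spot-checking $\mathcal{L}\{t^n\} = n!/s^{n+1}$ with sympy for $n = 3$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
n = 3

# Should come out to n!/s**(n+1) = 6/s**4.
F = sp.laplace_transform(t**n, t, s, noconds=True)
print(F)
```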

