The Basics

Properties of Real Numbers ($\mathbb{R}$)

The set of real numbers, denoted as $\mathbb{R}$, includes all the numbers on the continuous number line, encompassing both rational and irrational numbers.

Field Properties

The real numbers form a field: for all $a, b, c \in \mathbb{R}$,

Closure: $a + b$ and $ab$ are again real numbers.

Commutativity: $a + b = b + a$ and $ab = ba$.

Associativity: $(a + b) + c = a + (b + c)$ and $(ab)c = a(bc)$.

Identities: $a + 0 = a$ and $a \cdot 1 = a$.

Inverses: every $a$ has an additive inverse $-a$, and every $a \neq 0$ has a multiplicative inverse $a^{-1}$.

Distributivity: $a(b + c) = ab + ac$.

Linear Equations

A linear equation over the field of real numbers sets a linear combination of variables, each multiplied by a real coefficient, equal to a real constant.

Example

The linear equation $4x = 2$ is solved by multiplying both sides by the multiplicative inverse of $4$:

\[x = \frac{1}{4} \cdot 2 = \frac{1}{2}\]

This example demonstrates the closure properties of the field of real numbers under the operations necessary for solving linear equations: the inverse $\frac{1}{4}$ and the product $\frac{1}{2}$ are themselves real numbers.

Properties Applied

These properties underpin the basic algebraic techniques used to solve equations and manipulate expressions. For instance, the equation $ax + b = 0$ is solved by adding $-b$ to both sides and then multiplying by $a^{-1}$ (for $a \neq 0$), giving $x = -b/a$.
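As a small illustration, the same two field operations can be carried out in exact rational arithmetic. This is a minimal Python sketch; the function name `solve_linear` is an illustrative choice, not a standard API.

```python
from fractions import Fraction

def solve_linear(a, b):
    """Solve a*x + b = 0 over the rationals, mirroring the field properties:
    add the additive inverse of b to both sides, then multiply by the
    multiplicative inverse of a (which exists only when a != 0)."""
    if a == 0:
        raise ValueError("a must be nonzero for a unique solution")
    return Fraction(-b, a)

# The example 4x = 2, rewritten as 4x + (-2) = 0:
print(solve_linear(4, -2))  # -> 1/2
```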


Vectors in $\mathbb{R}^2$

Vectors are fundamental in mathematics and physics as they describe both magnitude and direction. In the plane $\mathbb{R}^2$, vectors are used to represent quantities like displacement, velocity, and force.

Representing Vectors

A vector in $\mathbb{R}^2$ can be represented as an ordered pair of real numbers $(x, y)$, which correspond to the vector’s horizontal and vertical components, respectively. This representation is often visualized as an arrow pointing from the origin (0, 0) to the point $(x, y)$ in the Cartesian plane.


Vector Operations

Vector operations are fundamental in various scientific and engineering disciplines, allowing for the manipulation and analysis of quantities that have both magnitude and direction. Below is a detailed list of common vector operations:

Vector Addition: Adding two vectors component-wise: $(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2)$

Scalar Multiplication: Multiplying a vector by a scalar to scale its magnitude: $c(x, y) = (cx, cy)$

Vector Negation: Reversing the direction of a vector: $-(x, y) = (-x, -y)$

Vector Subtraction: Subtracting one vector from another to find the vector difference: $\mathbf{u} - \mathbf{v} = \mathbf{u} + (-\mathbf{v})$

Dot Product: Calculating the dot product of two vectors, which measures the angle between them and determines orthogonality: $(x_1, y_1) \cdot (x_2, y_2) = x_1 x_2 + y_1 y_2$

Orthogonality: Two vectors are orthogonal (perpendicular) exactly when their dot product is zero: $\mathbf{u} \cdot \mathbf{v} = 0$
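The operations above can be sketched in Python, representing a vector in $\mathbb{R}^2$ as an `(x, y)` tuple. The function names are illustrative choices, not a standard library API.

```python
import math

# A vector in R^2 is an (x, y) tuple.

def add(u, v):
    """Component-wise vector addition."""
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    """Scalar multiplication: stretches u by the factor c."""
    return (c * u[0], c * u[1])

def negate(u):
    """Reverse the direction of u."""
    return scale(-1, u)

def subtract(u, v):
    """u - v, defined as u + (-v)."""
    return add(u, negate(v))

def dot(u, v):
    """Dot product; zero exactly when u and v are orthogonal."""
    return u[0] * v[0] + u[1] * v[1]

def is_orthogonal(u, v):
    """Orthogonality test, with a tolerance for floating-point error."""
    return math.isclose(dot(u, v), 0.0, abs_tol=1e-9)

print(add((1, 2), (3, 4)))            # (4, 6)
print(is_orthogonal((1, 0), (0, 5)))  # True
```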

Matrices and Systems of Linear Equations

Matrices are rectangular arrays of numbers, symbols, or expressions arranged in rows and columns, vital for representing and solving systems of linear equations.

Basic Matrix Operations
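Two of the most basic matrix operations, entry-wise addition and row-by-column multiplication, can be sketched in Python with a matrix stored as a list of rows. The helper names here are illustrative.

```python
def mat_add(A, B):
    """Entry-wise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    """Row-by-column product: entry (i, j) is the dot product of
    row i of A with column j of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```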


Gaussian Elimination

Gaussian elimination, also known as row reduction, is a method for solving systems of linear equations. It transforms the matrix into an echelon form using elementary row operations, making the system easier to solve.

Steps in Gaussian Elimination

  1. Form the Augmented Matrix: Combine the coefficient matrix with the constants into an augmented matrix.
  2. Row Reduction: Perform row operations to achieve upper triangular form.
  3. Back Substitution: Solve for the unknowns from the bottom row up.

Example: Solving a System of Equations

Consider the system of linear equations given by:

\[\begin{align*} x + 2y + 3z &= 9 \\ x + 3y + z &= 8 \\ 2x + y + 2z &= 7 \end{align*}\]

Step 1: Form the Augmented Matrix

The augmented matrix for this system is:

\[\begin{bmatrix} 1 & 2 & 3 & | & 9 \\ 1 & 3 & 1 & | & 8 \\ 2 & 1 & 2 & | & 7 \end{bmatrix}\]

Step 2: Row Reduction

Perform row operations to achieve an upper triangular form. First, eliminate $x$ from the second and third rows with $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - 2R_1$.

Resulting matrix:

\[\begin{bmatrix} 1 & 2 & 3 & | & 9 \\ 0 & 1 & -2 & | & -1 \\ 0 & -3 & -4 & | & -11 \end{bmatrix}\]

Next, eliminate $y$ from the third row with $R_3 \to R_3 + 3R_2$.

Resulting matrix:

\[\begin{bmatrix} 1 & 2 & 3 & | & 9 \\ 0 & 1 & -2 & | & -1 \\ 0 & 0 & -10 & | & -14 \end{bmatrix}\]

Step 3: Back Substitution

The third row gives $-10z = -14$, so $z = \frac{7}{5}$. The second row gives $y - 2z = -1$, so $y = -1 + \frac{14}{5} = \frac{9}{5}$. The first row then gives $x = 9 - 2y - 3z = 9 - \frac{18}{5} - \frac{21}{5} = \frac{6}{5}$.
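The elimination and back-substitution steps can be sketched in Python. Using exact `Fraction` arithmetic avoids rounding, and this minimal version assumes no zero pivot is encountered (no row swaps or pivoting).

```python
from fractions import Fraction

def gaussian_elimination(aug):
    """Solve a square system from its augmented matrix [A | b]:
    row-reduce to upper triangular form, then back-substitute.
    Sketch without pivoting: assumes every pivot is nonzero."""
    n = len(aug)
    M = [[Fraction(x) for x in row] for row in aug]
    # Forward elimination: zero out the entries below each pivot.
    for i in range(n):
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            M[r] = [x - factor * y for x, y in zip(M[r], M[i])]
    # Back substitution: solve for the unknowns from the bottom row up.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# The worked example above:
aug = [[1, 2, 3, 9],
       [1, 3, 1, 8],
       [2, 1, 2, 7]]
print([str(v) for v in gaussian_elimination(aug)])  # ['6/5', '9/5', '7/5']
```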


Invertible Matrices

A matrix $A$ is called invertible if there exists a matrix $X$ such that:

\[AX = XA = I\]

where $I$ is the identity matrix. This matrix $X$ is known as the inverse of $A$ and is denoted by $A^{-1}$.

Properties of Invertible Matrices

Uniqueness: the inverse of an invertible matrix is unique.

Inverse of the inverse: $(A^{-1})^{-1} = A$.

Product rule: $(AB)^{-1} = B^{-1}A^{-1}$ when $A$ and $B$ are both invertible.

Transpose rule: $(A^T)^{-1} = (A^{-1})^T$.

Determinant test: $A$ is invertible if and only if $\det(A) \neq 0$.

Example: Matrix Inversion

For the matrix

\[A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\]

this matrix is its own inverse: it swaps the two coordinates of a vector, and swapping twice restores the original. Hence

\[A^{-1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\]

and verifying $AA^{-1} = I$:

\[\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I\]
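For the general $2 \times 2$ case, the closed-form inverse $A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$ can be sketched in Python with exact `Fraction` arithmetic; the function name is an illustrative choice.

```python
from fractions import Fraction

def inverse_2x2(A):
    """Closed-form inverse of a 2x2 matrix [[a, b], [c, d]]:
    swap the diagonal, negate the off-diagonal, divide by the determinant."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[0, 1], [1, 0]]
print(inverse_2x2(A))  # equals A: this matrix is its own inverse
```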

Determinants

The determinant of a matrix is a scalar that provides useful information about the matrix, such as whether it is invertible. A matrix $A$ is invertible if and only if its determinant is non-zero.

Properties of Determinants:

Example: Calculating a Determinant

Consider a $3 \times 3$ matrix:

\[A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\]

The determinant of $A$, denoted $\det(A)$, can be calculated using cofactor expansion:

\[\det(A) = 1 \det\begin{bmatrix} 5 & 6 \\ 8 & 9 \end{bmatrix} - 2 \det\begin{bmatrix} 4 & 6 \\ 7 & 9 \end{bmatrix} + 3 \det\begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix}\]

Evaluating the $2 \times 2$ determinants gives

\[\det(A) = 1(45 - 48) - 2(36 - 42) + 3(32 - 35) = -3 + 12 - 9 = 0,\]

so this particular $A$ is not invertible.
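The cofactor expansion can be sketched as a short recursive Python function. This is illustrative only: expansion along the first row costs $O(n!)$, so it is suitable for small matrices.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with an alternating sign.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det(A))  # 0 -- so A is not invertible
```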

Spaces

Vector Spaces

A vector space consists not just of a set of vectors but also of the operations of vector addition and scalar multiplication, which must satisfy specific axioms such as commutativity, associativity, and distributivity. These operations give the set the structure on which linear algebra is built.

Span, Basis, and Dimension

The span of a set of vectors is the set of all their linear combinations. A basis is a linearly independent set that spans the whole space, and the dimension of the space is the number of vectors in any basis.

Subspaces

A subspace is any subset of a vector space that itself fulfills the properties required to be a vector space. This includes being closed under addition and scalar multiplication and containing the zero vector.

Column/Null Spaces

For a matrix $A$, the column space is the span of its columns, and the null space is the set of all solutions to $Ax = 0$. Both are subspaces.

Properties of Vector Spaces and Subspaces

Example: Standard Basis in $\mathbb{R}^n$

The standard basis for $\mathbb{R}^n$ consists of the vectors $e_1, \dots, e_n$, where $e_i$ has a $1$ in position $i$ and $0$ everywhere else. These vectors are linearly independent and span $\mathbb{R}^n$, so the space has dimension $n$, with each $e_i$ marking an independent axis of movement.
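A quick Python sketch of the standard basis, using the list-of-lists representation from earlier; the function name is an illustrative choice.

```python
def standard_basis(n):
    """The standard basis e_1, ..., e_n of R^n:
    e_i has a 1 in position i and 0 everywhere else."""
    return [[1 if j == i else 0 for j in range(n)] for i in range(n)]

# Any vector is the combination of basis vectors weighted by its own
# coordinates: (x, y, z) = x*e_1 + y*e_2 + z*e_3.
print(standard_basis(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```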