
Matrices — Machines That Transform Space

Vectors let you describe positions and directions. But what if you need to move an entire world? Rotate a character, scale a terrain mesh, shear a shadow — and do it in a single, clean operation that can be composed, reversed, and fed to a GPU. That is what matrices are for. By the end of this chapter you will know how to read and write a matrix, how to apply one to a vector, how to combine two transformations into a single matrix, and what "undo" means for a matrix. These four skills are the backbone of every game engine transform pipeline.

A Recipe for Moving an Entire World

Imagine you are building a 2D game. Your sprite is a little robot standing at the origin. You need to rotate it 45° and scale it up by 2. You could apply the rotation to every vertex, then apply the scale to every vertex — two passes over potentially thousands of points. Or you could write down a recipe — a single mathematical object that says "rotate 45°, then scale by 2" — and apply the recipe once per vertex.

That recipe is a matrix.

A matrix packages a transformation so completely that you can pass it to a shader, bake it into a scene graph, invert it to go back, and multiply it with another recipe to compose two moves into one. The GPU you are targeting performs millions of matrix-vector multiplications per frame without flinching. Understanding what those matrices contain is the difference between treating your game engine as a black box and actually controlling it.

What Is a Matrix?

A matrix is a rectangular array of numbers arranged in rows and columns.[^1] We describe its size as $m \times n$: $m$ rows, $n$ columns.

$$A = \begin{pmatrix} 3 & 1 \\ -2 & 5 \end{pmatrix}$$

This is a $2 \times 2$ matrix. The entry in row $i$, column $j$ is written $a_{ij}$ (row index first, column index second). So here $a_{11} = 3$, $a_{12} = 1$, $a_{21} = -2$, $a_{22} = 5$.

A $2 \times 3$ example (two rows, three columns):

$$B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{pmatrix}$$

You can think of a matrix as a column of column-vectors (stack the columns side by side) or as a row of row-vectors (stack the rows). Both views are useful; you'll see both in this chapter.

In code, a matrix is just a two-dimensional array:

```ts
// Row-major: matrix[row][col]
const A = [
  [3,  1],
  [-2, 5],
];
```

INFO

Row-major vs. column-major. Math conventions write matrices in row-major order (row index first). OpenGL and GLSL use column-major order internally. This difference affects how you lay out data in memory, but not the mathematics — just be aware which convention your library uses.
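To make the layout difference concrete, here is a sketch of the same $2 \times 2$ matrix flattened into a one-dimensional array under each convention (helper names are my own):

```ts
// The matrix  [ 3  1]
//             [-2  5]
// flattened two ways. The math is identical; only memory order differs.

// Row-major (math/C convention): walk each row left to right.
const rowMajor = [3, 1, -2, 5];   // [a11, a12, a21, a22]

// Column-major (OpenGL/GLSL convention): walk each column top to bottom.
const colMajor = [3, -2, 1, 5];   // [a11, a21, a12, a22]

// Accessing entry (row i, col j) of an n x n matrix under each convention:
const n = 2;
const atRowMajor = (i: number, j: number) => rowMajor[i * n + j];
const atColMajor = (i: number, j: number) => colMajor[j * n + i];
```

Both accessors return the same entry for the same $(i, j)$; the indexing formula is what changes, not the matrix.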

Matrix-Vector Multiplication

This is the move that does everything. Multiplying a matrix by a vector applies the transformation encoded in the matrix to that vector, producing a new vector.

For an $m \times n$ matrix $A$ and a column vector $\mathbf{x}$ with $n$ components, the product $A\mathbf{x}$ is a new vector with $m$ components. The $i$-th component of the result is the dot product of the $i$-th row of $A$ with $\mathbf{x}$.[^2]

Written out for a $2 \times 2$ case:

$$A\mathbf{x} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 \\ a_{21}x_1 + a_{22}x_2 \end{pmatrix}$$

Each output component asks the same question: "how strongly does the vector align with this row?" — which is precisely what the dot product measures. The connection to Chapter 3 is direct: matrix-vector multiplication is nothing more than a stack of dot products.

Worked example. Apply the matrix $\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$ to the vector $\begin{pmatrix} 4 \\ 1 \end{pmatrix}$:

$$\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} 4 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \cdot 4 + 0 \cdot 1 \\ 0 \cdot 4 + 3 \cdot 1 \end{pmatrix} = \begin{pmatrix} 8 \\ 3 \end{pmatrix}$$

The $x$-component was scaled by 2, the $y$-component by 3. This matrix is a scale transformation — and we can see the result just by reading the diagonal.

WARNING

Dimensions must match. To compute $A\mathbf{x}$, the number of columns in $A$ must equal the number of components in $\mathbf{x}$. A $2 \times 3$ matrix requires a 3-component vector; the result is a 2-component vector. Attempting this with a mismatch is a hard error — your GPU driver will simply reject the shader.

In code:

```ts
function matVecMul(A: number[][], x: number[]): number[] {
  return A.map(row => row.reduce((sum, aij, j) => sum + aij * x[j], 0));
}

const A = [[2, 0], [0, 3]];
const v = [4, 1];
matVecMul(A, v); // [8, 3]
```

Three Transformations Every Developer Should Know

Different matrices encode different geometric transformations. Here are the three you will see constantly in game code.

Scale

A scale matrix stretches or compresses along each axis independently.[^3] The scale factors sit on the diagonal:

$$S(s_x, s_y) = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}$$

Applying $S(2, 0.5)$ to a unit square stretches it horizontally and squashes it vertically:

Before               After S(2, 0.5)

  (0,1)---(1,1)         (0,0.5)-----(2,0.5)
    |       |     -->      |             |
  (0,0)---(1,0)         (0,0)--------(2,0)

Uniform scale ($s_x = s_y = s$) makes everything bigger or smaller without distortion. The matrix is just $s$ times the identity. Non-uniform scale ($s_x \neq s_y$) stretches one axis more than another — common for squash-and-stretch animations.
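In code, a scale matrix is built directly from its two factors. A minimal sketch (the helper names `scale` and `apply` are my own, not from any particular library):

```ts
// Build the 2D scale matrix S(sx, sy): factors on the diagonal, zeros elsewhere.
function scale(sx: number, sy: number): number[][] {
  return [[sx, 0], [0, sy]];
}

// Apply a 2x2 matrix to a point: one dot product per output component.
function apply(M: number[][], [x, y]: number[]): number[] {
  return [M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y];
}

// The non-uniform scale from the diagram: stretch x by 2, squash y to half.
apply(scale(2, 0.5), [1, 1]); // the corner (1,1) lands on (2, 0.5)
```

Note that a uniform scale is just `scale(s, s)`, matching the "$s$ times the identity" description above.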

Rotation

A 2D rotation matrix rotates every point counterclockwise by angle $\theta$ about the origin. Its derivation comes from expressing a point in polar coordinates: a point at distance $r$ and angle $\phi$ from the x-axis sits at $(r\cos\phi,\; r\sin\phi)$. Rotating by $\theta$ moves it to $(r\cos(\phi + \theta),\; r\sin(\phi + \theta))$. Expanding with sum-of-angles identities yields:[^4]

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

Which is exactly the matrix-vector product:

$$R(\theta)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix}$$

Worked example. Rotate the point $(1, 0)$ by $90°$:

$$R(90°)\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

The rightward unit vector lands on the upward unit vector — a quarter-turn counterclockwise.

    y
    ^
    |    (0,1)
    |      ^
    |      | (after 90° rotation)
    |      |
    +------+----> x
  (0,0)  (1,0)
         (before)

Worked example. Rotate the point $(1, 0)$ by $45°$:

$$R(45°)\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos 45° \\ \sin 45° \end{pmatrix} = \begin{pmatrix} \tfrac{\sqrt{2}}{2} \\ \tfrac{\sqrt{2}}{2} \end{pmatrix}$$

The point moves to approximately $(0.707, 0.707)$ — northeast at 45°, as expected.

TIP

To rotate clockwise by $\theta$, use $R(-\theta)$ — just negate $\theta$ in the formula. $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$, so the off-diagonal signs flip.
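The rotation formula translates directly to code. A minimal sketch (helper names are my own; angles in radians, counterclockwise positive):

```ts
// Build R(theta): [[cos, -sin], [sin, cos]].
function rotation(theta: number): number[][] {
  const c = Math.cos(theta);
  const s = Math.sin(theta);
  return [[c, -s], [s, c]];
}

// Apply a 2x2 matrix to a point: one dot product per output component.
function apply(M: number[][], [x, y]: number[]): number[] {
  return [M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y];
}

// Quarter-turn: (1, 0) lands on (0, 1), up to floating-point error.
apply(rotation(Math.PI / 2), [1, 0]);

// 45 degrees: (1, 0) lands near (0.707, 0.707).
apply(rotation(Math.PI / 4), [1, 0]);
```

Because of floating-point rounding, expect values like `6.1e-17` instead of an exact `0`; compare with a small tolerance rather than `===`.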

Shear

A shear slants the grid — imagine pushing the top of a rectangle sideways while keeping the bottom fixed. The horizontal shear matrix is:

$$H_x(k) = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$$

Applying $H_x(1)$ to the unit square:

Before               After shear H_x(1)

  (0,1)---(1,1)         (1,1)---(2,1)
    |       |     -->    /         /
  (0,0)---(1,0)       (0,0)---(1,0)

The bottom row stays put; the top row shifts right by 1. Shear is less common than scale and rotation, but it appears in oblique projections, 2D sprite effects, and italic-font generation.
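The shear matrix is equally simple in code. A sketch (helper names are my own): the key observation is that $x' = x + ky$ while $y$ passes through untouched.

```ts
// Horizontal shear H_x(k): x' = x + k*y, y' = y.
function shearX(k: number): number[][] {
  return [[1, k], [0, 1]];
}

// Apply a 2x2 matrix to a point: one dot product per output component.
function apply(M: number[][], [x, y]: number[]): number[] {
  return [M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y];
}

// Bottom edge (y = 0) stays put; the top edge (y = 1) shifts right by k.
apply(shearX(1), [1, 0]); // [1, 0] — unchanged
apply(shearX(1), [0, 1]); // [1, 1] — shifted right by 1
```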

Matrix-Matrix Multiplication — Composing Transformations

Here is the payoff. If you have two transformation matrices $A$ and $B$, you can combine them into a single matrix $C = AB$. Applying $C$ to any vector gives exactly the same result as applying $B$ first, then $A$.[^2]

The rule: the entry in row $i$, column $j$ of $C = AB$ is the dot product of the $i$-th row of $A$ with the $j$-th column of $B$.[^1]

For $2 \times 2$ matrices:

$$AB = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{pmatrix}$$

Dimension rule: To form $AB$, the number of columns in $A$ must equal the number of rows in $B$. An $m \times n$ matrix times an $n \times p$ matrix produces an $m \times p$ matrix.

Worked example. Combine a scale-by-2 (uniform) with a 90° rotation into one matrix:

$$S = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \qquad R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

To scale first and then rotate, compute $C = RS$ (apply $S$ first, $R$ second — read right to left):

$$C = RS = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}$$

Now apply $C$ to the point $(1, 0)$:

$$C\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}$$

The point was scaled to $(2, 0)$ and then rotated to $(0, 2)$ — one matrix, one multiplication.

Order Matters

Matrix multiplication is not commutative. Generally $AB \neq BA$.[^1] This is not a technicality to memorize and forget — it has real consequences in game code.

Example. Reverse the order: scale after rotating.

$$SR = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}$$

In this symmetric case the result happens to be the same — but only because uniform scale commutes with rotation. Try a non-uniform scale and the difference becomes dramatic: rotating a squashed sprite versus squashing a rotated sprite produces visually distinct results.

WARNING

Right-to-left composition. When you write $C = AB$, the rightmost matrix $B$ is applied first. This surprises many developers. In pseudocode: `C = Rotate * Scale` means "scale first, then rotate." Invert the mental order when reading a chain of matrices.

In code:

```ts
function matMul(A: number[][], B: number[][]): number[][] {
  const rows = A.length;
  const cols = B[0].length;
  const inner = B.length;
  const C: number[][] = Array.from({ length: rows }, () => Array(cols).fill(0));
  for (let i = 0; i < rows; i++)
    for (let j = 0; j < cols; j++)
      for (let k = 0; k < inner; k++)
        C[i][j] += A[i][k] * B[k][j];
  return C;
}

const S = [[2, 0], [0, 2]];
const R = [[0, -1], [1, 0]];
matMul(R, S); // [[0, -2], [2, 0]]
```

INFO

Why does this algorithm use three nested loops? Each outer pair $(i, j)$ picks a slot in the result. The inner loop over $k$ computes the dot product of row $i$ of $A$ with column $j$ of $B$. That is all matrix multiplication ever is — many dot products, organized into a grid.

The Identity Matrix — "Do Nothing"

Every multiplication-based system needs a "do nothing" element — the number 1 in arithmetic, the empty string in concatenation. For matrices, that element is the identity matrix $I$: an $n \times n$ matrix with 1s on the main diagonal and 0s everywhere else.[^5]

For any matrix $A$ of compatible size:

$$AI = IA = A$$

And for any vector $\mathbf{v}$:

$$I\mathbf{v} = \mathbf{v}$$

Multiplying by $I$ leaves every vector unchanged. Geometrically, $I$ is the transformation that does absolutely nothing — no rotation, no scale, no shear. Every point stays exactly where it is.

Why does it work? Each row of $I$ has a 1 in exactly one position and 0s elsewhere. The dot product of row $i$ with $\mathbf{v}$ picks out the $i$-th component of $\mathbf{v}$ and leaves it unchanged. Stack those components and the output vector is identical to the input.

The identity matrix is the baseline against which all other transformations are measured. You will see it again when you build transformation pipelines: multiplying by a series of matrices starts conceptually from $I$ and applies transformations one at a time.
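Constructing $I$ in code makes the "1 per row, 0s elsewhere" structure explicit. A sketch (the `identity` helper is my own; `matVecMul` is the routine from earlier in the chapter):

```ts
// Build the n x n identity matrix: 1s on the main diagonal, 0s elsewhere.
function identity(n: number): number[][] {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => (i === j ? 1 : 0)),
  );
}

// Matrix-vector product: one dot product per output component.
function matVecMul(A: number[][], x: number[]): number[] {
  return A.map(row => row.reduce((sum, aij, j) => sum + aij * x[j], 0));
}

matVecMul(identity(3), [4, -2, 7]); // [4, -2, 7] — unchanged
```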

Matrix Inverses — "Undo"

If a matrix $A$ encodes a transformation, its inverse $A^{-1}$ encodes the transformation that perfectly reverses it. Their product is the identity:[^6]

$$A^{-1}A = AA^{-1} = I$$

Whatever $A$ does, $A^{-1}$ undoes it:

$$A^{-1}(A\mathbf{v}) = \mathbf{v}$$

Think of it like function composition. If $A$ is "rotate 45° counterclockwise," then $A^{-1}$ is "rotate 45° clockwise." Apply both in sequence and you're right back where you started.

The Inverse of a Rotation Matrix

Rotation matrices have a beautiful property: their inverse is their transpose. The transpose $A^T$ of a matrix is obtained by flipping it along the main diagonal — swapping rows and columns.[^3] For $R(\theta)$:

$$R(\theta)^{-1} = R(\theta)^T = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = R(-\theta)$$

You can verify this by multiplying $R(\theta)$ by $R(\theta)^T$ and confirming the result is $I$ (using $\cos^2\theta + \sin^2\theta = 1$). This transpose-equals-inverse property is computationally convenient — transposing a matrix is just a rearrangement, no division required.

Worked example. Verify that $R(90°)^T R(90°) = I$:

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

When Inverses Don't Exist

Not every matrix has an inverse. A matrix that cannot be inverted is called singular (or non-invertible). The classic example is a projection: projecting 3D space onto a plane collapses one dimension to zero. Once that information is gone, there is no way to recover it — the inverse simply does not exist.[^6]

A $2 \times 2$ example of a singular matrix:

$$P = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$

Try multiplying $P$ by any vector $\mathbf{v} = (x, y)$:

$$P\mathbf{v} = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + y \\ x + y \end{pmatrix}$$

The output always has equal components — both rows of $P$ are identical, so all the information about the difference between $x$ and $y$ is destroyed. You cannot reconstruct $\mathbf{v}$ from $P\mathbf{v}$; infinitely many inputs map to the same output. No inverse exists.

Later chapters will explore exactly what makes a matrix invertible or not — the answer involves the concept of a determinant, which measures whether a transformation crushes space flat.

Computing the 2×2 inverse

For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the inverse is:

$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$

The quantity $ad - bc$ is the determinant of $A$. When it is zero, the formula breaks down — division by zero — and the inverse does not exist. For larger matrices, computing the inverse is more involved and is usually done by row reduction or via a library function.
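The closed form translates directly to code. A sketch (the name `inverse2x2` is my own) that returns `null` when the determinant vanishes:

```ts
// Inverse of [[a, b], [c, d]] via the 2x2 closed form.
// Returns null when the determinant ad - bc is zero (singular matrix).
function inverse2x2(A: number[][]): number[][] | null {
  const [[a, b], [c, d]] = A;
  const det = a * d - b * c;
  if (det === 0) return null; // no inverse exists
  return [[d / det, -b / det], [-c / det, a / det]];
}

inverse2x2([[2, 0], [0, 3]]); // reciprocals on the diagonal: [[1/2, 0], [0, 1/3]]
inverse2x2([[1, 1], [1, 1]]); // null — identical rows, determinant zero
```

In production code you would compare the determinant against a small epsilon rather than exactly zero, since nearly singular matrices are just as numerically dangerous.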

TIP

In practice, avoid inverting matrices explicitly. Many game and graphics operations that appear to need $A^{-1}$ have cheaper special-case solutions. Rotation matrices invert via transpose. Scale matrices invert by taking reciprocals of the diagonal. Explicit general-purpose inversion (Gaussian elimination, LU decomposition) carries real computational cost and numerical risk — use it only when no special structure can be exploited.

Chapter Recap

| Concept | Formula | What It Does |
| --- | --- | --- |
| Matrix-vector product | $A\mathbf{x}$ | Applies transformation to vector |
| Scale matrix | $S(s_x, s_y) = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}$ | Stretches/compresses each axis |
| 2D rotation matrix | $R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ | Rotates CCW by $\theta$ |
| Matrix-matrix product | $C = AB$ | Composes two transformations |
| Composition order | $AB\mathbf{v}$: $B$ applies first, then $A$ | Right-to-left |
| Identity matrix $I$ | 1s on diagonal, 0s elsewhere | Leaves everything unchanged |
| Inverse $A^{-1}$ | $A^{-1}A = AA^{-1} = I$ | Undoes transformation |
| Rotation inverse | $R(\theta)^{-1} = R(\theta)^T$ | Transpose = rotate in reverse |

Key ideas to carry forward:

  • A matrix is a recipe for transforming space. Every entry encodes how much of each input dimension contributes to each output dimension.
  • Matrix-vector multiplication is a sequence of dot products — one per output component.
  • Matrix-matrix multiplication composes two transformations into one. Order matters: $AB \neq BA$ in general.
  • The identity matrix $I$ does nothing. The inverse $A^{-1}$ undoes $A$. Together, they are the "neutral" and "reverse" operations of the transformation world.
  • Some matrices have no inverse — they destroy information. Spotting these singular matrices (and understanding why they are singular) is a theme that runs through the next several chapters.

The next chapter asks a deeper question: what is a matrix really? We will see that every matrix is secretly a description of where the basis vectors land — and that insight unlocks the visual intuition behind every transformation you will ever write.

References

[^1]: Margalit, D. and Rabinoff, J. "Matrix Multiplication." Interactive Linear Algebra, Georgia Tech / LibreTexts, §3.4. https://math.libretexts.org/Bookshelves/Linear_Algebra/Interactive_Linear_Algebra_(Margalit_and_Rabinoff)/03:_Linear_Transformations_and_Matrix_Algebra/3.04:_Matrix_Multiplication

[^2]: Margalit, D. and Rabinoff, J. "Matrix Inverses." Interactive Linear Algebra, Georgia Tech / LibreTexts, §3.5. https://math.libretexts.org/Bookshelves/Linear_Algebra/Interactive_Linear_Algebra_(Margalit_and_Rabinoff)/03:_Linear_Transformations_and_Matrix_Algebra/3.05:_Matrix_Inverses

[^3]: Dunn, F. and Parberry, I. "Matrices and Linear Transformations." 3D Math Primer for Graphics and Game Development (online edition), gamemath.com. https://gamemath.com/book/matrixtransforms.html

[^4]: "Rotation Matrix." Cuemath. https://www.cuemath.com/algebra/rotation-matrix/

[^5]: Kuttler, K. "The Identity and Inverses." A First Course in Linear Algebra, LibreTexts, §2.6. https://math.libretexts.org/Bookshelves/Linear_Algebra/A_First_Course_in_Linear_Algebra_(Kuttler)/02:_Matrices/2.06:__The_Identity_and_Inverses

[^6]: "Invertible Matrix." Wikipedia. https://en.wikipedia.org/wiki/Invertible_matrix