3D projection

A 3D projection is a mathematical transformation used to project three-dimensional points onto a two-dimensional plane. Often this is done to simulate the relationship of a camera to its subject. 3D projection is often the first step in the process of representing three-dimensional shapes two-dimensionally in computer graphics, a process known as rendering.
The following algorithm was standard in early computer simulations and video games, and it is still in use, with heavy modifications for each particular case. This article describes the simple, general case.
Data necessary for projection
Data about the objects to render is usually stored as a collection of points, linked together into triangles. Each point is a series of three numbers representing its X, Y, Z coordinates from an origin relative to the object it belongs to. Each triangle is a series of three points or three indices into the list of points. In addition, the object has three coordinates X, Y, Z and some kind of rotation, for example three angles alpha, beta and gamma, describing its position and orientation relative to a "world" reference frame.
Last comes the observer (the term camera is the one commonly used). The camera has a second set of three X, Y, Z coordinates and three alpha, beta and gamma angles, describing the observer's position and the direction along which it is looking.
All this data is usually stored as floating-point values, even though many programs convert it to integers at various points in the algorithm to speed up the calculations.
Mathematical tools
The 3D transformation makes heavy use of 4×4 square matrices and of trigonometric functions. Each step of the algorithm is a matrix multiplication, where the elements of the matrices are derived from the coordinates and angles listed above and from various combinations of sines and cosines. The matrices have 4 rows and 4 columns and use homogeneous coordinates, in which vectors of three elements are extended to four by appending a "1" element at the end.
Given a point of the form {x, y, z, 1}, applying a transformation yields a point of the form {x', y', z', ω'}. The projected point on the screen is then at the 2D coordinates {x'/ω', y'/ω'}. The coordinate z'/ω' is needed to determine whether the projected point is in front of the camera or behind it. The number ω' (in addition to the screen coordinates) is needed when drawing textured triangles, but not when drawing monochromatic triangles.
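As a sketch of this step in pure Python (the function and matrix names here are illustrative, not part of the algorithm as stated; the actual matrix would come from the steps described below):

```python
def transform_point(matrix, point):
    """Apply a 4x4 transformation matrix (a list of rows) to a 3D point,
    then perform the perspective divide by omega'."""
    x, y, z = point
    v = [x, y, z, 1.0]  # extend to homogeneous coordinates with a trailing 1
    xp, yp, zp, wp = (sum(matrix[r][c] * v[c] for c in range(4))
                      for r in range(4))
    return xp / wp, yp / wp, zp / wp  # screen x, screen y, depth

# With the identity matrix, omega' is 1 and the point comes back unchanged.
IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```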
Thanks to the associativity property of matrix multiplication, a program can precalculate many matrices, for example if it knows that some coordinate will never change.
Sometimes, a final "transformation matrix" valid for all points can be calculated, and then applied. This saves considerable time, since applying a matrix to a point uses only up to sixteen multiplications, instead of the dozens necessary to multiply matrices together.
At the very least, a transformation matrix can be calculated for a single object and then applied to all points in that object.
Note: the matrices are multiplied in the order: Last matrix × ... × Second matrix × First matrix × Point.
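The associativity property can be checked directly. A minimal sketch in Python (helper names are illustrative): transforming a point step by step gives the same result as precombining the matrices once and applying the product.

```python
def matmul(A, B):
    """Multiply two 4x4 matrices, each stored as a list of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def matvec(A, v):
    """Apply a 4x4 matrix to a 4-element column vector."""
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

# Two example transforms: a translation by 5 along x, and a uniform scaling.
T = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
S = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]
p = [1.0, 2.0, 3.0, 1.0]

# T x (S x p) equals (T x S) x p, so T x S can be precalculated once
# and reused for every point of the object.
step_by_step = matvec(T, matvec(S, p))
precombined = matvec(matmul(T, S), p)
```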
First step: world transform
The first step is to transform the points coordinates taking into account the position and orientation of the object they belong to. This is done using a set of four matrices:
 <math>
\begin{bmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — object translation
 <math>
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \alpha & -\sin \alpha & 0 \\ 0 & \sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — rotation about the x-axis
 <math>
\begin{bmatrix} \cos \beta & 0 & \sin \beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \beta & 0 & \cos \beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — rotation about the y-axis
 <math>
\begin{bmatrix} \cos \gamma & -\sin \gamma & 0 & 0 \\ \sin \gamma & \cos \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — rotation about the z-axis.
The four matrices are multiplied together, and the result is the world transform matrix: a matrix that, if a point's coordinates were multiplied by it, would result in the point's coordinates being expressed in the "world" reference frame.
Note that, unlike multiplication between numbers, the order in which the matrices are multiplied is significant: changing the order changes the result too. When dealing with the three rotation matrices, a fixed order suited to the application must be chosen. The object should be rotated before it is translated, since otherwise the position of the object in the world would be rotated around the centre of the world, wherever that happens to be.
 World transform = Translation × rotation
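A sketch of this step in Python, using the `math` module. The rotation order Rz × Ry × Rx chosen here is one common convention; as noted above, the article leaves the fixed order up to the application, and the helper names are illustrative.

```python
import math

def matmul(A, B):
    """Multiply two 4x4 matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rotation_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rotation_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def rotation_z(g):
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def world_transform(pos, angles):
    """Translation x rotation; rotate first, then translate."""
    alpha, beta, gamma = angles
    rot = matmul(rotation_z(gamma), matmul(rotation_y(beta), rotation_x(alpha)))
    return matmul(translation(*pos), rot)
```

For example, an object at (10, 0, 0) rotated 90° about z maps its local point (1, 0, 0) to roughly (10, 1, 0): rotated first, then translated.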
Second step: camera transform
The second step is virtually identical to the first one, except that it uses the six coordinates of the observer instead of those of the object, the inverses of the matrices are used, and they are multiplied in the opposite order. (Note that (A×B)^{−1} = B^{−1}×A^{−1}.) The resulting matrix can transform coordinates from the world reference frame to the observer's one.
The camera looks in its z direction, the x direction is typically left, and the y direction is typically up.
 <math>
\begin{bmatrix} 1 & 0 & 0 & -x \\ 0 & 1 & 0 & -y \\ 0 & 0 & 1 & -z \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — inverse camera translation (the inverse of a translation is a translation in the opposite direction).
 <math>
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \alpha & \sin \alpha & 0 \\ 0 & -\sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — inverse rotation about the x-axis (the inverse of a rotation is a rotation in the opposite direction; note that sin(−x) = −sin(x) and cos(−x) = cos(x)).
 <math>
\begin{bmatrix} \cos \beta & 0 & -\sin \beta & 0 \\ 0 & 1 & 0 & 0 \\ \sin \beta & 0 & \cos \beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — inverse rotation about the y-axis.
 <math>
\begin{bmatrix} \cos \gamma & \sin \gamma & 0 & 0 \\ -\sin \gamma & \cos \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} </math> — inverse rotation about the z-axis.
The two matrices obtained from the first two steps can be multiplied together to get a matrix capable of transforming a point's coordinates from the object's reference frame to the observer's reference frame.
 Camera transform = inverse rotation × inverse translation
 Transform so far = camera transform × world transform.
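A sketch of the camera transform in Python (helper names illustrative; only a z rotation is shown, to keep the example short). The key property is that the camera transform, built from the inverse matrices in the opposite order, exactly undoes the corresponding forward transform:

```python
import math

def matmul(A, B):
    """Multiply two 4x4 matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rotation_z(g):
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def camera_transform(pos, gamma):
    """Inverse rotation x inverse translation: negate the angle and the
    position, and multiply in the opposite order."""
    return matmul(rotation_z(-gamma),
                  translation(-pos[0], -pos[1], -pos[2]))

# Placing the camera with the forward (world-style) transform and then
# applying the camera transform should give back the identity matrix.
pos, gamma = (3.0, 4.0, 5.0), 0.7
forward = matmul(translation(*pos), rotation_z(gamma))
combined = matmul(camera_transform(pos, gamma), forward)
```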
Third step: perspective transform
The resulting coordinates are already suitable for an isometric projection or something similar, but realistic rendering requires an additional step to correctly simulate perspective distortion. Indeed, this simulated perspective is the main aid the viewer has for judging distances in the simulated view.
A perspective distortion can be generated using the following 4×4 matrix:
 <math>
\begin{bmatrix} 1/\tan \mu & 0 & 0 & 0 \\ 0 & 1/\tan \nu & 0 & 0 \\ 0 & 0 & \frac{B+F}{B-F} & \frac{-2BF}{B-F} \\ 0 & 0 & 1 & 0 \end{bmatrix} </math>
where μ is the angle between a line pointing out of the camera in the z direction and the plane through the camera and the right-hand edge of the screen, and ν is the angle between the same line and the plane through the camera and the top edge of the screen. This projection should look correct if you view it with one eye, your eye is located on the line through the centre of the screen normal to the screen, and μ and ν are physically measured assuming your eye is the camera. On typical computer screens as of 2003, tan μ is probably about 1⅓ times tan ν, and tan μ might be about 1 to 5, depending on how far from the screen you are.
F is a positive number representing the distance of the observer from the front clipping plane, which is the closest any object can be to the camera. B is a positive number representing the distance to the back clipping plane, the farthest away any object can be. If objects can be at an unlimited distance from the camera, B can be infinite, in which case (B + F)/(B − F) = 1 and −2BF/(B − F) = −2F.
If you are not using a Z-buffer and all objects are in front of the camera, you can just use 0 instead of (B + F)/(B − F) and −2BF/(B − F) (or, indeed, anything you want).
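The perspective matrix above can be built directly from μ, ν, F and B; a sketch in Python (the constructor name is illustrative):

```python
import math

def perspective(mu, nu, F, B):
    """Build the perspective matrix described above: mu and nu are the
    horizontal and vertical half-angles of view, F and B the distances to
    the front and back clipping planes."""
    return [[1 / math.tan(mu), 0, 0, 0],
            [0, 1 / math.tan(nu), 0, 0],
            [0, 0, (B + F) / (B - F), -2 * B * F / (B - F)],
            [0, 0, 1, 0]]
```

As a quick check of the limiting case mentioned above: as B grows very large, (B + F)/(B − F) approaches 1 and −2BF/(B − F) approaches −2F.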
All the calculated matrices can be multiplied together to get a final transformation matrix. Each point (represented as a vector of three coordinates) can be multiplied by this matrix, directly yielding the screen coordinates at which the point must be drawn. The vector must first be extended to four dimensions using homogeneous coordinates:
 <math>\begin{bmatrix}
x' \\ y' \\ z' \\ \omega' \\ \end{bmatrix}=\begin{bmatrix}{\rm Perspective\ transform}\end{bmatrix} \times \begin{bmatrix}{\rm Camera\ transform}\end{bmatrix} \times \begin{bmatrix}{\rm World\ transform}\end{bmatrix} \times \begin{bmatrix} x \\ y \\ z \\ 1 \\ \end{bmatrix}. </math>
Note that in computer graphics libraries such as OpenGL, you should supply the matrices in the opposite order to that in which they are applied: first the perspective transform, then the camera transform, then the object transform. This is because the graphics library applies the transformations in the opposite order to that in which you supply them. This arrangement is useful, since the world transform typically changes more often than the camera transform, and the camera transform changes more often than the perspective transform. One can, for example, pop the world transform off a stack of transforms and multiply a new world transform on, without having to do anything with the camera transform and perspective transform.
Remember that {x'/ω', y'/ω'} are the final coordinates, where {−1, −1} is typically the bottom left corner of the screen, {1, 1} the top right corner, {1, −1} the bottom right corner and {−1, 1} the top left corner.
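Mapping these normalized coordinates to actual pixels is a simple affine step; a sketch, assuming the common raster convention that pixel row 0 is at the top of the screen (the function name is illustrative):

```python
def to_pixel(xn, yn, width, height):
    """Map normalized coordinates in [-1, 1], with {-1, -1} at the bottom
    left, to pixel coordinates on a width x height raster whose row 0 is
    at the top (so the y axis must be flipped)."""
    px = (xn + 1.0) / 2.0 * (width - 1)
    py = (1.0 - yn) / 2.0 * (height - 1)
    return px, py
```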
If the resulting image turns out upside down, swap top and bottom.
If a Z-buffer is used, a z'/ω' value of −1 corresponds to the front of the Z-buffer and a value of 1 to the back. If the front clipping plane is too close, a finite-precision Z-buffer loses accuracy. The same applies to the back clipping plane, but to a significantly lesser degree: a Z-buffer works correctly with the back clipping plane at an infinite distance, but not with the front clipping plane at distance 0.
Objects should only be drawn where −1 ≤ z'/ω' ≤ 1. If z'/ω' is less than −1, the object is in front of the front clipping plane; if it is more than 1, the object is behind the back clipping plane. To draw a simple single-colour triangle, {x'/ω', y'/ω'} for the three corners contains sufficient information. To draw a textured triangle where one of the corners is behind the camera, all the coordinates {x', y', z', ω'} for all three points are needed; otherwise the texture would not have the correct perspective, and the point behind the camera would not appear in the correct location. In fact, the projection of a triangle with a point behind the camera is not technically a triangle, since its area is infinite and two of its angles sum to more than 180°, the third angle being effectively negative. (Typical modern graphics libraries use all four coordinates, and can correctly draw "triangles" with some points behind the camera.) Also, if a point lies on the plane through the camera normal to the camera direction, ω' is 0, and {x'/ω', y'/ω'} is meaningless.
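These visibility conditions can be collected into a small check. The sketch below assumes the convention of the perspective matrix above, in which ω' equals the camera-space depth, so a point in front of the camera has ω' > 0 (the function name is illustrative):

```python
def is_drawable(zp, wp):
    """True if a projected point {x', y', z', w'} lies between the front
    and back clipping planes. With the perspective matrix above, w' is
    the camera-space depth, so points behind the camera (or exactly on
    the plane through the camera) have w' <= 0 and are rejected."""
    if wp <= 0.0:
        return False          # behind the camera, or w' = 0 (undefined)
    depth = zp / wp
    return -1.0 <= depth <= 1.0  # between front and back clipping planes
```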