# Linear equation

Figure: two graphs of linear equations in two variables

In mathematics, a linear equation is an equation that may be put in the form

${\displaystyle a_{1}x_{1}+\cdots +a_{n}x_{n}+b=0,}$

where ${\displaystyle x_{1},\ldots ,x_{n}}$ are the variables (or unknowns or indeterminates), and ${\displaystyle b,a_{1},\ldots ,a_{n}}$ are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation, and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients ${\displaystyle a_{1},\ldots ,a_{n}}$ are required to not all be zero.

In the language of algebra, a linear equation is obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken and which does not contain the symbols for the indeterminates.

The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true.

The case of just one variable is particularly important, and frequently the term linear equation refers implicitly to this particular case, in which the name unknown for the variable is sensibly used.

All the pairs of numbers that are solutions of a linear equation in two variables form a line in the Euclidean plane, and every non-vertical line may be defined as the solutions of a linear equation. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n – 1) in the Euclidean space of dimension n.

Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations.

This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, for linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations.

## One variable

Frequently the term linear equation refers implicitly to the case of just one variable. This case, in which the name unknown for the variable is sensibly used, is of particular importance, since it offers a unique value as solution to the equation. According to the above definition such an equation has the form

${\displaystyle ax+b=0,}$

and, for a ≠ 0, a unique value as solution

${\displaystyle x=-{\frac {b}{a}}.}$

In the case of ${\displaystyle a=0}$, two possibilities emerge:

1. ${\displaystyle b=0:}$ Every value of ${\displaystyle x}$ is a solution to the equation ${\displaystyle 0\cdot x+0=0,}$ and
2. ${\displaystyle b\neq 0:}$ There is no solution of the equation ${\displaystyle 0\cdot x+b=0;}$ the equation is said to be inconsistent.
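As an illustration, the case analysis above can be sketched in Python (the helper `solve_linear` and its return convention are hypothetical):

```python
def solve_linear(a: float, b: float):
    """Solve a*x + b = 0 over the reals.

    Returns a tuple (kind, value): ("unique", x) for a != 0,
    ("all", None) when every x is a solution, and
    ("none", None) when the equation is inconsistent.
    """
    if a != 0:
        return ("unique", -b / a)   # the unique solution x = -b/a
    if b == 0:
        return ("all", None)        # 0*x + 0 = 0 holds for every x
    return ("none", None)           # 0*x + b = 0 with b != 0: inconsistent

print(solve_linear(2, -6))  # → ('unique', 3.0)
```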

## Two variables

In the case of just two variables the indexed variable names ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ and the respective coefficients ${\displaystyle a_{1}}$ and ${\displaystyle a_{2}}$ are often replaced, for the convenience of not having to deal with indices, by ${\displaystyle x}$, ${\displaystyle y}$, ${\displaystyle a}$ and ${\displaystyle b}$, respectively. As a consequence, the so-called constant term, named coefficient ${\displaystyle b}$ in the above notation, must also be renamed; ${\displaystyle c}$ suggests itself. A linear equation in two variables is then denoted as

${\displaystyle ax+by+c=0.}$

Any change to such an equation that does not alter the set of solutions, i.e., the set of pairs ${\displaystyle (x,y)}$, that satisfy this equation (i.e., make it an identity), generates an equivalent equation. It is immediate that changing the involved names (e.g. capitalizing names or using other letters) and also reordering the equation (for example, by moving terms to the other side), does not change this set of solutions, and thus results in an equivalent equation, like,

${\displaystyle Ax+By=C,\quad }$ with ${\displaystyle \quad A=a,\;B=b\quad }$ and ${\displaystyle \quad C=-c.}$

These equivalent variants are sometimes given generic names, such as general form or standard form,[1] but contribute no new concepts.

The set of solutions also does not change when both sides of the equation are multiplied by the same non-zero number. According to the above definition, ${\displaystyle a}$ and ${\displaystyle b}$ (identically ${\displaystyle A}$ and ${\displaystyle B}$) are not both zero, so multiplying the equation by the reciprocal of one of these non-zero coefficients, results in an equivalent equation with ${\displaystyle +1}$ as the coefficient of one variable. This variable can be isolated on the left hand side, leaving an expression, possibly containing the other variable on the right hand side. This leads to either

${\displaystyle b\neq 0:\quad y=mx+y_{0},\quad \;}$ with ${\displaystyle \quad m=-{\frac {a}{b}}\quad \;}$ and ${\displaystyle \quad y_{0}=-{\frac {c}{b}},\quad }$ or
${\displaystyle a\neq 0:\quad x=m'y+x_{0},\quad }$ with ${\displaystyle \quad m'=-{\frac {b}{a}}\quad }$ and ${\displaystyle \quad x_{0}=-{\frac {c}{a}}.}$

When both coefficients ${\displaystyle a}$ and ${\displaystyle b}$ are not zero, then both forms exist, and, assuming real numbers as coefficients as well as the domain of the variables, the set of solutions for both equations can then be denoted as

${\displaystyle S=\{(x,mx+y_{0})|\;\forall x\in \mathbb {R} \},\quad }$ which is equal to the set ${\displaystyle \quad S=\{(m'y+x_{0},y)|\;\forall y\in \mathbb {R} \}.}$

In this case both components of the pairs in the set ${\displaystyle S}$ vary over all real numbers, each depending on the other in a so-called affine-linear manner.

When exactly one coefficient, either ${\displaystyle a}$ or ${\displaystyle b}$, is not zero, then one equation remains, which is either

${\displaystyle y=y_{0},\quad }$ for ${\displaystyle \quad a=0,\;b\neq 0,\quad }$ with the set of solutions ${\displaystyle \quad S_{\text{h}}=\{(x,y_{0})|\;\forall x\in \mathbb {R} \},\quad }$ or
${\displaystyle x=x_{0},\quad }$ for ${\displaystyle \quad b=0,\;a\neq 0,\quad }$ with the set of solutions ${\displaystyle \quad S^{\text{v}}=\{(x_{0},y)|\;\forall y\in \mathbb {R} \}.}$

For both alternatives this is a set of pairs of numbers, where either the second component is a constant, and the first varies over all the reals (${\displaystyle S_{\text{h}}}$), or the first is a constant, and the second varies over all the reals (${\displaystyle S^{\text{v}}}$).
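The reduction of ${\displaystyle ax+by+c=0}$ to the slope-intercept form above can be sketched numerically (a hypothetical helper; assumes real coefficients and ${\displaystyle b\neq 0}$):

```python
def slope_intercept(a: float, b: float, c: float):
    """Rewrite a*x + b*y + c = 0 as y = m*x + y0 (requires b != 0)."""
    if b == 0:
        raise ValueError("b must be nonzero for the slope-intercept form")
    return -a / b, -c / b  # slope m and y-intercept y0

# 2x + 4y - 8 = 0  →  y = -0.5*x + 2
m, y0 = slope_intercept(2, 4, -8)
print(m, y0)  # → -0.5 2.0
```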

### In Cartesian coordinates

Every single solution of a linear equation in two variables can be interpreted as two coordinate values, fixing a point in the Euclidean plane with a Cartesian coordinate system. The sets of solutions of such an equation make up a two-dimensional graph, which can be depicted in this plane. Conventionally, the first component of a solution ${\displaystyle (x,y)}$, the ${\displaystyle x}$-value, is assigned to a horizontally drawn ${\displaystyle x}$-axis, and the second component, the ${\displaystyle y}$-value, to a vertical ${\displaystyle y}$-axis.

Figure: vertical line ${\displaystyle x=x_{0}}$

In the case of ${\displaystyle a\neq 0,\;b=0}$ the equation is ${\displaystyle x=x_{0},}$ and the set of its solutions ${\displaystyle S^{\text{v}}=\{(x_{0},y)|\;\forall y\in \mathbb {R} \}}$ has a vertical line as its graph, as shown in the figure to the right. The value ${\displaystyle x_{0}=-{\tfrac {c}{a}},}$ where the line intersects the ${\displaystyle x}$-axis in the point ${\displaystyle (x_{0},0)}$, is called an ${\displaystyle x}$-intercept. Except for ${\displaystyle x_{0}=0,}$ when the graph coincides with the ${\displaystyle y}$-axis, graphs of this kind do not intersect the ${\displaystyle y}$-axis; they have no ${\displaystyle y}$-intercept.

The set of solutions defines a function ${\displaystyle f(t)}$ and, simultaneously, the graph of this function, by interpreting the pairs ${\displaystyle (x,y)}$ as ${\displaystyle (t,f(t)),}$ provided that any two such solutions that differ in their second value (${\displaystyle y=f(t)}$) also differ in their respective first values (${\displaystyle x=t}$). The set ${\displaystyle S^{\text{v}}=\{(x_{0},y)|\;\forall y\in \mathbb {R} \}}$ violates this condition: all real values ${\displaystyle y}$ in the second component have the same first component ${\displaystyle x_{0}.}$ Nevertheless, a graph for this set may be drawn, but it is not the graph of a function under the conventional assignment of axes; it obviously fails the vertical line test. This is the only type of straight line which is not the graph of any function ${\displaystyle f(t)}$.

Figure: horizontal line ${\displaystyle y=y_{0}}$

The sets ${\displaystyle S_{\text{h}}}$ and ${\displaystyle S}$ satisfy the above condition, and the graph of ${\displaystyle S_{\text{h}}=\{(x,y_{0})|\;\forall x\in \mathbb {R} \}}$ is shown to the right. In this case of ${\displaystyle a=0,\;b\neq 0}$ the graph of the constant function ${\displaystyle f(x)=y=y_{0}}$ is a horizontal line. The value ${\displaystyle y_{0}=-{\tfrac {c}{b}},}$ where the line intersects the ${\displaystyle y}$-axis, is called the ${\displaystyle y}$-intercept. Except for ${\displaystyle y_{0}=0,}$ where the graph coincides with the ${\displaystyle x}$-axis, graphs of this kind have no ${\displaystyle x}$-intercept.

In the case of ${\displaystyle a\neq 0\neq b}$ with the equation ${\displaystyle y=mx+y_{0}}$ the set of solutions is ${\displaystyle S=\{(x,mx+y_{0})|\;\forall x\in \mathbb {R} \}.}$ It consists of pairs of numbers, with the first component varying over all the reals, and the other being calculated by a simple expression, applying a linear map (${\displaystyle x\mapsto mx}$) and adding a constant (${\displaystyle y_{0}}$). This is sometimes called a linear affine function, or simply a linear function, slightly abusing the strict term linear. Also in this case the graph of a linear equation in two variables is a straight line (see figure at the top) that intersects the ${\displaystyle x}$-axis at the ${\displaystyle x}$-intercept ${\displaystyle x_{0}=-{\tfrac {c}{a}}}$ (i.e., ${\displaystyle (x_{0},0)}$ is a solution) and the ${\displaystyle y}$-axis at the ${\displaystyle y}$-intercept ${\displaystyle y_{0}=-{\tfrac {c}{b}}}$ (i.e., ${\displaystyle (0,y_{0})}$ is a solution).

Besides the intercepts being obvious from graphing the solutions of a linear equation in two variables, their ratio (if it exists) can also be graphically interpreted as determining the incline of the considered line (and of all lines parallel to it). The slope of a straight line, usually introduced as rise over run, is here the negative ratio of the rise, the ${\displaystyle y}$-intercept, to the run, the ${\displaystyle x}$-intercept. The negative sign accounts for a positive slope when the line rises for increasing ${\displaystyle x}$-values. Immediately,

${\displaystyle -{\frac {y_{0}}{x_{0}}}=-{\frac {-{\tfrac {c}{b}}}{-{\tfrac {c}{a}}}}=-{\frac {a}{b}}=m,}$

which holds if both intercepts exist. If the ${\displaystyle x}$-intercept does not exist (${\displaystyle a=0}$), the slope ${\displaystyle m}$ equals ${\displaystyle 0,}$ belonging to a horizontal line.

Since rise and run of a straight line can be determined not only between the intercept points and the origin (${\displaystyle x_{0}-0}$ and ${\displaystyle y_{0}-0}$), but also between arbitrary points ${\displaystyle (x_{1},y_{1})}$ and ${\displaystyle (x_{2},y_{2})}$ on the line, the slope may also be determined by

${\displaystyle m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}={\frac {y_{1}-y_{2}}{x_{1}-x_{2}}}.}$

Denoting the angle enclosed by the ${\displaystyle x}$-axis and the line as ${\displaystyle \varphi ,}$ then

${\displaystyle \tan \varphi =m=-{\frac {a}{b}}.}$

For ${\displaystyle b=0}$ the slope is undefined (${\displaystyle \varphi =\pi /2}$).

This shows that only two of ${\displaystyle x_{0},\;y_{0}}$ and ${\displaystyle m}$ can be selected independently.

With the premise that at least one axis is intersected, and since both intercept values may range over the whole real number line, all parallels to both axes as well as all oblique straight lines (in fact, all straight lines in the Euclidean plane) can be expressed by linear equations in two variables, and all such equations denote either oblique or axis-parallel straight lines. Therefore all equations equivalent to one of the above forms are often referred to as "equations of a line". They are adapted to specific tasks and are therefore given specific names, described below. In what follows, ${\displaystyle x,\;y,\;t,\;\theta }$ are the names of variables, and other letters denote constants (fixed numbers) as coefficients.

### Slope–intercept form

This form relies on the habit of writing ${\displaystyle y=f(x)}$ and on the conventional way of assigning the variables of the linear equation to the axes of a Cartesian coordinate system, drawn in the conventional manner as described above. This form exists only for ${\displaystyle b\neq 0,}$ which allows isolating ${\displaystyle y}$ on the left hand side:[2]

${\displaystyle y=mx+y_{0}.}$

This way the slope ${\displaystyle m=-{\tfrac {a}{b}}}$ describes the inclination of the straight line which is the graph of this equation. The slope is positive for a line ascending to the right and negative if the line ascends to the left. A zero slope ${\displaystyle m=0}$ belongs to a horizontal line.

The ${\displaystyle y}$-intercept ${\displaystyle y_{0}=-{\tfrac {c}{b}}}$ fixes the point ${\displaystyle (0,y_{0}),}$ where the line crosses the ${\displaystyle y}$-axis, and ${\displaystyle y_{0}=0}$ characterizes lines that pass through the origin ${\displaystyle (0,0).}$

Recalling the ${\displaystyle x}$-intercept as ${\displaystyle x_{0}=-{\tfrac {c}{a}},}$ the above slope-intercept form, employing the slope ${\displaystyle m}$ and the ${\displaystyle y}$-intercept, can be transformed to

${\displaystyle y=-{\frac {a}{b}}x-{\frac {c}{b}}=-{\frac {a}{b}}(x+{\frac {c}{b}}\cdot {\frac {b}{a}})=m(x-x_{0}),}$

involving the slope ${\displaystyle m}$ and the ${\displaystyle x}$-intercept ${\displaystyle x_{0}}$.

In the case of ${\displaystyle b=0,}$ there is no slope-intercept form in the above way, because a slope does not exist for ${\displaystyle \varphi =\pi /2}$.

For ${\displaystyle a\neq 0\neq b}$ it is possible to express the inverse function ${\displaystyle f^{-1}}$ in the slope-intercept form as

${\displaystyle x=m'y+x_{0},\quad }$ with ${\displaystyle m'={\tfrac {1}{m}}.}$

The graph of this equation, having the same set of solutions, is necessarily identical to the above graph, but depicting it under exchanged assignment of the variables to the coordinate-axes (${\displaystyle (x,y)\mapsto (y{\text{-axis}},\;x{\text{-axis}})}$), yields the usual ${\displaystyle f^{-1}}$-graph for inverse functions, the ${\displaystyle f}$-graph mirrored along ${\displaystyle y=x.}$ This holds for both ${\displaystyle (a=0,\;b\neq 0)}$ and ${\displaystyle (b=0,\;a\neq 0).}$

The graph of a vertical line (${\displaystyle b=0}$) with no existing slope and the equation ${\displaystyle x=d}$ changes under this inverted assignment to the graph of the function ${\displaystyle y=d}$ with zero-slope (${\displaystyle d}$ an arbitrary constant), and vice versa.

The slope–intercept form of a line can be computed from the values of the function at any two points ${\displaystyle (x_{1},y_{1})}$ and ${\displaystyle (x_{2},y_{2})}$: the slope can be calculated as ${\displaystyle m=(y_{2}-y_{1})/(x_{2}-x_{1}),}$ and then the ${\displaystyle y}$-intercept as ${\displaystyle y_{0}=y_{1}-mx_{1}.}$ This is a special case of the unisolvence theorem for polynomials: the coefficients of a polynomial of degree at most ${\displaystyle d}$ can be computed from its values at ${\displaystyle d+1}$ distinct points.
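This two-point computation can be sketched in Python (the helper name is hypothetical):

```python
def line_through(p1, p2):
    """Slope m and y-intercept y0 of the line through p1 and p2.

    Requires x1 != x2 (no vertical lines).
    """
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: slope undefined")
    m = (y2 - y1) / (x2 - x1)
    y0 = y1 - m * x1  # solve y1 = m*x1 + y0 for y0
    return m, y0

print(line_through((1, 3), (3, 7)))  # → (2.0, 1.0)
```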

### Point–slope form

A line is uniquely defined by its slope and an arbitrary point on it. In the slope–intercept form this point on the line is either taken as the intersection ${\displaystyle (0,y_{0})}$ with the ${\displaystyle y}$-axis, or the intersection ${\displaystyle (x_{0},0)}$ with the ${\displaystyle x}$-axis, and is combined with the slope ${\displaystyle m}$, provided it exists, to establish the equation for the according line. Generalizing this approach to an arbitrary point with coordinates ${\displaystyle (x_{1},y_{1})}$ yields:[3]

${\displaystyle y-y_{1}=m(x-x_{1}).}$

The point-slope form expresses the fact that the differences of the coordinates of an arbitrary point ${\displaystyle (x,y)}$ and the point ${\displaystyle (x_{1},y_{1}),}$ both on a straight line, are proportional to each other. More precisely, as long as ${\displaystyle x\neq x_{1}}$ and ${\displaystyle y\neq y_{1},}$ the nonzero differences ${\displaystyle x-x_{1}}$ and ${\displaystyle y-y_{1}}$ are proportional, and the proportionality constants are, respectively, ${\displaystyle m}$ and ${\displaystyle 1/m.}$

### Intercept form

For a straight line that crosses both coordinate axes outside the origin, both intercept values exist and are non-zero. This implies that ${\displaystyle c}$ is also nonzero, and such lines can be specified via the intercept form, which employs these two intercept values in an appropriate equation[4]

${\displaystyle {\frac {x}{x_{0}}}+{\frac {y}{y_{0}}}=1.}$

The intercept form results from moving ${\displaystyle c}$ in the equation ${\displaystyle ax+by+c=0}$ to the right side, and then multiplying both sides of the equation by ${\displaystyle -1/c,}$ yielding

${\displaystyle (-{\frac {a}{c}})x+(-{\frac {b}{c}})y={\frac {1}{x_{0}}}x+{\frac {1}{y_{0}}}y=1,}$

which is identical to the above form. The intercept form also works conveniently in higher dimensions for specifying (hyper)planes, when their intersections with all coordinate axes exist and are known.
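A minimal sketch of reading off the two intercepts (a hypothetical helper; assumes ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$ are all nonzero):

```python
def intercept_form(a: float, b: float, c: float):
    """Return (x0, y0) so that x/x0 + y/y0 = 1 describes a*x + b*y + c = 0.

    Requires a, b, c all nonzero (the line misses the origin and
    crosses both axes).
    """
    if a == 0 or b == 0 or c == 0:
        raise ValueError("intercept form needs a, b, c all nonzero")
    return -c / a, -c / b  # x-intercept and y-intercept

# 2x + 3y - 6 = 0  →  x/3 + y/2 = 1
print(intercept_form(2, 3, -6))  # → (3.0, 2.0)
```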

### Two-point form

Two points ${\displaystyle (x_{1},y_{1})}$ and ${\displaystyle (x_{2},y_{2})}$ with ${\displaystyle x_{1}\neq x_{2}}$ (no vertical lines!) determine the slope of the line through these points. This slope, calculated as above, can be used with either point in the point-slope form, thereby establishing appropriate equations for this line, based on two points with different ${\displaystyle x}$-values. This yields[4]

${\displaystyle y-y_{j}={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}(x-x_{j}),\quad }$ for ${\displaystyle j=1,2.}$

In the rest of this section, ${\displaystyle j=1}$ is used.

#### Expanded form

Expanding, regrouping, and appropriately factoring the products leads to

${\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0,}$

identifying: ${\displaystyle \quad a=(y_{1}-y_{2}),\quad b=(x_{2}-x_{1}),\quad }$ and ${\displaystyle \quad c=(x_{1}y_{2}-x_{2}y_{1}).}$
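The identification of ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$ from two points can be sketched as follows (the helper name is hypothetical):

```python
def coefficients_from_points(p1, p2):
    """Coefficients (a, b, c) of a*x + b*y + c = 0 through p1 and p2,
    using a = y1 - y2, b = x2 - x1, c = x1*y2 - x2*y1.
    """
    (x1, y1), (x2, y2) = p1, p2
    return y1 - y2, x2 - x1, x1 * y2 - x2 * y1

# The line through (1, 1) and (3, 5) is y = 2x - 1:
a, b, c = coefficients_from_points((1, 1), (3, 5))
print(a, b, c)  # → -4 2 2
```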

#### Symmetric form

Multiplying both sides of the two-point form by ${\displaystyle (x_{2}-x_{1})}$ yields an equation in a symmetric form

${\displaystyle (x_{2}-x_{1})(y-y_{1})=(y_{2}-y_{1})(x-x_{1}).}$

This form also works in the case of a non-existing slope (${\displaystyle x_{1}=x_{2}}$), but requires ${\displaystyle y_{1}\neq y_{2}}$ in this case: it correctly delivers ${\displaystyle \quad x=x_{1}.}$

#### Determinant form

The products in the above equation result also from the evaluation of a 2-rowed determinant, inducing this form of the linear equation:

${\displaystyle {\begin{vmatrix}x-x_{1}&y-y_{1}\\x_{2}-x_{1}&y_{2}-y_{1}\end{vmatrix}}=0.}$

#### Mnemonic determinant

The products on the left hand side of the expanded version can be reproduced by evaluating the easily memorized 3-rowed determinant, which can be justified by the theory of projective geometry:

${\displaystyle {\begin{vmatrix}x&y&1\\x_{1}&y_{1}&1\\x_{2}&y_{2}&1\end{vmatrix}}=0.}$

#### Vectorial treatment

Any pair of numbers ${\displaystyle (x,y)}$ may be treated as a vector, given by two components with respect to a Cartesian coordinate system. Such a (naive) vector starts at the origin ${\displaystyle (0,0)}$ and ends at the given coordinates. Any two non-collinear vectors ${\displaystyle (a_{1},a_{2})}$ and ${\displaystyle (b_{1},b_{2})}$ span a parallelogram. The area ${\displaystyle A}$ of this parallelogram can be calculated as the magnitude of the exterior product (two-dimensional cross product, geometric product, ...) of these vectors. In components this can be done by evaluating the absolute value of the determinant of the components:

${\displaystyle A=\left|{\begin{vmatrix}a_{1}&a_{2}\\b_{1}&b_{2}\end{vmatrix}}\right|.}$

Two given points ${\displaystyle P_{1}=(x_{1},y_{1}),\;P_{2}=(x_{2},y_{2})}$ and an arbitrary third point ${\displaystyle P=(x,y)}$ are on one straight line (collinear) if and only if the vector from ${\displaystyle P_{1}}$ to ${\displaystyle P_{2}}$ and the vector from ${\displaystyle P_{1}}$ to ${\displaystyle P}$ span no parallelogram, i.e., a parallelogram with area zero, i.e., the vectors themselves are collinear.

The vector from point ${\displaystyle P_}$ to point ${\displaystyle P_}$ can be expressed as

${\displaystyle P_{2}-P_{1}=(x_{2},y_{2})-(x_{1},y_{1})=(x_{2}-x_{1},y_{2}-y_{1})}$

and similarly the vector from point ${\displaystyle P_{1}}$ to an arbitrary point ${\displaystyle P}$ is

${\displaystyle P-P_{1}=(x,y)-(x_{1},y_{1})=(x-x_{1},y-y_{1}).}$

Equating the exterior product of these two vectors, as specified above, to zero, yields a linear equation

${\displaystyle {\begin{vmatrix}x-x_{1}&y-y_{1}\\x_{2}-x_{1}&y_{2}-y_{1}\end{vmatrix}}=0,}$

which is identical to the determinant form above.
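The vanishing determinant gives a direct collinearity test; a minimal sketch (the helper name is hypothetical, and exact arithmetic is assumed for the equality check):

```python
def collinear(p, p1, p2) -> bool:
    """True if p lies on the line through p1 and p2: the 2x2 determinant
    (x - x1)(y2 - y1) - (y - y1)(x2 - x1) vanishes exactly when the
    spanned parallelogram has area zero.
    """
    (x, y), (x1, y1), (x2, y2) = p, p1, p2
    return (x - x1) * (y2 - y1) - (y - y1) * (x2 - x1) == 0

# (2, 3) is on the line y = x + 1 through (0, 1) and (4, 5):
print(collinear((2, 3), (0, 1), (4, 5)))  # → True
```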

### Matrix form

Writing a linear equation in two unknowns in the form

${\displaystyle Ax+By=C,}$

and considering the collection of coefficients ${\displaystyle (A,B)}$ as a ${\displaystyle (1,2)}$-matrix and the collection of variables ${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}}$ as a ${\displaystyle (2,1)}$-matrix, their matrix product equals the ${\displaystyle (1,1)}$-matrix ${\displaystyle {\begin{pmatrix}C\end{pmatrix}}:}$

${\displaystyle {\begin{pmatrix}A&B\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}C\end{pmatrix}}.}$

This notation can easily be extended to more linear equations in more than two variables. For example, a system of two equations in two variables

${\displaystyle A_{1}x+B_{1}y=C_{1},}$
${\displaystyle A_{2}x+B_{2}y=C_{2},}$

can be denoted with a ${\displaystyle (2,2)}$-matrix and a ${\displaystyle (2,1)}$-matrix for the coefficients, by equating the matrix product of the ${\displaystyle (2,2)}$ coefficient matrix with the ${\displaystyle (2,1)}$ variable matrix to the ${\displaystyle (2,1)}$-matrix of the constant terms:

${\displaystyle {\begin{pmatrix}A_{1}&B_{1}\\A_{2}&B_{2}\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}C_{1}\\C_{2}\end{pmatrix}}.}$

A system of three linear equations in four variables would employ a ${\displaystyle (3,4)}$-matrix for the coefficients of the variables, which, multiplied with the ${\displaystyle (4,1)}$ (column) matrix of the variables, is equated to the ${\displaystyle (3,1)}$-matrix of the constant terms. Because of this ready extendability to higher dimensions, matrix notation is a common representation tool for systems of linear equations, in linear algebra and in computer programming. There are named methods for solving systems of linear equations, like Gauss–Jordan elimination, which can be expressed in terms of elementary row operations on matrices.
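For the two-equation case, the matrix form can be solved directly, e.g. by Cramer's rule; a minimal sketch (hypothetical helper; general systems are usually handled by library routines such as `numpy.linalg.solve`):

```python
def solve_2x2(A1, B1, C1, A2, B2, C2):
    """Solve A1*x + B1*y = C1 and A2*x + B2*y = C2 by Cramer's rule.

    Returns (x, y); raises if the coefficient matrix is singular.
    """
    det = A1 * B2 - A2 * B1  # determinant of the (2,2) coefficient matrix
    if det == 0:
        raise ValueError("singular coefficient matrix")
    x = (C1 * B2 - C2 * B1) / det
    y = (A1 * C2 - A2 * C1) / det
    return x, y

# x + y = 3 and x - y = 1  →  x = 2, y = 1
print(solve_2x2(1, 1, 3, 1, -1, 1))  # → (2.0, 1.0)
```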

### Parametric form

The parametric form of a curve is useful, e.g., for describing the movement of a point along this curve and controlling this movement with a single parameter. This setting resembles a task in physics, where a particle starts at time ${\displaystyle t=0}$ at some point in space, say ${\displaystyle (h,k)}$, and travels along the curve, reaching the point ${\displaystyle (p,q)}$ at time ${\displaystyle t=1.}$ With linear equations the curves are restricted to straight lines.

This task can be solved by adding a motion from ${\displaystyle h\to p}$ in the direction of the ${\displaystyle x}$-axis and a simultaneous motion from ${\displaystyle k\to q}$ in the direction of the ${\displaystyle y}$-axis, both motions controlled by the parameter ${\displaystyle t.}$ The motion in the ${\displaystyle x}$-direction can be described as

${\displaystyle x=(p-h)t+h}$

and similarly, the motion in the ${\displaystyle y}$-direction can be described as

${\displaystyle y=(q-k)t+k.}$

These two linear equations, with variables ${\displaystyle (t,x)}$ and ${\displaystyle (t,y)}$, make up a parametric form for a linear equation with variables ${\displaystyle (x,y)}$ that can be constructed from the two-point form with ${\displaystyle (h,k)}$ and ${\displaystyle (p,q)}$ as points.

For ${\displaystyle t=0:\quad (x,y)|_{t=0}=(h,k)\quad }$ and for ${\displaystyle t=1:\quad (x,y)|_{t=1}=(p,q).}$ For all ${\displaystyle t}$ in the interval ${\displaystyle [0,1]}$ the point ${\displaystyle (x,y)|_{t}}$ is on the straight line segment connecting the points for ${\displaystyle t=0}$ and ${\displaystyle t=1.}$ This is sometimes called interpolation. For values of ${\displaystyle t}$ outside this interval, points outside of the segment, but still on the line, are addressed; this is called extrapolation.
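The parametric motion above can be sketched as a small routine (a hypothetical helper, often called `lerp` for linear interpolation):

```python
def lerp(p_start, p_end, t: float):
    """Point on the line through p_start=(h,k) and p_end=(p,q) at parameter t.

    t in [0,1] interpolates between the endpoints; t outside extrapolates.
    """
    (h, k), (p, q) = p_start, p_end
    return ((p - h) * t + h, (q - k) * t + k)

print(lerp((0, 0), (4, 2), 0.5))  # → (2.0, 1.0), the midpoint
```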

### Connection with linear functions

A linear equation, written in the form y = f(x), whose graph passes through the origin (x,y) = (0,0), that is, whose y-intercept is 0, has the following properties:

• Additivity: ${\displaystyle f(x_{1}+x_{2})=f(x_{1})+f(x_{2}),}$
• Homogeneity of degree 1: ${\displaystyle f(ax)=af(x),}$

where a is any scalar. A function which satisfies these properties is called a linear function (or linear operator, or more generally a linear map). However, linear equations that have non-zero y-intercepts, when written in this manner, produce functions which will have neither property above and hence are not linear functions in this sense. They are known as affine functions.
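The two properties can be checked numerically for a sample function through the origin and an affine one (the functions and values are purely illustrative):

```python
f = lambda x: 3 * x      # linear: passes through the origin
g = lambda x: 3 * x + 1  # affine: nonzero y-intercept

print(f(2 + 5) == f(2) + f(5))  # → True  (additivity)
print(f(4 * 2) == 4 * f(2))     # → True  (homogeneity of degree 1)
print(g(2 + 5) == g(2) + g(5))  # → False (affine, not linear)
```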

### Example

An everyday example of the use of different forms of linear equations is computation of tax with tax brackets. This is commonly done with a progressive tax computation, using either point–slope form or slope–intercept form.
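A minimal sketch with invented bracket boundaries and rates (purely illustrative, not any real tax schedule), where the upper bracket uses the point-slope form anchored at the bracket boundary:

```python
def tax(income: float) -> float:
    """Hypothetical two-bracket schedule: 10% up to 10,000; 20% above."""
    if income <= 10_000:
        return 0.10 * income                 # slope 0.10 through the origin
    return 1_000 + 0.20 * (income - 10_000)  # point-slope at (10000, 1000)

print(tax(15_000))  # → 2000.0
```

Anchoring the second bracket at the boundary point keeps the schedule continuous: both pieces agree at income 10,000.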

## More than two variables

For the general case of a linear equation with ${\displaystyle n>2}$ unknowns, the equation may always be assumed to be written, as at the top, as

${\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}+b=0.}$

Sometimes ${\displaystyle b}$ is called the absolute term, and the term coefficient is reserved for the ${\displaystyle a_{i}.}$ A variant to denote ${\displaystyle b,}$ stemming from the use in polynomials, is to write ${\displaystyle a_{0}}$ instead, alluding to the zeroth power of any variable, which always reduces to ${\displaystyle 1.}$

When dealing with ${\displaystyle n=3}$ variables, it is common to use ${\displaystyle x,\;y}$ and ${\displaystyle z}$ instead of indexed variables.

The set of solutions of such an equation is a set of ${\displaystyle n}$-tuples, and each ${\displaystyle n}$-tuple makes the equation an identity, when its components are inserted for the respective unknowns. The values of the variables with zero coefficients are taken arbitrarily from the field of coefficients.

For an equation to have meaningful solutions, at least one coefficient must be non-zero. This can be formulated as

${\displaystyle a_{1}^{2}+a_{2}^{2}+\cdots +a_{n}^{2}=\textstyle \sum _{i=1}^{n}a_{i}^{2}>0.}$

If all coefficients ${\displaystyle a_{i}}$ equal zero, then, as mentioned for one variable, the equation is either inconsistent (for ${\displaystyle b\neq 0}$), and there is no solution, or all ${\displaystyle n}$-tuples are solutions.

The set of solutions (${\displaystyle n}$-tuples) of a linear equation in ${\displaystyle n}$ variables is an ${\displaystyle (n-1)}$-dimensional hyperplane in an ${\displaystyle n}$-dimensional Euclidean space (or affine space if the coefficients are complex numbers or belong to any field). Within the usual setting of real numbers and a three-dimensional space with Cartesian coordinates, the set of the solutions of a linear equation with three variables describes a plane in the intuitive sense.

A given equation may be solved for any variable with a non-zero coefficient. Let ${\displaystyle j}$ be an index such that ${\displaystyle a_{j}\neq 0;}$ then

${\displaystyle x_{j}=-({\tfrac {b}{a_{j}}}+{\tfrac {a_{1}}{a_{j}}}x_{1}+\cdots +0\cdot x_{j}+\cdots +{\tfrac {a_{n}}{a_{j}}}x_{n}).}$

This way the linear equation can be seen as defining a function of ${\displaystyle (n-1)}$ variables, which maps, assuming the setting of reals, the set of ${\displaystyle (n-1)}$-tuples[5] of reals to the real numbers, i.e.:

${\displaystyle x_{j}:\;\mathbb {R} ^{n-1}\to \mathbb {R} .}$
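Solving for ${\displaystyle x_{j}}$ given values of the other variables can be sketched as follows (the helper name and calling convention are hypothetical):

```python
def solve_for(coeffs, b, j, values):
    """Solve a1*x1 + ... + an*xn + b = 0 for x_j (requires coeffs[j] != 0),
    given values for the other variables (values[j] is ignored).
    """
    aj = coeffs[j]
    if aj == 0:
        raise ValueError("coefficient of x_j must be nonzero")
    # sum of b and all terms a_i * x_i with i != j
    s = b + sum(a * v for i, (a, v) in enumerate(zip(coeffs, values)) if i != j)
    return -s / aj

# 2*x1 + 3*x2 - x3 + 4 = 0, solved for x3 with x1 = 1, x2 = 2:
print(solve_for([2, 3, -1], 4, 2, [1, 2, 0]))  # → 12.0
```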

## Notes

1. ^ Barnett, Ziegler & Byleen 2008, p. 15
2. ^ Larson & Hostetler 2007, p. 25
3. ^ Larson & Hostetler 2007, p. 29
4. ^ a b Wilson & Tracey 1925, pp. 52–53
5. ^ The ${\displaystyle (n-1)}$-tuples are ordered to represent the removal of ${\displaystyle j}$ from the sequence ${\displaystyle 1,\ldots ,n}$.

## References

• Barnett, R.A.; Ziegler, M.R.; Byleen, K.E. (2008), College Mathematics for Business, Economics, Life Sciences and the Social Sciences (11th ed.), Upper Saddle River, N.J.: Pearson, ISBN 0-13-157225-3
• Larson, Ron; Hostetler, Robert (2007), Precalculus: A Concise Course, Houghton Mifflin, ISBN 978-0-618-62719-6
• Wilson, W.A.; Tracey, J.I. (1925), Analytic Geometry (revised ed.), D.C. Heath