## What is 'Multiple Linear Regression - MLR'

Multiple linear regression (MLR) is a statistical technique that uses several explanatory variables to predict the outcome of a response variable. The goal of MLR is to model the linear relationship between the explanatory (independent) variables and the response (dependent) variable.

The model for MLR, given $n$ observations, is:

$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \epsilon_i, \quad i = 1, 2, \ldots, n$$

## BREAKING DOWN 'Multiple Linear Regression - MLR'

A simple linear regression is a function that allows an analyst or statistician to make predictions about one variable based on what is known about another variable. Simple linear regression can be used only when there are two continuous variables: an independent variable and a dependent variable. The independent variable is the parameter used to calculate the dependent variable, or outcome. For example, an analyst may want to know how the movement of the market affects the price of Exxon Mobil (XOM). In this case, the linear equation will have the value of the S&P 500 index as the independent variable, or predictor, and the price of XOM as the dependent variable.
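As a concrete illustration, here is a minimal sketch of such a simple regression fit with NumPy; the index levels and prices are made up for the example, not real market data:

```python
import numpy as np

# Hypothetical daily values: S&P 500 index level (predictor) and XOM price (response).
# These numbers are illustrative only, not real market data.
sp500 = np.array([2700.0, 2725.0, 2710.0, 2760.0, 2780.0, 2750.0])
xom = np.array([80.1, 81.0, 80.4, 82.2, 82.9, 81.8])

# Fit y = b0 + b1 * x by ordinary least squares.
b1, b0 = np.polyfit(sp500, xom, deg=1)
print(f"intercept b0 = {b0:.3f}, slope b1 = {b1:.5f}")

# Predict the XOM price for a new index level.
print(f"predicted XOM at S&P 500 = 2770: {b0 + b1 * 2770:.2f}")
```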

In reality, there are multiple factors that predict the outcome of an event. The price movement of Exxon Mobil, for example, depends on more than just the performance of the overall market. Other predictors such as the price of oil, interest rates, and the price movement of oil futures can affect the price of XOM and stock prices of other oil companies. To understand a relationship in which more than two variables are present, a multiple linear regression is used.

Multiple linear regression (MLR) is used to determine a mathematical relationship among a number of random variables. In other words, MLR examines how multiple independent variables are related to one dependent variable. Once each independent factor has been determined to predict the dependent variable, the information on the multiple variables can be used to estimate how strongly each one affects the outcome variable. The model creates a relationship in the form of a straight line (linear) that best approximates all the individual data points; with more than one predictor, this line generalizes to a plane or hyperplane.

The model for multiple linear regression is: $y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \epsilon_i$

where:

- $y_i$ = dependent variable: the price of XOM
- $x_{i1}$ = independent variable: interest rates
- $x_{i2}$ = independent variable: oil price
- $x_{i3}$ = independent variable: value of the S&P 500 index
- $x_{i4}$ = independent variable: price of oil futures
- $\epsilon_i$ = random error in the prediction, that is, variance that cannot be explained by the model (its sample estimate in a fitted model is called the residual)
- $\beta_0$ = y-intercept: the predicted value of $y_i$ when all independent variables equal zero
- $\beta_1$ = regression coefficient measuring the change in the dependent variable per unit change in $x_{i1}$, holding the other variables constant: the change in the XOM price when interest rates change
- $\beta_2$ = coefficient measuring the change in the dependent variable per unit change in $x_{i2}$: the change in the XOM price when oil prices change
- and so on for $\beta_3$ and $\beta_4$

The least squares estimates of $\beta_0, \beta_1, \beta_2, \ldots, \beta_p$ are usually computed by statistical software. Many variables can be included in the regression model, with each independent variable indexed by a number: $1, 2, 3, \ldots, p$. The multiple regression model allows an analyst to predict an outcome based on the information provided about multiple explanatory variables. Still, the model is not always perfectly accurate, as each data point can differ slightly from the outcome predicted by the model. The error term, $\epsilon_i$, which is the difference between the actual outcome and the predicted outcome, is included in the model to account for such slight variations.
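To make the estimation step concrete, here is a minimal sketch of how the least squares estimates could be computed directly with NumPy; the data is simulated and the coefficient values are arbitrary assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of observations

# Hypothetical predictors: interest rates, oil price, S&P 500 level, oil futures.
X = rng.normal(size=(n, 4))

# Simulate a "true" relationship plus random error (for illustration only).
true_beta = np.array([2.0, -1.5, 7.8, 3.0, 0.5])  # [B0, B1, B2, B3, B4]
X_design = np.column_stack([np.ones(n), X])        # prepend a column of 1s for B0
y = X_design @ true_beta + rng.normal(scale=0.5, size=n)

# Least squares estimates of B0..B4.
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("estimated coefficients:", np.round(beta_hat, 3))
```

In practice, an analyst would hand this step to a statistics package, which also reports standard errors and diagnostics alongside the point estimates.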

The multiple regression model is based on the following assumptions:

- There is a linear relationship between the dependent variable and the independent variables
- The independent variables are not too highly correlated with each other (a simple check is sketched after this list)
- The $y_i$ observations are selected independently and randomly from the population
- Residuals are normally distributed with a mean of 0 and a constant variance, $\sigma^2$
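Two of these assumptions lend themselves to quick numerical spot checks. The sketch below, using simulated data, computes a variance inflation factor (VIF) for each predictor to gauge multicollinearity and applies a Shapiro-Wilk test to the residuals; the VIF rule of thumb and the choice of normality test are common conventions, not requirements of the model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200

# Hypothetical predictors and response (simulated data, as before).
X = rng.normal(size=(n, 3))
X_design = np.column_stack([np.ones(n), X])
y = X_design @ np.array([1.0, 2.0, -0.5, 0.8]) + rng.normal(scale=0.4, size=n)

# Fit by least squares and form the residuals.
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
residuals = y - X_design @ beta_hat

# Multicollinearity check: variance inflation factor (VIF) for each predictor.
# VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing predictor j on the others.
for j in range(X.shape[1]):
    others = np.delete(X, j, axis=1)
    others_design = np.column_stack([np.ones(n), others])
    coef, *_ = np.linalg.lstsq(others_design, X[:, j], rcond=None)
    fitted = others_design @ coef
    r2_j = 1 - np.sum((X[:, j] - fitted) ** 2) / np.sum((X[:, j] - X[:, j].mean()) ** 2)
    print(f"VIF for predictor {j + 1}: {1 / (1 - r2_j):.2f}")  # values near 1 suggest low collinearity

# Normality check on the residuals: Shapiro-Wilk test (one common choice).
stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p_value:.3f}  (large p is consistent with normal residuals)")
```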

The coefficient of determination, R-squared or $R^2$, is a statistical metric that measures how much of the variation in the outcome can be explained by the variation in the independent variables. $R^2$ always increases as more predictors are added to the MLR model, even though the added predictors may not be related to the outcome variable. Therefore, $R^2$ by itself cannot be used to identify which predictors should be included in a model and which should be excluded. $R^2$ can only be between 0 and 1, where 0 indicates that the outcome cannot be predicted by any of the independent variables and 1 indicates that the outcome can be predicted without error from the independent variables.
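The sketch below demonstrates both points with simulated data: it computes $R^2$ from its definition and shows that appending a pure-noise column, unrelated to the outcome by construction, still does not lower it:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# One predictor actually related to y, plus pure noise as a second "predictor".
x_real = rng.normal(size=n)
x_noise = rng.normal(size=n)  # unrelated to y by construction
y = 3.0 + 2.0 * x_real + rng.normal(scale=1.0, size=n)

def r_squared(X, y):
    """R^2 = 1 - SS_residual / SS_total for an OLS fit with intercept."""
    X_design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    resid = y - X_design @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

r2_one = r_squared(x_real.reshape(-1, 1), y)
r2_two = r_squared(np.column_stack([x_real, x_noise]), y)
print(f"R^2 with one real predictor:      {r2_one:.4f}")
print(f"R^2 after adding a noise column:  {r2_two:.4f}  (never lower)")
```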

Assume we run our XOM price regression model through statistical software, which returns estimated coefficients and summary statistics.

An analyst would interpret this output to mean that, if the other variables are held constant, the price of XOM will increase by 7.8% if the price of oil in the markets increases by 1%. The model also shows that the price of XOM will decrease by 1.5% following a 1% rise in interest rates. The $R^2$ value indicates that 86.5% of the variation in the stock price of Exxon Mobil can be explained by changes in the interest rate, oil price, oil futures, and the S&P 500 index.
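A hedged sketch of how such a model might be run with the statsmodels library follows. The data is simulated, with the interest rate and oil price coefficients seeded to roughly echo the figures quoted above purely for illustration, so this is not a reproduction of the actual output:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 250  # roughly one year of trading days

# Simulated daily percentage changes (hypothetical stand-ins for real market data).
df = pd.DataFrame({
    "interest_rates": rng.normal(0, 0.1, n),
    "oil_price": rng.normal(0, 1.0, n),
    "sp500": rng.normal(0, 0.8, n),
    "oil_futures": rng.normal(0, 1.2, n),
})
# Seed the "true" relationship with the coefficients quoted above (illustration only).
df["xom"] = (-1.5 * df["interest_rates"] + 7.8 * df["oil_price"]
             + 0.4 * df["sp500"] + 0.2 * df["oil_futures"]
             + rng.normal(0, 1.0, n))

# Fit the MLR model: xom ~ interest_rates + oil_price + sp500 + oil_futures.
X = sm.add_constant(df[["interest_rates", "oil_price", "sp500", "oil_futures"]])
model = sm.OLS(df["xom"], X).fit()
print(model.summary())  # coefficient table, R-squared, and diagnostics
```

The printed summary contains the coefficient table and $R^2$ that an analyst would read off in an interpretation like the one above.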