Linear regression is often the first algorithm you learn when starting out in machine learning or deep learning because it's simple to implement and easy to apply to real problems. The algorithm is widely used in data science and statistics to model the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). Several types of regression techniques are available depending on the data at hand. Although linear regression involves simple mathematical logic, its applications span many different fields. In this article, we'll briefly discuss linear regression and its applications, then implement it using TensorFlow 2.0.
Regression analysis is used to estimate the relationship between a dependent variable and one or more independent variables. The technique is widely applied to predict outputs, forecast data, analyze time series, and find causal dependencies between variables. There are several types of regression techniques, distinguished by the number of independent variables, the dimensionality of the regression line, and the type of dependent variable. Of these, the two most popular are linear regression and logistic regression.
Researchers use regression to indicate the strength of the impact of multiple independent variables on a dependent variable on different scales. Regression has numerous applications. For example, consider a dataset consisting of weather information recorded over the past few decades. Using that data, we could forecast weather for the next couple of years. Regression is also widely used in organizations and businesses to assess risk and growth based on previously recorded data.
Regression analysis rarely needs to be implemented from scratch. Modern machine learning frameworks like TensorFlow and PyTorch provide built-in libraries that let you proceed directly to implementing your desired application.
The goal of linear regression is to identify the best fit line passing through continuous data by employing a specific mathematical criterion. This technique falls under the umbrella of supervised machine learning. Prior to jumping into linear regression, though, we first should understand what supervised learning is all about.
Machine learning is broadly classified into three types: supervised learning, unsupervised learning, and reinforcement learning. The classification is based on the data we give the algorithm. In supervised learning, we train the algorithm with both input and output data. In unsupervised learning, no output data is given to the algorithm, and it has to learn the underlying patterns by analyzing the input data alone. Finally, reinforcement learning involves an agent taking actions in an environment to maximize its reward in a particular situation; it is concerned with choosing the best possible path for the algorithm to traverse. Now, let's look more closely at linear regression itself.
Linear regression assumes that the relationship between the features and the target vector is approximately linear. That is, the effect (also called coefficient, weight, or parameter) of the features on the target vector is constant. Mathematically, linear regression is represented by the equation y = mx + c + ε.

In this equation, y is our target, x is the data for a single feature, m and c are the coefficients identified by fitting the model, and ε is the error.
Now, our goal is to tune the values of m and c to establish a good relationship between the input variable x and the output variable y. The variable m is the slope of the line, often called the weight, and c is the intercept, often called the bias. Two related concepts govern how well the fitted model generalizes: variance, the amount by which the estimate of the target function would change if different training data were used, and bias, the algorithm's tendency to consistently learn the wrong thing by not taking into account all the information in the data. For the model to be accurate, bias needs to be low. If there are inconsistencies or missing values in the dataset, bias increases. Hence, we must properly preprocess the data before we train the algorithm.
The two main quantities we use to evaluate linear regression models are accuracy and error. For a model to be highly accurate with minimum error, we need to achieve both low bias and low variance. We partition the data into training and testing datasets to check that the model generalizes rather than memorizes, keeping bias in check and ensuring accuracy.
Before we build a supervised machine learning model, all we have is data comprising inputs and outputs. To estimate the dependency between them using linear regression, we start by picking random initial values for the weight m and the bias c. We then take a sample from the dataset, feed its input value into the equation y = mx + c, and predict a new value. Finally, we calculate the loss incurred by the predicted value using a loss function.

The initial values of m and c are random, so they must be updated to minimize the error. The loss function serves as our metric for evaluating the model, and our goal is to obtain the line that best reduces the error.
The most common loss function used is mean squared error (MSE). It is mathematically represented as

MSE = (1/n) * Σᵢ (yᵢ − ŷᵢ)²

where yᵢ is the true value, ŷᵢ = mxᵢ + c is the predicted value, and n is the number of data points.
If we didn't square the error, the positive and negative deviations would cancel each other out. The standard mathematical definitions of bias and variance are as follows:

Bias[f̂(x)] = E[f̂(x)] − f(x)

Variance[f̂(x)] = E[f̂(x)²] − (E[f̂(x)])²

where f is the true target function and f̂ is the model's estimate of it.
When we train a network to find the ideal weight and bias, different values yield different errors. Out of all the values, there will be one point where the error is minimized, and the parameters corresponding to that point yield the optimal solution. This is where gradient descent comes into the picture.
Gradient descent is an optimization algorithm that finds the values of the parameters (coefficients) of a function f that minimize a cost function. The learning rate defines how quickly the parameters are updated: it controls how much we adjust the weights of our network with respect to the loss gradient. The lower the value, the smaller the steps we take down the slope of the loss surface at each update.
Both the m and c values are updated as follows:

m = m − α * ∂E/∂m, where ∂E/∂m = −(2/n) * Σᵢ xᵢ (yᵢ − ŷᵢ)

c = c − α * ∂E/∂c, where ∂E/∂c = −(2/n) * Σᵢ (yᵢ − ŷᵢ)

Here, α is the learning rate and E is the mean squared error defined above.
Once the model is trained and achieves a minimum error, we can fix the values of m and c. When plotted, the resulting best-fit line passes through the data points as closely as possible.
So far, we've covered the fundamentals of linear regression, and now it's time to implement one. We could use any of several data science and machine learning libraries to import linear regression functions or APIs directly and apply them to the data. In this section, however, we will build a model with TensorFlow based on the math discussed in the previous sections. The code is organized as a sequence of steps. You can follow along with these chunks of code on your local machine or on a cloud platform like Paperspace or Google Colab. On a local machine, make sure to install Python and TensorFlow. If you are using Google Colab notebooks, TensorFlow comes preinstalled. To install any other modules, like sklearn or matplotlib, you can use pip. Make sure you add an exclamation mark (!) as a prefix to the pip command, which lets you access the terminal from the notebook.
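For example, in a notebook cell, installing the extra modules might look like this (package names shown are the usual PyPI ones):

```python
# The leading "!" runs the command in the notebook's shell.
!pip install matplotlib scikit-learn
```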
Step 1: Importing the Necessary Modules
First and foremost, we need to import all the necessary modules and packages. In Python, we use the import keyword to do this. We can also alias them using the keyword as. For example, to create a TensorFlow variable, we import TensorFlow first, followed by the class tensorflow.Variable(). If we create an alias for TensorFlow as tf, we can create the variable as tf.Variable(). This saves time and makes the code look clean. We then import a few other methods from the __future__ library to help port our code from Python 2 to Python 3. We also import numpy to create a few samples of data. We declare a variable rng with np.random, which is later used to initialize random weights and biases.
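A minimal version of those imports might look like the following (the __future__ imports simply keep the code compatible across Python 2 and 3):

```python
from __future__ import absolute_import, division, print_function

import tensorflow as tf
import numpy as np

# Shorthand for NumPy's random module, used later to initialize W and b.
rng = np.random
```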
Step 2: Creating a Random Dataset
The second step is to prepare the data. Here, we use numpy to initialize both the input and output arrays. We also need to make sure that both arrays are the same shape, so that each element in the input array corresponds to the element at the same index in the output array. Our goal is to identify the relationship between each pair of corresponding elements using linear regression. Below is a code snippet that loads the input values into variable X and the output values into variable Y.
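A sketch of such a dataset (the specific values here are illustrative toy data, not prescribed by the article):

```python
# Toy training data: two arrays of equal shape, roughly linearly related.
X = np.array([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
              2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
Y = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
              1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
```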
Step 3: Setting up the Hyperparameters
Hyperparameters are core components of any neural network architecture because they govern how the model trains and, ultimately, how accurate it becomes. In the code snippet below, we define the learning rate, the number of training steps, and how often to display progress. You can also experiment by tweaking these hyperparameters to achieve greater accuracy.
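One plausible set of values (these particular numbers are a starting point, not a requirement):

```python
learning_rate = 0.01   # Step size for gradient descent.
training_steps = 1000  # Total number of optimization iterations.
display_step = 50      # Print progress every 50 steps.
```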
Step 4: Initializing Weights and Biases
Now that the hyperparameters are in place, let's initialize the weights and biases with random values. We do this using the rng variable declared earlier. We define two TensorFlow variables, W and b, and set them to a random weight and bias, respectively, using the tf.Variable class.
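For instance:

```python
# Initialize weight and bias with random draws from a standard normal.
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
```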
Step 5: Defining Linear Regression and Cost Function
Here comes the essential component of our code! We now define linear regression as a simple function, linear_regression. The function takes an input x as a parameter and returns the weighted sum, weights * inputs + bias. This function is later called in the training loop while training the model on the data. Further, we define the loss as a function called mean_square. This function takes a predicted value, as returned by linear_regression, and a true value picked from the dataset. We then use TensorFlow operations to replicate the mean squared error equation discussed above and return the computed value.
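Those two functions might be written as follows, as a direct translation of y = Wx + b and the MSE formula:

```python
def linear_regression(x):
    # Weighted sum: W * x + b.
    return W * x + b

def mean_square(y_pred, y_true):
    # Mean squared error between predictions and ground truth.
    return tf.reduce_mean(tf.square(y_pred - y_true))
```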
Step 6: Building Optimizers and Gradients
We now define our optimizer as stochastic gradient descent (SGD) and pass the learning rate to it as a parameter. Next, we define the optimization process as a function, run_optimization, where we calculate the predicted values and the loss they generate using the linear_regression() and mean_square() functions defined in the previous step. Thereafter, we compute the gradients and update the weights. This function is invoked in the training loop that we'll discuss in the next section.
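In TensorFlow 2.0, that optimization step can be expressed with a GradientTape, which records the forward pass so gradients can be computed automatically (a sketch, assuming the X, Y, W, and b defined earlier):

```python
optimizer = tf.optimizers.SGD(learning_rate)

def run_optimization():
    # Record the computation so gradients can be derived from it.
    with tf.GradientTape() as tape:
        pred = linear_regression(X)
        loss = mean_square(pred, Y)
    # Compute gradients of the loss w.r.t. W and b, then apply the updates.
    gradients = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(gradients, [W, b]))
```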
Step 7: Constructing the Training Loop
This step ties the whole training process together. We have set all the parameters and declared our model, loss function, and optimization function. In the training loop, we stack all of these and iterate over the data for a given number of steps. The model gets trained, and with every iteration the weights get updated. Once the total number of iterations is complete, we obtain the ideal values of W and b.
Let's work through the code chunk below. We write a simple for loop in Python and iterate until the total number of steps is complete. At each step we run our optimization by invoking the run_optimization method, where the weights get updated using the previously defined SGD rule. We then periodically print the step number and the loss, along with the current parameter values.
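A minimal training loop in that spirit:

```python
for step in range(1, training_steps + 1):
    run_optimization()  # One SGD update of W and b.

    if step % display_step == 0:
        pred = linear_regression(X)
        loss = mean_square(pred, Y)
        print("step: %i, loss: %f, W: %f, b: %f"
              % (step, loss, W.numpy(), b.numpy()))
```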
Step 8: Visualizing Linear Regression
To conclude the code, we visualize the best-fit line using the matplotlib library.
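For example (assuming the X, Y, W, and b from the previous steps):

```python
import matplotlib.pyplot as plt

# Plot the training data and the fitted line on the same axes.
plt.plot(X, Y, "ro", label="Original data")
plt.plot(X, np.array(W * X + b), label="Fitted line")
plt.legend()
plt.show()
```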
Linear regression is a powerful statistical technique that can generate insights into consumer behavior, help us understand business better, and identify factors that influence profitability. It can also be put to use evaluating trends and forecasting data in a variety of fields. We can use linear regression to solve day-to-day problems such as supporting decision-making, minimizing errors, increasing operational efficiency, discovering new insights, and building predictive analytics.
In this article, we reviewed how linear regression works, along with its implementation in TensorFlow 2.0. This method sets a baseline from which to further explore other ways of building machine learning algorithms. Now that you have a handle on linear regression and TensorFlow 2.0, you can experiment further with other frameworks and various datasets to see how each one fares.
Vihar Kurama is a machine learning engineer who writes regularly about machine learning and data science.