Master the art of face swapping with OpenCV and Python by Sylwek Brzęczkowski, developer at TrustStamp – Packt Hub

Posted: December 13, 2019 at 2:33 pm

No discussion on image processing can be complete without talking about OpenCV. Its 2500+ algorithms, extensive documentation and sample code are considered world-class for exploring real-time computer vision. OpenCV supports a wide variety of programming languages such as C++, Python, Java, etc., and is also available on different platforms including Windows, Linux, OS X, Android, and iOS.

OpenCV-Python, the Python API for OpenCV, is one of the most popular libraries used to solve computer vision problems. It combines the best qualities of the OpenCV C++ API and the Python language. The OpenCV-Python library uses NumPy, a highly optimized library for numerical operations with a MATLAB-style syntax. This makes it easy to integrate the Python API with other libraries that build on NumPy, such as SciPy and Matplotlib, and is one reason why so many developers use it for computer vision experiments.
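As a quick illustration (not from the talk), the sketch below shows that an OpenCV-Python image is just a NumPy array, so it can be handed directly to NumPy and Matplotlib; the file name is a placeholder:

```python
# Minimal sketch: an OpenCV-Python image is a NumPy array.
# "face.jpg" is a placeholder file name.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("face.jpg")            # ndarray of shape (H, W, 3), dtype uint8, BGR order
print(type(img), img.shape, img.dtype)

mean_intensity = np.mean(img)           # plain NumPy operations work on the image directly
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))   # convert BGR to RGB for display
plt.title("mean intensity: %.1f" % mean_intensity)
plt.show()
```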

At the PyData Warsaw 2018 conference, Sylwek Brzęczkowski walked through how to implement a face swap using OpenCV and Python. Face swaps are used by apps like Snapchat to apply various face filters. Brzęczkowski is a Python developer at TrustStamp.

Histogram of oriented gradients (HOG) is a feature descriptor used to detect objects in computer vision and image processing. Brzęczkowski demonstrated how HOG works by sliding a square patch over the image to produce histogram-of-oriented-gradients feature vectors. These feature vectors are then passed to a classifier, which returns the result with the highest-matching samples.

To implement face detection using HOG in Python, the image is first loaded with OpenCV. Next, a frontal face detector object is created for the loaded image with detector = dlib.get_frontal_face_detector(). The detector then returns a vector of rectangles for the detected faces.
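A minimal sketch of this detection step might look as follows; the image file name is a placeholder, and both OpenCV and dlib are assumed to be installed:

```python
# Minimal sketch of HOG-based face detection with dlib.
import cv2
import dlib

image = cv2.imread("face.jpg")                     # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = dlib.get_frontal_face_detector()        # HOG + linear SVM detector
faces = detector(gray, 1)                          # 1 = upsample once to catch smaller faces

for rect in faces:                                 # each rect is a dlib.rectangle
    cv2.rectangle(image, (rect.left(), rect.top()),
                  (rect.right(), rect.bottom()), (0, 255, 0), 2)

cv2.imshow("detected faces", image)
cv2.waitKey(0)
```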

Face landmark detection is the process of finding points of interest in an image of a human face. When dlib is used for facial landmark detection, it returns 68 unique facial landmarks for the whole face. In the algorithm, the value of T equals 0 after the first iteration and increases linearly until it reaches 10 at the final iteration; the image produced at that stage matches the ground truth, which means the iteration can stop. Because of this behaviour, this stage of the process is also called face alignment.

To implement this stage, Brzęczkowski showed how to add a predictor to the Python program using the shape_predictor_68_face_landmarks.dat file, a pre-trained model of around 100 megabytes. This process generally takes a long time, as we tend to pick the biggest, clearest image for detection.
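A minimal sketch of this landmark stage, assuming the shape_predictor_68_face_landmarks.dat file has been downloaded next to the script and the image name is again a placeholder:

```python
# Minimal sketch of 68-point facial landmark detection with dlib's shape predictor.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # ~100 MB model

image = cv2.imread("face.jpg")                     # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

points = []
for rect in detector(gray, 1):
    shape = predictor(gray, rect)                  # 68 landmark points for this face
    points = [(p.x, p.y) for p in shape.parts()]
    for (x, y) in points:
        cv2.circle(image, (x, y), 2, (0, 0, 255), -1)

cv2.imshow("landmarks", image)
cv2.waitKey(0)
```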

The convex hull of a set of points is the smallest convex polygon that encloses all of the points in the set. In other words, for a given set of points, the convex hull is the subset of those points whose polygon contains all of the remaining points. To find the face border in an image, the landmark structure needs a small change: it is passed to the convex hull function with returnPoints set to False, which means the output is a set of indexes. Brzęczkowski then exhibited the face border drawn on the image in blue using the find_convex_hull.py script.
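A minimal sketch of this step with cv2.convexHull, assuming points and image come from the landmark sketch above:

```python
# Minimal sketch: face border as the convex hull of the detected landmarks.
import cv2
import numpy as np

landmarks = np.array(points, dtype=np.int32)                 # 68 (x, y) coordinates

hull_indexes = cv2.convexHull(landmarks, returnPoints=False) # returnPoints=False -> indexes
hull_points = landmarks[hull_indexes.flatten()]              # the border points themselves

cv2.polylines(image, [hull_points], isClosed=True,
              color=(255, 0, 0), thickness=2)                # draw the border in blue
```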

In linear filtering of an image, the value of an output pixel is a linear combination of the values of the input pixels. Brzęczkowski put forth the example of the affine transformation, a type of linear mapping that preserves points, straight lines, and planes. A non-linear filter, on the other hand, produces an output that is not a linear function of its input. He then showed both kinds of transformation applied to his own image, and advised viewers to check the website learnOpenCV.com to learn how to create a non-linear operation with a linear one.
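As a small illustration of such a linear mapping (the triangle coordinates below are made-up example values), an affine warp in OpenCV looks like this:

```python
# Minimal sketch of an affine (linear) warp, the building block later used
# to map each triangle of one face onto the corresponding triangle of the other.
import cv2
import numpy as np

src_tri = np.float32([[10, 10], [100, 20], [40, 90]])        # example source triangle
dst_tri = np.float32([[15, 20], [110, 30], [50, 100]])       # example destination triangle

M = cv2.getAffineTransform(src_tri, dst_tri)                 # 2x3 linear mapping
warped = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))  # apply it to the image
```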

A Delaunay triangulation subdivides a set of points in a plane into triangles such that the points become the vertices of the triangles. In other words, the surface is subdivided into triangles in such a way that no triangle contains another point of the set inside it. Brzęczkowski then demonstrated how the image from the previous stage contains the detected face points, including the mouth points; these points are inserted into a subdivision object, and Delaunay triangulation is run on it to produce a list of triangles. This list is then used to obtain the triangles in the image. After this step, he uses the delaunay_triangulation.py script to draw these triangles on the images.
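A minimal sketch of this triangulation step with OpenCV's Subdiv2D, again reusing points and image from the earlier sketches:

```python
# Minimal sketch of Delaunay triangulation over the detected landmarks.
import cv2

h, w = image.shape[:2]
subdiv = cv2.Subdiv2D((0, 0, w, h))                # bounding rectangle of the plane
for (x, y) in points:
    subdiv.insert((float(x), float(y)))            # insert every detected landmark

triangles = subdiv.getTriangleList()               # each row: x1, y1, x2, y2, x3, y3

def in_rect(p):
    return 0 <= p[0] < w and 0 <= p[1] < h

for t in triangles:
    pts = [(t[0], t[1]), (t[2], t[3]), (t[4], t[5])]
    if all(in_rect(p) for p in pts):               # skip triangles touching the virtual outer vertices
        for a, b in [(0, 1), (1, 2), (2, 0)]:
            cv2.line(image, (int(pts[a][0]), int(pts[a][1])),
                     (int(pts[b][0]), int(pts[b][1])), (0, 255, 0), 1)
```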

To recap, we started by detecting a face using HOG and finding its border using the convex hull, followed by adding mouth points to indicate specific indexes. Next, Delaunay triangulation was applied to obtain all the triangles on the images.

Next, Brzęczkowski begins blending the images using seamless cloning. In image processing, seamless cloning blends a region of a source image into a destination image so that the seam between the two is not visible; among other things, this lets the swapped face take on the skin color of the destination image.

Brzęczkowski then explains Poisson image editing, which works with the gradients of the image rather than the values of its pixels.

To implement the same method in OpenCV, he further demonstrates how the blending call needs the source image, the destination image, a mask, and a center (the location where the cloned part should be placed) to blend the two faces. Brzęczkowski then shows a string of illustrations transforming his image with the images of popular artists like Jamie Foxx, Clint Eastwood, and others.
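A minimal sketch of that blending call; src_face, dst_image, mask and hull_points are assumed to come from the warping and convex-hull steps, and the names are illustrative rather than taken from the talk:

```python
# Minimal sketch of Poisson blending with cv2.seamlessClone.
# mask is white (255) inside the face hull and black elsewhere, same size as src_face.
import cv2

x, y, w, h = cv2.boundingRect(hull_points)         # bounding box of the face hull in dst_image
center = (x + w // 2, y + h // 2)                  # where the cloned face should be placed

output = cv2.seamlessClone(src_face, dst_image, mask, center, cv2.NORMAL_CLONE)
cv2.imshow("face swap", output)
cv2.waitKey(0)
```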

In computer vision, the Lucas-Kanade method is a widely used differential method for optical flow estimation. It assumes that the flow is essentially constant in a local neighborhood of the pixel under consideration and solves the basic optical flow equations for all the pixels in that neighborhood by the least-squares criterion. By combining information from several nearby pixels, the Lucas-Kanade method resolves the inherent ambiguity of the optical flow equation. This method is also less sensitive to noise in an image.

To implement stabilization of the face-swapped image with this method, the optical flow is assumed to be essentially constant in a local neighborhood of the pixel under consideration. In plain terms, if we have a point in the center, we assume that all the points around it, say in a three-by-three pixel neighborhood, have the same optical flow. Thanks to that assumption we have nine equations and only two unknowns.

This makes the computation fairly easy to solve. With this assumption, the optical flow works smoothly as long as we have the previous grayscale frame of the image. This means that for stabilizing face-swapped video with OpenCV, a user needs the previous landmark points of the image along with the current ones. By combining all this information, the actual point becomes a combination of the detected landmark and the predicted landmark.
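A minimal sketch of such a stabilization step with cv2.calcOpticalFlowPyrLK; the variable names and the weighting factor are illustrative, not taken from the talk:

```python
# Minimal sketch: blend freshly detected landmarks with positions predicted
# by Lucas-Kanade optical flow from the previous frame.
import cv2
import numpy as np

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

prev_points = np.float32(prev_landmarks).reshape(-1, 1, 2)    # landmarks from the previous frame
predicted, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, curr_gray, prev_points, None, **lk_params)     # previous and current grayscale frames

detected = np.float32(curr_landmarks).reshape(-1, 1, 2)       # landmarks detected in the current frame
alpha = 0.6                                                    # illustrative weighting factor
stabilized = alpha * detected + (1 - alpha) * predicted        # combined, less shaky landmarks
```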

Thus, by implementing the Lucas-Kanade method for stabilizing the image, Brzęczkowski produces a non-shaky version of his face-swapped image. Watch Brzęczkowski's full video to see a step-by-step implementation of the face-swapping task.

You can learn advanced applications like facial recognition, target tracking, or augmented reality from our book, Mastering OpenCV 4 with Python, written by Alberto Fernández Villán. This book will also help you understand the application of artificial intelligence and deep learning techniques using popular Python libraries like TensorFlow and Keras.
