ARTIFICIAL INTELLIGENCE
PROJECT
FACE COVERINGS DETECTOR
As artificial intelligence develops, it can help solve various issues, especially when new problems suddenly arise. The 2020 pandemic created problems on a global scale as the world struggled to adapt and prevent the virus from spreading. AI can be applied to issues both small and large, one of which is the face-covering mandate. With masks being one of the most significant ways to prevent the spread of the virus, and with wearing one indoors signed into law as a requirement, enforcement has been a problem for many businesses; many have had to place a member of staff on the door solely to enforce the rule. Having a detection device or application in place of that additional staff member would help these businesses in several ways. The aim of this project is to create a model by training it on a dataset containing images of people both with and without face coverings, and then to use that model for detection in a live video stream.
​
The best way to tackle many issues with artificial intelligence is to adapt already-validated systems rather than start a completely new development. Throughout this project, the intent is to build upon existing, well-understood systems such as facial detection. The diagram below shows an overview of the project, featuring all the key elements that make up a complete implementation. The project draws on three related technologies: AI, machine learning (ML) and deep learning (DL). It primarily focuses on machine learning, a subset of AI concerned with getting machines to make decisions by feeding them data, and deep learning, a subset of machine learning that uses neural networks to solve complex problems.

The training phase aims to train a custom deep learning model to detect whether a person is wearing a face covering. In machine learning there are three broad approaches: supervised, unsupervised and reinforcement learning. This project uses supervised learning, the technique of training a program with well-labelled data, here in the form of a dataset. The dataset contains two classes, each with hundreds of images: people wearing a face covering, and people without one or wearing one incorrectly. Feeding the program this data during a well-defined training phase produces a model that can identify a face covering.
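As a rough illustration, a labelled two-class dataset like this could be loaded with Keras as in the minimal sketch below. The directory names, image size and split are assumptions for the example and may differ from the project's actual setup.

import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input size for the model
BATCH_SIZE = 32

# Assumed layout: dataset/with_mask/ and dataset/without_mask/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset",
    validation_split=0.2,      # hold back 20% of images for validation
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)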

By training a custom deep learning model to detect whether an individual is wearing a mask, the resulting model can then be loaded and used in the main phase of the implementation. The model is trained by first using facial landmarks to locate areas of the facial structure; this teaches the program to recognise when critical parts of the face, in this case the nose and mouth, are not visible. The model is built with the Keras and TensorFlow libraries on the dataset, and the resulting model is then used in the actual detection program created in phase 2.
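The sketch below shows one common way such a classifier could be built and trained with Keras, using transfer learning on MobileNetV2. The specific architecture, hyperparameters and saved-model filename are assumptions for illustration, not necessarily what the project uses.

import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,          # drop the ImageNet classification head
    weights="imagenet",
)
base.trainable = False          # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),     # mask / no-mask probability
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# train_ds / val_ds are the labelled datasets prepared in the training phase
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("mask_detector.h5")   # saved model, loaded again in phase 2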

Once the model has been trained, the second phase runs the program to produce real-time detection of face coverings. The device used to run the application requires a working camera, as the camera is accessed when the application starts. Several challenges can occur when detecting faces at runtime, most notably around face alignment: partial occlusion, lighting conditions, head orientation and non-rigid deformation. All of these challenges are the subject of ongoing AI research, and this project will tackle them as far as possible within the time frame, although factors such as lighting and image quality ultimately depend on the device's camera and on the conditions while the program is running.
​
The application works by loading the classifier/model created in phase 1. When the program is run, the device's camera is accessed and its feed is displayed in the top left corner of the screen at a suitable size. The program then detects every face on screen and applies the classifier/model to each one, determining the probability that the person is wearing a face covering. The result is displayed on screen: if a face covering is detected, the bounding box and text are coloured green; if not, they are coloured red.
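A minimal sketch of this real-time detection loop is shown below, using OpenCV. The Haar cascade face detector, the file names and the class ordering are assumptions for the example; the project's actual detector, paths and labelling may differ.

import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("mask_detector.h5")   # model from phase 1
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)        # access the device's camera
while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.1, 5)

    for (x, y, w, h) in faces:
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        face = np.expand_dims(face, axis=0).astype("float32")
        prob = float(model.predict(face, verbose=0)[0][0])

        # green if a covering is detected, red otherwise;
        # which class maps to which output depends on how the dataset was labelled
        wearing = prob < 0.5
        colour = (0, 255, 0) if wearing else (0, 0, 255)
        label = f"Mask: {prob:.2f}" if wearing else f"No mask: {prob:.2f}"
        cv2.rectangle(frame, (x, y), (x + w, y + h), colour, 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)

    cv2.imshow("Face covering detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
        break

cap.release()
cv2.destroyAllWindows()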
This project was created using Python, a high-level programming language, and the video above shows the project working. The code is available to view on my GitHub account.

