Retraction: A Hybrid Model to Ensure Biosecurity during Pandemic Situations Using OpenCV

This paper presents a hybrid model consisting of hardware and software subsystems and publicly available trained feature sets. The hybrid model automates the contactless collection and analysis of employees' and visitors' data in an organization, especially during pandemic situations, to ensure biosecurity. The collected data include body temperature and face mask status. If the set norms are not satisfied, entry into the premises is restricted or denied, and the status is updated in the corresponding record in the organization's database. The hardware subsystem includes an Arduino Nano, sensors and audio-visual alarms. The software subsystem was developed in Python with OpenCV, using the VS Code editor. Both offline and real-time implementations were carried out, and the model was validated using real-time images and online data sources. The system was tested and found to work satisfactorily under practical input conditions.


Introduction
The main objective of this project is to build an automated contactless system that screens employees and visitors and collects and saves their data for management and analytics purposes. Such a system is useful in organisations like educational institutions, health care centres and industries. The system involves two levels of scanning: in the first level, the employee or visitor is verified; in the second level, their temperature and mask status are checked. The project gains added importance in the aftermath of the COVID-19 pandemic, which has increased the need for contactless processes in day-to-day activities.

Related Work
This section describes some recent works in the related field. The authors of [1] propose a system to identify people who are not wearing a protective face mask in a smart-city network monitored by Closed-Circuit Television (CCTV) cameras and to inform the authority in charge for necessary action. A deep learning Convolutional Neural Network (CNN) architecture [2] is used, with a reported recognition accuracy of 98.7%. Datasets were collected from [3, 4] for training and testing the model, with a total of 1539 samples split 80% and 20% between the training and testing phases respectively.
The authors of [5] discuss masked face recognition using CNN. They used the AR face database [6], the IIIT-Delhi Disguise Version 1 Face Database [7] and their own Masked Face Dataset (MFD). The MFD has 45 subjects in seven disguises and various backgrounds, with a total of 990 images, and accuracies were reported for these datasets. The authors of [9] discuss a system to identify suspicious individuals, consisting of a Raspberry Pi Zero, a camera module, a capacitive touch sensor and an organic light-emitting diode (OLED) display. This system uses a Haar Cascade Classifier (HCC) for face detection followed by Local Binary Pattern Histogram (LBPH) facial recognition using OpenCV. With an image dataset of four subjects and ten images per subject, they reported recognition accuracy in the range of 55-65%. Alcantara et al. [10] discuss head detection and tracking using OpenCV, obtaining accuracies of 71.40% to 90%.
The authors of [11] describe various face recognition algorithms such as Haar Cascade, Eigenfaces, Fisherfaces and Local Binary Pattern (LBP). Also described in that paper is a Principal Component Analysis (PCA) based facial recognition system, which provides a reduced representation of the data. A deep learning based face mask and physical distancing detection system is discussed in [12]. [13] describes an implementation of object detection using HCC [14] and the OpenCV library [15-17]. HCC is based on a machine learning approach in which a cascade function is trained from both positive and negative images; the trained function is then used to detect objects in unseen (test) images. The implementation uses haarcascade_frontalface_default.xml [18] to detect individuals' faces.
The authors of [19] discuss a facemask-wearing condition identification method combining image Super-Resolution and Classification Networks (SRCNet). The system was trained and evaluated on the public-domain Medical Mask Dataset containing 3835 images: 671 images without a facemask, 134 images with an incorrectly worn facemask and 3030 images with a correctly worn facemask. They reported a detection accuracy of 98.7%.
The authors of [20] discuss a binary face classifier that detects faces in an image irrespective of alignment. Training is performed with fully convolutional networks that semantically segment the faces present in an image. Semantic segmentation is the process of assigning a label to each pixel of the image, i.e., either face or no-face. The authors report that their model works well for both frontal and non-frontal face images. The Visual Geometry Group (VGG-16) architecture is used as the base network for face detection. The initial image fed to the model is of size 224×224×3; after the final max pooling layer, the feature map is reduced to 28×28×2. Experiments were performed on the Multi-Human Parsing Dataset containing about 5000 images, and an accuracy of 93.8% was reported.
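The size reduction quoted above (a 224×224 input shrinking to a 28×28 map) corresponds to three 2×2 max-pooling stages, each halving the spatial dimensions. A one-line bookkeeping check:

```python
# Each 2x2 max-pool with stride 2 halves the spatial dimensions.
# Three such poolings take a 224x224 input down to 28x28.
def after_pools(size, n_pools, stride=2):
    for _ in range(n_pools):
        size //= stride
    return size

print(after_pools(224, 3))  # 28
```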
A mechanism to implement real time face mask detection and alert is given in [21]. The datasets for 'with_mask' and 'without_mask' categories are available in [22]. A few more online data sources can be found in [23].
In the work reported in this paper, the authors present two models, each utilising hardware and software subsystems (hence called hybrid), for safe analysis of data through contactless collection and processing. Model 1 uses a single trained feature set and assumes that the input is invariably a face image. Model 2 is more robust: it is built on a multi-layered filtering approach in which each layer filters out non-facial features, and it uses five different, independently trained feature sets whose collective decision is used to calculate the accuracy. The models were tested for both offline and online operation. Validation was done using online image datasets; a subset of these datasets, described in section 4 of this paper, was also used for validation. The authors additionally created a small dataset of 20 images, with and without masks, for offline and real-time testing of the model.
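The paper does not specify how Model 2 combines the outputs of its five feature sets beyond calling it a "collective decision". A simple majority vote is one plausible combination rule; the sketch below is hypothetical, not the authors' stated method:

```python
# Hypothetical combination rule for Model 2's five independently
# trained feature sets: a simple majority vote. Each vote is the
# boolean output (mask detected or not) of one feature set.
def collective_decision(votes):
    """Return True if more than half of the feature sets vote True."""
    return sum(votes) > len(votes) / 2

print(collective_decision([True, True, True, False, False]))   # True
print(collective_decision([True, False, False, False, False])) # False
```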

Methodology
The overall block diagram of the hybrid model is shown in Figure 1. The safe data analysis model consists of two levels of scanning. In the first level, contactless checking of employees and visitors is done using Quick Response (QR) code scanning, which uses a 2D pictorial code to store the data. Python supports QR code scanning with the help of libraries like pyzbar and zbar. After the first level of scanning, the employee/visitor enters a lobby where a camera scans the face image. The second level of scanning checks for the mask and ensures that the temperature is within the safe limits. A database is used to store and retrieve the data for analysis.

Steps involved in QR code scanning for the Employee
a) Each employee has an identity (ID) card with a built-in unique QR code containing his employee id.
b) The employee scans the QR code on his ID card against a camera connected to the computer.
c) A Python program on the computer reads this code and obtains the employee id.
d) The program fetches the employee's details from the database and verifies his identity.
e) The computer recognizes the internal employee, as details like identity number, name, designation, department, mobile number and email id are already available in the organisation's database.
f) The employee is then allowed to proceed to the second level of scanning.
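Steps c) to e) above amount to decoding the QR payload and looking the id up in the organisation's database. A minimal sketch follows, with a hypothetical in-memory dictionary standing in for the real database; in the actual system the payload string would come from a QR decoder such as pyzbar applied to the camera frame:

```python
# Hypothetical stand-in for the organisation's employee database.
# Record fields follow the details listed in step e).
EMPLOYEE_DB = {
    "EMP001": {"name": "A. Kumar", "designation": "Professor",
               "department": "EEE"},
}

def verify_employee(decoded_qr_payload):
    """Return the employee record if the decoded id is known, else None.

    decoded_qr_payload: the employee id string read from the QR code.
    """
    return EMPLOYEE_DB.get(decoded_qr_payload)

print(verify_employee("EMP001") is not None)  # True: proceed to level 2
print(verify_employee("UNKNOWN") is not None) # False: entry denied
```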

Steps involved in QR code scanning for the Visitor
a) The visitor scans the static QR code meant for visitors (displayed at the main entrance) using his mobile.
b) Once scanned, an electronic form (e-form) opens on his mobile. This operation requires internet connectivity.
c) The visitor fills in the e-form with his name, organisation, purpose of visit, department to visit, mobile number and email id, and submits it.
d) On successful submission, the visitor receives an id as a QR code on his mobile. This is a temporary id for the visitor.
e) The visitor scans the allotted temporary id (or token) against the camera connected to the computer.
f) The computer verifies the temporary id in the database.
g) On successful verification, the visitor is allowed to enter the lobby for the second level of scanning.

Figure 2 shows the block diagram of the hardware used in the model. It consists of an Arduino Nano [24], an ultrasonic sensor (HC-SR04) [25] and a temperature sensor (MLX90614) [26], an infrared thermometer used for non-contact temperature measurement. The range of the ultrasonic sensor lies between 2 cm and 400 cm, with a ranging accuracy of up to 3 mm. An OLED display [27] is used to show the temperature.

Software Implementation and Results
For face mask recognition, the images are processed in the computer using the mask detection algorithm, implemented in the Python [28, 29] programming language with OpenCV [30] library functions. The temperature and mask status are updated in the database against the corresponding record. This data may be monitored regularly by the administrative authority for necessary actions and decision making. A piezo buzzer [31] also sounds whenever the temperature is not within the safe limits. Figure 3 shows the block diagram of the software processes involved in the implementation of the model, and Figure 4(a) and Figure 4(b) show sample outputs. Tables 1 to 6 show the performance efficiency for different scenarios, and Table 7 summarises the performances of Model 1 (M1) and Model 2 (M2). As shown in Table 7, for M2 the accuracy is higher for 'WT' (without mask) than for 'M' (with mask). This is a logically required outcome, as the basic intention is more towards identifying the defaulter. For M2 with D2, the accuracy for M is the lowest at 53.2%, as that dataset contains many complex images. However, for the practical purposes for which the system is intended, people are expected to wear a mask properly and present a frontal face to the camera; therefore, the real-time implementation accuracy was satisfactory. Table 8 compares the implemented work with two recent research works. The present work used a larger, publicly available dataset with a large number of complex images for validation. It works well in real-time operation and is suitable for implementation at organisational check-in points. Temperature checking is also performed, and additional trained feature sets can be used to improve the robustness of the model.
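The temperature gating and buzzer logic described above can be sketched as follows. The safe limit of 37.5 °C is an assumed value for illustration; the paper does not state the threshold used:

```python
# Assumed safe-temperature threshold (not stated in the paper).
SAFE_LIMIT_C = 37.5

def check_temperature(reading_c):
    """Gate entry on a contactless temperature reading.

    Returns (allowed, buzzer_on): entry is allowed when the reading
    is within the safe limit, and the piezo buzzer sounds otherwise.
    """
    safe = reading_c <= SAFE_LIMIT_C
    return safe, not safe

print(check_temperature(36.8))  # (True, False)
print(check_temperature(38.2))  # (False, True)
```

In the deployed system, `reading_c` would be the value reported by the MLX90614 over the Arduino Nano's serial link, and the `allowed` flag would be written to the employee's or visitor's database record alongside the mask status.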

Conclusion
To ensure biosecurity during pandemic situations, this paper discussed an automated system for contactless collection of employees' and visitors' data using a QR code, camera, ultrasonic sensor and temperature sensor. Hybrid models were built, validated and tested successfully for offline and online operation using Python and OpenCV. The model worked satisfactorily for the intended application. As future work, this working model may be integrated and developed into a deployable system.