Recyclable plastic waste segregation with deep learning based hand-eye coordination

Plastic waste management is a critical concern in municipal solid waste (MSW) management systems worldwide. Despite the efforts of waste management personnel to segregate waste manually, significant challenges persist. In municipal waste facilities, individuals responsible for waste segregation face numerous obstacles. Consequently, a significant amount of plastic waste ends up in landfills, exacerbating the plastic waste problem. To overcome these challenges, this research focuses on developing an automated system capable of categorizing plastic waste based on its visual characteristics. The trained model exhibits high precision in identifying various types of plastic waste, including PET, HDPE, PVC, LDPE, PP, and PS. Specifically, the model achieves an Average Precision of 0.917 and an Average Recall of 0.801. Moreover, the model maintains a good balance between precision and recall. In real-time operation, an overhead camera locates the positions of both the waste items and the gripper. By calculating the positional difference between the waste and the gripper, the system achieves a higher level of segregation accuracy, resembling human-like hand-eye coordination. The proposed system offers a solution to the challenges faced in MSW facilities, where the timely segregation of waste is crucial. By automating the plastic waste categorization process, the system can significantly improve waste management practices, leading to a more sustainable approach to plastic waste disposal and recycling.


Introduction
Municipal Solid Waste (MSW) management continues to rely heavily on manual labour for waste segregation, particularly when it comes to sorting plastic waste. However, the lack of automated segregation devices poses a significant challenge in streamlining waste management processes. The proper segregation of plastic waste holds immense importance due to its environmental impact and recycling potential. Inadequate segregation can lead to adverse consequences, including pollution and disruption to ecosystems [1]. A growing population and rapid urbanization lead to the generation of a substantial amount of plastic waste. It is estimated that India produces around 9.46 million metric tons of plastic waste annually. The mismanagement of this waste stream can have severe consequences for the environment and public health. The motivation for this research stems from the need to improve the waste management process, particularly in the segregation of different waste categories. The waste categories addressed in this work include PET, HDPE, PVC, LDPE, PP, and PS. These waste materials are easily identifiable and sortable by human visual perception. This raises the question of whether machines can be trained to interpret waste materials similarly.
To address these challenges, the introduction of automated systems for plastic waste segregation is crucial. By incorporating technologies such as robotics, computer vision, and deep learning, advanced devices can accurately detect and segregate plastic waste according to recycling categories [2]. These systems have the potential to revolutionize waste management practices by enhancing efficiency, reducing manual labor, and improving recycling rates. This research paper aims to propose a novel approach to plastic waste segregation by integrating a robotic arm system with object detection capabilities. The system utilizes image-based object detection algorithms to identify different types of plastic waste, enabling the robotic arm to position and separate the waste accordingly. The study investigates the feasibility, efficiency, and effectiveness of this proposed system in comparison to traditional manual segregation methods.
Through this research, the significance of automated plastic waste segregation and its potential positive impact on waste management practices will be highlighted. By exploring the challenges and opportunities in this field, this study aims to contribute to the development of sustainable waste management systems that optimize plastic waste recycling and minimize environmental harm.
The remainder of this paper is organized as follows. Section 1 provides an overview of waste management challenges and the need for advanced waste segregation systems, emphasizing the significance of plastic waste segregation. Section 2 reviews the existing literature on waste segregation and robotic applications, highlighting gaps and limitations in current methods. Section 3 describes the issues in manual waste segregation and human limitations, and section 4 gives the outline of the proposed system. The process involved in data collection and training the object detection model is given in section 5. The algorithm used in the work is explained in section 6, and section 7 presents the experimental outcomes and system performance evaluation, showcasing efficiency and the reduction in plastic waste. Section 8 compares the work with others, and finally section 9 concludes by summarizing the key findings and contributions of the work.

Related works
In the field of waste management, numerous studies and research efforts have been undertaken to address the challenges associated with efficient waste handling, recycling, and environmental sustainability. Bobulski et al focus on addressing the challenge of plastic waste management by developing an automated sorting method using deep learning and image processing techniques to segregate plastic waste into different categories, such as PS, PP, PE-HD, and PET, which are commonly found in household waste [3]. Sundaralingam et al developed a recyclable plastic waste classification model which can classify the plastic waste categories and also save the detection information as an annotation file along with real-time images [4]. Koganti et al proposed a model for an automated waste segregation system. The system utilizes a camera and Raspberry Pi mounted on a conveyor belt to detect and classify individual waste items. The software module incorporates a deep learning algorithm, specifically the Single Shot Detector model with MobileNet as the base network, to classify the waste into two categories: biodegradable and non-biodegradable [5]. Liang S introduces a novel approach to waste management through multi-label waste classification and localization in images. To move beyond studies focusing solely on single-label waste classification, the authors developed a multi-task learning architecture based on a convolutional neural network, which combines waste identification and location tasks by utilizing attention modules, a multi-level feature pyramid network, and joint learning multi-task subnets [6]. The study in [7] proposes an automated technique using a deep convolutional neural network for identifying microplastics in microscopic images. The authors of [8] developed a deep learning approach to quantify and classify microplastics in scanning electron microscopy images; the use of deep learning significantly reduces the labour-intensive manual extraction of microplastics, making the process faster, cheaper, and more efficient. A smart waste management system that integrates deep learning and IoT technology to enhance waste segregation and monitoring employs a camera module to detect waste and a servo motor to categorize it into respective waste compartments; an ultrasonic sensor monitors waste fill levels, while a GPS module provides real-time location data [9]. An intelligent sorting bin for recyclable municipal solid waste incorporates an automatic source separation method. The Convolutional Neural Network (CNN), a fast and efficient deep learning technique, is implemented for waste sorting. The Inception-ResNet V2 algorithm, a combination of Inception and Residual Network structures, is utilized for object detection, and the model is trained with PET and LDPE waste samples [10]. Neelakandan et al developed an approach for industrial waste management using metaheuristics and deep transfer learning, which includes waste detection and classification [11]. The authors propose new benchmark datasets, detect-waste and classify-waste, which cover various waste categories. They present a two-stage detector that utilizes EfficientDet-D2 for litter localization and EfficientNet-B2 for waste classification, achieving significant accuracy in waste detection and classification [12]. The authors proposed a framework that combines saliency detection and image classification for robust garbage classification. The framework utilizes a saliency network to identify the garbage target area and a classification network to classify the segmented garbage image. Training is conducted using a fusion of images from the original dataset and saliency detection datasets with complex backgrounds [13]. Puig et al address this issue by integrating a waste recognition system into recycle bins, educating users on proper waste disposal, and providing sorting quality statistics to improve waste collection strategies, and studied the performance of CNNs for waste recognition using a self-gathered dataset obtained from a trash can with an embedded video and lighting system. The results highlight the difficulties CNNs face in distinguishing between similar objects [14]. Sundaralingam et al propose a household waste segregator using the TensorFlow object detection model along with an Arduino microcontroller to segregate waste into five categories. The SSD MobileNet V2 model is trained with a dataset comprising various household waste types, enabling accurate waste classification and segregation into specific dustbins [15]. The model developed in [16] achieves an impressive overall accuracy rate of 95.01%, outperforming several state-of-the-art models. It demonstrates high F1-scores for cardboard, glass, metal, paper, plastic, and litter categories. The study [17] proposes the use of CNN and Graph-LSTM, deep learning methods, for waste detection and classification on a conveyor belt in waste collection systems. Baharuddin et al address solid waste issues arising from urbanization and industrialization, emphasizing recycling for environmental sustainability. Their work focuses on image processing techniques for classifying recyclable dry waste; an intelligent waste classification system is proposed, utilizing image analysis for feature extraction and classification [18]. The author developed an automatic garbage classification system based on computer vision to address the increasing volume of daily garbage; the YOLOv5 object detection algorithm is improved, focusing on faster detection speed and accuracy [19]. Abu-Qdais et al compare traditional machine learning models such as Random Forest and Support Vector Machine with a deep learning CNN. Results indicate superior performance of the CNN, and the work introduces JONET, a deep learning model based on DenseNet 201, for waste classification [20]. Hancu et al discuss aspects related to the selection of robotic system architecture for sorting solid recyclable waste streams. Their work highlights the importance of robotics technology and advanced recognition sensors in increasing sorting efficiency and transitioning waste into a valuable resource, and also presents an optimization algorithm based on performance criteria, providing valuable insights into the area of smart waste management [21]. The system proposed in [22] aims to address the challenges of managing daily waste in urban environments by classifying various types of garbage. It utilizes sensors such as humidity, temperature, gas, and liquid sensors to assess garbage conditions. However, while there have been significant contributions in the field of waste management, it is important to note that many of these works have not been tested with real-time models. While theoretical studies and classification approaches have provided valuable insights, the practical implementation and validation of these methods in real-world scenarios, particularly for the segregation of plastic waste, are still limited. Therefore, there is a need for research that not only explores theoretical concepts but also focuses on the development and testing of practical, real-time models for the effective segregation of plastic waste. Such research can bridge the gap between theoretical advancements and practical applications, enabling more efficient and accurate plastic waste segregation processes in waste management systems.

Problem background
In the field of waste management, despite advancements in technology and automation, several countries still heavily rely on manual labour for waste segregation in municipal solid waste (MSW) management. Human labour has inherent limitations when it comes to efficiently and accurately segregating waste. In the process of manual waste segregation, there are several health issues that can arise when humans are involved. Sorting waste often exposes workers to various hazardous materials, contaminants, and potentially harmful substances. These can include sharp objects, broken glass, toxic chemicals, biohazardous materials, and even infectious waste. Workers may also face physical strain and injuries due to repetitive tasks, heavy lifting, and improper handling of waste. Additionally, the lack of proper protective equipment and safety measures can further exacerbate the health risks associated with waste sorting activities [23, 24]. Although equipped with safety measures, workers' efforts are often hindered by the continuous arrival of new waste loads before they can complete the segregation of existing waste. This often results in improper segregation and the disposal of mixed waste into landfills.
The well-being and safety of workers should be a top priority in waste management practices, highlighting the need for automated systems that can minimize human exposure to health hazards and create a safer working environment. Therefore, there is a critical need for advanced and automated waste segregation systems that can overcome these limitations and ensure effective waste management in a timely and sustainable manner.

Proposed system
The proposed research work focuses on the development of a vision-guided robotic arm system for the automated segregation of plastic waste for recycling. The study involves the collection and annotation of a comprehensive dataset comprising plastic waste samples and corresponding images of the arm gripper; the architecture of the proposed system is illustrated in figure 1(a). Subsequently, a TensorFlow pre-trained object detection model is trained using the collected dataset, employing 8000 training steps to optimize its performance. Once trained, a frozen inference graph is obtained and deployed on a local machine using a Jupyter Notebook environment. This allows for real-time execution of the object detection model, enabling the accurate localization and identification of plastic waste objects as well as the gripper position. To facilitate real-time waste segregation, a seamless integration is established between the object detection model and an Arduino microcontroller. This integration enables precise control and navigation of the robotic arm based on the detected waste and gripper positions. The components of the plastic waste segregation system are shown in figure 1(b). Leveraging this integration, the proposed system effectively segregates the plastic waste using the robot arm, demonstrating the practical implementation and efficacy of the developed approach.
In summary, this research work presents a comprehensive solution for automated plastic waste segregation, leveraging advanced techniques such as object detection, machine learning, and real-time control. The system's technical implementation showcases the potential for efficient and accurate waste segregation in an automated robotic environment. Moreover, the disposal of mixed waste into landfills not only hampers the recycling and reutilization of valuable resources but also contributes to environmental pollution and the depletion of landfill capacity. MobileNet is a widely used pre-trained model for object detection tasks due to its high performance and low computational cost. MobileNetV2 extends the original MobileNet architecture, which relies on depthwise separable convolutions to reduce the number of parameters and the computational cost. The architecture contains several convolutional layers in series, each followed by batch normalization and the ReLU6 (Rectified Linear Unit) activation function. The convolutional layers use a combination of regular convolutions and depthwise separable convolutions. The key innovation in MobileNetV2 is the use of a linear bottleneck structure, which reduces the number of parameters and computational costs while maintaining accuracy. The bottleneck consists of a 1×1 expansion convolution, followed by a depthwise separable convolution, and another 1×1 convolution that acts as the projection layer. The expansion layer increases the number of channels of the input tensor, while the projection layer reduces it back to the original number of channels.
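To make the parameter savings of depthwise separable convolutions concrete, the following sketch compares the parameter count of a standard convolution with that of a depthwise separable one. The layer sizes used here (a 3 × 3 kernel with 32 input and 64 output channels) are illustrative choices, not values taken from the paper.

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3 x 3 kernel, 32 -> 64 channels
standard = standard_conv_params(3, 32, 64)    # 18432
separable = separable_conv_params(3, 32, 64)  # 2336
print(standard, separable)
```

For this layer the separable form uses roughly 8× fewer parameters, which is the kind of saving that makes MobileNet-family models attractive for low-cost, real-time deployments such as this one.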

Model training
The training process was conducted on the Google Colab GPU platform, utilizing the pre-trained SSD MobileNet 640 × 640 model. The model was trained for 8000 steps to ensure robust learning and convergence. The recorded losses during training provided valuable insights into the model's performance. The classification loss, measuring the accuracy of object classification, was observed to be 0.065064602, indicating the model's ability to correctly identify plastic waste objects and the gripper head.
Furthermore, the localization loss, quantifying the precision of object localization, achieved a value of 0.018953117. This suggests that the model successfully learned to accurately localize the target objects within their respective bounding boxes. Additionally, the regularization loss, which promotes generalization and prevents overfitting, was found to be 0.105664156, indicating effective regularization techniques employed during training. Overall, the total loss, encompassing all components, was measured at 0.189681873, indicating the convergence and proficiency of the trained model. The graph generated for the loss of the object detection model is illustrated in figure 2(b). These loss values validate the effectiveness of the training process, ensuring the model's ability to detect and classify plastic waste objects and the gripper head accurately.

Algorithm
After training the object detection model on the Google Colab GPU platform and obtaining the trained inference graph, it is downloaded and loaded onto the local machine for execution using Jupyter Notebook. The loaded inference graph is then utilized in conjunction with the algorithm to accurately segregate the waste based on the detected objects. This integration of the inference graph with the algorithm ensures efficient and real-time waste segregation capabilities. The proposed algorithm facilitates the automated segregation of plastic waste using a vision-guided robotic arm. By leveraging object detection techniques and intelligent decision-making, it enables precise identification and handling of plastic waste objects. The algorithm begins by capturing an image with an overhead camera and performing object detection to identify plastic waste objects and the gripper head.

Perform object detection
To identify and locate plastic waste objects, the algorithm applies a trained object detection model to the input images or video frames. This process involves analysing the visual data and identifying regions of interest that potentially contain plastic waste. The algorithm then generates bounding boxes around the detected objects and assigns corresponding class IDs to represent the type of waste for each object, as illustrated in figure 3(a).
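The detection step above can be sketched in Python. The dictionary below mimics the output format of the TensorFlow Object Detection API (normalized `[y1, x1, y2, x2]` boxes, integer class IDs, and confidence scores); the particular boxes, class IDs, and the 0.5 score threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical detector output in the TensorFlow Object Detection API style:
# normalized [y1, x1, y2, x2] boxes, integer class IDs, confidence scores.
detections = {
    "detection_boxes":   [[0.10, 0.20, 0.30, 0.40], [0.50, 0.55, 0.70, 0.80]],
    "detection_classes": [1, 7],          # assumed label map: 1 = PET, 7 = gripper
    "detection_scores":  [0.93, 0.90],
}

def keep_confident(dets, min_score=0.5):
    """Discard detections below the confidence threshold."""
    return [(box, cls, score)
            for box, cls, score in zip(dets["detection_boxes"],
                                       dets["detection_classes"],
                                       dets["detection_scores"])
            if score >= min_score]

print(len(keep_confident(detections)))  # both detections survive at 0.5
```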

Listing the classes
After performing object detection, the algorithm iterates through the class ID list of the detection matrix generated by the object detection model. Each detected object is associated with a class ID, which was assigned during the training phase of the model. By iterating through the class ID list, the algorithm creates a new list that contains only the class IDs corresponding to the plastic waste objects. This step ensures that the algorithm focuses solely on the classes relevant to waste segregation, disregarding other classes such as the gripper class or unrelated objects present in the scene. The filtered class ID list enables the algorithm to further analyse and segregate plastic waste objects effectively.
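A minimal sketch of this filtering step follows. The label map (class IDs 1–6 for the six resin categories, 7 for the gripper) is an assumption for illustration; the actual IDs depend on how the dataset was annotated.

```python
# Assumed label map for the six resin categories plus the gripper class
WASTE_CLASSES = {1: "PET", 2: "HDPE", 3: "PVC", 4: "LDPE", 5: "PP", 6: "PS"}
GRIPPER_CLASS = 7

def filter_waste_ids(detected_class_ids):
    """Keep only class IDs that correspond to plastic waste,
    dropping the gripper class and anything unrecognized."""
    return [cid for cid in detected_class_ids if cid in WASTE_CLASSES]

print(filter_waste_ids([2, 7, 6]))  # the gripper (7) is dropped
```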

Calculate object centers
For each detected plastic waste object, the algorithm calculates its center coordinates. By utilizing the bounding box coordinates obtained from the object detection step, the algorithm determines the center point of each object.
The x-coordinate of the center is calculated as follows:

center x = ((x1 + x2) / 2) × image width (1)

The y-coordinate of the center is calculated as follows:

center y = ((y1 + y2) / 2) × image height (2)

In equations (1) and (2), x1 and x2 represent the bounding box's x-coordinates, while y1 and y2 represent the bounding box's y-coordinates of the detected waste object, as shown in figure 3(b). The image width and image height are the dimensions of the input image or frame. By averaging the x-coordinates and y-coordinates of the bounding box, the algorithm obtains the center coordinates of the waste object, which are scaled based on the image dimensions. These center coordinates are crucial for determining the position and distance of the waste object relative to the gripper for subsequent segregation operations.
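The center calculation can be sketched as follows, assuming the detector returns boxes in the TensorFlow Object Detection API's normalized `[y1, x1, y2, x2]` order; the example box and image size are illustrative.

```python
def object_center(box, image_width, image_height):
    """Pixel-space center of a normalized [y1, x1, y2, x2] bounding box:
    average the two x (and y) extents, then scale by the image dimensions."""
    y1, x1, y2, x2 = box
    center_x = (x1 + x2) / 2 * image_width
    center_y = (y1 + y2) / 2 * image_height
    return center_x, center_y

print(object_center([0.0, 0.25, 0.5, 0.75], 640, 480))  # (320.0, 120.0)
```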

Find the nearest object
The algorithm calculates the distance between the center of the gripper and the center of each plastic waste object by applying the Euclidean distance formula:

distance = √((center x − gripper center x)² + (center y − gripper center y)²) (3)

In equation (3), center x and center y represent the x and y coordinates of the waste object's center. Similarly, gripper center x and gripper center y represent the x and y coordinates of the gripper's center. By subtracting the gripper's center coordinates from the waste object's center coordinates and applying the Euclidean distance formula, the algorithm calculates the distance between them. This distance measurement is essential for determining the nearest waste object to the gripper and making appropriate segregation decisions. The algorithm determines the proximity of each object to the gripper; the object with the shortest distance is identified as the nearest plastic waste object using the 'argmin' function.
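The distance-and-argmin step can be sketched as below; the center and gripper coordinates are made-up pixel values for illustration.

```python
import math

def nearest_object(waste_centers, gripper_center):
    """Index of the waste center closest to the gripper: apply the
    Euclidean distance to every detected object, then take the argmin."""
    distances = [math.hypot(cx - gripper_center[0], cy - gripper_center[1])
                 for cx, cy in waste_centers]
    return min(range(len(distances)), key=distances.__getitem__)

# Two waste items; the gripper sits at (320, 240)
idx = nearest_object([(100, 200), (400, 250)], (320, 240))
print(idx)  # the second object (index 1) is nearer
```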

Determine waste position
The algorithm determines the position of the nearest plastic waste object relative to the gripper by comparing the x-coordinate of the object's center with the x-coordinate of the gripper's center. Here, center x [nearest object] represents the x-coordinate of the nearest object's center, and gripper center x represents the x-coordinate of the gripper's center. By comparing these values, the algorithm can determine the relative position of the waste object with respect to the gripper, helping guide subsequent actions for waste segregation and robotic arm control.
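This left/right comparison reduces to a single coordinate check, sketched here with illustrative pixel values:

```python
def waste_position(nearest_center_x, gripper_center_x):
    """Relative position of the nearest waste object with respect to the
    gripper, by comparing x-coordinates as described above."""
    if nearest_center_x < gripper_center_x:
        return "left"
    if nearest_center_x > gripper_center_x:
        return "right"
    return "aligned"

print(waste_position(150.0, 320.0))  # "left"
```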

Perform segregation
Based on the waste object's class and position (left or right), the algorithm instructs the robotic arm to execute the appropriate actions for waste segregation. The arm moves in the direction of the nearest waste. If the difference between the nearest object's center and the gripper's center is less than 80 pixels, the waste object is considered aligned with the gripper. The gripper then performs the pick-and-place operation represented in figure 3(c); the operation is carried out by the program on the Arduino controller.
In the Arduino controller, five positions of the servo were programmed: forward, reverse, pick, drop, and reset. In summary, this system architecture involves several key components working in tandem to achieve real-time waste segregation. First, the overhead camera captures images of the waste stream, which are then processed by the deep learning model, specifically the TensorFlow pre-trained object detection model using MobileNet V2. This model is capable of identifying various categories of plastic waste, including PET, HDPE, PVC, LDPE, PP, and PS, with high accuracy. The position of each waste item, along with the position of the arm gripper, is provided by the object detection model. Based on the position information provided by the model, the algorithm calculates the distance between the detected waste categories and the arm gripper. The arm then starts to segregate the nearest waste from its current position. This process is repeated until all waste is segregated.
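The alignment-and-pick decision can be sketched as follows. The 80-pixel threshold and the command names (forward, reverse, pick) come from the text; which travel direction corresponds to a positive x-offset is an assumption of this sketch, and in practice the chosen command would be written to the Arduino over a serial link.

```python
ALIGN_THRESHOLD_PX = 80  # from the text: within 80 pixels counts as aligned

def next_arm_command(nearest_center_x, gripper_center_x):
    """Choose the next command from the Arduino's programmed positions
    ('forward', 'reverse', 'pick'). Mapping a positive offset to 'forward'
    is an assumption for illustration."""
    offset = nearest_center_x - gripper_center_x
    if abs(offset) < ALIGN_THRESHOLD_PX:
        return "pick"   # aligned: run the pick-and-place routine
    return "forward" if offset > 0 else "reverse"

print(next_arm_command(370.0, 320.0))  # offset 50 px -> "pick"
```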

Object detection
The model's performance in object detection is evaluated using Average Precision (AP) and Average Recall (AR). AP measures the precision of the model's detections across various IoU thresholds, while AR captures its recall at different IoU thresholds. The model achieves an AP of 0.748 when considering IoU thresholds from 0.50 to 0.95, indicating reasonably good overall performance. At a lower IoU threshold of 0.50, the model achieves a higher AP of 0.981, suggesting higher precision. At a higher IoU threshold of 0.75, the model maintains a good balance between precision and recall with an AP of 0.917. The model achieves an AR of 0.801 for all object sizes, indicating that it can recall approximately 80.1% of the objects present in the dataset. The prediction of the model is shown in figure 4, in which the model is able to locate the waste and gripper with a confidence level of 0.9.
The model showcases strong capabilities in real-time waste object classification and localization, providing precise bounding box information for effective segregation. However, it is important to note that increasing the size and diversity of the dataset can potentially improve the model's performance over time. By adding more data to the training set, the model will have a larger and more representative sample to learn from, allowing it to generalize better to unseen examples. With a larger dataset, the model can capture a wider range of object variations, backgrounds, and scenarios, leading to improved performance in object detection and classification. Therefore, continuous data augmentation and expansion of the dataset can be a valuable approach to enhance the model's performance over time.

Robot arm positioning and control using arduino for segregation
To provide a better representation of the process happening in real time during the waste segregation task, figure 5 visually depicts the scenario and enhances the understanding of the process. Figure 5(a) shows the gripper center point marked as a reference point. The HDPE waste object is positioned to the left of the gripper center point, at a distance of 257.58 pixels, while the PS waste object is located to the right of the gripper center point, at a distance of 197.53 pixels. This visual representation allows for a clear understanding of the spatial relationship between the waste objects and the gripper center. The center points in the image were calculated using equations (1) and (2), and the nearest object was identified using the distances computed with equation (3).
In figure 5(b), the gripper center point is again marked as a reference point. The HDPE waste object is observed on the left side of the gripper center point, at a distance of 376.63 pixels, while the PS waste object is situated on the right side, at a distance of 90.25 pixels. The visual representation provides a real-time view of the waste objects' positions relative to the gripper center, aiding in comprehending the segregation process. In figure 5(c), the gripper center point is again taken as the reference, showcasing the HDPE waste object positioned to the left of the gripper center, at a distance of 397.07 pixels. On the other hand, the PS waste object is located to the right of the gripper center point, at a distance of 62.89 pixels. This visual representation captures the dynamic nature of the waste segregation task, enabling a comprehensive understanding of the real-time positioning of the waste objects. After each processing cycle, the data is sent to the Arduino microcontroller to position the arm; MG996 servos are used in the robot arm to perform this action.
After capturing an image of the waste objects using a camera module, the algorithm utilizes an object detection model to analyze the image and identify the different types of plastic waste based on recycling categories. Based on the detection results, the algorithm calculates the precise position of each waste object in the image, determining its x and y coordinates. The algorithm then initiates the necessary movements to position the robotic arm in alignment with the waste object. By sending control codes to an Arduino controller, which controls the arm servos, the algorithm gradually adjusts the servo angles, typically moving them by 10 degrees per cycle. This iterative process allows the arm to move closer to the target position in both the x and y coordinates. Upon achieving proper alignment with the waste object, the algorithm triggers the gripping mechanism to securely capture the waste within the gripper. A short delay is introduced to ensure a firm grip, after which the arm servo is reset to the neutral position, preparing for the next cycle. This process continues as the algorithm receives subsequent commands, such as 'forward' or 'reverse,' prompting additional cycles of image capture, object detection, and arm positioning. With each cycle, the arm servo angles are adjusted incrementally to achieve the desired position relative to the waste object. The continuous execution of this algorithm allows for real-time segregation of plastic waste, ensuring accurate identification, precise arm positioning, and efficient waste capture.
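The incremental servo adjustment described above can be sketched as a simple control step; the start and target angles are illustrative, and only the 10-degrees-per-cycle increment comes from the text.

```python
def step_toward(current_angle, target_angle, step=10):
    """Advance a servo angle by at most `step` degrees per control cycle,
    mirroring the 10-degrees-per-cycle adjustment described above."""
    if abs(target_angle - current_angle) <= step:
        return target_angle
    return current_angle + step if target_angle > current_angle else current_angle - step

# Converge from an illustrative 90 deg toward a 145 deg target
angle, cycles = 90, 0
while angle != 145:
    angle = step_toward(angle, 145)
    cycles += 1
print(angle, cycles)  # 145 6
```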
The waste segregation algorithm comprises several computational stages, each contributing to the system's efficiency. Initial object detection involves analyzing input images with a deep learning model, varying in computational complexity based on model architecture and data size. Filtering out irrelevant classes incurs a workload proportional to the number of detected objects. Computing object centers and determining the nearest waste item add linear computational overhead, while assessing relative positions requires minimal computational effort. Finally, issuing segregation commands to the robotic arm imposes a relatively low computational strain. Overall, the algorithm maintains efficiency through a balanced distribution of computational resources across its processing stages. Table 1 presents a comparative analysis between the existing systems and the proposed system, highlighting the deficiencies of existing systems alongside the corresponding improvements offered by the proposed system. This comparison elucidates the advancements introduced by the proposed system in addressing the limitations observed in existing systems.
Table 1. Comparison between the proposed model and existing models.

Conclusion
This research focused on the development of an intelligent waste segregation system using a robotic arm equipped with computer vision capabilities. The proposed system utilized object detection algorithms to identify and locate different types of plastic waste objects. By calculating the distance between the gripper and each waste object, the nearest object was determined, enabling efficient waste segregation. The algorithm successfully categorized the detected waste objects based on their class IDs and positioned them relative to the gripper using the calculated center coordinates. This information allowed the system to instruct the robotic arm to pick and segregate the waste objects accordingly. Experimental results demonstrated the effectiveness of the proposed system in real-time waste segregation tasks. The algorithm accurately identified the nearest waste object and determined its position relative to the gripper, enabling precise waste sorting; the system is also capable of segregating a target plastic from a pile of different plastic types. The integration of computer vision and robotic arm technologies provided an automated and efficient solution for waste management. Overall, this research contributes to the advancement of waste segregation systems by combining computer vision and robotics. The developed algorithm and methodology offer potential applications in waste management facilities, recycling centers, and other environments where automated waste segregation is required. Implementing such intelligent systems will enhance waste processing efficiency, reduce human involvement, and contribute to sustainable waste management practices. This research underscores the importance of automated waste segregation and presents a viable solution that combines computer vision and robotics. The findings demonstrate the potential of the proposed system to revolutionize plastic waste recycling processes, offering significant benefits in terms of resource recovery, environmental sustainability, and operational efficiency. Future work could focus on expanding the system's capabilities to handle a broader range of waste materials and optimizing the robotic arm's motion planning for increased efficiency and accuracy. The proposed architecture exhibits a high degree of adaptability, making it suitable for waste segregation tasks beyond plastic waste. By simply modifying the dataset used to train the object detection model, the system can transition to segregating other recyclable materials, such as metal or organic waste, into separate categories. This flexibility stems from the robustness of the deep learning-based approach, which learns from the visual features inherent to different waste materials. Additionally, the segregation algorithm employs decision-making processes that leverage real-time data from the object detection system to allocate resources efficiently and prioritize waste items based on their proximity and categorization. This dynamic algorithmic framework enables the system to adapt rapidly to changing waste composition and environmental conditions, ensuring optimal performance across diverse waste management scenarios. As a result, the system offers a versatile and intelligent solution for complex waste segregation challenges while promoting sustainability and resource conservation. In conclusion, while the research has made strides in developing a robust waste segregation system, practical challenges remain for its implementation, including integration within existing infrastructure and scalability to varying waste types and volumes. Addressing these challenges calls for optimization strategies and the involvement of the individuals, organizations, and groups who have an interest or influence in the waste management process, including waste management authorities, recycling facilities, local communities, policymakers, and environmental advocacy groups. Involving these stakeholders in the development and implementation of the waste segregation system allows their input, concerns, and perspectives to be considered, leading to a more comprehensive and effective solution.

Figure 1 .
Figure 1. (a) Architecture of the proposed system. (b) Block diagram of the plastic waste segregation system.

A comprehensive dataset was assembled to train the object detection model for plastic waste segregation. The collected dataset consisted of 1800 images, each captured at a resolution of 640 × 480 pixels, specifically tailored for the MobileNet v2 [25] architecture. This dataset encompasses a wide spectrum of plastic materials commonly encountered in recycling facilities. The plastic waste materials were painstakingly categorized into several distinct classes: PET (Polyethylene Terephthalate), HDPE (High-Density Polyethylene), PVC (Polyvinyl Chloride), LDPE (Low-Density Polyethylene), PP (Polypropylene), PS (Polystyrene), and a gripper-head class. The gripper-head class was added to the dataset to locate the positions of the gripper head and the waste for segregation. To enable precise object localization, extensive annotation was performed using the LabelImg tool, resulting in accurate bounding-box annotations for the plastic waste objects. The class frequencies of the dataset used for training the object detection model are shown in figure 2(a). Data augmentation techniques were used to apply random transformations and to adjust brightness and contrast, simulating the varying lighting conditions commonly encountered in real-world environments.
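The augmentation step can be illustrated with a small sketch. The specific transforms and parameter ranges used in the paper are not stated, so the values below (brightness shift, contrast factor, horizontal flip) are purely illustrative.

```python
import random
import numpy as np

def augment(image, rng=random):
    """Simulate varying lighting with random brightness/contrast adjustment.

    `image` is an H x W x 3 uint8 array; factor ranges are illustrative only.
    """
    img = image.astype(np.float32)
    brightness = rng.uniform(-30, 30)    # additive brightness shift
    contrast = rng.uniform(0.8, 1.2)     # multiplicative contrast factor
    img = (img - 127.5) * contrast + 127.5 + brightness
    if rng.random() < 0.5:               # random horizontal flip
        img = img[:, ::-1, :]
    return np.clip(img, 0, 255).astype(np.uint8)
```

Applying such transforms at training time exposes the detector to lighting variation it will encounter under the overhead camera, without requiring additional labeled images.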

Figure 2 .
Figure 2. (a) Class frequencies of the dataset used for training. (b) Loss of the object detection model.

Figure 3 .
Figure 3. (a) Detection from the object detection model. (b) Calculating the center of each class. (c) Robot arm picking after the waste is aligned with the gripper.

Figure 4 .
Figure 4. Prediction result from the object detection model.
The following expressions describe this comparison: if center_x[nearest object] < gripper center_x, the waste object is on the left side; if center_x[nearest object] > gripper center_x, the waste object is on the right side; if center_y[nearest object] < gripper center_y, the waste object is positioned below; if center_y[nearest object] > gripper center_y, the waste object is positioned above.
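In code, this comparison reduces to two coordinate checks. The sketch below follows the mapping stated in the text (smaller y reported as "below"); the function name and tuple conventions are assumptions for illustration.

```python
def relative_position(obj_center, gripper_center):
    """Describe where the nearest waste object lies relative to the gripper,
    following the comparison rules given in the text."""
    ox, oy = obj_center
    gx, gy = gripper_center
    horizontal = "left" if ox < gx else "right" if ox > gx else "aligned"
    vertical = "below" if oy < gy else "above" if oy > gy else "aligned"
    return horizontal, vertical
```

The resulting pair of directions tells the controller which way to move the arm on each axis until both read "aligned", at which point the gripper can pick the item.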

Table 1 .
Comparison between the proposed work with the existing work.