Intelligent Vehicle Systematic Design Based on Arduino and Raspberry Pi

The intelligent vehicle designed in this paper realizes functions such as safety detection, visual identification, remote control, and manipulator grasping. An Arduino MEGA serves as the main control board, sending signals to drive the vehicle, while a Wi-Fi module receives commands for remote control. Ultrasonic and infrared modules detect objects around the vehicle. To realize complex route movement, a Raspberry Pi performs visual recognition and path planning and sends its data to the Arduino for judgment in real time. Finally, experiments on a sample vehicle verify that the design effectively improves path-planning ability and the obstacle-avoidance function.


INTRODUCTION
With the rapid development of intelligent technology, smart homes have become ubiquitous in people's lives [1]. The emergence of high-tech products such as intelligent speakers, intelligent cleaners, robotic dogs, intelligent sorting machines, and drones prioritizes serving people and integrates communication, network, computer, automatic control, and other technologies. As a mobile part of intelligent products, research on smart cars in the home is also becoming increasingly in-depth [2].
Therefore, this article designs an intelligent car based on Arduino and Raspberry Pi control, which can carry various platforms to meet the needs of intelligent services. At a fundamental level it includes sensing, control, and visual components, which allow it to complete work tasks intelligently and connect wirelessly to mobile devices. The intelligent car achieves different functions through different system programs and modules and can meet the needs of general household use.

OVERALL SCHEME DESIGN
We design and research an intelligent car with personalized and expandable functions, composed mainly of Arduino and Raspberry Pi as the main control units together with different functional modules, forming a multifunctional intelligent car. The measurement, control, and management modules that users can assemble mainly consist of a Bluetooth module, Wi-Fi module, infrared receiving module, ultrasonic module, temperature and humidity module, smoke alarm module, cleaning module, robotic arm module, and voice recognition module. These modules form the control chain of the intelligent car. The overall design scheme of the intelligent car hardware system is shown in Figure 1.

Drive Control Design
The Arduino MEGA main control board controls four motors by outputting data to the motor driver board [3][4]. Driving the motors with PWM makes it possible to control both their speed and their direction of rotation [5]. When the enable pin EN is at a high level and the direction pin IN1 is at a high level, the motor rotates forward; when the IN2 pin is high instead, the motor reverses. The left and right motors are controlled in the same way, and the specific motion logic is shown in Table 1.
Table 1. Intelligent Car Motion Logic (rows include: turn right in place about the right motor; turn in place about the left motor)

The code transmitted through infrared obstacle avoidance controls the high and low levels of the direction pins to set the motors' rotation direction and speed, enabling the intelligent car to move freely [6]. Figure 2 shows partial code for the motor drive of the intelligent car.
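The EN/IN1/IN2 drive logic described above can be sketched as a small truth table. The following Python function is an illustrative model of that logic (the actual firmware is Arduino code, and the state names here are assumptions for illustration):

```python
def motor_state(en: bool, in1: bool, in2: bool) -> str:
    """Model of one H-bridge channel: EN gates the outputs,
    IN1/IN2 select the rotation direction."""
    if not en:
        return "stopped"      # driver disabled: motor coasts
    if in1 and not in2:
        return "forward"      # IN1 high, IN2 low -> forward rotation
    if in2 and not in1:
        return "reverse"      # IN2 high, IN1 low -> reverse rotation
    return "braked"           # both high or both low: no rotation

def turn_right_in_place():
    """Turning in place: drive the two sides in opposite directions."""
    left = motor_state(True, True, False)   # left side forward
    right = motor_state(True, False, True)  # right side reverse
    return left, right
```

On the real board, speed control is added by applying a PWM duty cycle to EN (for example with Arduino's analogWrite) instead of holding it at a constant high level.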

Visual Recognition Module
The visual recognition module mainly uses the Raspberry Pi for image recognition and transmits the recognition data to the Arduino [7][8]. To achieve visual distance measurement, the image in front of the actual coordinate point must be obtained quickly. To speed up image processing, data-processing schemes based on SVM (support vector machine) or VFW (Video for Windows) are not used; instead, OpenCV library functions are used for the comparison processing. Photos taken with the front-facing camera are preprocessed and their image features extracted. Completing the information recognition or data acquisition of a digital image takes five steps: image information acquisition, image processing, obtaining the image's digital features, classifier training, and automatic recognition. To achieve accurate automatic recognition, there are generally two solutions in the program design: one uses the function cvGrabFrame to quickly capture features consistent with database images from the external environment, and the other uses the function cvRetrieveFrame to match the images captured by the camera against the database [9]. The second solution compares the processed camera images with the data in the Raspberry Pi's memory, which yields a much higher matching rate than using the original image. Therefore, the second solution is adopted to achieve the intelligent car's visual recognition, as well as subsequent functions such as target locking and grasping.
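The capture-then-match scheme above can be sketched numerically. The following is a minimal illustration of template matching by normalized cross-correlation, the kind of comparison OpenCV performs internally; the array sizes and the synthetic template are assumptions, not the paper's actual data:

```python
import numpy as np

def match_template_ncc(image: np.ndarray, template: np.ndarray):
    """Slide `template` over `image` and return the top-left corner
    and score of the best match under normalized cross-correlation."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()          # mean-center the template
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r+th, c:c+tw]
            p = patch - patch.mean()        # mean-center the patch
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            if denom == 0:
                continue                    # flat patch: no correlation
            score = (p * t).sum() / denom   # in [-1, 1]; 1 = exact match
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic check: embed the template at a known location.
image = np.zeros((20, 20))
template = np.arange(16, dtype=float).reshape(4, 4)
image[5:9, 7:11] = template
```

A real deployment would call OpenCV's optimized matcher rather than this explicit double loop, but the score being maximized is the same.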

IMAGE PROCESSING ALGORITHMS
The CP-ACGAN classification algorithm is used for image recognition in the image-processing stage. The traditional CGAN (conditional generative adversarial network) trains its adversarial network without supervision, whereas CP-ACGAN applies the generative adversarial network to supervised learning so that database features correspond to the generated image features [10]. The algorithm is characterized by fast recognition speed and high accuracy. Its structure is shown in Figure 3.

From the algorithm structure (Figure 3), it can be seen that when a sample is output at the interfaces of discriminator D and generator G, there is no discriminant term for true or false feature data. To ensure effective matching of the sample in discriminator D, feature-point matching is therefore reintroduced, using the features of the tested object. To effectively reduce the loss values of generator G and discriminator D, each real sample read in is treated as supervised data with image features, each generated result is treated as virtual data with image features, and a Softmax classifier is connected to the judgment network layer. The loss function of the supervised data is then

    L_supervised = -E_{x~p(x)} ⟨y, log y'⟩ ≈ -(1/N) Σ_{i=1}^{N} ⟨y_i, log y'_i⟩

where N is the number of training samples, E is the predicted expected value, x is an input sample, p(x) is the distribution fitted to the real samples, y is the sample's image feature, y' is the predicted image feature of the sample, and ⟨•,•⟩ is the inner product.

For the generated data there are two sources of error: first, the K+1-th (fake) class assigned by discriminator D carries a probability loss; second, there is a difference loss between the input image feature y of generator G and the output image feature y'_fake of discriminator D.

Let L_unsupervised be the expected probability loss of the fake-sample class, with y'_{K+1} = 0 for real samples. It can be obtained that

    L_unsupervised = -E_{x~G} log y'_{K+1}.

To ensure that the input image features of each generated sample are consistent with the real sample, the difference loss between the generated sample features y'_fake and the input feature y is CE(y, y'_fake), so the fake-sample loss is

    L_fake = L_unsupervised + CE(y, y'_fake).

During recognition, the generator and discriminator parameters must be updated, so their errors are constructed separately. The loss function of discriminator D is

    L_D = L_supervised + L_unsupervised,

and the loss function of generator G is

    L_G = CE(y, y'_fake) + L_FM,  where  L_FM = ‖E_{x~p(x)} f(x) − E_{x~G} f(x)‖₂²,

representing the feature-point matching loss (squared two-norm) over discriminator features f(•).
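The supervised-data loss described above is an averaged cross-entropy between one-hot image features y and the Softmax outputs y'. A minimal NumPy sanity check (the batch size, class count, and logit values are illustrative assumptions):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise Softmax, as in the discriminator's judgment layer."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def supervised_loss(y: np.ndarray, y_pred: np.ndarray) -> float:
    """-(1/N) * sum_i <y_i, log y'_i>: cross-entropy averaged over N."""
    return float(-(y * np.log(y_pred)).sum(axis=1).mean())

# Illustrative batch: N = 2 samples, K = 3 real classes.
logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0]])
y = np.array([[1.0, 0.0, 0.0],        # one-hot image features
              [0.0, 1.0, 0.0]])
y_pred = softmax(logits)
loss = supervised_loss(y, y_pred)     # small, since predictions are confident
```

Confident correct predictions give a loss near zero; the training objective pushes y' toward y exactly as the inner-product form requires.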

EXPERIMENTAL RESULTS AND ANALYSIS
To verify the effectiveness and reliability of the CP-ACGAN algorithm, we design 10000 training samples and 2000 test samples, using two-dimensional black-and-white images of 28 × 28 pixels. As Table 2 shows, compared with the other three algorithms, the accuracy error of the CP-ACGAN algorithm is relatively small. The variance of the CP-ACGAN algorithm is only 1923e-09, much smaller than that of the other three algorithms, so it has good accuracy. Meanwhile, the accuracy of the CP-ACGAN algorithm is 99.5604%, also higher than the other three algorithms. The CP-ACGAN algorithm likewise has high operational stability, making visual recognition more stable and accurate.
Table 2 also indicates that the CP-ACGAN algorithm achieves good image quality during image processing. The processed image has better clarity and boundary effect than that of the ACGAN algorithm because the algorithm improves the convolution method. Therefore, the CP-ACGAN algorithm has a higher recognition rate, better stability, and faster recognition speed. Applying it to the visual recognition of smart cars enables the cars' mechanical grippers to recognize their targets accurately.

CONCLUSION
This design combines various modules to achieve the intelligent recognition, obstacle-avoidance, grasping, and cleaning functions of the intelligent car. A visual recognition function based on the CP-ACGAN algorithm was designed for an intelligent car built on Arduino and Raspberry Pi. By performing feature-recognition matching, the input and output data structure of the feature discriminator can be changed to reduce the loss function and improve the recognition rate. Applied to intelligent vehicles, this recognition method offers good classification performance, high stability, and good scalability. The intelligent car designed in this article can meet the general needs of people's lives, improve home intelligence, and provide diverse operations, expandable functions, and stable performance.

Figure 1. Structure of the Robot Trolley System

Figure 2. Example of the motor drive code

The experiment involves training on 50 samples, with one sample per training session. Both generator G and discriminator D are optimized by Adam with a learning rate of 0.0002. Four algorithms are compared in the visual recognition experiment: the average-pooling CNN (convolutional neural network) algorithm, the maximum-pooling CNN algorithm, the ACGAN algorithm, and the CP-ACGAN algorithm. Table 2 shows the mean and variance of accuracy for the four visual algorithms.

Table 2. Comparison of Mean and Variance of Four Algorithms
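Both networks are optimized with Adam at a learning rate of 0.0002. The update rule can be sketched in NumPy as follows; this is a generic single-parameter Adam step with the usual default moment coefficients, not the paper's actual training code:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.0002,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient
    and its square, bias correction, then the parameter step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative use: minimize f(theta) = theta^2 (gradient 2*theta).
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

Because Adam normalizes by the gradient's running magnitude, each step is roughly the learning rate in size, which is why a small rate such as 0.0002 gives the slow, stable updates GAN training needs.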