An adaptive particle swarm optimization with information interaction mechanism

This paper proposes an adaptive particle swarm optimization with an information interaction mechanism (APSOIIM) to enhance the optimization ability of the PSO algorithm. Firstly, a chaotic sequence strategy is employed at the initialization stage to generate uniformly distributed particles and improve convergence speed. Then, an information interaction mechanism is introduced to boost population diversity as the search unfolds: each particle effectively exchanges the optimal information of its neighboring particles, which enhances both exploration and exploitation. The proposed algorithm can therefore avoid premature convergence and perform a more accurate local search. Besides, the global convergence of the algorithm is proven to verify the robustness and efficiency of the proposed APSOIIM. Finally, APSOIIM was applied to the CEC2014 and CEC2017 benchmark functions as well as to well-known engineering optimization problems. The experimental results demonstrate that the proposed APSOIIM has significant advantages over the compared algorithms.


Introduction
Machine learning, a subset of artificial intelligence, is integral to extracting insights and making informed decisions through data-driven predictions and models [1]. The process often involves numerous uncertain parameters that are critical to the accuracy of these models [2]. Determining these parameters to optimize model performance poses substantial challenges for traditional optimization methods, and therefore many intelligent optimization algorithms have emerged [3], such as the firefly algorithm [4], differential evolution [5], the whale optimization algorithm [6], grey wolf optimization (GWO) [7], the sine-cosine algorithm (SCA) [8], the gravitational search algorithm (GSA) [9], and particle swarm optimization (PSO) [10]. Among these, PSO is a population-based intelligent optimization algorithm inspired by the foraging behavior of birds. Thanks to its easy implementation and rapid convergence rate, PSO has been widely applied.
However, PSO algorithms face notable challenges, particularly premature convergence and parameter sensitivity. Premature convergence means that the algorithm settles on a local optimum rather than the global optimum; this is especially pronounced in complex, high-dimensional search spaces, where the likelihood of entrapment in local minima rises significantly, compromising solution quality. Moreover, the tuning of PSO parameters, such as particle velocity limits, population size, and inertia weight, critically influences algorithm performance. These parameters must be meticulously calibrated to suit specific problems, a process that is both intricate and computationally expensive. The challenge lies in optimizing these parameters to enhance solution quality and computational efficiency without overfitting, thus maintaining the robustness and adaptability of PSO across various problems. Therefore, continued research on methods for improving the performance of PSO is of significant practical importance [11].
Numerous scholars have conducted extensive and in-depth research into PSO's propensity to fall into local optima, yielding noteworthy results. In PSO, the inertia weight is an essential control parameter, as it determines the search range of particles and consequently has a significant influence on performance. Clerc [12] put forward a PSO with compression factor (PSOCF), in which a shrinkage factor is introduced into the time-varying PSO; this allows the absolute convergence condition of particles to be confirmed and enhances the search efficiency of the PSO algorithm. Choudhary et al [13] propose a linearly decreasing inertia weight PSO (LDIWPSO), in which the robust formulation of a linear weight factor leads to efficient cluster formation; LDIWPSO is successfully applied to optimize the layout of a wireless sensor network and further enhance efficient network transmission. Özsoy [14] presents a PSO with a linear programming model (LPSO), which updates the position and velocity of each particle using the current velocity and the distance from pBest to gBest; to improve the ability to solve complex problems, a non-linear inertia weight strategy is introduced into PSO. Taherkhani and Safabakhsh [15] introduce a stability-based adaptive inertia weight strategy into PSO (APSOS), where the inertia weight of each particle in each dimension is determined by the distance between the current particle and the best particle. Experimental results indicate that the proposed strategy significantly improves the optimization ability of PSO in terms of solution quality and convergence speed. Nabi et al [16] propose an adaptive PSO (APSO) based task scheduling approach, which reduces task execution time and increases throughput; experimentation reveals up to 10% and 12% improvement in make-span and throughput, respectively. Li et al [17] propose
an improved PSO with adaptive inertia weight (AIWPSO), which adopts an inertia weight adjustment method based on the optimal fitness value of individual particles to update the inertia weights of different particles. At the same time, a mutation threshold is used to determine which particles need to be mutated, which compensates for the inaccuracy of random mutation and enhances the diversity of the population. The results show that AIWPSO can effectively maintain a balance between exploration and exploitation. Later, Yu et al [18] present a hybrid PSO with non-linear inertia weight and Gaussian mutation (NGPSO), in which a non-linear inertia weight strategy is introduced to improve the local search capability of PSO; the results show that NGPSO performs significantly better than the compared algorithms on 62 benchmark instances of the JSSP. Recently, Guan and Zhang [19] propose a novel PSO (NPSO), which uses a new inertia weight to calculate the fitness value of structural flexibility; the numerical results show that NPSO outperforms the comparison algorithms.
To further enhance the optimization ability of the algorithm, new techniques and learning strategies have been introduced into PSO. Zhang et al [20] present a novel prey-predator PSO (PP-PSO), in which three strategies, i.e. capture, escape, and reproduction, are used to delete or transform slothful particles to speed up convergence and computation. The experimental study shows that PP-PSO outperforms the other comparison algorithms. Later, Lin et al [21] introduce a global genetic learning PSO with diversity enhancement by ring topology (GGL-PSOD). In GGL-PSOD, a ring topology is adopted to improve the diversity of the algorithm, while a global learning component (GLC) strategy is employed to enhance the algorithm's adaptability. Comparison results on the CEC2017 functions show that GGL-PSOD outperforms seven representative PSO variants and five non-PSO meta-heuristics. Recently, Wang et al [22] present a novel PSO variant, named 'Levy flight orthogonal learning PSO' (LOPSO), in which the jumping ability of Levy flight strengthens exploration, while the orthogonal learning process enhances exploitation. Extensive experiments demonstrate that the comprehensive performance of LOPSO ranks first among 13 state-of-the-art PSO variants. Moazen et al [23] propose a PSO with elite learning (PSO-ELPM), in which an enhanced parameter updating strategy and an exponential mutation operator are jointly adopted to balance the exploration and exploitation capabilities of PSO. Results on the CEC2017 benchmark functions reveal that PSO-ELPM achieves better performance on six and seven functions in 30- and 50-dimensional problems, respectively. Xu et al [24] develop a strategy learning framework for PSO (SLFPSO), which constructs a strategy pool with strategies selected from existing PSO variants. Then, a training engine is used to evaluate the performance of strategy combinations on the
CEC2013/2014/2017 benchmark functions. The results verify that the optimization ability of SLFPSO is significantly better than that of the compared algorithms.
Besides, some researchers have focused on hybrid PSO algorithms. Jin and Lu [25] propose a multi-subgroup hierarchical hybrid algorithm based on the genetic algorithm and PSO (GA-PSO), in which the bottom layer is composed of a series of GA subgroups that contribute to the global search ability of the algorithm, while the upper layer comprises the best individuals of each subgroup evolved by PSO, which helps with accurate local search. The performance of GA-PSO is thereby noticeably enhanced. Bhandari et al [26] develop a new metaheuristic called hybrid HPSGWO, which effectively combines the exploitation ability of PSO with the exploration ability of GWO to search for optimal solutions. The experimental results show that HPSGWO obtains better results on numerical problems and the reliability-redundancy optimization problem. Charadi et al [27] introduce a novel hybrid imperialist competitive algorithm-PSO (ICA-PSO), combining ICA with PSO. ICA-PSO is applied to energy management in multi-source residential microgrids, and the experimental results demonstrate that it has significant advantages over the compared algorithms. A summary of existing PSO research is given in table 1.
The above research shows that premature convergence has been alleviated, but the results are not always satisfactory for complex functions or multi-modal problems. Therefore, this paper proposes an adaptive PSO with an information interaction mechanism (APSOIIM), in which the interaction mechanism is introduced to enhance the optimization performance of the algorithm. The performance of the algorithm is tested on the CEC2014 [28] and CEC2017 [29] benchmark functions to analyze the diversity and convergence of the population and explain why the proposed mechanism maintains the balance between exploration and exploitation. Well-known engineering optimization problems are also solved to further demonstrate that APSOIIM has significant advantages over the compared algorithms. Besides, a proof of global convergence is presented to verify the robustness and efficiency of the proposed algorithm. The main contributions of this study are as follows: (1) This paper proposes an APSOIIM.
(2) An information interaction mechanism (IIM) is introduced to enhance the optimization performance of the algorithm. (3) The algorithm's performance is tested to analyze the diversity and convergence of the population and explain why the proposed IIM keeps a better balance between exploration and exploitation. (4) Well-known engineering optimization problems are solved to demonstrate that the proposed APSOIIM has significant advantages over the compared algorithms. (5) A proof of global convergence is presented to verify the robustness and efficiency of the proposed APSOIIM algorithm.
The rest of this paper is organized as follows. Section 2 describes the proposed APSOIIM algorithm, including initialization, the particle update after adding the information exchange strategy, and the global convergence proof. Section 3 presents extensive experimental validation on the CEC2014 and CEC2017 benchmark functions, demonstrating the accuracy and efficiency of the algorithm. Section 4 applies APSOIIM to well-known practical engineering problems. Finally, conclusions are drawn in section 5.

APSO with information interaction mechanism (APSOIIM)
This section details the novelty and feasibility of the proposed APSOIIM. The first part addresses the algorithm's initialization, the second part describes the update formulation of APSOIIM, and the final part gives the convergence proof of the proposed algorithm. The flowchart of the proposed APSOIIM is illustrated in figure 1.

Initialization
Different initialization methods can affect the performance of the algorithm. To improve the credibility of the algorithm, a chaotic initialization method is used in this paper. Firstly, the initial particles are generated randomly; then the chaotic operation of equation (2) is executed, so that the resulting swarm particles are evenly distributed within the solution space. The effect is shown in figure 2, where X_i is the position of the ith particle; X_{i+1} is the position of the (i+1)th particle calculated by the chaotic sequence; X_min and X_max stand for the minimum and maximum position restrictions, respectively; and r_0 denotes a random number between 0 and 1.
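Equation (2) is not reproduced in this excerpt; the logistic map is the most common generator for this kind of chaotic initialization, so the sketch below assumes it. The function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def chaotic_init(n_particles, dim, x_min, x_max, n_iter=50, seed=None):
    """Chaotic initialization sketch: start from random seeds r0 in (0, 1),
    iterate the logistic map r <- 4 r (1 - r), then scale the chaotic
    values into the search range [x_min, x_max]."""
    rng = np.random.default_rng(seed)
    # seeds drawn away from the map's fixed points 0 and 0.75
    r = rng.uniform(0.1, 0.9, size=(n_particles, dim))
    for _ in range(n_iter):
        r = 4.0 * r * (1.0 - r)  # fully chaotic regime of the logistic map
    return x_min + r * (x_max - x_min)
```

The warm-up iterations decorrelate the chaotic values from the random seeds before scaling them into the search space.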
Figure 2 shows that the frequency distribution of random values generated by chaotic initialization is better than that of random initialization.It also shows that the particles generated by chaotic operation are evenly distributed, which lays a solid foundation for subsequent optimization.

Update in APSOIIM

Information interaction mechanism (IIM)
As previously mentioned, the tendency of standard PSO to fall into local optima is its most prevalent issue. Two common strategies are used to address this problem: one increases disturbance and introduces variation, while the other enhances population diversity to keep particles from falling into local optima. Following these two strategies, this section proposes an IIM for the particle position update, which aims to enhance diversity within the population. The proposed mechanism, based on a global population model, assesses particle updates from various perspectives: it not only considers and tracks the individual and global optima but also involves the best values of the particle's immediate neighbors. The individual particle can therefore be guided toward the optimal solution from multiple directions; a schematic representation of the strategy is drawn in figure 3.
As figure 3 shows, under the information interaction mechanism each update iteration involves three particles instead of one, i.e. the current particle, the left adjacent particle, and the right adjacent particle. In this scenario, even if one of the three particles falls into a local optimum, the other two maintain the correct search direction through information sharing. Adjacent particles preserve better information and the effects of negative information can be avoided more effectively, thereby preventing the particle population from converging prematurely.

Particle renewal
To address sub-optimal accuracy and merely probabilistic convergence at the end of the search, the topology of the PSO algorithm is improved. The common topology in standard PSO is the global model. In the iterative evolution of the algorithm, this model considers only the individual optimal solution and the global optimal solution, and it easily falls into local optima. In contrast, the local model allows particles to exchange information only with neighboring particles and to be influenced only by the extreme values of those neighbors. While this approach effectively prevents the algorithm from falling into local optima, it significantly lengthens the iteration time and slows convergence. Based on this, this paper develops new iterative forms and optimizes the particle velocity update formula so that a particle interacts not only with its neighbors but also with the global optimum, as depicted in equations (3) and (4). With this approach, the particle update is influenced not only by pbest_{i,j}^t and gbest_{i,j}^t but also by the local optimal solutions of adjacent particles. This comprehensive consideration in the particles' iterative evolution maintains the iteration speed while enriching the diversity of the population. Moreover, the fitness value of each particle in the tth iteration is compared with the optimal fitness value of the previous generation to determine the update strategy of the following particles, as illustrated in equation (5), where r_1, r_2, r_3, and r_4 are random numbers between 0 and 1, and v_{i,j}^t and x_{i,j}^t represent the velocity and position components. pbest_{i,j}^t is the individual optimal solution of the ith particle in the jth dimension at the tth iteration; gbest_{i,j}^t denotes the corresponding global optimal solution. λ is the compression factor, and c_1, c_2, c_3, and c_4 are the acceleration factors. If the fitness value of the tth iteration is superior to that of the previous generation, more focus should be placed on local search and the parameters c_1, c_2, and c_3 are set to 0, i.e. c_1 = c_2 = c_3 = 0. Conversely, if no better fitness value is found, global search should be maintained, i.e. c_4 = 0. λ and c are updated using equations (6)-(8), derived from [30], where Iter refers to the current iteration number, Iter_max represents the maximum number of iterations, and c_iN and c_iM are the initial and final mean values, respectively, with c_iN = 0.25 and c_iM = 1.05.
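Equations (3)-(5) are not reproduced in this excerpt, so the sketch below is an assumption: a compression-factor velocity update whose social terms draw on the global best and the personal bests of the two ring neighbours, with the c_1 = c_2 = c_3 = 0 / c_4 = 0 switch described above. Which guide each coefficient multiplies is a guess, and all names are illustrative.

```python
import numpy as np

def apsoiim_velocity(v, x, pbest, gbest, nbest_l, nbest_r,
                     lam, c, improved, rng):
    """One velocity update under the information interaction mechanism.
    c = (c1, c2, c3, c4); nbest_l / nbest_r are the personal bests of
    the left and right neighbours on the ring (an assumed pairing)."""
    c1, c2, c3, c4 = c
    if improved:                 # fitness improved over last generation:
        c1 = c2 = c3 = 0.0       # keep only the local (pbest) term
    else:
        c4 = 0.0                 # keep only the global / neighbour terms
    r1, r2, r3, r4 = rng.uniform(size=(4,) + np.shape(x))
    return lam * (v
                  + c1 * r1 * (gbest - x)
                  + c2 * r2 * (nbest_l - x)
                  + c3 * r3 * (nbest_r - x)
                  + c4 * r4 * (pbest - x))
```

The switch mirrors equation (5): an improving particle refines around its personal best, while a stagnating particle is pulled by the global best and its neighbours.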
The pseudo-code of APSOIIM is presented in algorithm 1 (excerpt):

Algorithm 1. APSOIIM.
1: Initialize the population x randomly
2: Initialize the velocity v randomly
3: Apply the chaotic operation to population x
4: Calculate the fitness value
5: // main loop
6: while Iter < Iter_max do
7:   Save the current position x_{i,j}^t

Global convergence proof
The convergence of PSO has been verified in [31], yet most demonstrations rely on linearized, constant-coefficient analyses of the system. Such convergence analysis, focused on individual particles and neglecting the actual random components, arguably oversimplifies the issue and weakens the applicability of its conclusions. In contrast, the proof employed below shifts from analyzing individual particles to examining the convergence of the entire particle population, thereby offering a more accurate reflection of the state of the whole system.

Theorems
Firstly, PSO is identified as a stochastic search algorithm. Secondly, the findings of Solis and Wets [32] give conditions under which a stochastic optimization algorithm converges to the global optimal solution with probability 1. The main conclusions are as follows.
where f is the objective function and D is the function generating candidate solutions; z stands for a point found in S that minimizes the value of f, or at least generates an acceptable lower bound, and S is a subset of R^n representing the constraint space of the problem. ζ is a random vector generated from the probability space (R^n, B, µ_k), where µ_k is the probability measure on B and B is the σ-field of subsets of R^n.
Assumption 2. For an arbitrary Borel subset A of S with measure v(A) > 0, the probability of the algorithm repeatedly missing A vanishes, i.e. ∏_{k=0}^{∞} (1 − µ_k(A)) = 0, where v(A) is the measure of A and µ_k(A) is the probability of generating a point in A under the measure µ_k.
Theorem. Suppose f is a measurable function, S is a measurable subset of R^n, and {z_k}_{k=0}^{∞} is the sequence of solutions generated by a stochastic algorithm. If both assumption 1 and assumption 2 are satisfied, then lim_{k→∞} P(z_k ∈ R_ε) = 1, where R_ε denotes the set of ε-optimal points. That is, any stochastic search algorithm satisfying assumptions 1 and 2 converges to the global optimal solution with probability 1.

Proof
The global convergence of the proposed algorithm is proved from the swarm perspective, using the theoretical analysis of random search, as follows.

Lemma 1. The proposed algorithm satisfies assumption 1.
Proof. The function D of the proposed algorithm can be described as in equation (12), which clearly shows that the proposed algorithm satisfies assumption 1.
Lemma 2. The proposed algorithm meets the requirements of assumption 2.
Proof. The APSOIIM algorithm, based on standard PSO, uses a time-varying acceleration factor to dynamically adjust the compression factor through the information interaction mechanism, leading to the generation of new particles and striking a balance between the global and local models. For the usually evolving particles, let the union of their support sets be α; for the new particles generated by the time-varying acceleration factor strategy, let the union of their support sets be β.
Due to the randomness of the time-varying acceleration factor strategy, there must exist an integer t_1 such that S ⊆ β when t > t_1. Thus, for the proposed algorithm, there must exist an integer t_2 that ensures S ⊆ α ∪ β when t > t_2. Hence, for any Borel subset A of S with v(A) > 0, the probability of sampling A is positive for all t > t_2, and the proposed algorithm satisfies assumption 2.

Conclusion. APSOIIM is a globally convergent algorithm.
Proof. Let {p_{g,t}}_{t=0}^{∞} be the sequence of solutions generated by the algorithm proposed in this paper. Given that APSOIIM satisfies both assumptions 1 and 2, it follows from the theorem above that the proposed algorithm converges to the global optimal solution with probability one and has excellent convergence capability.

Experimental results and discussions
In APSOIIM, the individual and global optimal solutions can be obtained more efficiently by introducing the information interaction mechanism, which significantly improves search capability and enhances the optimization accuracy of the solution. To verify the performance, the proposed APSOIIM and six other recent algorithms, i.e. PSO [10], LPSO [14], APSO [16], PSOCF [12], AIWPSO [17], and SLFPSO [24], were applied to the CEC2014 and CEC2017 benchmark functions; the characteristics of the selected functions are given in table 2. To ensure the accuracy of the comparison experiments, the test functions are fixed at 10 and 30 dimensions, respectively. The recommended parameter values were adopted for all algorithms and are listed in table 3. The population size and maximum generation number were set to 40 and 1000, respectively. All experiments are performed on a machine running Windows 11 with an Intel(R) Core(TM) i7-12700 CPU @ 2.10 GHz, 16 GB RAM, and MATLAB R2021a.

Search accuracy test
To compare the optimization ability of the algorithms under complex conditions, a search accuracy test was used, and each function was run 100 times independently. The search accuracy obtained by each algorithm is shown in tables 4 and 5, where the best values are in bold.
Table 4 reveals that the search accuracy of the proposed APSOIIM is slightly lower than that of other algorithms on F4 and F6, while APSOIIM surpasses the other comparative algorithms on all the remaining evaluation functions. It is worth mentioning that for benchmark functions with multiple local optima, such as F3, F8, and F9, APSOIIM still achieves excellent results, indicating that the algorithm, by effectively integrating the advantages of the information interaction mechanism and the compression factor strategy, strikes an effective balance between exploration and exploitation.
Table 5 shows the search accuracy of the proposed APSOIIM when the function dimension is expanded to 30. The analysis indicates that APSOIIM is competitive to a certain extent on unimodal functions, while it outperforms the others on multimodal and discrete functions. Multimodal functions, having multiple local optima and greater complexity, demand a more robust algorithm, which effectively proves that APSOIIM handles complex problems with high efficiency. Note: F1-F6 are unimodal functions, F7-F11 are multimodal functions, and F12 is a discrete function.
Compared with current mainstream algorithms, APSOIIM performs excellently on complex optimization problems and obtains better numerical results, further demonstrating the effectiveness of the proposed algorithm.

Comparison with the winners of CEC2013-CEC2018
In this section, APSOIIM is compared with other top-ranked algorithms of the CEC competition series, i.e. iCMAES-ILS (ranked second in IEEE CEC2013) [38], SHADE (first-ranked non-CMA-ES variant in IEEE CEC2013) [39], LSHADE (winner of the CEC2014 competition) [40], EBOwithCMAR (winner of IEEE CEC2017) [41], CJADE (an advanced DE) [42], HSES (winner of IEEE CEC2018) [43], and iLSHADE-RSP (an advanced DE) [44]. The population size and maximum number of iterations were again set to 40 and 1000, respectively, and the experimental results are recorded in table 7.
Table 7 shows that the proposed algorithm has some shortcomings compared with the most advanced algorithms on the unimodal functions, but it is competitive on the multimodal functions. Among all the multimodal functions considered, APSOIIM ranks high except on F7; its high-rank rate reaches 83%, which shows that APSOIIM is well suited to, and feasible and efficient for, the complex optimization problems found in real life. Besides, to verify the superior convergence efficiency of the proposed APSOIIM, the fitness curves of the above algorithms are plotted in figure 4. From these graphs, the following conclusions can be drawn: (1) For F8-F12, the APSOIIM algorithm exhibits powerful performance, indicating that the proposed strategy improves the optimization ability of the algorithm to a certain extent. (2) For all the multimodal benchmark functions, APSOIIM is steadily ranked first, indicating that the proposed algorithm can effectively solve complex optimization problems and has excellent anti-interference performance.

Population diversity analysis
Population diversity has a significant impact on the performance of the algorithm and directly quantifies the state of the population. A large population diversity indicates that individuals cover large search areas, representing exploration; in contrast, a small population diversity indicates that individuals are concentrated in small search areas, representing exploitation. The diversity is calculated as equations (13) and (14):

I_c(t) = (1/N) Σ_{i=1}^{N} sqrt( Σ_{d=1}^{D} (x_id(t) − c_d(t))^2 )  (13)

c_d(t) = (1/N) Σ_{i=1}^{N} x_id(t)  (14)

where I_c is the dispersion between the population and its centre of mass c at each iteration, and x_id(t) represents the value of the dth dimension of the ith individual in the tth generation. Compared with the curves of PSO, the proposed APSOIIM has a steeper downward trend, which shows that the APSOIIM algorithm has more robust search efficiency. Therefore, the proposed APSOIIM algorithm obtains better numerical results.
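Equations (13) and (14) are garbled in this excerpt; the sketch below assumes the standard centroid-distance definition of diversity, which matches the description of I_c as the dispersion of the population around its centre of mass.

```python
import numpy as np

def population_diversity(X):
    """Mean Euclidean distance of individuals to the swarm centroid.
    X has shape (n_particles, dim); each row is a particle position."""
    c = X.mean(axis=0)                                    # centre of mass
    return float(np.mean(np.linalg.norm(X - c, axis=1)))  # dispersion I_c
```

A swarm collapsed onto one point has diversity 0; a widely spread swarm has large diversity, matching the exploration/exploitation reading above.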

Exploration and exploitation analysis
Exploration and exploitation experiments significantly enhance the persuasiveness and quality of a study [45]. To assess whether the proposed APSOIIM algorithm maintains a balance between exploration and exploitation, the exploration and exploitation percentages are computed from the population diversity as equations (15) and (16):

XPL% = (Div / Div_max) × 100  (15)

XPT% = (|Div − Div_max| / Div_max) × 100  (16)

where Div_max is the maximum diversity over the whole iteration and x_id is the value of the dth dimension of the ith individual in the tth generation. To test the exploitation and exploration of the proposed algorithm, three different types of benchmark functions, i.e. F3, F11, and F12, are selected. Figure 6 illustrates the exploration and exploitation curves on the F3, F11, and F12 functions, where the percentage (%) represents the exploration and exploitation level of the whole population. It can be seen that the exploitation proportion on the unimodal function F3 increases rapidly, which indicates that the algorithm has good exploitation ability, while the curves on the multimodal function F11 show that the exploration proportion decreases slowly, which indicates that the APSOIIM algorithm also has good global exploration ability.
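The percentages plotted in figure 6 can be computed from the per-iteration diversity history; since equations (15) and (16) are garbled in this excerpt, the diversity-ratio definitions below are an assumption based on the text's reference to Div_max.

```python
def exploration_exploitation(div_history):
    """Per-iteration exploration (XPL%) and exploitation (XPT%) levels,
    assuming the usual diversity-ratio definitions:
        XPL% = Div / Div_max * 100
        XPT% = |Div - Div_max| / Div_max * 100
    div_history is the sequence of diversity values over the run."""
    div_max = max(div_history)
    xpl = [100.0 * d / div_max for d in div_history]
    xpt = [100.0 * abs(d - div_max) / div_max for d in div_history]
    return xpl, xpt
```

With these definitions the two percentages sum to 100 at every iteration, so the two curves in figure 6 mirror each other.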

Convergence performance analysis 3.6.1. Probability of success test
As previously mentioned, while precision comparison is essential, it only reflects the algorithm's ability to find optima across a large number of test samples; it does not accurately depict each individual run. Therefore, the success probability test was introduced [46] to evaluate the comprehensive ability of the algorithm and the extent of its randomness; the specific symbols for the probability of success are given in table 8. The results of the success probability evaluation for all compared algorithms are presented in table 9.

Convergence speed
To verify the superior convergence efficiency of the proposed APSOIIM, the fitness curves of all algorithms are plotted in figures 7-9. From these graphs, the following conclusions can be drawn: (1) For F1, F2, F3, F5, and F7-F12, the optimal value obtained by the APSOIIM algorithm is superior to that of the other algorithms at the initial stage of iteration, which indicates that the initialization strategy improves the optimization ability of the algorithm to a certain extent. (2) As observed during the iterations, the convergence speed and search accuracy of APSO and PSOCF appear to be lacking: their convergence curves are noticeably slower than APSOIIM's at 50 iterations, whereas the curves of the proposed algorithm are almost vertical at this stage, demonstrating robust search ability. (3) The proposed algorithm consistently achieves the highest accuracy during the later stages of iteration, showing its powerful search performance. (4) For F4 and F6, although the proposed algorithm does not achieve optimal accuracy, its early optimization and iteration speed are better than those of the other comparison algorithms, which has certain practical desirability.

Real-world engineering applications
This section focuses on the practical solution of engineering problems. The first represents an unconstrained problem, while the remaining group comprises practical problems with constraints. For optimization problems with constraints, a simple constraint handling technique based on constraint violation is applied: two solutions are compared based on their constraint violations, and the more suitable one is selected using Deb's feasibility rules, which are as follows: (1) Between a feasible and an infeasible solution, the feasible one is chosen.
(2) A better solution in terms of fitness is selected among two feasible solutions.
(3) The one with the lesser rule violation is selected among two infeasible solutions.
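The three rules above amount to a simple pairwise comparison; a minimal sketch (names illustrative):

```python
def deb_preferred(fit_a, viol_a, fit_b, viol_b):
    """True if solution a is preferred over solution b under Deb's
    feasibility rules; viol_* is the total constraint violation
    (zero for a feasible solution)."""
    if viol_a == 0.0 and viol_b == 0.0:
        return fit_a < fit_b        # rule (2): both feasible -> better fitness
    if (viol_a == 0.0) != (viol_b == 0.0):
        return viol_a == 0.0        # rule (1): feasible beats infeasible
    return viol_a < viol_b          # rule (3): smaller violation wins
```

Note that rules (1) and (3) never compare fitness values across the feasibility boundary, which is what makes the scheme penalty-parameter-free.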
For regular constrained optimization problems, the constraint violation viol(x) corresponding to a solution x can be calculated by equations (18)-(20), where ε is a predefined tolerance parameter fixed at 10^{-4} in the following engineering applications.

Gear train design problem
The gear train design problem is a typical discrete optimization problem, shown in figure 10. The problem involves four decision variables, i.e. x1, x2, x3, and x4, which represent the numbers of teeth on the four gears of a train. Since it is a real-life problem, each decision variable is restricted to an integer [47]. The aim is to find the optimal numbers of teeth (decision variables) that minimize the gear ratio error. To deal with the discrete parameters effectively, the decision variables are rounded in this section, which is more consistent with real-life applications. The problem can be formulated as equation (21). To verify the practical effectiveness of the proposed algorithm, comparative experiments are reported in table 11, including the solutions obtained by SCA [8] and other algorithms, i.e. the augmented Lagrange multiplier method (ALM) [48] and improved SCA (ISCA) [49]. The results demonstrate that APSOIIM provides a better solution in terms of both accuracy and efficiency: the proposed algorithm uses the fewest iterations to achieve results as good as those of SCA, ALM, and ISCA, significantly highlighting its efficiency advantage.
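Equation (21) is not reproduced in this excerpt; in its common (Sandgren) form the gear train objective is the squared deviation of the achieved gear ratio from the target 1/6.931, with integer tooth counts. The exact pairing of the four variables in the ratio may be permuted relative to the paper's equation.

```python
def gear_train_error(x):
    """Gear train objective sketch: squared error between the realised
    gear ratio and the required ratio 1/6.931.  Tooth counts are
    rounded to integers, matching the rounding described above."""
    x1, x2, x3, x4 = (round(v) for v in x)
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2
```

The rounding makes the objective directly usable inside a continuous optimizer such as APSOIIM while still evaluating only integer designs.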

A tension/compression string design problem
This problem aims to minimize the weight f(x) of a tension/compression spring subject to constraints on shear stress, deflection, surge frequency, outside diameter, and the design variables. Problems of this type are generally viewed as constrained optimization problems. The problem involves a total of three decision variables, namely the wire diameter x1, the mean coil diameter x2, and the number of active coils x3. The problem can be described as equation (22). The problem has been tackled by a variety of metaheuristic algorithms, such as GWO [7], GSA [9], PSO [10], and mathematical optimization (Belegundu) [50]. For all the algorithms, the maximum number of function evaluations is set to 1000, and the problem is run 30 times to obtain the average value. Table 12 clearly shows that the proposed algorithm significantly improves the solution compared with GSA [9] and the other algorithms. It can identify the optimal decision variables under the constraints and better balance global exploration and local exploitation, which verifies the effectiveness and accuracy of the proposed algorithm.
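For reference, a sketch of the standard formulation of equation (22) as it usually appears in the literature; the constraint constants below are the conventional ones and are an assumption here, since the paper's equation is not reproduced.

```python
def spring_weight(x):
    # Objective: spring weight, f(x) = (x3 + 2) * x2 * x1**2
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    # Inequality constraints g_i(x) <= 0 (shear stress, deflection,
    # surge frequency, outside diameter) in the standard literature form.
    x1, x2, x3 = x
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 ** 3 * x1 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]
```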

Cantilever beam design problem
This problem is viewed as a practical constrained engineering problem, focusing on minimizing the weight of a cantilever beam with a square cross-section. As shown in figure 11, the beam is supported at node 1, and a force is applied at node 5. The problem involves a total of four decision variables, which represent the heights or widths of the different beam segments. The mathematical formulation is given as equation (23). The best solutions obtained by APSOIIM and other common metaheuristics are presented in table 13, which compares APSOIIM with SCA [8] and other techniques (the modified SCA (m-SCA) [49] and CONLIN [51]). The data indicate that the proposed algorithm can find the most suitable decision variables to obtain the optimal fitness value, that is, the best solution for the practical engineering application. Ultimately, the cantilever beam design problem is solved in practice.

Pressure vessel design problem
Pressure vessel design problems belong to a category of practical engineering problems. The objective is to minimize the total cost, which comprises the forming, material, and welding costs of a cylindrical container; the problem is therefore often understood abstractly as finding an optimal value. As depicted in figure 12, the cylindrical vessel is capped by hemispherical heads. The problem involves four main variables: shell thickness, head thickness, inner radius, and cylindrical shell length. The first two decision parameters are discrete multiples of 0.0625, and the last two are continuous, so the problem is often viewed as a mixed-integer optimization problem. Moreover, the problem includes four constraints, three of which are linear and one nonlinear. The problem is mathematically formulated as equation (24). To obtain the best solution for such problems, the proposed algorithm is calibrated to the same order of magnitude of function evaluations, and the obtained results are shown in table 14. In the references, various optimization techniques, such as ISCA [49], SCA [8], GSA [9], PSO [10], and the Lagrangian multiplier method [52], have been utilized to solve pressure vessel design problems. The numerical results clearly show that the proposed algorithm is more effective in obtaining the minimum cost for such real-world constrained problems.
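A sketch of equation (24) in its standard literature form (the cost coefficients and constraint constants below are the conventional ones and are an assumption here):

```python
import math

def vessel_cost(x):
    # Total cost: x1 = shell thickness, x2 = head thickness,
    # x3 = inner radius, x4 = cylindrical shell length.
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    # g_i(x) <= 0; three constraints are linear and the volume
    # requirement (third entry) is nonlinear, as noted in the text.
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,
    ]
```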

Himmelblau's nonlinear optimization problem
Himmelblau's nonlinear optimization problem, regarded as a practical constrained engineering problem, involves five optimization variables and six nonlinear constraints. The goal is to find the best value within these constraints. In this scenario, numerous experiments have been conducted to compare the optimization performance of APSOIIM with other methods, such as Gaussian PSO (GPSO) [53], orthogonal PSO (OPSO) [54], and the nutcracker optimization algorithm (NOA) [55]. The mathematical formulation is given as equation (25): Min f(x) = 5.3578547X3² + 0.8356891X1X5 + 37.293239X1 − 40792.141. As shown by the results in table 15, the optimal value of the proposed APSOIIM is −30 666.95, which is closer to the real optimal value than the other results under the same magnitude of function evaluations. Concurrently, the parameter vector is [X1, X2, X3, X4, X5] = [78.00, 33.00, 29.29, 45.00, 36.78], which falls within the constraint range of the parameters and meets the required specifications. The experimental results demonstrate that the proposed APSOIIM algorithm can obtain better solutions for Himmelblau's nonlinear optimization problem.
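The objective of equation (25) and the three double-sided constraint expressions can be sketched as follows; the constraint coefficients and ranges are the standard literature form and are an assumption here, since the paper reproduces only the objective.

```python
def himmelblau(x):
    # Objective of equation (25)
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5
            + 37.293239 * x1 - 40792.141)

def himmelblau_bounds_ok(x):
    # Three nonlinear expressions, each bounded on both sides, giving the
    # six nonlinear constraints mentioned in the text (standard form).
    x1, x2, x3, x4, x5 = x
    g1 = (85.334407 + 0.0056858 * x2 * x5
          + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5)
    g2 = (80.51249 + 0.0071317 * x2 * x5
          + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2)
    g3 = (9.300961 + 0.0047026 * x3 * x5
          + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4)
    tol = 1e-2   # small slack, since some constraints are active at the optimum
    return (0.0 - tol <= g1 <= 92.0 + tol
            and 90.0 - tol <= g2 <= 110.0 + tol
            and 20.0 - tol <= g3 <= 25.0 + tol)
```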

Inverse kinematics of inchworm robot
Research on the inverse kinematics of an inchworm robot represents a novel application in practical problem solving. The aim is to minimize the distance between the achieved and desired positions, thereby attaining precise alignment. The problem is framed as a four-dimensional constrained problem, where the parameters represent the joint angles of each module. For the inchworm robot, the link length is set to 0.091, and the target point for the simulation is designated as (0.5, 0.5), where (xd, yd) is the desired point and Li is the length of the ith link.
From table 16, it can be seen that the optimal values found by several algorithms are all 3.4311 × 10⁻¹, which indicates that the algorithms have similar ability to locate the optimum. The corresponding parameter vector of APSOIIM is [45, 360, 0, 0]. However, comparing the 'Mean' and 'STD' values implies the superior stability of the APSOIIM algorithm, which exhibits minimal fluctuations; this observation is corroborated by the STD values presented in table 16.
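A minimal sketch of the objective, assuming a planar serial chain with the forward kinematics accumulating each module's joint angle; taking the angles in degrees is an assumption, chosen so that the reported parameter [45, 360, 0, 0] can be plugged in directly.

```python
import math

def ik_distance(thetas, target=(0.5, 0.5), link=0.091):
    # Planar forward kinematics of a four-module chain: each link of
    # length 0.091 contributes (cos, sin) of the cumulative joint angle;
    # the objective is the distance from the tip to the desired point.
    x = y = 0.0
    cum = 0.0
    for th in thetas:
        cum += math.radians(th)
        x += link * math.cos(cum)
        y += link * math.sin(cum)
    return math.hypot(x - target[0], y - target[1])
```

Under this model, [45, 360, 0, 0] stretches all four links toward the target at 45°; since the total reach 4 × 0.091 = 0.364 is shorter than the distance √2/2 ≈ 0.7071 to (0.5, 0.5), the residual distance is about 0.7071 − 0.364 ≈ 0.34311, which matches the 3.4311 × 10⁻¹ reported in table 16.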

Multiple disk clutch brake design problem
The multiple disk clutch brake design problem, which is devoted to seeking the best parameters, has been considered challenging to solve in recent years. The mathematical expression is illustrated in equation (27). In this section, numerous algorithms are used to address this engineering application, and the parameters of all the algorithms are taken from the optimal settings recommended in the references. The results are shown in table 17. Compared with SCA, TACPSO, and EGWO, the statistical results show that the APSOIIM algorithm obtains better results and exhibits excellent performance, which indicates that APSOIIM balances exploration and exploitation effectively.

Speed reducer design problem
The speed reducer design problem consists of seven decision variables, namely the face width (b), module of teeth (m), number of teeth on the pinion (z), length of shaft 1 between bearings (l1), length of shaft 2 between bearings (l2), diameter of shaft 1 (d1), and diameter of shaft 2 (d2). The objective of this problem is to minimize the total weight of the speed reducer. Mathematically, the problem can be stated as follows: Min f(x) = 0.7854x1x2²(14.9334x3 + 3.3333x3² − 43.0934) − 1.508x1(x6² + x7²) + 7.4777(x6³ + x7³) + 0.7854(x4x6² + x5x7²). In this section, numerous algorithms are used to address this engineering application, and the parameters of all the algorithms are taken from the optimal settings recommended in the references. The results are listed in table 18.
The results of SCA [8], ISCA [49], and CS [60] from the literature are also shown in table 18. From the table, it can be observed that the proposed APSOIIM method outperforms the other techniques in minimizing the weight of the speed reducer.
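The objective above can be sketched directly; the well-known best design used in the check below is taken from the general literature on this benchmark, not from table 18, and is therefore an assumption.

```python
def speed_reducer_weight(x):
    # x = (b, m, z, l1, l2, d1, d2) mapped to (x1, ..., x7);
    # total weight of the speed reducer in the standard formulation.
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (14.9334 * x3 + 3.3333 * x3 ** 2 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))
```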
Based on the results of the eight practical engineering applications, the proposed APSOIIM algorithm can effectively balance the exploration and exploitation abilities. In addition, the proposed strategy can effectively avoid premature convergence by enriching population diversity and accurately identifying the optimal solution, i.e. the optimal response to the actual problem, thus significantly enhancing the algorithm's efficiency.

Conclusions and future work
In this research, APSOIIM is presented to solve complex global optimization problems. To begin with, a chaotic sequence strategy is utilized to create uniformly distributed particles and accelerate their convergence. Then, an interaction information mechanism is introduced to increase the diversity of the population, which allows effective interaction with the optimal information of neighboring particles and strikes a balance between exploration and exploitation. Besides, convergence was demonstrated to verify the robustness and efficiency of the proposed APSOIIM algorithm, which converges to the global optimal solution with probability one. Finally, a substantial number of comparative experiments were conducted on the CEC2014 and CEC2017 benchmark functions as well as eight famous engineering optimization problems. The experimental results explain why the proposed interaction information mechanism can maintain the balance between exploration and exploitation and further demonstrate that the performance of APSOIIM is better than that of the compared algorithms.
In the future, APSOIIM will be developed to solve optimization problems in different research directions, such as constrained optimization problems, integer programming problems, and so on. Concurrently, the exploration of a multi-objective APSOIIM variant warrants additional investigation. In addition, the stability, parameter robustness, and computational complexity of the proposed algorithm need to be further studied.

Figure 1 .
Figure 1. Flow chart of the proposed APSOIIM.

Figure 3 .
Figure 3. Diagram of the interaction information mechanism.

Figure 5 .
Figure 5. The curves of population diversity and fitness values on F3, F11, F12 with 30 dimensions.

Figure 5
Figure 5 plots the curves of population diversity and fitness values when solving the F3, F11, and F12 functions. For the unimodal function F3, the population diversity and fitness values decrease rapidly, which shows that the proposed algorithm has good search ability. For the F11 and F12 functions, although the population-diversity curves of the proposed APSOIIM algorithm fluctuate, the range of fluctuation is stable. Compared with the curves of PSO, the proposed APSOIIM has a steeper downward trend, which shows that the APSOIIM algorithm has higher search efficiency. Therefore, the proposed APSOIIM algorithm obtains better numerical results.

Figure 7 .
Figure 7. Convergence curves of several methods for unimodal benchmark functions.

Figure 8 .
Figure 8. Convergence curves of several methods for multimodal benchmark functions.

Figure 9 .
Figure 9. Convergence curves of several methods for discrete benchmark function.

Table 1 .
Summary of existing PSO research.

Table 2 .
Overview of selected benchmark functions.

Table 6 .
Results of APSOIIM and current popular algorithms for 12 30-variable benchmark functions.

Table 7 .
Comparison results of APSOIIM and the winners of CEC2013-CEC2018 in 30 dimensions.

Table 11 .
Comparison results of the gear train design problem.

Table 12 .
Comparison results of the compression string design problem.

Table 13 .
Comparison results of the cantilever beam design problem.

Table 14 .
Comparison results of the pressure vessel design problem.

Table 15 .
Comparison results of Himmelblau's nonlinear optimization problem.

Table 16 .
Comparison results of inverse kinematics of inchworm robot.

Table 17 .
Comparison results of multiple disk clutch brake design problem.

Table 18 .
Comparison results of speed reducer design problem.