An Improved Particle Swarm Optimization Algorithm Using Memory Retention
Abstract. The Particle Swarm Optimization (PSO) algorithm is a population-based stochastic optimization technique in which each particle moves through the search space in search of an optimal solution. During this movement, each particle updates its position and velocity with the help of the best position it has found so far (pbest) and the best position found by the swarm (gbest). Conventional PSO suffers from stagnation because particles can be attracted towards a local best rather than the global best position. Several techniques exist to keep particles from falling into local optima (mainly in multimodal functions); however, achieving a high convergence rate while avoiding stagnation is challenging. In this paper, we present two new variants of PSO that prevent premature convergence of the swarm. The first technique, PSO with Memory Retention (PSOMR), augments conventional PSO with memory by leveraging the concepts of the Ebbinghaus forgetting curve; the second technique (MS-PSOMR) combines PSO with subswarms to avoid stagnation. We also show how historical memory, built by storing particles' historical promising pbest and fitness values, improves PSO by reusing the best values from the nth slot of the memory. Experiments are conducted on well-known benchmark functions and compared with algorithms of the same category. The results show that both approaches perform better than comparable algorithms for the majority of the measured metrics and discourage premature convergence. We also observe that our approaches find an optimal solution more quickly than conventional PSO.
Keywords. PSO, memory retention, premature convergence, forgetting curve.
1 Introduction
The Particle Swarm Optimization (PSO) algorithm is a population-based stochastic search algorithm for complex non-linear optimization problems [1]. In PSO, each member of the group is called a particle, and the entire population is called the swarm. The PSO algorithm is based on sharing knowledge among individual particles without any prior knowledge of their positions, behavior inspired by the social behavior of animals. Each particle in the swarm has a velocity. An important characteristic is that all particles perform a collaborative search in which each particle is attracted towards the best position found by the whole swarm, termed gbest, and its own best position, termed pbest. Particles of a swarm communicate with each other and adjust their positions and velocities dynamically so that each particle moves towards its best position in the swarm. The entire process runs in iterations, and in each iteration all particles move towards better and better positions by evaluating a fitness function f: R^n → R, which maps a vector of real numbers to a real number. PSO is also simple to implement and can exhibit good convergence properties compared to other evolutionary algorithms. Because of these advantages, PSO has become one of the most useful optimization techniques and has been extended and applied in many areas such as function optimization, image processing, and power systems [].
Although PSO is considered a potential solution to many problems, it suffers from premature convergence since it can be trapped in local minima, especially for multi-dimensional and multimodal problems that contain many local minima [9]. One potential reason for this behavior is that existing PSO algorithms use only the gbest and the particle's own current best position; they fail to make use of the particles' historical promising information when a particle moves towards a local optimum. (Throughout this paper we refer to pbest and the fitness value as information.) In this case, it may be impossible for a particle to escape a local optimum once its pbest falls into the region where the gbest is located. One important strategy to prevent premature convergence is to maintain a good balance between exploration and exploitation []. Exploration is the ability to search more points in the solution space, which improves the optimization process as the algorithm iterates; exploitation is the ability to search near the current solution. Many PSO variants have been developed to maintain a balance between these two, for example by adjusting PSO parameters [] or designing neighborhood topologies []. Another important strategy to discourage premature convergence is to maintain high diversity among the particles using evolutionary operators such as selection, crossover, and mutation [21].
In this work, we propose two new strategies to overcome premature convergence and improve the overall optimization process. Both techniques maintain a historical memory of the particles, storing promising solutions and reusing them later to improve the search process. In the first strategy, PSOMR, we use a memory inspired by the human brain: we apply the idea of the Ebbinghaus forgetting curve to the PSO algorithm, determine the information present in a particle at time t by choosing the pbest from the best memory slot (considering a minimization function), and thereby eliminate the premature convergence problem. In the second variant, we use the concept of subswarms and exchange information between them; we store the historical promising fitness values in each particle's memory and reuse them to avoid premature convergence.
The rest of the paper is organized as follows. Section 2 presents the basic concepts used in the paper. Section 3 reviews related work, including PSO variants that use the idea of memory. Section 4 describes the proposed algorithms, and Section 5 presents the experimental results.
2 Background and Motivation
2.1 Particle Swarm Optimization
In conventional PSO, each particle in the swarm is considered a candidate solution to the optimization problem in an n-dimensional space. Every particle searches for an optimal point in the search space and possesses both a position and a velocity. At each iteration, the velocity of each particle is updated using the following equation:
v_{ij}^{t+1} = v_{ij}^{t} + c_1 r_{1j}^{t} (pBest_{ij}^{t} - x_{ij}^{t}) + c_2 r_{2j}^{t} (gBest_{j}^{t} - x_{ij}^{t})    (1)

x_{i} = x_{i} + v_{i}    (2)
where
v_{ij}^{t} is the velocity of particle i in dimension j at time t,
x_{ij}^{t} is the position of particle i in dimension j at time t,
pBest_{ij}^{t} is the personal best position of particle i in dimension j found up to time t,
gBest_{j}^{t} is the global best position in dimension j found by the swarm up to time t,
c_1 and c_2 are acceleration coefficients that determine the influence of the particle's personal and social experience, and
r_{1j}^{t} and r_{2j}^{t} are random numbers.
The three terms v_{ij}^{t}, c_1 r_{1j}^{t} (pBest_{ij}^{t} - x_{ij}^{t}) and c_2 r_{2j}^{t} (gBest_{j}^{t} - x_{ij}^{t}) represent the inertial, cognitive and social components respectively, which play a major role in updating a particle's velocity.
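To make the update rule concrete, the following is a minimal sketch of equations (1) and (2) in Python (using NumPy); the swarm size, search range, objective function and parameter values are illustrative assumptions, not part of the formulation above.

```python
import numpy as np

# Illustrative sketch of the conventional PSO update, equations (1) and (2)
# (as written above, without an inertia weight). Shapes: (particles, dims).
c1, c2 = 1.49445, 1.49445          # acceleration coefficients (assumed values)
num_particles, num_dims = 30, 2

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, (num_particles, num_dims))   # positions
v = np.zeros_like(x)                                     # velocities
pbest = x.copy()                                         # personal best positions

def fitness(points):
    # Sphere function as a stand-in objective (minimization).
    return np.sum(points ** 2, axis=1)

pbest_fit = fitness(pbest)
gbest = pbest[np.argmin(pbest_fit)]                      # swarm's global best

for t in range(100):
    r1 = rng.random((num_particles, num_dims))
    r2 = rng.random((num_particles, num_dims))
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # equation (1)
    x = x + v                                                # equation (2)

    fit = fitness(x)
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)]
```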
Premature convergence in PSO
The swarm is said to exhibit premature convergence when the solution approaches a local position rather than moving towards the global best position. At this stage, progress towards the global best position stops and the search continues to work only on a local minimizer; that is, the particles converge too early and the result is sub-optimal. In premature convergence, all the particles come close to each other, so that the global best and all personal bests are attracted towards one region of the search space. Since the particles are repeatedly attracted to this small region in close proximity, the momentum carried from their previous velocities becomes zero, which results in stagnation. One effective way to eliminate premature convergence is to maintain diversity in the population. Diversity in the swarm can be calculated as the proportion of distinct individuals based on their fitness values.
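As a rough illustration of this diversity measure (the proportion of distinct individuals judged by fitness), a small sketch follows; the rounding tolerance used to decide when two fitness values count as distinct is an assumption.

```python
import numpy as np

def swarm_diversity(fitness_values, decimals=6):
    """Proportion of distinct individuals, judged by (rounded) fitness values.

    A value near 1.0 means the fitness values are spread out; a value near
    1/len(fitness_values) suggests the swarm has collapsed onto one region.
    The rounding tolerance `decimals` is an illustrative assumption.
    """
    rounded = np.round(np.asarray(fitness_values), decimals)
    return len(np.unique(rounded)) / len(rounded)

# Example: a nearly converged swarm has low diversity.
print(swarm_diversity([0.12, 0.12, 0.12, 0.13, 0.12]))  # 0.4
```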
2.2 Subswarms
Since conventional PSO does not guarantee diversity among the particles, dividing the swarm population into multiple groups, called subswarms, is one technique that can improve the overall search process [6] and increase the diversity among individuals []. This technique of dividing the swarm into multiple subswarms and performing the optimization process locally and independently within each subswarm is called multi-swarm optimization. In multi-swarm optimization, each subswarm focuses on a specific region and starts its own optimization process.
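A minimal sketch of how a swarm might be partitioned into subswarms is shown below; the random, equally sized split is an illustrative assumption rather than a prescribed scheme.

```python
import numpy as np

def split_into_subswarms(positions, num_subswarms):
    """Partition particle indices into roughly equal subswarms.

    Each subswarm then runs its own PSO updates and keeps its own local
    best; how often (or whether) subswarms exchange information is a
    design choice of the particular multi-swarm variant.
    """
    indices = np.random.permutation(len(positions))
    return np.array_split(indices, num_subswarms)

# Example: 200 particles into 4 subswarms of 50, as in the experiments later.
groups = split_into_subswarms(np.zeros((200, 30)), 4)
print([len(g) for g in groups])  # [50, 50, 50, 50]
```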
2.3 Forgetting curve
The storage and utilization of information in the human brain is mainly determined by the length of time since the information was stored in memory. The efficiency of historical cognition decreases gradually with time: the human brain retains information that is used frequently, but information fades from memory when it is not in use. Ebbinghaus, in 1885, described the famous Ebbinghaus forgetting curve to confirm this point. The forgetting curve represents the probability that a person can recall information at time t since the previous recall, and shows how information is lost from memory over time when there is no attempt to recall it. It is usually expressed as
R = e^{-t/S}    (3)
where R is the memory retention, denoting the probability of recalling information at time t since the last recall; e is the Euler number, approximately 2.718; t is the time since the previous recall; and S (the relative strength of memory, or stability) is the approximate time since the last recall for which the information remains stored in memory.
An essential aspect of long-term memory is that after a repeated recall of information at time t > 0, the time S for which the information is stored in memory changes. The change depends on the previous value of S and on the time of recall t. Usually, a repeated recall multiplies S in the next period by a factor F that is always greater than 1, i.e., F > 1. Immediate recollection of information has no considerable effect on learning, while a recollection that comes too late causes substantial forgetting. Between these two extremes there is an optimal time opt(S) at which recollection of the information is most beneficial [3].
The new value of S, denoted S_new, depends on (a) the type of memory, (b) the time of the repeated information recall and (c) the previous value of S. It is calculated as S_new = ch(t, S, F, S_ini) · S, where the function ch(t, S, F, S_ini) computes the coefficient of change of S and S_ini is the initial strength of memory.
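The retention value of equation (3) and the update of S can be illustrated as follows; the concrete form of ch(·) used here (a plain multiplication by F on each recall) is a simplifying assumption and not the exact function from [3].

```python
import math

def retention(t, S):
    """Ebbinghaus forgetting curve: probability of recall after time t (eq. 3)."""
    return math.exp(-t / S)

def update_strength(t, S, F=1.2, S_ini=1.0):
    """Toy version of S_new = ch(t, S, F, S_ini) * S.

    Assumption for illustration: a recall made reasonably close to the
    optimal time opt(S) strengthens the memory by the factor F (> 1);
    the exact ch(.) from [3] is not reproduced here.
    """
    return F * S

S = 1.0
for recall_time in (0.2, 0.24, 0.29):       # recalls made close to opt(S)
    print(f"R = {retention(recall_time, S):.3f}")
    S = update_strength(recall_time, S)      # memory strength grows after recall
```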
In this work, we combine the idea of the Ebbinghaus forgetting curve with the PSO algorithm. We assume that (a) particles in PSO have a memory that possesses the qualities of the human brain, and (b) the memory retention factor of the particles changes according to the law of the Ebbinghaus forgetting curve.
2.4 Motivation
According to the Ebbinghaus forgetting curve, the efficiency of historical cognition mainly depends on the time since the information was stored in memory. In other words, accuracy is higher when information is recollected within a short span after it is stored than when it is recollected after a longer span. Following the same law, our idea is to pick the best pbest information from memory, in the same way the human brain does, to help the particles find the global optimum and avoid stagnation. Since the particles in PSO iterate towards the optimal region of the solution space, storing a particle's information in historical memory helps particles return to promising regions if they fall into local optima. Intuitively, the historical memory of particles can help PSO overcome this drawback and improve performance, because an implicit or explicit historical memory allows PSO to store promising solutions and reuse their information later to improve the search process. On the one hand, the historical memory preserves information about the promising regions that particles have already searched, which enables particles to return to these regions if they fall into local optima; on the other hand, historical memory derived from different particles can also maintain the diversity of the population to some extent.
Moreover, in PSO all particles initially behave divergently, and the degree of divergence decreases gradually as they become trapped in local optima, resulting in premature convergence. Improving the population diversity can prevent premature convergence and improve the overall convergence rate. Storing a particle's previous fitness values in memory and calculating the relative change in fitness value at each PSO iteration can serve as an indicator of premature convergence: a negligible change in fitness value compared to the previous iteration suggests that the particle is losing its divergent behavior (since a particle changes its position based on its fitness value) and is heading towards premature convergence.
3 Related Work
In the literature, there are several notable works that improve the quality of conventional PSO. Bergh et al. [4] presented a cooperative particle swarm optimizer (CPSO) that uses multiple swarms to optimize different components of the solution vector cooperatively. Mendes et al. [5] introduced the fully informed particle swarm (FIPS), in which all neighbors influence the flying velocity of a particle. Ratnaweera et al. [6] presented a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC). Liang et al. [7] described a comprehensive learning particle swarm optimizer (CLPSO) that uses all particles' current pbests to update a particle's velocity. Chen et al. [8] presented PSO with an aging leader and challengers (ALCPSO), which uses an aging mechanism. Lin et al. [9] presented a self-government PSO in which each particle's update depends on the local best information found in the last iteration as well as on pbest and gbest. All of the above algorithms can still become trapped in local optima.
Researchers have also focused on utilizing memory and multiple swarms to eliminate premature convergence. Below are some studies that use additional external or internal memory (a repository) to improve the performance of conventional PSO. Memory-based PSO methods in the literature have mainly concentrated on the ways in which the local and global best positions are selected and used. Coello et al. [10] determined the global best by choosing the best solutions from an external memory, while the local best is updated with respect to Pareto dominance. Hu et al. [11] used dynamic neighborhoods and employed an external memory to store all potential optimal solutions. In their work, the global best is selected among the candidates in the external memory by defining a neighborhood objective function and an optimization objective function; the global best is found among a particle's neighbors, subject to the defined neighborhood objective. Wang et al. [12] presented a triggered memory-based PSO that adjusts the parameters of PSO when the search space changes over time. In this method, a predefined number of globally best positions is maintained and reintroduced into the population when necessary. Acan and Gunay [13] stored a single globally best position along with a finite number of globally worst positions; at the end of each iteration, some randomly chosen particles are replaced by applying a crossover operator. In another work by Acan [13], each particle maintains a global and a local memory. A colony consisting of these positions is constructed, and the velocity and position of particles are updated in each iteration; the new positions are evaluated, and the best replaces the particle's current velocity.
Tang and Fang [14] described an approach that accumulates historical cognition using a deep extended memory and improves convergence speed by introducing a new learning method for cognition. Shahriar and Sima [15] presented an extension of PSO that utilizes two separate external memories in which a set of globally found best and worst positions is stored along with their parameters; at each iteration, the distances from the current particle to the closest best and closest worst particles are calculated. Jie et al. [16] proposed a composite PSO algorithm named HMPSO in which each particle has three candidate positions, generated from the historical memory, the particle's current pbest, and the swarm's gbest. The above approaches re-introduce the globally best position into the population during the search. The majority of the memory-related works in the literature concentrate on using memory (either external or additional) to exploit information from historical cognition.
The multi-swarm technique has also attracted many researchers. Zang and Ding et al. [17] proposed MSCPSO, which uses four subswarms that exchange information among themselves to evaluate the fitness function using a master-slave model. Niu et al. [18] presented a new optimization algorithm with a central learning strategy, called MPSOCL, which employs an update scheme that combines historical values from all subswarms. Suganthan et al. [19] proposed DMS-PSO, in which the whole population is divided into small subswarms that are frequently regrouped by exchanging information, to increase the diversity of the swarm; however, the frequent regrouping in DMS-PSO does not lead to better exploitation as the algorithm iterates. Several modifications of DMS-PSO have been proposed to achieve better performance. Xia et al. proposed an algorithm called DMS-PSO-CLS that hybridizes DMS-PSO with a cooperative learning strategy to achieve better exploitation and exploration. Li and Xiao introduced a method named Multi-Best PSO (MBPSO) in which the whole population is divided into subswarms; each subswarm has a separate gbest, and the fitness value is calculated using multiple gbests. In case of premature convergence, the multiple gbest values are used to escape the local position. Zhao et al. extended DMS-PSO by using subswarms of dynamic sizes to adopt the harmony search algorithm; new harmonies are generated according to the current best position, and the nearest personal best solution is replaced with a newly generated harmony that has better fitness.
4 Proposed Approach
In this section, the two proposed approaches to eliminate the premature convergence problem are presented.
Approach 1 (PSOMR):
This method uses memory, together with the concept of the forgetting curve, to read stored information and update both the velocity and the position of a particle. The advantage of using memory is the preservation of a particle's historical promising information: the historical memory reflects the distribution of particles' historical promising pbests in the solution space. The advantage of incorporating the forgetting curve into PSO is that we can fetch the appropriate fitness value by calculating the memory retention of a particle. The approach is also in line with the way the human brain recollects information at time t and reproduces it for use.

The proposed approach has two phases. The first phase deals with the initialization of memory (Algorithm 1; see Figure 1), where each particle uses its memory to store the pbests it has personally explored so far together with their fitness values. Note that in conventional PSO there is a local memory for each particle and a global memory for the swarm, each capable of keeping only one position. Unlike conventional PSO, in the proposed approach a particle can store a finite number of promising values. The second phase uses the memory from phase 1, computes the memory retention, and selects the pbest from the relevant memory slot.

We assume that memory is initially empty and that the memory size is the same for all particles. In the initialization process, the conventional PSO algorithm is run until the memory is filled with personally discovered promising positions. The fitness value of each particle i is compared with all fitness values stored in its local memory Mi. The pbest and fitness value of a particle are inserted into Mi only if the local memory is empty or the current fitness value is better than the last inserted value. This way, we ensure that the initial contents of memory are sorted in ascending order of fitness value. For every particle in the swarm and every dimension, we compute the memory retention at time t with equation (3). Next, for all dimensions, we calculate the percentage of memory retention using equation (5), and if it is not 0%, we select the pbest from the nth slot of memory. We compute the velocity of particle i at time t+1 as
v_{ij}^{t+1} = v_{ij}^{t} + \sum_{k=1}^{t} R_i (pBest_{ij}^{t} - x_{ij}^{t}) + \sum_{k=1}^{t} R_i (gBest_{j}^{t} - x_{ij}^{t})    (4)
where R_i represents the memory retention of particle i at time t. We continue to update the velocity of the particle using equation (4) and then update its position. This procedure is repeated for every particle in every dimension. The coefficient function ch(t, S, F, S_ini) is computed as in [3].
As the value of opt(S) varies between 10-30% of the time S (see Section 2.3), we take opt(S) to be 20% of S and F = 1.2. Since we maintain multiple pbests and fitness values in memory for a particle, in the second phase we determine the slot from which information is to be picked from memory at time t. For this we use the value of S (the approximate time since the last recall): if S is less than the predefined time n·T, where 0 < n <= m, then we pick the pbest from the nth slot of memory. Figure 2 shows the procedure for finding the memory slot (Algorithm 2), and Figure 3 shows the proposed PSO with memory (Algorithm 3). Note that step 6 of Algorithm 3 uses the nth slot computed by Algorithm 2. Our procedure differs from the conventional PSO algorithm in two ways: each particle keeps a finite number of promising pbest and fitness values in memory rather than a single pbest, and the velocity update of equation (4) uses the memory retention factor together with the pbest selected from the memory slot.
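A rough sketch of the slot-selection idea behind Algorithm 2 is given below; since the full listing appears only in the figures, the memory layout, the slot boundaries and the numerical values here are illustrative assumptions.

```python
import math

def select_memory_slot(S, T, memory_size):
    """Pick slot n (0-based) following the rule 'if S is less than the
    predefined time n*T, pick the pbest from the nth slot'.
    The exact boundary handling is an illustrative assumption."""
    for n in range(1, memory_size + 1):
        if S < n * T:
            return n - 1
    return memory_size - 1

# memory_i holds (fitness, pbest) tuples of particle i, best fitness first.
memory_i = [(0.01, [0.1, -0.2]), (0.05, [0.3, 0.1]), (0.20, [1.0, 0.9])]

S = 1.8            # approximate time since the last recall (assumed value)
T = 1.0            # predefined time step (assumed value)
t_elapsed = 0.4    # time since the previous recall (assumed value)

slot = select_memory_slot(S, T, len(memory_i))
pbest_from_memory = memory_i[slot][1]
R_i = math.exp(-t_elapsed / S)   # memory retention, equation (3)
# pbest_from_memory and R_i then feed the velocity update of equation (4).
print(slot, pbest_from_memory, round(R_i, 3))
```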
Approach 2 (MS-PSOMR):
PSO with a large population performs well on simple problems, and PSO needs a comparatively small population (compared to other evolutionary algorithms) to achieve good results on complex problems [20]. Hence, we divide the entire swarm into multiple subswarms to slow the convergence speed and increase diversity. Each member of a subswarm uses the solution space of its subswarm and continues to search for better regions. As in approach 1, we store the fitness values of particles in their memory and use them later for updating velocity and position. Each subswarm has its own local best and global best, termed lpbest and lgbest. We calculate the relative change in fitness values from memory, and if the relative change is less than a predefined constant µ, we suspect that the particle is undergoing premature convergence. The reason is that a particle's change in position depends on its fitness value (according to equations (1) and (2)), and according to the conventional PSO algorithm a particle changes its position only if there is an improvement of the fitness function compared to the previous iteration. Therefore, if the relative change of the fitness value is less than µ, we assume that the particle is undergoing premature convergence. To bring the particle out of premature convergence, we take the gbest of the best particle in the subswarm and use it to update the particle's position and velocity using equations (1) and (2), and the process is repeated. Since the gbest is replaced, the particle changes its position and escapes premature convergence.
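A small sketch of the detection step described above follows; the exact form of the relative-change formula, the guard against division by zero and the threshold value are illustrative assumptions.

```python
def is_premature(fitness_history, mu=1e-6):
    """Flag premature convergence when the relative change in fitness between
    consecutive iterations (taken from the particle's memory) drops below
    the predefined constant mu."""
    if len(fitness_history) < 2:
        return False
    prev, curr = fitness_history[-2], fitness_history[-1]
    rel_change = abs(curr - prev) / (abs(prev) + 1e-12)   # guard against /0
    return rel_change < mu

# Example: a stagnating particle triggers the gbest replacement step.
history = [3.2001, 3.2001, 3.2001]
if is_premature(history, mu=1e-6):
    # use the gbest of the best particle in the subswarm to update this
    # particle's velocity and position via equations (1) and (2)
    pass
```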
The key parameters of the approach, which can be adjusted to vary its efficiency, are the premature-convergence threshold µ, the memory size m, the number of subswarms, and the number of particles per subswarm.
The pseudo code for MS-PSOMR is given below in Algorithm 4.
5 Experimental Results
5.1 Benchmark functions:
To test the performance of the two proposed algorithms, we use the six benchmark functions presented in Table 1. The functions fall into two groups: the first group (f1, f2) consists of relatively simple unimodal functions, while the second group (f3-f6) consists of complex multimodal functions, of which f5 has a large number of local optima.
5.2 Parameter settings:
We used variable parameter settings for the two approaches to present the results in more detail. For PSOMR, the swarm population m is 30, the memory size M is 5, and the number of dimensions D is 30. The results are obtained from 30 independent runs, and the maximum number of epochs, used as the termination criterion, is 4000. For MS-PSOMR, the total number of particles is 200, divided into four subswarms of 50 particles each, and the number of dimensions is 30. The values of µ and the memory size m change with the function; they are presented in Table 4. For PSO, PSOMR, and MS-PSOMR we set the inertia weight ω to 0.729 and c1 and c2 to 1.49445 [17].
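For reference, these settings can be collected in one place as follows (a plain restatement of the values reported in this subsection; the dictionary names are arbitrary).

```python
# Parameter settings used in the experiments (restated from the text above).
PSOMR_SETTINGS = {
    "swarm_size": 30,
    "memory_size": 5,
    "dimensions": 30,
    "max_epochs": 4000,
    "independent_runs": 30,
    "inertia_weight": 0.729,
    "c1": 1.49445,
    "c2": 1.49445,
}

MS_PSOMR_SETTINGS = {
    "total_particles": 200,
    "subswarms": 4,
    "particles_per_subswarm": 50,
    "dimensions": 30,
    "inertia_weight": 0.729,
    "c1": 1.49445,
    "c2": 1.49445,
    # mu and the memory size m are function-dependent (see Table 4).
}
```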
5.3 Discussion:
The two proposed approaches are compared with PSO variants that exhibit similar functionality. PSOMR is compared with CLPSO, which uses all particles' current pbests to update a particle's velocity. MS-PSOMR is compared with DMS-PSO, in which the whole population is divided into many small subswarms that are frequently regrouped according to various regrouping schedules. In the next subsection, we present the results of PSOMR.
5.3.1 Experimental results of PSOMR Approach:
To illustrate the problem of premature convergence in conventional PSO, the algorithm is set to optimize the Rastrigin function. Figure 4 shows the premature convergence behavior of conventional PSO for one particle in one dimension over 4000 epochs. During optimization of the function, premature convergence is detected at epoch 1551 and the particle's velocity becomes zero. However, premature convergence can be avoided if pbest is taken from the historical memory (see Figure 5).
Table 1. List of benchmark functions
| Type of function | Name       | fmin | Test function |
| Unimodal         | Sphere     | 0    | f_1 = \sum_{i=1}^{D} x_i^2 |
|                  | Rosenbrock | 0    | f_2 = \sum_{i=1}^{D-1} [100 (x_i^2 - x_{i+1})^2 + (x_i - 1)^2] |
| Multimodal       | Schwefel   | 0    | f_3 |
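As a sanity check of the formulas shown in Table 1, the two unimodal functions can be written directly as follows; the remaining functions (f3-f6) are not reproduced here because their definitions are not shown in the table excerpt.

```python
import numpy as np

def sphere(x):
    """f1: sum of squares; global minimum 0 at x = 0."""
    x = np.asarray(x)
    return np.sum(x ** 2)

def rosenbrock(x):
    """f2: sum over i of 100*(x_i^2 - x_{i+1})^2 + (x_i - 1)^2; minimum 0 at x = 1."""
    x = np.asarray(x)
    return np.sum(100.0 * (x[:-1] ** 2 - x[1:]) ** 2 + (x[:-1] - 1.0) ** 2)

print(sphere(np.zeros(30)), rosenbrock(np.ones(30)))  # 0.0 0.0
```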