
A vague memory can affect first-return time

Published 8 June 2020 © 2020 The Author(s). Published by IOP Publishing Ltd
Citation: Tomoko Sakiyama 2020 J. Phys. Commun. 4 065005. DOI: 10.1088/2399-6528/ab9801


Abstract

First-return time is an important property concerning the return of particles or walkers to a start point. Recursive walks, which may be related to first-return time, are found in both random walk models and memory-based walk models. Achieving a balance between recursive walks and diffusive movements is a crucial but difficult modeling problem. Here, starting with a simple Brownian-walk model, I investigated how vague memorized information influences the first-return times of a walker. In the proposed model, the walker memorizes recently visited positions and recalls the direction in which it previously moved when returning to those positions. Using the recalled information, the walker then moves in the direction opposite to that previously traveled. In addition, the walker considers its recent experience and modifies its directional rule, i.e., the memorized information, when that rule disturbs the recent flow of its movement. Thus, the proposed model effectively produces recursive walks in which a walker returns to a start point while exhibiting diffusive movements.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The first-passage and first-return times of particles or walkers provide understanding of the spatial properties of a system [1–3]. The former indicates the time required for particles or walkers to cross a target site for the first time, whereas the latter indicates the time required for them to return to the target site for the first time. These properties have been well studied in stochastic and random processes. For example, for a semi-infinite domain, the first-passage time density for any symmetric jump length distribution in a Markov process is known to exhibit the universal Sparre Andersen (SA) asymptotic ≃ t^(−3/2) [4–6]. In a finite interval, i.e., a finite field, however, the first-passage time density is eventually cut off by an exponential shoulder. The SA theorem states that the probability distribution of the number of steps required for a random walker starting at the origin to enter a semi-axis for the first time is independent of the walker's jump length distribution [7]. First-passage and first-return times are applicable to various problems: the spread of disease, neuron firing dynamics, controlled kinetics, and animal movements [8–13]. For example, animals conduct random searches for various purposes, such as finding food resources or relocating towards a goal when they become lost [14]. First-passage and first-return times are therefore important properties of random walkers, and they have been extensively studied using random walk algorithms [1–8, 15].

Animals also show recurrence, which depends on their spatial memory or on environmental cues [16]. Thus, in addition to simple random walk models, memory-based walk models also effectively describe animal movements [16–18]. Indeed, the space-use of animals has been well investigated because in reality they search within limited areas and show recursive movements [16, 19, 20]. In such movements, the return times of animals to previously visited sites are therefore important [16]. However, visiting new sites with diffusive movements is also non-negligible, since animals must search for targets such as food resources, prey, and mates [21]. Although several studies of space-use in animals assumed that agents can use spatial memory or maps to remember previously visited positions [19, 22–26], living systems do not necessarily exhibit these higher-order cognitive abilities [27]. Consequently, developing agent-walk models with a limited memory is necessary in order to properly investigate the decision-making processes of animals, which combine recursive walks with exploratory behaviors. Previous studies evaluated the first-return times of fractional Brownian motion and of long-term correlated data with distribution densities such as power-law and log-normal [28–31]. These long-term correlations can be linked to memory effects [30, 31]. However, studies have yet to deal directly with the stochastic process of an agent that autonomously modifies its direction of movement using its limited experience. Some animals, which do not necessarily possess high-level cognitive capacities, appear to use directional information to return to their nest or goal by accessing their memory [32, 33]. Furthermore, animals integrate a plethora of information received from their environment or mates and, in consequence, drastically change their actions [34]. In other words, animals sometimes ignore the rules they generally obey because a new experience compels them to act differently.

In this study, therefore, I consider a memory-based walker model on a two-dimensional (2D) simulation field in which the walker sometimes coordinates the directional rules within its memory while initially following a Brownian-like walk pattern. Specifically, the walker remembers recently visited positions and recalls the direction in which it previously moved when it revisits those positions. The walker then moves in the direction opposite to that previously traveled in order to return to the start position effectively. However, this strategy does not always lead the walker back to the start position when it searches on a field of two or more dimensions. Hence, a modified model was also developed in which the walker reconsiders the memorized direction based on its recent experience. Such an event was previously described in some random walk models and memory-based walk models [21, 35]. This modified model outperformed the basic model with respect to first-return times.

The paper is organized as follows. In section 2, I set up two different memory-based walker models. Section 3 presents the simulation results. In section 4, I discuss my findings.

2. Methods

2.1. Brownian model

In the Brownian model, the agent located at coordinates (x, y) chooses one of the four neighboring cells, (x − 1, y), (x + 1, y), (x, y − 1), or (x, y + 1), with equal probability to update its position. In this paper, I developed two further models (the simple return model and the tuned return model). In both models, the walker follows a Brownian-like pattern at the beginning of each trial.

2.2. Space and agents

Each trial was run for 1000 time-steps. I assumed that a single agent (walker) moved on a 2D square lattice, starting at coordinates (0, 0). The field size was defined as 500 × 500 cells. If the agent reached the edge of the field, the trial finished even if the 1000 time-steps were not completed (however, such an event never occurred in my simulations). For movement, the agent located at coordinates (x, y) must choose one of the four neighboring cells, (x − 1, y), (x + 1, y), (x, y − 1), or (x, y + 1), and update its position at each time-step.
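For concreteness, this Brownian baseline can be sketched in Python. The paper provides no code, so the function name brownian_trial and the assumption that the 500 × 500 field is centred on the start cell (0, 0) are mine:

    import random

    STEPS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # moves to the four neighboring cells
    HALF = 250  # assumed: the 500 x 500 field is centred on the start cell

    def brownian_trial(n_steps=1000, seed=None):
        # One trial of the Brownian model: an unbiased walk on the square lattice.
        rng = random.Random(seed)
        x, y = 0, 0
        path = [(x, y)]
        for _ in range(n_steps):
            dx, dy = rng.choice(STEPS)  # each of the four moves has probability 1/4
            x, y = x + dx, y + dy
            path.append((x, y))
            if abs(x) >= HALF or abs(y) >= HALF:
                break                   # the trial ends early at the field edge
        return path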

2.3. Simple return model

In this model, the agent located at coordinates (x, y) first follows a Brownian-like walk pattern; it chooses one of the four neighboring cells, (x − 1, y), (x + 1, y), (x, y − 1), or (x, y + 1), with equal probability to update its position. The agent remembers recently visited cells; however, a memorized cell is removed from its memory memory_th time-steps after the agent last passed that cell. Whenever the agent passes or returns to a certain cell, the timer memory_th for that cell is refreshed. These memorized cells compel the agent to move back in the direction opposite to its earlier heading. Specifically, if the agent passed a cell and returns to that position before the cell is removed from its memory, it deterministically moves in the direction opposite to that previously traveled. For example, an agent revisiting coordinates (x, y) moves in the +x direction and updates its position from (x, y) to (x + 1, y) if it moved in the −x direction at that position on a previous occasion. In contrast, the agent randomly chooses one of the four adjacent cells if it is visiting that cell for the first time or if it has forgotten previously visiting that cell. Figure 1(A) shows a schematic diagram of this process. In one-dimensional walks, this strategy always leads the agent to turn back the way it has come, so that it finally returns to the start point. Conversely, in 2D walks, this strategy does not necessarily lead the agent to retrace its steps. The agent therefore subjectively treats local information (a direction) as global information for returning to the start point.
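A minimal Python sketch of one trial of the simple return model, under the assumption that the memory stores, for each cell, the move most recently taken there together with the time of that visit (all identifiers are mine):

    import random

    STEPS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    MEMORY_TH = 20  # memory duration (table 1)

    def simple_return_trial(n_steps=1000, seed=None):
        rng = random.Random(seed)
        x, y = 0, 0
        memory = {}  # cell -> (last outgoing move, time of that visit)
        path = [(x, y)]
        for t in range(n_steps):
            entry = memory.get((x, y))
            if entry is not None and t - entry[1] <= MEMORY_TH:
                (mx, my), _ = entry
                dx, dy = -mx, -my           # reverse the remembered direction
            else:
                dx, dy = rng.choice(STEPS)  # first visit, or the memory has expired
            memory[(x, y)] = ((dx, dy), t)  # re-memorizing refreshes memory_th
            x, y = x + dx, y + dy
            path.append((x, y))
        return path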


Figure 1. Schematic flow diagram of (A) the simple return model and (B) the tuned return model.


2.4. Tuned return model

In this model, the agent basically follows the same algorithm as the agent in the simple return model. However, the position update of the agent is modified as follows using the parameter r (figure 1(B)).

If the agent revisits a certain cell, it moves in the direction opposite to that which it previously traveled with probability r; with probability 1 − r, it moves in the same direction as that which it previously traveled. Initially, r is set to 1.00. Subsequently, r is updated using the stream of movement (TSM), as follows. The agent scans TSM using its current and previous coordinates whenever TSM for a direction (TSMx or TSMy) is not fixed (both TSMx and TSMy are initially non-fixed). For example, if the agent at coordinates (x, y) updates its position to (x + 1, y), then the x-direction of TSM (i.e., TSMx) is fixed to +1, and count is changed from 0 to 1. Here, the parameter count indicates the degree of TSM, i.e., the degree of movement in a certain direction. After TSM is fixed for a direction, count is incremented by 1 whenever the agent moves in the same direction as the fixed direction of TSM. However, count is reset to 0 when the agent moves in the direction opposite to the fixed direction of TSM; for example, count is reset to 0 when the agent updates its position from (x, y) to (x, y − 1) while TSMy is +1. Note that both TSMx and TSMy become non-fixed again whenever count is reset to 0. The agent considers the movement flow to have been interrupted if count is reset to 0 after exceeding the threshold parameter move_th. When this happens, r is redrawn uniformly at random from the interval [0.00, 1.00], and the new value is used until the same event occurs again. Thus, in the tuned return model, the agent tends to doubt/modify the directional rule in its memory when that rule disturbs the recent flow of its movement. Figures 2 and S1 show schematic illustrations of TSM and the calculation of r, respectively.
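A hedged Python sketch of one trial of the tuned return model, treating TSMx and TSMy as None while non-fixed; the exact handling of fixing an axis on the first move along it is my reading of the text, and all identifiers are mine:

    import random

    STEPS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    MEMORY_TH, MOVE_TH = 20, 10  # default values (table 1)

    def tuned_return_trial(n_steps=1000, seed=None):
        rng = random.Random(seed)
        x, y = 0, 0
        memory = {}            # cell -> (last outgoing move, time of that visit)
        tsm = [None, None]     # TSMx, TSMy: None while non-fixed, otherwise -1 or +1
        count, r = 0, 1.00     # r is initially 1.00
        for t in range(n_steps):
            entry = memory.get((x, y))
            if entry is not None and t - entry[1] <= MEMORY_TH:
                (mx, my), _ = entry
                if rng.random() < r:
                    dx, dy = -mx, -my   # opposite direction with probability r
                else:
                    dx, dy = mx, my     # same direction with probability 1 - r
            else:
                dx, dy = rng.choice(STEPS)
            memory[(x, y)] = ((dx, dy), t)
            # --- update the stream of movement (TSM) ---
            axis, step = (0, dx) if dx != 0 else (1, dy)
            if tsm[axis] is None:
                tsm[axis] = step        # fix TSM for this axis; count 0 -> 1 etc.
                count += 1
            elif step == tsm[axis]:
                count += 1              # the flow continues
            else:                       # the flow is interrupted
                if count > MOVE_TH:
                    r = rng.uniform(0.0, 1.0)  # redraw r uniformly from [0, 1]
                count = 0
                tsm = [None, None]      # both axes become non-fixed again
            x, y = x + dx, y + dy
        return r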


Figure 2. A schematic illustration of the stream of movement (TSM) in the tuned return model. Cells are represented by squares. The black square represents the current cell (at time t). The grey square represents the cell occupied by the agent at time t − 1. The case for the x-direction of TSM (TSMx) is shown. TSMx can be fixed to −1 or +1 while it is non-fixed, according to the current and previous coordinates. Conversely, TSMx can be reset to non-fixed after it has been set to −1 or +1, according to the current and previous coordinates.


2.5. Parameters

All parameters for the models are shown in table 1. Later, the impact of changing the parameters is discussed.

Table 1. Parameters used in the models.

Parameter   | Value     | Description
memory_th   | 20        | Threshold value for the memory duration
move_th     | 10        | Threshold value for the stream of movement
time length | 1000      | Time length of one trial
N of trials | 1000      | Number of trials conducted for measurement
field size  | 500 × 500 | Size of the simulation field (cells)

3. Results

Figure 3 illustrates example agent trajectories in the tuned return model and the simple return model. In these examples, the agent in both models tends to retrace its trajectory while expanding the search area. To investigate the dynamics of movement, the diffusiveness of the walks was analyzed; this property is useful for analyzing search efficiency. In random walk analysis, the mean-squared displacement (MSD) and the time-step t are related as follows [36]:

MSD ∼ t^(2H).

Here, the value of the exponent H depends on the model (H > 0.5 for a Lévy walk (super-diffusion); H = 0.5 for a Brownian walk (normal diffusion); H < 0.5 for sub-diffusive movement). Figure 4 shows the MSD against time-steps for both models (MSD was computed every 100 time-steps). For the tuned return model and the simple return model, the fitted exponents were H ≈ 0.48 and H ≈ 0.50, respectively. These results indicate that both walks show standard diffusion (R² = 0.99 for both models).
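Assuming the MSD is averaged over trials and measured from the start position, H can be estimated from the slope of log MSD against log t; the paper does not specify its fitting procedure, so the following is only a plausible sketch:

    import numpy as np

    def hurst_exponent(trajectories, lag_step=100, n_steps=1000):
        # Fit H in MSD ~ t^(2H): the log-log slope equals 2H.
        lags = np.arange(lag_step, n_steps + 1, lag_step)  # MSD every 100 time-steps
        xy = np.array(trajectories, dtype=float)           # shape: (trials, n_steps + 1, 2)
        disp = xy[:, lags, :] - xy[:, :1, :]               # displacement from the start
        msd = (disp ** 2).sum(axis=2).mean(axis=0)         # ensemble-averaged MSD
        slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
        return slope / 2.0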


Figure 3. An example of an agent trajectory obtained from one trial. (A) Trajectory in the tuned return model. (B) Trajectory in the simple return model. memory_th = 20. move_th = 10.


Figure 4. The diffusiveness of (A) the tuned return model and (B) the simple return model. Fitted lines correspond to the slopes. memory_th = 20, move_th = 10.


As a further investigation, the first-return times were calculated. The first-return time τ is defined as the time interval until the agent returns to the start position for the first time, and it is expressed as follows:

τ = t − s, with (x_t, y_t) = (x_s, y_s) = (0, 0),

where t, s ∈ ℕ denote times with t > s, (x_t, y_t) represents the coordinates of the agent at time t, and (x_s, y_s) represents the coordinates of the agent at time s, which satisfy (x_s, y_s) = (0, 0), i.e., the start position. Figure S2 shows an example of the calculation of the return time, where τ = 4.
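A short sketch of how the set of intervals τ can be collected from one trajectory, with each visit to the origin serving as the s of the next interval (the helper name is mine):

    def first_return_times(path):
        # Collect tau = t - s for successive visits to the start cell (0, 0).
        taus, s = [], 0              # s: time of the most recent visit to the origin
        for t in range(1, len(path)):
            if path[t] == (0, 0):
                taus.append(t - s)   # first return after time s
                s = t
        return taus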

Figures 5(A) and (B) show the relationship between the time interval τ and its cumulative frequency for both models. The results suggest that power-law-tailed distributions were achieved over some range in both models (figure 5(A), tuned return model: n (data from one trial) = 1386, μ = 1.67, weight of power law against exponential law = 1.00, goodness of fit (GOF) G² = 0.61, df = 1, p = 0.44; figure 5(B), simple return model: n (data from one trial) = 1595, μ = 1.59, weight of power law against exponential law = 1.00, GOF G² = 3.48, df = 1, p = 0.06). Although both models produced power-law-like distributions of first-return times, the slope of the tuned return model was steeper than that of the simple return model. Therefore, the agent in the tuned return model returns to the origin more frequently. I therefore focused on the mean return time 〈τ〉. Indeed, the mean return time of the tuned return model was found to be smaller than that of the simple return model, suggesting that the former model produces an effective home range with standard diffusion (tuned return mean time interval ± SD = 35.02 ± 111.63; simple return mean time interval ± SD = 58.71 ± 140.48; Mann–Whitney U-test, p < 10^−15). The tuned return model also outperformed the perfectly random walk, i.e., the Brownian model, with respect to first-return time (tuned return mean time interval ± SD = 35.02 ± 111.63; Brownian model mean time interval ± SD = 54.33 ± 137.14; Mann–Whitney U-test, p < 10^−15). In fact, although the Brownian model produced a weak power-law-like distribution of first-return times, the slope of the tuned return model was steeper than that of the Brownian model (figures 5(C) and (D)) (figure 5(C), Brownian model: n (data from one trial) = 1274, μ = 1.54, weight of power law against exponential law = 1.00, GOF G² = 5.84, df = 1, p < 0.05).
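The exponent μ and the weight of the power law against the exponential law reported above can be reproduced in spirit with a maximum-likelihood fit and Akaike weights; the paper does not state its exact estimator or the lower cut-off τ_min, so the continuous-tail version below is only a plausible reconstruction:

    import numpy as np

    def tail_fit(taus, tau_min=1.0):
        # MLE exponent mu of p(tau) ~ tau^(-mu), plus the Akaike weight of the
        # power law against a shifted-exponential alternative (k = 1 parameter each).
        taus = np.asarray([t for t in taus if t >= tau_min], dtype=float)
        n = len(taus)
        log_ratio = np.log(taus / tau_min).sum()
        mu = 1.0 + n / log_ratio                                  # Hill estimator
        ll_pow = n * np.log((mu - 1.0) / tau_min) - mu * log_ratio
        lam = 1.0 / (taus.mean() - tau_min)                       # exponential MLE
        ll_exp = n * np.log(lam) - lam * (taus - tau_min).sum()
        aic = np.array([2.0 - 2.0 * ll_pow, 2.0 - 2.0 * ll_exp])
        w = np.exp(-(aic - aic.min()) / 2.0)
        return mu, (w / w.sum())[0]  # exponent and power-law Akaike weight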


Figure 5. The relationship between the return time interval τ and cumulative frequency in the tuned return model, the simple return model, and the Brownian model. (A) Data from the tuned return model (black dots) and the best-fit power-law distribution (solid line). (B) Data from the simple return model (light grey dots) and the best-fit power-law distribution (solid line). (C) Data from the Brownian model (grey dots) and the best-fit power-law distribution (solid line). (D) Data from all models plotted together. memory_th = 20. move_th = 10.


In the simple return model, the agent moves in the direction opposite to that previously traveled when it returns to visited cells. This strategy gives the agent the opportunity to retrace its trajectory to some extent. However, it is not obvious that moving in the exactly opposite direction is what leads the agent back to the start position effectively. Therefore, I investigated what would happen if the agent instead moved in a perpendicular ('vertical') direction when returning to visited cells. In the vertical version of the simple return model, the agent moves in a direction perpendicular to that previously traveled when returning to a visited cell; it randomly chooses one of the two nearest perpendicular cells, as in the sketch below. As shown in figure S3, this modification does not contribute to reducing the return time. Thus, I confirmed that moving in the parallel-opposite direction to that previously traveled may lead the agent to return to the start position effectively.
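The only change in this vertical variant is the move chosen at a remembered cell; a hypothetical helper (my naming) makes the rule explicit:

    def vertical_move(prev_move, rng):
        # Choose one of the two cells perpendicular to the remembered move,
        # e.g. prev_move = (1, 0) yields (0, 1) or (0, -1).
        dx, dy = prev_move
        return rng.choice([(dy, dx), (-dy, -dx)])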

Finally, the effects of the parameters were assessed. As shown in figure 6(A), which illustrates the mean first-return time when the parameter memory_th is varied (from the default of 20 to 0, 40, 100, or 200), the tendency of each model does not depend on the value of memory_th. However, the agent takes more time to return to the start position when memory_th = 0 because it does not remember any cells and therefore performs a random walk. More interestingly, the tuned return model with the default move_th (= 10) outperformed the models with move_th = 0, 60, or 100 (figure 6(B); tuned return (move_th = 10) mean time interval ± SD = 35.02 ± 111.63 versus the following: tuned return (move_th = 0) mean time interval ± SD = 52.75 ± 142.77, Mann–Whitney U-test, p < 0.05; tuned return (move_th = 60) mean time interval ± SD = 57.05 ± 129.38, Mann–Whitney U-test, p < 10^−15; tuned return (move_th = 100) mean time interval ± SD = 56.52 ± 139.89, Mann–Whitney U-test, p < 10^−15). This perhaps occurred because the agent rarely has an opportunity to replace the parameter r when move_th is large. However, this does not imply that frequent replacement of r shortens return times. Overall, these results indicate that a large memory capacity is not necessary for the agent to return to the start point effectively.


Figure 6. The mean return times following variation of the model parameters (A) memory_th and (B) move_th. move_th = 10 in A for the tuned return model. memory_th = 20 in B.


4. Discussion

Here, I developed an agent-based model in which an artificial agent walked constantly and decided the direction in which it traveled. The walker remembered previously visited positions for a period and recalled the direction in which it previously moved when it revisited these positions. The walker selected a direction at each time-step, and its selection depended on whether it was revisiting its current position. In short, the walker tended to move in the direction opposite to that which it previously traveled. This strategy enables the walker to revisit the start position repeatedly if it walks on a one-dimensional field. However, on a 2D field, the walker does not necessarily revisit the start position repeatedly, because directional information, which is local information, does not serve as global information. To overcome this problem, a modified (tuned return) model was developed in which the walker coordinates its directional information when it revisits a cell and recalls that information. With this modification, a model that is adaptive with respect to return times was successfully produced. Importantly, the tuned return model combines the recurrent behavior of a walker with standard diffusive movement.

When animals return to previously visited positions, they show recursive movements; such actions can represent home-range behaviors [16–21]. Animals face a dilemma when they perform exploratory behavior, since they should also achieve stable recursive walks [21]. Here, I focused on the ambiguity or instability of a walker's memorized information: the directional rule. Recalling directional information provides the walker with temporal information, but the walker cannot estimate its entire movement process from it. This holds true for several animals with a relatively low memory capacity, including insects [27, 37, 38]. For example, ants turn their bodies to match their homing direction to memorized visual information and update their position sequentially [39]. However, they seem to ignore the memorized information and become confused in uncertain situations [40]. In the present study, the walker in the simple return model sequentially used a single rule for homing. Conversely, the walker in the tuned return model considered its recent experience, i.e., the flow of movement. Thus, the latter walker modified its rule for homing when that rule disturbed its flow in a certain direction. This event means the walker is uncertain of the current rule (memory) [41, 42].

The models proposed here demonstrate normal diffusive movements; however, some animals exhibit super-diffusive movements such as Lévy flights or walks. In future research, the emergence of Lévy walks in a memory-based model could be investigated, since the first-passage times of the stochastic processes underlying Lévy walks have also been well studied [3, 4, 43]. Further research could also focus on the moments of the first-return time [3].

Acknowledgments

I have no conflict of interest to declare.
