I have snail dispersal on my mind, so I have run a simple simulation of the dispersal of a group of snails released at one spot, using the following assumptions:
1. Every snail survives and disperses during the dispersal period.
2. The net dispersal distance for each snail is the distance between the origin (the release point) and the snail's location at the end of the dispersal period. The net dispersal distances are normally distributed. I generated the distribution data at www.wessa.net/rwasp_rngnorm.wasp.
3. The dispersal angles are randomly selected integers between 1 and 360 degrees, drawn with replacement (so the same angle can occur more than once). I generated the dispersal angles at www.randomizer.org/form.htm.
The plot below shows one simulation—the only one I have done so far—where I set the number of snails=100, mean dispersal distance=10, standard deviation=3.
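The setup above can be sketched in a few lines of Python. This is just a stand-in for the web tools named in the assumptions (wessa.net for the normal distances, randomizer.org for the angles), using the same parameters: 100 snails, mean dispersal distance 10, standard deviation 3.

```python
import math
import random

random.seed(1)  # any seed works; fixed here only for reproducibility

N_SNAILS = 100     # number of snails released
MEAN_DIST = 10.0   # mean net dispersal distance
SD_DIST = 3.0      # standard deviation of net dispersal distance

# Assumption 2: net dispersal distances are normally distributed.
distances = [random.gauss(MEAN_DIST, SD_DIST) for _ in range(N_SNAILS)]

# Assumption 3: angles are random integers from 1 to 360, with replacement.
angles = [random.randint(1, 360) for _ in range(N_SNAILS)]

# Convert each (distance, angle) pair to x, y coordinates around the origin,
# giving the scatter of end-of-period snail locations shown in the plot.
points = [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
          for d, a in zip(distances, angles)]
```

Plotting `points` with any scatter-plot tool reproduces the kind of figure shown above.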
Each data point could represent either one snail or one colony. The maximum dispersal rate of these snails over a given period, say a year, is represented in this plot by the points furthest from the origin. In this simulation, the snail furthest from the origin was 2.76 standard deviation units away from it.
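As a rough sketch, taking "standard deviation units" to mean the z-score (distance minus the mean, divided by the SD), the furthest snail can be picked out like this. The distances here are freshly simulated, so the exact value will differ from run to run (2.76 was the value in my one simulation):

```python
import random

random.seed(1)  # fixed only for reproducibility
MEAN_DIST, SD_DIST = 10.0, 3.0

# Simulated net dispersal distances for 100 snails (assumption 2).
distances = [random.gauss(MEAN_DIST, SD_DIST) for _ in range(100)]

# The furthest snail, expressed in standard deviation units (a z-score).
max_z = (max(distances) - MEAN_DIST) / SD_DIST
```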
The individual colonies are likely to have variable survival rates, and those furthest from the origin may be less likely to survive than the rest. (You can think of some reasons why that may be so.) As a result, the boundaries of a snail's range at a given instant, represented by the outlying colonies, will fluctuate over time. There may therefore be an "effective" dispersal rate that is slightly less than the maximum dispersal rate. Perhaps we could equate the effective dispersal rate to the average dispersal distance of the points lying between a couple of arbitrary standard deviation cutoffs, say, 2 and 2.5.
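A quick sketch of that idea, assuming "between 2 and 2.5 standard deviation units" means z-scores in the band [2, 2.5]. Note that with only 100 snails the band may well be empty on a given run, since so few draws land that far out:

```python
import random

random.seed(1)  # fixed only for reproducibility
MEAN_DIST, SD_DIST = 10.0, 3.0

# Simulated net dispersal distances for 100 snails (assumption 2).
distances = [random.gauss(MEAN_DIST, SD_DIST) for _ in range(100)]

# Keep the points whose z-scores fall inside the two arbitrary cutoffs;
# their mean distance is one candidate "effective" dispersal rate.
LOW, HIGH = 2.0, 2.5
band = [d for d in distances if LOW <= (d - MEAN_DIST) / SD_DIST <= HIGH]
effective_rate = sum(band) / len(band) if band else None
```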