Indexed on: 01 Jan '98 · Published on: 01 Jan '98 · Published in: Probability Theory and Related Fields
We present and analyze a new speed-up technique for Monte Carlo optimization: the Iterated Energy Transformation algorithm, in which the Metropolis algorithm is applied repeatedly with more and more favourable energy functions derived from the original one by increasing transformations. We show that this method allows a better speed-up than Simulated Annealing when convergence speed is measured by the probability of failure of the algorithm after a large number of iterations. We also study the limit of a large state space in the special case when the energy is a sum of independent terms. We show that the convergence time of the I.E.T. algorithm is polynomial in the size (number of coordinates) of the problem, but with a worse exponent than for Simulated Annealing. This indicates that the I.E.T. algorithm is well suited to moderate-size problems but not to very large ones. The independent-component case is a good model for the end of many optimization processes, when, at low temperature, a neighbourhood of a local minimum is explored by small, widely separated modifications of the current solution. We show that in this case both global optimization methods, Simulated Annealing and the I.E.T. algorithm, are less efficient than repeated local stochastic optimizations. Using the general concept of a “slow stochastic optimization algorithm”, we show that any “slow” global optimization scheme should be followed by a local one to perform the final approach to a minimum.
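To make the scheme concrete, here is a minimal Python sketch of the idea of running Metropolis repeatedly under increasing transformations of the energy. It is an illustration under simplifying assumptions, not the paper's construction: the toy state space (integers 0..20 with a quadratic energy), the particular transformations, and all parameter values are invented for the example. Each transformation `F` is increasing, so `F∘U` keeps the same minimizers as `U` while reshaping the landscape; the linear transformations used here act like a cooling schedule, whereas the paper allows more general increasing maps.

```python
import math
import random

def metropolis(energy, state, neighbors, beta, n_steps, rng):
    """Standard Metropolis chain at inverse temperature beta."""
    e = energy(state)
    for _ in range(n_steps):
        cand = rng.choice(neighbors(state))
        e_cand = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_cand <= e or rng.random() < math.exp(-beta * (e_cand - e)):
            state, e = cand, e_cand
    return state

def iet(energy, init, neighbors, transforms, beta, steps_per_stage, seed=0):
    """Iterated-energy-transformation sketch: run Metropolis stage by stage,
    each stage minimizing F(U(x)) for the next increasing transformation F."""
    rng = random.Random(seed)
    state = init
    for F in transforms:
        # F increasing => F(U) has the same global minimizers as U,
        # but later stages penalize uphill moves more and more severely.
        state = metropolis(lambda x: F(energy(x)), state, neighbors,
                           beta, steps_per_stage, rng)
    return state

# Toy problem: minimize U(x) = (x - 13)^2 on {0, ..., 20},
# moving to an adjacent integer at each step.
def U(x):
    return (x - 13) ** 2

def nbrs(x):
    return [max(0, x - 1), min(20, x + 1)]

# Illustrative increasing transformations (steeper and steeper).
stages = [lambda u: u, lambda u: 3 * u, lambda u: 10 * u]
```
A typical run, e.g. `iet(U, 0, nbrs, stages, beta=1.0, steps_per_stage=500)`, ends at or next to the minimizer `x = 13`: early stages let the chain wander across the landscape, while the final, steepest stage pins it near the minimum.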