Comparison study of sampling methods for computer experiments using various performance measures

Research paper by Inyong Cho, Yongbin Lee, Dongheum Ryu, Dong-Hoon Choi

Indexed on: 30 May '16. Published on: 30 May '16.
Published in: Structural and Multidisciplinary Optimization: journal of the International Society for Structural and Multidisciplinary Optimization



Abstract

This study compares popular sampling methods for computer experiments using a variety of performance measures. It is well known that the sample points a sampling method places in the design space determine the quality of the meta-model built from expensive computer-experiment (or simulation) results obtained at those sample (or training) points. It is therefore very important to locate the sample points with a sampling method suited to the system of interest. However, there is still no clear guideline for selecting an appropriate sampling method for computer experiments. The optimal Latin hypercube design (OLHD) has been widely used, and quasi-random sequences and the centroidal Voronoi tessellation (CVT) have recently begun to attract attention. Some literature on the CVT has asserted that it outperforms the LHD, but this assertion seems unfair because those studies employed only space-filling performance measures, which favor the CVT. In this research, we compared popular sampling methods for computer experiments (CVT, OLHD, and three quasi-random sequences), employing both space-filling properties and a projective property as performance measures, so that the comparison is fair. We also compared the root mean square error (RMSE) values of Kriging meta-models generated with the five sampling methods to evaluate their prediction performance. Based on the comparison results, we provide a guideline for selecting an appropriate sampling method for a given system of interest.
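To illustrate the kind of comparison the abstract describes, the sketch below draws sample points with a Latin hypercube design and a Sobol quasi-random sequence, then scores each set with a maximin space-filling measure (the minimum pairwise distance; larger is better). This is only a minimal illustration of the idea, not the paper's actual experimental setup: it assumes SciPy's `scipy.stats.qmc` samplers (available in SciPy 1.7+), uses a plain Latin hypercube rather than an optimized OLHD, and omits the CVT and the projective measure.

```python
# Minimal sketch: compare two sampling methods on a space-filling measure.
# Assumes SciPy >= 1.7 for the scipy.stats.qmc quasi-Monte Carlo samplers.
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

def maximin(points):
    """Minimum pairwise Euclidean distance of a point set.

    A larger value means the points are more evenly spread over the
    design space (better space-filling).
    """
    return pdist(points).min()

n, d = 64, 2  # 64 sample points in a 2-D unit hypercube (n is a power
              # of 2, as the Sobol sampler prefers)

# Latin hypercube design (unoptimized; an OLHD would further optimize
# the point locations under a criterion such as maximin distance).
lhs = qmc.LatinHypercube(d=d, seed=0).random(n)

# Scrambled Sobol quasi-random sequence.
sobol = qmc.Sobol(d=d, scramble=True, seed=0).random(n)

print(f"LHS   maximin distance: {maximin(lhs):.4f}")
print(f"Sobol maximin distance: {maximin(sobol):.4f}")
```

A full study in the spirit of the paper would add a projective measure (e.g., uniformity of the 1-D projections of the points) and the RMSE of a Kriging meta-model fitted at the sample points, since space-filling scores alone can favor one method unfairly.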