Convergence rates for constrained regression splines

Research paper by Mary C. Meyer, Soo-Young Kim, and Haonan Wang

Indexed on: 24 Nov '17 · Published on: 25 Sep '17 · Published in: Journal of Statistical Planning and Inference


Publication date: Available online 22 September 2017

Convergence rates of regression spline estimators have been established within a general statistical modeling framework. It is well known that qth-order regression splines attain optimal rates under mild assumptions. Increasing the number of knots improves the approximation error rate but worsens the estimation error rate; the optimal rate is attained by balancing the two.

For splines constrained to be monotone or convex, it is straightforward to show that the constrained estimator attains the optimal rate if the approximation in the spline space also satisfies the constraints. If the monotonicity or convexity of the true regression function holds strictly, then the spline approximation satisfies the constraints for a sufficiently fine knot mesh. However, if there are intervals over which the constraints do not hold strictly, there is no guarantee that the approximation satisfies the constraints, even for large numbers of finely spaced knots, and therefore convergence rates of constrained regression splines had not been fully established. In this paper, we show that when the true function satisfies the constraints, there is a sufficiently close function in the spline space that also satisfies the constraints, and hence the constrained spline estimator attains the optimal rate of convergence.
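As an illustrative sketch of the kind of constrained spline estimator the abstract discusses (not the authors' construction, which concerns general qth-order splines), the following fits a monotone nondecreasing piecewise-linear regression spline. It uses a first-order I-spline-style basis, whose elements rise from 0 to 1 between consecutive knots, so any nonnegative combination plus an intercept is monotone; the constrained least-squares fit is then a nonnegative least-squares problem, solved here with `scipy.optimize.nnls`. All function and variable names are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.optimize import nnls


def ispline_basis(x, knots):
    """First-order I-spline-like basis: column j rises linearly from 0 to 1
    over [knots[j], knots[j+1]] and is constant outside that interval."""
    return np.column_stack([
        np.clip((x - knots[j]) / (knots[j + 1] - knots[j]), 0.0, 1.0)
        for j in range(len(knots) - 1)
    ])


def fit_monotone_spline(x, y, knots):
    """Monotone regression spline via nonnegative least squares.

    Nonnegative slopes on a monotone basis guarantee a nondecreasing fit.
    The intercept is made sign-free by including both +1 and -1 columns,
    since nnls constrains every coefficient to be >= 0.
    """
    B = ispline_basis(x, knots)
    A = np.column_stack([np.ones_like(x), -np.ones_like(x), B])
    coef, _ = nnls(A, y)
    intercept = coef[0] - coef[1]
    slopes = coef[2:]  # all >= 0 by construction

    def predict(xnew):
        return intercept + ispline_basis(xnew, knots) @ slopes

    return predict


# Simulated example: strictly increasing truth plus noise.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sqrt(x) + rng.normal(0.0, 0.1, 200)
knots = np.linspace(0.0, 1.0, 9)

fhat = fit_monotone_spline(x, y, knots)
grid = np.linspace(0.0, 1.0, 50)
fit = fhat(grid)
# The fitted curve is nondecreasing regardless of the noise in y.
```

Note that, as the abstract points out, the delicate case for the theory is when monotonicity does not hold strictly (e.g. the truth has flat stretches): the unconstrained spline approximation may then violate the constraints at every mesh size, which is the gap the paper closes.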