Method to Measure Planar Displacement Using Centroid Calculation

This paper presents a new sub-pixel resolution approach to measuring the displacements of a planar motion stage using a vision sensor that images an actively controlled target generated on a planar display such as an LCD screen. In this paper we examine the resolution with which we can detect a target image defined by the intersection of two curves. The points that generate each curve are identified using an intensity weighting of the pixels that define the curve. Once the data points are collected, two curves are fit to the data using a least-squares approximation. Finally, the intersection of the two curves provides a robust and reliable point location that can be tracked for position feedback in manufacturing equipment. Experimental results show that although the display which generates the target image has a pixel size greater than 200 μm, this procedure can reliably detect stage motions as small as 5 μm.

INTRODUCTION

Computer Numerical Control (CNC) equipment is widely used in mass production to enable high-volume manufacturing with high accuracy. One of the determining factors for the resolution and accuracy of such equipment is the feedback sensing system used for axis positioning, typically linear or rotary encoders. However, in multi-axis motion systems, the feedback devices do not directly sense the position of the control point. Instead, the spatial position of the control point is estimated using the outputs of the position feedback sensors and a kinematic model of the machine. Inevitably, the kinematic model does not exactly describe the actual machine due to imperfect straightness of the axis guideways, non-squareness of their motion directions, and thermal variations over time, resulting in positioning errors. These errors have traditionally been compensated by applying inverted error maps to the axis commands. This is a complex and expensive approach, and it suffers from the fact that the error map is static and cannot compensate time-dependent effects. Alternative approaches suggest the use of other sensors, such as the system introduced in [Fan, Wang et al.], which uses a 3D laser ball bar to measure the error of multi-axis machines.

In [Wong, Montes et al.], a new approach to multi-axis position feedback is presented, whereby a vision feedback system is implemented to directly sense the tool position rather than estimating it through the traditional kinematic model. This system utilizes a ground-based camera trained on an active pixel display fixed to the stage being controlled. Target images on the active display are generated and then acquired by the camera, and the difference between the desired and actual target position on the camera image plane is used to generate the error vector for the drive commands (see Figure 1).

FIGURE 1. 2-AXIS STAGE CONTROL THROUGH FIXED CAMERA ACQUIRING PIXEL ARRAY IMAGE.

In this work, the achievable resolution of the described system is investigated when using a target image consisting of two intersecting curves displayed in pure black/white form, i.e. no grayscale or color modulation of the target image is considered. The camera images this target, analytic functions of the proper form are best-fit to the pixel data using a least-squares technique, and the intersection of these functions is defined as the target location.
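The pipeline outlined above has three computational steps: intensity-weighted extraction of sub-pixel points along each curve, a least-squares fit of an analytic function to each point set, and computation of the intersection of the fitted functions. The following Python sketch illustrates these steps under simplifying assumptions not taken from the paper: the two curves are treated as straight lines, each curve is assumed to have already been segmented into its own image, and the input data are synthetic. Function names and parameters are illustrative only.

```python
# Minimal sketch of the centroid / least-squares / intersection pipeline
# described above, assuming straight lines as the two analytic curves and
# pre-segmented, synthetic curve images (illustrative only).
import numpy as np

def weighted_centroids_per_column(img):
    """For each image column, return the intensity-weighted row centroid.

    `img` is a 2-D intensity array containing ONE curve (segmentation of the
    two curves is not shown). Columns with no signal are skipped.
    """
    cols, rows = [], []
    row_idx = np.arange(img.shape[0])
    for c in range(img.shape[1]):
        w = img[:, c].astype(float)
        if w.sum() > 0:
            cols.append(c)
            rows.append(np.sum(w * row_idx) / w.sum())  # sub-pixel row estimate
    return np.array(cols), np.array(rows)

def fit_line(x, y):
    """Least-squares fit of y = m*x + b; returns (m, b)."""
    A = np.column_stack([x, np.ones_like(x, dtype=float)])
    (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return m, b

def line_intersection(m1, b1, m2, b2):
    """Intersection of y = m1*x + b1 and y = m2*x + b2 (assumed non-parallel)."""
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

if __name__ == "__main__":
    # Synthetic stand-ins for the two segmented curve images.
    h, w = 100, 120
    curve1 = np.zeros((h, w))
    curve2 = np.zeros((h, w))
    for c in range(w):
        curve1[int(0.3 * c + 20), c] = 255    # line with slope ~ +0.3
        curve2[int(-0.5 * c + 80), c] = 255   # line with slope ~ -0.5
    x1, y1 = weighted_centroids_per_column(curve1)
    x2, y2 = weighted_centroids_per_column(curve2)
    m1, b1 = fit_line(x1, y1)
    m2, b2 = fit_line(x2, y2)
    print("target location (sub-pixel):", line_intersection(m1, b1, m2, b2))
```

Because the fit pools many intensity-weighted points along each curve, the intersection estimate can resolve motions well below the pitch of either the display or camera pixels, which is the effect quantified experimentally in the remainder of the paper.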