
Increasing the Visibility for Observing Micro Objects with the Variable View Imaging System

Pages 71-91 | Published online: 13 Mar 2012

Abstract

Sufficient visual information is vitally needed for microassembly and micromanipulation. However, most conventional vision systems with fixed optical parameters cannot meet the requirements for observing micro parts with complex shapes: a small field of view (FOV), occlusion, and low depth resolution are the main issues to solve. This article presents a view planning method to increase the visibility for observation of micro objects, with consideration of resolution and FOV, using a variable view imaging system. The system supplies a flexible view with adjustable view angle and position, thereby reducing occlusion. The best view angle is calculated by maximizing the resolution of the surface of interest under an acceptable occlusion level, and the view position is steered by a visual feedback method that keeps the target inside the FOV. Simulation and experimental results show the feasibility of the proposed approach.

NOMENCLATURE

i_a = area of the surface of interest (SOI) in the image plane

i_ao = occluded area of the SOI in the image plane

D = occlusion detector

f_x, f_y, f_φ, f_γ = forward kinematic functions of the joint space

f = image feature position (Δf = f_d − f is the image error)

J = system Jacobian

J_im = image Jacobian

J_s = joint Jacobian

K_x, K_y = scaling factors from pixel coordinates to image coordinates

L_o = visibility level

L_ac = acceptable occlusion level

M = magnification

N = number of subareas

N_s = normal of the surface of interest

N_v = view direction vector

O_c = target center

O_d = desired position

P_i(u, v) = image coordinates of the ith corner point on the target

Θ̇_s = control input (angular velocity screw) of the scanning mirror

r_p = resolution in the image plane

r_o = resolution in the object plane

u, v = image coordinates

U_rep = repulsive potential field

U_att = attractive potential field

x_w, y_w = view position

X_wY_wZ_w = world coordinate frame

X_vY_vZ_v = local coordinate frame of the view

γ, φ = azimuth and zenith angles of the view

(θ_sx, θ_sy, θ_p1, θ_p2) = angles of the scanning mirror and wedge prisms

1. INTRODUCTION

The conventional microscope cannot meet the growing demand for observing dynamic targets in three-dimensional (3-D) space in microassembly and micromanipulation applications. Its fixed view direction cannot supply sufficient visual information because of the small field of view (FOV), occlusion, and low depth resolution. In the macro world, an active vision system, such as a camera installed on a manipulator, can change camera parameters such as spatial position and orientation according to the task (Mezouar and Chaumette 2002). However, mounting a microscope on a manipulator is not practical owing to the complexity of the microscope. Instead of moving the vision system, moving stages are often applied to compensate for the small FOV when observing moving objects. Ogawa and colleagues (2005) proposed a high-speed visual tracking system with a two-axis moving stage that can lock a motile cell in the FOV, but it may agitate the specimen.

In order to observe micro objects from different views, multiple fixed microscopes are often applied in microassembly and micromanipulation. Ralis and colleagues (2000) proposed a microassembly system with two cameras: one with low magnification observes a wide field of view for coarse visual servoing, and the other with high magnification detects micro targets in a small field of view at high resolution for fine visual servoing. When dealing with micro objects of three-dimensional shape, more cameras are needed to obtain sufficient visual information. Probst and colleagues (2006) designed a microassembly system with three cameras in different directions, so that visually guided microassembly can be performed without occlusion. However, because the camera positions are fixed, this configuration is only suitable for specific microassembly tasks, and switching between cameras during assembly makes continuous visual tracking difficult. The development of a smart optical system with adjustable optical parameters therefore becomes a promising solution. One example is the Adaptive Scanning Optical Microscope (ASOM) invented by Potsaid and colleagues (Potsaid et al. 2005; Rivera et al. 2010). Through the integration of a scanning mirror and a deformable mirror, the ASOM can steer a sub-field-of-view over a large area. This concept has also been applied to a telescope (Scott et al. 2010).

In our previous research, a variable view imaging system was proposed that can change its optical system parameters (Tao et al. 2008a; Tao and Cho 2009). The system integrates a pair of wedge prisms, a scanning mirror, and a telecentric lens group. The view angle is steered by the double wedge prisms, while the view position is steered by the scanning mirror; the system can therefore supply views with different view angles and positions. A decoupling design for the system with a telecentric lens group, proposed in Tao et al. (2008b), decouples the view angle from the scanning mirror angle and simplifies the operation of the system. Compared to a conventional imaging system, the variable view imaging system can provide the best view without moving the target. This is particularly suitable for micromanipulation applications, where the whole manipulator system is difficult to mount on multi-dimensional stages to provide different views. Because it is an independent system, achieving a desired view does not interfere with the manipulation task. Another advantage is high-speed view steering, owing to low-inertia components such as the scanning mirror and wedge prisms, which speeds up the assembly process. These advantages are also desirable for in-vivo biological imaging, where a conventional stage system agitates the sample and has relatively low dynamic bandwidth (Potsaid et al. 2009). Although the integration of wedge prisms and scanning mirrors achieves new functions, it also introduces aberrations that degrade the quality of the image. Wedge prisms used in converging light induce coma and astigmatism, and the shape and amplitude of the aberrations change with the position of the beam on the prisms. To compensate for those aberrations, a deformable mirror is used to correct the dynamic wavefront error. The deformable mirror control algorithm searches for the best surface shape of the deformable mirror by maximizing the sharpness of the final image (Tao and Cho 2009); it can therefore compensate for all the system aberrations, including those from the objectives and from system misalignment.
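To make this sharpness-based correction concrete, the following minimal sketch hill-climbs the deformable mirror actuator values one channel at a time. The gradient-energy sharpness metric and the `capture_image`/`set_actuators` hardware hooks are illustrative assumptions; the paper only states that the controller maximizes image sharpness.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Image sharpness metric: total squared gradient energy
    (one common choice; the paper does not specify the metric)."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.sum(gx**2 + gy**2))

def correct_aberrations(capture_image, set_actuators, n_channels=37,
                        step=0.05, n_passes=3):
    """Coordinate-wise hill climbing over deformable mirror actuator values.
    `capture_image()` and `set_actuators(v)` are hypothetical hardware hooks."""
    v = np.zeros(n_channels)            # start from a flat mirror shape
    set_actuators(v)
    best = sharpness(capture_image())
    for _ in range(n_passes):
        for ch in range(n_channels):
            for delta in (+step, -step):
                trial = v.copy()
                trial[ch] += delta
                set_actuators(trial)
                s = sharpness(capture_image())
                if s > best:            # keep the perturbation if the image got sharper
                    v, best = trial, s
                else:
                    set_actuators(v)    # revert to the best-known shape
    return v, best
```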

In this article, we introduce a method to obtain the view direction with the best visibility for observing micro objects using the variable view imaging system. The view planning method has two objectives: to achieve the view with the best resolution under an acceptable occlusion level, and to keep the target always inside the field of view (FOV). For the first objective, the occlusion and resolution constraints are discussed first. The occlusion area is calculated from the CAD model of the target by a ray tracing method, from which the occlusion level is obtained. For the resolution constraint, the condition for the best resolution is introduced. Using the artificial potential field method, the best view angle is calculated by maximizing the resolution of the surface of interest (SOI) under an acceptable occlusion level. For the second objective, the view position is steered by a visual feedback technique: the target is kept within the FOV by feedback control of the scanning mirror, which eliminates the need for accurate system calibration.

This article is organized as follows. In Section 2, the operation principle of the system is described. Section 3 presents the kinematics of the system. Section 4 describes the FOV control that keeps the target inside the FOV. Sections 5 and 6 introduce the constraints for the best view angle and the best view calculation. Section 7 describes the experimental setup, and Section 8 presents the simulation and experimental results.

2. OPERATION OF VARIABLE VIEW IMAGING SYSTEM

The view position and angles can be steered by specially designed active optical components. The idea is to design the vision system with active components that mimic the function of a robot. Figure 1 shows the fundamental concept of the system. To steer the view orientation (γ, φ), double wedge prisms are utilized because of their compact size and ability to deflect light. The function of the double wedge prisms is shown in Figure 1(a): they can steer the beam in any direction within a cone, similar to a camera installed on a pan/tilt stage. To steer the view position (x_w, y_w), a telecentric scanner is applied, as shown in Figure 1(b); its function is similar to a camera installed on a translation stage. To steer both the view position (x_w, y_w) and the orientation (γ, φ), these two active optical components are integrated into one system, which can be considered a camera installed on a robot with four degrees of freedom, as shown in Figure 1(c). The view state V, defined in Figure 1(c), includes the view position (x_w, y_w) and the view orientation (γ, φ), where γ and φ are the azimuth and zenith angles of the view, respectively. X_wY_wZ_w is the world coordinate frame, and X_vY_vZ_v is the local coordinate frame of the view.

Figure 1 Concept of the variable view imaging system with active components that mimic the function of a robot: (a) steering the view angles, (b) steering the view position, and (c) steering both the view position and angles (color figure available online).

3. KINEMATICS OF THE SYSTEM

In real operation, the view position and angles must be calculated in real time. The aim of the forward kinematics is to determine the view state given the scanning mirror angles and wedge prism angles. The joint variables are the rotation angles of the prisms and the scanning angles of the mirror. A rigorous analysis of the view position and angle is made by the ray tracing method. The forward kinematics of the system can be defined as

$$\begin{bmatrix} x_w \\ y_w \\ \phi \\ \gamma \end{bmatrix} = \begin{bmatrix} f_x(\theta_{sx}, \theta_{sy}, \theta_{p1}, \theta_{p2}) \\ f_y(\theta_{sx}, \theta_{sy}, \theta_{p1}, \theta_{p2}) \\ f_\phi(\theta_{sx}, \theta_{sy}, \theta_{p1}, \theta_{p2}) \\ f_\gamma(\theta_{sx}, \theta_{sy}, \theta_{p1}, \theta_{p2}) \end{bmatrix}, \tag{1}$$

where (θ_sx, θ_sy, θ_p1, θ_p2) are the angles of the scanning mirror and wedge prisms. The functions f_x, f_y, f_φ, f_γ describe the relationships between the joint variables and the view state. They are obtained by the ray tracing process described in Lee et al. (2006); the details are omitted here for brevity.
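For intuition, the angular part of these kinematics can be sketched with the classic first-order Risley-prism model: each thin prism deviates the beam by a fixed angle toward its own azimuth, and the two deviations add as 2-D vectors. This small-angle model is an illustrative assumption, not the rigorous ray trace used in the paper (which gives, e.g., a maximum zenith angle of 18.9° for two 10° prisms, slightly below the 20° this model predicts).

```python
import numpy as np

def view_angle_risley(theta_p1_deg, theta_p2_deg, delta_deg=10.0):
    """First-order view orientation (phi, gamma) from a Risley prism pair.

    Thin-prism approximation: each prism deviates the beam by delta_deg
    toward its own azimuth; the two deviations add as transverse vectors.
    All angles in degrees.
    """
    d = np.radians(delta_deg)
    a1, a2 = np.radians(theta_p1_deg), np.radians(theta_p2_deg)
    dx = d * (np.cos(a1) + np.cos(a2))      # transverse deflection components
    dy = d * (np.sin(a1) + np.sin(a2))
    phi = np.degrees(np.hypot(dx, dy))      # zenith angle of the view
    gamma = np.degrees(np.arctan2(dy, dx))  # azimuth angle of the view
    return phi, gamma

print(view_angle_risley(0.0, 0.0))    # prisms aligned -> phi ~ 20 deg (first order)
print(view_angle_risley(0.0, 180.0))  # prisms opposed -> phi ~ 0 deg
```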

4. FIELD OF VIEW CONTROL

Because of the small FOV, the micro object can easily leave the FOV. In the proposed system, there are two situations that cause the object to move out of the FOV. The first is that the object itself moves, as shown in Figure 2(a): with a fixed FOV, a moving object easily leaves the field of view. To keep the target within the FOV, the vision system needs a mechanism to move the FOV; in the proposed system, the scanning mirror supplies this function by translating the view in the plane where the objects of interest are located. The second situation is the operation of the wedge prisms, as shown in Figure 2(b): when the view angle changes, the view position also changes.

Figure 2 Two situations for moving out of the FOV: (a) the target moves out of the field of view (FOV); (b) the FOV changes with the rotation of the wedge prisms (color figure available online).

In order to avoid moving the target out of the FOV, the angle of the scanning mirror must be obtained from the position of the target. However, calculating the scanning mirror angles from Equation 1 requires an accurate calibration of the system. Therefore, a visual feedback technique is applied to steer the view onto the target without accurate calibration. In addition, errors between the design model and the real system are compensated by the feedback control loop.

The control diagram of the system is shown in Figure 3, and the process of visual servoing is illustrated in Figure 4. At time i, the target is at the center of the field of view, as shown in Figure 4(a). Suppose that at time i + 1 the object moves toward the boundary of the field of view, as shown in Figure 4(b). The new target center in the image plane is detected from the image information and compared with the desired position in the image plane, giving the error Δf from the current target position to the center of the field of view. Then, through the controller and the image Jacobian, the command for the scanning mirror is computed, and the center of the FOV is moved back onto the target, as shown in Figure 4(c).

Figure 3 Feedback control diagram of the FOV using visual servoing. The visual information is acquired by the optical system and camera. The actual position of the target is estimated by the visual tracking algorithm. The error between the desired position and the current position feeds into the controller, and the control signal is obtained through the controller and the inverse of the image Jacobian (color figure available online).

Figure 4 Process of the FOV control. (a) At the initial time i, the target is at the center of the field of view. (b) Then the object moves towards the boundary of the field of view at time i + 1. (c) To avoid the target moving out of FOV, scanning mirrors steer the FOV to keep the target at the center of the image (color figure available online).

The Jacobian J denotes the relationship between the velocity of a feature point in the image plane and the joint velocity of the scanning mirror. It is composed of two parts:

$$\dot{f} = J\,\dot{\Theta}_s, \qquad J = J_{im} J_s,$$

where J_im is the image Jacobian, which relates the velocity of the feature point in the image plane to its velocity in the world coordinate frame, and J_s is the joint Jacobian of the scanning mirror, which relates the velocity of the view position in the world coordinate frame to the joint velocity. The image and joint Jacobians are further expressed by

$$\begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix} = J_{im} \begin{bmatrix} \dot{x}_w \\ \dot{y}_w \end{bmatrix}, \qquad J_{im} = \begin{bmatrix} K_x M & 0 \\ 0 & K_y M \end{bmatrix}, \qquad \begin{bmatrix} \dot{x}_w \\ \dot{y}_w \end{bmatrix} = J_s \begin{bmatrix} \dot{\theta}_{sx} \\ \dot{\theta}_{sy} \end{bmatrix},$$

where θ̇_sx and θ̇_sy are the angular velocities of the scanning mirror, u and v are the image coordinates, u̇ and v̇ are the velocities in the image plane, M is the magnification of the system, and K_x and K_y are the scaling factors from pixel coordinates to image coordinates. Applying a proportional control law, the control input for the scanning mirror becomes

$$\dot{\Theta}_s = K_p\, J^{-1}\, \Delta f,$$

where Δf = f_d − f, f_d is the desired feature position in the image plane, f is the current feature position, K_p is the proportional gain, and Θ̇_s is the angular velocity screw of the scanning mirrors. The features on the target are tracked by a visual tracking algorithm (SSD) (Lee et al. 2006).
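A minimal sketch of one step of this control loop follows, assuming the diagonal image Jacobian above and hypothetical `track_target()` and `set_mirror_rate()` hardware hooks (none of these names come from the paper):

```python
import numpy as np

def fov_control_step(track_target, set_mirror_rate, J, f_d, Kp=0.5):
    """One proportional visual-servoing step that re-centers the target.

    track_target()     -> current feature position f = (u, v) from SSD tracking
    set_mirror_rate(w) -> command angular velocities of the two-axis mirror
    J                  -> 2x2 system Jacobian, J = J_im @ J_s
    f_d                -> desired feature position (the image center)
    """
    f = np.asarray(track_target(), dtype=float)
    delta_f = f_d - f                        # image error
    rate = Kp * np.linalg.solve(J, delta_f)  # proportional law via J^{-1}
    set_mirror_rate(rate)
    return np.linalg.norm(delta_f)           # error magnitude, e.g., for logging

# Example with an assumed diagonal Jacobian (pixels per radian of mirror angle):
J = np.diag([1.2e3, 1.2e3])
err = fov_control_step(lambda: (310, 255),   # tracked target center (stub)
                       lambda w: None,       # mirror command (stub)
                       J, f_d=np.array([320.0, 240.0]))
```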

5. CONDITIONS FOR THE BEST VIEW ANGLE

In order to determine the view orientation and view position of the proposed system for observing a target, the viewpoint should be moved to a place where high-quality visual information can be obtained. In real environments, many constraints need to be considered. In this article, two constraints are formulated, covering resolution and occlusion, which are critically important for microassembly and micromanipulation applications.

5.1. Occlusion

Before operating the system to avoid occlusion, the occlusion state should be evaluated. Here, we classify three categories of occlusion, as shown in Figure 5; a single SOI is considered in this case. In the occlusion-free state, the entire SOI can be observed by the vision system; it is the optimal view state, where no occlusion exists. The second state is partial occlusion, where part of the SOI is occluded, which often happens when several objects are near each other. This is a state between the occlusion-free state and the full occlusion state; although it is sometimes difficult to avoid, the unoccluded area should be kept as large as possible. The third state is full occlusion, where the SOI cannot be observed at all. This state should be avoided in the view planning.

Figure 5 Three occlusion states. (a) In the occlusion-free state, the entire SOI can be observed by the vision system. (b) In the partial occlusion state, part of the SOI is occluded. (c) In the full occlusion state, the SOI cannot be observed (color figure available online).

In order to quantify the occlusion states, the visibility level L_o is defined as

$$L_o = \frac{i_a - i_{ao}}{i_a}, \tag{5}$$

where i_a is the area of the SOI in the image plane and i_ao is the occluded area of the SOI in the image plane. The three occlusion states then correspond to different values of L_o:

$$L_o = 1 \;\text{(occlusion free)}, \qquad 0 < L_o < 1 \;\text{(partial occlusion)}, \qquad L_o = 0 \;\text{(full occlusion)}.$$

In 3-D computer graphics, occluded-surface determination has long been used to determine which surfaces, and which parts of surfaces, are not visible from a certain viewpoint (Watt 1993). Here, the CAD model of the target is known. Because a telecentric lens group is used in the system, orthographic projection applies. For a general object, the occluded area can be calculated by the ray tracing method (Watt 1993). The process is shown in Figure 6. First, the SOI is divided into subareas a_j; the number of subareas depends on the resolution of the system. Then, from the center of each subarea, a ray R_j is generated parallel to the view direction. Finally, the intersections between the rays and the other surfaces are analyzed by the ray tracing method (Watt 1993).

Figure 6 Calculation of the occlusion area using the ray tracing method. The SOI is divided into subareas a_j. Then, from the center of each subarea, a ray R_j is generated parallel to the view direction. The intersection between the ray and other surfaces is analyzed by the ray tracing method (color figure available online).

The size of the occluded area in the image plane is calculated by

$$i_{ao} = \frac{i_a}{N} \sum_{j=1}^{N} D(c_j), \qquad D(c_j) = \begin{cases} 1, & \text{the ray from } c_j \text{ is blocked by another surface,} \\ 0, & \text{otherwise,} \end{cases} \tag{7}$$

where i_a is the SOI area in the image plane (its calculation is introduced in the next part), c_j is the center of the jth subarea, D is the occlusion detector implemented by ray tracing, and N is the number of subareas. Substituting Equation 7 into Equation 5 gives the visibility level.
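The sketch below implements this visibility computation for a triangle-mesh scene, using a standard Möller–Trumbore ray/triangle intersection test as the occlusion detector D. The mesh representation and the sampling of subarea centers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Möller–Trumbore ray/triangle test: True if the ray hits in front of origin."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                       # ray is parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps       # hit strictly in front of the origin

def visibility_level(soi_points, view_dir, occluder_tris):
    """L_o = unoccluded fraction of the SOI sample points (Equation 5).

    soi_points    : (N, 3) array of subarea centers c_j on the SOI
    view_dir      : unit vector N_v pointing from the scene toward the camera
    occluder_tris : list of (3, 3) triangles of all other surfaces
    """
    view_dir = np.asarray(view_dir, dtype=float)
    occluded = sum(
        any(ray_hits_triangle(c, view_dir, t) for t in occluder_tris)
        for c in np.asarray(soi_points, dtype=float))
    return 1.0 - occluded / len(soi_points)
```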

5.2. Resolution

The pixel size and magnification determine the minimum size of features that can be observed in the image plane. However, the orientation of the object also affects the resolution of the object, as shown in Figure 7. The resolution r_o in the object plane can be defined as

$$r_o = \frac{r_p}{M\,\lvert N_s \cdot N_v \rvert},$$

Figure 7 The image of an SOI under the orthographic projection model. N_s is the normal of the SOI. N_v is the view orientation. High resolution of the target is achieved by maximizing |N_s · N_v| (color figure available online).
where r_p is the resolution in the image plane, M is the magnification of the system, N_s is the normal of the SOI, and N_v is the view orientation. Therefore, if there is no occlusion, the highest resolution is achieved under the condition

$$\lvert N_s \cdot N_v \rvert = 1,$$

that is, when the view direction is parallel to the surface normal.
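As a worked example, with the assumed values r_p = 10 µm and M = 1 (illustrative numbers, not the experimental setup's parameters), a surface viewed head-on (|N_s · N_v| = 1) resolves features of r_o = 10/(1 × 1) = 10 µm in the object plane, whereas the same surface tilted 60° away from the view direction (|N_s · N_v| = cos 60° = 0.5) resolves only r_o = 10/(1 × 0.5) = 20 µm features. Tilting the surface away from the view therefore directly coarsens the effective object-plane resolution.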

6. BEST VIEW CALCULATION

Based on the discussion in Section 5, the best view should have high resolution and an acceptable occlusion level. The conditions for the best view calculation can then be formulated as

$$\max_{\phi,\,\gamma} \; \lvert N_s \cdot N_v(\phi, \gamma) \rvert \quad \text{subject to} \quad L_o \ge L_{ac},$$

where L_ac is the acceptable occlusion level.

In order to obtain the final solution, an artificial potential field (APF) is applied to guide the view direction to the desired state (Spong et al. 2005). In the APF, the attractive potential guides the view toward the best view state, while occlusion is included in the repulsive potential to push the view away from occluded configurations. The potential field U is defined as

$$U = U_{att} + U_{rep},$$

where U_att is the attractive potential field and U_rep is the repulsive potential field. Following the standard APF construction (Spong et al. 2005), they can be written in the form

$$U_{att} = \frac{1}{2}\,\zeta\,\big(1 - \lvert N_s \cdot N_v \rvert\big)^2, \qquad U_{rep} = \begin{cases} \dfrac{1}{2}\,\eta\left(\dfrac{1}{L_o} - \dfrac{1}{L_{ac}}\right)^2, & L_o < L_{ac}, \\[4pt] 0, & L_o \ge L_{ac}, \end{cases}$$

with gains ζ and η.

In order to find the minimum, the gradient of the potential field is calculated, and the view state is obtained iteratively by

$$V_{i+1} = V_i - \epsilon\, \nabla U(V_i),$$

where ϵ is a gain factor and ∇U is the gradient of the potential field. In order to calculate the path of the joint angles, the potential field can be mapped to the joint space through Equation 1. The final joint angles are then calculated directly, considering the view limit of the system, from

$$(\theta_{p1}, \theta_{p2})_{i+1} = (\theta_{p1}, \theta_{p2})_i - \epsilon\, \nabla U\big((\theta_{p1}, \theta_{p2})_i\big). \tag{13}$$

The calculation process is as follows (a numerical sketch is given after this list):

(1) Initialize the joint angles (θ_p1, θ_p2)_0.

(2) At the ith iteration, calculate the joint angles (θ_p1, θ_p2)_i from Equation 13.

(3) If |(θ_p1, θ_p2)_i − (θ_p1, θ_p2)_{i−1}| < T, then (θ_p1, θ_p2)_i is a solution; otherwise, let i = i + 1 and go to (2). T is a small positive value, smaller than the resolution of the rotation stage; in this article, T is set to 0.01°.
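The following sketch carries out this iteration with finite-difference gradients of the potential over the prism angles. The potential terms reuse the forms given above, and `view_normal_dot` and `visibility` stand in for the ray-traced |N_s · N_v| and L_o; both hooks and all gain values are assumptions for illustration.

```python
import numpy as np

def potential(angles, view_normal_dot, visibility, L_ac=1.0, zeta=1.0, eta=0.1):
    """U = U_att + U_rep evaluated at prism joint angles (theta_p1, theta_p2)."""
    u_att = 0.5 * zeta * (1.0 - abs(view_normal_dot(angles))) ** 2
    L_o = visibility(angles)
    u_rep = (0.5 * eta * (1.0 / max(L_o, 1e-6) - 1.0 / L_ac) ** 2
             if L_o < L_ac else 0.0)
    return u_att + u_rep

def best_view_angles(theta0, view_normal_dot, visibility,
                     eps=0.5, T=0.01, max_iter=1000, h=0.05):
    """Gradient descent on U in joint space (Equation 13), stopping when the
    update falls below the rotation-stage resolution T (degrees)."""
    theta = np.asarray(theta0, dtype=float)
    U = lambda a: potential(a, view_normal_dot, visibility)
    for _ in range(max_iter):
        grad = np.array([                       # central finite differences
            (U(theta + h * e) - U(theta - h * e)) / (2 * h)
            for e in np.eye(2)])
        step = eps * grad
        theta = theta - step
        if np.linalg.norm(step) < T:            # converged within stage resolution
            break
    return theta
```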

7. EXPERIMENTAL SETUP

The system used to investigate the feasibility of the proposed method is shown in Figure 8. The two-axis scanning mirror (GSI Z1913) was supplied by General Scanning Inc.; its range is ±20°, its resolution is 11 µrad, and it is driven by a digital scanning controller (DSC). The deviation angle of one wedge prism is 10°. The wedge prisms were installed on compact rotation stages (PR50) supplied by Newport Corporation, with a resolution of 0.01°. Four achromatic lenses with an effective focal length of 150 mm, supplied by Edmund Optics, are used in the system. The first intermediate image is formed by the 2nd lens. The 3rd and 4th lenses integrate a deformable mirror into the system, and the 4th lens also forms the final image on the camera. The aperture is set in front of the deformable mirror, which eliminates the lens for relaying the aperture. In order to correct the aberrations induced by the wedge prisms, a 37-channel deformable mirror supplied by OKO Technologies was used.

In this experimental setup, the magnification of the system is 1, the object-space numerical aperture (NA) is 0.033, the optical resolution is 11.85 µm, and the maximum zenith angle is 18.9°. To increase the zenith angle and optical resolution, wedge prisms with a larger vertex angle and higher numerical aperture objectives can be used; however, this will induce more aberrations in the imaging system. It is possible to correct those aberrations using a deformable mirror with a larger stroke or dual deformable mirrors in a woofer-tweeter configuration. The visual servo control loop runs with a frame time of 30 ms. The maximum speed of the view translation is around 8 mm/ms. The speed of the view angle steering is limited by the wedge prism rotation stages, which provide a maximum speed of 20°/s for azimuth angle steering; higher speeds (up to 720°/s) can be achieved if piezo motor rotation stages are used.

Figure 8 The experimental setup for the variable view imaging system. It includes the two-axis scanning mirror, wedge prisms, deformable mirror, science camera, and lenses (color figure available online).

8. EXPERIMENTAL RESULTS

The first experiment evaluates the FOV steering and self-occlusion for one object. The second experiment addresses occlusion between two objects. The third experiment demonstrates a microassembly application using the proposed method.

In the first experiment, the object is shown in Figure 9(a). Two SOI are considered in this experiment. The initial view angle (φ, γ) of the system is set to (0°, 0°), and the object is located at the center of the view. From the initial view angle, the SOI is occluded, as shown in Figure 11(a). In order to obtain the best view direction, the view planning method introduced in Sections 5 and 6 is implemented. The path generated from the potential field is shown in Figure 10, together with the initial state S1, the final state S4, and two intermediate states S2 and S3; the configurations of the wedge prisms corresponding to these states are also depicted. Figure 11 shows the enlarged image at each state from the system. The final view state (x_w, y_w, φ, γ) at the goal point is (0 mm, 0 mm, 19°, −45°). As can be seen, both surfaces can be observed at the final state.

Figure 9 Configurations of the experiments. (a) Single object with one SOI. (b) Two objects with one occluded SOI (color figure available online).

Figure 10 Path of the rotation angles of prisms generated from the potential field. S1 is the initial state. S4 is the final state. The configuration of the mirror and the view angle parameters are also shown (color figure available online).

Figure 11 Enlarged images at different view states from (a) S1, (b) S2, (c) S3 to (d) S4. The two SOI on the object can be observed at state S4.

The motion of the wedge prisms also changes the position of the FOV. In order to keep the target inside the FOV, the visual servoing method introduced in Section 4 is implemented. During visual servoing, the four corners of the part are tracked by the SSD image tracking algorithm (Tao and Cho 2009). The center point of the part, O_c, is calculated by

$$O_c = \frac{1}{4} \sum_{i=1}^{4} P_i(u, v),$$

where P_i(u, v) is the image coordinate of the ith corner point, as shown in Figure 12. The desired position O_d is defined at the center of the image plane, and the image error is defined as the distance |O_c − O_d| between O_c and O_d.
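As a small illustration of these two quantities (the corner coordinates and image size are assumed values):

```python
import numpy as np

corners = np.array([[300.0, 220.0], [340.0, 222.0],
                    [338.0, 262.0], [298.0, 260.0]])  # tracked P_i(u, v)
O_c = corners.mean(axis=0)                 # target center: centroid of the corners
O_d = np.array([320.0, 240.0])             # desired position: image center
image_error = np.linalg.norm(O_c - O_d)    # |O_c - O_d| in pixels
```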

Figure 12 Definition of desired position and image error. O_d and O_c are the desired and current centers of the target. p1, p2, p3, and p4 are the feature points for visual tracking (color figure available online).

The images at each state with and without FOV control are shown in Table 1. As can be seen, without control the target moves out of the FOV after state S2; when the proposed method is applied, the target is always kept at the center of the FOV. The image error during FOV control is shown in Figure 13. The image error is smaller than 6 pixels, which ensures that the target always stays within the field of view.

Figure 13 Image errors in pixels during the FOV control. The image error is smaller than 6 pixels, which can ensure that the target will always be within the field of view (color figure available online).

Table 1. Captured images from the system without/with FOV control

In the second experiment, two objects are tested, as shown in Figure 9(b). The SOI of the target is close to another object. In order to observe the SOI and avoid occlusion by the second object, the view planning method is implemented with the acceptable occlusion level L_ac set to 1; that is, an occlusion-free view of the SOI is required. Figure 14 shows the path in the potential field for the calculation of the best view, along with the configurations of the wedge prisms at the initial and final states. The view angles are φ = 0°, γ = 0° for the initial view and φ = 12.4°, γ = 6.53° for the final view. Figures 15(a) and (b) show the images from the experiment. In the initial view, the SOI is not visible from the normal-incidence view direction. In the final view state, the SOI can be observed without occlusion from the second object.

Figure 14 Simulation of occlusion by two objects when the acceptable occlusion level L_ac = 0.5, considering the angle limit of the real system. The view angles are φ = 0°, γ = 0° for the initial view and φ = 12.4°, γ = 6.53° for the final view (color figure available online).

Figure 15 Captured images for the second experiment: (a) image at the initial state, φ = 0°, γ = 0°; (b) image at the final state, φ = 12.4°, γ = 6.53°.

The third experiment demonstrates a microassembly application. The task is shown in Figure 16, where a micro part with multiple legs is to be inserted into the holes. The dimensions of the micro parts are shown in Figure 17. The length and width of each leg are 0.53 mm and 0.2 mm, and the distance between the two legs is 0.5 mm. The length and width of each hole are 0.63 mm and 0.3 mm, and the distance between the centers of the two holes is 0.5 mm. Using a conventional optical system, occlusion is a major issue for this assembly: the features on the legs of the peg cannot be observed from a vertical view direction, and it is impossible to implement the assembly task without this information.

Figure 16 Configuration of the microassembly task. A micro part with multiple legs is to be inserted into the holes (color figure available online).

Figure 17 The dimensions of the micro parts: (a) micro part, (b) micro holes (color figure available online).

The process of the microassembly is shown in Figure 18 and includes four steps. The first step is to align the unoccluded feature line l1 on the micro part with the line l1′ on the holes. In order to view the occluded features, the view angle is changed in the second step. In the third step, the other feature lines l2 to l7 on the micro part and l2′ to l7′ on the holes are aligned. The fourth step is to insert the part into the holes. Here we focus on step 2, the view steering. During assembly, the corners of the feature lines are tracked by the SSD visual tracking algorithm (Lee et al. 2006), and an image-based visual servoing method similar to that described in Lee et al. (2006) is applied.

Figure 18 Process of microassembly. The first step is to align the unoccluded feature line l1 on the micro part with l1′ on the holes. In order to view the occluded features, the view angle is changed in the second step. In the third step, the other feature lines l2 to l7 on the micro part and l2′ to l7′ on the holes are aligned. The fourth step is to insert the part into the holes (color figure available online).

Figure 20(a) shows the image at the final state of step 1. As can be seen, the legs of the micro part cannot be observed from the top view. In step 2, the proposed method is applied to observe the occluded features; the SOI is the side surface of the micro part. The path generated from the potential field is shown in Figure 19. At the initial state S1, (φ, γ) is (0°, 90°); at the final state S4, (φ, γ) is (19°, 180°). Two intermediate states S2 and S3 on the view path are also indicated in Figure 19: at S2, (φ, γ) is (9.7°, 178.7°), and at S3, (φ, γ) is (16.6°, 180°). The configurations of the wedge prisms corresponding to these states, together with the enlarged image at each state, are shown in Figure 20. As can be seen, the legs of the micro part can now be observed, and the vision information is sufficient to guide the microassembly. Figure 21 shows the image of the final state of the assembly.

Figure 19 The potential field and view path for view planning in the third experiment. At the initial state S1, (φ, γ) is (0°, 90°). At the final state S4, (φ, γ) is (19°, 180°) (color figure available online).

Figure 20 The captured images during the view change in the third experiment at view states (a) S1, (b) S2, (c) S3, and (d) S4 (color figure available online).

Figure 21 Final state of the microassembly experiment.

9. CONCLUSION

This article presented a view planning method to increase the visibility for observation of micro objects using a variable view imaging system. The field of view, occlusion, and resolution are considered in the view planning process, and the experiments verified the efficiency and applicability of the proposed method. Future research will focus on applying the method to microassembly and micromanipulation. For microassembly applications, CAD models of the objects are often available; based on the models of the target and the assembly process, the views with the best visibility of the SOI during assembly can be obtained using the proposed method. Dynamic view planning based on the real-time positions of the targets during assembly will also be investigated.

REFERENCES

  • Lee, D., X. Tao, H. S. Cho, and Y. J. Cho. 2006. A dual imaging system for flip-chip alignment using visual servoing. Journal of Robotics and Mechatronics 18(6): 779–786.
  • Mezouar, Y., and F. Chaumette. 2002. Avoiding self-occlusions and preserving visibility by path planning in the image. Robotics and Autonomous Systems 41(2–3): 77–87.
  • Ogawa, N., H. Oku, K. Hashimoto, and M. Ishikawa. 2005. Microrobotic visual control of motile cells using high-speed tracking system. IEEE Transactions on Robotics 21(3): 704–712.
  • Potsaid, B., Y. Bellouard, and J. Wen. 2005. Adaptive scanning optical microscope (ASOM): A multidisciplinary optical microscope design for large field of view and high resolution imaging. Optics Express 13(17): 6504–6518.
  • Potsaid, B., F. P. Finger, and J. Wen. 2009. Automation of challenging spatial-temporal biomedical observations with the adaptive scanning optical microscope (ASOM). IEEE Transactions on Automation Science and Engineering 6(3): 525–535.
  • Probst, M., K. Vollmers, B. E. Kratochvil, and B. J. Nelson. 2006. Design of an advanced microassembly system for the automated assembly of bio-microrobots. Proc. 5th International Workshop on Microfactories. http://www.iris.ethz.ch/test/msrl/publications/files/IWMF06-Probst.pdf (accessed February 28, 2012).
  • Ralis, S. J., B. Vikramaditya, and B. J. Nelson. 2000. Micropositioning of a weakly calibrated microassembly system using coarse-to-fine visual servoing strategies. IEEE Transactions on Electronics Packaging Manufacturing 23(2): 123–131.
  • Rivera, L., B. Potsaid, and J. Wen. 2010. Image tracking of multiple C. elegans worms using the adaptive scanning optical microscope (ASOM). International Journal of Optomechatronics 4(1): 1–21.
  • Scott, C., B. Potsaid, and J. Wen. 2010. Wide field scanning telescope using MEMS deformable mirrors. International Journal of Optomechatronics 4(3): 285–305.
  • Spong, M., M. Vidyasagar, and S. Hutchinson. 2005. Robot Modeling and Control. New York: Wiley.
  • Tao, X., H. Cho, and F. Janabi-Sharifi. 2008a. Active optical system for variable view imaging of micro objects with emphasis on kinematic analysis. Applied Optics 47(22): 4121–4132.
  • Tao, X., D. Hong, and H. Cho. 2008b. Variable view imaging system with decoupling design. International Symposium on Optomechatronic Technologies, Proc. SPIE 7266: 72661U-1–11.
  • Tao, X., and H. Cho. 2009. Variable view imaging system: An optomechatronic system for the observation of micro objects with variable view direction. International Journal of Optomechatronics 3(2): 91–115.
  • Watt, A. 1993. 3D Computer Graphics, 2nd ed. New York: Addison-Wesley.
