Original Articles

Increasing the Visibility for Observing Micro Objects with the Variable View Imaging System

Pages 71-91 | Published online: 13 Mar 2012

Figures & data

Figure 1 Concept of the variable view imaging system with active components that mimic the function of a robot: (a) steering the view angle, (b) steering the view position, and (c) steering both the view position and angle (color figure available online).

Figure 2 Two situations in which the target falls outside the FOV: (a) the target moves out of the field of view (FOV); (b) the FOV changes with the rotation of the wedge prisms (color figure available online).

Figure 3 Feedback control diagram of the FOV using visual servoing. The visual information is acquired by the optical system and camera, and the current position of the target is estimated by the visual tracking algorithm. The error between the desired position and the current position is fed into the controller, and the control signal is obtained through the controller and the inverse of the image Jacobian (color figure available online).
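
The caption of Figure 3 describes a standard image-based visual-servoing loop. As a rough illustration only (the paper's controller is not reproduced here), a proportional law mapped through the pseudo-inverse of the image Jacobian might look like the following sketch; the 2x2 Jacobian relating scanning-mirror angles to pixel motion and the gain value are assumptions.

```python
import numpy as np

def fov_control_step(p_desired, p_current, J_image, gain=0.5):
    # Image-space error between the desired and tracked target positions.
    error = np.asarray(p_desired, dtype=float) - np.asarray(p_current, dtype=float)
    # Proportional control mapped through the (pseudo-)inverse of the image
    # Jacobian, which relates actuator increments (e.g., the two
    # scanning-mirror angles) to pixel motion.
    return gain * np.linalg.pinv(np.asarray(J_image, dtype=float)) @ error
```

Applying such a command to the mirrors at each frame is what keeps the target near the image center in the process shown in Figure 4.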

Figure 4 Process of the FOV control. (a) At the initial time i, the target is at the center of the field of view. (b) The object then moves toward the boundary of the field of view at time i + 1. (c) To keep the target from moving out of the FOV, the scanning mirrors steer the FOV so that the target stays at the center of the image (color figure available online).

Figure 5 Three occlusion states. (a) In the occlusion-free state, the entire SOI can be observed by the vision system. (b) In the partial occlusion state, part of the SOI is occluded. (c) In the full occlusion state, the SOI cannot be observed by the system (color figure available online).

Figure 6 Calculation of the occlusion area using the ray tracing method. The area of interest is divided into subareas a_j. From the center of each subarea, a ray R_j is generated parallel to the view direction. The intersection between the ray and other surfaces is then analyzed by the ray tracing method (color figure available online).
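
Figure 6 describes computing the occlusion area by tiling the SOI into subareas and casting one ray per subarea along the view direction. A minimal sketch of that bookkeeping, assuming a user-supplied ray-intersection test (the name ray_hits_occluder is hypothetical) and equal-sized subareas, is:

```python
import numpy as np

def occlusion_level(subarea_centers, view_dir, ray_hits_occluder):
    # Normalize the view orientation.
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Cast one ray R_j per subarea a_j, parallel to the view direction,
    # and count how many rays are blocked by another surface.
    blocked = sum(bool(ray_hits_occluder(np.asarray(c, dtype=float), view_dir))
                  for c in subarea_centers)
    # Occlusion level = occluded area / total area (equal subareas assumed).
    return blocked / len(subarea_centers)
```

The acceptable occlusion level L_ac = 0.5 used in Figure 14 can then be read as a threshold on this fraction.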

Figure 7 The image of an SOI under the orthographic projection model. N_s is the normal of the SOI and N_v is the view orientation. A high resolution of the target is achieved by maximizing |N_s · N_v| (color figure available online).
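
The quantity maximized in Figure 7 is simply the absolute cosine between the surface normal and the view orientation; a one-line helper (normalization added here as a safeguard, not stated in the caption) makes the criterion explicit.

```python
import numpy as np

def view_resolution_score(N_s, N_v):
    # |N_s · N_v|: the foreshortening factor of the SOI under the
    # orthographic projection model; a value of 1 means the SOI is
    # viewed head-on and imaged at the highest effective resolution.
    N_s = np.asarray(N_s, dtype=float)
    N_v = np.asarray(N_v, dtype=float)
    return abs(np.dot(N_s, N_v)) / (np.linalg.norm(N_s) * np.linalg.norm(N_v))
```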

Figure 8 The experimental setup for the variable view imaging system. It includes the two-axis scanning mirrors, wedge prisms, deformable mirror, science camera, and lenses (color figure available online).

Figure 9 Configurations of the experiments. (a) Single object with one SOI. (b) Two objects with one occluded SOI (color figure available online).

Figure 10 Path of the rotation angles of the prisms generated from the potential field. S1 is the initial state and S4 is the final state. The configuration of the mirror and the view angle parameters are also shown (color figure available online).
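
Figure 10 shows a path through the prism-angle space generated from a potential field. The field's construction is not given in this caption list, so the following is only a generic sketch under that assumption: greedy descent over a precomputed 2-D potential grid U (hypothetical), returning the sequence of intermediate view states such as S1 through S4.

```python
import numpy as np

def descend_potential(U, start):
    # Greedy descent over a 2-D grid of potential values indexed by the
    # two prism rotation angles; returns the visited view states.
    path = [tuple(start)]
    i, j = start
    while True:
        neighbours = [(i + di, j + dj)
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)
                      and 0 <= i + di < U.shape[0]
                      and 0 <= j + dj < U.shape[1]]
        best = min(neighbours, key=lambda n: U[n])
        if U[best] >= U[i, j]:
            # Local minimum reached: this plays the role of the final state S4.
            return path
        i, j = best
        path.append((i, j))
```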

Figure 11 Enlarged images at the different view states from (a) S1, (b) S2, (c) S3 to (d) S4. The two SOIs on the object can be observed at state S4.

Figure 12 Definition of the desired position and image error. O_d and O_c are the desired and current centers of the target. p_1, p_2, p_3, and p_4 are the feature points used for visual tracking (color figure available online).
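
From the definitions in Figure 12, the image error can be computed from the tracked feature points. A minimal sketch, assuming the current center O_c is taken as the centroid of p_1 to p_4 (the paper may define it differently):

```python
import numpy as np

def image_error(O_d, feature_points):
    # Current center O_c taken as the centroid of the tracked feature
    # points p_1..p_4 (an assumption for this sketch).
    O_c = np.mean(np.asarray(feature_points, dtype=float), axis=0)
    # Image error between the desired and current centers.
    e = np.asarray(O_d, dtype=float) - O_c
    return e, float(np.linalg.norm(e))
```

Figure 13 reports this error staying below 6 pixels throughout the FOV control.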

Figure 13 Image errors in pixels during the FOV control. The image error is smaller than 6 pixels, which ensures that the target always stays within the field of view (color figure available online).

Table 1. Captured images from the system without/with FOV control

Figure 14 Simulation of occlusion by two objects when the acceptable occlusion level L_ac = 0.5, considering the angle limits of the real system. The view angles for the initial view are ϕ = 0°, γ = 0°; the view angles for the final view are ϕ = 12.4°, γ = 6.53° (color figure available online).

Figure 15 Captured images for the second experiment: (a) image at the initial state, ϕ = 0°, γ = 0°; (b) image at the final state, ϕ = 12.4°, γ = 6.53°.

Figure 16 Configuration of the microassembly task. A micro part with multiple legs is to be inserted into the holes (color figure available online).

Figure 17 Dimensions of the micro parts: (a) micro part, (b) micro holes (color figure available online).

Figure 18 Process of the microassembly. The first step is to align the unoccluded feature line l1 on the micro part with l1′ on the holes. In order to view the occluded features, the view angle is changed in the second step. In the third step, the remaining feature lines l2 to l7 on the micro part are aligned with l2′ to l7′ on the holes. The fourth step is to insert the part into the holes (color figure available online).

Figure 19 The potential field and view path for view planning in the second experiment. At the initial state S1, (ϕ, γ) is (0°, 90°). At the final state S4, (ϕ, γ) is (19°, 180°) (color figure available online).

Figure 20 Captured images during the view change in the second experiment at view states (a) S1, (b) S2, (c) S3, and (d) S4 (color figure available online).

Figure 21 Final state in the second microassembly experiment.
