Research Article

Providing haptic feedback in robot-assisted minimally invasive surgery: A direct optical force-sensing solution for haptic rendering of deformable bodies

Pages 129-141 | Received 26 Sep 2012, Accepted 27 Aug 2013, Published online: 25 Oct 2013

Abstract

This paper presents an enhanced haptic-enabled master-slave teleoperation system that can be used to provide force feedback to surgeons in minimally invasive surgery (MIS). One of the research goals was to develop a combined-control architecture framework that included both direct force reflection (DFR) and position-error-based (PEB) control strategies. To achieve this goal, it was essential to accurately measure the direct contact forces between deformable bodies and a robotic tool tip. To measure the forces at a surgical tool tip and enhance the performance of the teleoperation system, an optical force sensor was designed, prototyped, and added to a robot manipulator. The enhanced teleoperation architecture was formulated by developing mathematical models for the optical force sensor, the extended slave robot manipulator, and the combined-control strategy. Human factor studies were also conducted to (a) examine experimentally the performance of the enhanced teleoperation system with the optical force sensor, and (b) study human haptic perception during the identification of remote object deformability. The first experiment measured how well human subjects could discriminate the deformability of objects while in direct contact with them by means of a laparoscopic tool. The control parameters were then tuned based on the results of this experiment using a gain-scheduling method. The second experiment studied the effectiveness of the force feedback provided through the enhanced teleoperation system. The results show that force feedback increased the subjects' ability to correctly identify materials of differing deformability. In addition, the virtual force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS. The experimental results provide design guidelines for choosing and validating the control architecture and the optical force sensor.

Introduction

Minimally invasive surgery (MIS) has progressed considerably in recent years; however, certain limitations still exist. A major shortcoming of MIS, and one that has been the subject of much research, is the lack of sensory information from the operative field available to the surgeon, which reduces perceptual access to the surgical site [Citation1]. Although advances in master-slave robot-assisted surgery have improved accuracy and dexterity in comparison to open surgery, effective haptic feedback is still missing from these robotic systems. This absence of force feedback leads to excessive or insufficient applied forces, which may result in increased damage to living tissues and/or slippage in grasping, and also makes it difficult to palpate or assess tissue characteristics. Force sensing could also increase safety and reduce intraoperative time [Citation2–10].

Several studies have investigated various methods for providing force feedback in MIS [Citation2, Citation4–7, Citation9–14]. Contact force information can be provided to surgeons in kinesthetic, auditory, and/or graphical forms. In teleoperation systems, the information can also be used to provide force feedback directly through a haptic interface. The main goal is to determine the tip-tissue interaction forces in order to provide feedback or estimate tissue trauma, which is useful in situations such as microsurgeries, needle-based procedures, palpation, and knot-tying [Citation2, Citation3, Citation8].

This paper presents an enhanced haptic-enabled teleoperation system that can be used to provide force feedback to surgeons in MIS. As shown in Figure 1, a bilateral architecture is employed with a parallel force/position control strategy. In this architecture, two haptic-enabled robot manipulators are used, as master-slave haptic devices, to connect the human operator and the environment in the teleoperation system. Early analysis approaches for teleoperation have taken advantage of two-port network models, where the environment/human are modeled as Thévenin equivalent loads that have a mechanical impedance and a force source. The mechanical impedance in this context describes the dynamic relationship between motion and force, which leads to a definition of impedance as force over position or velocity. In the analytical model, using position rather than velocity introduces only a small workspace offset (steady-state error); this difference does not affect the analysis, so either velocity or position can be used in the analysis of teleoperation systems [Citation15]. In implementation, both position and velocity measurements may be needed. It is more natural to use position, as most teleoperation systems work in reference to it, and the analysis assumes a linear time-invariant condition.
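As a concrete illustration of this impedance notion, the position-domain impedance of a simple mass-spring-damper load can be sketched as follows (the numeric values are illustrative only, not parameters from this work):

```python
def impedance(m, b, k, s):
    """Position-domain mechanical impedance Z(s) = f/x = m*s**2 + b*s + k
    of a mass-spring-damper load; s may be complex (s = j*omega)."""
    return m * s**2 + b * s + k

# Evaluate at s = j*omega to see how stiffly the load resists motion at 10 rad/s.
z = impedance(0.1, 0.5, 200.0, 1j * 10.0)  # 0.1 kg, 0.5 N*s/m, 200 N/m
```

Dividing by s converts this to the velocity-domain impedance f/v, which is the other convention mentioned above.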

Figure 1. Diagram presenting the bilateral master-slave teleoperation system with a hybrid parallel force/position control strategy. The system is enhanced by adding an optical force sensor to the end-effector of the slave device. The interaction force between the environment and the slave device is directly measured by the optical sensor.


The control system presented in this paper is a combination of position-error-based (PEB) and direct force reflection (DFR) strategies [Citation16, Citation17]. In the PEB approach (also called the position-position control architecture), the goal is to minimize the position (velocity) error (xerr) between the positions of the master and slave manipulators (xm and xs); in other words, the slave device follows the master in the remote environment. In the DFR strategy (also called the force-position control architecture), the operator issues master position commands (xm) that move the slave device through the slave controller forces (ffeed). On the other side, the slave robot measures the environment force (fe), which is sent to the master side to control the master manipulator. In this combined control strategy, the force feedback on the master device (fmc) is a linear combination of (a) the force from the master controller (fback) due to the position error (xerr), and (b) the directly measured interaction force between the slave and the environment (fe = fopt), while the operator applies forces (fh) on the end-effector of the master device. In addition, the slave controller motor force (fsc) is derived from the slave controller force (ffeed). The interaction force with the environment is obtained from a newly designed optical force sensor attached to the slave device. Therefore, the slave-side impedance is a combination of the slave device impedance (Zs) and the extended force sensor impedance (Zopt). Using this extension along with the hybrid force/position (DFR/PEB) control strategy, the enhanced teleoperation system gains the best of both control architectures by tuning the controller parameters (e.g., CMst and CSlv). This adjustment maximizes teleoperator performance while keeping the system stable. The details of the control system are presented in the section below headed Bilateral parallel force/position control strategy.
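A minimal discrete-time sketch of the combined feedback law described above (the gain names and the per-sample form are our assumptions for illustration; f_e stands for the optical-sensor reading fopt):

```python
def master_feedback_force(x_m, x_s, v_m, v_s, f_e, k_c, b_c, c2=1.0):
    """Force commanded on the master device: PEB term (spring-damper on the
    position error) plus DFR term (directly measured environment force)."""
    f_back = k_c * (x_s - x_m) + b_c * (v_s - v_m)  # PEB contribution from x_err
    return f_back + c2 * f_e                         # DFR contribution (f_e = f_opt)
```

With c2 = 0 this degenerates to pure PEB control; with k_c = b_c = 0 it degenerates to pure DFR.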

As noted above, the architecture requires direct force measurement using a force sensor. Several studies have investigated force measurement using sensors such as strain gauges [Citation4, Citation18], capacitive sensors [Citation13], and fiber-based sensors [Citation4–6, Citation11, Citation13]. The first two types are electrically driven, and the introduction of electrical current into the body limits their application [Citation19], especially in the magnetic resonance environment [Citation13, Citation20–24]. These sensors need complex geometries to produce significant strain in the axial directions and are also sensitive to temperature changes [Citation1]. In contrast, optical force sensors work by measuring changes in the intensity or phase of a light signal as it passes through a specific structure. The benefits of an optical system over an electrical one are numerous. Signals in a fiber-optic system do not suffer as much loss as in a traditional system using metal wire. Fiber-based sensors are also safer than strain gauges, since no electricity is present at the sensing site [Citation20–24]. They can discern changes with less hysteresis and can also be sterilized [Citation21–24]. However, there are limitations, including sensitivity to changes in light intensity caused by tilting of the optical cables or misalignment [Citation13]. In this work, the addition of the optical force sensor changed the teleoperation system and the kinematics/dynamics of the slave device. It was therefore necessary to formulate the new enhanced teleoperation architecture and its control strategy. The details of the sensor and the Gaussian light-sensing mathematical model are presented in the section below headed Mathematical model of the optical force sensor, while the kinematics/dynamics of both master and slave devices are presented in the section headed Kinematics and dynamics equations of the master and slave devices.

After implementing the teleoperation system, two human factor studies were conducted to (a) evaluate experimentally the performance of the enhanced teleoperation system and the optical force sensor, and (b) study the human haptic perception in the identification of remote deformability. In the first experiment, 20 subjects were asked to identify the deformability of three artificial body models without using any teleoperation approaches. The subjects used a laparoscopic instrument to examine the objects. The goal of this experiment was to define the human sensitivity level for the deformable objects: the subjects were unable to see the objects during the experiment, and the results were based on direct haptic feeling. In the second experiment, another ten subjects were asked to examine the deformable objects via the enhanced teleoperation system. The aim of this experiment was to study the effectiveness of haptic feedback on the remote human perception of deformable objects.

Enhanced haptic-enabled teleoperation system

Most haptic-enabled bilateral teleoperation control architectures have employed a combined system of sensors, actuators, and communication channels. In this paper, an optical force-sensing procedure is introduced to measure the interaction force between the slave manipulator and the environment. For this purpose, a force sensor was designed, prototyped, and added to the slave device. This extension modified the end-effector of the original device, leading to changes in the kinematics and dynamics of the slave manipulator and in the control strategy. The following section presents the enhanced parallel force/position control system.

Bilateral parallel force/position control strategy

This section presents the equations that govern the bilateral teleoperation system using a two-port network representation that includes impedance models for the master device, slave device, and communication channels in a linear teleoperation system. The characteristics of the network are analyzed using the Hybrid Matrix (H-Matrix), which relates the forces and the positions on the master and slave sides as follows:

[fh; −xe] = [H11 H12; H21 H22] [xh; fe] (1)

where xe, xh, fh, and fe represent the Laplace transforms of the environment position, the human-hand position, the human force on the master, and the environment force on the slave device, respectively. The negative sign on the environment position (xe) comes from the definition of positive current flowing into an electrical linear network, whereas the environment position has the opposite direction. It is worth mentioning that any two of the aforementioned signals can be chosen as input, whereupon the other two become the output. In this work, we chose the human-hand position (xh) and the environment force (fe) as the input. Therefore, H11, H12, H21, and H22 are the input impedance, the force scale, the position (velocity) scale with a minus sign, and the output admittance, respectively. We also assumed that the slave device was constantly in contact with the objects, and that the operator constantly grasped the master device, meaning that xe = xs and xh = xm.

As mentioned before, a hybrid force/position (DFR/PEB) control strategy is employed in the teleoperation system. From the network point of view, the bilateral communication of position and force improves teleoperator performance in terms of transparency and force tracking [Citation17]. The general system dynamic equations, derived from Figure 2, are given by Equations (2) and (3), where Zm is the impedance of the master device, approximated as the time-invariant mass of the master (mm), and the total impedance of the slave side (= Zs + Zopt) combines the mass of the slave device (ms) and the mass-spring model of the attached optical force sensor (mspring-kspring). In addition, CMst = CSlv = sbc + kc ≡ Cc in the Laplace domain denotes the local master and slave derivative (bc) plus proportional (kc) controller transfer functions (the damper-spring model), C2 is the slave interaction force gain (a scalar gain), and C1 and C3 are the master and slave force feed-forward and feedback controllers, respectively, which are equal to Cc. The remaining terms denote the active voluntary forces, sometimes called external forces, generated by the human operator and by the environment.

Figure 2. Diagram of the connected teleoperation system controlled by position-error-based (PEB) and direct force reflection (DFR) controllers. In the PEB approach, the difference in position is sent to both sides through the gain of the controller transfer function, which is identical for the master and slave (C1 = C3 = sbc + kc ≡ Cc). In the DFR control strategy, the master measures the hand position (xh) and the slave measures the interaction force (fe), which is subsequently presented to the operator through C2. The combined-control strategy can be expressed in a MIMO framework.


In general, the aforementioned system has multiple inputs and multiple outputs (MIMO), including positions and forces. The controllers' transfer functions and the manipulators' impedances form the elements of the H-Matrix. Accordingly, the matrix elements are computed using linear algebra methods from Equations (2) and (3) as follows, where Cp = (Zs + Zopt + Cc)−1.

In our teleoperation system, C2 is equal to 1, since the measured force is directly transferred to the master side without any change or amplification. According to Equations (6) to (9), all H elements depend on the slave impedance.

Optical force sensor

In our bilateral teleoperation system, an optical sensor was designed and prototyped using three gradient-index lens (GRIN-lens) fiber-optic collimators spaced at 120° intervals on a cylindrical lens holder 12 mm in diameter. A GRIN-lens consists of a material, such as glass, whose refractive index varies as a function of position in the material. The structure of the sensor is presented in Figure 3. The body of the sensor consists of a cylindrical flexible structure (holder); owing to its springy structure, it converts applied forces into displacements and vice versa. The GRIN-lens holder and a mirror (reflector) are rigidly attached to the tip of the sensor body. There are several places where forces can be sensed on MIS tools, including the tool tip, the part of the tool inside/outside the body, the access channels, and the tool handle. If the force sensor is placed outside the body, because of size and sterilization concerns, the measured force acting at the tool tip may be inaccurate [Citation1]. The effects of friction and leverage suggest that tool-tissue interaction forces should be sensed by placing the sensor in an optimized position as close as possible to the tip.

Figure 3. The prototyped optical force sensor, showing the positions of the GRIN-lens fiber-optic collimators, the flexible structure, and the mirror. The body of the sensor consists of a cylindrical flexible structure that converts the applied forces into displacements and vice versa. The GRIN-lens holder and the opposing reflector plate (mirror) are rigidly attached to the ends of the sensor body.


As shown in Figure 4, an infrared broadband light source centered at 1550 nm is used in conjunction with a 1-to-4 fiber-optic splitter/combiner to launch equal-power light signals into the three GRIN-lens collimators. The fourth output of the fiber-optic splitter/combiner is used to monitor the stability of the broadband light (reference channel). This allows us to compensate, through software, for any momentary instability or long-term drift in the output power of the 1550-nm broadband light source, which would otherwise result in erroneous readings of the force sensor [Citation24]. The light emerging from each collimator is back-reflected into the same collimator by the mirror. A corresponding photodiode detects the applied-force-induced change in the power of the reflected light signal collected by each collimator, in the form of an electrical current. The current generated by each photodiode is converted to a voltage by a low-noise, high-gain operational amplifier circuit. The output voltages are digitized using a National Instruments USB-6009 DAQ card and processed by a computer running LabVIEW software (National Instruments Corporation, Austin, TX).
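In software, the reference-channel compensation described above amounts to rescaling each sensing-channel voltage by the ratio of the nominal to the current reference reading. A hedged sketch (the function and variable names are ours, not from the paper's LabVIEW code):

```python
def compensate_channels(v_sensed, v_ref, v_ref_nominal):
    """Cancel common-mode source drift: scale the three sensing-channel
    voltages by how far the reference channel has drifted from nominal."""
    scale = v_ref_nominal / v_ref
    return [v * scale for v in v_sensed]
```

If the 1550-nm source sags by 10%, all four channels sag together, and the scaling restores the sensing channels to their drift-free values.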

Figure 4. Diagram showing the different parts of the optical force sensing system. The light is demultiplexed to four equal-power signals using a 4-way splitter. Three of the signals are received and transmitted by three GRIN-lens collimators in the sensor unit. The three transmitted signals from the GRIN-lens collimators are received by photodiodes along with the fourth signal as a reference signal. All signals are amplified and sent to a personal computer via a DAQ card.


Mathematical model of the optical force sensor

To formulate the enhanced teleoperation system, it is necessary to obtain the required mathematical model of the added optical sensor. The main goal of this section is to determine the mathematical relationship between the force applied to the sensor and the sensor output (voltage/light intensity). The light-intensity distribution profile emerging from the GRIN-lens fiber-optic collimator has a Gaussian form. The total power of the back-reflected light collected by the GRIN-lens collimator is calculated from geometrical characteristics, i.e., from the GRIN-lens-to-reflector distance and the mode-field beam diameter (width of the Gaussian beam) at that distance [Citation25, Citation26]. The Gaussian light-intensity distribution profile in a cross-section of the beam can be represented by the following polar equation [Citation25]:

I(r) = I0 exp(−2r²/ω0²) (10)

where I(r) is the light intensity at radius r, I0 is the maximum light intensity (i.e., the intensity on the axis of the beam, at r = 0), and ω0 is the mode-field radius.

Employing the Gaussian model of the light beam [Citation25], the total light flux (i.e., power) emitted from the GRIN-lens is given by

Φ = ∫0^(D/2) I(r) 2πr dr (11)

where D is the diameter of the light beam in a cross-sectional plane at a given distance from the GRIN-lens. Note that the total light flux remains the same regardless of the distance, whereas the intensity (W/mm²) of the beam at a given radius r in the cross-section is reduced with increasing distance of the reflector from the GRIN-lens.
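The distance-invariance of the total flux can be checked numerically by integrating the Gaussian profile over the cross-section; when the beam diameter D is much larger than the mode-field radius, the result approaches the closed form Φ = πI0ω0²/2 (the values below are illustrative):

```python
import math

def gaussian_intensity(r, i0, w0):
    """Gaussian beam profile I(r) = I0 * exp(-2*r**2 / w0**2)."""
    return i0 * math.exp(-2.0 * r * r / (w0 * w0))

def total_flux(i0, w0, r_max, n=100_000):
    """Midpoint-rule integration of I(r) * 2*pi*r dr from 0 to r_max."""
    dr = r_max / n
    return sum(
        gaussian_intensity((k + 0.5) * dr, i0, w0) * 2.0 * math.pi * (k + 0.5) * dr * dr
        for k in range(n)
    )

# With r_max >> w0 the numeric flux matches pi * I0 * w0**2 / 2.
phi = total_flux(1.0, 0.06, r_max=6 * 0.06)
```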

Figure 5 shows a transmitting-receiving GRIN-lens positioned at distance h from an axially translated reflector. The back-reflected light collected by the GRIN-lens can be treated as originating from a virtual GRIN-lens, i.e., the trajectory of the light is represented by the virtual image of the real GRIN-lens in the plane mirror. Thus, the challenge is to estimate the fraction of the collected light that is coaxial with the GRIN-lens. This fraction is calculated from the geometrical relationship between the Gaussian profile of the emitted beam at a distance 2h from the GRIN-lens and the diameter of the collecting (receiving) GRIN-lens.

Figure 5. The Gaussian beam distribution profile [Citation25] after leaving a fiber-optic GRIN-lens collimator. The transmitted-received light intensity varies as a function of the axial displacement of the reflector. The light travels the distance to the reflector and back to the same collimator. The virtual transmitting-receiving GRIN-lens is assumed to be placed in a mirrored position at 2h from the real GRIN-lens.


A change in the width of the Gaussian profile can be assumed to be linear if the change in h between the GRIN-lens and the reflector is small. Therefore, the profile of the light beam is modeled as a conical shape, with boundaries determined by the divergence angle β and a Gaussian width equal to 2ω0. At the virtual GRIN-lens tip, the intensity of the light is the maximum light intensity (I0). However, at the real GRIN-lens tip, the Gaussian width increases to 2ω and the maximum light intensity drops to I0ω0²/ω² (by conservation of the total flux). Accordingly, only a small fraction of the back-reflected light is collected, limited by the diameter of the collecting GRIN-lens. The collected light flux (power) is calculated as the fraction of the transmitted light flux at the end-face plane of the GRIN-lens that is enclosed within the boundaries specified by the diameter of the optical fiber, where d is the actual diameter of the transmitting/receiving GRIN-lens, h is half the distance from the virtual transmitting/receiving GRIN-lens to the transmitting/receiving GRIN-lens, and λ is half the acceptance angle of the lens.

Varying the amount of light flux changes the electrical signal generated by the optical detector. The theoretical voltage output of the sensor connected to the detecting photodiode is calculated from the distance (h) between the GRIN-lens and the reflector, where kv is a global conversion (voltage transformation) factor that relates the light flux to the voltage output, varying within the range 0–10⁴, and ξr is an efficiency (uncertainty) parameter, lying between 0 and 1, that accounts for misalignment between the transmitted and received light.

In the sensing mechanism, the reference channel is used to compensate for optical signal variations caused by fiber bending and/or light-intensity drift. This channel reduces sensing errors provided that both the sensing and reference channels experience similar bending and/or light-intensity drifts [Citation24]. A dimensionless model is obtained by normalizing the voltage output to the maximum voltage (the voltage output of the reference channel), where the efficiency parameter ξr is equal to 1 for the reference voltage (Vtheo.max) and to 0.39 for the output voltage (Vtheo). In addition, λ = 30°, β = 16°, ω0 = 0.06 mm, h0 = 4 mm, and D = 20 mm.

Using the normalization in Equation (14), the effect of the voltage transformation factor is canceled out. Equation (14) is nonlinear over the entire displacement range; however, the prototyped sensor operates in only a small part of this range. The calibration of the normalized voltage-deflection curve is shown in Figure 6. To obtain this curve, the sensor was mounted on a vibration-isolated table and the GRIN-lens-to-reflector displacement was measured under incrementally applied forces. In addition, the analytical values of the normalized voltage were calculated from Equation (14). Comparing both trends in Figure 6, the experimental curve (mean = 0.335, σ = 0.036) follows the analytical curve (mean = 0.350, σ = 0.035) along a linear trend line over the operating range. This shows a close match between the experimental and mathematical models.

Figure 6. Calibration of the optical force sensor is performed by finding the normalized voltage-deflection curve from experimental results (black curve). This figure also shows the analytical calculation of the voltage-deflection curve using Equation (14) (gray curve) for the operational range of the force sensor. The linear approximation of the experimental voltage-deflection curve (dashed line) shows a reasonable matching between the experimental results (mean= 0.335, σ= 0.036) and the analytical calculation of mathematical model (mean = 0.350, σ = 0.035).


As mentioned before, the flexible structure is a spring, which converts displacement to force and vice versa. Hooke's law is used to calculate the applied force on the sensor:

fopt = kspring Δx (15)

where Δx is the reflector displacement and kspring is the spring constant (stiffness). This factor was initially unknown for the force sensor.

An experiment was conducted to identify the stiffness of the flexible structure by obtaining a force-deflection curve, as shown in Figure 7. For this purpose, incremental forces ranging from 1 N to 6 N were applied to the sensor. The forces were measured by a Vernier dual-range force sensor, while the reflector displacement was measured simultaneously. Each force loading was repeated ten times and the corresponding values averaged to obtain the mean reflector displacement. The slope of the resulting curve represents the stiffness of the optical force sensor (9.20 N/mm).
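The slope extraction is an ordinary least-squares fit of force against deflection. A sketch with synthetic data (the actual measured values are not tabulated in the text, so the numbers here are hypothetical):

```python
def stiffness_from_data(deflections_mm, forces_n):
    """Least-squares slope of the force-deflection curve = stiffness in N/mm."""
    n = len(forces_n)
    mx = sum(deflections_mm) / n
    my = sum(forces_n) / n
    num = sum((x - mx) * (y - my) for x, y in zip(deflections_mm, forces_n))
    den = sum((x - mx) ** 2 for x in deflections_mm)
    return num / den

# Hypothetical mean deflections for 1..6 N loads on a 9.2 N/mm structure:
defl = [f / 9.2 for f in (1, 2, 3, 4, 5, 6)]
k_spring = stiffness_from_data(defl, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
```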

Figure 7. Graph showing the experimental identification results for obtaining the stiffness of the flexible structure of the force sensor. The slope of the curve represents the spring constant (stiffness) of the optical force sensor (9.20 N/mm).


The distance between the fiber tip and the mirror varies along the longitudinal axis of the fiber, whereas the mirror is kept perpendicular to the fiber. Accordingly, the actual distance between the reflector and the fiber tip (h) is calculated by subtracting the reflector displacement (Δx) due to force loading from the fixed gap distance between the reflector and the fiber tip (h0= 6 mm). In other words, the relationship between Δx and the normalized voltage is as follows:

By substituting Δx from Equation (16) into Equation (15) and using the flexible structure stiffness value, the following equation is obtained for the relationship between the normalized voltage reading (Vtheo(h)/Vtheo.max) and the applied force reading (fopt):
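Putting Equations (15) and (16) together in code: once the optical model has been inverted upstream to give the current gap h, the force reading follows from the identified stiffness (a sketch; the constant values are taken from the text above, the names are ours):

```python
K_SPRING = 9.20  # N/mm, identified stiffness of the flexible structure
H0 = 6.0         # mm, unloaded gap between reflector and fiber tip

def force_from_gap(h_mm):
    """f_opt = k_spring * dx, with dx = h0 - h the reflector displacement."""
    dx = H0 - h_mm
    return K_SPRING * dx  # newtons
```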

Kinematics and dynamics equations of the master and slave devices

Two PHANTOM Omni haptic devices (SensAble Technologies, Inc., Woburn, MA) [Citation27] were used as master-slave robot manipulators in this study. As mentioned, the slave manipulator was augmented with the optical force sensor. The optical force sensor acted as a physical mass-spring with unknown stiffness. The stiffness was identified experimentally as described in the previous section. The hardware augmentation also changed the kinematics and dynamics of the slave manipulator. Therefore, it was necessary to formulate the new enhanced kinematics/dynamics of the robot manipulators and the control strategy. This section presents the required mathematical formulation of the kinematics and dynamics for the master and the enhanced slave devices.

(1) Kinematics equations. The PHANTOM Omni is a robot manipulator that offers 6 degrees of freedom (DOF) of positional sensing (three translational and three rotational). It also has three actuators, enabling force feedback along the three translational DOF. The following equations govern the kinematics of the robot in free motion, which results in a 3-DOF movement.

The kinematics equation of the master device gives the relationship between its joint angles and the Cartesian coordinates of its end-effector. The Cartesian-space coordinates of the original devices are referenced from the base to the end of link three. In the slave device, two new links were added to the end-effector of the PHANTOM Omni with the addition of the optical force sensor. The elements of the position vector corresponding to the operational forward kinematic model for the devices are given below, where x, y, and z denote the position of the end-effector in Cartesian space. L1, L2, and L3O ("O" in L3O denotes "original") are the links of the device, kz is the workspace transformation offset between the origin of the device and the first joint, and θ1–θ3 are the joint angles, which are shown in Figure 8. Sa = sin(θa), Ta = cos(θa), L1 = 0.025 m, L2 = L3O = 0.133 m, and kz = L2 + 0.035 = 0.168 m.

Figure 8. The PHANTOM Omni haptic device as enhanced by addition of the optical force sensor. With this extension, the force sensor acts as a physical mass-spring with unknown stiffness, which is identified experimentally. The extension also changes the kinematics and dynamics of the slave manipulator.


The optical sensor is attached to the slave device via two links, L4 = 0.45 m and L5 = 0.08 m. In other words, L3O in the slave device is extended by L4 and L5 (L3slave = L3O + L4 + L5). Because the variation in link 5 is negligible compared with its initial length, L3slave is assumed to have a fixed length.

To define the inverse kinematics of the device, it is necessary to calculate θ1–θ3 as functions of the Cartesian coordinates x, y, and z.
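The closed-form inverse solution was also lost in extraction. As an illustration of the same step, the sketch below inverts a hypothetical Omni-style forward model (defined inline, an assumption rather than the paper's equations) numerically with Newton iterations, using a finite-difference Jacobian and a Cramer's-rule solve:

```python
import math

L1, L2, L3O = 0.025, 0.133, 0.133
KZ = L2 + 0.035

def fk(q, L3=L3O):
    # Hypothetical Omni-style forward model (not the paper's exact equations).
    t1, t2, t3 = q
    r = L2 * math.cos(t2) + L3 * math.sin(t3)
    return (-math.sin(t1) * r,
            L1 + L2 * math.sin(t2) - L3 * math.cos(t3),
            math.cos(t1) * r - KZ)

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def ik_newton(target, q0=(0.1, 0.3, 0.9), iters=100, h=1e-6):
    q = list(q0)
    for _ in range(iters):
        f0 = fk(q)
        e = [t - p for t, p in zip(target, f0)]
        if max(abs(v) for v in e) < 1e-10:
            break
        # Finite-difference Jacobian of the forward model
        J = [[0.0] * 3 for _ in range(3)]
        for j in range(3):
            qp = list(q)
            qp[j] += h
            fp = fk(qp)
            for i in range(3):
                J[i][j] = (fp[i] - f0[i]) / h
        # Solve J dq = e by Cramer's rule and take a Newton step
        d = det3(J)
        for j in range(3):
            Jc = [row[:] for row in J]
            for i in range(3):
                Jc[i][j] = e[i]
            q[j] += det3(Jc) / d
    return q
```

A numerical inverse is a stand-in here; the paper's closed form, once recovered, should be preferred for real-time control.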

The joint angle velocity vector and the operational velocity vector are related through the Jacobian matrix J(θ) (Equation (22)), in which L3O should be replaced with L3slave to obtain the enhanced slave Jacobian matrix.

To obtain the joint velocities, the inverse of the Jacobian matrix must be calculated; it is defined as the adjugate of the matrix divided by its determinant. The inverse exists under the condition θ3 ≠ θ2 + π/2, where the determinant is nonzero. Again, the original link L3O should be replaced with L3slave for the slave device determinant.
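Under a hypothetical Omni-style forward model (an assumption; the paper's published Jacobian did not survive extraction), the analytic Jacobian takes the form below, and its determinant vanishes exactly when θ3 = θ2 + π/2, matching the singularity condition stated above:

```python
import math

L1, L2, L3 = 0.025, 0.133, 0.133

def jacobian(t1, t2, t3):
    # Analytic Jacobian of the hypothetical forward model
    #   x = -sin(t1) * r,  y = L1 + L2 sin(t2) - L3 cos(t3),
    #   z =  cos(t1) * r - kz,  with r = L2 cos(t2) + L3 sin(t3).
    s1, c1 = math.sin(t1), math.cos(t1)
    s2, c2 = math.sin(t2), math.cos(t2)
    s3, c3 = math.sin(t3), math.cos(t3)
    r = L2 * c2 + L3 * s3
    return [[-c1 * r,  s1 * L2 * s2, -s1 * L3 * c3],
            [0.0,      L2 * c2,       L3 * s3],
            [-s1 * r, -c1 * L2 * s2,  c1 * L3 * c3]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
```

For this model det J = -r·L2·L3·cos(θ3 - θ2), which is zero exactly at θ3 = θ2 + π/2, independent of θ1.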

(2) Dynamics equations. The general form of the dynamic equation of motion for robot manipulators is as follows [Citation28]: where θ = [θ1 θ2 θ3]T ∈ R3×1 is the vector of joint angles, M(θ) ∈ R3×3 is the inertia (mass) matrix, C(θ, θ̇) ∈ R3×3 represents the velocity-dependent elements (the Coriolis and centrifugal (damping) terms), G(θ) ∈ R3×1 is the gravity term together with other forces acting on the joints, and τ = [τ1 τ2 τ3]T ∈ R3×1 is the vector of exerted joint torques. It is also assumed that the robot is a 3-DOF manipulator.

In practice, it is desirable to express the dynamics of master-slave robots in the Cartesian coordinate system, where the tasks and interactions with the operator and environment are naturally specified. In this section, we develop the dynamic model of our system in Cartesian coordinates. Note that the forces acting on the end-effector, f, result in joint torques through τ = JT(θ)f: with the force vector at the tip f = [fx fy fz]T N, the joint torques are τ1 = (J11fx + J21fy + J31fz) N-m, τ2 = (J12fx + J22fy + J32fz) N-m, and τ3 = (J13fx + J23fy + J33fz) N-m. Multiplying both sides of Equation (25) by JT(θ) yields Equation (27).
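The static force-torque map above is simply the transpose relationship τ = JᵀF, which can be sketched directly (the function and variable names here are ours):

```python
def joint_torques(J, f):
    # Map a Cartesian tip force f = [fx, fy, fz] (N) to joint torques (N-m)
    # via tau = J^T f, i.e. tau_k = J[0][k]*fx + J[1][k]*fy + J[2][k]*fz.
    return [sum(J[i][k] * f[i] for i in range(3)) for k in range(3)]
```

With the identity Jacobian the torques equal the applied forces, which is a quick sanity check for any Jacobian implementation.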

In addition, Equation (28) restates the relationship between the joint and operational velocities from Equation (22).

Differentiating Equation (28) yields Equation (30).

Therefore, the Cartesian-space equivalent of the dynamic Equation (25) is derived by substituting Equations (28) and (30) into Equation (27).

Again, in Equations (31) to (34), there is a noticeable difference between the dynamic equations of the master and slave robot manipulators due to the enhancement of the slave device. In addition, the models of the operator's arm and the environment can be added into Equation (31) as a second-order linear time-invariant (mass-damper-spring) equation. The final dynamics equations for the master and slave devices in Cartesian coordinates then follow, where xm and xs are end-effector Cartesian positions, θm and θs denote joint angle positions, Mxm(θm) and Mxs(θs) are symmetric positive-definite inertia (mass) matrices, Cxm(θm, θ̇m) and Cxs(θs, θ̇s) represent velocity-dependent (damping) elements, and Gxm(θm) and Gxs(θs) are the gravity terms, for the master and slave, respectively. In these equations, Mxm, Cxm, and Gxm can be extracted from the mass, damping, and stiffness elements of the human hand and the master device, respectively. The Mxs and Gxs terms are related to the masses and stiffnesses of the environment, the original slave device, and the augmented force sensor. The elements of the Cxs matrix depend only on the damping of the slave device and the environment.
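As a minimal illustration of the second-order mass-damper-spring form above, the 1-DOF sketch below (with our own toy parameters, not the identified system values) integrates m·ẍ + c·ẋ + k·x = f and settles at the static deflection f/k:

```python
def simulate_1dof(m=0.1, c=2.0, k=200.0, f=1.0, dt=1e-3, steps=20000):
    # Semi-implicit Euler integration of  m x'' + c x' + k x = f.
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (f - c * v - k * x) / m
        v += a * dt
        x += v * dt
    return x   # settles toward the static deflection f / k
```

The same structure, with coupled matrix coefficients, underlies the Cartesian master and slave models; a constant applied force of 1 N against a 200 N/m stiffness settles at 5 mm.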

Experiments and results

As the quality of a teleoperation system depends on a combination of human perception and hardware features, two human factor studies were conducted to achieve the following goals: (a) to verify experimentally the optical force sensor's function and performance in direct force measurement at the slave side only, within our hybrid force/position control strategy, which transfers the positions of both sides and only the interaction force with the environment. Essentially, through the experiments we investigated the performance of the control strategy in the deformability discrimination of remote objects. (b) To develop and examine a human-centered approach to tuning control parameters in a telesurgery system as a human-in-the-loop system. In such a system, it is necessary to adjust the system parameters based on human perceptual capabilities and limitations; thus, it is necessary to learn how humans discriminate deformable objects using laparoscopic surgical tools. First, an experiment collected subject-reported reference data on the perception of deformability. Essentially, we measured human perception of deformability in a laparoscopic surgical setting, informed by the subjects' past real-life experience. We then tuned the control parameters based on the data collected in the first experiment and verified them experimentally through a second experiment in a similar laparoscopic set-up. A 1-DOF movement (in the axial direction) was considered for the experimental teleoperation set-up to investigate the effect of haptic feedback.

Experiment I: Experimental set-up, procedure and results

The first experiment was carried out to quantify the deformability of objects as perceived by human subjects in direct contact with different objects through a laparoscopic tool. Equipment and materials included three deformable objects – a sponge, a suturing pad, and a model organ – as well as a laparoscopic training box of the kind used by novice/resident surgeons to practice laparoscopic surgery. The view of the contents of the box was obstructed to ensure that subjects used only their sense of touch to identify the deformability of the objects. Twenty subjects (12 males and 8 females) aged 20–23 were asked to discriminate the objects without using the master-slave operation.

As an experimental task, a subject grasped the handle of a real laparoscopic surgical tool. The subject was asked to examine a deformable body that was placed inside the laparoscopic training box and report the degree of deformability of that object by providing a percentage grade ranging from 0% (very low deformability) to 100% (very high deformability) based on his/her past experience and/or memory. A repeated measures (within subject) design was used for the experiment, meaning that each subject carried out the task three times for three randomly chosen deformable objects, so that nine answers were collected from each subject. The order of object assignments for each trial was randomized before starting the experiment.

The mean and standard errors of the reported grades were calculated across all subjects and are shown in Figure 9. According to the results, the sponge was the most deformable object, with an average reported deformability of 91%. The reported deformabilities of the suturing pad and the model organ were 78% and 31%, respectively. In addition, the results showed that the sponge and the suturing pad were very close in deformability, while the stiffness of the model organ was far different from that of the other two. The results of this experiment were then used in the calculation of force feedback for the deformable objects in the second teleoperation experiment.
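The reported statistics (mean and standard error of the mean across subjects) can be computed as follows; the grades in the test are illustrative values, not the study's raw data:

```python
import math

def mean_and_sem(grades):
    # Mean and standard error of the mean (sample SD / sqrt(n)).
    n = len(grades)
    mean = sum(grades) / n
    sd = math.sqrt(sum((g - mean) ** 2 for g in grades) / (n - 1))
    return mean, sd / math.sqrt(n)
```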

Figure 9. Experiment I: Human subjects’ perception of deformability was quantified for three objects – a sponge, a suturing pad, and a model organ – using a percentage scale.


Experiment II: Experimental set-up, procedure and results

In the second experiment, a further ten human subjects (7 males and 3 females) aged 20–23 participated in a teleoperation task. As shown in Figure 10, two PHANTOM Omni devices were used in a master-slave set-up, and the optical force sensor was attached to the slave device. Based on the results of the first experiment, control parameters were tuned to generate varying force feedback at the master side, enabling subjects to discriminate the three deformable objects using their sense of touch. The parameters were assigned to the deformable objects using a gain-scheduling method; essentially, the nonlinear characteristics of the materials were expressed as a finite combination of linear models. A PD controller was used to regulate the generated force feedback. The assigned P gains were 0.15, 0.2, and 1 for the sponge, suturing pad, and model organ, respectively. The gains for the sponge and suturing pad were selected to be very close to one another based on the results of the previous experiment. The assigned D gains were selected as 0.001, 0.002, and 0.003 by trial and error. In addition, visual feedback was provided to subjects by allowing them to observe the surgical tool movement on the slave side.
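A minimal sketch of the gain-scheduled PD force rendering described above, using the gains quoted in the text; the function and variable names are ours, and the error signals are assumed to be master-slave position and velocity differences:

```python
# (Kp, Kd) pairs scheduled per identified object, from the text above.
GAINS = {
    "sponge":       (0.15, 0.001),
    "suturing_pad": (0.20, 0.002),
    "model_organ":  (1.00, 0.003),
}

def rendered_force(obj, pos_err, vel_err):
    # PD law: F = Kp * (xm - xs) + Kd * (vm - vs)
    kp, kd = GAINS[obj]
    return kp * pos_err + kd * vel_err
```

Scheduling the gains per object is what lets one linear PD law approximate three materials with different (nonlinear) stiffness characteristics.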

Figure 10. The experimental master-slave set-up to evaluate the performance of the proposed teleoperation system. As the teleoperation task, the subject grasped the end-effector of the master haptic device and examined the deformability of the remote objects on the other side of the room.


The experimental task started with a subject grasping the end-effector of the master device and examining the three deformable objects on the slave side, using a laparoscopic tool in a teleoperation setting over the Internet. The subject moved the end-effector up and down to touch the remote hidden objects. This experiment was carried out under two conditions: (1) with visual feedback alone, and (2) with both haptic and visual feedback. In the visual-only condition, each subject identified the deformability of the objects by manipulating the master device while only watching the displacement of the slave arm, with no force feedback. In the visual-and-haptic condition, the subjects identified which materials they believed to have high, medium, and low deformability using both visual and haptic feedback. A repeated measures (within-subject) design was employed, meaning that each subject carried out the task three times for each condition. The order of object presentation in each trial was randomly selected before starting the experiment.

The results of the second experiment are presented in Figure 11. The black bars show that the control strategy was able to provide effective force feedback to the subjects, enabling them to identify the deformable objects with a high degree of precision. In other words, in such a human-centered approach, the control parameters were properly tuned based on the quantified deformability of the objects reported by the subjects in the first human factor study.

Figure 11. Experiment II: Identification by material of the three deformable objects – a sponge, a suturing pad, and a model organ – with visual feedback alone and with both haptic and visual feedback. The black bars show that the control strategy was able to provide effective force feedback to the subjects, enabling them to identify the deformable objects with a high degree of precision. In other words, in such a human-centered approach, the control parameters are properly tuned based on the quantified deformation of objects as reported by the subjects in the first human factor study.


With visual feedback alone, the subjects could only identify the correct materials with 10%, 30%, and 50% accuracy for the suturing pad, model organ, and sponge, respectively. Surprisingly, with visual and haptic feedback the percentage of correct identification jumped to 80% for the suturing pad, while the subjects' ability to detect the sponge increased four-fold with haptic force feedback. In addition, the results show that the difference between the forces generated by the selected gains for the sponge and suturing pad was still noticeable. To confirm whether there was a statistically significant difference between the no-force-feedback and force-feedback conditions, a repeated measures analysis of variance (ANOVA) was conducted with a significance level of 0.05. The results of the ANOVA (F(1,9) = 18.44; p = 0.002) show that force feedback increases the ability of subjects to correctly identify materials with various degrees of deformability. In addition, the force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS.
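With one within-subject factor at two levels (no force feedback vs. force feedback), the repeated-measures ANOVA reduces to a paired t-test, with F(1, n-1) = t². The sketch below computes the statistic on illustrative per-subject scores, not the study's data:

```python
import math

def rm_anova_two_levels(cond_a, cond_b):
    # Repeated-measures ANOVA, one factor with two levels:
    # equivalent to a paired t-test, so F(1, n-1) = t^2.
    n = len(cond_a)
    d = [a - b for a, b in zip(cond_a, cond_b)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t * t, (1, n - 1)
```

For the paper's ten subjects this yields the reported degrees of freedom (1, 9); the F value of 18.44 of course depends on the actual scores.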

Conclusions and future work

In this study, a haptic-enabled teleoperation system was implemented to verify experimentally the role of haptic feedback in the remote discrimination of deformable objects by humans, using a combined-control master-slave scheme. The teleoperation system combines the direct force reflection (DFR) and position-error-based (PEB) control strategies. To measure interaction forces with the remote environment, an optical force sensor was designed, prototyped, and attached to the slave device to enhance system performance. The enhanced teleoperation system was described mathematically based on the aforementioned changes. As the quality of a teleoperation system depends on a combination of human perception and system features, two human factor studies were conducted to achieve the following goals: (a) to verify experimentally the optical force sensor's function and direct force measurement performance; and (b) to develop and examine a human-centered approach in a telesurgery system as a human-in-the-loop system.

The experimental results show that the use of a transmitting-receiving fiber allows the force sensor components to be successfully integrated into a haptic-enabled teleoperation system. The inclusion of a reference channel improves the sensor performance by reducing the signal variation caused by momentary instability or long-term drift in the output power of the light source, which would otherwise result in erroneous force readings. It also eliminates the effect of the voltage transformation factor, which is difficult to determine because of its variation. The mathematical description of the optical force sensing shows the relationship between the light intensity and the output force. The sensor accurately measures the interaction forces from various deformable objects, and a high degree of transparency is achieved with only one sensor. The normalized voltage-deflection curve indicates a close match between the experimental and mathematical optical force-sensor models. Note that the extended design modified the end-effector of the original device by adding a spring effect at the slave tip; accordingly, the hardware extension slightly changed the kinematics and dynamics of the slave manipulator and the control strategy.

The first human factor study quantified human perception of the deformability of three objects examined directly with a laparoscopic tool. The results were used to fine-tune the control parameters of the teleoperation system. They show that the sponge and the suturing pad were very close in deformability, while the stiffness of the model organ was far different from that of the other two. This human-centered approach can be used as an alternative to machine-based methods for measuring the stiffness of deformable objects.

A 1-DOF tele-manipulation experimental set-up was used in the second experiment. According to the results, the teleoperation system is able to provide effective haptic feedback to the hand of a user for identifying remote deformable objects. In addition, the results show that the difference between the forces generated by the selected gains for the sponge and suturing pad was still noticeable when compared with the results of the first experiment. There was also a statistically significant difference between the no-force-feedback and force-feedback conditions, as confirmed by a repeated measures analysis of variance (ANOVA). Although only one force sensor was employed, the performance of the bilateral system was acceptable, giving a reasonable approximation in comparison with a PEB-only control strategy.

In future work, it would be interesting to examine more systematic control approaches and to verify the effects of the manipulator kinematics and dynamics on system performance using an automatic gain-scheduling method. In addition, future work will focus on the design of a 6-DOF optical force sensor on a Stewart platform using fiber Bragg gratings (FBGs) [Citation29–31].

Declaration of interest

This research is supported by the McLaren Foundation, Michigan, under grant #330258-71010.

Acknowledgements

The authors would like to express their gratitude to the past and fellow students in the Research in Engineering and Collaborative Haptics (REACH) lab at Kettering University, Pedro Henrique Affonso, Reza Yousefian, Raniel Ornelas, and Garrett Kottmann, for their invaluable technical assistance and support in implementing the teleoperation system and in data collection from the experimental set-up.

Notes

1Part of this research was previously presented at the 14th Annual MSU/FAME Community Research Forum in Flint, Michigan, May 2012.

References

  • Trejos AL, Patel RV, Naish MD. 2010. Force sensing and its application in minimally invasive surgery and therapy: a survey. Proceedings of the Institution of Mechanical Engineers, Part C: J Mechanical Engineering Science 224(7):1435–1454
  • Akinbiyi T, Reiley CE, Saha S, Burschka D, Hasser CJ, Yuh DD, Okamura AM. 2006. Dynamic augmented reality for sensory substitution in robot-assisted surgical systems. Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS ‘06), New York, NY, September 2006. pp 567–570
  • Gerovich O, Marayong P, Okamura AM. 2004. The effect of visual and haptic feedback on computer-assisted needle insertion. Comput Aided Surg 9(6):243–249
  • Peirs J, Clijnen J, Reynaerts D, Van Brussel H, Herijgers P, Corteville B, Boone S. 2004. A micro optical force sensor for force feedback during minimally invasive robotic surgery. Sensors and Actuators A 115(2–3):447–455
  • Massaro A, Spano F, Cazzato P, Cingolani R, Athanassiou A. 2011. Innovative optical tactile sensor for robotic system by gold nanocomposite material. Progress in Electromagnetics Research M 16:145–158
  • Hagn U, Konietschke R, Tobergte A, Nickl M, Jörg S, Kübler B, Passig G, Gröger M, Fröhlich F, Seibold U, et al. 2010. DLR MiroSurge: A versatile system for research in endoscopic telesurgery. Int J Comput Assist Radiol Surg 5(2):183–193
  • Tavakoli M, Aziminejad A, Patel RV, Moallem M. 2006. Methods and mechanisms for contact feedback in a robot-assisted minimally invasive environment. Surg Endosc 10(2):1570–1579
  • Weiss H, Ortmaier T, Maass H, Hirzinger G, Kuehnapfel U. 2003. A virtual-reality-based haptic surgical training system. Comput Aided Surg 8(5):269–272
  • Kim HK, Rattner DW, Srinivasan MA. 2004. Virtual-reality-based laparoscopic surgical training: The role of simulation fidelity in haptic feedback. Comput Aided Surg 9(5):227–234
  • Maass H, Chantier BBA, Cakmak HK, Trantakis C, Kuehnapfel UG. 2003. Fundamentals of force feedback and application to a surgery simulator. Comput Aided Surg 8(6):283–291
  • Su H, Fischer GS. A 3-axis optical force/torque sensor for prostate needle placement in magnetic resonance imaging environments. Proceedings of the 2nd IEEE International Conference on Technologies for Practical Robot Applications (TePRA 2009), Woburn, MA, November 2009. pp 5–9
  • Rosen J, Solazzo M, Hannaford B, Sinanan M. 2002. Task decomposition of laparoscopic surgery for objective evaluation of surgical residents’ learning curve using hidden Markov model. Comput Aided Surg 7:49–61
  • Puangmali P, Althoefer K, Seneviratne LD, Murphy D, Dasgupta P. 2008. State-of-the-art in force and tactile sensing for minimally invasive surgery. IEEE Sensors J 4:371–381
  • Hu T, Castellanos AE, Tholey G, Desai JP. 2002. Real-time haptic feedback in laparoscopic tools for use in gastro-intestinal surgery. In: Dohi T, Kikinis R, editors. Proceedings of the 5th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2002), Tokyo, Japan, September 2002. Part I. Lecture Notes in Computer Science 2488. Berlin: Springer. pp 66–74
  • Lawrence DA. 1993. Stability and transparency in bilateral teleoperation. IEEE Trans Robotics Automation 9(5):624–637
  • Hannaford B. 1989. A design framework for teleoperators with kinesthetic feedback. IEEE Trans Robotics Automation 5(4):426–434
  • Salcudean SE, Zhu M, Zhu WH, Hashtrudi-Zaad K. 2000. Transparent bilateral teleoperation under position and rate control. Int J Robotics Res 19:1185–1202
  • Reiley CE, Akinbiyi T, Burschka D, Chang DC, Okamura AM, Yuh DD. 2008. Effects of visual force feedback on robot-assisted surgical task performance. J Thoracic Cardiovasc Surg 135(1):196–202
  • Stanley R, Abildskov JA, Mcfee R. 1963. Resistivity of body tissues at low frequencies. Circ Res 12:40–50
  • Yip MC, Yuen SG, Howe RD. 2010. A robust uniaxial force sensor for minimally invasive surgery. IEEE Trans Biomed Eng 57(5):1008–1011
  • Polygerinos P, Seneviratne LD, Razavi R, Schaeffter T, Althoefer K. 2013. Triaxial catheter-tip force sensor for MRI-guided cardiac procedures. IEEE/ASME Trans Mechatronics 18(1):386–396
  • Gassert R, Moser R, Burdet E, Bleuler H. 2006. MRI/fMRI-compatible robotic system with force feedback for interaction with human motion. IEEE/ASME Trans Mechatronics 11(2):216–224
  • Polygerinos P, Ataollahi A, Schaeffter T, Razavi R, Seneviratne LD, Althoefer K. 2011. MRI-compatible intensity-modulated force sensor for cardiac catheterization procedures. IEEE Trans Biomed Eng 58(3):721–726
  • Puangmali P, Hongbin L, Seneviratne LD, Dasgupta P, Althoefer K. 2012. Miniature 3-axis distal force sensor for minimally invasive surgical palpation. IEEE/ASME Trans Mechatronics 17(4):646–656
  • Polygerinos P, Seneviratne LD, Althoefer K. 2011. Modeling of light intensity-modulated fiber-optic displacement sensors. IEEE Trans Instrum Meas 60(4):1408–1415
  • Puangmali P, Althoefer K, Seneviratne LD. 2010. Mathematical modeling of intensity-modulated bent-tip optical fiber displacement sensors. IEEE Trans Instrum Meas 59(2):283–291
  • Silva AJ, Ramirez OAD, Vega VP, Oliver JPO. 2009. PHANToM OMNI haptic device: kinematic and manipulability. Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA ‘09), Cuernavaca, Mexico, September 2009. pp 193–198
  • Craig J. 2005. Introduction to Robotics: Mechanism and Control (3rd Edition). Upper Saddle River, NJ: Pearson Prentice Hall
  • Ranganath R, Nair PS, Mruthyunjaya TS, Ghosald A. 2004. A force-torque sensor based on a Stewart platform in a near-singular configuration. Mechanism and Machine Theory 39(9):971–998
  • Seibold U, Kuebler B, Hirzinger G. 2001. Prototype of instrument for minimally invasive surgery with 6-axis force sensing capability. Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA 2001), Seoul, Korea, May 2001. pp 498–503
  • Mueller MS, Hoffmann L, Buck TC, Koch AW. 2009. Fiber Bragg grating-based force-torque sensor with six degrees of freedom. Int J Optomechatronics 3(3):201–214
