Original Articles

Biomimetic optics: visual systems

Pages 811-836 | Received 10 Mar 2016, Accepted 09 Aug 2016, Published online: 22 Aug 2016

Abstract

Biomimetics is the transfer of ideas and concepts borrowed from nature in order to develop solutions to technological problems. Research in biomimetic vision has focused on several aspects, ranging from the development of optical devices and sensors to innovative solutions to visual problems and the building of intelligent systems, all based on findings from nature. In this review, we discuss the diversity of animal eyes, optical systems inspired by natural eyes, and optical properties found in nature that have potential for biomimicry.

1. Introduction

We have observed nature’s wonders for millennia and have attempted to imitate its finely engineered designs, from sketching flying machines inspired by the flight of birds in the early sixteenth century to developing robotics and computer vision in the current era. The term biomimetics, which means ‘to imitate life and nature’, was coined by an American biophysicist, Otto Herbert Schmitt (1913–1998), during the 1950s. This field of science has greatly evolved in the past few decades by observing the flora and fauna of Earth, with special attention to insects, marine creatures and humans; it has found useful applications in fields such as architecture, materials science, engineering, medicine and robotics (Citation1).

Biomimetics is essentially a two-step process: learning from nature and mimicking or copying its strategies to improve technology. Gebeshuber and Drack categorized biomimetics into two fields: biomimetics by analogy and biomimetics by induction (Citation2). The former starts with technological challenges in engineering and tries to find solutions from analogous situations in nature, for instance, attachment of winglets to airplanes. Biomimetics by induction has no intention for application initially, but starts with certain phenomena described in biological studies and looks at ways to implement them in the development of technology, for instance, invention of Velcro from burs (Citation2). Understanding nature’s strategies to solve its problems is not as easy as one might think. However, advancements in science especially in nanotechnology have made possible the study of fine structures occurring in nature. By creating models of these structures, one can understand and predict how they work for the organisms and thereby incorporate these designs to solve similar human (and other technological) problems. Observation and innovation are the two key elements in biomimicry because it is not simply copying from nature but taking clues from nature to develop an efficient multifunctional system.

Researchers have always been inspired by how the eye and the brain have evolved and how they operate in different creatures (Citation3). By mimicking the visual systems of humans and animals, a variety of optical imaging systems and sensors have been devised that have wide applications in surveillance and security, medical devices such as endoscopes, and robotic vision. This review discusses the visual systems of various organisms, the optical imaging systems they have inspired, and optical properties in nature that have potential for biomimicry.

2. Biological visual systems – animal and human

Animal eyes have evolved from simple structures that merely detect light, as in flatworms, to complex systems that form an image, as in insects and humans. Biological eyes show amazing structural complexity, combining components across scales ranging from nanometres to centimetres. Comparative biological studies have shed light on how the visual systems of other creatures differ from that of humans. There exists a diverse variety of eye designs and structures that suit an animal’s way of life, depending on whether it is prey or predator and whether it requires a wide field of view, motion detection, colour vision or night vision. Animal visual systems are broadly classified into two categories: (1) single-chambered, a simple pigmented pit with a few photoreceptors, as in flatworms; (2) multi-lens systems in which each photoreceptor has its own pigmented tube, as in the compound eyes of insects and many marine organisms (Citation4, 5).

2.1. Single-chambered simple eyes

Eyes with a common optical element that focuses light on the neural element are considered simple eyes (Citation5). In the animal kingdom, this optical element can be a simple pinhole, a mirror, or most commonly, a lens. The cephalopod mollusc Nautilus is the only genus with pinhole eyes, which admit light directly onto the retina (Figure 1) (Citation5, 6). The organism varies the pupil size from 0.4 mm to about 2.8 mm to control the light entering the eye. This eye design merely helps the organism detect light and its direction. The mirror eye, present in most scallops and a few clams, is basically a spherical multilayered concave mirror lining the back of the eye, which reflects the incident light back to the receptor layer of the inverted retina (Figure 2) (Citation4, 5, 7).

Figure 1. (Left) Nautilus belauensis, the cephalopod mollusc (Source: Wikipedia). (Right) Schematic diagram of pinhole eye of Nautilus. (Adapted from Land, 1988).


Figure 2. (Left) Argopecten irradians, the scallop (Source: Wikipedia). (Right) Schematic diagram of concave mirror eye of scallops. (Adapted from Land, 1988).


Camera-type simple eyes, the most common type of simple eye, are comparable to a camera, hence the name. The basic elements of a camera-type eye are a lens and a retina at its focal plane (Figure 3). Vertebrates, aquatic creatures such as cephalopod molluscs, and some arthropods such as spiders have this eye design. The lens shape and refractive index differ depending on whether the medium outside the eye is water or air. Aquatic organisms, like fish, have a spherical lens with a graded refractive index, increasing from the lens periphery to the centre.

Figure 3. Schematic diagram of camera-type eyes in aquatic animals (left) and terrestrial animals (right), having spherical and flat lenses, respectively. (Adapted from Nilsson et al. 1989).


Humans have an aspherical cornea and a biconvex gradient index (GRIN) lens separated by a liquid medium, the aqueous humour (Citation8). The lens is suspended from the surrounding ciliary body by the zonular fibres. The jelly-like vitreous humour, which has the same refractive index as the aqueous humour, fills the posterior chamber between the lens and the retina. The retina, the visual processing unit of the eye, has two types of photoreceptor (rods and cones) and other relaying neurons that transmit the signals to the brain.

2.2. Compound eyes

The compound eye has long been a focus for optical engineers seeking to mimic its design in imaging systems and sensors. As opposed to simple eyes, compound eyes consist of multiple optical elements, each of which focuses light onto its own neural element. The proto-compound eye present in ark clams is so primitive in design that it has nothing but a single lining of photoreceptors in which each receptor has its own pigmented tube (Citation9). This arrangement helps the organism detect moving predators. Other compound eye types are appositional and superpositional in design. In general, a compound eye has a mosaic of thousands of hexagonal or square-shaped facets called ommatidia, each of which has a cornea, a crystalline cone, a rhabdom and photoreceptors (Figure 4) (Citation3–5, 8–10). At the distal end of the crystalline cone, there are 7–9 radially arranged photoreceptors called retinula cells, which have different spectral absorption characteristics. Each of these cells projects a bundle of photopigment-containing microvilli, forming rhabdomeres (R1–R8), which together form a central rod-like structure called the rhabdom (Citation11). The cornea and crystalline cone focus the incoming light onto the rhabdom. The microvilli within each rhabdomere are parallel and respond to the plane of polarization of the incident light, which will be discussed later in the section on polarization vision. Within an ommatidium, the rhabdomeres can be fused together (fused rhabdom, as in bees) or spatially separated (open rhabdom, as in flies), which affects the spectral sensitivity of the photoreceptors (Citation12). The number of ommatidia, the type and shape of the rhabdom, and the number of retinula cells vary widely across species. Although diffraction at the small lenses (about 10–25 μm in diameter) limits their vision, these organisms have evolved various anatomical solutions that give them remarkable capabilities, which will be discussed in further sections.

Figure 4. (Left) Compound eyes of a fly (Source: Wikipedia). (Right) Schematic diagram of the ommatidia of the compound eye (Adapted from Nilsson et al. 1989).


2.2.1. Appositional compound eyes

In appositional compound eyes, as in diurnal insects such as bees and dragonflies, and in lower crustaceans, each ommatidium is optically isolated and receives a specific part of the scene, forming an inverted image on its photoreceptor unit (Citation4, 5, 9, 10). The rhabdom, with its narrow field of view, merely acts like a light meter at the centre of its field. The image resolution depends not on the photoreceptor density but on factors such as the inter-ommatidial angle and the number of ommatidia. The final image seen by the organism is erect and is formed by stitching together the images formed by all the ommatidia.
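The resolution limits described above can be sketched with a back-of-envelope calculation. This is a hypothetical illustration: the facet diameter and eye radius below are assumed round numbers, not values from the review. The inter-ommatidial angle sets the sampling limit, while diffraction at a single small facet sets the optical limit.

```python
import math

def inter_ommatidial_angle_deg(facet_diameter_um, eye_radius_mm):
    """Approximate inter-ommatidial (sampling) angle: delta-phi ~ d / R,
    for facet diameter d and local eye radius R, returned in degrees."""
    d = facet_diameter_um * 1e-6
    R = eye_radius_mm * 1e-3
    return math.degrees(d / R)

def diffraction_limit_deg(facet_diameter_um, wavelength_nm=500):
    """Diffraction-limited angular resolution ~ lambda / d (radians)
    for a single small facet lens, returned in degrees."""
    d = facet_diameter_um * 1e-6
    lam = wavelength_nm * 1e-9
    return math.degrees(lam / d)

# Illustrative values: a 25 micron facet on an eye of 1 mm local radius
sampling = inter_ommatidial_angle_deg(25, 1.0)
diffraction = diffraction_limit_deg(25)
print(f"inter-ommatidial angle: {sampling:.2f} deg")   # ~1.4 deg
print(f"diffraction limit:      {diffraction:.2f} deg")  # ~1.1 deg
```

With these numbers the two limits are of the same order, which is the classic design trade-off: making facets smaller packs in more ommatidia (finer sampling) but worsens the diffraction blur of each one.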

In bees, all the photoreceptors in an ommatidium point towards the same field of view, while in the housefly and Drosophila, each receptor has a separate field of view, which is shared with receptors of adjacent ommatidia (Figure 5(a)). A receptor in one ommatidium sends its axon to combine with the axons of receptors in each of the six adjacent ommatidia that view the same scene (Citation5). These axons form a common neural bundle that receives input from seven adjacent ommatidia. This peculiar neural wiring is referred to as neural superposition, comparable to the spatial summation of human rod photoreceptors, which increases the sensitivity of the neuron (Citation13).

Figure 5. Types of compound eyes. (a) Neural superpositional compound eye, (b) Refracting superpositional eye, (c) Reflecting superpositional eye (Adapted from Nilsson et al. 1989).


2.2.2. Superpositional compound eyes

In contrast to neural superposition, optical superpositional compound eyes, as in nocturnal insects such as fireflies and moths, and in lobsters, increase their sensitivity to light by converging the optical information from adjacent ommatidia onto a single neural layer separated from the optical units (Figure 5(b) and (c)). Anatomically, the superposition eye has a thin rod that acts as a light guide, separating the crystalline cone and the rhabdom. Also, due to the absence of a pigment shield around each rhabdom, the rhabdom receives light not only from its own ommatidium but also from the adjacent ommatidia (Citation4, 5).

Based on the optics of the lens, superpositional compound eyes are of refracting and reflecting types. The refracting superpositional eye, present in moths and many beetles, has a gradient refractive index in its crystalline cone with a high index at its central axis, so the bending of light happens inside the lens (Figure 5(b)). In lobsters and shrimps, the outer walls of the ommatidia appear silvered, and the incident light is reflected off the sides of the walls to reach the photoreceptors (Figure 5(c)). The ommatidia are square shaped to avoid stray reflections at different angles. The superpositional eyes of crabs and hermit crabs differ from the other superpositional types: their ommatidium works by both reflection and refraction with a parabolic-shaped crystalline cone. Between the crystalline cone and the retina there is a pigment-free clear zone containing a narrow column formed by retinula cells that guides the light to the retina (Citation14). Many sensors inspired by compound eyes have been developed, some of which will be discussed in subsequent sections of this article.

Some species do not have an apparent eye, yet they can perceive light. Organisms like brittle stars and chitons have solid lens crystals as part of their dorsal surface that converge light onto their neurons, making them photosensitive (Citation15, 16). Some species of beetle, for instance Melanophila acuminata, sense the infrared portion of the electromagnetic spectrum using specialized structures called pit organs, which have 50–100 sensilla (Citation17). This sensitivity helps them locate forest fires and lay their eggs in the burnt wood. Similar infrared sensitivity has been described in rattlesnakes, pythons and pit vipers (Citation18).

3. Optical and neural elements of the eye

In this section, we describe some of the unique features of the optical and neural parts of the eye present in nature – starting from cornea to percept – and discuss the bio-inspired materials and technology in each part of the eye.

3.1. Cornea

The cornea, the optical window of the eye, has evolved as a major refracting element, contributing about two-thirds of the total power of the vertebrate eye (Figure 6). It acts as a protective layer separating the contents of the eye from the environment, unlike the pinhole eye of Nautilus (Citation6). As opposed to terrestrial animals, the role of the cornea as a light-focusing optic is insignificant in marine organisms, since the medium on both sides of the cornea is the same. The human cornea has a larger radius of curvature in the periphery than at its centre, creating an aspheric design, an evolutionary modification that reduces aberrations (Citation19).
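The ‘two-thirds of the total power’ figure can be checked with the single refracting surface formula, P = (n2 − n1)/r. The radii and indices below are typical textbook values for the human eye, assumed for illustration rather than taken from this review:

```python
def surface_power(n1, n2, r_m):
    """Power (in dioptres) of a single spherical refracting surface:
    P = (n2 - n1) / r, with r in metres."""
    return (n2 - n1) / r_m

# Illustrative textbook values for the human cornea:
front = surface_power(1.000, 1.376, 0.0078)   # air -> corneal stroma, r = 7.8 mm
back  = surface_power(1.376, 1.336, 0.0065)   # stroma -> aqueous, r = 6.5 mm

print(f"front surface: {front:+.1f} D")
print(f"back surface:  {back:+.1f} D")
print(f"total cornea:  {front + back:+.1f} D")  # ~ +42 D
```

A total of roughly 42 D against a whole-eye power of about 60 D is indeed close to two-thirds, consistent with the statement above.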

Figure 6. Schematic diagram of the eye depicting the optical data of cornea. r – radius of curvature, n – refractive index, t – thickness.


3.1.1. Biomimetic cornea

The cornea, being the front layer of the human eye, is easily susceptible to damage from physical trauma and chemical injury, which requires wound repair in acute stages and corneal transplantation in severe cases. With the risks of contamination, the spread of infectious diseases and unmet demand for donor tissue, researchers have taken a number of approaches to creating biomimetic corneal substitutes from biomaterials. A group of researchers from China developed an optically transparent, biomimetic nanofibrous membrane with a collagen–hyaluronate interior and a chitosan surface coating (Citation20). Collagen, being the bulk of the corneal stromal protein, is a suitable biocompatible material for corneal scaffolds. Hyaluronate is essential for corneal and conjunctival epithelial cells, aiding wound healing. Nanofibre membranes electrospun from these materials were applied to alkali-induced corneal injuries in rats, and the results showed mechanical and biological performance much superior to that of conventional human amniotic membrane.

3.1.2. Bio-inspired contact lenses

The contact lens, a refractive correction modality, is a biomimetic product of the cornea. It is designed to fit the human cornea precisely in order to correct the refractive error of the eye and, most importantly, to be compatible with the tear film, thereby giving good ocular comfort (Citation21). Ocular comfort is the major factor determining successful contact lens wear. While there are many causes of ocular discomfort, the focus has been on modifying material compositions and lens-care solutions to improve lens hydration and surface wettability and to reduce surface deposits, all of which are directly related to lens comfort. In this pursuit, natural materials like phosphorylcholine (a zwitterionic phospholipid present on the outer surface of red blood cell membranes) and hyaluronic acid (an anionic, non-sulfated glycosaminoglycan present extensively in the extracellular matrices of most tissues) have been incorporated in hydrogel contact lenses in place of synthetic polymers like polyvinyl alcohol and polyvinyl pyrrolidone. Anti-microbial surface coatings have also been developed to limit the adhesion and growth of microbes on contact lenses. Examples of such bio-inspired anti-microbial materials include furanones from red sea kelp (Citation22) and LL-37, a human cathelicidin (Citation23).

Inspired by the optical system of the nocturnal helmet gecko, researchers are exploring new multifocal contact lens designs (Citation24). Nocturnal geckos have distinct concentric zones of different refractive power in their lens, which help the organism focus light from different distances and of different wavelengths onto the retina, enabling a good depth of field and colour vision at night (Citation25). Researchers are also looking at contact lenses with auto-focus technology to help presbyopes, who lose the ability to bring near objects into focus. This contact lens design, inspired by the elephantnose fish (its anatomy and the bio-inspired sensor are explained in Section 3.3), would include the lens, algorithm-driven sensors, miniature electronic circuits that adjust the shape of the lens, and the power source (Citation26).

3.2. Pupils

Humans have a circular pupil, while many animals have a slit or oval-shaped pupil (Citation25, 27–29). Horses and goats have a horizontally elongated pupil, whereas domestic cats have a vertically elongated pupil. Cuttlefish have a W-shaped pupil to detect polarized light, which helps them see contrast even in dim light (Figure 7). The shape of the pupil in an organism depends on its living environment and on whether the animal is prey or predator (Citation29). Banks et al. (Citation29) observed that animals with vertical slit pupils are more likely to be predators active both day and night, whereas animals with horizontal slit pupils are more likely to be prey. The shape of the pupil affects the quality of the image formed on the retina and the field of view. The pupils of the helmet gecko, Tarentola chazaliae, change from two or more pinholes in a vertical line during the day to a wide open aperture at night, increasing the amount of light reaching the retina and the field of view (Citation25). However, with a larger pupil, the image suffers from aberration effects. The horizontally oriented pupils of horses and goats give them a nearly 330° view in which to watch for predators when they put their head down to eat (Citation29). A pupil that constricts to multiple apertures has a larger depth of field; however, it produces multiple images, which would degrade the retinal image if the separation between them is large. The animal uses the separation between these images as a cue to accommodate and thereby estimate the distance of its prey (Citation29). A slit pupil can control the amount of light entering the eye by as much as 135-fold, compared with only 10-fold for a circular pupil (Citation28). Most animals with slit pupils, for instance domestic cats and geckos, also have a multifocal lens, with concentric zones that focus light at different focal lengths (Citation25, 27–30).
Since a slit pupil exposes the full diameter of the lens, different wavelengths from each zone of the lens can be brought to focus on a single plane; this combination of slit pupil and multifocal lens thus helps the organism correct for chromatic aberration (CA), in which blue light focuses closer to the lens than red light (Figure 8). CA arises because the refractive index depends on wavelength; it can be minimized using a doublet lens in which one element has positive CA and the other negative, so that the two cancel out.
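The doublet principle can be made concrete with the standard thin-lens achromat condition: for two thin lenses in contact, the longitudinal CA cancels when φ₁/V₁ + φ₂/V₂ = 0, where φ are the element powers and V the Abbe numbers. This is a textbook sketch, not from the review; the glass values below are illustrative:

```python
def achromat_powers(total_power_D, V1, V2):
    """Thin-lens achromatic doublet in contact.
    Solves phi1 + phi2 = total_power and phi1/V1 + phi2/V2 = 0,
    where V1, V2 are the Abbe numbers of the two elements."""
    phi1 = total_power_D * V1 / (V1 - V2)
    phi2 = -total_power_D * V2 / (V1 - V2)
    return phi1, phi2

# Illustrative glasses: crown (V ~ 60) and flint (V ~ 36), +10 D doublet
p1, p2 = achromat_powers(10.0, 60.0, 36.0)
print(p1, p2)  # +25.0 D crown, -15.0 D flint: powers sum to +10 D
```

Note the signature of every achromat: a strong positive element paired with a weaker negative one, exactly the positive/negative CA cancellation described above.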

Figure 7. Various types of pupil shapes in different organisms. From top clockwise: cuttlefish, lion, goat, gecko, horse and cat. Source: http://www.npr.org/sections/health-shots/2015/08/07/430149677/eye-shapes-of-the-animal-world-hint-at-differences-in-our-lifestyles.


Figure 8. Spherical lens with two concentric zones, of which the inner zone has a shorter focal length than the outer zone. The longitudinal chromatic aberration in the images produced by the two zones causes the blue and red images to come to focus at the same plane. Source: Land, 2006, reproduced with permission from Current Biology.


A point source imaged by an optical system does not usually form a point image; instead it forms a blur circle or, in spatial terms, a point spread function (PSF), the diffraction pattern of light imaged from a point source (Figure 9). The size of the PSF depends on factors such as aperture size and wavelength: the smaller the aperture, the bigger the PSF. An important metric in the analysis of optical systems is the optical transfer function (OTF), which is the Fourier transform of the PSF. The OTF comprises a modulation transfer function (MTF) and a phase transfer function. The phase transfer function plays a major role only in systems with significant aberrations or in extreme off-axis situations. The MTF, on the other hand, is the ratio of image contrast to object contrast plotted as a function of spatial frequency. As the spatial frequency increases, the MTF decreases, indicating that the image has lower contrast than the object. Beyond a certain frequency, called the cut-off frequency and given by a/λ, where a is the aperture size and λ is the wavelength of light, the MTF goes to zero irrespective of the pupil shape (Citation31). A good discussion of these concepts can be found in the books by Boreman (Citation32) and Gaskill (Citation33). The shapes of the PSF and MTF depend on the shape of the pupil. In a vertical slit pupil, the difference in pupil extent along the two axes results in an asymmetric PSF with a larger radius along the horizontal axis. This difference causes an astigmatic depth of field, i.e. longer along the vertical extent than the horizontal (Citation29). This means that objects outside the focal distance will be more blurred in the horizontal direction than the vertical (Figure 10). This directionality of depth of focus may help the animal estimate the distance of its prey using stereopsis in the vertical direction and defocus blur in the horizontal direction.
The converse is true for horizontal pupils – a smaller PSF and greater depth of field along the horizontal direction – which gives good image quality over a wide field of view and thus facilitates locomotion for animals with horizontal pupils. Even a circular pupil acts like an elliptical pupil when the viewing angle is large: a pupil with a horizontal diameter as large as 7.69 mm apparently shrinks to about 0.75 mm when viewed from 100° temporally (Citation34). When a circular pupil is rotated about the vertical axis, the diffraction pattern of a point source changes from a circular Airy disc to an elliptical shape (Figure 11). As the angle of incident light increases and the pupil becomes slit-like, reducing the effective pupil area, the elliptical diffraction pattern changes to hyperbolas spreading horizontally, thus contributing to reduced peripheral vision (Citation35). Because of this obliquity effect of the pupil, when testing visual acuity or refraction at the peripheral retina, it is advisable to use a vertical slit aperture at the source and a horizontal slit at the eye, which limits the light to a square, controlling variations in pupil size while still allowing a peripheral view (Citation36). Compared to circular pupils, the contrast transfer function is slightly better for slit pupils, as depicted in Figure 12. For instance, a grating with 100% contrast at 0.8 of the cut-off frequency would be imaged with only 10% contrast through a circular pupil, as opposed to 20% contrast through a slit pupil (Citation31).
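The 10% versus 20% contrast figures quoted above follow directly from the standard diffraction-limited MTF expressions: a triangle function for a slit (one-dimensional rectangular) aperture and the circular-aperture formula (2/π)(cos⁻¹s − s√(1 − s²)), with s the frequency normalized by the cut-off. A minimal sketch (only the 0.8 normalized frequency is taken from the text):

```python
import math

def mtf_circular(s):
    """Diffraction-limited MTF of a circular pupil at normalized
    spatial frequency s = nu / nu_cutoff (0 <= s <= 1)."""
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def mtf_slit(s):
    """Diffraction-limited MTF of a slit (1-D rectangular) pupil:
    a triangle function of normalized frequency."""
    return max(0.0, 1.0 - s)

# At 0.8 of the cut-off frequency, the example in the text:
print(f"circular pupil: {mtf_circular(0.8):.2f}")  # ~0.10, i.e. 10% contrast
print(f"slit pupil:     {mtf_slit(0.8):.2f}")      # ~0.20, i.e. 20% contrast
```

Both curves start at 1 at zero frequency and reach 0 at the cut-off; the slit's triangle simply stays above the circular curve at high frequencies, which is the advantage depicted in Figure 12.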

Figure 9. Point spread function: Airy disc pattern and its intensity profile.


Figure 10. Image quality through a vertical slit pupil. (a) Images of a white cross presented at different distances from the focus. The horizontal lines of further away images are quite blurred compared with the vertical. (b) Horizontal and vertical cross sections of point spread functions plotted against different amounts of defocus. Source: Banks et al. 2015, open access.


Figure 11. Diffraction patterns of a point source as a result of rotating a circular pupil about vertical axis. Source: Weale 1956, reproduced with permission from British Journal of Ophthalmology.


Figure 12. Contrast transfer functions of optical systems with circular and slit pupils. X-axis is spatial frequency as proportion of cut-off frequency. Source: Westheimer 1964, reproduced with permission from Vision Research.


A conventional camera can be modified with different apertures to explore the advantages of different pupil shapes. Levin et al. (Citation37) introduced coded-aperture imaging into the optics of an ordinary camera and extracted depth information and a high-resolution image from a single photograph by controlling the amount of defocus.

3.3. Lens

As discussed in the previous section, because the medium outside and inside the eye is the same for marine organisms, the cornea loses most of its refracting ability and the lens becomes the main refracting optic. One of the earliest cephalopod molluscs, Nautilus, has no lens or cornea to focus light onto its exceptionally fine retinal mosaic. It has quite a large eye, close to 1 cm in diameter, with approximately 4 × 10⁶ receptors in the retina. Despite these advantages, this simple pinhole eye has about 100 times worse resolution and 400 times worse sensitivity (for a pupil size of 0.4 mm) than it could have achieved with a lens (Citation5, 6). The role of the lens in humans and other mammals is mainly in accommodation and in minimizing aberrations, which will be discussed later. In addition, we also discuss the natural lenses present in nature and how they help the organism, as well as the optical devices that mimic natural lenses.
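The sensitivity penalty quoted above follows from simple aperture scaling: light gathering goes as the square of aperture diameter, and a pinhole's blur spot is roughly the pinhole diameter itself. This is a hypothetical sketch: the 8 mm 'equivalent lens aperture' below is back-calculated from the 400× figure (since (8/0.4)² = 400), and the 10 mm focal length is an assumed round number for a ~1 cm eye, neither taken from the cited work.

```python
import math

def angular_blur_deg(pupil_mm, eye_depth_mm):
    """Geometric angular blur of a pinhole eye: the blur spot on the
    retina is roughly the pinhole diameter, so blur angle ~ d / f."""
    return math.degrees(pupil_mm / eye_depth_mm)

def relative_sensitivity(aperture_mm, ref_aperture_mm):
    """Light gathering scales with aperture area, i.e. (d / d_ref)^2."""
    return (aperture_mm / ref_aperture_mm) ** 2

# Assumed values: 0.4 mm pinhole, ~10 mm eye depth, 8 mm lens aperture
print(f"pinhole blur: {angular_blur_deg(0.4, 10.0):.1f} deg")
print(f"lens eye gathers {relative_sensitivity(8.0, 0.4):.0f}x more light")
```

A couple of degrees of geometric blur is enormous compared with the arcminute-scale resolution a diffraction-limited lens of similar size could deliver, which is the sense in which the pinhole design wastes the fine retinal mosaic.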

3.3.1. Accommodation

Accommodation, the ability to achieve clear vision at different distances, is carried out by a number of mechanisms in different animals. While mammals change the lens curvature to focus objects at various distances, some reptiles, like snakes, move the lens closer to or farther from the retina (Citation5, 10). Additionally, diving ducks squeeze their lens into the iris to achieve sufficient accommodation, while seals have a flat cornea that compensates for the required accommodation (Citation38).

Birds have specialized muscles attached to the cornea (Crampton’s muscle) and the bony ossicles near the lens (Brucke’s muscles) that reshape these structures to focus light on the retina. In contrast, whales are able to fill the chamber behind the lens with an amount of fluid that is sufficient to bring objects in focus on the retina. This mechanism also helps them adapt to high pressure in deep sea (Citation5, 10).

The human lens changes its curvature through contraction of the ciliary muscle, enabling it to focus objects at different distances (Citation8). People fitted with intra-ocular lenses (IOLs) after cataract surgery do not get clear vision at all distances because of the fixed focal length of these lenses. An IOL typically has two parts – the optic and the haptics, the side structures that hold the lens in place in the lens ‘bag’ (Figure 13). Mimicking the way the natural crystalline lens accommodates, Bausch and Lomb (NY) introduced CrystaLens, an accommodating IOL that works by moving forward and backward along the axis of the eye. The CrystaLens has a 4.5-mm optic and a long-hinged plate haptic with two polyamide loops at the ends of the haptics (Citation39). When the IOL is placed in the capsular bag, the long, flexible haptics push the lens optic back against the posterior capsule and the vitreous. When viewing near objects, the ciliary muscle changes shape and enlarges within the vitreous cavity, which increases the pressure in the vitreous, pushing it forward and causing the lens to arch forward. This mechanism increases the focusing power of the eye; it has been found that forward movement of the vitreous by 1 mm creates about 1.3 D of accommodation for an average axial length of 24 mm (Citation40).
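The roughly 1.3 D-per-millimetre figure can be reproduced with a simple thin-lens vergence model of a pseudophakic eye. This is a sketch under assumed values – a 43 D cornea, a 20 D IOL initially 5 mm behind it, and a uniform ocular index of 1.336 – none of which come from the cited study:

```python
N_AQ = 1.336  # assumed uniform refractive index of aqueous/vitreous

def retina_position(P_cornea, P_iol, d_iol, V0=0.0):
    """Axial position (m behind the cornea) where an object of vergence V0
    at the cornea comes to focus, in a two-thin-lens eye model."""
    V = V0 + P_cornea                       # refraction at the cornea
    V = V / (1.0 - (d_iol / N_AQ) * V)      # vergence transfer cornea -> IOL
    V += P_iol                              # refraction at the IOL
    return d_iol + N_AQ / V                 # image plane behind the cornea

def accommodation_from_shift(P_cornea, P_iol, d0, d1):
    """Object vergence (D) newly focused on the retina when the IOL moves
    from depth d0 to d1 (metres behind the cornea); negative = near focus."""
    retina = retina_position(P_cornea, P_iol, d0)   # emmetropic for distance
    V_out_needed = N_AQ / (retina - d1)             # vergence required leaving shifted IOL
    V_before_iol = V_out_needed - P_iol
    # invert the cornea -> IOL vergence transfer
    V_after_cornea = V_before_iol / (1.0 + (d1 / N_AQ) * V_before_iol)
    return V_after_cornea - P_cornea

# Move the IOL 1 mm forward: from 5 mm to 4 mm behind the cornea
print(accommodation_from_shift(43.0, 20.0, 0.005, 0.004))  # ~ -1.3 D
```

The model yields about 1.3 D of near focus for a 1 mm forward shift, the same order as the clinical figure quoted above, though the exact value depends on the assumed corneal and IOL powers.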

Figure 13. CrystaLens accommodating IOL. Source: https://www.researchgate.net/figure/7017218_Fig1_Figure-1-The-Crystalens-AT-45-accommodating-IOL-A-posterior-chamber-modified [accessed Jun 15, 2016].

3.3.2. Correcting aberrations

Aberrations in an imaging system degrade the optical transfer function and hence the image quality. Biological lenses have evolved to reduce aberrations and deliver a sharp image on the retina. Cephalopod molluscs such as squid and octopus, as well as fish, have a spherical lens that provides a short focal length for their eye size. The disadvantage of a spherical lens is that peripheral rays are bent more strongly, producing spherical aberration. However, these lenses have a gradient refractive index that decreases from the centre towards the periphery, so all rays focus to a single point, giving the organism a clear image over a wide field (Citation4). The steepness of the refractive index gradient determines the focal length of the lens: if the ratio of focal length to lens radius is about 2.5 (Matthiessen’s ratio), the lens must be inhomogeneous (Citation4). A similar graded index is present in the human lens, but combined with a biconvex shape that minimizes spherical aberration (Citation41). Some organisms instead use non-spherical surfaces to reduce spherical aberration. For instance, the copepod crustacean Pontella has a triplet series of lenses in which the first lens has a parabolic front surface; replacing this with a spherical front surface in a model eye increased spherical aberration, suggesting that the non-spherical design is a means of reducing it (Figure ) (Citation4). Aberrations increase with lens diameter, which is why the small-aperture ommatidia of arthropod compound eyes produce essentially aberration-free images. Some species of fish are also able to minimize chromatic aberration (CA) to an extent: slight variations in the refractive index gradient produce a multifocal spherical lens (Citation30).
As explained in Sections 3.1 and 3.2, nocturnal geckos also have a multifocal lens, which, together with a slit pupil, focuses different wavelengths from different zones of the lens onto a single plane, eliminating CA.
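As a quick numerical sketch of Matthiessen’s ratio: the graded-index design reaches a shorter focal length than a homogeneous ball lens of the same size could. The index values used for the homogeneous comparison are illustrative assumptions, not figures from the text.

```python
def matthiessen_focal_length(radius_mm, ratio=2.5):
    """Focal length of a graded-index spherical lens obeying
    Matthiessen's ratio (f ~ 2.5 R, as quoted in the text)."""
    return ratio * radius_mm

def homogeneous_ball_focal_length(radius_mm, n_lens=1.55, n_water=1.33):
    """Focal length (measured from the lens centre) of a homogeneous
    ball lens immersed in water: f = m R / (2 (m - 1)), where m is the
    relative refractive index. The index values are illustrative."""
    m = n_lens / n_water
    return m * radius_mm / (2 * (m - 1))

# For a hypothetical 1-mm-radius fish lens, the graded-index design
# gives a shorter focal length than the uniform lens:
assert matthiessen_focal_length(1.0) < homogeneous_ball_focal_length(1.0)
```

The homogeneous lens comes out near 3.5 R with these assumed indices, versus 2.5 R for the graded design, which is one way of seeing why Matthiessen’s ratio implies inhomogeneity.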

Figure 14. (a) Image formation on the receptors through the lens system of male crustacean, Pontella. (b) Ray tracing when replacing the front surface of the first lens with a spherical surface. (Adapted from Land, 1988.)

3.3.3. Biological lenses

Nature hosts a multitude of organisms that possess intrinsic, multifunctional optical designs solving complex optical problems. The lenses and mirrors found in nature differ greatly from those used in laboratories: biological optical materials are largely inhomogeneous gradients of proteins and other organic substances (Citation4). These lenses have evolved slowly over long timescales, shaped by each organism’s way of life and survival requirements and built from materials available in its habitat. They are potentially inspiring designs from which to draw ideas for improving existing man-made optics, some of which are discussed below.

Ophiocoma, the brittle star, is a marine organism that shows photosensitivity through phototaxis (locomotion in response to light) and/or body colour change (Citation15). It senses light by focusing it onto photosensors (nerve bundles) through an array of double-lens-shaped microlenses located on its dorsal arm plates. These microlenses are made of calcite and serve both as a compound eye and as mechanical support for the organism (Figure ). Aizenberg et al. (Citation15) studied the focusing ability of the lenses by creating an artificial brittle-star lens array with a photolithographic technique, and calculated the spot size at the photoresist, the operational lens diameter L and the light-enhancement factor E. They found that L and E were relatively high, indicating that the lenses compensate for spherical aberration. Just as pupils control the amount of light entering an eye, brittle stars have chromatophores around the lenses that extend their processes over the lenses during the daytime, regulating the light entering each lens; this is also why the animal appears to change body colour on exposure to light. Another interesting property of the photoreceptor system is its directional selectivity: it responds efficiently to light coming from one direction, with an angular selectivity of 10°.

Figure 15. (a) Light sensitive species, O. wendtii, changes colour markedly from day (left) to night (right). (b) Scanning electron micrograph of the peripheral layer of dorsal arm plate of O. wendtii shows enlarged lens structures surrounded by chromatophores. Source: Aizenberg et al. 2001, reproduced with permission from Nature.

A similar lens crystal is present in the photoreceptive eyes of the tropical sea water mollusc, Acanthopleura granulata, on its multifunctional biomineralized armour (Citation16). The organism has hundreds of eyes, each with a cornea, a lens and a chamber underneath the lens that has photoreceptive cells (Figure ). Although these eye lenses are made of the same material called aragonite, of which its shell is made, the lens has uniformly aligned crystal grains compared to the surrounding granular microstructure which reduces light scatter. The lens has two additional layers beneath it made of organic material and calcium, respectively. A thin cornea covers the lens and is continuous with the surrounding. The front and the back surfaces of the lens are parabolic in shape. Li et al. (Citation16) found that the organism can detect changes in light and resolve a 20-cm object at a distance of 2 m given the photoreceptor spacing of approximately 7 μm and a birefringent lens with a polar angle of 45º.
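The quoted acuity can be checked with simple geometry; a minimal sketch of the angle subtended by the 20-cm object at 2 m:

```python
import math

def angular_size_deg(object_size_m, distance_m):
    """Full angle subtended by an object at a given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The 20-cm object at 2 m quoted for A. granulata subtends about 5.7 deg,
# i.e. the chiton eye needs only very coarse spatial vision:
assert 5.6 < angular_size_deg(0.20, 2.0) < 5.8
```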

Figure 16. (a) Photograph of A. granulata, (b) Light micrograph of the shell showing sensory and non-sensory regions, (c) Polarized light micrograph of a polished cross section containing two eyes. Scale 20 μm. Source: Li et al. 2015, reproduced with permission from Science.

The human cornea, the lens and Bruch’s membrane allow specular reflection due to their polished surfaces. Photoreceptors reflect light at interfaces where the refractive index differs between two media (it is higher inside the cell than outside), such as between the photoreceptor cell and the extracellular fluid, or between the discs and the intracellular fluid. Bruch’s membrane, located orthogonal to the optical axis of the photoreceptors, is said to act as an essential back-wall mirror, creating a standing wave that is sensed by the photoreceptors; this arrangement has also been offered as an explanation for the evolution of the inverted retina. The scallop’s eye is another example of a mirror eye (Citation4, 5, 7). A back-wall mirror is also seen in cats as the tapetum lucidum, which aids their night vision (Citation3).

3.3.4. Bio-inspired optical devices

3.3.4.1. Bio-inspired tunable lenses

Tunable lenses have received great attention in recent times, in terms of both lens material and tuning mechanism. An adaptive fluidic lens was designed on the model of whales, which can move fluid in and out of a chamber in the eye to focus light on the retina (Citation42). This polymer lens included a fluid chamber and an injection port and was attached to a glass substrate; its focal length ranged from 41 to 172 mm, with a highest resolution of 25.39 line pairs per mm. Lenses that use liquid as the medium are prone to vibration and to environmental effects such as temperature and gravity. Inspired by the optical system of the human eye and the way it adjusts focus to varying distances, Liang et al. (Citation43) designed a multilayered tunable lens made of a glass nucleus flanked by a liquid layer and an outer solid-state elastic polymer, with an improved focusing range (14.8–30 mm) and reduced lens aberrations. The whole unit was supported by polymethyl methacrylate rings. Pressure applied to the elastic polymer increased the curvature of its front surface, mimicking the accommodative mechanism of the human lens. Such compact variable-focus imaging systems find application in cell-phone cameras, endoscopes and machine-vision devices. Many other tunable lenses have been developed, including high-density eyeball lens arrays actuated by electrohydrodynamic forces (Citation44) and lenses that not only mimic lens accommodation but also compensate for image quality with lens tilting that mimics saccadic eye movements (Citation45).
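The pressure-tuning principle can be illustrated with the thin-lens lensmaker’s equation. The refractive index and radii below are hypothetical values chosen only to reproduce the qualitative behaviour (steeper front surface, shorter focal length), not parameters from the cited papers.

```python
def thin_lens_focal_mm(n, R1_mm, R2_mm):
    """Thin-lens lensmaker's equation: 1/f = (n - 1) (1/R1 - 1/R2).
    Radii follow the usual sign convention (R2 < 0 for biconvex)."""
    return 1.0 / ((n - 1) * (1.0 / R1_mm - 1.0 / R2_mm))

# Illustrative elastomer lens, n = 1.41 (assumed). Squeezing steepens
# the front surface (R1 shrinks), shortening the focal length, the same
# qualitative behaviour as the 14.8-30 mm tuning range reported.
relaxed  = thin_lens_focal_mm(1.41, R1_mm=25.0, R2_mm=-25.0)
squeezed = thin_lens_focal_mm(1.41, R1_mm=12.0, R2_mm=-25.0)
assert squeezed < relaxed
```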

3.3.4.2. Arthropod visual system

Apart from single-aperture optical devices, a large variety of multi-aperture optical systems have been developed, mainly imitating the appositional compound eyes of insects, as a means to understand the insect’s perspective of the world and to exploit its superior functions, especially in imaging and robotics. The microlens array system is, in fact, a bio-inspired product of the ommatidia of the compound eye. Each ommatidium produces a crude image of a specific part of the visual field. In Musca domestica, the common housefly, the views of neighbouring ommatidia overlap substantially, which enhances vision beyond what its optics alone could achieve (Citation46). The fly is able to detect the motion of tiny objects accurately by converging information from a large number of elementary motion detectors onto fewer detector cells in its tiny brain, achieving what is called motion hyperacuity.

Many visual sensing units mimicking the anatomy of the housefly have been developed, mainly to improve the field of view (Citation47–50), while a few others also integrated motion hyperacuity (Citation51–53). Earlier prototypes by Riley et al. (Citation51) and Benson et al. (Citation52) that exhibited motion hyperacuity used a single lens connected to about seven photodetectors; by pre-blurring an image and spreading it over adjacent pixels before spatial sampling, they increased the ability to detect minute motion in the image. Luke et al. (Citation53) extended this prototype to design a smaller multi-aperture, non-planar sensor, optimizing both the photodetector response (i.e. the PSF) and the overlap between responses to achieve motion hyperacuity analogous to that of flies. It consisted of seven hexagonally arranged lenses of 2.6-mm focal length, each connected to an optical fibre (Figure ). The distance (w) between lens and fibre was adjusted to achieve optimal pre-blurring at the fibre, such that the Gaussian response of the photodetectors peaked, while adjusting the inter-ommatidial angle (ΔΦ), the angle between adjacent lenses, produced an overlap of visual fields comparable to that of the fly, enabling superior motion detection. Finally, w = 2.4 mm and ΔΦ = 7.5° were chosen to maximize the motion response. This sensor outperformed the previous sensors in signal-to-noise ratio by a factor of about 2.78.
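A minimal sketch of why overlapping Gaussian receptive fields permit sub-ommatidial motion detection. Only the 7.5° inter-ommatidial pitch comes from the text; the sensitivity width sigma is our assumption, chosen so the fields overlap.

```python
import math

def detector_responses(source_angle_deg, centers_deg, sigma_deg):
    """Gaussian angular sensitivity of each lens/fibre channel."""
    return [math.exp(-0.5 * ((source_angle_deg - c) / sigma_deg) ** 2)
            for c in centers_deg]

# Two adjacent channels separated by the 7.5-deg inter-ommatidial angle.
centers = [0.0, 7.5]
sigma = 4.0  # assumed blur width, not a value from the paper

r0 = detector_responses(3.0, centers, sigma)
r1 = detector_responses(3.5, centers, sigma)  # source moves only 0.5 deg

# The *ratio* of channel responses shifts even for motion much smaller
# than the ommatidial pitch, which is what pre-blurring plus overlapping
# fields buys the sensor.
assert r1[1] / r1[0] > r0[1] / r0[0]
```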

Figure 17. (a) Benson’s sensor (Left) and Luke’s sensor (Right). (b) Non-planar design of Luke sensor (w – distance between lens and optical fibre, ΔΦ – inter-ommatidial angle). (Adapted from Luke et al. 2012.)

The compound eye provides a wide field of view because its numerous ommatidia point at different angles and capture different portions of the scene. Conventional microlens-array imaging systems have a limited field of view because the array sits in a rigid material, usually on a planar surface; in addition, optical aberrations greatly limit their optical quality. Song et al. (Citation54) devised a hemispherical digital camera inspired by the appositional compound eyes of fire ants and bark beetles, whose eyes have negligible off-axis aberrations and infinite depth of field, in addition to a wide FOV and motion hyperacuity. The hemispherical appositional camera consisted of two subsystems: an elastomeric microlens array and a matching array of silicon photodiodes at the focal length of the lenses. The microlens array was attached to supporting posts, in turn attached to a basement membrane. The two subsystems were combined and stretched so that each microlens element was integrated with a photodiode and the whole unit formed a hemisphere with a radius of curvature of 6.96 mm (Figure ). Advances in stretchable electronics and hemispherical photodetector arrays made this design possible. The field of view achieved was about 160° without overlapping fields, the acceptance angle (Δφ) being 9.7° and the inter-ommatidial angle (ΔΦ) 11°. The camera produced a single image from the samples of the most strongly activated photodiodes. The effective resolution can be improved by scanning the camera, which overlaps the images from adjacent lenses, and by increasing the number of ommatidia. Such camera models can be applied to surveillance and endoscopy devices.
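The quoted geometry can be sanity-checked with a rough one-dimensional model. The number of ommatidia across the hemisphere is our estimate, not a figure from the paper.

```python
def fields_overlap(acceptance_deg, interommatidial_deg):
    """Adjacent ommatidial fields overlap when the acceptance angle
    exceeds the inter-ommatidial angle."""
    return acceptance_deg > interommatidial_deg

def approx_fov_deg(n_across, interommatidial_deg):
    """Rough 1-D field of view: number of ommatidia across the eye
    times the angular pitch between them (a simplification of ours)."""
    return n_across * interommatidial_deg

# Song et al.'s values: acceptance 9.7 deg < pitch 11 deg, so the
# fields do not overlap, and roughly 15 ommatidia across the
# hemisphere would cover about the reported 160 deg.
assert not fields_overlap(9.7, 11.0)
assert 150.0 < approx_fov_deg(15, 11.0) < 180.0
```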

Figure 18. (Left) Elastomeric microlens array and corresponding array of photodiodes and blocking diodes which are interconnected by filamentary wires. Magnified view of both the subsystems are shown. (Right) Bonding them together and stretching them provides the camera a hemispherical shape. (Inset) Magnified view of two elements of the camera; Δφ – acceptance angle and ΔΦ – inter-ommatidial angle, t – thickness of the basement membrane attached to microlens, h – height of the supporting post, R – radius of curvature of the camera, d – diameter of active portion of the photodiode. Source: Song et al. 2013, reproduced with permission from Nature.

Although compound eyes have the advantage of superior motion detection, their spatial resolution is far lower than that of the human eye, making them practically unsuitable for object recognition in surveillance and other applications. Because the compound eye is made up of numerous small ommatidia, each with its own small-aperture optics, its image suffers from diffraction, leading to a wide Airy disc (Citation55). The angular acceptance function of each retinula cell overlaps with those of neighbouring cells, creating overlapping Gaussian sampling in the eye (Citation56). This overlap reduces the insect’s spatial resolution to a minimum angle of resolution of about 2/5°, compared with 1/60° in humans for 20/20 vision (Citation55). To improve the resolution of compound-eye-inspired imaging systems, Lee et al. (Citation57) introduced ‘COMPU-eye’ (COMPUtational compound eye), a bio-inspired imaging system with a structure similar to the hemispherical appositional camera discussed above, but with an acceptance angle (Δφ = 8°) much larger than the inter-ommatidial angle (ΔΦ = 1.5°), resulting in highly overlapping and larger receptive fields than when Δφ is nearly equal to ΔΦ, as in previous designs (Figure ). In conventional designs, resolution was improved by micro-scanning techniques, which capture multiple images of the field at slightly different locations and integrate them into a high-resolution image. In COMPU-eye, the low-resolution image caused by the overlapping fields was instead resolved by an integrated digital signal processing device, and the resolution of the system improved fourfold compared with previous designs. Moreover, even if some ommatidia are damaged, the final observation is not affected, because each point of the object is viewed by multiple ommatidia.
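The role of the digital processing step can be sketched as a linear inverse problem: each ommatidium records a weighted (Gaussian) average of many scene points, and a scene consistent with those overlapping measurements is recovered by least squares. All dimensions and blur widths below are toy values of ours, not the paper’s, and this is a stand-in for, not a reconstruction of, the authors’ DSP algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy: 32 scene points sampled by 24 ommatidia whose wide,
# overlapping Gaussian acceptance functions form the mixing matrix A.
n_scene, n_omm = 32, 24
scene = rng.random(n_scene)

angles = np.linspace(0, n_scene - 1, n_omm)
idx = np.arange(n_scene)
A = np.exp(-0.5 * ((idx[None, :] - angles[:, None]) / 3.0) ** 2)

y = A @ scene  # blurred, low-resolution ommatidial readout

# Recover a scene estimate consistent with the overlapping
# measurements via least squares (min-norm solution).
recovered, *_ = np.linalg.lstsq(A, y, rcond=None)

# The estimate reproduces the measurements essentially exactly.
assert np.linalg.norm(A @ recovered - y) < 1e-6
```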

Figure 19. Image formation in a conventional compound eye (a) and the proposed COMPU-EYE (b). (a) The conventional compound eye consists of 8 × 8 ommatidia with Δφ = 1.5° and ΔΦ = 1.5°. (b) COMPU-EYE consists of 8 × 8 ommatidia with Δφ = 8° and ΔΦ = 1.5° as well as a DSP algorithm. Source: Lee et al. 2016, reproduced with permission from Optics Express.

Inspired by the compound eyes of insects, researchers have developed an omnidirectional eye model that captures only behaviour-related information, using about 642 hexagonal ommatidia on a spherical surface pointing in all directions (Citation58). Insects control their behaviour, especially flight, on the basis of the visual input they receive: specialized tangential neurons analyse and integrate local information on light intensity and optic flow and generate a global response. The spherical eye model captures spherical images at multiple views of the scene taken from the same eye position, so the field of view spans 180° both horizontally and vertically (Figure ). A detailed discussion of biomimetic flight control can be found in the book by Floreano (Citation59).

Figure 20. Spherical omnidirectional eye model. An image captured by the device. (Adapted from Neumann 2002).

Currently, improvements in imaging under low light have been achieved electronically, using on-chip multiplication gain or highly photosensitive image sensors. Recently, a new optical approach has produced an artificial eye that can be used in astronomy, medicine and security (Citation60). The device consists of a ball lens fixed in a central iris, covered by a polydimethylsiloxane (PDMS) membrane and a bio-inspired photosensitivity enhancer (BPE) (Figure ). The BPE, which combines the superpositional compound-eye model with the retinal structure of Gnathonemus petersii, the elephantnose fish, has a 48 × 48 array of micro-photocollectors (μ-PCs) with parabolic reflective sidewalls. The ball lens forms a hemispherical image on the PDMS membrane, similar to a camera-type eye. The μ-PCs are oriented so that their axes pass through the geometric centre of the ball lens, like the crystalline microcups in the eyes of the elephantnose fish. Each μ-PC has two opposite facets (the larger facing the PDMS membrane) enclosed by four parabolic sidewalls coated with reflective aluminium, which reflect all light entering the large facet and focus it onto the small facet, increasing the light intensity more than threefold over the entire visible spectrum, unlike the biological counterpart, which reflects only red light (Figure ). The image is then captured on a matching image sensor. A BPE fabricated on a curved surface produces fewer distortions than one on a planar surface.

Figure 21. (a) Schematic diagram showing the eye anatomy of elephantnose fish. Note the crystalline cup in the retina. (b) Artificial eye, (c) its front view, (d) its back view (e) individual parts of the artificial eye, (inset) structure of micro-photocollectors. Source: Liu et al. 2016, reproduced with permission from PNAS.

Replacing the microlens array, Moghimi et al. (Citation61) used an array of Fresnel zone plates (FZPs) attached to a flexible substrate (Figure ). These zone plates, of both transmissive and reflective types, consist of alternating dark and bright zones that focus light by diffraction, unlike mirrors and lenses. The dark zones are made of black Si with nanoscale surface roughness, which makes them highly absorbent; in the reflective FZP, the bright zones carry a layer of aluminium that enhanced the reflectivity by 90%. In both types of FZP, the absorption of light in the bright zones was far smaller than in the dark zones, making the plates efficient diffractive microlenses. The flexible substrate gives the device a wide FOV by reducing the overlap between adjacent fields, and the location of focus can be varied by changing the shape of the substrate.
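The focusing geometry of a zone plate follows from the standard zone-boundary formula. The wavelength and focal length below are illustrative values of ours, not parameters from the paper.

```python
import math

def zone_radius_um(n, wavelength_um, focal_um):
    """Radius of the n-th Fresnel zone boundary, r_n = sqrt(n*lambda*f),
    valid when f >> r_n. Making alternating zones absorbing (or
    reflective) lets the remaining light interfere constructively at
    the focus, which is how an FZP focuses by diffraction."""
    return math.sqrt(n * wavelength_um * focal_um)

# Illustrative numbers: green light (0.55 um), 1-mm focal length.
r1 = zone_radius_um(1, 0.55, 1000.0)
r4 = zone_radius_um(4, 0.55, 1000.0)
assert abs(r4 - 2 * r1) < 1e-9  # zone radii scale as sqrt(n)
```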

Figure 22. Fresnel zone plates (FZP) on a planar surface (a) and on a flexed surface (b). Field of view of two adjacent FZP and the degree of overlap shown on a planar surface (c) and flexed surface (d). Source: Moghimi et al. 2015, reproduced with permission from Scientific Reports.
3.3.4.2.1. Bee visual system

Optical engineers have developed a catadioptric imaging system, based on the bee’s visual system, with a large field of view that captures light by both reflection and refraction (Citation62). The outer part of the camera has a downward-facing convex mirror with an unpolished centre, which admits light refracted through a combination of lenses in the central part of the sensor (Figure ). The light reflected from the mirror and the light refracted by the lenses focus at the same point, giving a field of view of about 280°. This converging system was embedded in an acrylic glass block coated with aluminium to avoid interference from mechanical parts.

Figure 23. (a) Catadioptric imaging system. (b) Rays shown in dashed lines are reflected at the convex mirror and the central rays shown in solid lines are refracted by the lens system and enter through a circular area not covered by the convex mirror. The black dot behind the acrylic glass indicates the nodal point of the camera lens. Source: Sturzl et al. 2010, reproduced with permission from Bioinspiration & Biomimetics.
3.3.4.2.2. X-ray telescope

Studying celestial objects that emit radiation in the X-ray region of the electromagnetic spectrum is important for understanding the high-energy universe. Unlike visible light, X-rays are not refracted appreciably on passing through a material and are reflected from a surface only when incident at a grazing angle (Citation63). To make X-ray imaging possible in space, Roger Angel (Citation64) studied lobster eyes, which work by grazing-angle reflection. Lobsters, macruran crustaceans, have a field of view greater than 180°, allowing them to observe objects around them without turning their heads. They have a reflecting superposition eye: a hemispherical surface covered with radially oriented square tubes with reflecting interiors (Figure ). By analogy with these eyes, Angel conceived an X-ray telescope with square-cell reflectors, combining high sensitivity at high energy with a large field of view. Light incident at a grazing angle is reflected twice off adjacent reflective sides and comes to a true focus on a hemispherical ‘retina’ whose radius is half that of the cell surface; the incident and reflected rays lie in the same plane as the cell axis and form equal angles with it. Rays that reflect only once form two linear images passing through the true focus, each parallel to one set of cell walls and perpendicular to the other, producing a cross pattern. Although the superposition of the true-focus and linear-focus images slightly reduces sensitivity, the design suits spectroscopy of relatively bright sources. For X-ray imaging, the optics was made 100 times longer than its width, making the system highly sensitive over a high-energy range. It was calculated that a telescope of this design with a 10-m focal length and 70-μm cell width could achieve a maximum resolution of 2 arc seconds at a wavelength of 5 Å (Citation64).
In recent years, X-ray telescopes mimicking the reflecting superposition eyes of lobsters have been developed and launched into space (Citation65, 66).
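Angel’s resolution estimate can be reproduced approximately from the geometry alone. This is a sketch that treats the cell aperture over the focal length as the blur scale and ignores diffraction and the single-reflection cross-arms, which is presumably why it comes out somewhat below the quoted 2 arc seconds.

```python
import math

def cell_limited_resolution_arcsec(cell_width_m, focal_length_m):
    """Geometric angular resolution of a lobster-eye optic: the
    square-cell aperture divided by the focal length, converted from
    radians to arc seconds."""
    return math.degrees(cell_width_m / focal_length_m) * 3600

# Angel's numbers: 70-um cells and a 10-m focal length give a
# geometric limit of roughly 1.4 arc seconds.
res = cell_limited_resolution_arcsec(70e-6, 10.0)
assert 1.0 < res < 2.0
```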

Figure 24. (Left) Scanning electron micrograph of the lobster eye, Palinurus vulgaris. (Right) Magnified view of the square cells. A schematic diagram of the reflecting superposition eye can be seen in Figure 5(c). (Adapted from Grubsky et al. 2007.)

3.4. Retina

The quality of the perceived image depends not only on the nature of the optical image but also on the density and spacing of photoreceptors (Citation5). The principal eyes of jumping spiders (Salticidae: Dendryphantinae) have a vertical strip of retinal receptors, which covers 20° of vertical field of view but restricts the horizontal field. By lateral scanning eye movements, the spider expands its field of view considerably, which helps it recognize prey (Citation67). The position of the retina relative to the lens is equally important for achieving a good-quality image. In the small eyes of gastropod molluscs and the dorsal ocelli of insects, the retina lies so close to the lens that the organism can sense only the quantity and quality of light from different angles, not resolve the details of objects (Citation5, 6).

3.4.1. Resolution vs. sensitivity

Nature offers a trade-off between resolution and sensitivity in an organism’s visual system, set by its living requirements. The Australian net-casting spider, Deinopis, is a nocturnal animal that lacks fine resolution but has large eyes with large pupils and receptors, making it sensitive enough to capture prey at night. By contrast, the jumping spider Portia fimbriata has an inter-receptor angle of only 2.4 arc minutes, providing daytime resolution close to that of humans despite its small eyes (Citation68). Humans can resolve two objects 0.5 arc minutes apart, whereas the fruitfly Drosophila can resolve objects only about 4° apart. In sharp contrast, some organisms have merely a dot retina with a few photoreceptors yet are able to form images; copepod crustaceans such as Copilia and Sapphirina achieve this by continuous lateral scanning of the retina (Citation6).
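The quoted angular resolutions can be translated into resolvable feature sizes at a fixed distance; a first-order trigonometric sketch in which the 10-cm viewing distance is arbitrary and optical and sampling factors are ignored.

```python
import math

def resolvable_size_mm(inter_receptor_arcmin, distance_mm):
    """Smallest feature separation resolvable at a given distance for
    an eye whose receptors are spaced at the given angle (optical blur
    and Nyquist factors ignored; a first-order sketch)."""
    theta = math.radians(inter_receptor_arcmin / 60.0)
    return distance_mm * math.tan(theta)

# Portia's 2.4-arcmin spacing vs Drosophila's ~4-deg resolution,
# both evaluated at an arbitrary 10 cm:
portia = resolvable_size_mm(2.4, 100.0)
fly    = resolvable_size_mm(4 * 60.0, 100.0)  # 4 deg expressed in arcmin
assert portia < 0.1 < fly  # ~0.07 mm vs ~7 mm
```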

3.4.2. Colour photopigments

The human retina has three types of cone photoreceptor, sensitive to overlapping regions of the visible spectrum; under dim light, however, humans are essentially blind to colour information. Nocturnal helmet geckos, Tarentola chazaliae, have large cones with three different photopigments, sensitive to ultraviolet, blue and green wavelengths, respectively (Citation24). Their colour sensitivity is calculated to be 350 times that of humans, and they have a multifocal optical system that corrects for chromatic aberration. Species of mantis shrimp (Haptosquilla trispinosa, for example) have the largest number of photoreceptor types (between 16 and 21), each sensitive to a narrow band (1–5 nm) of wavelengths ranging from the ultraviolet to the red, as well as to polarized light. Unlike trichromatic humans, these animals use a unique colour-coding system based on temporal signals produced by scanning eye movements, which enables them to recognize colours rather than merely discriminate them (Citation69).

3.4.3. Bio-inspired sensors

Studying the visual systems of organisms provides novel ideas for artificial sensors and imaging systems whose performance surpasses existing technology. The human eye and a simple camera are often compared in terms of how an image forms on the sensing element: inspired by the human visual system, digital cameras place a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) array behind a lens or lens system in a single-aperture design (Citation53). Although widely used for capturing images, such cameras fail when the object is smaller than the camera’s pixel size or when the object’s movement is smaller than the inter-pixel spacing (Citation70).

3.4.3.1. Foveated imaging system

Humans and many other animals have a foveated imaging system: only the part of the scene imaged on the fovea is seen at high resolution, compared with peripheral vision. Mimicking this design, Martinez et al. (Citation71) devised a foveated imaging system with varying resolution across the image (Figure ). The design used liquid-crystal spatial light modulators (SLMs), which behave much like a deformable mirror. Applying a small voltage to an individual SLM pixel changes the refractive index along the direction of propagation, varying the optical path across the wavefront and correcting aberration in that area. The system can dynamically refocus on any area of interest within a large field of view in a few milliseconds. This approach can reduce the bandwidth required for image transmission, since the sensor need only be scanned to the area that requires high resolution.
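A crude simulation of foveated sampling, using a block average as a stand-in for reduced peripheral resolution. None of this reflects the actual SLM optics; it only illustrates the bandwidth argument, that most of the frame can be carried at low resolution.

```python
import numpy as np

def foveate(image, cx, cy, fovea_radius):
    """Keep full resolution inside a circular 'fovea'; replace the
    periphery with a 4x4 block average (our stand-in for the lower
    sampling accepted off-axis). Image sides must be multiples of 4."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= fovea_radius ** 2

    coarse = image.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, 4, axis=0), 4, axis=1)
    return np.where(inside, image, coarse)

img = np.random.default_rng(1).random((32, 32))
out = foveate(img, cx=16, cy=16, fovea_radius=6)
assert np.allclose(out[16, 16], img[16, 16])  # fovea untouched
```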

Figure 25. (a) Foveated imaging system that focuses the light from specific direction based on how liquid crystal spatial light modulators change the refractive index. (b) Aerial image of an airport taken from the system. (Adapted from Martinez et al. 2001 and Szema et al. 2006).
3.4.3.2. Fibre optics in nature

Current information transfer and communication facilities are possible because of optical fibres. Yet fibre optics, a landmark invention in telecommunication technology, has existed in nature, with advanced features, for many millions of years. Superior flexibility and mechanical stability achieved with minimal material are hallmarks of the primitive organism Euplectella. These deep-sea glass sponges (commonly called ‘Venus flower baskets’ because each houses a mating pair of shrimp) live thousands of feet below sea level and have an intricate skeleton made of glass spicules 40–70 μm thick (Citation72, 73). Free thin fibres at the base, each with a light-condensing lens, anchor the skeleton to the sea floor (Figure (a)). Synthetic optical fibres have a high-index core surrounded by a cladding of lower refractive index; total internal reflection of the light entering the core allows transmission over long distances with little loss of information. The same design exists, in slightly different form, in the bio-optical fibres of glass sponges, which transmit light captured from co-existing bioluminescent bacteria and illuminate the entire skeleton. The spicules have three distinct regions: a 2-μm-thick core of refractive index 1.46 enclosing an organic filament, a central cylinder of lower refractive index surrounding the core, and an outer striated shell with multiple nanoscale layers whose refractive index progressively increases towards the outermost layer (Figure (b) and (c)). These layers are glued together with a natural protein, giving the spicule immense structural strength and the flexibility to bend without fracturing, unlike synthetic fibres. Light transmission through the spicules woven into the skeleton differs from that through the free fibres at the base.
The short free fibres function as multimode fibres, letting most of the light travel through the cladding, because the refractive-index difference between air and the spicule shell is larger than that between the shell and the core. The spicules embedded in the skeleton allow light only through the core, functioning as single- or few-mode fibres (Citation72, 73). Another advantage of the glass sponge fibres is that they are synthesized at a low ambient temperature, which allows sodium ions to be incorporated, increasing the refractive index of the core silica. This improves their optical properties, with no birefringence, unlike man-made fibres manufactured at high temperatures (Citation72).
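The two guiding regimes can be illustrated with the numerical aperture of a step-index guide, NA = √(n₁² − n₂²), which sets the acceptance cone for total internal reflection. The refractive indices below are illustrative assumptions for a silica spicule, not measured values from the cited studies:

```python
import math

def numerical_aperture(n_core, n_clad):
    """NA of a step-index guide; sets the acceptance cone for TIR."""
    return math.sqrt(n_core**2 - n_clad**2)

# Illustrative indices (assumed, not values from the cited papers):
n_shell = 1.45   # outer striated shell of the spicule
n_core  = 1.46   # silica core surrounding the organic filament
n_air   = 1.0

# Free fibre in air: the large shell/air index step dominates, so the
# whole spicule guides light (multimode transmission through the cladding).
na_shell_air = numerical_aperture(n_shell, n_air)

# Embedded spicule: only the small core/shell step remains to guide light.
na_core_shell = numerical_aperture(n_core, n_shell)

print(f"shell-air NA:  {na_shell_air:.3f}")   # > 1: accepts all incidence angles
print(f"core-shell NA: {na_core_shell:.3f}")  # narrow cone, core-confined light
```

With these assumed indices the air-clad fibre has an NA above 1 (it accepts light from the full hemisphere), while the embedded spicule confines guidance to a narrow cone around the core, consistent with the multimode-versus-few-mode distinction described above.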

Figure 26. (a) Structure of sea sponge Euplectella. (b) Three structural regions of spicule. SS – outer striated shell, CC – middle central cylinder. (Inset) Organic filament within hollow core. (c) Refractive index profile of the spicule. Source: Aizenberg et al. 2004, Sundar et al. 2003, reproduced with permissions from PNAS [Copyright (2004) National Academy of Sciences, U.S.A.] and Nature.


The silk threads of glow worms and spiders act as optical fibres and could be used in minimally invasive medical imaging. These naturally produced fibres are thin (about 5 μm in diameter) with a refractive index of 1.5; air acts as an outer cladding in addition to the inner cladding of the SiO2 layer (Citation74). Their wave-guiding capability, given by the V parameter, was found to be greater than 2.405, indicating multimode behaviour of the fibre.
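The multimode conclusion follows from the normalized frequency V = (2πa/λ)·NA, with single-mode guiding requiring V < 2.405. A minimal sketch, using the silk diameter and index quoted above; the red test wavelength is an assumption for illustration:

```python
import math

def v_parameter(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency V of a step-index fibre.
    V < 2.405 means only the fundamental mode is guided."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return 2 * math.pi * core_radius_um / wavelength_um * na

# Silk fibre ~5 um in diameter with n = 1.5 and air (n = 1.0) cladding;
# 0.633 um (red HeNe line) is an assumed probe wavelength.
V = v_parameter(core_radius_um=2.5, wavelength_um=0.633,
                n_core=1.5, n_clad=1.0)

print(f"V = {V:.1f} -> {'multimode' if V > 2.405 else 'single-mode'}")
```

For these parameters V is far above the single-mode cutoff, so the silk thread supports many guided modes, as reported.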

Although commercial optical fibres were not bio-inspired when they were first designed and manufactured in the 1950s, naturally occurring optical fibres have a few superior properties, such as mechanical stability and flexibility of the material at the nanometre scale, that are worth copying and incorporating into synthetic fibres.

3.4.3.3. Fibre optics in the retina

Retinal architecture is quite complex in vertebrate eyes: the photosensitive cells, the rods and cones, are located behind the layers of other retinal cells, which transmit the visual information to the brain. Light entering the eye has to traverse the entire thickness of the retina before it can be sensed by the photoreceptors, an arrangement that causes large amounts of light scattering in the optically inhomogeneous retina (Citation75). Despite this, superior visual performance is possible because of the wave-guiding properties of the glial Müller cells and of the photoreceptors themselves. Müller cells run along the entire thickness of the retina to the external limiting membrane, and their refractive index and diameter vary along their span from the end-foot to the outer processes (Figure 27(a)) (Citation76). These variations compensate one another, so the wave-guiding capability (V) of a Müller cell remains nearly constant along its length. A fibre is said to possess excellent wave-guiding properties if V is greater than 2, a condition Müller cells satisfy (Figure 27(b)).

Figure 27. (a) Müller cell shape and local refractive index along its different sections and photoreceptor outer segment (ROS) (Scale bar 25 μm). (b) Diameter and V parameter for 700 nm (red) and 500 nm (blue) are shown at indicated sites of the Müller cell. Note that the light-guiding capability remains fairly constant although the diameters and refractive indices change along the cell. Source: Franze et al. 2007, reproduced with permission from PNAS, Copyright (2004) National Academy of Sciences, U.S.A.


Agte et al. (Citation77) found that light-guiding efficiency peaks when the beam enters the funnel-shaped end-foot of a Müller cell, resulting in a less scattered patch of light falling on the photoreceptors (Figure 28). Recent studies show that Müller cells are wavelength-dependent waveguides, concentrating the green-red part of the visible spectrum onto the cones and the blue-purple light onto the rods (Citation78). This ability also improves night vision by efficiently guiding small amounts of light to the highly sensitive rod photoreceptors.

Figure 28. Light-guiding capability of Müller cells. The laser light beam (green) from a glass fibre illuminates the retinal slice attached to a filter membrane. Müller cells and the membrane are stained by a red fluorescent dye. A laser scanning microscope records this fluorescence, as well as the scattering of the laser light (green; yellow if overlaid by red-fluorescent structures). (Bottom): for better visualization, the scattering of laser light is combined with a schematic drawing of Müller cells (red) and the photoreceptor layer (PRL, lime-green). (a) Laser passing in between 2 Müller cells causes scattering within the retina and produces wide spread patch of light at the filter. (b) Laser entering directly through the Müller cell endfoot causes a small intense spot. Scale bars 20 μm. Source: Reichenbach et al. 2014, reproduced with permission from e-Neuroforum.


The cone photoreceptors have a wave-guiding property of their own, by which incident light is directed from the entrance of the receptor to the outer segment, where the photopigment is present; this preserves a sharp image throughout the retinal depth (Citation79–82). Photoreceptors can guide light because they are composed of material optically denser (of higher refractive index) than the surrounding medium. The main advantage of light guiding is economy of photopigment for a given photon capture (Citation83), which correlates well with the conical shape of the cone outer segments. Consider the maximum length photoreceptors can have if they are pressed together in a continuous sheet: they cannot be too long, since the contrast of the image would be reduced by defocus. Applying the traditional Rayleigh expression for tolerance to defocus yields a maximum outer-segment length that depends on the F-number of the eye (F = f/D, where f is the focal length and D is the pupil diameter). Snyder remarks that all eyes are scaled according to this (Citation84).
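The Rayleigh argument can be made concrete with the quarter-wave depth-of-focus tolerance, approximately 2λF². The focal length and pupil diameter below are assumed typical daylight values for a human eye, chosen only for illustration:

```python
# Sketch of the Rayleigh depth-of-focus argument: an outer segment much
# longer than the depth of focus would sample a defocused image.
wavelength_mm = 550e-6   # mid-visible light, 550 nm, expressed in mm
f_mm = 17.0              # assumed effective focal length of the human eye
D_mm = 3.0               # assumed pupil diameter in daylight

F = f_mm / D_mm                                      # F-number, F = f/D
depth_of_focus_um = 2 * wavelength_mm * F**2 * 1000  # quarter-wave tolerance

print(f"F-number: {F:.1f}")
print(f"Depth of focus ~ {depth_of_focus_um:.0f} um")
```

The result, a few tens of micrometres, is of the same order as foveal cone outer-segment lengths, which is the scaling relation Snyder points to.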

The Stiles–Crawford effect of the first kind describes the directional sensitivity of the retinal receptors: visual sensitivity is greatest for light entering through the centre of the pupil and gradually diminishes towards the periphery (Citation79–82). The photoreceptors therefore align their axes towards the pupillary centre in order to function efficiently, because the angle of incidence of light at a photoreceptor affects both the propagation of light inside it and its response. Modal patterns have been observed at or near the terminations of the photoreceptor outer segments; these are influenced by factors such as the configuration and diameter of the receptor, the refractive indices of the receptor and the surrounding media, the wavelength of the incident energy and its angle of incidence (Citation81). Together, these two wave-guiding structures allow superior vision in the inverted retina. A similar wave-guiding character is present in the rhabdom, the light-detecting central rod of compound eyes, which carries light from the optical elements to the neural tissue (Citation5, 11). It should also be pointed out that the lowest-order modes of multimode fibres composed of birefringent material can differ radically from the familiar modes of multimode isotropic fibres. In general, the modal fields are neither uniformly polarized nor circularly symmetric, as they would be in weakly guiding, circularly symmetric isotropic fibres. Thus, multimode anisotropic fibres can exhibit dramatic polarization effects that may have important practical applications; such fibre designs were inspired by fly eyes (Citation85).
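The directional sensitivity is commonly summarized by a Gaussian-type model, η(r) = 10^(−ρr²), where r is the distance of the pupil-entry point from the peak of the function; ρ ≈ 0.05 mm⁻² is a typical empirical value, assumed here for illustration:

```python
def sce_efficiency(r_mm, rho=0.05):
    """Stiles-Crawford effect of the first kind: relative luminous
    efficiency of a ray entering the pupil r_mm from the peak location.
    rho ~ 0.05 mm^-2 is a typical empirical value (assumed here)."""
    return 10 ** (-rho * r_mm**2)

# Efficiency falls off smoothly towards the pupil margin.
for r in (0.0, 1.0, 2.0, 3.0):
    print(f"r = {r:.0f} mm -> relative efficiency {sce_efficiency(r):.2f}")
```

A ray entering 3 mm from the peak is, on this model, only about a third as effective as a ray through the centre, which is why receptor alignment towards the pupil centre matters.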

Human rod cells can detect a single photon of light thanks to the photosensitive molecule rhodopsin, which initiates a cascade of biochemical reactions that alters ion flow and finally results in an amplified signal (Citation86–88). This method of single-photon detection was borrowed from the human eye and applied to fabricating a highly efficient and sensitive short-wave infrared single-photon imager for high-speed imaging devices, telecommunications and medical instruments (Citation89). This nanoinjection photon detector combines a large absorption area with a nanoscale sensor, analogous to the anatomy of the rod. The doping levels of the InP, p-InGaSb and InGaAs layers used in the device generate an electric field in the absorption region (Figure 29). When a photon is absorbed there, holes are created that travel to the nanoinjector under the electric field and become trapped, raising the potential at the nanoinjector, which in turn controls an amplified flow of electrons. This mechanism is comparable to that of the ion gates in the rod. The detector produces very high signal amplification, with gain values above 10,000 at room temperature and low noise levels.

Figure 29. Schematic diagram of nanoinjection single-photon detector. Source: Memis et al. 2008, reproduced with permission from SPIE.


3.5. Percept

Although the eye designs in nature are versatile, the basic idea of image formation is the same in all of them. The retina sends the brain information about the light intensity in the scene, captured within the fixed solid angles of the photoreceptors, which forms the raw image (Citation5). Whether the retinal image is inverted or erect, the brain, having evolved along with the eye, knows how to interpret it. In this section, we discuss a few of the perceptual abilities present in the animal kingdom and how humans use them in developing new sensing devices.

3.5.1. Motion hyperacuity

Motion hyperacuity is the ability of a stationary visual system to detect moving objects with greater sensitivity than its photoreceptor arrangement would suggest (Citation90). Compound eyes have superior motion detection owing to the large overlap in the visual fields of adjacent ommatidia and their neural circuitry. Motion hyperacuity in compound eyes and the bio-inspired optical devices it has motivated were discussed in Section 3.3.

3.5.2. Polarization vision

The sun's rays are scattered and polarized as they pass through the atmosphere, an effect especially apparent at near-ultraviolet and blue wavelengths. The degree and direction of polarization depend on the position of the sun, creating an electric-vector (e-vector) pattern across the celestial hemisphere. Humans are not very sensitive to the e-vector orientation of polarized light, but in some insects like honey bees and desert ants, and in many cephalopods and crustaceans, polarization vision is an important perceptual feature (Citation91). These animals use polarization cues to orient themselves and to navigate in search of food or water, for camouflage breaking or for motion detection. Interestingly, around 1000 AD the Vikings may have used polarization information in the sky for navigation by looking through a birefringent sunstone, or Iceland spar, a crystal made of calcite, and observing the patterns created by polarized light (Citation92–94). This birefringent crystal behaves like a depolarizer: when rotated slowly, it depolarizes the partially polarized light at a specific orientation of the crystal (the isotropy point) and creates a yellow and blue bowtie pattern known as Haidinger's brush, which indicates the direction of the sun. The pattern is evident even when the sun is hidden by clouds or in twilight conditions, when the luminosity of the sky is low. Haidinger's brush, an entoptic visual phenomenon (Citation95), can also be seen without a birefringent crystal by looking at a bright screen and tilting the head. This allows the polarization plane to be detected even when the degree of polarization is as low as 56%, a threshold that varies with the density of the macular pigments (Citation96). Haidinger's pattern has been attributed to the dichroic property of the yellow pigment in the macula. 
Using Fresnel's equations, we can show that oblique rays passing through the cylindrical geometry of the blue cones in the fovea produce this dichroism, enabling humans to sense polarized light (Citation97).
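The basic mechanism is that the Fresnel equations give different power transmittances for s- and p-polarized light at oblique incidence on a dielectric interface. A minimal sketch; the refractive indices and the steep incidence angle below are illustrative assumptions, not values from the cited study:

```python
import math

def fresnel_T(n1, n2, theta_i_deg):
    """Power transmittance (T_s, T_p) for s- and p-polarized light at a
    plane interface between lossless dielectrics (Fresnel equations)."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)   # Snell's law
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / \
         (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / \
         (n2 * math.cos(ti) + n1 * math.cos(tt))
    return 1 - rs**2, 1 - rp**2              # T = 1 - R for lossless media

# Assumed illustrative indices: light in cytoplasm (n ~ 1.36) meeting a
# higher-index membrane (n ~ 1.43) at a very oblique angle (80 degrees).
Ts, Tp = fresnel_T(1.36, 1.43, 80.0)
print(f"T_s = {Ts:.3f}, T_p = {Tp:.3f}")
```

At such grazing angles the p-polarized component is transmitted noticeably more than the s-polarized one, so a stack of obliquely crossed interfaces transmits the two polarizations unequally, which is the dichroism the argument relies on.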

Honey bees and bumble bees indicate the direction of nectar to their fellow bees by a peculiar waggle dance, orienting their waggles in the direction of the nectar relative to the position of the sun (Citation98, 99). This behaviour relies on the bees' ability to sense the direction of polarization of sunlight in the UV range using the dorsal rim area of their eyes (Citation100). The orderly arrangement of the microvillar projections in the rhabdomere of the ommatidium, and the alignment of the photopigment molecules in the microvilli, are responsible for polarization sensitivity in these compound eyes. Snyder discovered that the 9th retinula cell, one of the UV-sensitive cells in the ommatidium, has a short cell length and microvilli oriented perpendicular to those of the other UV-sensitive receptors, which makes it highly sensitive to polarized light (Citation101). However, in order to code the e-vector of the polarized light precisely, the bee probably integrates information from the neighbouring 25–50 ommatidia, which have differently oriented 9th-cell microvilli, and/or from the ommatidia of both eyes sharing the same visual field. While honey bees, desert ants and flies use their UV receptors to sense polarization, crickets use their blue receptors for polarization vision (Citation12).

Inspired by these polarization-sensitive insects, researchers have developed novel navigation sensors to serve as a compass for robots and to imitate the animals' natural behaviour (Citation12, 91). An unmanned aerial vehicle has been tested successfully with an artificial polarization compass as its primary heading sensor (Citation102). Polarized-skylight navigation sensors could also help wheelchair-bound people with vision difficulties and patients with Parkinson's disease. Mimicking the polarization-sensitive neurons, researchers developed polarization opponent units (POL-OP units), each consisting of a pair of polarization sensors feeding a log-ratio amplifier. Each sensor had a photodiode, a blue transmission filter and a linear polarizer whose polarization axis was perpendicular to that of its partner, similar to the neural arrangement of insects. Three pairs of sensors were arranged so that the polarizing axes of the positive channels were at 0º, 60º and 120º with respect to the robot's body. A robot fitted with these sensors, along with ambient light sensors, was able to sense the solar meridian and determine its heading relative to it. Many similar sensors using CCD or CMOS video cameras have been designed, expanding the applications (Citation91).
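The opponent-unit idea can be sketched in a toy simulation: each unit takes the log-ratio of two orthogonal Malus's-law sensors, and three units at 0°, 60° and 120° suffice to recover the e-vector orientation. The degree of polarization and the brute-force estimator below are illustrative assumptions, not the published design:

```python
import math

def pol_sensor(theta_deg, phi_deg, degree=0.8, i0=1.0):
    """Intensity behind a linear polarizer at angle phi for partially
    polarized light with e-vector angle theta (Malus's law).
    degree = 0.8 assumes strongly, but not fully, polarized skylight."""
    return 0.5 * i0 * (1.0 + degree *
                       math.cos(math.radians(2 * (theta_deg - phi_deg))))

def pol_op_unit(theta_deg, phi_deg, degree=0.8):
    """Log-ratio of two orthogonal sensors, as in the POL-OP design."""
    return math.log(pol_sensor(theta_deg, phi_deg, degree) /
                    pol_sensor(theta_deg, phi_deg + 90.0, degree))

def estimate_evector(responses, axes=(0.0, 60.0, 120.0)):
    """Brute-force search (0.5-degree steps over the 180-degree range of
    e-vector orientations) for the angle best matching the responses."""
    def err(cand):
        return sum((pol_op_unit(cand, a) - r) ** 2
                   for a, r in zip(axes, responses))
    return min((k * 0.5 for k in range(360)), key=err)

true_theta = 37.0   # assumed sky e-vector orientation for the demo
readings = [pol_op_unit(true_theta, a) for a in (0.0, 60.0, 120.0)]
print(f"estimated e-vector: {estimate_evector(readings):.1f} deg")
```

Three opponent channels are the minimum that removes the ambiguity of a single channel: any e-vector orientation (modulo 180°) produces a unique triple of log-ratio responses.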

3.5.3. Bionic devices

Biomimetic vision sensors, like those discussed in the previous sections, are of great benefit in developing computer and robotic vision. Making machines see what humans see is the ultimate goal of research in this field, with wide applications in unmanned vehicles, missiles, surveillance and security. Another area is the development of artificial sensors that can help humans see again after their natural photosensors have been damaged. Blindness is the most devastating disability, with 285 million people visually impaired and 39 million blind worldwide, according to the World Health Organization (Citation103).

In general, bionic devices help restore lost sensory function such as hearing or seeing, and treat movement and psychiatric disorders, by electrically stimulating the sensory organ, brain or spinal cord through implanted electrodes (Citation104). Studying how the retina of the human eye works has led to artificial sensors and bionic eyes. These bionic devices electrically stimulate the intact retinal layers through electrode implants placed at the epiretinal, subretinal or suprachoroidal level. The first FDA-approved bionic eye, the Argus II system developed by Second Sight (Sylmar, California, USA), is used in patients who have lost their vision to conditions such as retinitis pigmentosa and age-related macular degeneration that destroy the photoreceptors (Citation104, 105). The device comprises a small video camera mounted on eyeglasses, a video processing unit attached to the glasses and an epiretinally implanted retinal prosthesis (Figure 30). The video processing unit converts the video captured by the camera into electronic data that is transmitted to the retinal prosthesis. The prosthesis consists of an array of 60 electrodes that converts the information into electrical impulses delivered to healthy neurons in the inner retina, which send the information on to the brain. Patients fitted with Argus II were able to distinguish light and dark in their surroundings, allowing them to identify the location and movement of objects.

Figure 30. Argus II bionic system. Source: http://tekmono.com/teknoloji/1139/biyonik-goz-implanti-test-edild.


Alpha IMS, another bionic design, incorporates 1500 photodiodes with corresponding stimulating electrodes, providing point-to-point stimulation of the bipolar cells (Citation106). It does not require an external camera, and it allows objects to be tracked with eye movements (Figure 31). This subretinal implant captures the incoming light with the photodiodes and, after processing and amplifying the signal, sends it to the matching electrodes. Clinical trials of these devices have shown improved ability to read and to detect the motion of objects (Citation107, 108).

Figure 31. (a) Alpha IMS bionic implant. (b) Device implanted in the retina with a condition called retinitis pigmentosa. Source: http://www.vision-research.eu/index.php?id=868 and http://www.devicemed.de/index.cfm?pid=10749&pk=396919&fk=555109&type=article.


In order to achieve a more stable implant and to reduce surgical complications such as conjunctival and scleral erosions and retinal detachment, a retinal implant was developed at a suprachoroidal location (between the choroid and the sclera) (Citation109). The device consists of an array of 33 platinum stimulating electrodes, of which the outer 13 are grouped for gang stimulation, plus 2 return electrodes (Figure 32); the array is connected to a further return electrode implanted subcutaneously behind the ear. Although this implant sits 250–400 μm farther from the ganglion cells than an epiretinal implant, the patients who received it demonstrated a good ability to detect light and read optotypes (2.62 logMAR).

Figure 32. Suprachoroidal retinal implant showing the array of electrodes. Source: Ayton et al. 2015, PlosOne.


A number of such bionic devices targeting other areas of visual pathway including cortical microstimulation methods are underway to help restore vision in patients with severe visual impairment (Citation104, 105, 110).

4. Conclusion

Biomimicry is essentially plagiarizing strategies from biodiverse nature and applying them to the development of technology. This area of research blurs the boundaries between scientific disciplines, bringing biologists, engineers and biomedical researchers onto a single platform and making it a truly interdisciplinary study. It should be noted that this is a tremendously open multidisciplinary area: in this review we have highlighted certain aspects of visual systems, but there is a whole range of bio-inspired optics and photonics [see Greanya for a recent review (Citation111)]. Bio-inspired science can lead to revolutionary changes in current technology, from self-healing buildings to artificial brains, creating a smarter, stronger and more eco-friendly future.

References

  • Gebeshuber, I.C.; Gruber, P.; Drack, M. Proc. Inst. Mech. Eng., Part C: J. Mech. Eng. Sci. 2009, 223, 2899–2918.10.1243/09544062JMES1563
  • Gebeshuber, I.C.; Drack, M. Proc. Inst. Mech. Eng., Part C: J. Mech. Eng. Sci. 2008, 222, 1281–1287.10.1243/09544062JMES890
  • Glaeser, G.; Paulus, H.F., Eds. The Evolution of the eye; Springer International Publishing: Cham, Switzerland, 2014.
  • Land, M.F. Contemp. Phys. 1988, 29, 435–455.10.1080/00107518808222601
  • Nilsson, D.E. Bioscience 1989, 39, 298–307.
  • Land, M.F. Molluscs. In Photoreception and Vision in Invertebrates; Ali, M.A., Ed.; Plenum: New York, NY, 1984, pp 699–725.10.1007/978-1-4613-2743-1
  • Land, M.F. Sci. Am. 1978, 239, 126–134.10.1038/scientificamerican1278-126
  • Ramamurthy, M.; Lakshminarayanan, V. Human Vision and Perception. In Handbook of Advanced Lighting Technology; Karlicek, R., Sun, C., Zissis, G., Ma, R., Eds. Springer International Publishing: Cham, Switzerland, 2015; 1–23. DOI: 10.1007/978-3-319-00295-8_46-1
  • Land, M.F. Curr. Biol. 2005, 15, R319–R323.10.1016/j.cub.2005.04.041
  • Szema, R.; Lee, L.P. Biologically Inspired Optical Systems. In Biomimetics – Biologically Inspired Technologies; Bar-Cohen, Y., Ed.; Taylor & Francis Group, Boca Raton, FL, 2006; pp 291–308.
  • Snyder, A.W.; Menzel, R.; Laughlin, S.B. J. Comp. Physiol. 1973, 87, 99–135.10.1007/BF01352157
  • Karman, S.B.; Diah, S.Z.M.; Gebeshuber, I.C. Sensors 2012, 12, 14232–14261.10.3390/s121114232
  • Sharpe, L.T.; Stockman, A.; Fach, C.C.; Markstahler, U. J. Physiol. 1993, 463, 325–348.10.1113/jphysiol.1993.sp019597
  • Nilsson, D.E. Nature 1988, 332, 76–78.10.1038/332076a0
  • Aizenberg, J.; Tkachenko, A.; Weiner, S.; Addadi, L.; Hendler, G. Nature 2001, 412, 819–822.10.1038/35090573
  • Li, L.; Connors, M.J.; Kolle, M.; England, G.T.; Speiser, D.I.; Xiao, X.; Aizenberg, J.; Ortiz, C. Science 2015, 350, 952–956.10.1126/science.aad1246
  • Sowards, L.; Schmitz, H.; Tomlin, D.; Naik, R.; Stone, M. Annal. Entomol. Soc. Am. 2001, 94, 686–694.10.1603/0013-8746(2001)094[0686:COBMAC]2.0.CO;2
  • Goris, R.C. J. Herpetol. 2011, 45, 2–14.10.1670/10-238.1
  • Kiely, P.M.; Smith, G.; Carney, L.G. J. Mod. Opt. 1982, 29, 1027–1040.
  • Ye, J.; Shi, X.; Chen, X.; Xie, J.; Wang, C.; Yao, K.; Gao, C.; Gou, Z. J. Mater. Chem. B 2014, 2, 4226–4236.10.1039/C3TB21845G
  • Stahl, U.; Willcox, M.D.; Naduvilath, T.; Stapleton, F. Optometry Vision Sci. 2009, 86, 857–867.
  • Watanabe, R.K. Contact Lens Spectrum, 2010, 25, 14.
  • Teixeira, P.; Gomes, F. Preventing Adhesion on Medical devices. In Bioadhesion and Biomimetics: From Nature to Applications; Bianco-Peled, H., Davidovich-Pinhas, M., Eds.; Taylor and Francis group: Boca Raton, FL, 2015; pp 269–285.
  • ‘Gecko Vision’: Key To Future Multifocal Contact Lens? Available at: sciencedaily.com/releases/2009/05/090507164407.htm (accessed Jul 24, 2016, 11:51 am).
  • Roth, L.S.; Lundström, L.; Kelber, A.; Kröger, R.H.; Unsbo, P. J. Vision 2009, 9 (27), 1–11.
  • Fish and Insects Guide Design for Future Contact Lenses. nih.gov/news-events/news-releases/fish-insects-guide-design-future-contact-lenses (accessed Jul 24, 2016, 12.30 pm).
  • Malmstrom, T.; Kröger, R.H. J. Exp. Biol. 2006, 209, 18–25.10.1242/jeb.01959
  • Land, M.F. Curr. Biol. 2006, 16, R167–R168.10.1016/j.cub.2006.02.046
  • Banks, M.S.; Sprague, W.W.; Schmoll, J.; Parnell, J.A.; Love, G.D. Sci. Adv. 2015, 1, e1500391.10.1126/sciadv.1500391
  • Kröger, R.H.; Campbell, M.C.; Fernald, R.D.; Wagner, H.J. J. Comp. Physiol. A 1999, 184, 361–369.
  • Westheimer, G. Vision Res. 1964, 4, 39–45.10.1016/0042-6989(64)90030-6
  • Boreman, G.D. Modulation Transfer Function in Optical and Electro-Optical Systems; SPIE Press: Bellingham, WA, 2001; Vol. 21.10.1117/3.419857
  • Gaskill, J.D. Linear Systems, Fourier Transforms, and Optics; Wiley and Sons: New York, 1978.
  • Spring, K.H.; Stiles, W.S. Brit. J. Ophthalmol. 1948, 32, 347–354.10.1136/bjo.32.6.347
  • Weale, R.A. Brit. J. Ophthalmol. 1956, 40, 392–415.10.1136/bjo.40.7.392
  • Kerr, J.L. Percept. Psychophys. 1971, 9, 375–378.
  • Levin, A.; Fergus, R.; Durand, F.; Freeman, W.T. ACM T. Graphic. 2007, 26, 70-1–70-9.10.1145/1276377
  • Sivak, J.G.; Bobier, W.R.; Levy, B. J. Comp. Physiol. 1978, 125, 335–339.10.1007/BF00656868
  • Dick, H.B. Curr. Opin. Ophthalmol. 2005, 16, 8–26.10.1097/00055735-200502000-00004
  • Nawa, Y.; Ueda, T.; Nakatsuka, M.; Tsuji, H.; Marutani, H.; Hara, Y.; Uozato, H. J. Cataract Refract. Surg. 2003, 29, 2069–2072.10.1016/S0886-3350(03)00257-8
  • Pierscionek. B.K.; Chan, D.Y. Optometry Vision Sci. 1989, 66, 822–829.10.1097/00006324-198912000-00004
  • Zhang, D.; Lien, V.; Berdichevsky, Y.; Choi, J.; Lo, Y. Appl. Phys. Lett. 2003, 82, 3171–3172.10.1063/1.1573337
  • Liang, D.; Wang, X.Y.; Du, J.W. Opt. Eng. 2015, 54, 065104–065104.10.1117/1.OE.54.6.065104
  • Liu, H.; Wang, L.; Jiang, W.; Li, R.; Yin, L.; Shi, Y.; Chen, B. RSC Adv. 2016, 6, 23653–23657.10.1039/C5RA22845J
  • Förster, E.; Stürmer, M.; Wallrabe, U.; Korvink, J.; Brunner, R. Opt. Express 2015, 23, 929–942.10.1364/OE.23.000929
  • Bogue, R. Sensor Rev. 2013, 33, 14–18.10.1108/02602281311294306
  • Ogata, S.; Ishida, J.; Sasano, T. Opt. Eng. 1994, 33, 3649–3655.
  • Kim, J.; Jeong, K.-H.; Lee, L.P. Opt. Lett. 2005, 30, 5–7.10.1364/OL.30.000005
  • Duparré, J.W.; Wippermann, F.C. Bioinspir. Biomim. 2006, 1, R1–R16.10.1088/1748-3182/1/1/R01
  • Jeong, K.-H.; Kim, J.; Lee, L.P. Science 2006, 312, 557–561.10.1126/science.1123053
  • Riley, D.T.; Harmann, W.M.; Barrett, S.F.; Wright, C.H. Bioinspir. Biomim. 2008, 3, 026003.10.1088/1748-3182/3/2/026003
  • Benson, J.B.; Wright, H.; Barrett, S.F. ISA Biomed. Sci. Instrum. 2008, 44, 367–372.
  • Luke, G.P.; Wright, C.H.G.; Barrett, S.F. IEEE Sensors J. 2012, 12, 308–314.10.1109/JSEN.2010.2099112
  • Song, Y.M.; Xie, Y.; Malyarchuk, V.; Xiao, J.; Jung, I.; Choi, K.J.; Liu, Z.; Park, H.; Lu, C.; Kim, R.H.; Li, R.; Crozier, K.B.; Huang, Y.; Rogers, J.A. Nature 2013, 497, 95–99.10.1038/nature12083
  • Land, M.F. Annu. Rev. Entomol. 1997, 42, 147–177.10.1146/annurev.ento.42.1.147
  • Sanders, J.S.; Halford, C.E. Opt. Eng. 1995, 34, 222–235.10.1117/12.183393
  • Lee, W.B.; Jang, H.; Park, S.; Song, Y.M.; Lee, H.N. Opt. Express 2016, 24 (3), 2013–2026.10.1364/OE.24.002013
  • Neumann, T.R.; Bülthoff, H.H. Proceedings of the EPSRC/BBSRC International Workshop on Biologically Inspired Robotics 2002, pp 196–203.
  • Floreano, D.; Zufferey, J.C.; Srinivasan, M.V.; Ellington, C. Flying Insects and Robots; Springer: Berlin, 2010.10.1007/978-3-540-89393-6
  • Liu, H.; Huang, Y.; Jiang, H. Proc. Natl. Acad. Sci. 2016, 113, 3982–3985.
  • Moghimi, M.J.; Fernandes, J.; Kanhere, A.; Jiang, H. Sci. Rep. 2015, 5, 15861, 1–11.10.1038/srep15861
  • Stürzl, W.; Boeddeker, N.; Dittmar, L.; Egelhaaf, M. Bioinspir. Biomim. 2010, 5, 036002.
  • Grubsky, V.; Gertsenshteyn, M.; Jannson, T. SPIE Newsroom 2007, DOI: 10.1117/2.1200702.0691.
  • Angel, J.R.P. Astrophys. J. 1979, 233, 364–373.10.1086/157397
  • Hudec, R.; Pina, L.; Šimon, V.; Švéda, L.; Inneman, A.; Semencová, V.; Skulinová, M. Nuclear Phys. B-Proc. Suppl. 2007, 166, 229–233.10.1016/j.nuclphysbps.2006.12.014
  • Collier, M.R.; Porter, F.S.; Sibeck, D.G.; Carter, J.A.; Chiao, M.P.; Chornay, D.J.; Cravens, T.E.; Galeazzi, M.; Keller, J.W.; Koutroumpa, D.; Kujawski, J. Rev. Sci. Instrum. 2015, 86, 071301.10.1063/1.4927259
  • Land, M.F. J. Exp. Biol. 1969, 51, 471–493.
  • Williams, D.; McIntyre, P. Nature 1980, 288, 578–580.10.1038/288578a0
  • Thoen, H.H.; How, M.J.; Chiou, T.H.; Marshall, J. Science 2014, 343, 411–413.10.1126/science.1245824
  • Prabhakara, R.S.; Wright, C.H.G.; Barrett, S.F. IEEE Sens. J. 2012, 12, 298–307.10.1109/JSEN.2010.2100039
  • Martinez, T.; Wick, D.; Restaino, S. Opt. Express 2001, 8, 555–560.10.1364/OE.8.000555
  • Aizenberg, J.; Sundar, V.C.; Yablon, A.D.; Weaver, J.C.; Chen, G. Proc. Natl. Acad. Sci. 2004, 101, 3358–3363.10.1073/pnas.0307843101
  • Sundar, V.C.; Yablon, A.D.; Grazul, J.L.; Ilan, M.; Aizenberg, J. Nature 2003, 424, 899–900.10.1038/424899a
  • Huby, N.; Vié, V.; Renault, A.; Beaufils, S.; Lefèvre, T.; Paquet-Mercier, F.; Pézolet, M.; Bêche, B. Appl. Phys. Lett. 2013, 102, 123702-1–123702-3.10.1063/1.4798552
  • Reichenbach, A.; Agte, S.; Francke, M. e-Neuroforum 2014, 5, 93–100.10.1007/s13295-014-0054-8
  • Franze, K.; Grosche, J.; Skatchkov, S.N.; Schinkinger, S.; Foja, C.; Schild, D.; Uckermann, O.; Travis, K.; Reichenbach, A.; Guck, J. Proc. Natl. Acad. Sci. 2007, 104 (20), 8287–8292.10.1073/pnas.0611180104
  • Agte, S.; Junek, S.; Matthias, S.; Ulbricht, E.; Erdmann, I.; Wurm, A.; Schild, D.; Käs, J.A.; Reichenbach, A. Biophys. J. 2011, 101, 2611–2619.10.1016/j.bpj.2011.09.062
  • Labin, A.M.; Safuri, S.K.; Ribak, E.N.; Perlman, I. Nat. Commun. 2014, 5, 4319, 1–9.
  • Lakshminarayanan. V.; Enoch J.M. Biological Waveguides. In Handbook of Optics, 3rd ed.; Bass, M., Ed.; McGraw-Hill: New York, 2010; Vol. III, pp 9.1–9.29.
  • Enoch, J.M. Optical Properties of the Retinal Receptors. J. Opt. Soc. Am. 1963, 53, 71–85.10.1364/JOSA.53.000071
  • Lakshminarayanan, V. Proc. SPIE 3211, International Conference on Fiber Optics and Photonics: Selected Papers from Photonics India ‘96, Chennai, 1998, pp 182–192.
  • Enoch, J.M.; Lakshminarayanan, V. Retinal Fiber Optics. In Vision and Visual Dysfunction: Visual Optics and Instrumentation; Charman, W.N., Ed.; MacMillan: London, 1991, Vol. I, pp 280–309.
  • Snyder, A.W.; Laughlin, S.B. J. Comp. Physiol. A 1975, 100, 101–116.10.1007/BF00613963
  • Snyder, A.W.; Miller, W.H. J. Opt. Soc. Am. 1977, 67, 696–698.10.1364/JOSA.67.000696
  • Snyder, A.W.; Rühl, F. Electron. Lett. 1983, 19, 185–186.10.1049/el:19830128
  • Hecht, S.; Shlaer, S.; Pirenne, M. J. Gen. Physiol. 1942, 25, 819–840.10.1085/jgp.25.6.819
  • Rieke, F.; Baylor, D. Rev. Mod. Phys. 1998, 70, 1027–1036.
  • Lakshminarayanan, V. Proc. SPIE 2005, 5866, 332–337.10.1117/12.613055
  • Memis, O.G.; Katsnelson, A.; Mohseni, H.; Yan, M.; Zhang, S.; Hossain, T.; Jin, N.; Adesida, I. Proc. SPIE. 2008, 7035, 70350V-1–70350V-12.
  • Nakayama, K. Vision Res. 1981, 21 (10), 1475–1482.10.1016/0042-6989(81)90218-2
  • Shabayek, A.E.; Morel, O.; Fofi, D. Visual Behaviour Based Bio-inspired Polarization Techniques in Computer Vision and Robotics. In Developing and Applying Biologically-inspired Vision Systems: Interdisciplinary Concepts; Pomplun, M., Suzuki, J., Eds.; Information Science Reference: Hershey, PA, 2012; pp 247–276.
  • Wolpert, H.D. Biological optics. In Biomimetics – Nature Based Innovation; Bar-Cohen, Y., Ed.; Taylor & Francis Group: Boca Raton, FL, 2012; pp 267–306.
  • Ropars, G.; Gorre, G.; Le Floch, A.; Enoch, J.; Lakshminarayanan, V. Proc. R. Soc. A 2012, 468, 671–684.10.1098/rspa.2011.0369
  • Ropars, G.; Lakshminarayanan, V.; Le Floch, A. Contemp. Phys. 2014, 55, 302–317.10.1080/00107514.2014.929797
  • Haidinger, W.K. Ann. Phys. 1844, 139, 29–39.
  • Temple, S.E.; McGregor, J.E.; Miles, C.; Graham, L.; Miller, J.; Buck, J.; Scott-Samuel, N.E.; Roberts, N.W. Proc. R. Soc. B 2015, 282, 20150338.
  • Le Floch, A.; Ropars, G.; Enoch, J.; Lakshminarayanan, V. Vision Res. 2010, 50, 2048–2054.10.1016/j.visres.2010.07.007
  • von Frisch, K. The Dance Language and Orientation of Bees; Harvard University Press: Cambridge, MA, 1967.
  • Rossel, S.; Wehner, R. J. Comp. Physiol. A 1984, 154, 607–615.10.1007/BF01350213
  • Foster, J.J.; Sharkey, C.R.; Gaworska, A.V.; Roberts, N.W.; Whitney, H.M.; Partridge, J.C. Curr. Biol. 2014, 24, 1415–1420.10.1016/j.cub.2014.05.007
  • Menzel, R.; Snyder, A.W. J. Comp. Physiol. 1974, 88, 247–270.10.1007/BF00697958
  • Chahl, J.; Burke, M.; Rosser, K.; Mizutani, A. IFAC Proc. Vol. 2013, 46, 23–27.10.3182/20130626-3-AU-2035.00025
  • Lakshminarayanan, V. Proc. SPIE 2012, 8482, 84820A.
  • Lewis, P.M.; Ackland, H.M.; Lowery, A.J.; Rosenfeld, J.V. Brain Res. 2015, 1595, 51–73.10.1016/j.brainres.2014.11.020
  • Ong, J.M.; da Cruz, L. Clin. Exp. Ophthalmol. 2012, 40, 6–17.10.1111/j.1442-9071.2011.02590.x
  • Stingl, K.; Bartz-Schmidt, K.U.; Besch, D.; Braun, A.; Bruckmann, A.; Gekeler, F.; Greppmaier, U.; Hipp, S.; Hörtdörfer, G.; Kernstock, C.; Koitschev, A.; Kusnyerik, A.; Sachs, H.; Schatz, A.; Stingl, K.T.; Peters, T.; Wilhelm, B.; Zrenner, E. Proc. R. Soc. B 2013, 280, 20130077.10.1098/rspb.2013.0077
  • da Cruz, L.; Coley, B.F.; Dorn, J.; Merlini, F.; Filley, E.; Christopher, P.; Chen, F.K.; Wuyyuru, V.; Sahel, J.; Stanga, P.; Humayun, M.; Greenberg, R.J.; Dagnelie, G. Br. J. Ophthalmol. 2013, 97, 632–636.10.1136/bjophthalmol-2012-301525
  • Dorn, J.D.; Ahuja, A.K.; Caspi, A.; da Cruz, L.; Dagnelie, G.; Sahel, J.A.; Greenberg, R.J.; McMahon, M.J. JAMA Ophthalmol. 2013, 131, 183–189.10.1001/2013.jamaophthalmol.221
  • Ayton, L.N.; Blamey, P.J.; Guymer, R.H.; Luu, C.D.; Nayagam, D.A.X.; Sinclair, N.C.; Shivdasani, M.N.; Yeoh, J.; McCombe, M.F.; Briggs, R.J.; Opie, N.L.; Villalobos, J.; Dimitrov, P.N.; Varsamidis, M.; Petoe, M.A.; McCarthy, C.D.; Walker, J.G.; Barnes, N.; Burkitt, A.N.; Williams, C.E.; Shepherd, R.K.; Allen, P.J. PLoS ONE 2014, 9, e115239.10.1371/journal.pone.0115239
  • Dagnelie, G. Visual Prosthetics: Physiology, Bioengineering, Rehabilitation; Springer: New York, 2011.10.1007/978-1-4419-0754-7
  • Greanya, V. Bioinspired Photonics: Optical Structures and Systems Inspired by Nature; Taylor and Francis: Boca Raton, FL, 2015.