Review Article

Tools and Technologies for Blind and Visually Impaired Navigation Support: A Review


Abstract

The development of navigation tools for people who are visually impaired has become an important concern in the research area of assistive technologies. This paper gives a comprehensive review of articles published on navigation solutions for people who are visually impaired. Unlike other review papers, this review considers major solutions based on different technologies that work in indoor and/or outdoor environments. The review makes clear that the navigation systems proposed for the target users lack some core features that are important for independent navigation. There are also instances in which humanitarian conditions have to be considered in navigation system design. Based on these findings, a set of recommendations is given that can be considered in the future design of navigation systems for blind and visually impaired people.

1. INTRODUCTION

Navigation is an essential part of every person’s life. People navigate for work, education, shopping, and other miscellaneous reasons. Most people would acknowledge that vision plays a critical role in navigation since it facilitates movement from one spot to another. It is relatively easy to imagine getting around without vision in well-known environments, such as a room in one's house or even one's office space. However, it is difficult to navigate unfamiliar places without vision.

Statistics from the World Health Organisation (WHO) show that approximately 2.2 billion people live with some form of vision impairment globally.Footnote1 Being blind or visually impaired does not mean losing the independence to get to and from places whenever one wants. People with no vision or limited vision can travel independently every day using the means best suited to them. According to Nicholas et al. [Citation1], one of the biggest challenges to independence for people who are visually impaired is safe and efficient navigation. To facilitate safe and efficient navigation, it helps to acquire travel skills and to use sources of non-visual environmental information that are rarely considered by people who rely on their vision. Still, people who are visually impaired face challenges during daily navigation [Citation2]. Besides reaching the destination safely, there are several other challenges that are common in navigation [Citation3], such as identifying pits in the path, hanging obstacles, stairs, traffic junctions, signposts on the pavement, wet floors indoors, and greased or slippery outdoor paths.

People who are blind or visually impaired have long used conventional navigation aids such as white canesFootnote2, guide dogsFootnote3,Footnote4, and assistance from a trained guide or volunteer [Citation4,Citation5]. Research shows that people who become blind early in life often learn to use acoustic skills such as echolocation in an efficient way for navigation [Citation6–8]. Landmarks and clues also play a vital role in wayfinding and navigation.Footnote5 Some people also use auditory or olfactory senses to assist their navigation. Tactile paving can provide supportive infrastructure for navigation in cities and urban areas for people who are visually impaired.Footnote6 It is particularly useful near public transport stations and provides better safety for pedestrians who need assistance to know precisely where they are located [Citation9].

People who are visually impaired often use Orientation and Mobility (O&M) skills that help them travel in unfamiliar environments. The O&M skills acquired by visually impaired users help develop the competencies needed for safe and efficient navigation.Footnote7 Orientation refers to the capability to know the current location and the destination to which the person intends to travel. It includes information such as whether the person wants to move from one room to another or wants to go to a shopping mall. Mobility refers to the capability of a person to navigate safely and effectively from one place to another. It includes travelling to a station without falling, crossing streets, and the safe use of public transportation facilities [Citation10–12].

As technological advancements were utilized in the design of everyday products, people started making use of that advantage in assistive tools as well. Such tools are created to support people with disabilities in their daily life and were later collectively called assistive technologies. According to [Citation13], assistive technology is concerned with technologies, equipment, devices, apparatus, services, systems, processes and environmental modifications that enable people with disabilities to overcome various physical, social, infrastructural and accessibility barriers to independence and to live active, productive and independent lives as equal members of society. Research suggests that assistive technologies are playing increasingly important roles in the lives of people with disabilities, particularly in navigation [Citation14]. Some examples are WayfindrFootnote8 and Envision [Citation15]. Owing to the technological advancements in the mobile industry, mobile devices with adequate computational capacity and sensor capabilities also provide various possibilities in the development of navigation systems. According to Csapó et al. [Citation16], the largest and most common mobile platforms are rapidly evolving into the de-facto standards for the implementation of assistive technologies.

Research into assistive navigation aids for people who are blind or visually impaired is quite extensive, perhaps because its scope extends from the physiological factors associated with vision loss, to the human factors influencing mobility, orientation, and access to information, and to the technological aspects of developing tools and techniques for navigation, wayfinding, information access, and interaction. The authors of [Citation13] claim that it is very hard to characterize or capture the essence of this field within a single snapshot. According to [Citation17], many navigation systems have been proposed for blind and visually impaired people, but only a few provide dynamic interaction and adaptability to changes, and none of those systems works seamlessly both indoors and outdoors. Moreover, even a system that works in all situations tends to be complex and not to address the needs of a blind person, which range from ease of use and a simple interface to low complexity.

Several reviews have been carried out on different navigation tools and techniques used by people who are visually impaired or blind. Tapu et al. [Citation18] conducted a review of wearable devices used to assist visually impaired users' navigation in outdoor environments. In a review by [Citation19], the authors studied different vision assistive methodologies used for indoor positioning and navigation. The study by [Citation20] discusses several Electronic Travel Aids (ETAs), especially those helping blind people navigate using machine vision technologies. Hojjat [Citation21] gave an overview of some existing indoor navigation solutions for blind and visually impaired people. However, systematic reviews of navigation systems that work both indoors and outdoors and that are categorized based on the technological developments are few.

The contribution of this review paper is a systematic presentation of the literature on various navigation solutions used by (or proposed for) people who are visually impaired. Unlike the majority of similar reviews, this work considers navigation systems working in all environments (outdoors, indoors, etc.) that use various underlying technologies (vision, non-vision, mobile-based, etc.). The reviewed works are categorized based on the underlying technology, which gives a better organization to this paper. Finally, the review presents some recommendations developed from the extensive review done by the authors. The authors believe this paper gives an overview of developments in the domain and that the recommendations can be considered in the design of future navigation solutions for people who are visually impaired.

The organization of the paper is as follows: Section 2 describes general information regarding assistive travel aids and the main underlying technologies used in navigation system design. Section 3 presents the methodology adopted to conduct this review and discusses the various navigation systems considered, classified based on the underlying technologies. Section 4 proceeds with the discussions and recommendations. The paper ends with the conclusion in Section 5.

2. ASSISTIVE TRAVEL AIDS

According to Chanana et al. [Citation22], the limitations of conventional assistive solutions led to the evolution of technology-supported solutions that guide the user by providing the necessary navigation details and performing obstacle detection. The immense potential that has emerged in computing and communication platforms can be exploited in designing navigation and accessibility solutions for visually impaired persons [Citation23].

A classification of blind navigation systems, devices, and recognition methods proposed by the authors of [Citation4] comprises three categories: electronic orientation aids (EOAs), position locator devices (PLDs), and electronic travel aids (ETAs).

Electronic Orientation Aids (EOAs) are designed to assist blind and visually impaired people in finding a navigation path. A camera and different sensors are usually combined to identify obstacles and paths [Citation24,Citation25]. EOA systems usually need more information about the environment. The main limitation of EOAs is the difficulty of incorporating a complex computing device into a lightweight, real-time guiding device.

Position Locator Devices (PLDs) determine the precise position of a device using the Global Positioning System (GPS) and Geographic Information System (GIS) technologies. A combination of GPS- and GIS-based navigation can be used to guide the user from the current location to a destination. However, [Citation4] argues that this combination alone will not completely work for visually impaired people because the system cannot help the user avoid the obstacles in front of them; hence PLDs usually need to be combined with other sensors to detect obstacles. Another limitation of PLDs is that they need to receive signals from GPS satellites, so they can only be used outdoors, not indoors.
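
As a simple illustration of the positioning side of a PLD (not taken from any reviewed system), the sketch below computes the remaining distance and the initial compass bearing between the user's current GPS fix and the destination using the haversine formula; the coordinates in the example are placeholder values.

```python
import math

def haversine_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0-360 degrees) from the current fix to the destination."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Placeholder fixes roughly 1 km apart; a GIS layer would supply the route on top of this.
print(round(haversine_distance_m(59.9139, 10.7522, 59.9230, 10.7580)))
print(round(initial_bearing_deg(59.9139, 10.7522, 59.9230, 10.7580)))
```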

Electronic Travel Aids (ETAs) are general assistive devices for helping visually impaired people avoid obstacles. ETAs can improve the detection range of obstacles and landmarks and can also give better orientation [Citation26]. As a result, they can facilitate safe, simple, and comfortable navigation. An ETA consists of sensing input unit(s) to receive inputs from the environment and one or more feedback modalities to give the user information that helps with navigation. The following sections discuss each of them in detail.

2.1 Sensing Inputs

The common sensing inputs used in ETAs are general camera (or a mobile phone camera), depth camera, Radio Frequency Identification (RFID), Bluetooth beacon, ultrasonic sensor, infrared sensor, etc.

A modern smartphone camera can give good-quality images and is small in size. Its main limitation is that it does not provide depth information, so such systems cannot determine the distance from the user to an obstacle. General camera images are usually processed only to detect the obstacles ahead during navigation.

A depth camera provides ranging information. Among depth camera recognition systems, the Microsoft Kinect [Citation27] is usually used as the primary recognition hardware in depth-based vision analysis. The Kinect uses a Time-of-Flight (ToF) camera for the depth computation. Compared to purely two-dimensional images, depth images can provide more information about obstacles. One of the main limitations of Kinect cameras is that they cannot be used in intense-light environments. In addition to the Kinect, another ranging device used for depth analysis is Light Detection and Ranging (LiDAR). The disadvantage of both Kinect- and LiDAR-based systems is their size, which makes them inconvenient for a person to carry and use during navigation. Recent smartphones, however, have an inbuilt depth sensor in addition to the normal camera; their smaller size, portability, and computational capability become advantages when such a phone is used for visual processing during navigation.

Radio Frequency Identification (RFID) refers to a technology in which digital data are encoded in tags or smart labels and captured by a reader via radio waves. The technology suffers from fluctuating signal accuracy, signal disruption, reader and/or tag collisions, slow read rates, etc. Moreover, in a navigation context the user needs to be aware of the location of the RFID reader [Citation28].

A beacon is a type of RFID device used for identifying objects via radio frequency. It is closer to active RFID because it does not depend on an RFID reader to generate power. Bluetooth beacons are small hardware devices that constantly transmit Bluetooth Low Energy (BLE) signals. A BLE beacon broadcasts packets of data at regular intervals of time; these packets are detected by a mobile application or pre-installed services on nearby smartphones. Bluetooth Low Energy transmits less data over a smaller range, so it consumes less power than other navigation devices. Bluetooth beacons suffer from high installation and maintenance costs, since the receivers or emitters need to be installed throughout the ceiling [Citation29].
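
In beacon-based positioning, the distance to a beacon is commonly approximated from the received signal strength (RSSI). The sketch below uses the standard log-distance path loss model; the 1-m reference power and the path loss exponent are assumed, environment-specific calibration values.

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough beacon distance from RSSI using the log-distance path loss model.

    tx_power_dbm is the calibrated RSSI at 1 m (beacon-specific), and
    path_loss_exponent is about 2 in free space and larger indoors;
    both are assumptions rather than universal constants.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# A reading of -75 dBm with these defaults suggests the user is roughly 6 m from the beacon.
print(round(estimate_distance_m(-75), 1))
```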

Near-Field Communication (NFC)Footnote9 is also based on RFID protocols. The main difference between NFC and RFID is that an NFC device can act not only as a reader but also as a tag. It is also possible to transfer information between two NFC devices.

Ultrasonic-based navigation systems use ultrasonic waves to measure the distance to obstacles and convey the information to the user through voice or vibration. One limitation of these systems is that they cannot detect the exact location of obstacles because of the wide beam angle of the ultrasonic sensor. Besides, these systems cannot recognize the type of obstacle (e.g. a car or a bicycle) [Citation30].
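
The underlying ranging principle is straightforward: the sensor emits a pulse and measures the time until the echo returns. A minimal sketch of that conversion is shown below, with an assumed speed of sound and an illustrative alert threshold.

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at about 20 degrees C (assumed)
ALERT_DISTANCE_M = 1.2      # illustrative threshold for warning the user

def echo_to_distance_m(round_trip_time_s):
    """Distance to the nearest reflector from an ultrasonic pulse's round-trip time."""
    return round_trip_time_s * SPEED_OF_SOUND_M_S / 2.0

distance = echo_to_distance_m(0.0058)   # a 5.8 ms echo is roughly a 1 m range
if distance < ALERT_DISTANCE_M:
    print(f"Obstacle about {distance:.1f} m ahead")  # would trigger voice or vibration
```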

Infrared (IR) sensors sense certain characteristics of their surroundings by emitting an IR signal or detecting one; they can also measure the heat emitted by an object and detect its motion. A drawback of IR-based devices is that natural and artificial light can interfere with them. IR-based systems are also costly to install because of the large number of tags that must be installed and maintained [Citation31].

2.2 Feedback Modalities

An ETA with vision or non-vision sensory units can detect obstacles during navigation. This information, along with direction cues, needs to be sent to the users to help them navigate. Common feedback modes are audio, tactile, and vibration; some systems use a combination of them and thus give the user a multimodal way to receive the navigation cues.

Audio feedback is usually given in a navigation system through earphones or speakers. Its disadvantage is the disturbance caused when the user is flooded with information, and it can also be annoying when auditory cues make the user miss environmental sounds [Citation32]. Bone conduction headphones are used in many navigation systems to reduce this issue to an extent: the headset conducts sound to the inner ear, allowing the user to perceive the audio signal without blocking the ear canal [Citation33].

Tactile feedback is also used in some navigation systems, where the feedback is given through the user’s foot, hand, arm, fingers, or any body part where a sense of pressure can be experienced. It allows the user to be notified of directional cues and to avoid obstacles by feeling sensations at various pressure points on the body. Unlike audio feedback, tactile feedback does not distract the user from environmental sounds [Citation34,Citation35].

As smartphones became widely used in the design of navigation systems, some systems used the vibration feature of smartphones and other mobile devices to give navigation feedback to the users [Citation36,Citation37]. The vibration pattern can encode certain directional cues. However, this is not always useful or comfortable for the users, because the patterns may confuse them or take time to decode in real time.

3. TOOLS AND TECHNOLOGIES FOR NAVIGATIONAL SUPPORT

Many research works have been reported in the area of assistive navigation systems for people who are visually impaired. This review includes scientific publications from a period of approximately five years (2015–2020). This narrow selection was made to avoid repetition of previous related reviews and to focus on recent developments in the domain. The ACM Digital Library, IEEE Xplore, ScienceDirect, PubMed, Sensors, and some other relevant databases were queried using combinations of the keywords “navigation systems for visually impaired” and “navigation systems for blind”. The abstracts of the publications were reviewed to exclude irrelevant papers. The review was not limited to complete, functioning, and evaluated system prototypes; it also includes systems proposed with a prototype that have the potential to be developed further.

The core technology used in each of these systems varies, and each has its own advantages and limitations. This section classifies the different electronic travel aids considered for this review based on the underlying technology. Similar classification criteria have been reported before, but the majority of previous classifications are limited to a specific environment, such as indoor or outdoor, or to machine learning or sensor-based solutions. We instead tried to incorporate almost all technological advancements in the area and their use in the design of navigation systems, and we did not constrain the application environment as other similar reviews do. The classification comprises five categories: (1) visual imagery systems, (2) non-visual data systems, (3) map-based systems, (4) systems with 3D sound, and (5) smartphone-based solutions. The following sections describe the classification in detail.

3.1 Visual Imagery Systems

Vision-based (or optical) navigation uses computer vision algorithms and optical sensors, including different types of cameras, to extract visual features from the surrounding environment. The system tries to detect obstacles using the visual features and then guides the user to navigate safely by giving directions to avoid them. Various visual imagery devices and technologies have been incorporated into navigation system designs in the literature, including systems that use a stereo camera, an IP camera network, an RGB-D camera, etc. We categorize them based on the underlying visual image capturing device or technology and mention some of the notable works in each category.

3.1.1. Stereo Camera

The authors in [Citation38] presented a navigation methodology employed by an intelligent assistant called Tyflos. The Tyflos system carries two vision cameras and captures images from the surrounding 3D environment, either on the user’s command or in a continuous (video) mode. It then converts those images into corresponding verbal descriptions, which are used to establish verbal communication with the user.

The navigation system proposed in [Citation39] integrates a binocular camera, an inertial measurement unit (IMU), and earphones in a bicycle helmet. When an object is detected at a particular position, it is converted into a sound source and conveyed to the user through the earphones. This technique is called binaural rendering and refers to creating sounds that can be localized in direction and distance using headphones. The system is intended to work only in outdoor environments.

3.1.2. IP Camera Network

Navigation systems using IP cameras have also been proposed in the literature. In the system proposed by Chaccour et al. [Citation40], cameras were installed on the ceiling of every room. The photos captured from the environment were analyzed by a remote processing system using a computer vision algorithm. Using a simple interactive mobile application installed on the smartphone, the user can reach the destination. Figure 1 shows an outline of how the system works. The main issue with the system is the expense of installing multiple IP cameras in the indoor environment.

Figure 1: Architecture of IP-camera based system. (Adapted from [Citation40].)

3.1.3. VSLAM

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that can be used for localization and positioning using the visual input from a camera [Citation41]. Since the technology needs only a single camera sensor and can work without any other sensors, it has gained much popularity in the navigation system design domain. Bai et al. [Citation42] proposed an indoor navigation solution that uses a VSLAM algorithm to solve issues in indoor localization and virtual-blind-road building. The system used a dynamic sub-goal selecting strategy to help users navigate while avoiding obstacles along the way. VSLAM was also used in [Citation43]. Here, the system is connected to a cloud server, and the major components include a helmet with stereo cameras, an Android-based smartphone, a web application, and a cloud computing platform. An overview of the system is shown in Figure 2. The evaluations reported that there is still scope for improving object detection and recognition accuracy.

Figure 2: Cloud and vision-based navigation system. (Adapted from [Citation43].)
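
As a hedged illustration of the visual front-end that VSLAM pipelines build on, the sketch below detects and matches ORB features between two consecutive camera frames with OpenCV; the frame file names are placeholders, and a real system would feed such correspondences into pose estimation and map building rather than just counting them.

```python
import cv2

# Two consecutive frames from the body- or helmet-mounted camera (placeholder files).
frame_prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and descriptors in both frames.
orb = cv2.ORB_create(nfeatures=1000)
kp_prev, des_prev = orb.detectAndCompute(frame_prev, None)
kp_curr, des_curr = orb.detectAndCompute(frame_curr, None)

# Brute-force Hamming matching with cross-checking gives tentative correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

print(f"{len(matches)} feature correspondences between the two frames")
```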

3.1.4. RGB-D Cameras

The Intelligent Situational Awareness and Navigation Aid (ISANA) [Citation44] was an electronic SmartCane prototype that used the Google Tango [Citation45] tablet as its mobile computing platform. Using the onboard RGB-D camera, ISANA performed efficient obstacle detection and avoidance based on a Kalman filter (TSM-KF) algorithm. The authors also designed a multimodal human-machine interface (HMI) with speech-audio interaction and robust haptic interaction through the electronic SmartCane.
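
The Kalman-filter idea behind such tracking can be illustrated with a generic one-dimensional constant-velocity filter that smooths noisy range readings to an obstacle; this is only a sketch of the general technique, not the TSM-KF variant used by ISANA, and all noise parameters are assumed values.

```python
import numpy as np

dt = 0.1                          # time between range measurements in seconds (assumed)
F = np.array([[1, dt], [0, 1]])   # state transition for [position, velocity]
H = np.array([[1, 0]])            # only the position (range) is measured
Q = np.eye(2) * 0.01              # process noise covariance (assumed)
R = np.array([[0.25]])            # measurement noise variance (assumed)

x = np.array([[2.0], [0.0]])      # initial guess: obstacle 2 m away, not moving
P = np.eye(2)                     # initial state covariance

for z in [1.9, 1.8, 1.75, 1.6, 1.5]:          # noisy range readings in metres
    x, P = F @ x, F @ P @ F.T + Q             # predict
    y = np.array([[z]]) - H @ x               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y                             # update state estimate
    P = (np.eye(2) - K @ H) @ P               # update covariance

print(f"filtered range {x[0, 0]:.2f} m, closing speed {x[1, 0]:.2f} m/s")
```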

Xiao et al. [Citation46] proposed a cyber-physical system that uses an RGB-D sensor for object detection and scene understanding. Both auditory and vibrotactile feedback modes are available with the system. The system needs internet access for its computation but can work in both indoor and outdoor environments.

The system proposed by [Citation47] also used an RGB-D camera to facilitate indoor navigation for visually impaired people. The system consists of a standard laptop that runs the navigation software, a smartphone user interface, and a haptic feedback vest. Based on voice instructions from the user, the system identifies the start and destination points. In addition to storing previously generated maps for navigation, the system can also generate maps while the user is travelling.

The system proposed in [Citation48] used a combination of wearable and social sensors to provide navigation directions for blind users. Using wearable sensors such as an RGB-D camera and an IMU, the system provides line-of-sight object detection with audio and vibratory feedback. At the global level, users make decisions using the social sensors (such as Facebook posts and Twitter tweets). The social sensors analyze content posted by others, through which blind users can receive warnings about incidents or accidents that happened at a particular spot. Using that information, the blind user can decide whether or not to follow the route planned by the navigation system; the decision about a route can thus be made based on changes in the environment.

3.1.5. Microsoft Kinect

The Microsoft Kinect, which falls under RGB-D cameras, has created great interest among researchers in using it in the design of navigation systems for the visually impaired [Citation49], so the developments in this area are worth mentioning separately. The Kinect is a line of motion-sensing input devices produced by Microsoft that is suitable for detecting objects and can thus support navigation. It also supports a large feature set and can work in low-light environments. The system proposed by [Citation50] used an algorithm that takes input data from the Microsoft Xbox Kinect 360 [Citation51]; it helps to build a 3D map of indoor areas and detects the depth of an obstacle or human. Similarly, [Citation52] presented an obstacle avoidance system for blind people using a Kinect depth camera. The depth images from the Kinect were processed using a windowing-based mean method and then used to detect different obstacles. When the system recognizes an obstacle, it sends voice feedback to the user through the earphones.
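
A much simplified sketch in the spirit of such windowing over a depth frame is shown below: the frame is split into fixed-size windows, the mean depth of each window is computed, and the nearest region is reported if it falls inside an alert range. The window size, threshold, and synthetic frame are illustrative values, not those of the cited system.

```python
import numpy as np

def nearest_obstacle_window(depth_mm, window=80, alert_below_mm=1500):
    """Return (mean depth in mm, row, col) of the closest window, or None if all are far."""
    h, w = depth_mm.shape
    best = None
    for r in range(0, h - window + 1, window):
        for c in range(0, w - window + 1, window):
            patch = depth_mm[r:r + window, c:c + window]
            valid = patch[patch > 0]        # Kinect reports 0 where depth is unknown
            if valid.size == 0:
                continue
            mean_depth = valid.mean()
            if best is None or mean_depth < best[0]:
                best = (mean_depth, r, c)
    return best if best and best[0] < alert_below_mm else None

# Synthetic 480x640 depth frame (values in mm) with a close object on the left-hand side.
frame = np.full((480, 640), 4000, dtype=np.uint16)
frame[200:320, 80:200] = 1200
print(nearest_obstacle_window(frame))        # reports a region about 1.2 m away
```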

3.1.6. LiDAR

Since LiDAR became popular in autonomous vehicle design and robotics, researchers have also experimented with its scope in assistive navigation for people who are visually impaired. The LiDAR Assist Spatial Sensing (LASS) system proposed by [Citation53] uses a LiDAR sensor to identify obstacles and then translates them into stereo sounds of various pitches; the spatial information from the sensor, such as the obstacles’ orientation and distance, is translated into relative pitches. The system proposed in [Citation54] also reports the use of a LiDAR sensor integrated with a white cane. Having to scan the cane to detect obstacles is one disadvantage, in addition to the weight and size that make the system somewhat uncomfortable. However, the advantages of LiDAR can be exploited much further in this area, since smaller sensors have been released recently.
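
To make the pitch-mapping idea concrete, the sketch below shows one illustrative way of turning an obstacle's distance and bearing into a tone frequency and a stereo pan value; it is not the exact mapping used by LASS, and the ranges are assumptions.

```python
def obstacle_to_tone(distance_m, bearing_deg, max_range_m=4.0):
    """Map an obstacle to (frequency in Hz, stereo pan in [-1, 1]).

    Nearer obstacles give higher pitches; the bearing relative to the user's heading
    (negative = left, positive = right) is rendered as stereo panning.
    """
    closeness = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    freq_hz = 220.0 + closeness * (880.0 - 220.0)   # 220 Hz (far) up to 880 Hz (very close)
    pan = max(-1.0, min(1.0, bearing_deg / 90.0))
    return freq_hz, pan

# An obstacle 1 m ahead and slightly to the right gives a fairly high tone panned right.
print(obstacle_to_tone(1.0, 30.0))
```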

3.2 Non-visual Data Systems

This section discusses navigation systems that do not use vision algorithms or optical sensors as their main input. Various systems have been proposed using sensors such as ultrasonic sensors, beacons, and IR sensors. Even though some systems use both visual and non-visual data, the systems discussed here mainly depend on non-visual features to guide users during navigation.

3.2.1. BLE Beacons

Systems using Bluetooth beacons have been reported in the literature multiple times [Citation55–57], but a few deserve particular mention. Nair et al. [Citation58] proposed a hybrid positioning and navigation solution that combines Bluetooth Low Energy (BLE) beaconsFootnote10 and Google Tango to tap into their strengths while minimizing their weaknesses.

In the GuideBeacon system [Citation59], a smartphone interacts with Bluetooth-based beacons placed at different indoor locations, which can assist users by giving navigation directions. The user interface and navigation modules of the proposed system still need improvement.

3.2.2. IoT based

The Internet of Things (IoT) is the intercommunication between various systems that can transfer data over a network without requiring any form of human or machine interaction. Navigation systems using the IoT concept have been reported in the literature following its popularity in other applications [Citation60,Citation61].

Indriya [Citation62] is a handheld device used in conjunction with a smart cane. The system can detect obstacles up to three metres ahead and can distinguish between humans and objects with 80% accuracy. It can provide both vibratory and voice alerts before any possible collision. Indriya uses a small number of sensors for its IoT-based implementation with Android platform support. The system tends to give poor results in step detection, slope detection, etc.

Blind Guide [Citation63] is also based on the Internet of Things and can work with existing solutions such as a white cane to help visually impaired people navigate in both indoor and outdoor environments. If an obstacle is detected by the wireless sensor, a signal is sent to the central device, a Raspberry Pi board. After identifying the object, the user is informed of its name and its distance through voice feedback. An important limitation of the prototype is that it requires internet access for object recognition, which makes the system dependent on locations with data network access.

3.2.3. Ultrasonic Sensors

Navigation systems based on ultrasonic sensors can be considered the most common choice after visual (camera-based) solutions. The systems proposed using this technology work in conjunction with electronic boards such as the Raspberry Pi or Arduino [Citation64]. An ultrasonic blind stick with adjustable sensitivity, built around an ultrasonic proximity sensor and a GPS module, was reported in [Citation65].

NavGuide [Citation66] can categorize obstacles and the surrounding environment using ultrasonic sensors and provides prioritized information to the user through vibratory and audio alerts. The system was designed to detect wet floors and floor-level and knee-level obstacles. One of the limitations of NavGuide is that it is unable to sense a pit or downhill slope; in addition, it can sense a wet floor only after the user steps on it.

The GuideCane [Citation67] also used ultrasonic sensors to detect obstacles during navigation, and an attached embedded computer determined the direction of motion steered by the system and by the user. The limitations of the GuideCane are that it cannot detect overhanging obstacles such as tabletops and that it cannot detect important features such as sidewalk borders.

3.2.4. IR Sensors

IR sensors offer lower power consumption and lower cost than ultrasonic sensors, and for these reasons they have also been experimented with in navigation system design. A smart assistive stick based on IR was reported in [Citation68]. IR has also been used in conjunction with other technologies such as Google Tango and Unity [Citation69].

The system proposed in [Citation70] presents a solution using infrared sensors that helps to detect objects such as buildings or walls. The device, which is placed on the user’s arm, transmits navigation signals through vibrations; these signals provide information about movement steps and the nearest threats.

3.3 Map-based Systems

After O&M training, users who are visually impaired use different tactile tools, such as maps with raised points, small-scale prototypes, or magnet boards. Different multimodal maps have also been proposed to assist the navigation of blind and visually impaired people. These tools are an efficient way to learn spatial layouts in the absence of vision, but they have limitations; one of them is that the contents of the maps cannot be updated. To overcome these limitations, accessible interactive maps have been developed [Citation71]. Using a participatory design approach, the authors of [Citation72] designed and developed an augmented reality map that can be used in O&M classes. The prototype combines projection, audio output, and tactile tokens, and hence allows both map exploration and map construction by people who are visually impaired.

SmartTactMaps, proposed by [Citation73], was a smartphone-based approach to support blind persons in exploring tactile maps. A 3D environment map made using an RGB-D sensor was reported in [Citation74]; that system could extract semantic information from the RGB images to help people who are visually impaired navigate at home.

A 3D-printed audiovisual tactile map called LucentMaps [Citation75] was also proposed for people who are visually impaired; its authors claim it simplifies the combination of mobile devices with physical tactile maps. The VizMap system [Citation76] uses computer vision and crowdsourcing to collect information about indoor environments. It relies on on-site sighted volunteers to record videos and uses these to create a 3D spatial model. The video frames are semantically labelled and embedded into a reconstructed 3D model, which forms a queryable spatial representation of the environment.

3.4 Systems with 3D Sound

The Sound of Vision system reported in [Citation77] is a wearable sensory substitution device that assists the navigation of visually impaired people by creating and conveying an auditory and tactile representation of the surrounding environment. The user receives both audio and haptic feedback. The system still needs improvements in its usability and accuracy.

Stereo Vision-based Electronic Travel Aid (SVETA) is an electronic travel aid consisting of headgear moulded with stereo cameras and earphones. A sonification procedure is proposed to map the disparity image to stereo musical sound, where each musical sound corresponds to information about the features of the obstacle the user faces. The system works well in indoor environments. The training required to use the system turns out to be a disadvantage for novice users: the target users need to be trained on the different meanings of the stereo musical sounds before using it [Citation78].

3.5 Smartphone-based Solutions

Smartphone-based navigation solutions offer portability and convenience to users. This section describes the various solutions proposed on the smartphone platform for visually impaired users.

NavCog3, proposed by [Citation79], was an indoor navigation system that provides turn-by-turn instructions and immediate feedback when an incorrect orientation is detected. It also provides information on landmarks and points of interest in nearby locations, and it gives audio feedback to the users. Figure 3 shows an overview of the NavCog3 system [Citation79]. Ganz et al. [Citation80] proposed the PERCEPT-II application, in which the target user obtains navigation instructions to the chosen destination when touching specific landmarks using the application installed on the mobile device. The destination spots were tagged with Near Field Communication (NFC) tags. Figure 4 illustrates the architecture of PERCEPT-II. A limitation of the system is the need to install and maintain a large number of NFC tags.

Figure 3: An overview of the NavCog3 system. (Adapted from [Citation79].)

Figure 4: PERCEPT-II architecture. (Adapted from [Citation80].)

Lin et al. [Citation4] proposed a smartphone application that can be integrated with an image recognition system to form an assisted navigation system. Based on network availability, the system can choose between two operating modes: an online mode and an offline mode. When the system is initiated, the smartphone captures an image and sends it to the server for processing; the server uses deep learning algorithms [Citation82,Citation83] to recognize different obstacles. The main limitations of the system include high energy consumption and the need for high-speed network connectivity.
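
As a hedged illustration of such server-side recognition (the cited system's own models and label set are not specified in the paper), the sketch below runs an off-the-shelf pretrained detector from torchvision on an uploaded photo; the file name is a placeholder and the confidence threshold is illustrative.

```python
import torch
import torchvision

# Off-the-shelf Faster R-CNN with COCO weights (requires a recent torchvision release).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "snapshot.jpg" stands in for the image uploaded by the smartphone client.
img = torchvision.io.read_image("snapshot.jpg").float() / 255.0   # CxHxW in [0, 1]

with torch.no_grad():
    detections = model([img])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.6:   # illustrative confidence threshold
        print(int(label), [round(v) for v in box.tolist()], round(float(score), 2))
```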

The TARSIUS system [Citation84] aimed to enhance users' capability to understand visual scenes in outdoor environments. The system components include the TARSIUS app for mobile devices, a web server, a remote assistance centre, and Bluetooth LE/iBeacon tags placed along the streets at points of interest. The main issues with the TARSIUS system are the placement of Bluetooth beacons all around the streets, which incurs high cost and may also result in signal interference.

ENVISION [Citation15] uses a specific method to detect static and dynamic obstacles robustly and accurately in real-time video streamed from a smartphone with average hardware capacity. The system could be further improved if the obstacle recognition and classification modules helped the target users gain a better understanding of the environment.

The “Active Vision with Human-in-the-Loop for the Visually Impaired” (ActiVis) project developed by [Citation85] proposes a multimodal user interface that uses audio and vibration cues to transmit navigational information to the target user. The current implementation of the ActiVis system is an Android app based on a Tango device [Citation45] and a bone-conducting headset. The system could be improved by adapting the feedback parameters in real time, which would improve its performance.

The Tactile Wayfinder [Citation86] consists of a tactile belt and a Personal Digital Assistant (PDA) that runs a Wayfinder application. The application manages location and route information. After identifying the path direction, the information is sent to the tactile display; the vibrators in the belt give the user directional information for navigation.

4. DISCUSSIONS AND RECOMMENDATIONS

Through this review, we examined different navigation systems proposed for people who are visually impaired and classified them based on the underlying technology. The following two subsections discuss the systems considered for review and present some recommendations we propose based on that discussion.

4.1 Discussions

Much research has been done and several navigational technologies have been developed over the years to support blind and visually impaired people, but only some are still in use [Citation1]. The reasons for this situation are partly identified through this review. Most of the solutions may work well in theory but are too difficult or cumbersome for the intended users to adopt in practice. Part of the reason may be a weak connection between the engineering solution and the target users' requirements. For example, among the navigation solutions studied in this work, one of the main issues identified is the size of the devices or systems [Citation38,Citation52,Citation78]: the size of most systems is quite impractical for a person to carry, which makes users not even consider them as navigation aids. The cost of the systems and the long learning time needed to adapt to them are among the reasons why such systems are not widely used by blind and visually impaired people; the studies in [Citation70] also support these findings. With most systems, visually impaired people need to invest considerable time to get used to the system and understand how it works, which has been reported to be frustrating and discouraging [Citation1]. A further problem can arise when a navigation solution demands changes to the environment: setting up extra equipment or sensors in buildings or on roads [Citation40,Citation59,Citation80] requires additional expense and infrastructure, although it should be noted that such infrastructure can enhance safety and enable additional features in the navigation system.

Another interesting finding concerns the feedback design of the navigation systems. The majority of the reviewed systems opted for audio-based feedback [Citation40,Citation43,Citation50]; only a few consider dual-mode or multimodal feedback methods [Citation44,Citation62,Citation66]. One of the factors that characterizes a better navigation system is the appropriate choice of feedback method. Different target users may prefer different methods, and incorporating all of them in a single system may not be a fruitful approach, but users should not be restricted to a single feedback option either. In some situations, one feedback method can outperform another: in a noisy urban environment, for example, audio feedback is not a suitable choice, while the same method is preferable for identifying objects and describing them to the user.

The developments in technology, systems, and algorithms have paved the way for platforms that make it possible to create more interesting applications supporting the navigation of people who are visually impaired. Manduchi and Coughlan [Citation87] argue that there is increasing interest in the use of computer vision (in particular, mobile vision, implemented for example on smartphones) as an assistive technology for persons with visual impairments. However, there is still a lack of understanding of how exactly a blind person can operate a camera-based system [Citation88,Citation89].

Manduchi [Citation23] explains that this understanding is necessary not only to design good user interfaces but also to correctly dimension, design, and benchmark a mobile vision system for this type of application.

Though there are some technology-supported tools, such as Audiopolis [Citation12], to enhance a person's Orientation and Mobility skills, it should also be noted that little work is happening in this area. According to Long [Citation11], technology should also play a continuing role in Orientation and Mobility; the author further asserts that research can be useful in developing new technology and evaluating its impact in context.

Any user is interested in knowing about the surroundings during navigation. Even though many object-detection-based navigation solutions exist, such systems either fail to give proper information or flood the user with undesired information. Also, the complex and time-consuming operations required for obstacle detection make the system more complex and less effective at delivering the necessary information in real time [Citation90]. Real-time detection is essential in any navigation context: a few seconds of delay might cause risk. Most cloud-based or online solutions are unfavourable for this reason.

The privacy and security of personal and private data is a serious concern in today’s digital era, and this also holds in the context of technology-based navigation systems for people who are visually impaired [Citation91]. Some users do not want location information stored in the device history because that data could be used to track the user or to serve advertisements by commercial companies. Data management in a system concerns how data is handled during and after the usage of that system; the data can be in the form of audio signals, images, videos, etc. None of the reviewed works mentions how the data collected by the navigational systems are stored or how the related ethical concerns are handled, so this is also an interesting dimension to take care of.

Another major point researchers need to take care of is how far these technology-supported systems can fulfil the needs of the target users. Even though many technologies are currently available to visually impaired persons for navigation, almost all remain out of reach for most target users. We speculate that there are gaps between the research and the actual needs. The study in [Citation92] on the needs and coping strategies of low-vision individuals supports this claim: the author points out that a technical focus may divert attention away from the needs of users. So we may assume that technology-centric systems alone cannot be considered user-centric solutions that eliminate the difficulties faced by visually impaired people during their navigation.

4.2 Recommendations

It is evident that, even with advancements and variety in technological navigation assistive devices, they have not yet become widely used, and user acceptance is low [Citation93]. In this section, we put forward some recommendations that can help in the design of navigation systems for blind and visually impaired people in the future.

  1. Appropriate choice of real-time object detection methods: Deep learning-based object detection solutions have improved in recent years, and the time needed to perform the required task is being reduced [Citation94,Citation95]. The use of an effective and appropriate object detection method on a platform that supports real-time operation is an important consideration in this scenario.

  2. Availability of multiple options for feedback mechanisms: If the system delivers only a single mode of feedback, it may not be useful in all instances. Some people rely on the auditory mode, whereas others rely on tactile or vibratory modes, and there are situations in which each of these modes becomes more relevant depending on the environment. If multiple feedback modalities are available, the user gains the flexibility to choose one based on the situation or environment, making the system more effective in varying environments. A multimodal feedback system that adapts to user preferences according to the situation and environmental conditions would be useful to users, even though it is complex to implement [Citation96,Citation97]; a toy illustration of such an adaptive choice is sketched after this list.

  3. Mitigation of the extensive learning time: The time and effort needed to become familiar with a system is an important user-centric factor in the design of navigation systems for people who are visually impaired, and both are required of visually impaired persons to master most existing navigation assistive systems [Citation1,Citation98]. One of the aspects that needs to be considered here is that users should not face much difficulty in learning or using a new system.

  4. Comfort in carrying and usage: Many existing systems are burdensome or heavy [Citation49]. Portability is one of the important factors governing the practicality and user acceptance of such devices [Citation99]. Even when systems are integrated with a mobile phone, their usage cannot be assumed to be as user-friendly as expected. The solution should be designed so that the user can carry the system and use it with ease.

  5. Amount of information apprehended by the user: The main purpose of navigation is to reach the destination safely, but the user is also interested in knowing, simply and effectively, what is happening around them. They may wish to be informed about changes in the surroundings during navigation, such as traffic blocks, obstacles, or warning situations. Users should be given the right amount of information at the right time about the surrounding environment [Citation22]; to be effective, the navigation solution is advised to focus on conveying specific environmental information [Citation1].

  6. Avoiding social stigma concerns: Technological innovation alone cannot make visually impaired persons start liking a device. Users must feel comfortable using a system that assists navigation without feeling any type of social awkwardness [Citation100,Citation101]. Addressing this depends on designing a simple, user-centric device that does not make users feel different while using it in public.

  7. Proper management and security of personal and private data: The management of private and personal data should be considered when designing a navigation device for blind and visually impaired people [Citation91,Citation102]. A blind user should have the option of customized settings in the navigation device regarding which information the system uses for its processing and which data is shared over the network. These settings can be made according to the preferences of the user and the context in which the system is used.
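
A toy illustration of the adaptive feedback choice discussed in recommendation 2 is sketched below; the preference fields, noise threshold, and decision rule are assumptions for illustration, not a validated design.

```python
from dataclasses import dataclass

@dataclass
class FeedbackPreferences:
    prefers_audio: bool = True
    prefers_vibration: bool = True
    noisy_threshold_db: float = 70.0   # illustrative ambient-noise cut-off

def choose_feedback_mode(ambient_noise_db: float, prefs: FeedbackPreferences) -> str:
    """Pick a feedback modality from user preferences and the current sound level."""
    if prefs.prefers_vibration and ambient_noise_db >= prefs.noisy_threshold_db:
        return "vibration"   # speech or tones would be masked on a noisy street
    if prefs.prefers_audio:
        return "audio"       # richer descriptions, e.g. naming a detected obstacle
    return "vibration"

prefs = FeedbackPreferences()
print(choose_feedback_mode(78.0, prefs))   # noisy street -> vibration
print(choose_feedback_mode(45.0, prefs))   # quiet corridor -> audio
```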

5. CONCLUSION

This literature review gives an overview of the state of the art in navigation systems for people who are visually impaired, with the systems categorized based on the underlying technology. The review yields some interesting findings. Over the years, many systems have been proposed to support the navigation of people who are visually impaired, but many of them have limitations related to comfort and portability, user learning and adaptation time, adaptable multi-feedback options, etc. Perhaps for these reasons, the systems proposed to date have not gained much popularity in the blind and visually impaired community, and the target users remain reluctant to use them. The recommendations given in this paper, based on analyzing the issues with the existing systems, can be utilized in the future development of similar systems.

The design of navigation assistive technologies for people who are visually impaired is an active research area as well as a challenging one, in which there are instances where humanitarian conditions also have to be considered. We also recommend that future navigation systems take advantage of technological trends and use them to create universally accessible navigation solutions. We believe that the review of the systems, as well as the recommendations given in this paper, can be a stepping stone for further research in the area.

Additional information

Notes on contributors

Bineeth Kuriakose

Bineeth Kuriakose received his Bachelor of Technology in computer science and engineering from Kannur University, India, and his Master of Technology in computer science (with a specialization in image processing) from Cochin University of Science and Technology, India. He is currently a PhD research fellow at OsloMet - Oslo Metropolitan University, Norway. He has several years of experience working in academia and in industry and has served as a reviewer for international journals and conferences. His research interests include artificial intelligence, image processing, machine learning and human-computer interaction. Email: [email protected]

Raju Shrestha

Raju Shrestha is a computer scientist and engineer, currently working as an associate professor in computer science at OsloMet - Oslo Metropolitan University, Norway. He holds a PhD degree in computer science from the University of Oslo, Norway, and received master's degrees: an ME in computer science & technology from Hunan University, China, and a European Erasmus Mundus MSc degree in color in informatics and media technology from three European universities: University of Jean Monnet, France, University of Granada, Spain, and NTNU in Gjøvik, Norway. He received his BSc in computer science & engineering from Bangladesh University of Engineering and Technology, Bangladesh. Dr Shrestha has several years of professional and industrial working experience in different capacities in the field of information technology and received professional training in Japan, Singapore, and the USA. His research interests include (but are not limited to) artificial intelligence, machine learning & deep learning, data science, assistive technology, image processing & analysis, and cloud computing. Email: [email protected]

Frode Eika Sandnes

Frode Eika Sandnes received a BSc in computer science from the University of Newcastle upon Tyne, UK, and a PhD in computer science from the University of Reading, UK. He is a professor in the Department of Computer Science at OsloMet and at Kristiania University College and has also acquired status as a distinguished teaching fellow. His research interests include human-computer interaction in general and universal design in particular. Dr Sandnes has been instrumental in the establishment of the first master specialization in accessibility in Norway. He is an editorial member of several journals and has hosted several international conferences. Sandnes is the Norwegian representative to IFIP TC13. He was involved in the translation of WCAG 2.0 into Norwegian, has written several textbooks, and has served on the board of the usability special interest group of the Norwegian Computer Society. Email: [email protected]


References

  • A. G. Nicholas, and G. E. Legge, “Blind navigation and the role of technology,” in The Engineering Handbook of Smart Technology for Aging, Disability, and Independence, 2008, pp. 479–500.
  • A. Riazi, F. Riazi, R. Yoosfi, and F. Bahmeei, “Outdoor difficulties experienced by a group of visually impaired iranian people,” J. Curr. Ophthalmol., Vol. 28, no. 2, pp. 85–90, 2016. doi: https://doi.org/10.1016/j.joco.2016.04.002
  • R. Manduchi, S. Kurniawan, and H. Bagherinia, “Blind guidance using mobile computer vision: a usability study,” in ASSETS, 2010, pp. 241–242.
  • B.-S. Lin, C.-C. Lee, and P.-Y. Chiang, “Simple smartphone-based guiding system for visually impaired people,” Sensors, Vol. 17, no. 6, pp. 1371, 2017. doi: https://doi.org/10.3390/s17061371
  • Y. Zhao, E. Kupferstein, D. Tal, and S. Azenkot, “It looks beautiful but scary: How low vision people navigate stairs and other surface level changes,” in Proceedings of the 20th International ACM SIGAC-CESS Conference on Computers and Accessibility, ACM, 2018, pp. 307–320.
  • H. Devlin. Echolocation could help blind people learn to navigate like bats, Feb 2018.
  • Acoustical Society of America. Exploring the potential of human echolocation, Jun 2017.
  • L. Thaler, and M. A. Goodale, “Echolocation in humans: an overview,” Wiley Interdisciplinary Reviews: Cognitive Science, Vol. 7, no. 6, pp. 382–393, 2016.
  • M. Srikulwong, and E. O’Neill, “Tactile representation of landmark types for pedestrian navigation: user survey and experimental evaluation,” in Workshop on using audio and Haptics for delivering spatial information via mobile devices at MobileHCI 2010, 2010, pp. 18–21.
  • M. C. Holbrook, and A. Koenig, “History and theory of teaching children and youths with visual impairments,” in Foundations of Education, Vol. I, 2000.
  • R. G. Long, “Orientation and Mobility research: what is known and what needs to be known,” Peabody J. Educ., Vol. 67, no. 2, pp. 89–109, 1990. doi: https://doi.org/10.1080/01619569009538683
  • J. Sánchez, M. Espinoza, M. de Borba Campos, and L. B. Merabet, “Enhancing orientation and mobility skills in learners who are blind through video gaming,” in Proceedings of the 9th ACM Conference on Creativity & Cognition, ACM, 2013, pp. 353–356.
  • A. Bhowmick, and S. M. Hazarika, “An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends,” J. Multimodal User Interfaces, Vol. 11, no. 2, pp. 149–172, 2017. doi: https://doi.org/10.1007/s12193-016-0235-6
  • M. J. Field. Assistive and mainstream technologies for people with disabilities, Jan 1970.
  • S. Khenkar, H. Alsulaiman, S. Ismail, A. Fairaq, S. K. Jarraya, and H. Ben-Abdallah, “ENVISION: Assisted navigation of visually impaired smart-phone users,” Procedia. Comput. Sci., Vol. 100, pp. 128–135, 2016. doi: https://doi.org/10.1016/j.procs.2016.09.132
  • Á. Csapó, G. Wersényi, H. Nagy, and T. Stockman, “A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for re-search,” J. Multimodal User Interfaces, Vol. 9, no. 4, pp. 275–286, 2015. doi: https://doi.org/10.1007/s12193-015-0182-7
  • L. Ran, S. Helal, and S. Moore, “Drishti: an integrated indoor/outdoor blind navigation system and service,” in Second IEEE Annual Conference on Pervasive Computing and Communications, 2004. Proceedings of the, IEEE, 2004, pp. 23–30.
  • R. Tapu, B. Mocanu, and E. Tapu, “A survey on wearable devices used to assist the visual impaired user navigation in outdoor environments,” in 2014 11th International Symposium on Electronics and Telecommunications (ISETC), Nov 2014, pp. 1–4.
  • C. S. Silva, and P. Wimalaratne, “State-of-the-art in indoor navigation and positioning of visually impaired and blind,” in 17th International Conference on Advances in ICT for Emerging Regions (ICTer 2017) - Proceedings, 2018, pp. 271–279.
  • Z. Fei, E. Yang, H. Hu, and H. Zhou, “Review of machine vision-based electronic travel aids,” in 2017 23rd International Conference on Automation and Computing (ICAC), IEEE, 2017, pp. 1–7.
  • A. Hojjat, “Enhanced navigation systems in GPS-denied environments for visually impaired people: A survey,” arXiv preprint arXiv:1803.05987, 2018.
  • P. Chanana, R. Paul, M. Balakrishnan, and P. V. M. Rao, “Assistive technology solutions for aiding travel of pedestrians with visual impairment,” J Rehabil. Assist. Technol. Eng., Vol. 4, pp. 2055668317725993, 2017.
  • R. Manduchi. “Mobile vision as assistive technology for the blind: An experimental study,” in Computers helping people with special needs, K. Miesenberger, A. Karshmer, P. Penaz, and W. Zagler, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 9–16.
  • V.-N. Hoang, T.-H. Nguyen, T.-L. Le, T.-T. H. Tran, T.-P. Vuong, and N. Vuillerme, “Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect,” in 2015 2nd National Foundation for Science and Technology Development Conference on Information and Computer Science (NICS), IEEE, 2015, pp. 54–59.
  • H.-C. Huang, C.-T. Hsieh, and C.-H. Yeh, “An indoor obstacle detection system using depth information and region growth,” Sensors, Vol. 15, no. 10, pp. 27116–27141, Oct 2015. doi: https://doi.org/10.3390/s151027116
  • U. R. Roentgen, G. J. Gelderblom, M. Soede, and L. P. De Witte, “The impact of electronic mobility devices for persons who are visually impaired: A systematic review of effects and effectiveness,” J. Visual Impair. Blin., Vol. 103, no. 11, pp. 743–753, 2009. doi: https://doi.org/10.1177/0145482X0910301104
  • V. Filipe, F. Fernandes, H. Fernandes, A. Sousa, H. Paredes, and J. Barroso, “Blind navigation support system based on Microsoft Kinect,” Procedia. Comput. Sci., Vol. 14, pp. 94–101, 2012. doi: https://doi.org/10.1016/j.procs.2012.10.011
  • Wireless Technology Advisor. Disadvantages of RFID: mostly minor or you can minimize them, Nov 13, 2009.
  • S. S. Chawathe, “Low-latency indoor localization using Bluetooth beacons,” in 2009 12th International IEEE Conference on Intelligent Transportation Systems, IEEE, 2009, pp. 1–7.
  • N. Fallah, I. Apostolopoulos, K. Bekris, and E. Folmer, “Indoor human navigation systems: A survey,” Interact. Comput., Vol. 25, no. 1, pp. 21–33, 2013.
  • A. J. Moreira, R. T. Valadas, and A. M. de Oliveira Duarte, “Reducing the effects of artificial light interference in wireless infrared transmission systems,” in IET Conference Proceedings, January 1996, pp. 5–5(1).
  • D. Dakopoulos, and N. G. Bourbakis, “Wearable obstacle avoidance electronic travel aids for blind: a survey,” IEEE Trans. Syst., Man, and Cybernetics, Part C (Applications and Reviews), Vol. 40, no. 1, pp. 25–35, 2009. doi: https://doi.org/10.1109/TSMCC.2009.2021255
  • Z. Cai, D. G. Richards, M. L. Lenhardt, and A. G. Madsen, “Response of human skull to bone-conducted sound in the audiometric-ultrasonic range,” Int. Tinnitus J., Vol. 8, no. 1, pp. 3–8, 2002.
  • A. Fadell, A. Hodge, S. Zadesky, A. Lindahl, and A. Guetta. Tactile feedback in an electronic device, February 12 2013. US Patent 8,373,549.
  • R. H. Lander, and S. Haberman. Tactile feedback controlled by various medium, November 16 1999. US Patent 5,984,880.
  • S. Brewster, F. Chohan, and L. Brown, “Tactile feedback for mobile interactions,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2007, pp. 159–162.
  • E. Hoggan, S. A. Brewster, and J. Johnston, “Investigating the effectiveness of tactile feedback for mobile touchscreens,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2008, pp. 1573–1582.
  • N. G. Bourbakis, and D. Kavraki, “An intelligent assistant for navigation of visually impaired people,” in Proceedings 2nd Annual IEEE International Symposium on Bioinformatics and Bioengineering (BIBE 2001), IEEE, 2001, pp. 230–235.
  • T. Schwarze, M. Lauer, M. Schwaab, M. Romanovas, S. Böhm, and T. Jürgensohn, “A camera-based mobility aid for visually impaired people,” KI-Künstliche Intelligenz, Vol. 30, no. 1, pp. 29–36, 2016. doi: https://doi.org/10.1007/s13218-015-0407-7
  • K. Chaccour, and G. Badr, “Novel indoor navigation system for visually impaired and blind people,” in 2015 International Conference on Applied Research in Computer Science and Engineering (ICAR), IEEE, 2015, pp. 1–5.
  • N. Karlsson, E. Di Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. E. Munich, “The vSLAM algorithm for robust localization and mapping,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, IEEE, 2005, pp. 24–29.
  • J. Bai, S. Lian, Z. Liu, K. Wang, and D. Liu, “Virtual-blind-road following-based wearable navigation device for blind people,” IEEE Trans. Consum. Electron., Vol. 64, no. 1, pp. 136–143, 2018. doi: https://doi.org/10.1109/TCE.2018.2812498
  • J. Bai, D. Liu, G. Su, and Z. Fu, “A cloud and vision-based navigation system used for blind people,” in Proceedings of the 2017 International Conference on Artificial Intelligence, Automation and Control Technologies, ACM, 2017, pp. 22.
  • B. Li, J. P. Munoz, X. Rong, Q. Chen, J. Xiao, Y. Tian, A. Arditi, and M. Yousuf, “Vision-Based mobile indoor assistive navigation Aid for blind people,” IEEE Trans. Mob. Comput., Vol. 18, no. 3, pp. 702–714, 2019. doi: https://doi.org/10.1109/TMC.2018.2842751
  • E. Marder-Eppstein, “Project Tango,” in ACM SIGGRAPH 2016 Real-Time Live!, SIGGRAPH ‘16, New York, NY, USA, Association for Computing Machinery, 2016, pp. 25.
  • J. Xiao, S. L. Joseph, X. Zhang, B. Li, X. Li, and J. Zhang, “An assistive navigation framework for the visually impaired,” IEEE Trans. Human-Mach. Syst., Vol. 45, no. 5, pp. 635–640, 2015. doi: https://doi.org/10.1109/THMS.2014.2382570
  • Y. H. Lee, and G. Medioni, “RGB-D camera based wearable navigation system for the visually impaired,” Comput. Vis. Image. Underst., Vol. 149, pp. 3–20, 2016. doi: https://doi.org/10.1016/j.cviu.2016.03.019
  • S. L. Joseph, J. Xiao, X. Zhang, B. Chawda, K. Narang, N. Rajput, S. Mehta, and L. Venkata Subramaniam, “Being aware of the world: Toward using social media to support the blind with navigation,” IEEE Trans. Human-Mach. Syst., Vol. 45, no. 3, pp. 399–405, 2015. doi: https://doi.org/10.1109/THMS.2014.2382582
  • A. Bhowmick, S. Prakash, R. Bhagat, V. Prasad, and S. M. Hazarika, “Intellinavi: navigation for blind based on kinect and machine learning,” in International Workshop on Multi-disciplinary Trends in Artificial Intelligence, Springer, 2014, pp. 172–183.
  • M. Sain, and D. Necsulescu, “Portable monitoring and navigation control system for helping visually impaired people,” in Proceedings of the 4th International Conference of Control, Dynamic Systems, and Robotics (CDSR’17), 2017, pp. 1–9.
  • Z. Zhang, “Microsoft Kinect sensor and its effect,” IEEE MultiMedia, Vol. 19, pp. 4–10, 2012. doi: https://doi.org/10.1109/MMUL.2012.24
  • A. Ali, and M. A. Ali, “Blind navigation system for visually impaired using windowing-based mean on Microsoft Kinect camera,” in International Conference on Advances in Biomedical Engineering (ICABME), 2017.
  • C. Ton, A. Omar, V. Szedenko, V. H. Tran, A. Aftab, F. Perla, M. J. Bernstein, and Y. Yang, “Lidar assist spatial sensing for the visually impaired and performance analysis,” IEEE Trans. Neural Syst. Rehabil. Eng., Vol. 26, no. 9, pp. 1727–1734, 2018. doi: https://doi.org/10.1109/TNSRE.2018.2859800
  • R. O’Keeffe, S. Gnecchi, S. Buckley, C. O’Murchu, A. Mathewson, S. Lesecq, and J. Foucault, “Long range lidar characterisation for obstacle detection for use by the visually impaired and blind,” in 2018 IEEE 68th Electronic Components and Technology Conference (ECTC), IEEE, 2018, pp. 533–538.
  • M. Castillo-Cara, E. Huaranga-Junco, G. Mondragón-Ruiz, A. Salazar, L. Orozco-Barbosa, and E. A. Antúnez, “Ray: smart indoor/outdoor routes for the blind using Bluetooth 4.0 BLE,” in ANT/SEIT, 2016, pp. 690–694.
  • T. Ishihara, J. Vongkulbhisal, K. M. Kitani, and C. Asakawa, “Beacon-guided structure from motion for smartphone-based navigation,” in 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, 2017, pp. 769–777.
  • V. Nair, M. Budhai, G. Olmschenk, W. H. Seiple, and Z. Zhu, “ASSIST: personalized indoor navigation via multimodal sensors and high-level semantic information,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • V. Nair, C. Tsangouri, B. Xiao, G. Olmschenk, Z. Zhu, and W. Seiple, “A hybrid indoor positioning system for the blind and visually impaired using Bluetooth and Google Tango,” J. Technol. Persons Disabil., Vol. 6, pp. 62–82, 2018.
  • S. A. Cheraghi, V. Namboodiri, and L. Walker, “GuideBeacon: beacon-based indoor wayfinding for the blind, visually impaired, and disoriented,” in 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), IEEE, 2017, pp. 121–130.
  • B. Vamsi Krishna, and K. Aparna, “IoT-based indoor navigation wearable system for blind people,” in Artificial Intelligence and Evolutionary Computations in Engineering Systems, Springer, 2018, pp. 413–421.
  • N. Sathya Mala, S. Sushmi Thushara, and S. Subbiah, “Navigation gadget for visually impaired based on IoT,” in 2017 2nd International Conference on Computing and Communications Technologies (ICCCT), IEEE, 2017, pp. 334–338.
  • S. B. Kallara, M. Raj, R. Raju, N. J. Mathew, V. R. Padmaprabha, and D. S. Divya, “Indriya—a smart guidance system for the visually impaired,” in 2017 International Conference on Inventive Computing and Informatics (ICICI), IEEE, 2017, pp. 26–29.
  • D. Vera, D. Marcillo, and A. Pereira, “Blind guide: Anytime, anywhere solution for guiding blind people,” in World Conference on Information Systems and Technologies, Springer, 2017, pp. 353–363.
  • S. Gupta, I. Sharma, A. Tiwari, and G. Chitranshi, “Advanced guide cane for the visually impaired people,” in 2015 1st International Conference on Next Generation Computing Technologies (NGCT), IEEE, 2015, pp. 452–455.
  • A. Sen, K. Sen, and J. Das, “Ultrasonic blind stick for completely blind people to avoid any kind of obstacles,” in 2018 IEEE SENSORS, IEEE, 2018, pp. 1–4.
  • K. Patil, Q. Jawadwala, and F. C. Shu, “Design and construction of electronic aid for visually impaired people,” IEEE Trans. Human-Mach. Syst., Vol. 48, no. 2, pp. 172–182, 2018. doi: https://doi.org/10.1109/THMS.2018.2799588
  • J. Borenstein, and I. Ulrich, “The GuideCane – applying mobile robot technologies to assist the visually impaired,” IEEE Trans. Syst., Man, Cybern. A, Vol. 31, no. 2, pp. 131–136, 2001.
  • A. A. Nada, M. A. Fakhr, and A. F. Seddik, “Assistive infrared sensor based smart stick for blind people,” in 2015 Science and Information Conference (SAI), IEEE, 2015, pp. 1149–1154.
  • R. Jafri, R. L. Campos, S. A. Ali, and H. R. Arabnia, “Visual and infrared sensor data-based obstacle detection for the visually impaired using the google project tango tablet development kit and the unity engine,” IEEE. Access., Vol. 6, pp. 443–454, 2017. doi: https://doi.org/10.1109/ACCESS.2017.2766579
  • P. Marzec, and A. Kos, “Low energy precise navigation system for the blind with infrared sensors,” in 2019 MIXDES – 26th International Conference “Mixed Design of Integrated Circuits and Systems”, IEEE, 2019, pp. 394–397.
  • J. Ducasse, A. M. Brock, and C. Jouffrais, “Accessible interactive maps for visually impaired users,” in Mobility of Visually Impaired People, Springer, 2018, pp. 537–584.
  • J. Albouys-Perrois, J. Laviole, C. Briant, and A. M. Brock, “Towards a multisensory augmented reality map for blind and low vision people: A participatory design approach,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–14.
  • T. Götzelmann, and K. Winkler, “Smart-tactmaps: a smartphone-based approach to support blind persons in exploring tactile maps,” in Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments, 2015, pp. 1–8.
  • Q. Liu, R. Li, H. Hu, and D. Gu, “Building semantic maps for blind people to navigate at home,” in 2016 8th Computer Science and Electronic Engineering Conference (CEEC), IEEE, 2016, pp. 12–17.
  • T. Götzelmann, “LucentMaps: 3D printed audiovisual tactile maps for blind and visually impaired people,” in Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, 2016, pp. 81–90.
  • C. Gleason, A. Guo, G. Laput, K. Kitani, and J. P. Bigham, “Vizmap: Accessible visual information through crowdsourced map reconstruction,” in Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, 2016, pp. 273–274.
  • S. Caraiman, A. Morar, M. Owczarek, A. Burlacu, D. Rzeszotarski, N. Botezatu, P. Herghelegiu, F. Moldoveanu, P. Strumillo, and A. Moldoveanu, “Computer vision for the visually impaired: The sound of vision system,” in Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017), 2018, pp. 1480–1489.
  • G. Balakrishnan, G. Sainarayanan, R. Nagarajan, and S. Yaacob, “Wearable real-time stereo vision for the visually impaired,” Eng. Lett., Vol. 14, no. 2, 2007, pp. 6–14.
  • D. Sato, U. Oh, K. Naito, H. Takagi, K. Kitani, and C. Asakawa, “NavCog3,” in Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility - ASSETS ‘17, 2017, pp. 270–279.
  • A. Ganz, J. M. Schafer, Y. Tao, C. Wilson, and M. Robertson, “PERCEPT-II: smartphone based indoor navigation system for the blind,” in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Aug 2014, pp. 3662–3665.
  • S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99.
  • J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
  • T. Mataró, F. Masulli, S. Rovetta, A. Cabri, C. Traverso, E. Capris, and S. Torretta, “An assistive mobile system supporting blind and visual impaired people when are outdoor,” in RTSI 2017 - IEEE 3rd International Forum on Research and Technologies for Society and Industry, Conference Proceedings, 2017, pp. 1–6.
  • J. Lock, G. Cielniak, and N. Bellotto, “A portable navigation system with an adaptive multimodal interface for the blind,” in AAAI Spring Symposium - Technical Report SS-17-01, 2017, pp. 395–400.
  • W. Heuten, N. Henze, S. Boll, and M. Pielot, “Tactile wayfinder: a non-visual support system for wayfinding,” in Proceedings of the 5th Nordic Conference on Human-computer Interaction: Building Bridges, ACM, 2008, pp. 172–181.
  • R. Manduchi, and J. Coughlan, “(Computer) vision without sight,” Commun. ACM, Vol. 55, no. 1, pp. 96, 2012. doi: https://doi.org/10.1145/2063176.2063200
  • C. Jayant, H. Ji, S. White, and J. P. Bigham, “Supporting blind photography,” in The Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, 2011, pp. 203–210.
  • M. Vázquez, and A. Steinfeld, “Helping visually impaired users properly aim a camera,” in Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility, 2012, pp. 95–102.
  • L. Maddalena, and A. Petrosino, “Moving object detection for real-time applications,” in 14th International Conference on Image Analysis and Processing (ICIAP 2007), IEEE, 2007, pp. 542–547.
  • G. Regal, E. Mattheiss, M. Busch, and M. Tscheligi, “Insights into internet privacy for visually impaired and blind people,” in International Conference on Computers Helping People with Special Needs, Springer, 2016, pp. 231–238.
  • F. E. Sandnes, “What do low-vision users really want from smart glasses? faces, text and perhaps no glasses at all,” in International Conference on Computers Helping People with Special Needs, Springer, 2016, pp. 187–194.
  • M. Gori, G. Cappagli, A. Tonelli, G. Baud-Bovy, and S. Finocchietti, “Devices for visually impaired people: high technological devices with low user acceptance and no adaptability for children,” Neurosci. Biobehav. Rev., Vol. 69, pp. 79–88, 2016. doi: https://doi.org/10.1016/j.neubiorev.2016.06.043
  • L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, “Deep learning for generic object detection: A survey,” Int. J. Comput. Vision, Vol. 128, no. 2, pp. 261–318, 2020. doi: https://doi.org/10.1007/s11263-019-01247-4
  • Z.-Q. Zhao, P. Zheng, S. Xu, and X. Wu, “Object detection with deep learning: A review,” IEEE Trans. Neural Netw. Learn. Syst., Vol. 30, no. 11, pp. 3212–3232, 2019. doi: https://doi.org/10.1109/TNNLS.2018.2876865
  • Á. Csapó, G. Wersényi, and M. Jeon, “A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired,” Acta Polytech. Hung., Vol. 13, no. 5, pp. 39, 2016.
  • S. Real, and A. Araujo, “Navigation systems for the blind and visually impaired: Past work, challenges, and open problems,” Sensors, Vol. 19, no. 15, pp. 3404, 2019. doi: https://doi.org/10.3390/s19153404
  • W. Jeamwatthanachai, M. Wald, and G. Wills, “Indoor navigation by blind people: Behaviors and challenges in unfamiliar spaces and buildings,” Brit. J. Visual Impair., Vol. 37, no. 2, pp. 140–153, 2019. doi: https://doi.org/10.1177/0264619619833723
  • M. Bousbia-Salah, and M. Fezari, “A navigation tool for blind people,” in Innovations and Advanced Techniques in Computer and Information Sciences and Engineering, Springer, 2007, pp. 333–337.
  • A. Abdolrahmani, W. Easley, M. Williams, S. Branham, and A. Hurst, “Embracing errors: Examining how context of use impacts blind individuals’ acceptance of navigation aid errors,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 4158–4169.
  • M. A. Williams, E. Buehler, A. Hurst, and S. K. Kane, “What not to wearable: using participatory workshops to explore wearable device form factors for blind users,” in Proceedings of the 12th Web for All Conference, 2015, pp. 1–4.
  • P. Angin, B. K. Bhargava, et al., “Real-time mobile-cloud computing for context-aware blind navigation,” Int. J. Next-Generation Comput., Vol. 2, no. 2, pp. 405–414, 2011.