
The future of geospatial intelligence

Pages 151-162 | Received 06 Apr 2017, Accepted 16 May 2017, Published online: 28 Jun 2017

Abstract

For centuries, humans’ capacity to capture and depict physical space has played a central role in industrial and societal development. However, the digital revolution and the emergence of networked devices and services accelerate geospatial capture, coordination, and intelligence in unprecedented ways. Underlying the digital transformation of industry and society is the fusion of the physical and digital worlds – ‘perceptality’ – where geospatial perception and reality merge. This paper analyzes the myriad forces driving perceptality and the future of geospatial intelligence and presents real-world implications and examples of its industrial application. Applications of sensors, robotics, cameras, machine learning, encryption, cloud computing, and other software and hardware intelligence are converging, enabling new ways for organizations and their equipment to perceive and capture reality. Meanwhile, demands for performance, reliability, and security are pushing compute ‘to the edge,’ where real-time processing and coordination are vital. Big data place new constraints on the economics of computing, as pressures mount to actually use these data, both in real time and for longer term strategic analysis and decision-making. These challenges require orchestration between information technology (IT) and operational technology (OT) and synchronization of diverse systems, data-sets, devices, environments, workflows, and people.

1. Introduction: perceptality and convergence of digital and physical worlds

Since the dawn of human history, our ability to make informed decisions about the world around us has been driven by perception: perception of our position and importance relative to our surroundings. Since the dawn of computing, the digital world has remained, well, digital – data in boxes, hard drives, and servers, rarely integrated or analyzed within any larger context. For years, the physical world remained largely separated from the digital world, technology from business, information technology (IT) from operational technology (OT). However, the pace of technological advancement is finally unifying these worlds.

The discipline inherent in capturing the physical dimension of this intersection is the field of geospatial intelligence. This includes the perception, cognition, computation, control, reaction, and understanding of physical features and geographically referenced activities. As technology has evolved alongside this field, capabilities in these six areas have transformed how we use tools to shape change.

‘Perceptality’, a term we have coined at Hexagon, is the convergence of perception and reality. It is the merging of the digital and physical worlds, the inevitable fusion of real life, objects, and environments with their cyber manifestations; it underlies the digital transformation of industry and society.

Of course, long before the digital age, humans were using technology to capture ‘raw’ data in the field, ‘at the edge.’ Centuries ago, surveyors traveled to remote locations and recorded angles and ranges to depict topography, first on papyrus, then on paper, and later on ticker tape. Extracting information directly from and about the physical world has been central to industrial and societal development. Figure 1 depicts a few examples of geospatial technologies over the centuries.

Figure 1. Geospatial technology has evolved over the ages.


Hexagon was pioneering data capture from the edge long before networked devices and ubiquitous connectivity. Early development of communications equipment and antennae helped us shape the trajectory of mobile phones and radios. Our repertoire of metrology technologies, including laser scanners, portable measuring arms, calipers, theodolites, and tomography, allowed us to redefine precision and quality assurance for industrial manufacturing. From micro-precision to geospatial dimensioning, our software and hardware innovations in the early 2000s pioneered intelligent mapping, spatial awareness, structural monitoring, and industrial plant control and management.

As Moore’s Law has forced down computation costs and size constraints, we have continued to accelerate our technological capabilities to capture physical reality with ever greater precision. For years now, we have deployed LiDARs, inertial navigation systems, multi-laser systems, precision 3D scanners, integrations with Global Navigation Satellite Systems (GNSS), and many other measurement technologies. With advancements in machine learning, we now use 3D model generators, robotic total stations, multi-imaging sensors, and all manner of computer vision techniques. These capabilities have helped us deliver unprecedented perception and cognition applications across agriculture, mining, construction, government, transportation, and security by creating digital replicas of physical realities.

When organizations can mirror physical realities in the digital world, the capacity for agile and intelligent decision-making is redefined. Situational awareness sharpens and expands, and change detection accelerates. Precision and accuracy become more granular than ever before, which reduces error and waste, mitigates risk and uncertainty, and enables greater speed, reliability, productivity, safety, and security.

Although critical sectors like engineering, manufacturing, agriculture, and others have been increasing their adoption of sensor technology and software systems, many applications remain siloed, disconnected from other data-sets or stakeholders, and generally lagging in returns on investment. To fully realize the potential of these investments, current computing models must shift.

Enabling perceptality is not just about the transition from digitized endpoints to fully digitalized workflows and interactions; it is about seamlessly capturing reality and empowering these interactions at the far edges of the network.

2. Shift to the edge

When we reflect upon the development and evolution of the Internet, what quickly emerges is the ever-evolving direction of network topology and computing architecture. The Internet was born of the mainframe era, a centralized architecture in which a large high-speed processing and memory unit supported multiple workstations. With the rise of personal computers (PCs) and the need for distributed workstations, business logic, simple data, and interfaces to operate as one ‘networked system,’ the client–server model emerged. For more than two decades, this model prevailed, until the massive transformation of user interface and computational power enabled the age of mobile.

Centralized cloud computing marked another shift back to centralized network topology, this time at profound scale. Indeed, cloud computing architectures, software-as-a-service (SaaS) products, and innumerable apps have transformed the way billions of people live, navigate, work, bank, communicate, and interact in society. To enable mobile functionality, flexibility, efficiency, and ubiquitous adoption, cloud computing has emerged as the de facto, centralized architecture supporting mobile devices. However, the pace of technological innovation is yet again inspiring a pivot. Figure 2 depicts the shifts between centralized and distributed compute over the past few decades.

Figure 2. The evolution of computing architecture.


With each new era, the total addressable market – both human users and machine ‘nodes’ – expands exponentially. The number of users at the height of the mainframe age was around 10 million; this number swelled to roughly 2 billion with the advent of the PC; today there are roughly 4 billion mobile devices (Statista Citation2017). When sensors pervade any and every object in the physical world, the number of connected endpoints will once again grow exponentially.

The gravity of so much data generated by so many endpoints has rendered centralized computing topologies inadequate. With the push from cloud services and the pull from a rapidly expanding number of connected endpoints, the so-called ‘edge’ of the network must transform from pure data generation to intelligence generation. Figure 3 offers an overview of where computation and analytics take place depending on application and power requirements.

Figure 3. Agility and execution at the edge; learning and innovation in the cloud.


2.1. Practical forces driving connected industries to ‘live on the edge’

Just as economic and industrial forces have been awakening to ‘digital’ disruption, where cloud computing, mobile, and social are redefining operations and business models, another, far greater wave of disruption is emerging. Structures that are centralized today – organizational, computational, communicational – are quietly undergoing a seismic shift toward decentralized and distributed systems. A number of forces are driving the shift from cloud to edge.

2.2. From big data to colossal data

First, it continues to be true that we have created more data in the last two years than in all prior human history combined. In 1992, global internet traffic amounted to 100 GB per day (Cisco Citation2017); in 2015, that number hit 15 billion GB per day. The digital universe is doubling in size every 12 months (Cloud Times Citation2014). By 2020, it is expected to reach some 44 zettabytes – by some estimates, more bytes in the digital universe than there are stars in the physical universe (IDC and EMC2 Citation2014).

2.3. Performance and energy constraints

The problem with so much data is that existing infrastructure simply cannot handle the rates or the volumes. Proliferating devices are collecting vast amounts of data, and these data need to be processed in real time, a feat hardly achievable with centralized networks, limited bandwidth, and cloud infrastructure. To the extent high-volume data and content are shipped to the cloud for processing today, they place tremendous cost pressures and constraints in the form of bandwidth, latency, storage, energy, and raw computational power. In distributed environments, so-called ‘peer-to-peer’ networks are utilized to lessen the load on core networks and share data locally (Shi et al. Citation2016). This is a key enabler for digitalizing energy-constrained environments.

Many Internet of Things (IoT) edge sensors, particularly in industrial settings, must be equipped to operate in regions of low connectivity, often for years on the same battery. Even when energy harvesting is possible, power budgets for these devices are a function of processing capacity. In remote environments, when nodes require high-energy currents both to stay active (i.e. continue sensing, measuring, and interpreting) and to transmit data, enterprises face a trade-off between power and performance efficiency. For data-intensive devices, like video cameras or audio feeds, capabilities to fully harness these data have historically been extremely limited. Although sensing technology itself is advancing rapidly, firmware and CPU designs typically determine power consumption, sleep currents, performance, peripheral functionality, and processing speed.
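To make the power-versus-performance trade-off concrete, consider a minimal back-of-envelope model of a duty-cycled sensor node. All currents, durations, and the battery capacity below are illustrative assumptions, not specifications of any particular device; the point is only that shifting work from the radio to local processing can extend battery life by an order of magnitude.

```python
# Back-of-envelope battery-life model for a duty-cycled edge sensor.
# All numbers below are illustrative assumptions, not device specs.

SLEEP_CURRENT_MA = 0.005      # 5 microamps while dormant
SENSE_CURRENT_MA = 5.0        # active sensing/processing
RADIO_CURRENT_MA = 120.0      # burst current while transmitting
BATTERY_MAH = 2600            # a typical single-cell capacity

def average_current_ma(sense_s: float, tx_s: float, period_s: float) -> float:
    """Weighted-average draw over one wake/sleep cycle."""
    sleep_s = period_s - sense_s - tx_s
    return (SENSE_CURRENT_MA * sense_s
            + RADIO_CURRENT_MA * tx_s
            + SLEEP_CURRENT_MA * sleep_s) / period_s

# Transmit raw samples every 10 min vs. process locally and send a summary.
raw = average_current_ma(sense_s=1.0, tx_s=8.0, period_s=600)
edge = average_current_ma(sense_s=3.0, tx_s=0.5, period_s=600)  # more compute, far less radio

for label, i_ma in (("raw upload", raw), ("edge summarization", edge)):
    years = BATTERY_MAH / i_ma / 24 / 365
    print(f"{label}: {i_ma:.3f} mA average -> ~{years:.1f} years on one battery")
```

Under these assumed numbers, raw upload drains the cell in a few months, while edge summarization stretches the same battery past two years, which is why the radio, not the processor, usually dominates the budget.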

In industrial and mission-critical environments especially, the inherent latency in connecting to the cloud renders such a centralized model inadequate, even unsafe. Adjacent technologies in peer-to-peer energy transmission, storage, data compression, and potentially distributed ledger architectures will influence performance by enabling more seamless integration between physical and digital events such as transactions, energy distribution, and authentication.

This level of data volume and performance demands sophisticated data management techniques at every part of the stack, even within ‘edge’ devices. Geospatial applications, for example, are no stranger to the demands physical conditions place on computing. In technology originally developed in collaboration with NASA, Hexagon offers a single photon LiDAR product that enables ten times more efficient data processing (more than 1 TB per hour) for airborne applications over any sort of terrain, day or night (Hexagon Geosystems Citation2017a). Additional onboard sensors support faster compute by compressing giant data-sets during flight, so that performance is unaffected and data are quickly transferred to the cloud once grounded.
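Simple arithmetic shows why such onboard compression is unavoidable rather than optional: the sustained rate of a 1 TB-per-hour stream dwarfs typical uplinks. The link speeds in this sketch are illustrative assumptions, not measurements.

```python
# Why onboard compression matters: the cited single photon LiDAR stream
# (>1 TB/hour) versus assumed uplink speeds. Link rates are illustrative.

STREAM_TB_PER_HOUR = 1.0
links_mbps = {"satellite uplink": 25, "LTE": 100, "dedicated microwave": 1000}

stream_mbps = STREAM_TB_PER_HOUR * 8e6 / 3600   # 1 TB/h in megabits per second
print(f"sensor stream: ~{stream_mbps:.0f} Mbps sustained")

for name, rate in links_mbps.items():
    backlog = stream_mbps / rate
    print(f"{name}: stream is {backlog:.0f}x link capacity"
          if backlog > 1 else f"{name}: link keeps up")
```

Even the fastest assumed link falls behind by more than a factor of two, so the only workable designs process and compress in flight and bulk-transfer after landing, exactly as described above.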

2.4. Security and reliability requirements

Enabling perceptality in industry requires prioritizing security and reliability: securing assets, infrastructure, and people, and ensuring reliable workflows, to the greatest extent possible. Doing so is foundational to delivering quality of service, supporting certain economic channels, and, most importantly, instilling safety and confidence in the systems themselves.

Edge computing also impacts security and, in some applications, privacy. For one, the decentralized nature of edge networks reduces reliance on the cloud or central premises as a core ‘centralized’ computing environment. In many instances, such ‘hub and spoke’ models are more vulnerable to bottleneck or failure than are distributed ones, where no single node will necessarily take down the entire network. Secondly, when data are encrypted on the device and move closer to the core, security points, firewalls, or other checkpoints can identify tampering more quickly. Finally, in some instances, such as in a smart city context, keeping sensitive data on devices altogether reduces the likelihood that malicious actors will penetrate it, since it is generally far easier to penetrate centralized enterprise IT systems (via malware or phishing, for example) than edge nodes scattered throughout an environment.
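A minimal sketch of device-side encryption illustrates the second point, using the widely available Python `cryptography` package. The payload fields and node ID are invented for illustration, and a real deployment would provision and rotate keys through a proper key-management scheme rather than generating them in place.

```python
# Minimal sketch: encrypt a reading on the device before it ever leaves,
# so tampering in transit is detectable at the core. Payload fields are
# invented; key distribution is out of scope here.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, provisioned per device
device_cipher = Fernet(key)

reading = {"node_id": "edge-017", "tilt_mrad": 1.42, "ts": 1496160000}
token = device_cipher.encrypt(json.dumps(reading).encode())

# At the gateway/core, the same key verifies integrity and decrypts;
# a modified token raises cryptography.fernet.InvalidToken.
recovered = json.loads(Fernet(key).decrypt(token))
assert recovered == reading
```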

Even when assets face minimal security threat, reliability is key. Certain remote environments will inevitably have poor connectivity, others highly uncertain conditions, and still others performance requirements with life-or-death consequences. Risking latency to the cloud is not an option. In geospatial applications, it is imperative not only to capture data from the edge, but also to extract value from those data to function with precision, to monitor safety in spatial layouts, or to alert users of hazards.

2.5. From data collection to intelligence and decision-making

As the digitization of society and industry generates unfathomable amounts of data, pressures abound to actually use these data. The integration of digital and physical is not only for accelerating and automating real-time applications, but for decision-making and improvement over time. Despite so much data, IDC and others estimate some 80–90% of enterprise data is ‘dark’ data – i.e. data that organizations collect, process, and store, but never actually use (Technopedia Citation2017). The push to capitalize on what is today (mostly) underleveraged data is one of the biggest reasons applications are demanding intelligence at the edge. After all, investments made to digitalize processes and equipment require business justification.

Increasing performance and analytics at the edge, instead of constantly using resources to communicate data back to the cloud, has a number of subsequent benefits, as depicted in Figure 4.

Figure 4. Business benefits of edge computing.


This confluence – massive volumes and variety of data, the imperative for real-time, agile, and sustainable processing, and the deep need to actually leverage these data – signals yet another twist in the evolution of network topology: real-time data processing and service execution will reside at the ‘edge’ – that is, on the device – while advanced machine intelligence, learning, and longer term service innovation will develop in the cloud.
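That division of labor can be sketched in a few lines of Python. In this toy model, assumed purely for illustration, the edge node applies a cheap rule in real time while the ‘cloud’ periodically refits that rule from accumulated history; the threshold update stands in for the far richer learning a real system would perform.

```python
# Sketch of the edge/cloud split: the edge executes a cheap, real-time
# rule; the cloud periodically refits that rule from history.
# Thresholding logic and numbers are illustrative assumptions.
from statistics import mean, stdev

def cloud_refit(history: list[float]) -> float:
    """'Cloud' side: learn a new rule offline (here, mean + 3 sigma)."""
    return mean(history) + 3 * stdev(history)

class EdgeNode:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.buffer: list[float] = []

    def on_reading(self, value: float) -> bool:
        """Real-time path: act immediately, keep a copy for later upload."""
        self.buffer.append(value)
        return value > self.threshold          # e.g. trigger an alert

    def sync_with_cloud(self) -> None:
        """Batch path: ship history up, pull a refitted rule down."""
        self.threshold = cloud_refit(self.buffer)
        self.buffer.clear()

node = EdgeNode(threshold=10.0)
for v in [9.1, 9.4, 9.3, 12.7, 9.2, 9.5]:
    if node.on_reading(v):
        print(f"edge alert on {v}")            # fires locally, no round trip
node.sync_with_cloud()                          # slower learning loop
print(f"refitted threshold: {node.threshold:.2f}")
```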

3. Myriad technological advancements accelerate shift to ‘capture reality’

It is not just broader trends around data volumes, processing, and utilization that are fostering the sea change in network topology; the diverse and rapid pace of technological innovation is accelerating the shift toward edge processing as well. Indeed, more than in any single technology, it is in the inevitable confluence of the following that the greatest prospect for disruption lies.

3.1. Capturing things through sensors and IoT

When we add sensors to something, we grant that ‘thing’ – object, vehicle, machine, infrastructure, any ‘thing’ – the ability to communicate about itself, and very often about its patterns of use or its users. For years, organizations have been placing sensors on heavy machinery and vehicles, but within the last five years, sharp declines in cost and significant improvements in connectivity have ushered in a new era of pervasive sensor application. Ubiquitous sensors, connectivity, and networked services, often collectively coined the ‘IoT’, are redefining business and society’s visibility into, and therefore understanding of, the physical world.

3.2. Application of sensors on any and every ‘thing’ – giving our world a digital nervous system

IoT automates operations by automatically gathering information about physical assets such as devices, machines, equipment, vehicles, infrastructure, facilities, and so on. Visibility into status and behaviors enables optimization of control, processes, and resources. Sensors enable devices to capture the physical reality of things through a wide range of functions. Commonly used sensors, often simultaneously applied to the same object, are depicted in Figure 5.

Figure 5. Sensors digitally capture reality of physical objects.


The modern smartphone, for example, has between 8 and 11 sensors, capturing everything from where devices and their users go (GPS), to when devices are held to the ear (proximity sensor), to identifiable biometrics (fingerprint), to how fast the phone is moving (accelerometer). However, these sensors are not so new. Hexagon’s geospatial applications have used sensors in conjunction with technologies like LiDAR, radar, GNSS, inertial navigation systems (INS), and simultaneous localization and mapping (SLAM) systems for years. What’s new is the application of these sensors and systems to enable new commercial categories like self-driving vehicles or robotics.
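Fusing such sensors is itself a well-established technique. The sketch below shows a classic complementary filter, one standard way to combine a gyroscope (responsive but drifting) with an accelerometer (noisy but drift-free) into a stable tilt estimate; the blending constant, sample rate, and input trace are illustrative assumptions.

```python
# A classic complementary filter: fuse a gyroscope (fast but drifting)
# with an accelerometer (noisy but drift-free) into a stable tilt
# estimate. Sample data and alpha are illustrative.
import math

ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term
DT = 0.01      # 100 Hz sample period, seconds

def fuse(angle: float, gyro_rate: float, ax: float, az: float) -> float:
    accel_angle = math.degrees(math.atan2(ax, az))   # gravity-referenced tilt
    return ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle

angle = 0.0
for gyro_rate, ax, az in [(0.5, 0.02, 0.99)] * 100:  # fake 1-second trace
    angle = fuse(angle, gyro_rate, ax, az)
print(f"fused tilt estimate: {angle:.2f} degrees")
```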

At present, there are an estimated 17.6 billion connected devices online (IHS Citation2017). In 2016 alone, some 4 billion more connected devices came online. That number is forecast to reach between 20 and 30 billion within just four years (IEEE Spectrum Citation2017). Manufacturers are adding sensing technology to everything from toys to turbines, cows to coffee makers, and all manner of machinery, appliances, wearables, and far beyond. We are laying the foundation for ubiquitous sensing networks: the interoperation and crowdsourcing of vast networks of professional and non-professional sensors measuring all manner of dimensions, any place and any time.

The rise of sensing technology is significant, not only for the visibility, reality-capture, and new services it enables, but for the massive amounts of data sensors will generate. Consider one example from one connected object in one industry: a single autonomous car will generate 1 GB of data per second; an estimated 2 petabytes of data per car per year (Datafloq Citation2017). The future of geospatial intelligence is about leveraging these data to perceive, compute, analyze, collaborate, learn from, and shape real change.
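A quick back-of-envelope reading shows what those two cited figures jointly imply:

```python
# Quick consistency check on the cited figures: 1 GB/s of generation
# against ~2 PB/car/year implies how many hours of driving per day?
GB_PER_SECOND = 1
PB_PER_YEAR = 2

seconds_of_driving = PB_PER_YEAR * 1e6 / GB_PER_SECOND   # 1 PB = 1e6 GB
hours_per_day = seconds_of_driving / 3600 / 365
print(f"~{hours_per_day:.1f} hours of driving per day")   # ~1.5 h/day
```

Roughly an hour and a half of daily driving already yields petabytes per vehicle per year, which is why fleets of such vehicles cannot simply stream everything to the cloud.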

3.3. Capturing perception through machine learning and computer vision

Thanks to recent breakthroughs in hardware speed and significant improvements in algorithms, machine learning and artificial intelligence – fields that have existed (primarily in academia) for decades, but were handicapped by inadequate compute power and oversold expectations – are suddenly undergoing a rapid resurgence.

Artificial intelligence is an umbrella term for a range of algorithmically trained, perception-capable computing models, including machine learning, computer vision, natural language processing, deep learning, robotics, planning, and beyond. Advancements in augmented and virtual realities, wherein media can be contextually overlaid on physical or virtual spaces, will also accelerate demand for automated geospatial intelligence in real time. While plenty of hardware and equipment like cameras, LiDARs, radars, satellites, and countless other instruments used for spatial measurement have been around for years, it is the advent of artificial intelligence (AI) and algorithmically trained learning software that extends understanding of the physical world from humans alone to machines.

3.4. Put simply – machines can now perceive spatial reality on their own

Advanced algorithms that can detect patterns, learn from them, and recommend outcomes are responsible for hundreds of new use cases. What follows is a list of applications that are transforming computational capabilities for perceiving the physical world in diverse ways.

Satellite imagery for geoanalytics

Object detection, navigation, and search

Localization and mapping

Motion detection

Weather forecasting

Sensor data fusion in machinery

Collaborative robots

Such use cases emerge when advanced algorithms are trained to detect, classify, and navigate objects, features, and patterns. For many applications, rich data generation requires intelligence at the edge. Through its work in industrial environments, Hexagon has led the development of advanced edge-enabled equipment, not just for onboard data processing, but for learning and adaptation as well.

For instance, its ‘self-learning’ total stations (shown in Figure 6) can be used in even the harshest environments to automatically adapt to local situations by separating relevant system reflectors from other reflectors on the job site. The robotic total station automatically searches for, aims at, and follows targets, collects measurements, and stakes out and defines areas of hundreds or thousands of points. The software on these devices turns complex data into workable 3D models right from the field, while working in conjunction with cloud-based software for deeper data mining and modeling back in the office.

Figure 6. Hexagon’s Leica Nova: a self-learning total station automatically detects, measures, and models environments from the edge.


Hexagon’s IMAGINE Photogrammetry product is used in numerous geospatial applications for real-time object recognition and machine vision. For instance, simple applications like scanning floorplans help quickly delineate boundaries with extreme precision. The same distributed processing and onboard machine vision are also used in more complex applications like filtering moving objects in a street-view scene for navigation, safety, and autonomous decision-making.

When machines themselves are suddenly able to autonomously perceive the world around them, cloud connectivity – with its costs, latency, distance, and unreliability – no longer suffices. Referring back to our example of a self-driving vehicle, driving, navigation, and object-detection data must be processed locally, as even the tiniest amount of latency can be a matter of life or death. This reiterates another important driver of edge computing: mission-criticality.
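The latency point can be made concrete with one line of arithmetic: the distance a vehicle travels while waiting on a round trip to a remote server. The speeds and round-trip times below are illustrative assumptions.

```python
# Why 'the tiniest amount of latency can be a matter of life or death':
# distance a vehicle covers while waiting on a cloud round trip.
# Speeds and latencies are illustrative assumptions.
SPEED_KMH = 110                       # highway speed
for rtt_ms in (5, 50, 200):           # on-board vs. regional vs. congested cloud
    blind_m = SPEED_KMH / 3.6 * rtt_ms / 1000
    print(f"{rtt_ms:4d} ms round trip -> {blind_m:5.2f} m traveled blind")
```

At highway speed, a 200 ms detour through the cloud means more than six meters traveled before a decision returns; onboard processing keeps that blind distance to centimeters.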

3.5. Capturing intelligence through big data learning and synchronization

Although capturing physical objects through sensing technology and machine perception demands computational agility and reliability at the edge, a far greater driver of this technological decentralization is making sense of data at scale. Not only do the volume, velocity, and variety of data demand more real-time and agile processing; so too does the need to learn from them.

Machine learning is the catalyst for harnessing the current and oncoming onslaught of data from the digitization of machines and the physical world. Put simply, if machine learning did not exist, we would have to create it.

Sophisticated modeling techniques such as sensor data fusion, situational forecasting, behavior and scenario simulation, and autonomous agent-based decision-making are just a few examples of how constructs like deep learning and neural network architectures are helping enterprises (a minimal anomaly-detection sketch follows this list):

Harness their ‘dark’ data

Process unstructured data

Learn from their data

Better analyze their data in conjunction with other sources (e.g. third-party or disparate sensors)

Predict anomalies, malfunction, corruption, even security threats

Simulate outcomes without risk

Detect patterns, interdependencies, relationships beyond human mental bandwidth (or bias)
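As promised above, here is a minimal sketch of the anomaly-prediction item: a rolling z-score detector over a vibration feed. The window size, threshold, and injected fault are illustrative; production systems would use learned models rather than this simple statistic.

```python
# Minimal sketch of anomaly detection: a rolling z-score detector over a
# sensor feed. Window size, threshold, and data are illustrative.
from collections import deque
from statistics import mean, stdev

def detect(stream, window=20, z_max=3.0):
    history = deque(maxlen=window)
    for t, x in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > z_max:
                yield t, x
        history.append(x)

vibration = [1.0 + 0.01 * (i % 5) for i in range(60)]
vibration[45] = 2.4                              # injected fault
for t, x in detect(vibration):
    print(f"anomaly at t={t}: {x}")
```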

When combined with high-accuracy sensing for positioning intelligence, dynamic situational awareness, and multi-data-set contextual insights, unprecedented mobility solutions emerge. Hexagon’s work in industries like mining and agriculture has led to the development of edge intelligence capable of autonomously grading terrain and integrating this information into real-time workflows as well as planning and resource allocation. Such machine-level vision is foundational to fully autonomous mining or farming operations.

The gravitational pull of data processing at the edge means the edge takes on a share of intelligence all its own. With the cloud serving as the training center, handling deeper, ongoing learning, simulation, and recommendation, the edge point of access (the device) becomes the object of improvement and update. These updates could come in the form of software updates or patches delivering better sensing, smarter data curation, more accurate inferences, and more automated actions and decision-making.

As depicted in Figure 7, the feedback loop between cloud and device improves performance, reliability, and data-processing efficiency over time. Through sensors that capture the world as-is or as-built and software that interprets the captured data, organizations are able to better manage real conditions and take immediate action. With accurate and up-to-date digital depictions of what’s going on in the real world, they are able to derive insight, ask relevant questions, and manage extensive and complex enterprise-wide information. Reducing the time between information extraction and data-informed action is the foundation for shaping smart change.

Figure 7. Cloud-edge feedback loop.


4. Advanced industrial applications illustrate imperative and opportunity

The accelerated and intelligent convergence of digital and physical reality capture – perceptality – will be empowered by distributed computing at the ‘edge’ and value-adding intelligence in the cloud. For many industrial contexts, edge computing is the enabler of reliability, risk mitigation, situational awareness, and synchronicity across the production chain. Hexagon’s legacy in geospatial awareness, metrology, and data management in the real world has afforded us the expertise and opportunity to orchestrate connectivity, data fusion, and reliable process automation across incredibly complex industrial environments. What follows are three case examples of environments in which edge networking is transforming capabilities for Hexagon customers.

4.1. Smart digital mine

Mining is an ancient industry, essential for extracting minerals and other geological materials and resources from the earth to sustain populations and infrastructure. However, in an environment handling precious natural resources, it has never been more important to channel (big and small) data in a way that maximizes their usefulness in real time, when and where they are needed.

Consider the diverse endpoints in a mining environment:

Inside mine and outside mine

Assets like stockpiles, instruments, and other equipment

Mobile devices, tablets, workstations, etc.

Vehicles such as trucks and tractors

Satellites, cameras, and antennae

Workers on-site and off

Conditions (e.g. roads, weather)

All workflows, communications, connectivity, analytics, interoperability, etc.

4.1.1. Challenge: lack of integration and connectivity stifles digging deep

Extractive industries such as mining generate enormous amounts of geospatial data, but operations are often remote, and limited access to these data can hinder analysis and action. The challenge in mining is thus one of managing complex engineering information and constantly changing, sometimes unpredictable landscapes, and of coordinating information across disparate stakeholders and environments. Given the diversity of mining operations, the industry has traditionally had to rely on an array of point solutions as ‘patches’ to disparate problems. When there is one solution for blast management, another for fleet management, and another for environmental monitoring, not only is the circulation of information fragmented, but the hidden relationships and insights these data hold are lost because data and capabilities are not integrated.

4.1.2. Solution: mining for insights while keeping data on the surface

The solution to more productive, safe, and intelligent mining is not just about integration, but about powering nodes ‘at the edge’ to reliably and efficiently transmit these data. It is critical that stakeholders have the power and mobility to search and analyze these data from any application, even in a disconnected mode. Our experience in this industry finds that what mines really need is an integrated, life-of-mine solution uniting surveying, design, fleet management, production optimization, and collision avoidance – one that connects people and processes and augments safety, productivity, and decision-making.

Forward-thinking mining companies are using powerful positioning intelligence technologies such as GNSS, LiDAR, antennae, satellites, GIS data, image detection, and navigation software to capture and map real-time environmental dynamics. Remote sensing monitors and communicates about difficult- or dangerous-to-reach areas, determining surface features, vegetation variation, and changes in infrastructure, even pinpointing the location of mineral outcroppings or suspect disturbances. Our fatigue monitoring system is an operator-friendly, unobtrusive monitoring and alert system that uses onboard algorithms to assess current and eventual driver fatigue levels to improve driver safety, prevent vehicle incidents, and improve mining productivity (Hexagon Geosystems Citation2017b). In mining applications relying on satellite data, GNSS systems are preconfigured to constantly select the most appropriate positioning methods depending on which satellite and communication constellations are most readily available in the area of operation.
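The selection logic just described might be sketched as a simple fallback chain. The mode names, satellite counts, and accuracy figures below are illustrative of GNSS practice generally, not the actual configuration of any Hexagon product.

```python
# Hypothetical sketch of GNSS positioning-mode fallback: pick the best
# mode the currently visible constellations and links can support.
# Mode requirements and accuracies are illustrative, not a real API.
MODES = [  # (name, min satellites, needs correction link, approx accuracy)
    ("RTK fixed",  6, True,  "~2 cm"),
    ("DGNSS",      5, True,  "~0.5 m"),
    ("standalone", 4, False, "~3 m"),
]

def select_mode(visible_sats: int, correction_link_up: bool) -> str:
    for name, min_sats, needs_link, accuracy in MODES:
        if visible_sats >= min_sats and (correction_link_up or not needs_link):
            return f"{name} ({accuracy})"
    return "no fix"

print(select_mode(visible_sats=9, correction_link_up=True))    # RTK fixed (~2 cm)
print(select_mode(visible_sats=9, correction_link_up=False))   # standalone (~3 m)
print(select_mode(visible_sats=3, correction_link_up=True))    # no fix
```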

Generating actionable reports from these data is also central. Through integration, advanced modeling, and powerful 3D visualization tools, these same technologies help miners ‘see’ across large areas. For example, through repeated intervals of multi-spectral imagery, miners can measure changes to entire pits, quantify and monitor stockpiles and the number of hectares disturbed, and even manage scheduled contracts. Combining image, sensor, and mapping data, these systems create land use or land cover maps, help ensure compliance with governmental standards, and help lower fees on disturbance. They aid in operational and environmental safety and in more agile coordination across teams, ensuring mine planning and operations stakeholders have access to the latest ‘full-picture’ data integrated across endpoints and workflows.

4.1.3. Impact: unearthing data’s potential

Broadly speaking, these technologies impact miners by:

Integrating and communicating what and where change has occurred

Producing maps and reports for collaboration as well as compliance

Integrating data into advanced modeling systems for mine planning and operations

Determining any long-term effects to environment

Optimizing costs and efficiency through workflow optimization, risk mitigation, and maximum output

4.2. Smart digital construction project

Construction is the process of building infrastructure. While construction companies are accelerating their adoption of information technologies to help manage the complexities inherent in their work, many tools remain in disparate silos, and digitalization efforts have often proven more disruptive than helpful or cost-effective.

4.2.1. Challenge: laying the groundwork for coordinated construction

Mid-size and large-scale construction and infrastructure projects are some of the most complex endeavors to coordinate. The largest projects often require years of planning and execution and involve thousands of people, tens of thousands of interdependent tasks, and millions, if not billions, of dollars of investment. Regardless of size, every construction project is unique and demands extensive orchestration, beginning with strategic design, preparing the construction plan, budgeting, communicating detailed instructions, tracking variances, and coordinating teams and workflows. Plans for successful outcomes are often compromised by lack of clear scope, incomplete designs, and data-entry errors in estimating and scheduling, never mind the realities of unexpected events or limitations. These projects are almost always subject to delays and cost overruns that erode both profitability and reputation.

4.2.2. Solution: clarity, connectivity, and simplicity from the ground up

Any modern construction site and its output must be designed to function as a large-scale information system. As such, edge computation is a requirement for agile connection, real-time service, data optimization, and application intelligence, as well as security and privacy protection. Today, the intelligent construction site is about coordinating endpoints from the ground up, from day one.

Numerous Hexagon clients leverage our expertise and SMARTBuild solution to connect all relevant project information – from CAD drawings, 3D models, specifications, schedules, materials, workflows, and instructions, to devices, machinery, and so on – to the thousands of people employed on the project and the millions of tasks to manage. Advanced geospatial techniques sometimes offer new solutions to old problems: in one customer example, construction engineers used high-accuracy camera data to position drilling machines on both sides of a mountain so that the tunnels from the two sides met in the middle with centimeter accuracy. By feeding construction models and layout points to robotic total stations, builders can streamline the process from planning to execution, quickly and accurately locating the building elements they need (a minimal staking computation is sketched after Figure 8). Communication ‘at the edge’ between devices, machines, people, and processes is essential for identifying issues or anomalies in order to take corrective action before costlier problems arise. Consider the diverse endpoints in a construction environment, some of which are depicted in Figure 8.

On-site and outside-of-site

Assets like building materials, metrology instruments, and other equipment

Mobile devices, tablets, workstations, etc.

Vehicles such as trucks, cranes, loaders, and bulldozers

Scanners, robotic total stations, satellites, cameras, and antennae

Workers on-site and off

Conditions (e.g. ground, roads, and weather)

All workflows, communications, connectivity, analytics, interoperability, etc.

Figure 8. Smart construction.

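As referenced above, the layout-point workflow reduces, at its core, to simple geometry: converting a design point’s grid coordinates into the angle and distance an instrument must turn and measure. The coordinates in this sketch are invented for illustration.

```python
# Minimal sketch of the layout-point workflow: convert a design point's
# grid coordinates into the turn angle and distance a total station
# needs to stake it. Coordinates are invented for illustration.
import math

def stake_out(station_e, station_n, point_e, point_n):
    """Return (azimuth in degrees from grid north, horizontal distance)."""
    d_e, d_n = point_e - station_e, point_n - station_n
    azimuth = math.degrees(math.atan2(d_e, d_n)) % 360   # clockwise from north
    return azimuth, math.hypot(d_e, d_n)

# Instrument set up on a known control point; one column base from the model.
az, dist = stake_out(1000.000, 5000.000, 1012.345, 5020.210)
print(f"turn to azimuth {az:.4f} deg, measure {dist:.3f} m")
```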

Capturing and intuitively visualizing these data are key to empowering role-based, BIM-compliant information-sharing to estimate, model, and track actual costs and specs against RFIs and changes in order to mitigate risks. Project engineers can rely on connectivity and readily access data collected from every part of the job site; stakeholders and executives benefit from a centralized repository of designs, models, documents, and other materials, making it easier to manage projects in progress, avoid errors, and improve safety, efficiency, and profit margins. Ultimately, what begins as a digital construction project, enabled by agile information flow, lays the foundation for its output: a digital asset that is an intelligent building.

4.2.3. Impact: smart building enables smarter buildings

Our experience finds that a solution to power this level of coordination augments efficiency and improves profit margins in the following ways:

A single solution for real-time tracking, managing, and reporting of time, machinery, materials, workers, performance, progress, forecasts, budgets, etc.

Less or no need for complicated and expensive software and plug-ins, thanks to an end-to-end construction management solution incorporating all endpoints, edge devices, and workflows

Simplified creation and management of workflows suitable for any project environment, including BIM-compliant projects

Fewer errors in the field and less costly rework, thanks to integrated workplans able to deliver detailed directions on work execution

4.3. The smart digital plant

Plants are often legacy infrastructure, requiring extensive re-engineering and rehabilitation and carrying high financial and safety risks to maintain. The challenge is not just digitizing every element of plant assets and facilities, but fusing these digital and physical realities to accelerate efficiency and ongoing optimization.

4.3.1. Challenge: disorganized legacy information hinders digital transformation

Before addressing the challenges associated with connectivity and coordination across assets, many plant operators must address core issues of legacy information management. Data and documents may have been created across decades of the facility’s life cycle, sourced from various contractors using different design and data-management tools and standards. Some documents may exist only in hardcopy. And there may be dozens of versions (or even multiple copies of the same version) of any given document, drawing, model, list, or datasheet in various locations, making it unclear which version accurately represents the current configuration. With limited engineering, administrative, and IT personnel on a brownfield site, organizing and keeping track of this legacy information is a significant challenge, especially when the operational asset is subject to continual updates, revamps, shutdowns, and maintenance changes.

As a result, information is difficult to find when it is needed most, such as for shutdown planning, project evaluation, incident investigation, modifications, revamps, compliance audits, and facility start-up. Unstructured, unreliable information undermines control and exposes the owner operator to significant operational, financial, and safety risks.

4.3.2. Solution: take control by structuring unstructured data

What many industrial plant operators need is a solution to find, capture, organize, link, and visualize large volumes of engineering data and documents. Although information management has been around for years, compiling these assets together through a data fusion program allows plants to actually use technical information and make meaning from unstructured data, so they can start to manage it, and consequently their operations, more efficiently. Hexagon’s SmartPlant Fusion solution employs dedicated readers for database processing and optical character recognition (OCR) in order to produce searchable platforms of drawings, documents, 3D models, laser scans, and other essential tools for capturing the spatial realities of plant environments. Part of this includes integrated GEO scanning software, which creates realistic 3D spatial representations of every asset and facility that are then linked to their respective drawings, specs, datasheets, and operating manuals. This creates unique virtual digital assets that replicate the physical reality in the field.
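The linking step can be illustrated with a toy cross-referencing pass over OCR output. The tag pattern, file names, and text below are invented for illustration; SmartPlant Fusion’s actual readers, formats, and data model are not shown here.

```python
# Hedged sketch of the linking step: after OCR, extract equipment tag
# numbers from document text and build a tag-to-documents index.
# Tag pattern and inputs are invented for illustration.
import re
from collections import defaultdict

TAG = re.compile(r"\b\d{2}-[A-Z]{1,3}-\d{3,5}\b")   # e.g. 10-PU-1234

documents = {
    "P&ID-0042.pdf": "Pump 10-PU-1234 discharges to vessel 10-V-210 ...",
    "datasheet-077.pdf": "Equipment: 10-PU-1234. Design pressure ...",
}

index: dict[str, list[str]] = defaultdict(list)
for doc, ocr_text in documents.items():
    for tag in set(TAG.findall(ocr_text)):
        index[tag].append(doc)

for tag, docs in sorted(index.items()):
    print(f"{tag}: linked to {docs}")
```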

Once assets are digitized, sourced, and organized, and organizational intelligence is brought up to speed, plants benefit significantly from the savings that follow. For one, time is saved as engineering information is readily digitized, fabricated to within 3 mm accuracy, and accessible, rather than disparate, detached, and disorganized. A solution with integrated virtual models helps mitigate labor risks associated with relying on the wrong reference information or inadequate training. Most importantly, when all (sometimes decades of information about) assets, sensory information, infrastructure, and workflows are online, plant operators can more efficiently and accurately model their requirements, whether for ongoing decision-making or for organizing documentation for engineering, procurement, construction, and rehabilitation projects.

4.3.3. Impact: from unintelligent and offline to smart plant fusion

Owners and operators gain the visibility and value of digitized workflows; engineers and maintenance personnel working around plants now have quick, ready access to critical information when they need it. This engenders numerous benefits:

Modernizes operations by bringing analog information into real-time, empirical decision-making

Reduces the time and effort required to find and validate engineering documents

Surfaces otherwise hidden (or costly-to-obtain) content and insights

Increases safety and regulatory compliance through timely access to information

Enables off-site access and team collaboration, avoiding travel costs and hazards associated with on-facility work

Provides the fastest way to establish a single point of access to all engineering information

To meet market, productivity, and safety requirements, organizations are using new tools to monitor and detect change and anomaly across just about every aspect of industrial environments, infrastructure, and operations. Environmental and infrastructure awareness is not just about capturing sensor data and imagery, but about feeding these data into programs and workflows for a comprehensive view of what happened and what is happening, where, when, and why, and about triggering or automating actions and reports for optimization.

5. Conclusions

The singularity of perception and reality relies on distributed computing to capture reality and shape intelligent change. A true industry pioneer in the world’s leading geospatial and metrology technologies and concepts, Hexagon supports perception, cognition, computation, control, reaction, and, most importantly, learning from diverse digital perceptions of physical realities – perceptality.

Shaping change in industrial environments is about enabling (digital and workflow) connectivity, integrating tools, automating workflows, coordinating diverse nodes and needs for usable data visualization, and most importantly, transforming organizational currency from raw data to true geospatial intelligence.

Funding

This work was supported by Hexagon AB, a global provider of information technologies for geospatial and industrial enterprises.

Notes on contributors

Juergen Dold is the president of Hexagon Geosystems, and has been a part of Hexagon since 1995. Before his time at Hexagon, he served as an academic counsel at Technical University of Braunschweig, Germany, and in various management positions within Leica Geosystems. He holds a Master of Science and PhD in engineering.

Jessica Groopman is an industry analyst specializing in the Internet of Things and the emerging software intelligence and database architectures that support ubiquitous connectivity. Groopman is a principal analyst with Tractica, where she covers artificial intelligence and blockchain. She has served as a research director and principal analyst with Harbor Research and, before that, as a lead IoT analyst with Altimeter Group. Prior to business and technology research, her career began with academic anthropological fieldwork in ethnographic, linguistic, and archaeological research in the United States and abroad.

Acknowledgments

The authors would like to thank the following persons for their inputs and assistance in the development of this work: Burkhardt Boeckem, John Welter, Kristen Christensen, Patrick Holcomb, Kelli Montgomery, Christopher Fitzgerald, Sophia Lorroque, and Janina Torres.

References