The future of Digital Earth

Pages 93-98 | Received 16 Feb 2012, Accepted 16 Feb 2012, Published online: 03 Apr 2012

Abstract

The concept of Digital Earth as a digital replica of the planet has its roots in science fiction, in the writings of environmental visionaries, and in a speech prepared for delivery by Vice President Gore in 1998. Five use cases are identified that together comprise the vision of Digital Earth: a geoportal, a visualization service, a platform for simulation and prediction, a source of unprecedented spatial and temporal resolution, and a technology fully integrated into human activities. Progress to date is reviewed for each use case, along with the research that would be needed to achieve each aspect of the overall vision.

1. Background

Digital Earth is one of a class of services that we might term Digital X: a digital mirror of some complex system X, embedded in space and time, that includes a detailed representation of X, together with software for acquiring, storing, manipulating, visualizing, and archiving that representation. Digital X can be used for learning about X, for experimenting with it, for accumulating and communicating knowledge about it, and for measuring its response to various scenarios. For example, digital representations of prototype aircraft are now routinely used as part of the development and testing process, modeling the stresses that are imposed on the airframe; and they are routinely used to train pilots and other crew in operating the aircraft and responding to emergencies. Digital cadavers are now routinely used in medical schools, replacing traditional real cadavers in the learning of anatomy, dissection, and other medical procedures.

Digital Earth trumps these examples for sheer size if not for complexity. More compellingly, perhaps, it has the potential to engage significant numbers of the Earth's 7 billion inhabitants in exploring and learning about their planet and its seemingly infinite complexity. And perhaps most compelling is the possibility that experiments carried out on Digital Earth, such as the modeling of climate change, might take place in advance of major human-induced modifications, allowing the impacts of increased atmospheric CO2, for example, to be evaluated before they occur.

A digital world that mirrors the real one has long been the stuff of science fiction. The term Digital Earth can be traced to Gore (1992), who called for an improved mechanism for the dissemination of the growing volumes of information about the state of the Earth's environment. The concept was elaborated in a speech prepared for delivery by Gore at the opening of the California Science Center in early 1998 (portal.opengeospatial.org/files/?artifact_id=6210), which described a vision that went far beyond the brief text in Gore's earlier book. Shortly thereafter, a Digital Earth office was opened within NASA, and prototypes began to appear. In 2004, Google acquired the small company Keyhole and its product EarthViewer, which it enhanced and rebranded as Google Earth. Other comparable products appeared in quick succession, including NASA's own World Wind and Microsoft's Virtual Earth (later Bing Maps).

Gore's 1998 speech offered a complex vision that seemed almost implausible at the time. The Earth's surface measures roughly 500 million km², suggesting that even a highly generalized representation of its complexity would amount to petabytes. The speech describes a 3D virtual reality that was only just beginning to appear feasible in 1998. Yet by 2001 it was possible for a primitive version of Digital Earth to be offered through a standard personal computer using Keyhole's downloadable client. The result, following the release of Google Earth in 2005, was a massive and positive response from the general public and the scientific community: the former finding the prospect of exploring a detailed digital representation of the planet irresistible, and the latter interested in exploiting a new and easy-to-use channel for communicating scientific results to a broad audience.

Computer technology has continued to advance rapidly, as has the supply of digital data about the planet's surface and near-surface. All indications are that this will continue, at ever-accelerating rates, as new space-borne sensors, new ground-based sensor networks, and crowd sourcing provide vast new quantities of data of unprecedented spatial and temporal resolution. There is a need, therefore, to update the Gore vision of Digital Earth, to add clarity to it, and to project it into the second decade of the twenty-first century. Several papers have attempted to do this already (Craglia et al. 2008, 2012). This article takes a somewhat different and novel perspective, focusing on the eventual uses of Digital Earth in an echo of the traditional waterfall method of computer systems planning (Benington 1983) that begins with the identification of use cases. It updates an earlier retrospective application of a similar approach (Goodchild 2008) that was written in response to the evident lack of planning based on use cases in some of the early prototypes.

The next five sections discuss each of the uses of Digital Earth in turn, from least to most visionary: Gore's original mechanism for data dissemination; a visualization and exploration tool; a platform for simulation and forecasting; a representation of unprecedented spatial and temporal resolution; and a world in which digital geographic information and services are integrated into all aspects of human existence.

2. Uses of Digital Earth

2.1. A geoportal

The first of these uses is essentially what Gore had in mind in his 1992 book, a distribution mechanism for geographically referenced information about the planet. Access to such information was problematic at best in 1992. The traditional method of storing and disseminating geographic information was the map library, with its stacks of paper maps and atlases. Digital geographic information was becoming more available, and early efforts were being made to build mechanisms for search, evaluation, and retrieval using various electronic networks, including the Internet, which was rapidly growing in popularity.

While libraries rely on catalogs to support search, the emphasis on author, title, and subject does not adapt well to the needs of geographically based search, which is one reason among many for the traditional separation of map collections from books and journals. Author, title, and subject indices are discrete and sortable, whereas geographic location expressed in latitude and longitude is continuous and multidimensional. Search based on place names is more manageable, but complicated by the essentially hierarchical nature of place names: should a search for information about Santa Barbara, for example, be expanded to the containing county, state, or even nation? In a digital world, however, these problems are surmountable, and the concept of the geolibrary emerged in the late 1990s (National Research Council 1999) to describe an online repository of geographic information that could be searched using a variety of mechanisms. Many web-based geolibraries were established, offering collections defined geographically, by discipline, or by data source. It quickly became apparent, however, that while such geolibraries resolved the problem of how to search for suitable data within a collection, they left open the question of how one would know which collection to search: the so-called collection-level metadata (CLM) problem (Goodchild and Zhou 2003).

The most recent in this series of innovations is the geoportal (Maguire and Longley 2005), a single point of entry to a distributed, searchable resource, comparable to the union catalog of traditional libraries. Catalog entries can be harvested automatically from participating collections or contributed by custodians of data. Access restrictions can be placed at the level of entire collections, or of individual information objects within collections. The geoportal solves the CLM problem to some extent, although, human nature being what it is, there will likely always be more than one geoportal, with fuzzy demarcations between the collections to which they provide access.

The possibility of using a visualization of the Earth as the basis of search provides a strong link between the concepts of geoportal and Digital Earth. Instead of a library card catalog, or its modern equivalent the web-based index, a Digital Earth mechanism uses a 3D visualization of the planet, allowing the user to pan and zoom in search of an area of interest, to identify it, perhaps in the form of a bounding box or place name, and to ask 'What have you got about there?', with additional restrictions on theme, spatial resolution, or time. The availability of data can be represented as icons that appear on the rendering of the Earth, perhaps displaying only those icons representing data whose spatial resolution is consistent with that of the current view of Digital Earth, appearing or disappearing as the user zooms in or out. Such ideas have been implemented in various forms in numerous systems in recent years. Nevertheless, they fall far short of Gore's vision of an integrated mechanism for searching across all that is known about the planet's environment.
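A minimal sketch may make this style of search concrete. The Python fragment below is purely illustrative: the catalog entries, field names, and the factor-of-ten tolerance are all hypothetical, not drawn from any actual geoportal, but they capture the idea of answering 'What have you got about there?' by intersecting footprints with the view and matching dataset resolution to the current zoom.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One hypothetical geoportal record: a dataset footprint plus metadata."""
    name: str
    west: float
    south: float
    east: float
    north: float
    resolution_m: float  # nominal ground resolution in meters

def intersects(e: CatalogEntry, west, south, east, north) -> bool:
    """True if the entry's bounding box overlaps the query box (degrees)."""
    return not (e.east < west or e.west > east or
                e.north < south or e.south > north)

def what_have_you_got(catalog, view_box, view_resolution_m, tolerance=10.0):
    """Entries overlapping the current view whose resolution lies within a
    factor of `tolerance` of the view's own, so that icons appear and
    disappear as the user zooms in and out."""
    west, south, east, north = view_box
    return [e for e in catalog
            if intersects(e, west, south, east, north)
            and view_resolution_m / tolerance <= e.resolution_m <= view_resolution_m * tolerance]

catalog = [
    CatalogEntry("global_landcover", -180.0, -90.0, 180.0, 90.0, 300.0),
    CatalogEntry("santa_barbara_lidar", -120.0, 34.3, -119.5, 34.6, 1.0),
]
# A zoomed-in view at 5 m resolution matches the lidar, not the global layer.
print(what_have_you_got(catalog, (-120.1, 34.2, -119.4, 34.7), 5.0))
```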

2.2. Visualizing the Earth

Much of Gore's 1998 speech was concerned with Digital Earth as a learning tool, allowing an imagined child of the future to explore the Earth's surface from global to local scales, taking 'a magic carpet ride' over places shown as they currently are, as they appeared in the past, and as they might appear in the future. At the time the idea of a simulated flight was still at the cutting edge, in part because of the limitations of graphics display engines in run-of-the-mill computers. Moreover, the generation of such a flight in real time, with its implications for rapid download of information and rapid computation of perspective views, seemed well beyond the capabilities of most personal systems and their Internet bandwidths. Yet these doubts had largely disappeared by 2001, as a result of several developments: significant improvements in the 3D graphics capabilities of personal computers, driven primarily by the gaming industry; steady improvements in broadband network connections to the average household and business; clever algorithms for managing the level of detail in displays; and precomputation of tiles at fixed levels of resolution on the server, to be cached and warped by the client.
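The tiling idea can be sketched in a few lines. The function below assumes the common 'slippy map' Web Mercator scheme used by many web mapping services, in which zoom level z covers the world with 2^z by 2^z precomputed tiles; Google Earth's internal structures differ in detail, so this is an illustration of the general technique rather than of any one product.

```python
import math

def lonlat_to_tile(lon_deg: float, lat_deg: float, zoom: int):
    """Return the (x, y) index of the precomputed tile containing the given
    point at the given zoom level, under the common Web Mercator scheme."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Each extra zoom level quadruples the tile count, so a client downloads
# only the few tiles visible in the current view and caches them locally.
print(lonlat_to_tile(-119.70, 34.42, 12))  # tile indices over Santa Barbara
```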

These developments allowed Google to launch Google Earth in 2005. Users were able to download free client software and use it to access and visualize a vast repository of fine-resolution Earth imagery. Hundreds of millions of downloads followed, as users zoomed down to see houses in great detail, to take Gore's 'magic carpet ride,' and to explore the Earth's surface for interesting and anomalous features. The software made it easy for the user to add his or her own data to the display, and the subsequent publication of the Google Earth application programming interface (API) further accelerated the process of customization, the embedding of Google Earth in other applications, and the creation of interesting mashups.

Google Earth works at this level at least in part because it creates a visual replica of the planet's surface. The resolution varies, and misregistration is a problem for some applications, since it is of course impossible to register any image perfectly to the Earth. Nevertheless, sub-meter resolutions are available over much of the land surface, especially in large cities. The 3D extensions and the integration of Google Earth with SketchUp have encouraged the creation of 3D models of buildings and other structures, largely by volunteers.

Visual exploration is a powerful paradigm for learning about geography, discovering geographic knowledge, and generating hypotheses about geographic processes. The base imagery accessible through Google Earth and similar services is acquired from satellites and aircraft, almost always near nadir, and is capable of offering only a ‘God's eye view’ of the surface. Yet the integration of StreetView with Google Earth now allows anyone to move seamlessly from the sky to ground level, adding enormously to the potential for exploration.

Nevertheless, there are severe limitations to the potential of exploration that is purely visual. Unlike the visual form of the surface, abstract phenomena, such as income, atmospheric temperature, or rates of crime, are not readily visualized. Cartographers have long experimented with ways of conveying such nonvisual information through visual media, using color, cross-hatching, contours, and other devices. But only a few of these are readily implemented in existing Digital Earth services, whether as additional layers of data or as mashups. Visualization of uncertainty is especially problematic, whether it be uncertainty of position or of attributes, that is, of what is present at a given location. Traditional cartography has few recognized means of displaying uncertainty beyond, for example, the dashed lines of uncertain boundaries or seasonal river courses. Although research on the communication of geospatial uncertainty has advanced significantly in recent years, all of the methods require explicit instruction to the user, who is simply not accustomed to expecting uncertainty in maps.
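As a minimal sketch of one such device, the fragment below uses synthetic, purely illustrative data (a hypothetical gridded attribute and its standard error, not real measurements) and one technique from the research literature: mapping the attribute to color while letting uncertainty fade the color toward transparency, so that unreliable cells appear washed out.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# Synthetic data: a gridded attribute (say, a modelled temperature surface)
# and an accompanying standard error for each cell. Illustrative only.
rng = np.random.default_rng(0)
value = rng.normal(15.0, 3.0, size=(20, 20))
stderr = rng.uniform(0.0, 1.0, size=(20, 20))

# Map the attribute to color, then let uncertainty control opacity.
norm = (value - value.min()) / (value.max() - value.min())
rgba = cm.viridis(norm)        # shape (20, 20, 4): RGBA per cell
rgba[..., 3] = 1.0 - stderr    # higher uncertainty, lower opacity

plt.imshow(rgba)
plt.title("Attribute as color, uncertainty as transparency")
plt.show()
```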

2.3. Simulation and forecasting

Previous sections have focused on how the Earth looks, that is, on its form. While historic imagery and maps can be used to create visualizations of past surfaces, and while it is possible to create mashups of simulated futures, at present these are essentially ad hoc and not integrated with the architecture of Digital Earth. Yet one of the strongest arguments for Digital Earth is its envisioned ability to replicate not only the form of the Earth but also its processes, through implementations of models of the various mechanisms that modify the landscape, whether it be in its physical or social dimensions, or both.

A vast number of models of various Earth-related processes have been created in the past few decades, in disciplines ranging from geomorphology to human migration. They vary greatly in their predictive power, in the complexity of the equations and rules that predict the state of the Earth's surface at some future time by incrementing changes from a current state, in the language in which they are written, and in the platform of software, if any, on which they operate. Several efforts have been made to provide catalogs of such models, to integrate them into geoportals along with data (Maguire and Longley 2005), and to define metadata that can support search (Crosier et al. 2003). In general, however, there continues to be a lack of metadata and search mechanisms, despite the fact that models of process are inherently a more advanced form of knowledge about the Earth than observational data, since they must be inferred and calibrated from such data.

Models also differ in the discretization inherent in their procedures. Many models are of the finite-difference form, solving partial differential equations and implementing rules on simple rasters of uniform rectangular or sometimes hexagonal cells. Derivatives can be estimated from the differences observed over a neighborhood of cells. PCRaster is a well-known example: a general set of tools for modeling Earth-surface processes on a simple square raster. At global scales, finite-difference approaches are problematic because it is impossible to tile the Earth's curved surface with a uniform grid. Discrete global grids (Sahr et al. 2003) are hierarchical structures based on systematic subdivision of one of the five Platonic solids, and are widely used in implementations of Digital Earth, but it is impossible for such structures to be geometrically uniform at any level of the hierarchy except the highest.
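A short sketch may fix ideas. The fragment below is illustrative only, not PCRaster's API: it implements a finite-difference diffusion step on a simple square raster, estimating the Laplacian from differences over the four-cell neighborhood, in the spirit of raster-based Earth-surface models.

```python
import numpy as np

def diffuse(z: np.ndarray, k: float = 0.1, steps: int = 100) -> np.ndarray:
    """Finite-difference sketch of a raster-based surface process: each step
    moves a cell's value toward the mean of its four neighbors, a discrete
    diffusion that smooths the surface. Edge cells are held fixed."""
    z = z.astype(float).copy()
    for _ in range(steps):
        # Laplacian estimated from differences over the 4-cell neighborhood
        lap = (z[:-2, 1:-1] + z[2:, 1:-1] + z[1:-1, :-2] + z[1:-1, 2:]
               - 4.0 * z[1:-1, 1:-1])
        z[1:-1, 1:-1] += k * lap  # k < 0.25 keeps the explicit scheme stable
    return z

# A single peak spreads outward over time, as in simple erosion models.
dem = np.zeros((50, 50))
dem[25, 25] = 100.0
print(round(diffuse(dem).max(), 2))
```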

Finite-element methods are more flexible, being based on irregular meshes of triangles and quadrilaterals (Topping et al. 2004). Using them, it would be possible to solve partial differential equations and implement rules on many of the discrete global grids described in the literature, such as the triangle-based structure of Dutton's Quaternary Triangular Mesh (Dutton 1999). In this way, predictive models could be incorporated directly into the architecture of Digital Earth, using the same hierarchical structures that many Digital Earth prototypes use to achieve rapid zoom. Finally, some global models, including many of the global climate models, operate in the spectral rather than the spatial domain, discretizing a harmonic representation of variation over the surface.
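The hierarchical structure itself is simple to sketch. The fragment below is an illustrative reconstruction, not Dutton's published code: it performs quaternary subdivision of a spherical triangle by splitting it at its edge midpoints, projected back onto the unit sphere; applied recursively to the faces of an octahedron, the solid underlying QTM, it yields a QTM-like mesh.

```python
import numpy as np

def midpoint(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Midpoint of two unit vectors, projected back onto the unit sphere."""
    m = (a + b) / 2.0
    return m / np.linalg.norm(m)

def subdivide(tri, depth):
    """Quaternary subdivision: each spherical triangle splits into four
    children at its edge midpoints; recursion yields the hierarchy."""
    if depth == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return [t for child in children for t in subdivide(child, depth - 1)]

# One face of the octahedron, refined three times: 4**3 = 64 cells.
face = (np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0]))
print(len(subdivide(face, 3)))
```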

Modeling processes directly within the architecture of Digital Earth would have several advantages. It would provide a more uniform platform, allowing models to be more interoperable and more easily coupled. It would avoid some of the noise that is introduced by the multiple stages of resampling, upscaling, and downscaling that are required to transfer data from one platform to another; for example, between the platform used for simulation and the Digital Earth platform used for visualization and dissemination. It would also open the potential for direct user involvement in the modeling process. On the other hand, various architectural issues would have to be addressed, including the partitioning of workflow between server and client.

Much of the success of Digital Earth services lies in their ability to engage the general public in exploring, visualizing, and learning about the Earth. Visualizing future scenarios for the planet seems a compelling aspect of that activity, and perhaps even more so if the user is able to have some degree of control over the simulation process, and perhaps even some role in defining the scenarios that are simulated. Direct involvement seems a much better way of engendering trust than if simulations can only be accessed after they have been created by an anonymous scientist in an anonymous laboratory.

From this perspective, Digital Earth can begin to play a vital role in completing the chain of communication that links the scientist to the general public. Services such as Google Earth are popular because they are perceived as free, fast, and fun. If its human occupants deserve to know as much as possible about their planet's future, then providing access to Digital Earth as a platform for simulation, prediction, and evaluation of the effects of planning options seems a logical and compelling step.

2.4. Unprecedented spatial and temporal resolution

When Gore's speech was written, it would have been difficult to imagine the developments in novel and improved forms of geographic data that have occurred in the years since. Sub-meter satellite-based imaging systems were being planned, and the technology to collect street-level views was in place, but developments in sensors, the advent of crowdsourcing, and new systems for tracking have all forced a rethinking of what is possible. It is already possible to imagine a future world in which we will know the locations of everything: every airplane, every person, every vehicle, every farm animal, every tree. Technologies such as radio frequency identification (RFID) and quick response (QR) codes are already moving us strongly in this direction, bringing the vision of an Internet of Things closer to reality, with frightening implications for the future of personal privacy.

Early thinking about geographic information was based on the paper map, a remarkably concise method of compiling large amounts of geographic information. Today, geotagging of photographs, text, scientific observations, and transactions has broadened that conception and disaggregated it so that the emphasis is more and more on the individual event and the atomic observation that something was present at some time at some location. Although time was largely ignored in map-making, because maps of transient phenomena would be out of date almost as soon as they were produced, it is now possible to make maps whose period of use is defined in seconds, at virtually no cost. While paper maps had to have value to large numbers of people over long periods of time to justify the high costs of their production (Goodchild et al. 2007), maps can now appear on the screens of mobile devices, centered on the user's location, and showing information that would be of virtually no interest to anyone else.

Time and the third spatial dimension are now becoming indispensable in geographic information, as wayfinding moves into complex 3D indoor structures, as geospatial technologies are developed to support emergency response, and as geographic information systems (GIS) are adapted to the needs of design and of support for the military. Moreover, there is increasing recognition that much useful geographic information is not so much about what is where and when, but about how places interact, in the form of flows of information, people, and commodities. Binary geographic information is defined as information about the interactions of pairs of places, rather than about places taken one at a time, and includes properties such as travel time, distance, and cost; flows of commuters and migrants; and interactions and communications. Binary information is increasingly abundant because of social networking and tracking, which are producing vast quantities of geographic information about interaction and movement.
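A small sketch illustrates how quickly binary information grows. The fragment below uses a hypothetical list of places and the standard haversine formula to compute one travel-relevant property, great-circle distance, for every pair; the table has one entry per pair, which is the quadratic growth discussed in the next paragraph.

```python
import math
from itertools import combinations

def great_circle_km(p, q, r_km=6371.0):
    """Haversine distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, p + q)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r_km * math.asin(math.sqrt(a))

# Hypothetical places: any n of them generate n * (n - 1) / 2 pairwise
# entries, so the table grows quadratically as places are added.
places = {"Santa Barbara": (34.42, -119.70),
          "Los Angeles": (34.05, -118.24),
          "San Francisco": (37.77, -122.42)}

for (a, pa), (b, pb) in combinations(places.items(), 2):
    print(f"{a} to {b}: {great_circle_km(pa, pb):.0f} km")
```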

This new level of attention to the third spatial dimension and to time vastly increases the volume of geospatial data, as does the move to finer and finer spatial and temporal resolution through the implementation of new sensors. The potential volume of geospatial data is of course infinite, given the infinite complexity of the Earth's surface and near-surface, but even when sampled at points the potential is still massive. When binary geospatial data are added, with n² interactions among n sample points or supports, the true magnitude of the problem becomes apparent. There is no doubt that geospatial data have the potential to swamp all other forms of information, given sufficient demand and sufficient advances in data acquisition technologies. Clearly Gore's original vision of a Digital Earth 'into which we can embed vast quantities of' geographic information is a real prospect, if the means can be found to integrate those vast quantities into a single, integrated whole.

2.5. Fully integrated geospatial technologies

Each of the previous four sections has discussed Digital Earth as something apart, a mirror world that provides a virtual replica of the real thing. But life on Earth is becoming increasingly digital, and almost all forms of communication now rely on digital technologies. Today it is hard to imagine life without search engines, smart phones, and tablet computers, let alone earlier digital technologies such as fax machines and pagers. Rather than think of Digital Earth as something apart, therefore, this section speculates on a Digital Earth that is fully integrated into life itself, and suggests that in the future Digital Earth may become so integrated as to be almost invisible.

Many of the conscious acts that once accompanied human engagement with geospatial technologies are already almost invisible and unconscious. While it was once necessary to field teams of map-makers, or to launch remote-sensing satellites, to obtain geospatial data, today's digital street networks are constantly updated and improved through the invisible acquisition of tracks from drivers, vehicles, and people carrying mobile phones. Any issues of privacy that such practices raise are hidden in the fine print of contracts and terms of use. Whereas it was once necessary to stop at a gas station to acquire a map, many online services now automatically provide appropriately centered and scaled maps to accompany the results of searches for hotels, restaurants, or businesses. Services such as Google Latitude or Foursquare provide real-time maps of the locations of friends. In all these examples, we humans are already effectively integrated into the Digital Earth.

Still missing, however, is the sense of integration that is implied by the Gore speech. Integration of services based on geospatial data is already well advanced within corporations such as Google, with its constantly updated information about points of interest, street networks, base imagery, and individual locations. Across corporations there is little if any integration, of course, and there is little also between corporations and the government agencies responsible for geospatial data production. It is still far easier to acquire one type of information about all places – information about soils over the United States, for example – than to discover all types of information about one place. Although the latter kind of integration has always been a claim of GIS from its earliest days, it remains remarkably difficult even with today's advanced versions of GIS technology. Conceptually, our view of Digital Earth remains remarkably layer based, as do the institutional arrangements by which geospatial data are acquired and disseminated.

What Gore clearly had in mind was a single geoportal that would present an integrated view of everything that is known about the planet's surface and near-surface, along with views of the past and predictions of the future. It would provide both vertical and horizontal context: everything that is known about a location, and everything that is known about nearby locations. It would have immense value in many aspects of human activity, from emergency response to learning and environmental science. It would provide a communication channel through which evolving scientific knowledge about the planet's current state and projected future could be communicated to the general public. This kind of integration is what the Open Geospatial Consortium has been driving toward for almost two decades, through its efforts to develop standards and specifications for interoperability. What Gore could not envision, from the perspective of 1998, was the kind of integration of Digital Earth into daily life that is now exemplified by some of our more advanced technologies.

3. Conclusion

Focusing on use cases provides a very effective basis for thinking about the design of complex systems. Five use cases have been identified in this article, ranging from data dissemination to the full integration of Digital Earth into everyday life. Against these five, it is possible to evaluate the current state of Digital Earth, and the advances that will be needed to achieve an updated vision in the coming years.

Geoportal technology has indeed advanced dramatically over the past decade and now provides an effective mechanism for sharing information about the planet. It is still weak in the semantics of search, because of the lack of well-defined terms to describe the various themes of geospatial data, and because of weaknesses in our current metadata standards. Moreover, the collection-level metadata problem remains, because geoportals have proliferated with little clarity in their respective objectives or contents. The dream of a single geoportal remains beyond our grasp, and its achievement will rely on improved understanding of the semantics of search and improved approaches to metadata.

Achievements to date are perhaps strongest in the second area, visualization. Current Digital Earth services are inherently visual, however, and are therefore limited in their ability to visualize phenomena that are more abstract. Much research will be needed to solve the problem of visualizing uncertainty, especially if Digital Earth is to advance in its ability to present the results of simulations and predictions, all of which must carry indicators of uncertainty if they are to be honest and informative.

Current Digital Earth services have been widely exploited to display the results of simulations and predictions, through mashups of user-provided data. Concerns about uncertainties aside, these developments have clearly helped to open channels of communication between the scientific community and the general public. The integration of simulation models into the internal architecture of these services can provide a much-needed impetus to the interoperability of models; and their integration with geoportals has the potential to improve the process of search and evaluation.

Massive strides have been made in the acquisition of spatiotemporal data, including the third spatial dimension, and in improvements in spatial and temporal resolution. These developments are likely to continue, at an accelerating rate, into the future as new satellite-based imaging systems and new sensor networks are implemented, and as crowdsourcing continues to evolve.

The final use case provided a vision of Digital Earth as fully integrated into human activities, less and less intrusive, and providing services that are increasingly invisible. Developments in this area will require a substantial conceptual reorientation away from the practices and institutions of the past. Many of the more exciting technological developments of the past few years have not been driven by demand as much as by the vision of technological innovators. The same seems likely to be true of developments in this final area: no one is calling for Digital Earth services that are fully integrated into human activities, but such services will undoubtedly find enthusiastic market acceptance when they appear.

References

  • Benington, H.D., 1983. Production of large computer programs. IEEE Annals of the History of Computing, 5 (4), 350–361.
  • Craglia, M., et al., 2008. Next-generation Digital Earth: a position paper from the Vespucci Initiative for the Advancement of Geographic Information Science. International Journal of Spatial Data Infrastructure Research, 3, 146–167.
  • Craglia, M., et al., 2012. Digital Earth 2020: towards a vision for the next decade. International Journal of Digital Earth, 5 (1), 4–21.
  • Crosier, S.J., et al., 2003. Developing an infrastructure for sharing environmental models. Environment and Planning B: Planning and Design, 30, 487–501.
  • Dutton, G., 1999. A hierarchical coordinate system for geoprocessing and cartography. Berlin: Springer.
  • Goodchild, M.F., 2008. The use cases of Digital Earth. International Journal of Digital Earth, 1 (1), 31–42.
  • Goodchild, M.F., Fu, P., and Rich, P., 2007. Sharing geographic information: an assessment of the Geospatial One-Stop. Annals of the Association of American Geographers, 97 (2), 249–265.
  • Goodchild, M.F. and Zhou, J., 2003. Finding geographic information: collection-level metadata. GeoInformatica, 7 (2), 95–112.
  • Gore, A., 1992. Earth in the balance: ecology and the human spirit. Boston, MA: Houghton Mifflin.
  • Maguire, D.J. and Longley, P.A., 2005. The emergence of geoportals and their role in spatial data infrastructures. Computers, Environment and Urban Systems, 29, 3–14.
  • National Research Council, 1999. Distributed geolibraries: spatial information resources: summary of a workshop. Washington, DC: National Academies Press.
  • Sahr, K., White, D., and Kimerling, A.J., 2003. Geodesic discrete global grid systems. Cartography and Geographic Information Science, 30 (2), 121–134.
  • Topping, B.H.V., et al., 2004. Finite element mesh generation. Stirling, UK: Saxe-Coburg.
