References
- Autodesk 123D Catch. (2013). Retrieved from http://www.123dapp.com/catch.
- Alsadik, B., Gerke, M., & Vosselman, G. (2013). Automated camera network design for 3D modeling of cultural heritage objects. Journal of Cultural Heritage, 14, 515–526.
- Boschker, M. S. J., & Bakker, F. C. (2002). Inexperienced sport climbers might perceive and utilize new opportunities for action by merely observing a model. Perceptual and Motor Skills, 95, 3–9.
- Buchroithner, M. (2002). Creating the virtual Eiger north face. ISPRS Journal of Photogrammetry and Remote Sensing, 57, 114–125.
- Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., & Ranzuglia, G. (2008). MeshLab: An open-source mesh processing tool. In Eurographics Italian Chapter Conference (pp. 129–136).
- CMP SfM - Multi-view reconstruction software. (2012). Retrieved from http://ptak.felk.cvut.cz/sfmservice/websfm.pl?menu=cmpmvs.
- Daiber, F., Kosmalla, F., & Krüger, A. (2013). BouldAR: Using augmented reality to support collaborative boulder training. In CHI '13 Extended Abstracts on Human Factors in Computing Systems (pp. 949–954). New York, NY: ACM.
- Dellaert, F., Seitz, S. M., Thorpe, C. E., & Thrun, S. (2000). Structure from motion without correspondence. In Proceedings of the 2000 IEEE Conference on Computer Vision and Pattern Recognition, 2, 557–564.
- Erickson, M. S., Bauer, J. J., & Hayes, W. C. (2013). The accuracy of photo-based three-dimensional scanning for collision reconstruction using 123D Catch (SAE Technical Paper No. 2013-01-0784). Warrendale, PA: SAE International.
- Furukawa, Y., Curless, B., Seitz, S. M., & Szeliski, R. (2010). Towards internet-scale multi-view stereo. In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1434–1441).
- Furukawa, Y., & Ponce, J. (2010). Accurate, dense, and robust multi-view stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 1362–1376.
- IEEE Spectrum. (2013). SenseFly and Drone Adventures toss UAVs off the summit of Matterhorn. Retrieved from http://spectrum.ieee.org/automaton/robotics/aerial-robots/sensefly-and-drone-adventures-toss-uavs-off-the-summit-of-the-matterhorn.
- Jancosek, M., & Pajdla, T. (2011). Multi-view reconstruction preserving weakly-supported surfaces. In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3121–3128).
- Kazhdan, M., Bolitho, M., & Hoppe, H. (2006). Poisson surface reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing (pp. 61–70). Aire-la-Ville: Eurographics Association.
- Koivusaari web topo. (2014). 27 Crags. Retrieved from http://27crags.com/crags/koivusaari-bloc/topos.
- Kolecka, N. (2011). Photo-based 3D scanning vs. laser scanning – Competitive data acquisition methods for digital terrain modelling of steep mountain slopes. Department of GIS, Cartography and Remote Sensing, Jagiellonian University.
- Kolecka, N. (2012). High-resolution mapping and visualization of a climbing wall. In M. Buchroithner (Ed.), True-3D in cartography (pp. 323–337). Berlin: Springer.
- Kopf, J., Cohen, M. F., & Szeliski, R. (2014). First-person hyper-lapse videos. ACM Transactions on Graphics, 33, 78:1–78:10.
- Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60, 91–110.
- National Geographic. (2008). Capturing Midnight Lightning, Corey Rich. Retrieved from http://adventure.nationalgeographic.com/2008/09/yosemite/climbing-photosynth-text/1.
- National Geographic. (2013). Yosemite's iconic El Capitan mapped in high-resolution 3D. Retrieved February 24, 2014, from http://news.nationalgeographic.com/news/2013/06/130612-yosemite-el-capitan-rock-mapped/.
- Pezzulo, G., Barca, L., Bocconi, A. L., & Borghi, A. M. (2010). When affordances climb into your mind: Advantages of motor simulation in a memory task performed by novice and expert rock climbers. Brain and Cognition, 73, 68–73.
- PhotoModeler software. (2013). Retrieved from http://www.photomodeler.com/index.html.
- Photosynth. (2014). Retrieved from http://photosynth.net/.
- Project Tango. (2014, February). Retrieved from https://www.google.com/atap/projecttango/.
- Sanchez, X., Lambert, P., Jones, G., & Llewellyn, D. J. (2012). Efficacy of pre-ascent climbing route visual inspection in indoor sport climbing. Scandinavian Journal of Medicine & Science in Sports, 22, 67–72.
- Unity - game engine. (2014, February). Retrieved from https://unity3d.com/.
- Wagner, D., Reitmayr, G., Mulloni, A., Drummond, T., & Schmalstieg, D. (2010). Real-time detection and tracking for augmented reality on mobile phones. IEEE Transactions on Visualization and Computer Graphics, 16, 355–368.
- Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J., & Reynolds, J. M. (2012). Structure-from-motion photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology, 179, 300–314.
- Wu, C. (2007). SiftGPU: A GPU implementation of scale invariant feature transform (SIFT). Retrieved from http://cs.unc.edu/~ccwu/siftgpu.
- Wu, C. (2011). VisualSFM: A visual structure from motion system. Retrieved from http://ccwu.me/vsfm/.
- Wu, C., Agarwal, S., Curless, B., & Seitz, S. M. (2011). Multicore bundle adjustment. In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3057–3064).