A Corneal Surface Reflections-Based Intelligent System for Lifelogging Applications

Pages 1963–1980 | Received 30 Oct 2021, Accepted 03 Dec 2022, Published online: 04 Jan 2023
