Expert Review of Precision Medicine and Drug Development
Personalized medicine in drug development and clinical practice
Volume 4, 2019 - Issue 2
Review

Deep learning and radiomics in precision medicine

Pages 59–72 | Received 05 Dec 2018, Accepted 19 Feb 2019, Published online: 18 Apr 2019

References

  • Ciresan DC, Meier U, Masci J, et al. Flexible, high performance convolutional neural networks for image classification. Proc Twenty-Second Int joint Conf on Artif Intell. 2011;2: 1237–1242.
  • Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012;1097–1105.
  • Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. Comput Vision Pattern Recognit. 2015;1–9.
  • Badrinarayanan V, Kendall A, Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. arXiv:151100561. 2015:1–14.
  • Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015. p. 234–241.
  • He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. Comput Vision Pattern Recognit. 2016;770–778.
  • Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks. Comput Vision Pattern Recognit. 2017;1(2):4700–4708.
  • Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
  • Cao C, Liu F, Tan H, et al. Deep learning and its applications in biomedicine. Genomics Proteomics Bioinformatics. 2018.
  • Kumar V, Gu Y, Basu S, et al. Radiomics: the process and the challenges. Magn Reson Imaging. 2012 Nov;30(9):1234–1248.
  • Lambin P, Rios-Velazquez E, Leijenaar R, et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer. 2012;48(4):441–446.
  • Aerts HJ, Velazquez ER, Leijenaar RT, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014;5:4006.
  • Parekh VS, Jacobs MA. Integrated radiomic framework for breast cancer and tumor biology using advanced machine learning and multiparametric MRI. NPJ Breast Cancer. 2017;3(1):43.
  • Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010 Nov;48(11):981–988.
  • Masanz JJ, Ogren PV, Zheng J, et al. Mayo clinical text analysis and knowledge extraction system (cTAKES): architecture, component evaluation and applications. J Am Med Inf Assoc. 2010;17(5):507–513.
  • Hayes MG, Rasmussen-Torvik L, Pacheco JA, et al. Use of diverse electronic medical record systems to identify genetic risk for type 2 diabetes within a genome-wide association study. J Am Med Inf Assoc. 2012;19(2):212–218.
  • Choi S, Ivkin N, Braverman V, et al. DreamNLP: novel NLP system for clinical report metadata extraction using count sketch data streaming algorithm: preliminary results. arXiv:180902665. 2018:1–13.
  • McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115–133.
  • Turing A. Computing machinery and intelligence. Mind. 1950;LIX(236):433–460.
  • Farley B, Clark W. Simulation of self-organizing systems by digital computer. Trans IRE Prof Group Inf Theory. 1954;4(4):76–84.
  • Rosenblatt F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol Rev. 1958;65(6):386–408.
  • Widrow B, Hoff ME. Adaptive switching circuits. IRE WESCON Convention Rec. 1960;4:96–104.
  • Ivakhnenko AG, Lapa VG. Cybernetic predicting devices. CCM Information Corp. 1966;1–255.
  • Minsky M, Papert S. Perceptrons: an introduction to computational geometry. Cambridge (MA): MIT Press; 1969.
  • Ivakhnenko AG. Polynomial Theory of Complex Systems. IEEE Trans Syst Man Cybern Syst. 1971;1(4):364–378.
  • Werbos P. Beyond regression: new tools for prediction and analysis in the behavioral sciences [Ph D dissertation]. Harvard University; 1974.
  • Fukushima K. Neural network model for a mechanism of pattern recognition unaffected by shift in position-Neocognitron. IEICE Tech Rep A. 1979;62(10):658–665.
  • Eigen M. Selforganization in molecular and cellular networks. Neurochem Int. 1980;2C:25.
  • Fukushima K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern. 1980;36(4):193–202.
  • Kohonen T. Automatic formation of topological maps of patterns in a self-organizing system. Proc 2nd Scand Conf on Image Analysis. 1981. p. 214–220.
  • Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982 Apr;79(8):2554–2558.
  • Ackley DH, Hinton GE, Sejnowski TJ. A learning algorithm for Boltzmann machines. Cognit Sci. 1985;9(1):147–169.
  • Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. La Jolla: Univ California San Diego - Institute for Cognitive Science; 1985. p. 1–49.
  • Smolensky P. Information processing in dynamical systems: foundations of harmony theory; CU-CS-321-86; 1986. (Computer Science Technical Reports: Colorado Univ At Boulder Dept Of Computer Science). p. 1–56.
  • LeCun Y, Boser BE, Denker JS, et al. Handwritten digit recognition with a back-propagation network. Adv Neural Inf Process Syst. 1990;396–404.
  • Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997 Nov 15;9(8):1735–1780.
  • Schuster M, Paliwal KK. Bidirectional recurrent neural networks. IEEE Trans Signal Process. 1997;45(11):2673–2681.
  • Hinton GE, Srivastava N, Krizhevsky A, et al. Improving neural networks by preventing coadaptation of feature detectors. arXiv:12070580. 2012:1–18.
  • Kingma DP, Welling M. Auto-encoding variational bayes. arXiv:13126114. 2013:1–14.
  • Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Adv Neural Inf Process Syst. 2014;2672–2680.
  • Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. arXiv:150504597. 2015:1–8.
  • Sabour S, Frosst N, Hinton GE. Dynamic routing between capsules. Adv Neural Inf Process Syst. 2017;3856–3866.
  • Deng J, Dong W, Socher R, et al. ImageNet: a large-scale hierarchical image database. Comput Vision Pattern Recognit. 2009;248–255.
  • Gatys LA, Ecker AS, Bethge M. Image style transfer using convolutional neural networks. Comput Vision Pattern Recognit. 2016;2414–2423.
  • LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–444.
  • Schmidhuber J. Deep learning in neural networks: an overview. Neural Networks. 2015 [cited 2015 Jan 01];61:85–117.
  • Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge (MA): MIT Press; 2016.
  • Briot J-P, Hadjeres G, Pachet F. Deep learning techniques for music generation-a survey. arXiv:170901620. 2017:1–189.
  • Xu HZ, Gao Y, Yu F, et al. End-to-end learning of driving models from large-scale video datasets. Comput Vision Pattern Recognit. 2017;3530–3538.
  • Shannon CE. A mathematical theory of communication. Bell Syst Techn J. 1948 July;27(3):379–423.
  • Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE Trans Syst Man Cybern Syst. 1973;6:610–621.
  • Galloway MM. Texture analysis using gray level run lengths. Comput Graphics Image Process. 1975 [cited 1975 June 01];4(2):172–179.
  • Laws KI. Rapid texture identification. Proceedings of SPIE 0238, Image Processing for Missile Guidance. 1980. p. 376–381.
  • Parekh V, Jacobs MA. Radiomics: a new application from established techniques. Expert Rev Precis Med Drug Dev. 2016 [cited 2016 March 03];1(2):207–226.
  • Mandelbrot BB. The fractal geometry of nature. Vol. 173. Macmillan; 1983.
  • Amadasun M, King R. Textural features corresponding to textural properties. IEEE Trans Syst Man Cybern Syst. 1989;19(5):1264–1274.
  • Alpaydin E. Introduction to machine learning. 3rd ed. MIT press; 2014.
  • Ng AY, Jordan MI. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. Adv Neural Inf Process Syst. 2002;841–848.
  • Li C, Wand M. Precomputed real-time texture synthesis with Markovian generative adversarial networks. arXiv:160404382v1. 2016:1–17.
  • Canziani A, Paszke A, Culurciello E. An analysis of deep neural network models for practical applications. arXiv:160507678. 2016:1–7.
  • Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res. 2010;11(Dec):3371–3408.
  • Parekh VS, Macura KJ, Harvey S, et al. Multiparametric deep learning tissue signatures for a radiological biomarker of breast cancer: preliminary results. arXiv:180208200. 2018:1–23.
  • Hjelm RD, Plis SM, Calhoun VC. Variational autoencoders for feature detection of magnetic resonance imaging data. arXiv:160306624. 2016.
  • Sedai S, Mahapatra D, Hewavitharanage S, et al. Semi-supervised segmentation of optic cup in retinal fundus images using variational autoencoder. International Conference on Medical Image Computing and Computer-Assisted Intervention. 2017. p. 75–82.
  • Uzunova H, Handels H, Ehrhardt J. Unsupervised pathology detection in medical images using learning-based methods. In: Bildverarbeitung für die Medizin 2018. Springer; 2018. p. 61–66.
  • Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006 Jul;18(7):1527–1554.
  • Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. arXiv:180907294. 2018.
  • Mirza M, Osindero S. Conditional generative adversarial nets. arXiv:14111784. 2014:1–7.
  • Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:151106434. 2015:1–16.
  • Ledig C, Theis L, Huszár F, et al. Photo-realistic single image super-resolution using a generative adversarial network. Comput Vision Pattern Recognit. 2017;105–114.
  • Zhang H, Xu T, Li H, et al. StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv:161203242. 2017:1–14.
  • Nie D, Trullo R, Lian J, et al. Medical image synthesis with context-aware generative adversarial networks. International Conference on Medical Image Computing and Computer-Assisted Intervention. 2017. p. 417–425.
  • Zhu J-Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv:170310593. 2017:1–10.
  • Schmidhuber J. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Sci. 1987;1(4):403–412.
  • Sutton RS, Barto AG. Reinforcement Learning: an Introduction. 2nd ed. MIT Press; 2018.
  • Schmidhuber J. Recurrent networks adjusted by adaptive critics. Int Neural Network Conf. 1990;1:719–722.
  • Schmidhuber J. Reinforcement learning with interacting continually running fully recurrent networks. Int Neural Network Conf. 1990;2:817–820.
  • Schmidhuber J. Reinforcement learning in Markovian and non-Markovian environments. Adv Neural Inf Process Syst. 1991;3:500–506.
  • Kaelbling L, Littman M, Moore A. Reinforcement Learning: A Survey. J Artif Intell Res. 1996;4:237–285.
  • Sutton RS. Temporal credit assignment in reinforcement learning. University of Massachusetts Amherst; 1984.
  • Bakker B. Reinforcement learning with long short-term memory. Int Conf Neural Inf Process Syst. 2001;14:1475–1482.
  • Graves A, Fernández S, Schmidhuber J. Reinforcement learning with interacting continually running fully recurrent networks. Int Conf Artif Neural Networks. 2007;17:549–558.
  • Tseng HH, Luo Y, Cui S, et al. Deep reinforcement learning for automated radiation adaptation in lung cancer. Med Phys. 2017;44(12):6690–6705.
  • Meyer P, Noblet V, Mazzara C, et al. Survey on deep learning for radiotherapy. Comput Biol Med. 2018;98(198):126–146.
  • Ghesu F-C, Georgescu B, Zheng Y, et al. Multi-scale deep reinforcement learning for real-time 3D-landmark detection in CT scans. IEEE Trans Pattern Anal Mach Intell. 2019;41(1):176–189.
  • Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks. arXiv:13035778. 2013:1–5.
  • Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning. Nature. 2015;518:529–533.
  • Silver D, Huang A, Maddison CJ, et al. Mastering the game of Go with deep neural networks and tree search. Nature. 2016;529(7587):484.
  • Nguyen A, Yosinski J, Clune J. Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. Comput Vision Pattern Recognit. 2015;427–436.
  • Moosavi-Dezfooli S-M, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks. Comput Vision Pattern Recognit. 2016;2574–2582.
  • Su J, Vargas DV, Sakurai K. One pixel attack for fooling deep neural networks. arXiv:171008864. 2017:1–11.
  • Brown TB, Mané D, Roy A, et al. Adversarial patch. arXiv:171209665. 2017:1–6.
  • Erhan D, Bengio Y, Courville A, et al. Visualizing higher-layer features of a deep network. Univ Montreal. 2009;1341(3):1–13. Technical Report.
  • Zeiler MD, Krishnan D, Taylor GW, et al. Deconvolutional networks. Comput Vision Pattern Recognit. 2010;2528–2535.
  • Mahendran A, Vedaldi A. Understanding deep image representations by inverting them. Comput Vision Pattern Recognit. 2015;5188–5196.
  • Bau D, Zhou B, Khosla A, et al. Network dissection: quantifying interpretability of deep visual representations. arXiv:170405796. 2017:1–9.
  • Qin Z, Yu F, Liu C, et al. How convolutional neural networks see the world - A survey of convolutional neural network visualization methods. Math Found Comput. 2018;1(2):149–180.
  • Nguyen A, Yosinski J, Clune J. Multifaceted feature visualization: uncovering the different types of features learned by each neuron in deep neural networks. arXiv:160203616. 2016:1–23.
  • Zeiler MD, Taylor GW, Fergus R. Adaptive deconvolutional networks for mid and high level feature learning. IEEE Int Conf Comput Vision. 2011:2018–2025.
  • Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. Eur Conf Comput Vision. 2014:818–833.
  • Vondrick C, Khosla A, Malisiewicz T, et al. Hoggles: visualizing object detection features. Comput Vision Pattern Recognit. 2013;1–8.
  • Dosovitskiy A, Brox T. Inverting visual representations with convolutional networks. Comput Vision Pattern Recognit. 2016;4829–4837.
  • Dalal N, Triggs B. Histograms of oriented gradients for human detection. Comput Vision Pattern Recognit. 2005;1:886–893.
  • Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004;60(2):91–110.
  • Yip SS, Aerts HJ. Applications and limitations of radiomics. Phys Med Biol. 2016;61(13):R150–R166.
  • Parekh VS, Jacobs MA. MPRAD: a multiparametric radiomics framework. arXiv:180909973. 2018:1–32.
  • Parekh VS, Jacobs MA. Radiomic synthesis using deep convolutional neural networks. arXiv:181011090. 2018:1–4.
  • Shafiq‐ul‐Hassan M, Zhang GG, Latifi K, et al. Intrinsic dependencies of CT radiomic features on voxel size and number of gray levels. Med Phys. 2017;44(3):1050–1062.
  • Shafiq-ul-Hassan M, Latifi K, Zhang G, et al. Voxel size and gray level normalization of CT radiomic features in lung cancer. Sci Rep. 2018;8(1):10545.
  • Graham B. Kaggle diabetic retinopathy detection competition report. University of Warwick; 2015.
  • Nyúl LG, Udupa JK, Zhang X. New variants of a method of MRI scale standardization. IEEE Trans Med Imaging. 2000;19(2):143–150.
  • Akkus Z, Galimzianova A, Hoogi A, et al. Deep learning for brain MRI segmentation: state of the art and future directions. J Digit Imaging. 2017;30(4):449–459.
  • Olah C, Satyanarayan A, Johnson I, et al. The building blocks of interpretability. Distill. 2018;3(3):e10.
  • Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks. arXiv:170606083. 2017:1–27.
  • Cortes C, Gonzalvo X, Kuznetsov V, et al. Adanet: adaptive structural learning of artificial neural networks. arXiv:160701097. 2016:1–14.
  • Parisi GI, Kemker R, Part JL, et al. Continual lifelong learning with neural networks: a review. arXiv:180207569. 2018:1–29.