Research Papers

QuantNet: transferring learning across trading strategies

Pages 1071-1090 | Received 10 Nov 2020, Accepted 22 Oct 2021, Published online: 03 Dec 2021

References

  • Acar, E. and Satchell, S., Advanced Trading Rules, 2002 (Butterworth-Heinemann).
  • Aggarwal, S. and Aggarwal, S., Deep investment in financial markets using deep learning models. Int. J. Comput. Appl., 2017, 162(2), 40–43.
  • Allendbridge IS, Quantitative investment strategy survey. Pensions & Investments, 2014.
  • Araci, D., FinBERT: Financial sentiment analysis with pre-trained language models. Preprint, 2019. Available online at: arXiv:1908.10063.
  • Avramovic, A., Lin, V. and Krishnan, M., We're all high frequency traders now. Credit Suisse Market Structure White Paper, 2017.
  • Bailey, D.H. and Lopez de Prado, M., The Sharpe ratio efficient frontier. J. Risk, 2012, 15(2), 13.
  • Baltzer, M., Jank, S. and Smajlbegovic, E., Who trades on momentum? J. Financ. Mark., 2019, 42, 56–74.
  • BarclayHedge, BarclayHedge: CTAs' assets under management, 2017. Available online at: https://www.barclayhedge.com/research/indices/cta/Money_Under_Management.html.
  • Baxter, J., Learning internal representations. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, pp. 311–320, 1995.
  • Bayer, C., Horvath, B., Muguruza, A., Stemper, B. and Tomas, M., On deep calibration of (rough) stochastic volatility models. Preprint, 2019. Available online at: arXiv:1908.08806.
  • Baz, J., Granger, N., Harvey, C.R., Le Roux, N. and Rattray, S., Dissecting investment strategies in the cross section and time series, 2015. Available online at: SSRN 2695101.
  • Bengio, Y., Using a financial training criterion rather than a prediction criterion. Int. J. Neural Syst., 1997, 8(4), 433–443.
  • Bergstra, J. and Bengio, Y., Random search for hyper-parameter optimization. J. Mach. Learn. Res., 2012, 13, 281–305.
  • Bitvai, Z. and Cohn, T., Day trading profit maximization with multi-task learning and technical analysis. Mach. Learn., 2015, 101(1–3), 187–209.
  • Blumberg, S.B., Tanno, R., Kokkinos, I. and Alexander, D.C., Deeper image quality transfer: Training low-memory neural networks for 3D images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 118–125, 2018 (Springer).
  • Blumberg, S.B., Palombo, M., Khoo, C.S., Tax, C.M.W., Tanno, R. and Alexander, D.C., Multi-stage prediction networks for data harmonization. In Medical Image Computing and Computer Assisted Intervention, pp. 411–419, 2019.
  • Buehler, H., Gonon, L., Teichmann, J. and Wood, B., Deep hedging. Quant. Finance, 2019, 19(8), 1271–1291.
  • Caruana, R., Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning, pp. 41–48, 1993 (Morgan Kaufmann).
  • Caruana, R., Multitask learning. Mach. Learn., 1997, 28(1), 41–75.
  • Chen, S., Ma, K. and Zheng, Y., Med3D: Transfer learning for 3D medical image analysis. Preprint, 2019. Available online at: arXiv:1904.00625.
  • Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. and Bengio, Y., Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734, 2014a.
  • Cho, K., van Merriënboer, B., Bahdanau, D. and Bengio, Y., On the properties of neural machine translation: Encoder–decoder approaches. CoRR, 2014b. Available online at: http://arxiv.org/abs/1409.1259.
  • Choueifaty, Y. and Coignard, Y., Toward maximum diversification. J. Portf. Manag., 2008, 35(1), 40–51.
  • Daniel, K. and Moskowitz, T.J., Momentum crashes. J. Financ. Econ., 2016, 122(2), 221–247.
  • De Prado, M.L., Advances in Financial Machine Learning, 2018 (John Wiley & Sons).
  • Devlin, J., Chang, M.-W., Lee, K. and Toutanova, K., BERT: Pre-training of deep bidirectional transformers for language understanding. Preprint, 2018. Available online at: arXiv:1810.04805.
  • Dichtl, H., Investing in the S&P 500 index: Can anything beat the buy-and-hold strategy? Rev. Financ. Econ., 2020, 38(2), 352–378.
  • Du Plessis, J. and Hallerbach, W.G., Volatility weighting applied to momentum strategies. J. Altern. Invest., 2016, 19(3), 40–58.
  • El Bsat, S., Ammar, H.B. and Taylor, M.E., Scalable multitask policy gradient reinforcement learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  • Eling, M. and Schuhmacher, F., Does the choice of performance measure influence the evaluation of hedge funds? J. Bank. Finance, 2007, 31(9), 2632–2647.
  • Elton, E.J., Gruber, M.J. and de Souza, A., Passive mutual funds and ETFs: Performance and comparison. J. Bank. Finance, 2019, 106, 265–275.
  • Escovedo, T., Koshiyama, A., da Cruz, A.A. and Vellasco, M., Detecta: Abrupt concept drift detection in non-stationary environments. Appl. Soft Comput., 2018, 62, 119–133.
  • Fama, E.F. and French, K.R., A five-factor asset pricing model. J. Financ. Econ., 2015, 116(1), 1–22.
  • Fei-Fei, L., Fergus, R. and Perona, P., One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 2006, 28(4), 594–611.
  • Feng, G., Giglio, S. and Xiu, D., Taming the factor zoo: A test of new factors. J. Finance, 2020, 75(3), 1327–1370.
  • Firoozye, N. and Koshiyama, A., Optimal dynamic strategies on gaussian returns, 2019. Available online at: SSRN 3385639.
  • Fischer, T. and Krauss, C., Deep learning with long short-term memory networks for financial market predictions. Eur. J. Oper. Res., 2018, 270(2), 654–669.
  • Flennerhag, S., Yin, H., Keane, J. and Elliot, M., Breaking the activation function bottleneck through adaptive parameterization. In Advances in Neural Information Processing Systems, pp. 7739–7750, 2018.
  • Flennerhag, S., Rusu, A.A., Pascanu, R., Visin, F., Yin, H. and Hadsell, R., Meta-learning with warped gradient descent. In International Conference on Learning Representations, 2020.
  • French, K.R., Kenneth R. French-data library. Tuck-MBA Program Web Server, 2012. Available online at: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html (accessed 20 October 2010).
  • Gao, Z., Gao, Y., Hu, Y., Jiang, Z. and Su, J., Application of deep Q-network in portfolio management. Preprint, 2020. Available online at: arXiv:2003.06365.
  • Gers, F.A., Schmidhuber, J. and Cummins, F., Learning to forget: Continual prediction with LSTM, 1999.
  • Ghosn, J. and Bengio, Y., Multi-task learning for stock selection. In Advances in Neural Information Processing Systems, pp. 946–952, 1997.
  • Gibiansky, A., Arik, S., Diamos, G., Miller, J., Peng, K., Ping, W., Raiman, J. and Zhou, Y., Deep voice 2: Multi-speaker neural text-to-speech. In Advances in Neural Information Processing Systems, pp. 2962–2970, 2017.
  • Glorot, X., Bordes, A. and Bengio, Y., Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 513–520, 2011.
  • Goodfellow, I., Bengio, Y. and Courville, A., Deep Learning, 2016 (MIT Press).
  • Gu, S., Kelly, B. and Xiu, D., Empirical asset pricing via machine learning. Rev. Financ. Stud., 2020, 33(5), 2223–2273.
  • Harvey, C.R. and Liu, Y., Backtesting. J. Portf. Manag., 12–28.
  • He, X., Alesiani, F. and Shaker, A., Efficient and scalable multi-task regression on massive number of tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3763–3770, 2019.
  • Heaton, J.B., Polson, N.G. and Witte, J.H., Deep learning for finance: Deep portfolios. Appl. Stoch. Models Bus. Ind., 2017, 33(1), 3–12.
  • Hiew, J.Z.G., Huang, X., Mou, H., Li, D., Wu, Q. and Xu, Y., Bert-based financial sentiment index and LSTM-based stock return predictability. Preprint, 2019. Available online at: arXiv:1906.09024.
  • Hochreiter, S. and Schmidhuber, J., Long short-term memory. Neural Comput., 1997, 9(8), 1735–1780.
  • Hu, Y., Liu, K., Zhang, X., Xie, K., Chen, W., Zeng, Y. and Liu, M., Concept drift mining of portfolio selection factors in stock market. Electron. Commer. Res. Appl., 2015, 14(6), 444–455.
  • Jegadeesh, N. and Titman, S., Returns to buying winners and selling losers: Implications for stock market efficiency. J. Finance, 1993, 48(1), 65–91.
  • Jeong, G. and Kim, H.Y., Improving financial trading decisions using deep Q-learning: Predicting the number of shares, action strategies, and transfer learning. Expert Syst. Appl., 2019, 117, 125–138.
  • Jiang, S., Mao, H., Ding, Z. and Fu, Y., Deep decision tree transfer boosting. IEEE Trans. Neural Netw. Learn. Syst., 2019.
  • Kenett, D.Y., Raddant, M., Zatlavi, L., Lux, T. and Ben-Jacob, E., Correlations and dependencies in the global financial village. In International Journal of Modern Physics: Conference Series, Vol. 16, pp. 13–28, 2012 (World Scientific).
  • Kornblith, S., Shlens, J. and Le, Q.V., Do better imagenet models transfer better? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2661–2671, 2019.
  • Koshiyama, A. and Firoozye, N., Avoiding backtesting overfitting by covariance-penalties: An empirical investigation of the ordinary and total least squares cases. J. Financ. Data Sci., 2019, 1(4), 63–83.
  • Kouw, W.M. and Loog, M., A review of domain adaptation without target labels. IEEE Trans. Pattern Anal. Mach. Intell., 2019.
  • Lee, Y. and Choi, S., Gradient-based meta-learning with learned layerwise metric and subspace. Preprint, 2018. Available online at: arXiv:1801.05558.
  • Lee, C.-Y., Xie, S., Gallagher, P., Zhang, Z. and Tu, Z., Deeply-supervised nets. In International Conference on Artificial Intelligence and Statistics, 2015.
  • Li, W., Ding, S., Chen, Y. and Yang, S., A transfer learning approach for credit scoring. In International Conference on Applications and Techniques in Cyber Security and Intelligence, pp. 64–73, 2018 (Springer).
  • Li, X., Xie, H., Lau, R.Y., Wong, T.-L. and Wang, F.-L., Stock prediction via sentimental transfer learning. IEEE Access, 2018, 6, 73110–73118.
  • Li, Y., Ni, P. and Chang, V., Application of deep reinforcement learning in stock trading strategies and stock forecasting. Computing, 2019, 1–18.
  • Lim, B., Zohren, S. and Roberts, S., Enhancing time-series momentum strategies using deep neural networks. J. Financ. Data Sci., 2019, 1(4), 19–38.
  • Liu, Z., Loo, C.K. and Seera, M., Meta-cognitive recurrent recursive kernel OS-ELM for concept drift handling. Appl. Soft Comput., 2019, 75, 494–507.
  • Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. and Stoyanov, V., RoBERTa: A robustly optimized BERT pre-training approach. Preprint, 2019. Available online at: arXiv:1907.11692.
  • Lu, D.W., Agent inspired trading using recurrent reinforcement learning and LSTM neural networks. Preprint, 2017. Available online at: arXiv:1707.07338.
  • Martellini, L., Toward the design of better equity benchmarks: Rehabilitating the tangency portfolio from modern portfolio theory. J. Portf. Manag., 2008, 34(4), 34–41.
  • Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S. and Dean, J., Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119, 2013.
  • Molyboga, M., Portfolio management of commodity trading advisors with volatility targeting, 2018. Available online at: SSRN 3123092.
  • Moskowitz, T.J., Ooi, Y.H. and Pedersen, L.H., Time series momentum. J. Financ. Econ., 2012, 104(2), 228–250.
  • Nunes, M., Gerding, E., McGroarty, F. and Niranjan, M., A comparison of multitask and single task learning with artificial neural networks for yield curve forecasting. Expert Syst. Appl., 2019, 119, 362–375.
  • Pan, S.J. and Yang, Q., A survey on transfer learning. IEEE Trans. Knowl. Data Eng., 2009, 22(10), 1345–1359.
  • Park, W., Kim, D., Lu, Y. and Cho, M., Relational knowledge distillation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • Pendharkar, P.C. and Cusatis, P., Trading financial indices with reinforcement learning agents. Expert Syst. Appl., 2018, 103, 1–13.
  • Qu, C., Ji, F., Qiu, M., Yang, L., Min, Z., Chen, H., Huang, J. and Croft, W.B., Learning to selectively transfer: Reinforced transfer learning for deep text matching. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 699–707, 2019 (ACM).
  • Raddant, M. and Kenett, D.Y., Interconnectedness in the global financial market. OFR WP 16-09, 2016.
  • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. and Sutskever, I., Language models are unsupervised multitask learners. OpenAI Blog, 2019, 1(8), 9.
  • Reddi, S.J., Kale, S. and Kumar, S., On the convergence of ADAM and beyond. Preprint, 2019. Available online at: arXiv:1904.09237.
  • Rollinger, T. and Hoffman, S., Sortino ratio: A better measure of risk. Futures Mag., 2013, 1(2).
  • Romano, J.P. and Wolf, M., Stepwise multiple testing as formalized data snooping. Econometrica, 2005, 73(4), 1237–1282.
  • Ruder, S., An overview of multi-task learning in deep neural networks. Preprint, 2017. Available online at: arXiv:1706.05098.
  • Ruder, S., Peters, M.E., Swayamdipta, S. and Wolf, T., Transfer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials, pp. 15–18, 2019.
  • Sezer, O.B., Gudelek, M.U. and Ozbayoglu, A.M., Financial time series forecasting with deep learning: A systematic literature review: 2005–2019. Appl. Soft Comput., 2020, 90, 106181.
  • Sharpe, W.F., The Sharpe ratio. J. Portf. Manag., 1994, 21(1), 49–58.
  • Socher, R., Ganjoo, M., Manning, C.D. and Ng, A., Zero-shot learning through cross-modal transfer. In Advances in Neural Information Processing Systems, pp. 935–943, 2013.
  • Somasundaram, A. and Reddy, S., Parallel and incremental credit card fraud detection model to handle concept drift and data imbalance. Neural Comput. Appl., 2019, 31(1), 3–14.
  • Sutskever, I., Training recurrent neural networks. PhD Thesis, University of Toronto, Canada, 2013.
  • Thomann, A., Factor-based tactical bond allocation and interest rate risk management, 2019. Available online at: SSRN 3122096.
  • Torrey, L. and Shavlik, J., Transfer learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pp. 242–264, 2010 (IGI Global).
  • Vargas, M.R., De Lima, B.S. and Evsukoff, A.G., Deep learning for stock market prediction from financial news articles. In 2017 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), pp. 60–65, 2017 (IEEE).
  • Voit, J., The Statistical Mechanics of Financial Markets, 2013 (Springer Science & Business Media).
  • Voumard, A. and Beydoun, G., Transfer learning in credit risk. In ECML PKDD, pp. 1–16, 2019.
  • Wang, L., Lee, C.-Y., Tu, Z. and Lazebnik, S., Training deeper convolutional networks with deep supervision, 2015. Available online at: arXiv:1505.02496.
  • Wang, D., Li, Y., Lin, Y. and Zhuang, Y., Relational knowledge transfer for zero-shot learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pp. 2145–2151, 2016 (AAAI Press). Available online at: http://dl.acm.org/citation.cfm?id=3016100.3016198.
  • Wang, W., Zheng, V.W., Yu, H. and Miao, C., A survey of zero-shot learning: Settings, methods, and applications. ACM Trans. Intell. Syst. Technol., 2019, 10(2), 13.
  • Wang, Y., Yao, Q., Kwok, J. and Ni, L.M., Generalizing from a few examples: A survey on few-shot learning, 2019.
  • Yang, Z., Zhao, J., Dhingra, B., He, K., Cohen, W.W., Salakhutdinov, R.R. and LeCun, Y., GloMo: Unsupervised learning of transferable relational graphs. In Advances in Neural Information Processing Systems, pp. 8950–8961, 2018.
  • Young, T.W., Calmar ratio: A smoother tool. Futures, 1991, 20(1), 40.
  • Yu, S. and Principe, J.C., Understanding autoencoders with information theoretic concepts. Neural Netw., 2019, 117, 104–123.
  • Zhang, L., Transfer adaptation learning: A decade survey. Preprint, 2019. Available online at: arXiv:1903.04687.
  • Zhang, Y. and Yang, Q., A survey on multi-task learning. Preprint, 2017. Available online at: arXiv:1707.08114.
  • Zhang, M., Jiang, X., Fang, Z., Zeng, Y. and Xu, K., High-order hidden Markov model for trend prediction in financial time series. Physica A, 2019, 517, 1–12.
  • Zhang, Z., Zohren, S. and Roberts, S., Deep reinforcement learning for trading. J. Financ. Data Sci., 2020, 2(2), 25–40.
  • Zhuang, F., Cheng, X., Luo, P., Pan, S.J. and He, Q., Supervised representation learning: Transfer learning with deep autoencoders. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
  • Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., Xiong, H. and He, Q., A comprehensive survey on transfer learning. Preprint, 2019. Available online at: arXiv:1911.02685.
  • Zintgraf, L.M., Shiarlis, K., Kurin, V., Hofmann, K. and Whiteson, S., Fast context adaptation via meta-learning. Preprint, 2018. Available online at: arXiv:1810.03642.
  • Žliobaitė, I., Pechenizkiy, M. and Gama, J., An overview of concept drift applications. In Big Data Analysis: New Algorithms for a New Society, pp. 91–114, 2016 (Springer).