References
- Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. Proc. of IEEE Winter Conf. on Applications of Comput. Vision, 839–847.
- Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 1800–1807.
- Datta, R., Joshi, D., Li, J., & Wang, J. Z. (2006). Studying aesthetics in photographic images using a computational approach. Proc. of the 9th European Conf. on Comput. Vision, 288–301.
- Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 248–255.
- Dhar, S., Ordonez, V., & Berg, T. L. (2011). High level describable attributes for predicting aesthetics and interestingness. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 1657–1664.
- He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 770–778.
- Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. Proc. of the 3rd Int. Conf. on Learning Representations.
- Ke, Y., Tang, X., & Jing, F. (2006). The design of high-level features for photo quality assessment. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 419–426.
- Kong, S., Shen, X., Lin, Z., Mech, R., & Fowlkes, C. (2016). Photo aesthetics ranking network with attributes and content adaptation. Proc. of European Conf. on Comput. Vision, 662–679.
- LeCun, Y., Bengio, Y., & Hinton, G. E. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
- Lu, X., Lin, Z., Shen, X., Mech, R., & Wang, J. Z. (2015). Deep multi-patch aggregation network for image style, aesthetics, and quality estimation. Proc. of the IEEE Int. Conf. on Comput. Vision, 990–998.
- Luo, W., Wang, X., & Tang, X. (2011). Content-based photo quality assessment. Proc. of IEEE Int. Conf. on Comput. Vision, 2206–2213.
- Luo, Y., & Tang, X. (2008). Photo and video quality evaluation: Focusing on the subject. Proc. of European Conf. on Comput. Vision, 386–399.
- Ma, S., Liu, J., & Chen, C. W. (2017). A-Lamp: Adaptive layout-aware multi-patch deep convolutional neural network for photo aesthetic assessment. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 722–731.
- Mai, L., Le, H., Niu, Y., & Liu, F. (2011). Rule of thirds detection from photograph. Proc. of IEEE Int. Symp. on Multimedia, 91–96.
- Mai, L., Jin, H., & Liu, F. (2016). Composition-preserving deep photo aesthetics assessment. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 497–506.
- Marchesotti, L., Perronnin, F., Larlus, D., & Csurka, G. (2011). Assessing the aesthetic quality of photographs using generic image descriptors. Proc. of IEEE Int. Conf. on Comput. Vision, 1784–1791.
- Murray, N., Marchesotti, L., & Perronnin, F. (2012). AVA: A large-scale database for aesthetic visual analysis. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 2408–2415.
- Nishiyama, M., Okabe, T., Sato, I., & Sato, Y. (2011). Aesthetic quality classification of photographs based on color harmony. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 33–40.
- Omori, F., Takimoto, H., Yamauchi, H., Kanagawa, A., Iwasaki, T., & Ombe, M. (2019). Aesthetic quality evaluation using convolutional neural network. Asia-Pacific Journal of Industrial Management, 8(1), 71–77.
- Pelleg, D., & Moore, A. (2000). X-means: Extending K-means with efficient estimation of the number of clusters. Proc. of the 17th Int. Conf. on Machine Learning, 727–734.
- Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proc. of IEEE Int. Conf. on Comput. Vision, 618–626.
- Shizuno, Y., & Hamada, R. (2014). An automatic composition determination method at the time of photography using a composition matching technique. Proc. of Multimedia, Distributed, Cooperative, and Mobile Symposium, 646–656.
- Takimoto, H., Omori, F., & Kanagawa, A. (2021). Image aesthetics assessment based on multi-stream CNN architecture and saliency features. Applied Artificial Intelligence, 35(1), 25–40. https://doi.org/10.1080/08839514.2020.1839197
- Talebi, H., & Milanfar, P. (2018). NIMA: Neural image assessment. IEEE Trans. on Image Processing, 27(8), 3998–4011. https://doi.org/10.1109/TIP.2018.2831899
- Yang, G., Ye, Q., & Xia, J. (2022). Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Information Fusion, 77, 29–52. https://doi.org/10.1016/j.inffus.2021.07.016
- Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. Proc. of IEEE Conf. on Comput. Vision and Pattern Recog., 2921–2929.