^ Szegedy, Christian; Liu, Wei; Jia, Yangqing; Sermanet, Pierre; Reed, Scott; Anguelov, Dragomir; Erhan, Dumitru; Vanhoucke, Vincent; et al. (2014). “Going Deeper with Convolutions”. Computing Research Repository. arXiv:1409.4842. Bibcode:2014arXiv1409.4842S.
^ Lewis, J.P. (1988). Creation by refinement: a creativity paradigm for gradient descent learning networks. IEEE International Conference on Neural Networks. doi:10.1109/ICNN.1988.23933.
^ Portilla, Javier; Simoncelli, Eero (2000). “A parametric texture model based on joint statistics of complex wavelet coefficients”. International Journal of Computer Vision 40: 49–70. doi:10.1023/A:1026553619983.
^ Simonyan, Karen; Vedaldi, Andrea; Zisserman, Andrew (2014). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. International Conference on Learning Representations Workshop.
^ Mahendran, Aravindh; Vedaldi, Andrea (2015). Understanding Deep Image Representations by Inverting Them. IEEE Conference on Computer Vision and Pattern Recognition. doi:10.1109/CVPR.2015.7299155.
^ Yosinski, Jason; Clune, Jeff; Nguyen, Anh; Fuchs, Thomas (2015). Understanding Neural Networks Through Deep Visualization. Deep Learning Workshop, International Conference on Machine Learning (ICML).
^ Nguyen, Anh; Dosovitskiy, Alexey; Yosinski, Jason; Brox, Thomas (2016). Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. arXiv:1605.09304. Bibcode:2016arXiv160509304N.
^ Arora, Sanjeev; Liang, Yingyu; Ma, Tengyu (2015). Why are deep nets reversible: A simple theory, with implications for training. arXiv:1511.05653. Bibcode:2015arXiv151105653A.