[76] L. Huang, Y. Yang, Y. Deng, and Y. Yu, "DenseBox: Unifying landmark localization with end to end object detection," arXiv preprint arXiv:1509.04874, 2015.

[77] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[78] S. Han, J. Kang, H. Mao, Y. Hu, X. Li, Y. Li, D. Xie, H. Luo, S. Yao, Y. Wang et al., "ESE: Efficient speech recognition engine with sparse LSTM on FPGA," in FPGA, 2017, pp. 75–84.

[79] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks," in Advances in Neural Information Processing Systems, 2016, pp. 4107–4115.

[80] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in European Conference on Computer Vision. Springer, 2016, pp. 525–542.

[81] F. Li, B. Zhang, and B. Liu, "Ternary weight networks," arXiv preprint arXiv:1605.04711, 2016.

[82] C. Zhu, S. Han, H. Mao, and W. J. Dally, "Trained ternary quantization," arXiv preprint arXiv:1612.01064, 2016.

[83] A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen, "Incremental network quantization: Towards lossless CNNs with low-precision weights," arXiv preprint arXiv:1702.03044, 2017.

[84] T.-J. Yang, Y.-H. Chen, and V. Sze, "Designing energy-efficient convolutional neural networks using energy-aware pruning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[85] M. Alwani, H. Chen, M. Ferdman, and P. Milder, "Fused-layer CNN accelerators," in Microarchitecture (MICRO), 2016 49th Annual IEEE/ACM International Symposium on. IEEE, 2016, pp. 1–12.

[86] F. Tu, S. Yin, P. Ouyang, S. Tang, L. Liu, and S. Wei, "Deep convolutional neural network architecture with reconfigurable computation patterns," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2017.

[87] S. Zhang, Z. Du, L. Zhang, H. Lan, S. Liu, L. Li, Q. Guo, T. Chen, and Y. Chen, "Cambricon-X: An accelerator for sparse neural networks," in Microarchitecture (MICRO), 2016 49th Annual IEEE/ACM International Symposium on. IEEE, 2016, pp. 1–12.
[88] A. Parashar, M. Rhu, A. Mukkara, A. Puglielli, R. Venkatesan, B. Khailany, J. Emer, S. W. Keckler, and W. J. Dally, "SCNN: An accelerator for compressed-sparse convolutional neural networks," in Proceedings of the 44th Annual International Symposium on Computer Architecture. ACM, 2017, pp. 27–40.

[89] L. Song, X. Qian, H. Li, and Y. Chen, "PipeLayer: A pipelined ReRAM-based accelerator for deep learning," in High Performance Computer Architecture (HPCA), 2017 IEEE International Symposium on. IEEE, 2017, pp. 541–552.



