An FPGA-friendly CABAC-encoding architecture with dataflow modelling programming

Pages 346-354 | Received 07 Aug 2017, Accepted 14 May 2018, Published online: 18 Jun 2018
 

ABSTRACT

Video compression standards play an important role in video encoding, transmission, and decoding. To exploit the commonality among standards, MPEG developed the Reconfigurable Video Coding framework, which uses a dataflow modelling method to compose encoders and decoders from basic configurable components. However, the entropy coding used for bitstream generation and parsing during the configuration process is highly complex, especially when employing Context-Adaptive Binary Arithmetic Coding (CABAC). This paper proposes an optimized ‘Producer–Consumer’ architecture for CABAC based on dataflow modelling. To achieve high throughput and low resource consumption, the buffer access speed and buffer size of the architecture are analysed and refined. The proposed CABAC is implemented in the CAL dataflow language and synthesized to an FPGA. Results show that it processes 3.5 bins/cycle with a 10-byte buffer at a 120 MHz working frequency, which is sufficient for real-time encoding of H.265/HEVC at level 6.2, main tier.
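The producer–consumer idea behind the proposed architecture can be illustrated with a minimal sketch: a bin-producing stage and a byte-consuming stage communicate only through a small bounded FIFO, so the consumer drains a token whenever the buffer is full (back-pressure). This is an illustrative Python model under our own assumptions, not the authors' CAL implementation; the placeholder coding step and function names are hypothetical, and only the 10-byte buffer size is taken from the abstract.

```python
from collections import deque

BUFFER_BYTES = 10  # buffer size reported in the abstract


def produce_tokens(symbols):
    """Producer stage: emit one coded byte per input symbol.

    Placeholder coding (identity, masked to a byte) stands in for the
    actual CABAC bin-encoding actor, which is far more involved.
    """
    for s in symbols:
        yield s & 0xFF


def run_pipeline(symbols):
    """Connect producer and consumer through a bounded FIFO."""
    fifo = deque()
    out = bytearray()
    for token in produce_tokens(symbols):
        if len(fifo) == BUFFER_BYTES:
            # Buffer full: consumer must drain a byte before the
            # producer can push the next token (back-pressure).
            out.append(fifo.popleft())
        fifo.append(token)
    out.extend(fifo)  # flush the remaining buffered tokens
    return bytes(out)
```

The key design point the model captures is that buffer size bounds how far the producer can run ahead of the consumer; the paper's analysis refines exactly this size/speed trade-off for the real CABAC actors.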

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Dandan Ding received her B.S. and Ph.D. degrees in communication and information systems from Zhejiang University, Hangzhou, China, in 2006 and 2011, respectively. She was an exchange student at GR-LSM, EPFL, Lausanne, Switzerland, from 2007 to 2008, and a postdoctoral research fellow and research staff member in multimedia communication at Zhejiang University from 2011 to 2015. She is currently with the Institute of Service Engineering, Hangzhou Normal University, Hangzhou, China. Her research interests include video coding algorithm optimization, very large scale integration system design, FPGA design, and image processing.

Fuchang Liu received the B.S. and Ph.D. degrees in computer science from the Nanjing University of Science and Technology in 2004 and 2009, respectively. He was a postdoctoral research fellow in computer science and engineering at Ewha Womans University and Nanyang Technological University from 2009 to 2013. His research interests include image processing, computer vision, scene understanding, GPU rendering, and collision detection.

Honggang Qi (M’14) received the M.S. degree in computer science from Northeast University, Shenyang, China, in 2002, and the Ph.D. degree in computer science from the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, in 2008. He is currently a Professor with the School of Computer and Control Engineering, University of Chinese Academy of Sciences. His current research interests include video coding and very large scale integration design.

Zhengwei Yao received the M.S. degree in computer science and education from Hangzhou Normal University, Hangzhou, China, in 2006, and the Ph.D. degree in computer application technology from Shanghai University, Shanghai, China, in 2010. He was a visiting scholar at Indiana University-Purdue University Fort Wayne in 2012. He is currently an associate professor at the Digital Media and HCI Research Center, Hangzhou Normal University. His research interests include image processing, virtual reality, augmented reality, interactive video games, and scientific visualization.

Additional information

Funding

This work was supported by the Zhejiang Provincial Natural Science Foundation of China under Grant Nos. LQ15F010001 and LY16F020029, and by the General Research Project of the Zhejiang Provincial Education Department under Grant No. Y201430479.
