
Generating visually appealing QR codes using colour image embedding

Pages 1-13 | Received 20 Jan 2016, Accepted 21 Sep 2016, Published online: 28 Mar 2017

ABSTRACT

Quick Response (QR) codes have become very popular, but because they are designed to be machine readable they appear as blocks of random black-and-white noise. With their increasing use in marketing material, many attempts have been made to make them visually appealing by embedding images and logos. We propose a colour image embedding method that uses circular modules to reduce the block-like appearance of the code. The luminance of the image pixels in the centre and surrounding regions of each circular code module is modified so that the image blends into the code with the least visual distortion, while the resulting code retains a high degree of decoding robustness. Experiments assessing the visual appeal of the resulting codes and their tolerance to noise and blur show that the visually appealing codes decode comparably to the original QR codes, justifying the effort of embedding an image in an otherwise unappealing QR code. The codes generated by the proposed method and three other state-of-the-art methods are compared for visual appeal and decoding robustness. The comparison indicates that the codes generated by the proposed method have the best visual appeal among the methods compared; in terms of tolerance to noise and blur, they are the best among the codes of comparable visual appeal and second best overall.
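To illustrate the general idea described above, the following is a minimal sketch of centre-luminance embedding, not the authors' implementation: a QR matrix is generated, a colour image is resized over it, and only a small circular centre of each module is pushed towards black or white so a decoder sampling module centres still reads the correct bit while the surrounding pixels keep the image's appearance. The packages (qrcode, Pillow, numpy), the function name embed_image, and the parameter values (box_size, centre_radius, dark_y, light_y) are illustrative assumptions; the sketch omits the paper's circular module rendering and the blending of the surrounding region.

```python
# Illustrative sketch only (assumed packages: qrcode, Pillow, numpy).
# Embeds a colour image into a QR code by altering the luminance of a
# small circular centre region of each module.
import numpy as np
import qrcode
from PIL import Image

def embed_image(data, image_path, box_size=9, centre_radius=3,
                dark_y=40, light_y=215):
    # High error correction gives headroom for the embedding.
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                       box_size=box_size, border=4)
    qr.add_data(data)
    qr.make(fit=True)
    modules = qr.get_matrix()          # booleans, True = dark module
    n = len(modules)
    side = n * box_size

    # Resize the cover image to the code size; work in YCbCr so only
    # luminance is modified and the image's colours are preserved.
    img = Image.open(image_path).convert('RGB').resize((side, side))
    ycc = np.array(img.convert('YCbCr'), dtype=np.float64)

    # Circular mask marking the centre region of one module.
    yy, xx = np.mgrid[0:box_size, 0:box_size]
    c = (box_size - 1) / 2.0
    centre = (xx - c) ** 2 + (yy - c) ** 2 <= centre_radius ** 2

    for r in range(n):
        for q in range(n):
            y0, x0 = r * box_size, q * box_size
            block = ycc[y0:y0 + box_size, x0:x0 + box_size, 0]
            # Push the centre luminance towards black or white so the
            # module is still read correctly; the rest of the module
            # stays close to the original image.
            block[centre] = dark_y if modules[r][q] else light_y

    out = Image.fromarray(ycc.clip(0, 255).astype(np.uint8), mode='YCbCr')
    return out.convert('RGB')

# Example usage (hypothetical inputs):
# embed_image('https://example.com', 'logo.png').save('pretty_qr.png')
```

The centre-region luminance values are chosen far enough apart that thresholding at the module centre still separates dark from light modules after mild noise or blur, which is the intuition behind the robustness results reported in the abstract.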

Acknowledgments

I thank Dr Zachi Baharav and the other authors of Baharav and Kakarala Citation[16], who helped us with the implementation of their method. Thanks are also due to Dr Gonzalo Garateguy and the other authors of Garateguy et al. Citation[18], who provided the results of their method on our sample dataset. These researchers enabled us to perform the comparative study of our method vis-à-vis theirs. I am also indebted to Mr Jaskirat Singh for the contributions he made to this work during his Master's study.
