1. School of Information Science and Technology, Northwest University, Xi'an 710127, Shaanxi, China
2. Shaanxi Provincial Collaborative Innovation Center for Digital Protection and Inheritance of Silk Road Cultural Heritage, Xi'an 710127, Shaanxi, China
[ "彭进业,男,1964年生,博士,二级教授、博士生导师,文化遗产数字化保护与传播教育部创新团队负责人(带头人),文化遗产数字化保护陕西省三秦学者创新团队负责人(带头人),教育部新世纪优秀人才,陕西省三秦学者,陕西省教学名师,全国高校黄大年式教师团队核心成员(排名第二)。现任西北大学信息科学与技术学院院长、软件学院院长,新型网络智能信息服务国家地方联合工程研究中心常务副主任,文化遗产数字化国家地方联合工程研究中心副主任,陕西省面向领域应用的人工智能技术学科创新引智基地主任,陕西省丝绸之路文化遗产数字化保护与传承协同创新中心主任。兼任中国计算机自动测量与控制技术协会理事、陕西省图象图形学学会副理事长等职。先后担任IoTaas 2020、CBD 2021、ICIPMC 2022/2023等多个国际学术会议主席。从事智能信息处理、文物数字化保护技术等方面的研究与教学工作。主持国家重点研发课题、国家自然科学基金等科研项目20多项。曾在日本国立Toyohashi科技大学作访问学者。在IEEE TIP、TMM、TKDE、TITB、Journal of Cultural Heritage、《中国科学》等国内外刊物及CVPR、WWW、IJCAI等重要国际学术会议上发表一批学术论文,获授权发明专利20多项,其中转化应用7项,获国家教学成果二等奖、陕西省科学技术二等奖等教学科研奖励。" ]
PENG Jinye, YU Zhe, QU Shuyi, et al. A survey of image inpainting methods based on deep learning[J]. Journal of Northwest University (Natural Science Edition), 2023, 53(6): 943-963. DOI: 10.16152/j.cnki.xdxbzr.2023-06-006.
Image inpainting is the process of restoring damaged, missing, or corrupted regions of an image with computer algorithms and image processing techniques. Its objective is to produce visually plausible, coherent structure and texture in the repaired regions while remaining as faithful as possible to the appearance and content of the original image. Traditional inpainting techniques rely predominantly on rules and heuristics, exploiting low-level cues such as local pixel relationships, edge information, and texture statistics, and therefore struggle with images that carry complex semantics. In recent years, deep learning has become the mainstream approach to image inpainting owing to its powerful feature extraction capabilities: trained on large-scale datasets, deep convolutional neural networks and generative adversarial networks automatically learn high-level features and complex semantic information. However, surveys of image inpainting remain scarce while deep learning techniques evolve rapidly, so a systematic categorization and summary of existing methods is needed to foster their effective application and development in this field. This article provides a systematic review and comprehensive overview of deep-learning-based image inpainting methods, organizing them from the perspective of inpainting strategy. We analyze the strengths and limitations of each category of methods; summarize commonly used datasets, quantitative evaluation metrics, and performance comparisons of representative approaches; and discuss the open challenges and promising directions for future research in image inpainting.
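To make the setup described above concrete, the following is a minimal sketch, assuming PyTorch, of the mask-conditioned encoder-decoder pattern that most learning-based inpainting methods share: the network receives the image with its holes zeroed out together with the binary mask, predicts content for the holes, and composites the prediction back over the known pixels. The tiny architecture and all names here (ToyInpaintingNet, etc.) are illustrative assumptions, not the design of any specific method covered by this survey.

```python
# Minimal mask-conditioned inpainting sketch (illustrative, not a surveyed method).
import torch
import torch.nn as nn

class ToyInpaintingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels + 1 mask channel (1 = missing, 0 = known).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        # Hide the hole region, then let the network predict it from context.
        x = torch.cat([image * (1 - mask), mask], dim=1)
        pred = self.decoder(self.encoder(x))
        # Composite: keep known pixels, fill only the hole with the prediction.
        return image * (1 - mask) + pred * mask

net = ToyInpaintingNet()
image = torch.rand(1, 3, 64, 64)       # dummy RGB image with values in [0, 1]
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0          # square hole to be filled
completed = net(image, mask)           # shape (1, 3, 64, 64)
print(completed.shape)
```

In practice such a network would be trained on the large-scale datasets summarized in the survey, typically with reconstruction losses and, for GAN-based methods, an additional adversarial loss.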
digital image processing; image inpainting; deep learning; computer vision
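As a concrete illustration of the full-reference quantitative metrics commonly used to score inpainting results, the sketch below computes PSNR and SSIM between a ground-truth image and a restored one. It assumes NumPy and a recent scikit-image (0.19 or later, for the channel_axis argument); the random arrays merely stand in for a real image pair.

```python
# Hedged sketch of two standard full-reference inpainting metrics: PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.rand(64, 64, 3)                      # ground truth in [0, 1]
restored = original + 0.05 * np.random.randn(64, 64, 3)   # stand-in for an inpainted result
restored = np.clip(restored, 0.0, 1.0)

psnr = peak_signal_noise_ratio(original, restored, data_range=1.0)
ssim = structural_similarity(original, restored, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

Higher values indicate closer agreement with the ground truth; learned perceptual metrics such as LPIPS, and distribution-level scores such as FID, are often reported alongside them.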