Automatic texture exemplar extraction combining deep learning and broad learning
Abstract
Objective A texture exemplar is an image that characterizes texture features. Exemplar diversity is crucial in texture synthesis tasks: it brings a richer, more varied, and more realistic appearance to the synthesized textures while offering artists and designers more creative inspiration and freedom. At present, texture exemplars are obtained mainly through manual cropping or automatic extraction algorithms. Manually cropping high-quality texture exemplars from a large number of images is labor- and time-intensive, and the results are easily driven by subjectivity and limited in diversity. The current state-of-the-art automatic extraction algorithm, the Trimmed T-CNN (texture convolutional neural network) model based on a convolutional neural network, suffers from slow inference. Therefore, this paper aims to exploit the rich image resources on the Internet to automatically and quickly crop ideal and diverse texture exemplars from all kinds of images, giving users more choices. Method We propose a method that combines deep learning and broad learning to automatically extract texture exemplars from raw images. To obtain ideal exemplars, a residual feature pyramid network first extracts feature maps that effectively identify exemplar candidates in the input image, and a region proposal network then quickly and automatically generates a large number of candidate texture exemplar regions. Next, a broad learning system classifies these candidate regions. Finally, a scoring criterion rates the classification results of the broad learning system to screen out the ideal texture exemplars. Result To verify the effectiveness of the proposed method, a large set of ideal texture exemplars was collected and divided into six classes for experimental validation. The accuracy of our model reaches 94.66%. Compared with the state-of-the-art Trimmed T-CNN, our model improves accuracy by 0.22% and runs faster: for images with resolutions of 512 × 512 pixels, 1 024 × 1 024 pixels, and 2 048 × 2 048 pixels, the runtime is reduced by 1.393 8 s, 1.864 3 s, and 2.368 7 s, respectively. Conclusion The proposed automatic texture exemplar extraction algorithm combines the advantages of deep learning and broad learning, making texture exemplar extraction more accurate and efficient.
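To give a concrete picture of the candidate-generation stage, the following is a minimal sketch that uses the off-the-shelf ResNet-FPN backbone and region proposal network from torchvision as stand-ins for the residual feature pyramid network and region proposal network described above; the paper's exact architecture, training, and scoring criterion are not reproduced, and the input image tensor is a placeholder.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained detector used only for its ResNet-FPN backbone and RPN
# (downloads COCO weights on first use).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)  # placeholder for a real photograph
with torch.no_grad():
    # Resize/normalize the input into the ImageList format the detector expects.
    images, _ = model.transform([image])
    # Multi-scale feature maps from the residual feature pyramid backbone.
    features = model.backbone(images.tensors)
    # Class-agnostic candidate boxes from the region proposal network.
    proposals, _ = model.rpn(images, features)

# Each box (x1, y1, x2, y2) is a candidate texture exemplar region; the crops
# would then be classified by a broad learning system and ranked by a scoring
# criterion, as described in the abstract.
candidate_boxes = proposals[0]
print(candidate_boxes.shape)  # e.g. torch.Size([1000, 4]) in the resized image space
```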
Keywords broad learning; convolutional neural network (CNN); texture exemplar extraction; object detection; region proposal network; feature pyramid network (FPN)
Automatic texture exemplar extraction with joint deep and broad learning models
Wu Huisi, Liang Chongxin, Yan Wei, Wen Zhenkun (College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China)
Abstract
Objective A texture exemplar is an input sample or template for texture synthesis or generation that contains the desired texture features and structures. Texture synthesis generates new texture images by combining or replicating one or more texture samples. In exemplar-based texture synthesis, the diversity and structure of the texture exemplar play a decisive role in the quality of the synthesized result. In computer vision, texture exemplar diversity is crucial for texture synthesis tasks because it brings a richer, more diverse, and more realistic appearance to the synthesized textures. At the same time, it offers greater creative inspiration and design ideas to artists and designers. At present, texture exemplars can be obtained from multiple sources, such as public texture datasets, images clipped from the Internet, or photography; that is, texture exemplars are mostly obtained through manual cropping or automatic extraction algorithms. However, not everyone is an artist, and extracting a good texture sample or cropping a small texture exemplar from an existing image is difficult for ordinary people. In addition, manually cropping high-quality texture samples from a large number of images costs texture artists considerable effort and time, and this approach is easily driven by subjectivity and limited in diversity. With the development of deep learning, the current state-of-the-art automatic texture exemplar extraction algorithm is the Trimmed T-CNN model based on a convolutional neural network (CNN). It can effectively extract a variety of texture exemplars from the input image. However, the model uses a selective search algorithm to generate candidate regions; this process is time-consuming and computationally complex, so the model suffers from slow inference. For these reasons, this study aims to use the rich image resources on the Internet to automatically, quickly, and accurately crop ideal and diverse texture exemplars from various images, providing users with more choices and better meeting the requirements of texture synthesis tasks. Method Following the idea of object detection, we propose an automatic texture exemplar extraction algorithm that combines deep learning and broad learning: candidate texture exemplar regions are generated by a CNN and then classified with a broad learning system. To obtain ideal texture exemplars, we first use a residual feature pyramid network to extract feature maps from the original image, which effectively identifies exemplar candidates in the input image, and then use a region proposal network to automatically and quickly obtain a large number of multi-scale candidate regions. Subsequently, a broad learning system classifies the candidate texture exemplar regions extracted in the previous step. Finally, we design a scoring criterion based on classification accuracy, distribution characteristics, and size, and use it to score the classification results of the broad learning system and screen out the ideal texture exemplars. Result To verify the effectiveness of the proposed method, we first collect a large number of ideal texture exemplars with distinguishable and representative features as a training dataset and divide them into six classes based on size and regularity for experimental verification. Extensive qualitative and quantitative experiments are conducted. The results show that the accuracy of our model reaches 94.66%. Compared with the state-of-the-art Trimmed T-CNN, our model improves accuracy by 0.22% and runs faster. In particular, for images with resolutions of 512 × 512 pixels, 1 024 × 1 024 pixels, and 2 048 × 2 048 pixels, the runtime of our algorithm is reduced by 1.393 8 s, 1.864 3 s, and 2.368 7 s, respectively. Conclusion We propose an automatic texture exemplar extraction algorithm based on deep learning and broad learning that effectively combines the advantages of CNNs and broad learning classification systems. Experimental results show that our model outperforms several state-of-the-art texture exemplar extraction methods, making texture exemplar extraction more accurate and efficient.
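As an illustration of the classification stage, the following is a minimal NumPy sketch of a generic broad learning system classifier (random feature-mapping nodes, nonlinear enhancement nodes, and output weights solved by ridge regression); all layer sizes, the regularization coefficient, and the toy data are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

class BroadLearningSystem:
    """Generic BLS classifier: random feature-mapping nodes, nonlinear
    enhancement nodes, and closed-form ridge-regression output weights.
    Labels Y are expected to be one-hot encoded."""

    def __init__(self, n_feature_groups=10, n_feature_nodes=20,
                 n_enhance_nodes=200, ridge_lambda=1e-3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_feature_groups = n_feature_groups
        self.n_feature_nodes = n_feature_nodes
        self.n_enhance_nodes = n_enhance_nodes
        self.ridge_lambda = ridge_lambda

    def _mapped_features(self, X):
        # Linear feature-mapping nodes (sparse-autoencoder fine-tuning omitted).
        return np.concatenate([X @ W + b for W, b in self.feature_weights], axis=1)

    def _enhancement_nodes(self, Z):
        # Nonlinear enhancement nodes built on top of the mapped features.
        return np.tanh(Z @ self.W_h + self.b_h)

    def fit(self, X, Y):
        d = X.shape[1]
        self.feature_weights = [
            (self.rng.standard_normal((d, self.n_feature_nodes)),
             self.rng.standard_normal(self.n_feature_nodes))
            for _ in range(self.n_feature_groups)
        ]
        Z = self._mapped_features(X)
        self.W_h = self.rng.standard_normal((Z.shape[1], self.n_enhance_nodes))
        self.b_h = self.rng.standard_normal(self.n_enhance_nodes)
        A = np.concatenate([Z, self._enhancement_nodes(Z)], axis=1)
        # Closed-form ridge regression for the output weights: the reason BLS
        # training is fast compared with iterative deep-network training.
        self.W_out = np.linalg.solve(
            A.T @ A + self.ridge_lambda * np.eye(A.shape[1]), A.T @ Y)
        return self

    def predict(self, X):
        Z = self._mapped_features(X)
        A = np.concatenate([Z, self._enhancement_nodes(Z)], axis=1)
        return np.argmax(A @ self.W_out, axis=1)

# Tiny usage example with random stand-in features for six exemplar classes.
X_train = np.random.rand(600, 128)
y_train = np.repeat(np.arange(6), 100)
Y_onehot = np.eye(6)[y_train]
bls = BroadLearningSystem().fit(X_train, Y_onehot)
print(bls.predict(X_train[:5]))
```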
Keywords
broad learning; convolutional neural network (CNN); texture exemplar extraction; object detection; region proposal network; feature pyramid network (FPN)