
Is super-resolution a good topic for publishing papers?

abc159598 · 5 answers · 1043 views
chinbk | from Beijing
All I can say is that it's getting harder; papers that merely tweak an existing network are increasingly difficult to publish.
A simple example: train a so-called Very-Deep Super-Resolution (VDSR) network, then use it to estimate a high-resolution image from a single low-resolution image. For details on VDSR, see the references at the end of this post.
Super-resolution is the process of creating high-resolution images from low-resolution images. This example considers single image super-resolution (SISR), where the goal is to recover one high-resolution image from a single low-resolution image. SISR is challenging because the high-frequency content of an image usually cannot be recovered from its low-resolution version, and without high-frequency information the quality of the high-resolution result is limited. Moreover, SISR is an ill-posed problem: one low-resolution image can correspond to several plausible high-resolution images.


The VDSR network

VDSR is a convolutional neural network designed for single image super-resolution. It learns a mapping between low- and high-resolution images. Such a mapping is plausible because low- and high-resolution images share most of their content and differ mainly in high-frequency detail.
VDSR uses a residual learning strategy: the network learns to estimate a residual image. The residual image is the difference between a high-resolution reference image and a low-resolution image that has been upscaled with bicubic interpolation to match the size of the reference. The residual image therefore carries the high-frequency detail of the image.
VDSR estimates the residual from the luminance of a color image. The luminance channel Y represents the brightness of each pixel as a linear combination of the red, green, and blue pixel values. In contrast, the two chrominance channels Cb and Cr are different linear combinations of the red, green, and blue values that encode color-difference information. VDSR is trained on the luminance channel only, because human perception is more sensitive to changes in brightness than to changes in color.
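To make the training target concrete, here is a minimal sketch of how a residual target can be formed from one image; this is an illustration under assumptions (Image Processing Toolbox available, an arbitrary scale factor of 4, a stand-in image), not the example's actual code, which builds these patches inside vdsrImagePatchDatastore:
Ihr = im2double(imread('peppers.png'));     % stand-in high-resolution reference
Yhr = rgb2ycbcr(Ihr);
Yhr = Yhr(:,:,1);                           % luminance channel only
Ylr = imresize(Yhr,1/4,'bicubic');          % simulated low-resolution input
Yup = imresize(Ylr,size(Yhr),'bicubic');    % bicubic upscaling back to reference size
residual = Yhr - Yup;                       % training target: the high-frequency detail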


After the VDSR network learns to estimate the residual image, you can reconstruct a high-resolution image by adding the estimated residual to the upsampled low-resolution image and converting the result back to the RGB color space.
The scale factor relates the size of the reference image to the size of the low-resolution image. As the scale factor increases, SISR becomes more ill-posed, because the low-resolution image loses more of the high-frequency content. VDSR addresses this by using a large receptive field.
Download the training and test data

Download the IAPR TC-12 Benchmark, which contains 20,000 natural images including people, animals, cities, and more. The data can be downloaded with the downloadIAPRTC12Data function; the data file is about 1.8 GB.
This post trains the network on a small subset of the IAPR TC-12 Benchmark. All images are 32-bit JPEG color images.
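A sketch of the download step; the helper function and the data URL below are recalled from the original MathWorks example (treat both as assumptions), and this also defines the imagesDir variable used next:
imagesDir = tempdir;
url = 'http://www-i6.informatik.rwth-aachen.de/imageclef/resources/iaprtc12.tgz';  % assumed URL
downloadIAPRTC12Data(url,imagesDir);  % helper shipped with the original example (assumed)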
trainImagesDir = fullfile(imagesDir,'iaprtc12','images','02');
exts = {'.jpg','.bmp','.png'};
trainImages = imageDatastore(trainImagesDir,'FileExtensions',exts);
The number of training images:
numel(trainImages.Files)
ans = 616
Define a mini-batch datastore for training

A mini-batch datastore feeds the training data to the network; here it is implemented by the custom vdsrImagePatchDatastore class, which extracts patches from the low-resolution images and upscales them with various scale factors. Each mini-batch contains 64 patches of size 41-by-41 pixels, extracted at random positions in the images during training. To train a multi-scale network, set 'ScaleFactor' to [2 3 4].
miniBatchSize = 64;
scaleFactors = [2 3 4];
source = vdsrImagePatchDatastore(trainImages,...
'MiniBatchSize',miniBatchSize,...
'PatchSize',41,...
'BatchesPerImage',1,...
'ScaleFactor',scaleFactors);
vdsrImagePatchDatastore supplies mini-batches of data to the network. Perform a read operation on the datastore to inspect the data:
inputBatch = read(source);
summary(inputBatch)
Set up the VDSR layers
This example defines the VDSR network with 41 layers from the MATLAB Neural Network Toolbox, including:
· imageInputLayer - Image input layer
· convolution2dLayer - 2-D convolution layer for convolutional neural networks
· reluLayer - Rectified linear unit (ReLU) layer
· regressionLayer - Regression output layer for a neural network
The first layer, the image input layer, operates on image patches. The patch size is based on the network receptive field, the spatial image region that affects the response of the topmost layer. For a network of depth D built from stacked 3-by-3 convolutions, the receptive field is (2D + 1)-by-(2D + 1). Since VDSR is a 20-layer network, the receptive field and the patch size are 41-by-41. The image input layer accepts images with one channel, because VDSR is trained on the luminance channel only.
networkDepth = 20;
firstLayer = imageInputLayer([41 41 1],'Name','InputLayer','Normalization','none');
The image input layer is followed by a 2-D convolutional layer containing 64 filters of size 3-by-3; a padding of 1 keeps the feature maps the same size as the input. Each convolutional layer is followed by a ReLU layer, which introduces nonlinearity into the network.
convolutionLayer = convolution2dLayer(3,64,'Padding',1, ...
'Name','Conv1');
convolutionLayer.Weights = sqrt(2/(9*64))*randn(3,3,1,64);  % He-style initialization [3]
convolutionLayer.Bias = zeros(1,1,64);
A ReLU layer:
relLayer = reluLayer('Name','ReLU1');
The middle layers contain 18 alternating convolutional and ReLU layers. Each convolutional layer contains 64 filters of size 3-by-3-by-64.
middleLayers = [convolutionLayer relLayer];
for layerNumber = 2:networkDepth-1
conv2dLayer = convolution2dLayer(3,64,...
'Padding',[1 1],...
'Name',['Conv' num2str(layerNumber)]);
% Initialize the weights (He initialization [3])
conv2dLayer.Weights = sqrt(2/(9*64))*randn(3,3,64,64);
conv2dLayer.Bias = zeros(1,1,64);
relLayer = reluLayer('Name',['ReLU' num2str(layerNumber)]);
middleLayers = [middleLayers conv2dLayer relLayer];
end
The second-to-last layer is a convolutional layer with a single filter of size 3-by-3-by-64, which reconstructs the image.
conv2dLayer = convolution2dLayer(3,1,...
'NumChannels',64,...
'Padding',[1 1],...
'Name',['Conv' num2str(networkDepth)]);
conv2dLayer.Weights = sqrt(2/(9*64))*randn(3,3,64,1);
conv2dLayer.Bias = zeros(1,1,1);
The final layer is a regression layer, which computes the mean squared error between the residual image and the network prediction.
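For an H-by-W residual patch $R$ and network prediction $\hat R$, the regression objective is, up to MATLAB's internal normalization (its documented loss is the half-mean-squared error):

$$\mathcal{L}(R,\hat R)=\frac{1}{HW}\sum_{x=1}^{H}\sum_{y=1}^{W}\left(R_{x,y}-\hat R_{x,y}\right)^{2}$$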
finalLayers = [conv2dLayer regressionLayer('Name','FinalRegressionLayer')];
Connect all the layers to form the VDSR network:
layers = [firstLayer middleLayers finalLayers];
Specify the training parameters

Train the network using stochastic gradient descent with momentum (SGDM). The learning rate is initially 0.1 and is then reduced by a factor of 10 every 10 epochs.
maxEpochs = 100;
epochIntervals = 1;
initLearningRate = 0.1;
learningRateFactor = 0.1;
l2reg = 0.0001;
options = trainingOptions('sgdm',...
'Momentum',0.9,...
'InitialLearnRate',initLearningRate,...
'LearnRateSchedule','piecewise',...
'LearnRateDropPeriod',10,...
'LearnRateDropFactor',learningRateFactor,...
'L2Regularization',l2reg,...
'MaxEpochs',maxEpochs,...
'MiniBatchSize',miniBatchSize,...
'GradientThresholdMethod','l2norm',...
'GradientThreshold',0.01);
Train the network

Train the VDSR network with the trainNetwork function:
modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
net = trainNetwork(source,layers,options);
save(['trainedVDSR-' modelDateTime '-Epoch-' num2str(maxEpochs*epochIntervals) 'ScaleFactors-' num2str(234) '.mat'],'net','options');
Perform single image super-resolution using the VDSR network
The steps for performing single image super-resolution (SISR) with the VDSR network are:
· Create a downsampled low-resolution image from a high-resolution reference image.
· Perform SISR on the low-resolution image using bicubic interpolation, a traditional image processing solution that does not rely on deep learning.
· Perform SISR on the low-resolution image using the VDSR network.
· Visually compare the high-resolution images reconstructed with bicubic interpolation and with VDSR.
· Evaluate the quality of the super-resolved images.
Create a low-resolution image

Create a low-resolution image that will be used to compare the result of deep-learning super-resolution with the result of traditional image processing techniques such as bicubic interpolation.
exts = {'.jpg','.png'};
fileNames = {'sherlock.jpg','car2.jpg','fabric.png','greens.jpg','hands1.jpg','kobi.png',...
'lighthouse.png','micromarket.jpg','office_4.jpg','onion.png','pears.png','yellowlily.jpg',...
'indiancorn.jpg','flamingos.jpg','sevilla.jpg','llama.jpg','parkavenue.jpg',...
'peacock.jpg','car1.jpg','strawberries.jpg','wagon.jpg'};
filePath = [fullfile(matlabroot,'toolbox','images','imdata') filesep];
filePathNames = strcat(filePath,fileNames);
testImages = imageDatastore(filePathNames,'FileExtensions',exts);
Display several of the test images:
montage(testImages)

Select one of the images to use as the reference image for super-resolution:
indx = 1; % image index
Ireference = readimage(testImages,indx);
Ireference = im2double(Ireference);
imshow(Ireference)
title('High-Resolution Reference Image')

Create a low-resolution version of the high-resolution reference image with a scale factor of 0.25:
scaleFactor = 0.25;
Ilowres = imresize(Ireference,scaleFactor,'bicubic');
imshow(Ilowres)
title('Low-Resolution Image')

Improve the image resolution using bicubic interpolation
The standard way to increase image resolution without deep learning is bicubic interpolation. Upscale the low-resolution image with bicubic interpolation so that the resulting high-resolution image is the same size as the reference image:
[nrows,ncols,np] = size(Ireference);
Ibicubic = imresize(Ilowres,[nrows ncols],'bicubic');
imshow(Ibicubic)
title('High-Resolution Image Obtained Using Bicubic Interpolation')

Improve the image resolution using the VDSR network
Recall that VDSR is trained on the luminance channel of an image only, because human perception is more sensitive to changes in brightness than to changes in color.
Convert the low-resolution image from the RGB color space to the luminance (Iy) and chrominance (Icb and Icr) channels with the rgb2ycbcr function:
Iycbcr = rgb2ycbcr(Ilowres);
Iy = Iycbcr(:,:,1);
Icb = Iycbcr(:,:,2);
Icr = Iycbcr(:,:,3);
Upscale the luminance channel and the two chrominance channels with bicubic interpolation. The upsampled chrominance channels, Icb_bicubic and Icr_bicubic, require no further processing.
Iy_bicubic = imresize(Iy,[nrows ncols],'bicubic');
Icb_bicubic = imresize(Icb,[nrows ncols],'bicubic');
Icr_bicubic = imresize(Icr,[nrows ncols],'bicubic');
Pass Iy_bicubic through the VDSR network and observe the activations of the final (regression) layer; the network output is the desired residual image:
Iresidual = activations(net,Iy_bicubic,41);
Iresidual = double(Iresidual);
imshow(Iresidual,[])
title('Residual Image from VDSR')

Add the residual image to the upscaled luminance component to obtain the high-resolution VDSR luminance component.
Isr = Iy_bicubic + Iresidual;
Concatenate the high-resolution VDSR luminance component with the upscaled color components and convert the image back to the RGB color space with the ycbcr2rgb function; the result is the final VDSR high-resolution color image:
Ivdsr = ycbcr2rgb(cat(3,Isr,Icb_bicubic,Icr_bicubic));
imshow(Ivdsr)
title('High-Resolution Image Obtained Using VDSR')

Quantitative comparison
To take a closer look at the high-resolution images, specify a region of interest (ROI) with a vector roi in the format [x y width height], where x and y are the coordinates of the ROI and width and height are its width and height.
roi = [320 30 480 400];
Crop the high-resolution images to this ROI and display the result:
montage({imcrop(Ibicubic,roi),imcrop(Ivdsr,roi)})
title('High-Resolution Results Using Bicubic Interpolation (Left) vs. VDSR (Right)');

The results show that the VDSR high-resolution image has clearer details and sharper edges.
Use image quality metrics to quantitatively compare the bicubic and VDSR high-resolution images.
Compute the peak signal-to-noise ratio (PSNR) of each image; a larger PSNR value generally indicates better image quality:
bicubicPSNR = psnr(Ibicubic,Ireference)
vdsrPSNR = psnr(Ivdsr,Ireference)
bicubicPSNR = 38.4747
vdsrPSNR = 39.4473
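As a reminder, PSNR is derived from the mean squared error and the peak signal value (1.0 for the im2double images used here):

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{\mathrm{peak}^{2}}{\mathrm{MSE}}\right)$$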
Compute the structural similarity index (SSIM) of each image. SSIM assesses the visual impact of three characteristics of an image: luminance, contrast, and structure. The closer the SSIM value is to 1, the better the test image agrees with the reference image:
bicubicSSIM = ssim(Ibicubic,Ireference)
vdsrSSIM = ssim(Ivdsr,Ireference)
bicubicSSIM = 0.9861
vdsrSSIM = 0.9878
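For windows $x$ and $y$ with means $\mu_x,\mu_y$, variances $\sigma_x^2,\sigma_y^2$, covariance $\sigma_{xy}$, and stabilizing constants $C_1,C_2$, the standard SSIM index is:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$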
Use the naturalness image quality evaluator (NIQE). NIQE is built on a set of quality-aware image features fitted to a multivariate Gaussian model; the score measures how far a test image deviates from this model of natural images. A smaller NIQE score indicates better perceptual quality:
bicubicNIQE = niqe(Ibicubic)
vdsrNIQE = niqe(Ivdsr)
bicubicNIQE = 5.1719
vdsrNIQE = 4.7463
Compute the average PSNR and SSIM over the full set of test images for scale factors 2, 3, and 4, using the superResolutionMetrics helper function to compute the averaged metrics:
superResolutionMetrics(net,testImages,scaleFactors);
Results for Scale factor 2

Average PSNR for Bicubic = 31.809683
Average PSNR for VDSR = 32.915853
Average SSIM for Bicubic = 0.938194
Average SSIM for VDSR = 0.953473

Results for Scale factor 3

Average PSNR for Bicubic = 28.170441
Average PSNR for VDSR = 28.802722
Average SSIM for Bicubic = 0.884381
Average SSIM for VDSR = 0.898248

Results for Scale factor 4

Average PSNR for Bicubic = 27.010839
Average PSNR for VDSR = 28.087250
Average SSIM for Bicubic = 0.861604
Average SSIM for VDSR = 0.882349
For every scale factor, VDSR scores better than bicubic interpolation on these metrics.
References
[1] Kim, J., J. K. Lee, and K. M. Lee. "Accurate Image Super-Resolution Using Very Deep Convolutional Networks." Proceedings of the IEEE® Conference on Computer Vision and Pattern Recognition. 2016, pp. 1646-1654.
[2] Grubinger, M., P. Clough, H. Müller, and T. Deselaers. "The IAPR TC-12 Benchmark: A New Evaluation Resource for Visual Information Systems." Proceedings of the OntoImage 2006 Language Resources For Content-Based Image Retrieval. Genoa, Italy. Vol. 5, May 2006, p. 10.
[3] He, K., X. Zhang, S. Ren, and J. Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026-1034.
hau | location unknown
Super-Resolution

1. [Super-Resolution] Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network


Authors: Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, et al.
Link:
https://arxiv.org/abs/1609.04802v5
Code:
https://github.com/alexjc/neural-enhance
Abstract:
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.


2. [Super-Resolution] Perceptual Losses for Real-Time Style Transfer and Super-Resolution


Authors: Justin Johnson, Alexandre Alahi, Li Fei-Fei
Link:
https://arxiv.org/abs/1603.08155v1
Code:
https://github.com/DmitryUlyanov/texture_nets
Abstract:
We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.


3. [Super-Resolution] Image Super-Resolution Using Deep Convolutional Networks


Authors: Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang
Link:
https://arxiv.org/abs/1501.00092v3
Code:
https://github.com/titu1994/Image-Super-Resolution
Abstract:
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.


4. [Super-Resolution] PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models


Authors: Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, Cynthia Rudin
Link:
https://arxiv.org/abs/2003.03808v3
Code:
https://github.com/tg-bomze/Face-Depixelizer
Abstract:
The primary aim of single-image super-resolution is to construct high-resolution (HR) images from corresponding low-resolution (LR) inputs. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require supervised training on databases of LR-HR image pairs). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee realistic outputs. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show proof of concept of our approach in the domain of face super-resolution (i.e., face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.


5. [Super-Resolution] Learning Enriched Features for Real Image Restoration and Enhancement


Authors: Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, Ling Shao
Link:
https://arxiv.org/abs/2003.06792v2
Code:
https://github.com/Rishit-dagli/MIRNet-TFJS
Abstract:
With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in surveillance, computational photography, medical imaging, and remote sensing. Recently, convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration task. Existing CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatially precise but contextually less robust results are achieved, while in the latter case, semantically reliable but spatially less accurate outputs are generated. In this paper, we present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network and receiving strong contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing several keys elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) spatial and channel attention mechanisms for capturing contextual information, and (d) attention based multi -scale feature aggregation. In a nutshell, our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on five real image benchmark datasets demonstrate that our method, named as MIRNet, achieves state-of-the-art results for a variety of image processing tasks, including image denoising, super-resolution, and image enhancement.


tengxiao | from Beijing
Publishing good papers is getting harder. A few years ago you could perhaps tweak the network or the loss, stack up tricks for a small gain, and publish fairly easily... That approach can still produce weaker papers today.
It may be worth exploring datasets or lightweight networks instead. A few days ago I compiled several dozen papers on GAN-based super-resolution, with download links; take a look below if you are interested. Dig into them and there should still be some workable angles.
Related GAN reading:

  • GANs are six years old! Time to take stock!
  • Short on labeled data? How about combining SSL (semi-supervised learning) with GANs?
  • A beginner's survey | Too many GAN models, not sure which to choose?
  • How about GANs for image deraining?
  • A quick look at how GANs exaggerate faces into caricatures!
  • Face frontalization! Can GANs unmask a profile-only face?
  • Predicting facial aging with GANs?
  • How about combining AL (active learning) with GANs?
  • How do GANs handle anomaly detection?
  • Virtual try-on! A look at several recent GAN papers
  • Makeup transfer! A quick look at several GAN-based papers
  • [1] Where does GAN-based medical image generation stand today?
  • 01 - The GAN objective, explained simply
  • Hundreds of GAN papers, downloaded and packaged, plus a recent GAN survey!
Prologue

One day you stretch, yawn, and gaze out the window, musing on how time slips by... Suddenly your phone buzzes with a WeChat message, and you shuffle over, mildly annoyed.
"Dude, I just spotted a total stunner at the supermarket!"
Knowing how little of the world your old friend has seen, you fire back:
"A stunner, you? With your taste???"
"Hold on! ..."
"What are you up to..."
A moment later a picture comes through:


"Is it really OK to sneak photos of people... and where's the face??..."
The phone lights up again:
"I just cropped out the background; let me crop the face for you~"



You: "???..."


"She was kind of far away, so the shot came out small; you probably can't make it out..."
Introduction

From SRGAN: The highly challenging task of estimating a high-resolution (HR) image from its low-resolution (LR) counterpart is referred to as super-resolution (SR).
Image super-resolution (SR) generally means increasing resolution, e.g. going from 256x256 up to 512x512, a scale factor of 2. It is clearly an ill-posed problem of filling in pixels out of thin air, with no unique solution. Applications of SR are naturally broad. The usual setup takes a low-resolution image LR as input and processes it to obtain a high-resolution image HR.
Note, however, that in real-world scenarios, matched LR-HR image pairs are extremely hard to obtain. A great many papers today fabricate such pairs as training sets, e.g. downsampling the original HR image to get LR and then learning the LR-to-HR mapping. But in actual applications, is the relationship between LR and HR really the "downsampling" we assume? That is unknown and hard to simulate; hand-crafted downsampling and other artificial degradations are largely wishful thinking. Extra caution is warranted for medical image SR.
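For concreteness, a minimal MATLAB sketch of the synthetic degradation pipeline this paragraph questions; the sample image, the 4x scale, and the blur strength are arbitrary assumptions:
HR = im2double(imread('peppers.png'));                       % stand-in "original" HR image
scale = 4;
LRplain = imresize(HR,1/scale,'bicubic');                    % plain bicubic downsampling
LRblur = imresize(imgaussfilt(HR,1.2),1/scale,'bicubic');    % blur first, then downsample
% Real-world LR images rarely follow either hand-picked degradation model.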
Today's roundup covers image super-resolution with GANs. I first summarize two landmark, widely known super-resolution GANs, SRGAN and ESRGAN, briefly mention a paper that uses a network to collect low-resolution data, and finally list 70+ papers on GAN-based SR. I hope this serves as a reference for anyone exploring the area!


1. (2017-05-25) SRGAN: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

https://arxiv.xilesou.top/pdf/1609.04802.pdf
Despite breakthroughs in the accuracy and speed of single image super-resolution with faster and deeper convolutional networks, one central question remained largely unsolved: how do we recover finer texture details at large upscaling factors? Prior work focused mainly on mean-squared-error reconstruction, evaluated with metrics such as PSNR, but the results typically lack high-frequency detail and are visually unsatisfying. The paper proposes SRGAN, the first generative adversarial network (GAN) for image super-resolution (SR), capable of inferring photo-realistic natural images at 4x upscaling. To achieve this, it introduces a perceptual loss function consisting of an adversarial loss and a content loss, where the content loss measures perceptual similarity rather than similarity in pixel space. Mean opinion score (MOS) tests demonstrate the method's superior performance.
The paper's comparison figure showed four 4x results side by side: bicubic interpolation, an MSE-driven convolutional network, SRGAN, and the original reference image.


Optimization objective and losses (the authors call the full generator loss the perceptual loss: content loss plus generator adversarial loss); the original formula images were lost in the repost, so they are reconstructed below from the paper.
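In the paper's notation ($G_{\theta_G}$ is the generator, $D_{\theta_D}$ the discriminator, $\phi_{i,j}$ a VGG feature map of size $W_{i,j}\times H_{i,j}$):

$$\min_{\theta_G}\max_{\theta_D}\;\mathbb{E}_{I^{HR}\sim p_{\text{train}}}\big[\log D_{\theta_D}(I^{HR})\big]+\mathbb{E}_{I^{LR}\sim p_G}\big[\log\big(1-D_{\theta_D}(G_{\theta_G}(I^{LR}))\big)\big]$$

$$l^{SR}=\underbrace{l^{SR}_{X}}_{\text{content loss}}+\underbrace{10^{-3}\,l^{SR}_{Gen}}_{\text{adversarial loss}}$$

$$l^{SR}_{VGG/i,j}=\frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\Big(\phi_{i,j}(I^{HR})_{x,y}-\phi_{i,j}\big(G_{\theta_G}(I^{LR})\big)_{x,y}\Big)^{2}$$

$$l^{SR}_{Gen}=\sum_{n=1}^{N}-\log D_{\theta_D}\big(G_{\theta_G}(I^{LR})\big)$$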


The authors ran quite a few ablation studies, which I won't go into here.
Finally, one of the experimental results, a veritable public shaming of SSIM and PSNR: SRGAN scores lower than SRResNet on PSNR and SSIM, yet on MOS, i.e. human visual judgment, it beats SRResNet hands down.


2. (2018-09-17) ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

https://arxiv.xilesou.top/pdf/1809.00219.pdf
SRGAN was pioneering work, but its details remained unsatisfying, so the authors revisited three key components of SRGAN: the network architecture, the adversarial loss, and the perceptual loss, improving each to obtain the Enhanced SRGAN (ESRGAN). In particular, they introduce the Residual-in-Residual Dense Block (RRDB), without batch normalization, as the basic building unit. They also borrow the relativistic GAN idea and have the discriminator predict relative realness. Finally, the perceptual loss is computed on features before activation, which provides stronger supervision for brightness consistency and texture recovery. With these improvements, ESRGAN achieves better visual quality and more realistic natural textures than SRGAN, and won first place in the PIRM2018-SR challenge.
Architectural improvements:
BN helps in coarse-grained tasks such as classification, but in tasks like style transfer, where each single image has its own distinctive statistics, batch statistics are inappropriate and tend to wash out the image's intrinsic detail. The authors therefore drop BN; since that makes training harder, they adopt dense blocks, a structure that more readily improves network performance.


Improved adversarial scheme:
This follows the design of the relativistic GAN.


Adversarial loss (the post sketched a rough derivation starting from the original GAN; the formula images were lost, so a reconstruction follows).
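Reconstructed from the ESRGAN paper: with $C(\cdot)$ the raw discriminator output, $\sigma$ the sigmoid, $x_r$ real and $x_f$ generated samples, the relativistic average discriminator and the resulting losses are:

$$D_{Ra}(x_r,x_f)=\sigma\big(C(x_r)-\mathbb{E}_{x_f}[C(x_f)]\big)$$

$$L_D^{Ra}=-\mathbb{E}_{x_r}\big[\log D_{Ra}(x_r,x_f)\big]-\mathbb{E}_{x_f}\big[\log\big(1-D_{Ra}(x_f,x_r)\big)\big]$$

$$L_G^{Ra}=-\mathbb{E}_{x_r}\big[\log\big(1-D_{Ra}(x_r,x_f)\big)\big]-\mathbb{E}_{x_f}\big[\log D_{Ra}(x_f,x_r)\big]$$

Unlike the standard GAN loss, both real and generated samples contribute gradients to the generator here.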


Improved perceptual loss:
Compute the loss on features before the ReLU activation; these features retain richer, more detailed responses.


Network interpolation:
A GAN can be too much of a free spirit, producing details that look unnatural, while MSE-trained networks lean toward smooth, blurry results that lose detail. Network interpolation combines the two: first train a conventional PSNR-oriented SR network, fine-tune it to obtain the GAN generator, and then take a weighted sum of the two networks' parameters:
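A reconstruction of the interpolation formula from the ESRGAN paper:

$$\theta_G^{\text{INTERP}}=(1-\alpha)\,\theta_G^{\text{PSNR}}+\alpha\,\theta_G^{\text{GAN}}$$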


As shown in the paper's figure, adjusting alpha trades off between the two behaviors to find a preferred or balanced intermediate result.


3. (2018-07-30) To learn image super-resolution, use a GAN to learn how to do image degradation first

https://arxiv.xilesou.top/pdf/1807.11458.pdf
As mentioned above, SR training usually manufactures low-resolution images via simple bilinear downsampling (occasionally blurring before downsampling) and then super-resolves them. In real-world settings this approach does not work well.
The paper therefore proposes a two-stage process. First, a High-to-Low GAN is trained to learn how to downsample high-resolution images, using only unpaired high- and low-resolution images. Once this works, the network's outputs are used to train a Low-to-High GAN for super-resolution, this time with paired low- and high-resolution images. The main result is that this pipeline effectively improves the quality of real-world low-resolution images. The paper applies the method to face super-resolution and validates its effectiveness; it may also apply to other image categories.


Experimental results:


001  (2020-03-4) Turbulence Enrichment using Generative Adversarial Networks
https://arxiv.xilesou.top/pdf/2003.01907.pdf

002  (2020-03-2) MRI Super-Resolution with GAN and 3D Multi-Level DenseNet  Smaller Faster and Better
https://arxiv.xilesou.top/pdf/2003.01217.pdf

003  (2020-02-29) Joint Face Completion and Super-resolution using Multi-scale Feature Relation Learning
https://arxiv.xilesou.top/pdf/2003.00255.pdf

004  (2020-02-21) Generator From Edges  Reconstruction of Facial Images
https://arxiv.xilesou.top/pdf/2002.06682.pdf

005  (2020-01-22) Optimizing Generative Adversarial Networks for Image Super Resolution via Latent Space Regularization
https://arxiv.xilesou.top/pdf/2001.08126.pdf

006  (2020-01-21) Adaptive Loss Function for Super Resolution Neural Networks Using Convex Optimization Techniques
https://arxiv.xilesou.top/pdf/2001.07766.pdf

007  (2020-01-10) Segmentation and Generation of Magnetic Resonance Images by Deep Neural Networks
https://arxiv.xilesou.top/pdf/2001.05447.pdf

008  (2019-12-15) Image Processing Using Multi-Code GAN Prior
https://arxiv.xilesou.top/pdf/1912.07116.pdf

009  (2020-02-6) Quality analysis of DCGAN-generated mammography lesions
https://arxiv.xilesou.top/pdf/1911.12850.pdf

010  (2019-12-19) A deep learning framework for morphologic detail beyond the diffraction limit in infrared spectroscopic imaging
https://arxiv.xilesou.top/pdf/1911.04410.pdf

011  (2019-11-8) Joint Demosaicing and Super-Resolution (JDSR)  Network Design and Perceptual Optimization
https://arxiv.xilesou.top/pdf/1911.03558.pdf

012  (2019-11-4) FCSR-GAN  Joint Face Completion and Super-resolution via Multi-task Learning
https://arxiv.xilesou.top/pdf/1911.01045.pdf

013  (2019-10-9) Wavelet Domain Style Transfer for an Effective Perception-distortion Tradeoff in Single Image Super-Resolution
https://arxiv.xilesou.top/pdf/1910.04074.pdf

014  (2020-02-3) Optimal Transport CycleGAN and Penalized LS for Unsupervised Learning in Inverse Problems
https://arxiv.xilesou.top/pdf/1909.12116.pdf

015  (2019-08-26) RankSRGAN  Generative Adversarial Networks with Ranker for Image Super-Resolution
https://arxiv.xilesou.top/pdf/1908.06382.pdf

016  (2019-07-24) Progressive Perception-Oriented Network for Single Image Super-Resolution
https://arxiv.xilesou.top/pdf/1907.10399.pdf

017  (2019-07-26) Boosting Resolution and Recovering Texture of micro-CT Images with Deep Learning
https://arxiv.xilesou.top/pdf/1907.07131.pdf

018  (2019-07-15) Enhanced generative adversarial network for 3D brain MRI super-resolution
https://arxiv.xilesou.top/pdf/1907.04835.pdf

019  (2019-07-5) MRI Super-Resolution with Ensemble Learning and Complementary Priors
https://arxiv.xilesou.top/pdf/1907.03063.pdf

020  (2019-11-25) Image-Adaptive GAN based Reconstruction
https://arxiv.xilesou.top/pdf/1906.05284.pdf

021  (2019-06-13) A Hybrid Approach Between Adversarial Generative Networks and Actor-Critic Policy Gradient for Low Rate High-Resolution Image Compression
https://arxiv.xilesou.top/pdf/1906.04681.pdf

022  (2019-06-4) A Multi-Pass GAN for Fluid Flow Super-Resolution
https://arxiv.xilesou.top/pdf/1906.01689.pdf

023  (2019-05-23) Generative Imaging and Image Processing via Generative Encoder
https://arxiv.xilesou.top/pdf/1905.13300.pdf

024  (2019-05-26) Cross-Resolution Face Recognition via Prior-Aided Face Hallucination and Residual Knowledge Distillation
https://arxiv.xilesou.top/pdf/1905.10777.pdf

025  (2019-05-9) 3DFaceGAN  Adversarial Nets for 3D Face Representation Generation and Translation
https://arxiv.xilesou.top/pdf/1905.00307.pdf

026  (2019-08-27) Super-Resolved Image Perceptual Quality Improvement via Multi-Feature Discriminators
https://arxiv.xilesou.top/pdf/1904.10654.pdf

027  (2019-03-28) SRDGAN  learning the noise prior for Super Resolution with Dual Generative Adversarial Networks
https://arxiv.xilesou.top/pdf/1903.11821.pdf

028  (2019-03-21) Bandwidth Extension on Raw Audio via Generative Adversarial Networks
https://arxiv.xilesou.top/pdf/1903.09027.pdf

029  (2019-03-6) DepthwiseGANs  Fast Training Generative Adversarial Networks for Realistic Image Synthesis
https://arxiv.xilesou.top/pdf/1903.02225.pdf

030  (2019-02-28) A Unified Neural Architecture for Instrumental Audio Tasks
https://arxiv.xilesou.top/pdf/1903.00142.pdf

031  (2019-02-28) Two-phase Hair Image Synthesis by Self-Enhancing Generative Model
https://arxiv.xilesou.top/pdf/1902.11203.pdf

032  (2019-10-23) GAN-based Projector for Faster Recovery with Convergence Guarantees in Linear Inverse Problems
https://arxiv.xilesou.top/pdf/1902.09698.pdf

033  (2019-02-17) Progressive Generative Adversarial Networks for Medical Image Super resolution
https://arxiv.xilesou.top/pdf/1902.02144.pdf

034  (2019-01-31) Compressing GANs using Knowledge Distillation
https://arxiv.xilesou.top/pdf/1902.00159.pdf

035  (2019-01-18) Generative Adversarial Classifier for Handwriting Characters Super-Resolution
https://arxiv.xilesou.top/pdf/1901.06199.pdf

036  (2019-01-10) How Can We Make GAN Perform Better in Single Medical Image Super-Resolution  A Lesion Focused Multi-Scale Approach
https://arxiv.xilesou.top/pdf/1901.03419.pdf

037  (2019-01-9) Detecting Overfitting of Deep Generative Networks via Latent Recovery
https://arxiv.xilesou.top/pdf/1901.03396.pdf

038  (2018-12-29) Brain MRI super-resolution using 3D generative adversarial networks
https://arxiv.xilesou.top/pdf/1812.11440.pdf

039  (2019-01-13) Efficient Super Resolution For Large-Scale Images Using Attentional GAN
https://arxiv.xilesou.top/pdf/1812.04821.pdf

040  (2019-12-24) Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation
https://arxiv.xilesou.top/pdf/1811.09393.pdf

041  (2018-11-20) Adversarial Feedback Loop
https://arxiv.xilesou.top/pdf/1811.08126.pdf

042  (2018-11-1) Bi-GANs-ST for Perceptual Image Super-resolution
https://arxiv.xilesou.top/pdf/1811.00367.pdf

043  (2018-10-15) Lesion Focused Super-Resolution
https://arxiv.xilesou.top/pdf/1810.06693.pdf

044  (2018-10-15) Deep learning-based super-resolution in coherent imaging systems
https://arxiv.xilesou.top/pdf/1810.06611.pdf

045  (2018-10-10) Image Super-Resolution Using VDSR-ResNeXt and SRCGAN
https://arxiv.xilesou.top/pdf/1810.05731.pdf

046  (2019-01-28) Multi-Scale Recursive and Perception-Distortion Controllable Image Super-Resolution
https://arxiv.xilesou.top/pdf/1809.10711.pdf

047  (2018-09-2) Unsupervised Image Super-Resolution using Cycle-in-Cycle Generative Adversarial Networks
https://arxiv.xilesou.top/pdf/1809.00437.pdf

048  (2018-09-17) ESRGAN  Enhanced Super-Resolution Generative Adversarial Networks
https://arxiv.xilesou.top/pdf/1809.00219.pdf

049  (2018-09-6) CT Super-resolution GAN Constrained by the Identical Residual and Cycle Learning Ensemble(GAN-CIRCLE)
https://arxiv.xilesou.top/pdf/1808.04256.pdf

050  (2018-07-30) To learn image super-resolution use a GAN to learn how to do image degradation first
https://arxiv.xilesou.top/pdf/1807.11458.pdf

051  (2018-07-1) Performance Comparison of Convolutional AutoEncoders Generative Adversarial Networks and Super-Resolution for Image Compression
https://arxiv.xilesou.top/pdf/1807.00270.pdf

052  (2018-12-19) Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution
https://arxiv.xilesou.top/pdf/1806.05764.pdf

053  (2018-08-22) cellSTORM - Cost-effective Super-Resolution on a Cellphone using dSTORM
https://arxiv.xilesou.top/pdf/1804.06244.pdf

054  (2018-04-10) A Fully Progressive Approach to Single-Image Super-Resolution
https://arxiv.xilesou.top/pdf/1804.02900.pdf

055  (2018-07-18) Maintaining Natural Image Statistics with the Contextual Loss
https://arxiv.xilesou.top/pdf/1803.04626.pdf

056  (2018-06-9) Efficient and Accurate MRI Super-Resolution using a Generative Adversarial Network and 3D Multi-Level Densely Connected Network
https://arxiv.xilesou.top/pdf/1803.01417.pdf

057  (2018-05-28) tempoGAN  A Temporally Coherent Volumetric GAN for Super-resolution Fluid Flow
https://arxiv.xilesou.top/pdf/1801.09710.pdf

058  (2018-10-3) High-throughput high-resolution registration-free generated adversarial network microscopy
https://arxiv.xilesou.top/pdf/1801.07330.pdf

059  (2017-11-28) Super-Resolution for Overhead Imagery Using DenseNets and Adversarial Learning
https://arxiv.xilesou.top/pdf/1711.10312.pdf

060  (2019-10-3) The Perception-Distortion Tradeoff
https://arxiv.xilesou.top/pdf/1711.06077.pdf

061  (2017-11-7) Tensor-Generative Adversarial Network with Two-dimensional Sparse Coding  Application to Real-time Indoor Localization
https://arxiv.xilesou.top/pdf/1711.02666.pdf

062  (2017-11-7) ZipNet-GAN  Inferring Fine-grained Mobile Traffic Patterns via a Generative Adversarial Neural Network
https://arxiv.xilesou.top/pdf/1711.02413.pdf

063  (2017-10-19) Generative Adversarial Networks  An Overview
https://arxiv.xilesou.top/pdf/1710.07035.pdf

064  (2018-05-21) Retinal Vasculature Segmentation Using Local Saliency Maps and Generative Adversarial Networks For Image Super Resolution
https://arxiv.xilesou.top/pdf/1710.04783.pdf

065  (2018-11-28) Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network
https://arxiv.xilesou.top/pdf/1708.09105.pdf

066  (2017-06-20) Perceptual Generative Adversarial Networks for Small Object Detection
https://arxiv.xilesou.top/pdf/1706.05274.pdf

067  (2017-05-7) A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA
https://arxiv.xilesou.top/pdf/1705.02583.pdf

068  (2017-05-5) Face Super-Resolution Through Wasserstein GANs
https://arxiv.xilesou.top/pdf/1705.02438.pdf

069  (2017-10-12) CVAE-GAN  Fine-Grained Image Generation through Asymmetric Training
https://arxiv.xilesou.top/pdf/1703.10155.pdf

070  (2017-02-21) Amortised MAP Inference for Image Super-resolution
https://arxiv.xilesou.top/pdf/1610.04490.pdf

071  (2017-05-25) Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
https://arxiv.xilesou.top/pdf/1609.04802.pdf
DirkHo | from Zhejiang
Right now there are only a few small directions. One is modifying ResNet-style architectures (stacking parameters), e.g. EDSR, WDSR. Another is combining with classical methods, e.g. DBPN and the fairly recent CliqueSR (which, as it happens, collided with my own idea).
Modifying architectures is genuinely hard, because sometimes even the authors don't know why a change works. Judging by recent top-venue architecture-modification SR papers, they basically all come from first-place winners of challenges like NTIRE. If you take first, then as long as the writing isn't a mess, that result alone can carry the paper into a good venue. But the difficulty of winning speaks for itself, so this road is essentially closed.
Combining with classical image processing is actually better suited to quick publications... Pairing methods like the wavelet transform with deep learning can catch the eye even without much theoretical backing (e.g. Octave conv at CVPR 2019 is actually quite a stretch). With reasonably clear figures and decent numbers, it shouldn't fare too badly? Especially since SR is not a hot direction, the competition won't be too fierce...
wy717 | from Beijing
Super-resolution has been quite active these past few years, a fairly hot direction. From classical deep-learning methods and patch-based methods to GAN-based ones in the last couple of years, the results keep getting better.
Personally I feel the ordinary approaches are close to their ceiling, and producing something genuinely novel is hard. But it's still worth a try.
回复
使用道具 举报
快速回复
您需要登录后才可以回帖 登录 | 立即注册

当贝投影