Kernel-Predicting Convolutional Networks
for Denoising Monte Carlo Renderings
The focus of this paper is the application of deep-learning-based denoising to production-quality Monte Carlo renderings. Most of the training and evaluation has therefore been done on proprietary data. To facilitate comparisons with future methods, we also provide results for our models when trained and tested on a publicly available dataset.
Training data
We trained our models on a training set consisting of perturbations of scenes available at https://benedikt-bitterli.me/resources/. To ensure the training data covers a sufficient range of lighting situations, color patterns, and geometry, we randomly perturbed the scenes by varying camera parameters, materials, and lighting. In this way, we generated 1484 (noisy image, high-quality image) pairs from the following 8 base scenes. We extracted patches from these images at four sample counts: 128, 256, 512, and 1024 samples per pixel (spp).
* Contemporary Bathroom (Mareck, Blendswap.com)
* Pontiac GTO 67 (MrChimp2313, Blendswap.com)
* Bedroom (SlykDrako, Blendswap.com)
* 4060.b Spaceship (thecali, Blendswap.com)
* Victorian Style House (MrChimp2313, Blendswap.com)
* The Breakfast Room (Wig42, Blendswap.com)
* The Wooden Staircase (Wig42, Blendswap.com)
* Japanese Classroom (NovaZeeke, Blendswap.com)
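The patch-extraction step above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the patch size, patch count, and uniform sampling of crop locations are assumptions for the example (in practice patch selection may be importance-sampled), and `extract_patches` is a hypothetical helper name.

```python
import numpy as np

def extract_patches(noisy, reference, patch_size=65, num_patches=4, rng=None):
    """Extract aligned random patches from a (noisy, reference) image pair.

    Both inputs are H x W x C arrays; the same crop window is applied to
    each image so every patch pair stays pixel-aligned.
    """
    rng = rng or np.random.default_rng(0)
    h, w = noisy.shape[:2]
    pairs = []
    for _ in range(num_patches):
        # Uniformly sampled top-left corner of the crop window.
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        pairs.append((noisy[y:y + patch_size, x:x + patch_size],
                      reference[y:y + patch_size, x:x + patch_size]))
    return pairs
```

In a full pipeline this would be run once per spp level, so each crop yields one training pair for every sample count.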
Meta-parameters
We found that a learning rate of 10⁻⁴ and a batch size of 100 patches worked well for this dataset. The larger batch size helps cope with the noise level in these scenes, which is much higher than in our production frames. For the fine-tuning stage, we use a learning rate of 10⁻⁶.
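The interaction between these two meta-parameters can be illustrated with a toy update step. The learning rates below are the ones reported above, but the optimizer is a plain gradient-descent stand-in for illustration only, and `sgd_step` is a hypothetical helper, not the paper's training code.

```python
import numpy as np

# Learning rates reported in the text; batch size of 100 patches.
PRETRAIN_LR = 1e-4
FINETUNE_LR = 1e-6
BATCH_SIZE = 100

def sgd_step(weights, per_example_grads, lr):
    """One descent step on the batch-averaged gradient.

    Averaging over a large batch (here 100 per-patch gradients) reduces
    the variance of the update, which is why a bigger batch helps on
    noisier training data.
    """
    return weights - lr * per_example_grads.mean(axis=0)
```

Switching from `PRETRAIN_LR` to `FINETUNE_LR` for the fine-tuning stage then only changes the `lr` argument, not the update rule.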
Evaluation
To assess the quality of our models, we evaluate them on a separate set of scenes from the same website.
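One common way to score such an evaluation is relative MSE, which normalizes the squared error by the reference intensity so bright image regions do not dominate the score. The metric choice and the `eps` stabilizer value here are illustrative assumptions, not necessarily the exact metric used for the reported results.

```python
import numpy as np

def rel_mse(denoised, reference, eps=1e-2):
    """Relative mean squared error between a denoised image and its
    high-sample-count reference, both H x W x C float arrays.

    eps guards against division by zero in dark regions.
    """
    err = (denoised - reference) ** 2
    return float(np.mean(err / (reference ** 2 + eps)))
```

A perfect reconstruction scores 0, and larger values indicate more residual error relative to image brightness.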