Differentiable rasterization changes the standard formulation of primitive rasterization, enabling gradient flow from a pixel to its underlying triangles, by using distribution functions in different stages of rendering, creating a "soft" version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening operations. In this work, we take it a step further: instead of making a combinatorial choice of softening operations, we parameterize the continuous space of common softening operations. We meta-learn tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose, and occlusion) so that they generalize to new and unseen differentiable rendering tasks with optimal softness.
[Figure: side-by-side comparison of optimization results against the reference for different softening functions: Ours (MLP), Gaussian, Logistic, Exponential(R), and Gamma(R, 0.5).]
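To make the "softening" idea from the abstract concrete, below is a minimal sketch (not the authors' implementation) of a soft 2D triangle rasterizer: each pixel's coverage is a product of logistic functions of its signed distance to the triangle edges, with a temperature parameter controlling softness, so gradients can flow from pixels back to vertex positions. All names (`signed_distance`, `soft_coverage`, `tau`) are illustrative assumptions, and the logistic choice stands in for whatever softening function is being tuned.

```python
# Minimal sketch of soft rasterization with a tunable softness parameter.
# Assumed/illustrative names: signed_distance, soft_coverage, tau.
import torch

def signed_distance(p, a, b):
    """Signed distance of pixels p (N,2) to the edge a->b (each (2,)).
    Positive on the interior side for counter-clockwise triangles."""
    d = b - a
    n = torch.stack((-d[1], d[0])) / torch.linalg.norm(d)  # unit edge normal
    return (p - a) @ n

def soft_coverage(p, tri, tau=0.05):
    """Soft inside/outside test: product of logistic edge tests.
    tri is (3,2), counter-clockwise; returns per-pixel coverage in (0,1).
    As tau -> 0 this approaches the hard step of a standard rasterizer."""
    a, b, c = tri
    cov = torch.ones(p.shape[0])
    for e0, e1 in ((a, b), (b, c), (c, a)):
        cov = cov * torch.sigmoid(signed_distance(p, e0, e1) / tau)
    return cov

# Toy inverse-rendering step: gradients w.r.t. vertices from a pixel-wise loss.
pix = torch.stack(torch.meshgrid(torch.linspace(0, 1, 32),
                                 torch.linspace(0, 1, 32),
                                 indexing="xy"), dim=-1).reshape(-1, 2)
tri = torch.tensor([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]], requires_grad=True)
target = soft_coverage(pix, torch.tensor([[0.3, 0.2], [0.9, 0.4], [0.4, 0.8]])).detach()
loss = ((soft_coverage(pix, tri) - target) ** 2).mean()
loss.backward()  # gradient flows from pixel coverage back to the triangle vertices
print(tri.grad)
```

In this sketch the fixed logistic with a hand-chosen `tau` is exactly the kind of trial-and-error choice the paper replaces: the learned approach parameterizes the softening function itself (e.g., with an MLP) and meta-learns it across inverse rendering tasks.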
@article{10.1111:cgf.15145,
journal = {Computer Graphics Forum},
title = {{Learning to Rasterize Differentiably}},
author = {Wu, Chenghao and Mailee, Hamila and Montazeri, Zahra and Ritschel, Tobias},
year = {2024},
publisher = {The Eurographics Association and John Wiley \& Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.15145}
}