- 1University of Illinois at Urbana-Champaign
- 2Zhejiang University
- 3University of Maryland, College Park
- * Equal Contribution
Abstract
Physical simulations produce excellent predictions of weather effects.
Neural radiance fields produce state-of-the-art scene models. We describe a novel NeRF-editing procedure that
fuses physical simulations with NeRF models of scenes, producing realistic movies of physical
phenomena in those scenes. Our application, ClimateNeRF, allows people
to visualize how climate change outcomes will affect them.
ClimateNeRF allows us to render realistic weather effects, including smog, snow, and flood.
Results can be controlled with physically meaningful variables like water level.
Qualitative and quantitative studies show that our
simulated results are significantly more realistic than those from
state-of-the-art 2D image editing and 3D NeRF stylization.

Weather simulations
* Select different weather conditions and scenes below to compare
our method with the baselines.
* 3D stylization denotes fine-tuning the pre-trained instant-NGP model using FastPhotoStyle.
Controllable rendering
Our method simulates different smog densities and different heights of flood water and accumulated
snow:
Simulations on drone views
Our method simulates flooding in scenes captured by drones.
Rendering Procedure of ClimateNeRF

We first determine the positions of physical entities (smog particles, snowballs, the water surface)
with physical simulation. We then render the
scene with the desired effects by modeling the light transport between the physical entities and the
scene. More specifically, we follow
the volume rendering process and fuse the estimated color and density from 1) the original
radiance field (by querying the trained
instant-NGP model) and 2) the physical entities (by physically based rendering). Our rendering
procedure thus maintains realism while achieving complex yet physically plausible visual
effects.
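The fusion step above can be sketched as standard volume rendering along a ray, where the NeRF field and the simulated entity each contribute density and color at every sample. This is a minimal illustrative sketch, not the paper's implementation: the query functions, sampling scheme, and the density-weighted color blend are assumptions.

```python
import numpy as np

def fused_render(ray_samples, nerf_query, entity_query):
    """Composite one ray by fusing NeRF and simulated-entity radiance fields.

    ray_samples: (N,) sample depths along the ray.
    nerf_query(t) -> (sigma, rgb): density/color from the trained NeRF.
    entity_query(t) -> (sigma, rgb): density/color from the physical entity
    (e.g. smog particles or snow metaballs). Both are hypothetical callables.
    """
    deltas = np.diff(ray_samples, append=ray_samples[-1] + 1e10)
    color = np.zeros(3)
    transmittance = 1.0
    for t, delta in zip(ray_samples, deltas):
        sigma_n, rgb_n = nerf_query(t)
        sigma_e, rgb_e = entity_query(t)
        sigma = sigma_n + sigma_e  # fused density at this sample
        if sigma > 0:
            # density-weighted blend of the two colors
            rgb = (sigma_n * rgb_n + sigma_e * rgb_e) / sigma
        else:
            rgb = np.zeros(3)
        alpha = 1.0 - np.exp(-sigma * delta)   # opacity of this segment
        color += transmittance * alpha * rgb   # alpha compositing
        transmittance *= 1.0 - alpha
    return color
```

Because both fields enter the same compositing loop, occlusions between the scene and the inserted entity (e.g. smog in front of a building) come out consistently for free.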
Flood Simulation
Figure: (a) Original NeRF | (b) Depth map | (c) Water surface | (d) Normal map with wave | (e) Final ClimateNeRF
We first estimate the vertical vanishing direction from the original image (a) and its depth map (b).
With this direction (yellow arrows painted in (c)), we insert a planar water
surface.
An FFT-based water simulation produces a spatiotemporal surface normal map (d).
ClimateNeRF then renders the scene with the simulated flood by ray tracing the NeRF (e).
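A common way to realize the FFT-based water surface is a Tessendorf-style ocean spectrum: sample a wind-driven Phillips spectrum in frequency space, advance phases with the deep-water dispersion relation, and recover heights and slopes with inverse FFTs. The sketch below is illustrative; the spectrum parameters (patch size, amplitude, wind) are assumptions, not the paper's values.

```python
import numpy as np

def water_normal_map(n=128, patch=10.0, wind=(1.0, 0.0), t=0.0,
                     amp=1e-4, g=9.81, seed=0):
    """Spatiotemporal water-surface normals from an FFT wave spectrum."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=patch / n)  # wavenumbers per axis
    kx, ky = np.meshgrid(k, k, indexing="xy")
    k_len = np.sqrt(kx**2 + ky**2)
    k_len[0, 0] = 1.0  # avoid division by zero at the DC term

    # Phillips spectrum for wind-driven waves
    wind = np.asarray(wind, dtype=float)
    wind_speed = np.linalg.norm(wind)
    L = wind_speed**2 / g
    k_dot_w = (kx * wind[0] + ky * wind[1]) / (k_len * wind_speed)
    phillips = amp * np.exp(-1.0 / (k_len * L) ** 2) / k_len**4 * k_dot_w**2
    phillips[0, 0] = 0.0  # no energy in the mean (DC) component

    # random initial spectrum, advanced in time by the dispersion relation
    xi = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h0 = xi * np.sqrt(phillips / 2.0)
    omega = np.sqrt(g * k_len)          # deep-water dispersion
    h_hat = h0 * np.exp(1j * omega * t)

    # slopes via spectral differentiation; normals from slopes
    sx = np.fft.ifft2(1j * kx * h_hat).real
    sy = np.fft.ifft2(1j * ky * h_hat).real
    normals = np.stack([-sx, -sy, np.ones_like(sx)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals  # (n, n, 3), evaluated at time t
```

Calling this with increasing `t` yields a temporally coherent sequence of normal maps, which is what makes the rendered water animate rather than shimmer randomly.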
Snow Simulation
Figure: (a) Original NeRF | (b) Surface normal | (c) Metaball centers (red) | (d) Snow with diffuse model | (e) Snow with scattering
We first place metaballs on upward-facing object surfaces, selected by their surface normal values (b).
Given the metaballs (centers painted red in (c)), we estimate the snow's density and color with a Parzen-window
density estimator.
(d) and (e) show the difference between a fully diffuse model and a scattering approximation:
parts that are shadowed in (d) are lit in (e).
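The Parzen-window step can be sketched as a kernel density estimate over the metaball centers: each center contributes a smooth bump of density, and nearby bumps merge into a continuous snow layer. The Gaussian kernel, bandwidth, and density scale below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def snow_density(query_pts, centers, bandwidth=0.1, max_density=50.0):
    """Parzen-window (Gaussian-kernel) snow density from metaball centers.

    query_pts: (M, 3) sample positions along camera rays.
    centers:   (K, 3) metaball centers placed on upward-facing surfaces.
    Returns an (M,) array of volume densities for compositing.
    """
    # squared distance from every query point to every metaball center
    d2 = ((query_pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    kernel = np.exp(-d2 / (2.0 * bandwidth**2))   # Gaussian bump per center
    return max_density * kernel.sum(axis=1)       # overlapping bumps add up
```

Because neighboring kernels overlap, the estimate is smooth where centers are dense and falls off to zero away from surfaces, giving soft snow boundaries instead of hard metaball edges.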
User Study
We conduct a user study to validate our approach quantitatively. Users watch pairs
of synthesized images or videos of the same scene and pick the
one that looks more realistic. 37 users participated in the study,
and in total we collected 2664 pairwise comparisons.
Figure: win-rate bar charts for smog, flood, and snow, on both images and videos.
The length of each bar indicates the percentage of users who rated that method more realistic than its
opponent. The green bar, annotated with a number, shows our win rate against each baseline. Our
method's videos significantly outperform all baselines.
References
- Victor Schmidt, Alexandra Sasha Luccioni, Mélisande Teng, Tianyu Zhang, Alexia Reynaud, Sunand Raghupathi, Gautier Cosne, Adrien Juraver, Vahe Vardanyan, Alex Hernandez-Garcia, and Yoshua Bengio. ClimateGAN: Raising climate change awareness by generating images of floods. ICLR, 2022. [code]
- Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. CVPR, 2022. [code]
- Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei Efros, and Richard Zhang. Swapping autoencoder for deep image manipulation. NeurIPS, 2020. [code]