High-Dynamic-Range Lighting Estimation from
Face Portraits

Alejandro Sztrajman1 Alexandros Neophytou2 Tim Weyrich1 Eric Sommerlade2
1University College London, UK. 2Microsoft, Reading, UK.
International Conference on 3D Vision 2020
We present a CNN-based method for predicting outdoor high-dynamic-range (HDR) environment maps from low-dynamic-range (LDR) portrait images. Our method relies on two different CNN architectures, one for light encoding and another for face-to-light prediction. Outdoor lighting is characterised by an extremely high dynamic range, so our encoding splits the environment map data into low- and high-intensity components and encodes each with a tailored representation. Together, the two networks form an end-to-end method for accurate HDR light prediction from faces at real-time rates, out of reach for previous methods, which focused on low-dynamic-range lighting or relied on non-linear optimisation schemes. We train our networks on both real and synthetic images, compare our light encoding with other light-representation methods, and analyse our light-prediction results on real images. We show that our predicted HDR environment maps can be used as accurate illumination sources for scene renderings, with potential applications in 3D object insertion for augmented reality.
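The low/high-intensity split described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact encoding: the threshold value and the log compression of the high-intensity residual are assumptions chosen to show why splitting helps, since the bright component of an outdoor map (e.g. the sun) can exceed the LDR range by several orders of magnitude.

```python
import numpy as np

def split_hdr_envmap(env, threshold=1.0):
    """Split an HDR environment map into a bounded low-intensity
    component and a log-compressed high-intensity residual.
    (Illustrative sketch; threshold and encodings are assumptions.)"""
    low = np.clip(env, 0.0, threshold)       # LDR-like component, in [0, threshold]
    high = np.maximum(env - threshold, 0.0)  # residual above the LDR range
    high_log = np.log1p(high)                # compress the extreme dynamic range
    return low, high_log

def merge_hdr_envmap(low, high_log):
    """Invert the split: recombine both components into an HDR map."""
    return low + np.expm1(high_log)
```

Because the split is exactly invertible, each component can be predicted with a representation suited to its range, and the full HDR map is recovered by summation.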

Short Presentation

Long Presentation


Presentation with notes

Poster

Paper

BibTeX

@inproceedings{sztrajman2020hdr,
    author={Sztrajman, Alejandro and Neophytou, Alexandros and Weyrich, Tim and Sommerlade, Eric},
    booktitle={International Conference on 3D Vision (3DV)},
    title={High-Dynamic-Range Lighting Estimation From Face Portraits},
    year={2020},
    pages={355--363},
    doi={10.1109/3DV50981.2020.00045}
}

Acknowledgement

This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 642841. The page template was inspired by this project.