CVPR (2024)
arXiv · Project Website
Hierarchical grouping in 3D by training a scale-conditioned affinity field from multi-level masks.
SIGGRAPH (2023)
Project Website · Github · arXiv
A Modular Framework for Neural Radiance Field Development.
ICCV (2023) Oral
arXiv · Project Website
Grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D.
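As a rough illustration of how such language queries can be answered, the sketch below compares per-pixel rendered embeddings against a text embedding using cosine similarity. It is a simplified stand-in (the paper's relevancy score additionally normalizes against canonical negative phrases); the function name, array shapes, and random embeddings are assumptions for illustration.

```python
import numpy as np

def relevancy(rendered_embeds, text_embed):
    """Cosine similarity between per-pixel rendered CLIP embeddings and a text query.

    rendered_embeds: (H, W, D) language embeddings composited along each ray,
    text_embed: (D,) embedding of the natural-language query.
    Returns an (H, W) relevancy map.
    """
    img = rendered_embeds / np.linalg.norm(rendered_embeds, axis=-1, keepdims=True)
    txt = text_embed / np.linalg.norm(text_embed)
    return img @ txt

# Example with random stand-ins for the CLIP embeddings.
rng = np.random.default_rng(0)
print(relevancy(rng.normal(size=(4, 4, 512)), rng.normal(size=512)).shape)  # (4, 4)
```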
ICCV (2023) Oral
arXiv · Project Website
Instruct-NeRF2NeRF enables instruction-based editing of NeRFs via a 2D diffusion model.
ICCV (2023)
arXiv · Project Website
NerfAcc integrates advanced, efficient sampling techniques that lead to significant speedups when training various recent NeRF methods, with minimal modifications to existing codebases.
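One representative sampling technique is skipping empty space with an occupancy grid. The sketch below is a generic NumPy illustration of that idea, not NerfAcc's actual API; the function name, grid resolution, and uniform sampling scheme are assumptions.

```python
import numpy as np

def sample_with_occupancy(ray_o, ray_d, occ_grid, near, far, n_samples=128):
    """Propose uniform samples along a ray and keep only those in occupied voxels.

    occ_grid: (R, R, R) boolean grid over the unit cube marking regions worth sampling.
    Returns the retained sample distances; empty space is skipped entirely.
    """
    t = np.linspace(near, far, n_samples)
    pts = ray_o + t[:, None] * ray_d                       # (n_samples, 3)
    res = occ_grid.shape[0]
    idx = np.clip((pts * res).astype(int), 0, res - 1)
    keep = occ_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return t[keep]

# Example: a grid that is only occupied in its central region.
res = 32
occ = np.zeros((res, res, res), dtype=bool)
occ[12:20, 12:20, 12:20] = True
kept = sample_with_occupancy(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0]), occ, 0.0, 1.0)
print(f"{kept.size} of 128 samples kept")
```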
ICCV (2023)
arXiv · Project Website
Nerfbusters proposes an evaluation procedure for in-the-wild NeRFs and presents a method that uses a 3D diffusion prior to clean up NeRFs.
CoRL (2022) Oral
OpenReview · Project Website
We show that by training NeRFs incrementally over a stream of images, they can be used for robotic grasping tasks. They are particularly useful for tasks involving transparent objects, whose geometry is traditionally hard to compute.
ECCV (2022)
arXiv · Project Website
We show that it is possible to reconstruct a TV show in 3D. Further, reasoning about humans and their environment in 3D enables a broad range of downstream applications: re-identification, gaze estimation, cinematography, and image editing.
CVPR (2022) Oral
arXiv · Project Website · Video
We present a variant of Neural Radiance Fields that can represent large-scale environments. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.
CVPR (2022) Oral
arXiv · Project Website · Video
We propose a view-dependent sparse voxel model, Plenoxel (plenoptic volume element), that can optimize to the same fidelity as Neural Radiance Fields (NeRFs) without any neural networks. Our typical optimization time is 11 minutes on a single GPU, a speedup of two orders of magnitude compared to NeRF.
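A minimal sketch of the core query in such a grid-based representation: trilinearly interpolating per-voxel coefficients (density plus color coefficients) at a continuous 3D point. It assumes a dense grid for simplicity, whereas the paper uses a sparse structure; the shapes and names are illustrative.

```python
import numpy as np

def trilinear_query(grid, point):
    """Trilinearly interpolate a voxel grid of coefficients at a continuous 3D point.

    grid: (X, Y, Z, C) per-voxel coefficients (e.g. density plus color coefficients),
    point: (3,) coordinates in voxel units. Returns the interpolated (C,) vector.
    """
    p0 = np.clip(np.floor(point).astype(int), 0, np.array(grid.shape[:3]) - 2)
    t = point - p0                                  # fractional offsets within the cell
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * grid[p0[0] + dx, p0[1] + dy, p0[2] + dz]
    return out

# Example: query a random 16^3 grid storing one density and three color values per voxel.
rng = np.random.default_rng(0)
grid = rng.uniform(size=(16, 16, 16, 4))
print(trilinear_query(grid, np.array([3.4, 7.9, 10.2])))
```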
ICCV (2021) Oral
arXiv · Demo / Project Website · Video
We introduce a method to render Neural Radiance Fields (NeRFs) in real time without sacrificing quality. Our method preserves the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects.
ICCV (2021) Oral - Best Paper Honorable Mention
arXiv · Project Website · Video
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. We prefilter the positional encoding function and train NeRF to generate anti-aliased renderings.
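A simplified sketch of what prefiltering the positional encoding amounts to: encoding the mean of a sample's spatial footprint while damping each frequency by the footprint's variance, so coarse samples drop high-frequency content instead of aliasing. The frequency count and the Gaussian-footprint inputs below are illustrative assumptions, not the paper's exact conical-frustum construction.

```python
import numpy as np

def integrated_pos_enc(mean, var, num_freqs=4):
    """Expected positional encoding of a Gaussian with given mean and diagonal variance.

    Each frequency 2^l is damped by exp(-0.5 * 4^l * var), so features from
    samples with a large spatial footprint fade toward zero.
    """
    feats = []
    for l in range(num_freqs):
        scale = 2.0 ** l
        damping = np.exp(-0.5 * (scale ** 2) * var)
        feats.append(np.sin(scale * mean) * damping)
        feats.append(np.cos(scale * mean) * damping)
    return np.concatenate(feats, axis=-1)

# A fine sample (small footprint) keeps high frequencies; a coarse one suppresses them.
mean = np.array([0.3, -0.7, 1.2])
print(integrated_pos_enc(mean, var=np.full(3, 1e-4)))
print(integrated_pos_enc(mean, var=np.full(3, 1.0)))
```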
ICCV (2021)
arXiv · Project Website · Video
We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP.
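A toy sketch of a semantic consistency loss in this spirit, assuming a frozen visual encoder (e.g. CLIP) has already embedded the rendered view and the training images; the function name, shapes, and random inputs are illustrative.

```python
import numpy as np

def semantic_consistency_loss(render_embed, target_embeds):
    """Penalize renderings whose semantics differ from the observed images.

    render_embed: (D,) encoder embedding of a view rendered at an arbitrary pose,
    target_embeds: (K, D) encoder embeddings of the training images.
    Returns 1 minus the mean cosine similarity (lower is better).
    """
    r = render_embed / np.linalg.norm(render_embed)
    t = target_embeds / np.linalg.norm(target_embeds, axis=-1, keepdims=True)
    return 1.0 - float((t @ r).mean())

# Example with random stand-ins for the encoder outputs.
rng = np.random.default_rng(0)
print(semantic_consistency_loss(rng.normal(size=512), rng.normal(size=(8, 512))))
```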
CVPR (2021) Oral
arXiv · Project Website · Code · Video
We find that standard meta-learning algorithms for weight initialization can enable faster convergence during optimization and can serve as a strong prior over the signal class being modeled, resulting in better generalization when only partial observations of a given signal are available.
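A toy sketch of meta-learning an initialization in this spirit, using a Reptile-style outer loop over a class of signals. The inner task here is a tiny linear regression purely for illustration; the paper meta-learns coordinate-based MLP weights, and the names and hyperparameters below are assumptions.

```python
import numpy as np

def inner_fit(w_init, signal, steps=16, lr=0.05):
    """A few gradient steps fitting a tiny linear model to one observed signal."""
    x, y = signal
    w = w_init.copy()
    for _ in range(steps):
        grad = 2.0 * x.T @ (x @ w - y) / len(y)     # gradient of mean squared error
        w -= lr * grad
    return w

def meta_learn_init(signals, meta_steps=200, meta_lr=0.1, dim=8):
    """Reptile-style outer loop: nudge the initialization toward each adapted solution."""
    w0 = np.zeros(dim)
    rng = np.random.default_rng(0)
    for _ in range(meta_steps):
        task = signals[rng.integers(len(signals))]
        w_adapted = inner_fit(w0, task)
        w0 += meta_lr * (w_adapted - w0)
    return w0

# Toy "signal class": linear functions whose true weights cluster around 1.
rng = np.random.default_rng(1)
signals = []
for _ in range(32):
    w_true = rng.normal(loc=1.0, scale=0.1, size=8)
    x = rng.normal(size=(64, 8))
    signals.append((x, x @ w_true))
print(meta_learn_init(signals)[:4])   # entries should end up near 1
```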
CVPR (2021)
arXiv · Project Website · Code · Video
We propose a learning framework that predicts a continuous neural scene representation from one or a few input images by conditioning on image features encoded by a convolutional neural network.
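A rough sketch of the conditioning step, assuming known camera intrinsics and extrinsics: each queried 3D point is projected into the input view and a feature is sampled from the CNN feature map at that pixel (nearest-neighbor here for brevity; bilinear sampling is the more natural choice). Names, shapes, and the sampling scheme are illustrative assumptions.

```python
import numpy as np

def sample_image_feature(feat_map, K, cam_T_world, point):
    """Project a 3D world point into the input view and grab the CNN feature there.

    feat_map: (H, W, C) feature map from a convolutional image encoder,
    K: (3, 3) camera intrinsics, cam_T_world: (4, 4) world-to-camera transform,
    point: (3,) world coordinates. Returns the (C,) feature (nearest-neighbor lookup).
    """
    p_cam = cam_T_world @ np.append(point, 1.0)
    uv = K @ p_cam[:3]
    uv = uv[:2] / uv[2]                                          # pixel coordinates
    h, w = feat_map.shape[:2]
    u = int(np.clip(np.round(uv[0]), 0, w - 1))
    v = int(np.clip(np.round(uv[1]), 0, h - 1))
    return feat_map[v, u]

# The scene MLP is then conditioned on [encoded 3D point; sampled image feature].
rng = np.random.default_rng(0)
feat_map = rng.normal(size=(60, 80, 64))
K = np.array([[100.0, 0.0, 40.0], [0.0, 100.0, 30.0], [0.0, 0.0, 1.0]])
cam_T_world = np.eye(4)
print(sample_image_feature(feat_map, K, cam_T_world, np.array([0.1, -0.05, 2.0])).shape)  # (64,)
```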
CVPR (2021)
arXiv · Project Website · Video
We recover relightable NeRF-like models using neural approximations of expensive visibility integrals, so we can simulate complex volumetric light transport during training.
NeurIPS (2020) Spotlight
arXiv · Project Website · Code · Video
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes.
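A minimal NumPy sketch of a Gaussian random Fourier feature mapping of this kind; the frequency-matrix size and scale below are illustrative choices rather than the paper's exact configuration.

```python
import numpy as np

def fourier_features(x, B):
    """Map low-dimensional inputs to a higher-dimensional Fourier basis.

    x: (N, d) input coordinates, B: (m, d) random frequency matrix sampled
    from a Gaussian. Returns (N, 2m) features [cos(2*pi*Bx), sin(2*pi*Bx)].
    """
    proj = 2.0 * np.pi * x @ B.T                # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Example: 2D coordinates mapped with 256 Gaussian random frequencies.
rng = np.random.default_rng(0)
coords = rng.uniform(size=(4, 2))               # four points in [0, 1)^2
B = rng.normal(scale=10.0, size=(256, 2))       # the scale sets the frequency bandwidth
print(fourier_features(coords, B).shape)        # (4, 512)
```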
ECCV (2020) Oral - Best Paper Honorable Mention
arXiv · Project Website · Code · Video · Follow-ups
We propose an algorithm that represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. With this representation, we achieve state-of-the-art results for synthesizing novel views of scenes from a sparse set of input views.
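The representation is rendered by querying such a network at samples along each camera ray and compositing the predicted densities and colors with standard volume rendering. A minimal NumPy sketch of that compositing step is below; the per-sample values are made up for illustration.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray.

    sigmas: (S,) volume densities, colors: (S, 3) RGB values,
    deltas: (S,) distances between adjacent samples.
    Returns the rendered RGB color for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Example with made-up samples along a single ray.
S = 64
sigmas = np.linspace(0.0, 5.0, S)
colors = np.tile([[0.8, 0.3, 0.2]], (S, 1))
deltas = np.full(S, 1.0 / S)
print(composite_ray(sigmas, colors, deltas))
```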
CVPR (2020)
arXiv · Project Website · Code · Video
We present a deep learning method to imperceptibly hide data in printed images so that it can be recovered after photographing the print. The method is robust to corruptions like shadows, occlusions, noise, and shifts in color.
CVPR (2020)
arXiv · Project Website · Video
We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair. We propose a model that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.
CHI (2020)
arXiv · Project Website · Code
Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention.
ICCP (2018)
Project Website · Local Copy · Video · MIT News
We demonstrate a technique that recovers the reflectance and depth of a scene obstructed by dense, dynamic, and heterogeneous fog. We use a single photon avalanche diode (SPAD) camera to filter out the light that scatters off of the fog in the scene.
We introduce a method that couples traditional geometric understanding and data-driven techniques to image around corners with consumer cameras. We show that we can recover information in real scenes despite only training our models on synthetically generated data.
Nature Photonics (2018)
Project Website · Nature Article · Video · MIT News
We demonstrate that by folding the optical path in time, one can collapse conventional photography optics into a compact volume or multiplex various functionalities into a single piece of imaging optics without losing spatial or temporal resolution. By applying time-folding at different regions of the optical path, we achieve an order-of-magnitude compression of the lens tube, ultrafast multi-zoom imaging, and ultrafast multi-spectral imaging.
Combining icon classification and text extraction, we present a multi-modal summarization application. Our application takes an infographic as input and automatically produces text tags and visual hashtags that are textually and visually representative of the infographic's topics, respectively.
IEEE Transactions on Computational Imaging (2017)
Project Website · Local Copy · IEEE · MIT News
We demonstrate a new imaging method that is lensless and requires only a single pixel. Compared to previous single-pixel cameras, our system allows significantly faster and more efficient acquisition by using ultrafast time-resolved measurements with compressive sensing.
Optics Express (2017)
Project Website · Local Copy · OSA
A deep learning method for object classification through scattering media. Our method trains on synthetic data with variations in calibration parameters, which allows the network to learn a calibration-invariant model.