Notes
by Ken Turkowski

**Outline**

- Forward Rendering
- Interactive Rendering
- Reflection/Environment/Reflectance Maps
- Complex Illumination

- Inverse Rendering
- Measure realistic material models and lighting from real photographs

- Object Recognition
- Reflection as Convolution
- Efficient Rendering: Environment Maps
- Lighting Variability in Object Recognition
- Deconvolution, Inverse Rendering

The results of this paper stem from the fundamental observation that, *with distant lights as the source of illumination, the radiance integral reduces to a convolution.*

Convolution is efficiently implemented as multiplication in the
frequency domain.
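On the circle this is the familiar convolution theorem; a minimal pure-Python sketch illustrates it (the paper's setting is the sphere, where spherical-harmonic coefficients play the role that DFT coefficients play here):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circular_convolve(a, b):
    """Direct circular convolution: (a * b)[k] = sum_m a[m] b[(k - m) mod n]."""
    n = len(a)
    return [sum(a[m] * b[(k - m) % n] for m in range(n)) for k in range(n)]

signal = [1.0, 2.0, 0.0, -1.0]
kernel = [0.5, 0.25, 0.0, 0.25]

direct = circular_convolve(signal, kernel)
# Multiply the two spectra pointwise, then transform back.
via_freq = idft([s * k for s, k in zip(dft(signal), dft(kernel))])

assert all(abs(d - v.real) < 1e-9 for d, v in zip(direct, via_freq))
```

The naive O(n²) transforms are enough to show the identity; an FFT makes the frequency-domain route the efficient one.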

Spherical harmonics are used as the basis for illumination functions
defined on the sphere. By representing these in Cartesian rather
than spherical coordinates, the spherical harmonics are represented
as simple polynomials rather than transcendental functions, thus
simplifying the computations enormously.
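As an illustration (standard constants, not taken from the notes themselves), the nine real spherical harmonics through order l = 2 reduce to constant, linear, and quadratic polynomials in the components (x, y, z) of a unit vector:

```python
def sh9(x, y, z):
    """The nine real spherical harmonics with l <= 2, written as Cartesian
    polynomials in a unit vector (standard real-SH normalization constants)."""
    c0 = 0.282095          # Y_00
    c1 = 0.488603          # Y_1,-1 / Y_10 / Y_11
    c2 = 1.092548          # Y_2,-2 / Y_2,-1 / Y_21
    c3 = 0.315392          # Y_20
    c4 = 0.546274          # Y_22
    return [
        c0,                                  # l = 0: constant
        c1 * y, c1 * z, c1 * x,              # l = 1: linear
        c2 * x * y, c2 * y * z,              # l = 2: quadratic
        c3 * (3 * z * z - 1),
        c2 * x * z,
        c4 * (x * x - y * y),
    ]

# At the north pole (0, 0, 1) only the zonal harmonics are nonzero.
print(sh9(0.0, 0.0, 1.0))
```

No sines, cosines, or associated Legendre evaluations are needed — just a handful of multiplies per basis function.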

*Light is the signal, the BRDF (bidirectional reflectance distribution function) is the filter, and reflection on a curved surface is convolution.*

Inverse rendering is deconvolution.

The BRDF of a mirror ball is an impulse response. The environment
lighting can be determined directly.

Lambertian reflection falls off quickly, with periodic zeros in
the spherical harmonic coefficients. This can be represented with
only 9 parameters, and can be well approximated with a quadratic
polynomial.
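That falloff can be made concrete with the known closed form for the clamped-cosine (Lambertian) kernel coefficients A_l; the numbers below follow from that formula rather than from the notes themselves:

```python
import math

def lambert_coeff(l):
    """Spherical-harmonic coefficients A_l of the clamped-cosine (Lambertian)
    kernel, via the standard closed form."""
    if l == 0:
        return math.pi
    if l == 1:
        return 2.0 * math.pi / 3.0
    if l % 2 == 1:
        return 0.0          # every odd order above 1 vanishes
    return (2.0 * math.pi * (-1) ** (l // 2 - 1) / ((l + 2) * (l - 1))
            * math.factorial(l) / (2 ** l * math.factorial(l // 2) ** 2))

for l in range(7):
    print(l, lambert_coeff(l))
# pi, 2*pi/3, pi/4, 0, -pi/24, 0, pi/64, ...: the coefficients decay rapidly,
# so the nine terms with l <= 2 capture almost all of the irradiance.
```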

The diffuse component is localized in frequency space, and the
specular component is localized in angular space. This observation
leads to an efficient dual algorithm for global illumination,
where *diffuse reflection is computed in frequency space, and specular reflection is computed with ray tracing.*

From a practical viewpoint, the method works well even for most
local lighting.

**Inverse rendering results:**

Illumination can be recovered from the reflected image of a specular
surface, but not from a diffuse one. Attempting deconvolution from a
diffuse surface requires dividing by coefficients very close to zero,
yielding noisy and mostly bogus results.
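A toy sketch of that ill-conditioning (the lighting coefficients and noise level here are made up for illustration): the reflected coefficients are E_l = A_l · L_l, so recovering the lighting means dividing by A_l, which shrinks quickly and is exactly zero at odd orders above 1.

```python
A = [3.1416, 2.0944, 0.7854, 0.0, -0.1309]   # Lambertian kernel A_l, l = 0..4
L = [1.0, 0.5, 0.3, 0.2, 0.1]                # hypothetical lighting coefficients

E = [a * l for a, l in zip(A, L)]            # forward: reflection as convolution
noise = 1e-3                                 # small measurement error
E_meas = [e + noise for e in E]

for l, (a, e) in enumerate(zip(A, E_meas)):
    if abs(a) < 1e-9:
        print(l, "unrecoverable: A_l = 0")
    else:
        est = e / a                          # inverse: deconvolution
        print(l, "estimate", est, "error", abs(est - L[l]))
```

The same 1e-3 measurement error is amplified by 1/|A_l|, so the estimate degrades with order, and the l = 3 component is lost entirely.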

Light fields (a.k.a. Lumigraphs) can be *factored* to extract
both the BRDF and the illumination from an image *simultaneously*.

Inverse rendering via deconvolution or other methods is (obviously)
an *inverse *problem, which can be ill-conditioned. Success
is highly dependent on the characteristics of the source image,
and any *a priori* knowledge of the scene (i.e. characteristics
of the illumination or BRDF; geometry is assumed to be known).

With the illumination (and geometry) known, the BRDF of a diffuse
surface can be computed robustly and quickly.
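A toy sketch of why this direction is well-conditioned, with hypothetical per-pixel numbers: recovering a scalar diffuse albedo from known irradiance is a benign least-squares problem with no small divisors.

```python
# Known irradiance E(n) per pixel (hypothetical values; real inputs would
# come from the measured illumination and geometry).
E = [0.9, 1.4, 0.6, 1.1, 0.8]
rho_true = 0.55
# Observed pixel values B = rho * E, with small measurement noise added.
B = [rho_true * e + d for e, d in zip(E, [0.01, -0.02, 0.0, 0.015, -0.005])]

# Closed-form least squares for a single scalar: rho = sum(B*E) / sum(E*E).
rho_est = sum(b * e for b, e in zip(B, E)) / sum(e * e for e in E)
print(rho_est)   # close to 0.55 despite the noise
```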

http://graphics.stanford.edu/~ravir/research.html