
Real-Time Rendering——Chapter 9 Global Illumination

Radiance is the final quantity computed by the rendering process.

So far, we have been using the reflectance equation to compute it:

$$L_o(p, v) = \int_{\Omega} f(l, v) \otimes L_i(p, l) \cos\theta_i \, d\omega_i,$$

where Lo(p, v) is the outgoing radiance from the surface location p in the view direction v; Ω is the hemisphere of directions above p; f(l,v) is the BRDF evaluated for v and the current incoming direction l; Li(p, l) is the incoming radiance into p from l; ⊗ is the piecewise vector multiplication operator (used because both f(l, v) and Li(p, l) vary with wavelength, so are represented as RGB vectors); and θi is the angle between l and the surface normal n. The integration is over all possible l in Ω.

The reflectance equation is a restricted special case of the full rendering equation, presented by Kajiya in 1986. The reflectance equation above falls short in some situations, so we use the fuller formula below. Different forms have been used for the rendering equation; we will use the following one:

$$L_o(p, v) = L_e(p, v) + \int_{\Omega} f(l, v) \otimes L_o(r(p, l), -l) \cos\theta_i \, d\omega_i,$$

where the new elements are Le(p, v) (the emitted radiance from the surface location p in direction v; the subscript e stands for "emitted"), and the following replacement:

$$L_i(p, l) = L_o(r(p, l), -l).$$

This replacement means that the incoming radiance into location p from direction l is equal to the outgoing radiance from some other point in the opposite direction −l. In this case, the "other point" is defined by the ray casting function r(p, l). This function returns the location of the first surface point hit by a ray cast from p in direction l.


The meaning of the rendering equation is straightforward. To shade a surface location p, we need to know the outgoing radiance Lo leaving p in the view direction v. This is equal to the emitted radiance Le plus the reflected radiance. Emission from light sources has been studied in previous chapters, as has reflectance. Even the ray casting operator is not as unfamiliar as it may seem: the Z-buffer computes it for rays cast from the eye into the scene.

The idea is somewhat recursive: the incoming radiance Li(p, l) must be the radiance Lo(r(p, l), −l) leaving some other point.

What does this mean? Suppose light arrives at point p from direction l. That incoming light must be radiance leaving a point on some other object, as in the figure below:

[Figure: the radiance entering p comes from the radiance leaving another point Q.]

The radiance entering p must come from the radiance leaving point Q.

And the radiance leaving Q in turn comes from radiance leaving yet other points, and so on, recursively.

The only new term is Lo(r(p, l), −l), which makes explicit the fact that the incoming radiance into one point must be outgoing from another point; this is exactly the idea described above. Unfortunately, it is a recursive term. That is, it is computed by yet another summation over outgoing radiance from locations r(r(p, l), l′). These in turn need to compute the outgoing radiance from locations r(r(r(p, l), l′), l′′), ad infinitum (and it is amazing that the real world can compute all this in real time).

The real world is remarkably powerful: it carries this recursion out to infinity.

We know this intuitively: lights illuminate a scene, and the photons bounce around it; at each collision they are absorbed, reflected, and refracted in a variety of ways. The rendering equation is significant in that it sums up all possible paths in a simple(-looking) equation.
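To make the recursion concrete, here is a minimal C++ sketch of the rendering equation evaluated by Monte Carlo sampling, in the spirit of a path tracer (not from the book). The scene interface (`Surface`, `rayCast`, `sampleHemisphere`) is hypothetical, and a Lambertian BRDF stands in for f(l, v); the point is the shape of the recursion: Lo equals Le plus a sampled integral whose Li comes from recursively evaluating Lo at r(p, l).

```cpp
// A minimal sketch of the recursive rendering equation (not production code).
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; } // RGB (piecewise) product
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 negate(Vec3 a) { return {-a.x, -a.y, -a.z}; }

// Hypothetical scene interface supplied by the integrator.
struct Surface { Vec3 p, n, emitted, albedo; bool hit; };
Surface rayCast(Vec3 p, Vec3 l);                 // r(p, l): first surface hit from p along l
Vec3 sampleHemisphere(Vec3 n, float& cosTheta);  // uniform direction above n, with cos(theta_i)

constexpr float kPi = 3.14159265f;

// Lo(p, v) = Le(p, v) + integral of f(l, v) (x) Lo(r(p, l), -l) cos(theta_i) dl,
// estimated here with a single Monte Carlo sample per bounce.
Vec3 outgoingRadiance(const Surface& s, Vec3 v, int depth) {
    Vec3 lo = s.emitted;                         // Le(p, v)
    if (depth == 0) return lo;                   // the real world recurses forever; we cannot
    float cosTheta;
    Vec3 l = sampleHemisphere(s.n, cosTheta);
    Surface other = rayCast(s.p, l);             // the "other point" r(p, l)
    if (!other.hit) return lo;
    Vec3 li = outgoingRadiance(other, negate(l), depth - 1); // Li(p, l) = Lo(r(p, l), -l)
    Vec3 f = s.albedo * (1.0f / kPi);            // Lambertian BRDF standing in for f(l, v); ignores v
    float invPdf = 2.0f * kPi;                   // uniform hemisphere pdf = 1 / (2*pi)
    return lo + f * li * (cosTheta * invPdf);
}
```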

In real-time rendering, using just a local lighting model is the default. That is, only the surface data at the visible point is needed to compute the lighting. This is a strength of the GPU pipeline: primitives can be generated, processed, and then discarded. Transparency, reflections, and shadows are examples of global illumination algorithms, in that they use information from objects other than the one being illuminated. These effects contribute greatly to increasing the realism in a rendered image, and also provide cues that help the viewer to understand spatial relationships.

One way to think of the problem of illumination is by the paths the photons take. In the local lighting model, photons travel from the light to a surface (ignoring intervening objects), then to the eye. Shadowing techniques take into account these intervening objects' direct effects. With environment mapping, illumination travels from light sources to distant objects, then to local shiny objects, which mirror-reflect this light to the eye. Irradiance maps simulate the photons again: they first travel to distant objects, but then light from all these objects is weighted and summed to compute the effect on the diffuse surface, which in turn is seen by the eye.

9.2 Ambient Occlusion

When a light source covers a large solid angle, it casts a soft shadow. Ambient light, which illuminates evenly from all directions (see Section 8.3), casts the softest shadows. Since ambient light lacks any directional variation, shadows are especially important; in their absence, objects appear flat (see the left side of Figure 9.36).


Figure 9.36. A diffuse object lit evenly from all directions. On the left, the object is rendered without any shadowing or interreflections; no details are visible, other than the outline. In the center, the object has been rendered with ambient occlusion. The image on the right was generated with a full global illumination simulation.

The shadowing of ambient light is referred to as ambient occlusion. Unlike other types of shadowing, ambient occlusion does not depend on light direction, so it can be precomputed for static objects. This option will be discussed in more detail in Section 9.10.1. Here we will focus on techniques for computing ambient occlusion dynamically, which is useful for animated scenes or deforming objects.

9.2.1 Ambient Occlusion Theory

For simplicity, we will focus on Lambertian surfaces. The outgoing radiance Lo from such surfaces is proportional to the surface irradiance E. Irradiance is the cosine-weighted integral of incoming radiance, and in the general case depends on the surface position p and the surface normal n. Ambient light is defined as constant incoming radiance Li(l) = LA for all incoming directions l. This results in the following equation for computing irradiance:

$$E(p, n) = \int_{\Omega} L_A \cos\theta_i \, d\omega_i = \pi L_A, \qquad (9.12)$$

where the integral is performed over the hemisphere Ω of possible incoming directions. It can be seen that Equation 9.12 yields irradiance values unaffected by surface position and orientation, leading to a flat appearance.

Equation 9.12 does not take visibility into account. Some directions will be blocked by other objects in the scene, or by other parts of the same object (see, for example, point p2 in Figure 9.37). These directions will have some other incoming radiance, not LA. Assuming (for simplicity) that blocked directions have zero incoming radiance results in the following equation (first proposed by Cook and Torrance):

$$E(p, n) = L_A \int_{\Omega} v(p, l) \cos\theta_i \, d\omega_i,$$

Figure 9.37. An object under ambient illumination. Three points (p0, p1, and p2) are shown. On the left, blocked directions are shown as black rays ending in intersection points (black circles). Unblocked directions are shown as arrows, colored according to the cosine factor, so arrows closer to the surface normal are lighter. On the right, each blue arrow shows the average unoccluded direction, or bent normal.

where v(p, l) is a visibility function that equals 0 if a ray cast from p in the direction of l is blocked, and 1 if it is not. The ambient occlusion value kA is defined thus:

$$k_A(p) = \frac{1}{\pi} \int_{\Omega} v(p, l) \cos\theta_i \, d\omega_i.$$

Possible values for kA range from 0 (representing a fully occluded surface point, only possible in degenerate cases) to 1 (representing a fully open surface point with no occlusion). Once kA is defined, the equation for ambient irradiance in the presence of occlusion is simply:

$$E(p, n) = k_A(p) \, \pi L_A.$$
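A minimal sketch of estimating kA by casting rays, assuming a hypothetical scene query `visible(p, l)` standing in for v(p, l). The sample directions are cosine-weighted, so the cos θi/π factor of the integral is folded into the sample distribution and the estimator reduces to the fraction of unblocked rays.

```cpp
// Sketch: Monte Carlo estimate of the ambient occlusion factor kA at point p.
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
bool visible(const Vec3& p, const Vec3& l);  // hypothetical v(p, l) scene query

// Cosine-weighted direction on the hemisphere above unit normal n.
Vec3 cosineSample(const Vec3& n, std::mt19937& rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    float u = u01(rng), v = u01(rng);
    float r = std::sqrt(u), phi = 2.0f * 3.14159265f * v;
    float x = r * std::cos(phi), y = r * std::sin(phi), z = std::sqrt(1.0f - u);
    // Build an orthonormal basis (b1, b2, n) and rotate the local sample into it.
    Vec3 t = std::fabs(n.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 b1 = {n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x}; // n x t
    float len = std::sqrt(b1.x * b1.x + b1.y * b1.y + b1.z * b1.z);
    b1 = {b1.x / len, b1.y / len, b1.z / len};
    Vec3 b2 = {n.y * b1.z - n.z * b1.y, n.z * b1.x - n.x * b1.z, n.x * b1.y - n.y * b1.x};
    return {b1.x * x + b2.x * y + n.x * z,
            b1.y * x + b2.y * y + n.y * z,
            b1.z * x + b2.z * y + n.z * z};
}

// kA = (1/pi) * integral of v(p, l) cos(theta_i) dl. With cosine-weighted
// samples (pdf = cos(theta)/pi), this is just the fraction of unblocked rays.
float ambientOcclusion(const Vec3& p, const Vec3& n, int numRays, std::mt19937& rng) {
    int unblocked = 0;
    for (int i = 0; i < numRays; ++i)
        if (visible(p, cosineSample(n, rng))) ++unblocked;
    return float(unblocked) / float(numRays);
}
```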

9.2.2 Shading with Ambient Occlusion

Shading with ambient occlusion is best understood in the context of the full shading equation, which includes the effects of both direct and indirect (ambient) light.

9.3 Reflections

Environment mapping techniques for providing reflections of objects at a distance were covered in Sections 8.4 and 8.5, with reflected rays computed using Equation 7.30 on page 230. The limitation of such techniques is that they work on the assumption that the reflected objects are located far from the reflector, so that the same texture can be used by all reflection rays. Generating planar reflections of nearby objects will be presented in this section, along with methods for rendering frosted glass and handling curved reflections.

9.3.1 Planar Reflections

Planar reflection, by which we mean reflection off a flat surface such as a mirror, is a special case of reflection off arbitrary surfaces. As often occurs with special cases, planar reflections are easier to implement and can execute more rapidly than general reflections.

An ideal reflector follows the law of reflection, which states that the angle of incidence is equal to the angle of reflection. That is, the angle between the incident ray and the normal is equal to the angle between the reflected ray and the normal. This is depicted in Figure 9.41, which illustrates a simple object that is reflected in a plane. The figure also shows an "image" of the reflected object. Due to the law of reflection, the reflected image of the object is simply the object itself, physically reflected through the plane. That is, instead of following the reflected ray, we could follow the incident ray through the reflector and hit the same point, but on the reflected object.


Figure 9.41. Reflection in a plane, showing angle of incidence and reflection, the reflected geometry, and the reflector.
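In practice, the "object physically reflected through the plane" is often produced by rendering the scene a second time with a mirror matrix prepended to the view transform. A minimal sketch, assuming the plane is given as n · x + d = 0 with unit normal n; the row-major 4×4 layout and names are ours, not the book's.

```cpp
// Sketch: build the matrix that mirrors geometry through the plane
// n.x + d = 0 (n must be unit length). Rendering the scene with this matrix
// prepended to the view transform produces the reflected "image" object.
struct Mat4 { float m[4][4]; };

Mat4 reflectionMatrix(float nx, float ny, float nz, float d) {
    Mat4 r = {{{1 - 2 * nx * nx,    -2 * nx * ny,    -2 * nx * nz, -2 * d * nx},
               {   -2 * ny * nx, 1 - 2 * ny * ny,    -2 * ny * nz, -2 * d * ny},
               {   -2 * nz * nx,    -2 * nz * ny, 1 - 2 * nz * nz, -2 * d * nz},
               {              0,               0,               0,           1}}};
    return r;
}
// Note: a reflection flips handedness, so the triangle winding used for
// backface culling must be reversed (or culling flipped) for the mirrored pass.
```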

9.4 Transmittance

As discussed in Section 5.7, a transparent surface can be treated as a blend color or a filter color. When blending, the transmitter's color is mixed with the incoming color from the objects seen through the transmitter. The over operator uses the α value as an opacity to blend these two colors. The transmitter color is multiplied by α, the incoming color by 1 − α, and the two summed. So, for example, a higher opacity means more of the transmitter's color and less of the incoming color affects the pixel. While this gives a visual sense of transparency to a surface [554], it has little physical basis.

Multiplying the incoming color by a transmitter's filter color is more in keeping with how the physical world works. Say a blue-tinted filter is attached to a camera lens. The filter absorbs or reflects light in such a way that its spectrum resolves to a blue color. The exact spectrum is usually unimportant, so using the RGB equivalent color works fairly well in practice. For a thin object like a filter, stained glass, windows, etc., we simply ignore the thickness and assign a filter color.

For objects that vary in thickness, the amount of light absorption can be computed using the Beer-Lambert Law:

$$T = e^{-\alpha' c d},$$

where α′ is the absorption coefficient, c is the concentration of the absorbing material, and d is the distance traveled through it.
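As a sketch, the law maps directly to one exponential per RGB channel; the `absorb` parameter below (absorption coefficient times concentration, per channel) is an illustrative input, not a value from the book.

```cpp
// Sketch: Beer-Lambert attenuation of light through 'distance' units of a
// transparent medium, applied independently per RGB channel.
#include <cmath>

struct RGB { float r, g, b; };

// absorb = absorption coefficient * concentration for each channel (illustrative).
RGB transmit(RGB incoming, RGB absorb, float distance) {
    return {incoming.r * std::exp(-absorb.r * distance),
            incoming.g * std::exp(-absorb.g * distance),
            incoming.b * std::exp(-absorb.b * distance)};
}
```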

9.5 Refractions

For simple transmittance, we assume that the incoming light comes from directly beyond the transmitter. This is a reasonable assumption when the front and back surfaces of the transmitter are parallel and the thickness is not great, e.g., for a pane of glass. For other transparent media, the index of refraction plays an important part. Snell's Law, which describes how light changes direction when a transmitter's surface is encountered, is described in Section 7.5.3.

Bec [78] presents an efficient method of computing the refraction vector. For readability (because n is traditionally used for the index of refraction in Snell's equation), define N as the surface normal and L as the direction to the light:

$$t = (w - k)N - nL,$$

where n = n1/n2 is the relative index of refraction, and

$$w = n(L \cdot N), \qquad k = \sqrt{1 + (w - n)(w + n)}.$$

The resulting refraction vector t is returned normalized.

This evaluation can nonetheless be expensive. Oliveira [962] notes that because the contribution of refraction drops off near the horizon, an approximation for incoming angles near the normal direction is

$$t = -cN - L,$$

where c is somewhere around 1.0 for simulating water. Note that the resulting vector t needs to be normalized when using this formula.
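Both the exact and approximate forms translate directly to code. A sketch, assuming unit vectors N and L as defined above; the total-internal-reflection check (for when the quantity under the square root goes negative) is our addition, and the approximate form follows the reconstruction above.

```cpp
// Sketch: Bec's refraction vector and the cheap near-normal approximation.
// N = unit surface normal, L = unit direction to the light, n = n1/n2.
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 a) { float s = 1.0f / std::sqrt(dot(a, a)); return a * s; }

// t = (w - k)N - nL, with w = n(L.N) and k = sqrt(1 + (w - n)(w + n)).
// Returns nullopt on total internal reflection (our addition: k would be imaginary).
std::optional<Vec3> refractBec(Vec3 N, Vec3 L, float n) {
    float w = n * dot(L, N);
    float k2 = 1.0f + (w - n) * (w + n);
    if (k2 < 0.0f) return std::nullopt;      // total internal reflection
    return N * (w - std::sqrt(k2)) - L * n;  // result is already normalized
}

// Approximation for directions near the normal: t = -cN - L, with c around
// 1.0 for water. The result must be normalized, as noted in the text.
Vec3 refractApprox(Vec3 N, Vec3 L, float c) {
    return normalize((N * -c) - L);
}
```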

The index of refraction varies with wavelength. That is, a transparent medium will bend different colors of light at different angles. This phenomenon is called dispersion, and explains why prisms work and why rainbows occur. Dispersion can cause a problem in lenses, called chromatic aberration. In photography, this phenomenon is called purple fringing, and can be particularly noticeable along high-contrast edges in daylight. In computer graphics we normally ignore this effect, as it is usually an artifact to be avoided. Additional computation is needed to properly simulate the effect, as each light ray entering a transparent surface generates a set of light rays that must then be tracked. As such, normally a single refracted ray is used. In practical terms, water has an index of refraction of approximately 1.33, glass typically around 1.5, and air essentially 1.0.

Some techniques for simulating refraction are somewhat comparable to those of reflection. However, for refraction through a planar surface, it is not as straightforward as just moving the viewpoint. Diefenbach [252] discusses this problem in depth, noting that a homogeneous transform matrix is needed to properly warp an image generated from a refracted viewpoint. In a similar vein, Vlachos [1306] presents the shears necessary to render the refraction effect of a fish tank.

Section 9.3.1 gave some techniques where the scene behind a refractor

was used as a limited-angle environment map. A more general way to give an impression of refraction is to generate a cubic environment map from the refracting object's position. The refracting object is then rendered, accessing this EM by using the refraction direction computed for the front-facing surfaces. An example is shown in Figure 9.49. These techniques give the impression of refraction, but usually bear little resemblance to physical reality. The refraction ray gets redirected when it enters the transparent solid, but the ray never gets bent the second time, when it is supposed to leave this object; this backface never comes into play. This flaw sometimes does not matter, because the eye is forgiving about what the right appearance should be.

Figure 9.49. Refraction and reflection by a glass ball of a cubic environment map, with the map itself used as a skybox background. (Image courtesy of NVIDIA Corporation.)

Oliveira and Brauwers [965] improve upon this simple approximation by taking into account refraction by the backfaces. In their scheme, the backfaces are rendered and the depths and normals stored. The frontfaces are then rendered and rays are refracted from these faces. The idea is to find where on the stored backface data these refracted rays fall. Once the backface texel is found where the ray exits, the backface's data at that point properly refracts the ray, which is then used to access the environment map. The hard part is to find this backface pixel. The procedure they use to trace the rays is in the spirit of relief mapping (Section 6.7.4). The backface z-depths are treated like a heightfield, and each ray walks through this buffer until an intersection is found. Depth peeling can be used for multiple refractions. The main drawback is that total internal reflection cannot be handled. Using Heckbert's regular expression notation [519], described at the beginning of this chapter, the paths simulated are then L(D|S)SSE: The eye sees a refractive surface, a backface then also refracts as the ray leaves, and some surface in an environment map is then seen through the transmitter.

Davis and Wyman [229] take this relief mapping approach a step farther, storing both back and front faces as separate heightfield textures. Nearby objects behind the transparent object can be converted into color and depth maps so that the refracted rays treat these as local objects. An example is shown in Figure 9.50. In addition, rays can have multiple bounces, and total internal reflection can be handled. This gives a refractive light path of L(D|S)S+SE. A limitation of all of these image-space refraction schemes is that if a part of the model is rendered offscreen, the clipped data cannot refract (since it does not exist).

Simpler forms of refraction ray tracing can be used directly for basic geometric objects. For example, Vlachos and Mitchell [1303] generate the refraction ray and use ray tracing in a pixel shader program to find which wall of the water’s container is hit. See Figure 9.51.

9.7.1 Subsurface Scattering Theory

Figure 9.54 shows light being scattered through an object. Scattering causes incoming light to take many different paths through the object. Since it is impractical to simulate each photon separately (even for offline rendering), the problem must be solved probabilistically, by integrating over possible paths, or by approximating such an integral. Besides scattering, light traveling through the material also undergoes absorption. The absorption obeys an exponential decay law with respect to the total travel distance through the material (see Section 9.4). Scattering behaves similarly. The probability of the light not being scattered obeys an exponential decay law with distance.

The absorption decay constants are often spectrally variant (have different values for R, G, and B). In contrast, the scattering probability constants usually do not have a strong dependence on wavelength. That said, in certain cases, the discontinuities causing the scattering are on the order of a light wavelength or smaller. In these circumstances, the scattering probability does have a significant dependence on wavelength. Scattering from individual air molecules is an example. Blue light is scattered more than red light, which causes the blue color of the daytime sky. A similar effect causes the blue colors often found in bird feathers.

One important factor that distinguishes the various light paths shown in Figure 9.54 is the number of scattering events. For some paths, the light leaves the material after being scattered once; for others, the light is scattered twice, three times, or more. Scattering paths are commonly grouped into single scattering and multiple scattering paths. Different rendering techniques are often used for each group.

9.7.2 Wrap Lighting

For many solid materials, the distances between scattering events are short enough that single scattering can be approximated via a BRDF. Also, for some materials, single scattering is a relatively weak part of the total scattering effect, and multiple scattering predominates; skin is a notable example. For these reasons, many subsurface scattering rendering techniques focus on simulating multiple scattering.

Perhaps the simplest of these is wrap lighting. Wrap lighting was discussed on page 294 as an approximation of area light sources. When used to approximate subsurface scattering, it can be useful to add a color shift. This accounts for the partial absorption of light traveling through the material. For example, when rendering skin, a red color shift could be used.

https://blog.csdn.net/pianpiansq/article/details/74453602

When used in this way, wrap lighting attempts to model the effect of multiple scattering on the shading of curved surfaces. The "leakage" of light from adjacent points into the currently shaded point softens the transition area from light to dark where the surface curves away from the light source. Kolchin points out that this effect depends on surface curvature, and he derives a physically based version. Although the derived expression is somewhat expensive to evaluate, the ideas behind it are useful.
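A minimal sketch of wrap lighting with a color shift. The remapping of the cosine term is the standard wrap formula from page 294; the way the shift color is applied to the wrapped-in region is one plausible variant, not the book's exact method, and the red tint for skin is illustrative.

```cpp
// Sketch: wrap lighting with a color shift, as a cheap stand-in for multiple
// scattering. 'wrap' in [0,1] softens the light-to-dark transition;
// 'shiftColor' tints the wrapped-in region (e.g., reddish for skin).
#include <algorithm>

struct RGB { float r, g, b; };

RGB wrapDiffuse(float NdotL, float wrap, RGB lightColor, RGB shiftColor) {
    // Standard wrap: remap the cosine from [-wrap, 1] to [0, 1].
    float wrapped = std::max(0.0f, (NdotL + wrap) / (1.0f + wrap));
    // Apply the shift color only where wrap lighting added illumination that
    // plain Lambertian shading would not have (around and past the terminator).
    float shift = std::max(0.0f, wrapped - std::max(0.0f, NdotL));
    return {lightColor.r * wrapped + shiftColor.r * shift,
            lightColor.g * wrapped + shiftColor.g * shift,
            lightColor.b * wrapped + shiftColor.b * shift};
}
```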

9.7.3 Normal Blurring

Stam points out that multiple scattering can be modeled as a diffusion process. Jensen et al. further develop this idea to derive an analytical BSSRDF model. The diffusion process has a spatial blurring effect on the outgoing radiance.

This blurring affects only diffuse reflectance. Specular reflectance occurs at the material surface and is unaffected by subsurface scattering. Since normal maps often encode small-scale variation, a useful trick for subsurface scattering is to apply normal maps to only the specular reflectance. The smooth, unperturbed normal is used for the diffuse reflectance. Since there is no added cost, it is often worthwhile to apply this technique when using other subsurface scattering methods.

For many materials, multiple scattering occurs over a relatively small distance. Skin is an important example, where most scattering takes place over a distance of a few millimeters. For such materials, the trick of not perturbing the diffuse shading normal may be sufficient by itself. Ma et al. extend this method, based on measured data. They measured reflected light from scattering objects and found that while the specular reflectance is based on the geometric surface normals, subsurface scattering makes diffuse reflectance behave as if it uses blurred surface normals. Furthermore, the amount of blurring can vary over the visible spectrum. They propose a real-time shading technique using independently acquired normal maps for the specular reflectance and for the R, G, and B channels of the diffuse reflectance. Since these diffuse normal maps typically resemble blurred versions of the specular map, it is straightforward to modify this technique to use a single normal map, while adjusting the mipmap level. This adjustment should be performed similarly to the adjustment of environment map mipmap levels discussed on page 310.
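In shader terms, the trick amounts to fetching two normals per sample: a detailed one for specular and a blurred one for diffuse. A sketch, with a hypothetical texture interface standing in for shader code; approximating the blurred normal by biasing the mip level of the same map follows the single-map variant described above.

```cpp
// Sketch: use the detailed normal only for specular shading, and a blurred
// normal for diffuse, approximated here by sampling the same normal map at a
// higher mip level. NormalMap::sampleLevel is a hypothetical interface.
struct Vec3 { float x, y, z; };

struct NormalMap {
    // Hypothetical: sample the normal map at uv with an explicit mip level.
    Vec3 sampleLevel(float u, float v, float mipLevel) const;
};

struct ShadingNormals { Vec3 specular, diffuse; };

ShadingNormals fetchNormals(const NormalMap& map, float u, float v,
                            float baseMip, float diffuseBlurBias) {
    ShadingNormals out;
    out.specular = map.sampleLevel(u, v, baseMip);                   // small-scale detail
    out.diffuse  = map.sampleLevel(u, v, baseMip + diffuseBlurBias); // blurred, as if scattered
    return out;
}
// Ma et al. go further and use different blur amounts for the R, G, and B
// diffuse channels; that would mean three biased lookups instead of one.
```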

9.7.4 Texture Space Diffusion

Blurring the diffuse normals accounts for some visual effects of multiple scattering, but not for others, such as softened shadow edges. Borshukov and Lewis popularized the concept of texture space diffusion. They formalize the idea of multiple scattering as a blurring process. First, the surface irradiance (diffuse lighting) is rendered into a texture. This is done by using texture coordinates as positions for rasterization (the real positions are interpolated separately for use in shading). This texture is blurred, and then used for diffuse shading when rendering. A sketch of the unwrap step follows the links below.

https://www.cnblogs.com/psklf/p/9526690.html

https://www.cnblogs.com/zhanlang96/p/4941531.html
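A sketch of the key "unwrap" step described above, written as C++ that mimics a vertex shader; the types and semantics are hypothetical. The vertex stage emits the texture coordinate as the output position, so irradiance is rasterized into the object's UV layout, while the true position is passed along for lighting.

```cpp
// Sketch of the "unwrap" vertex transform used in texture space diffusion.
struct Float2 { float x, y; };
struct Float3 { float x, y, z; };
struct Float4 { float x, y, z, w; };

struct VertexIn  { Float3 position, normal; Float2 uv; };
struct VertexOut { Float4 clipPos; Float3 worldPos, worldNormal; };

VertexOut unwrapVertex(const VertexIn& v) {
    VertexOut out;
    // Map uv in [0,1]^2 to clip space [-1,1]^2: each triangle lands at its
    // spot in the irradiance texture instead of its place on screen.
    out.clipPos = {v.uv.x * 2.0f - 1.0f, v.uv.y * 2.0f - 1.0f, 0.0f, 1.0f};
    out.worldPos = v.position;     // real position, interpolated for lighting
    out.worldNormal = v.normal;
    return out;
}
// The pixel stage computes irradiance from worldPos/worldNormal and writes it
// to the texture; a blur pass and an ordinary diffuse lookup then follow.
```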

9.7.5 Depth-Map Techniques

The techniques discussed so far model scattering over only relatively small distances. Other techniques are needed for materials exhibiting large-scale scattering. Many of these focus on large-scale single scattering, which is easier to model than large-scale multiple scattering.

The ideal simulation for large-scale single scattering can be seen on the left side of Figure 9.56. The light paths change direction on entering and exiting the object, due to refraction. The effects of all the paths need to be summed to shade a single surface point. Absorption also needs to be taken into account; the amount of absorption in a path depends on its length inside the material. Computing all these refracted rays for a single shaded point is expensive even for offline renderers, so the refraction on entering the material is usually ignored, and only the change in direction on exiting the material is taken into account. This approximation is shown in the center of Figure 9.56. Since the rays cast are always in the direction of the light, Hery points out that light space depth maps (typically used for shadowing) can be used instead of ray casting. Multiple points (shown in yellow) on the refracted view ray are sampled, and a lookup into the light space depth map, or shadow map, is performed for each one. The result can be projected to get the position of the red intersection point. The sum of the distances from red to yellow and yellow to blue points is used to determine the absorption. For media that scatter light anisotropically, the scattering angle also affects the amount of scattered light.


Figure 9.56. On the left, the ideal situation, in which light refracts when entering the object, then all scattering contributions that would properly refract upon leaving the object are computed. The middle shows a computationally simpler situation, in which the rays refract only on exit. The right shows a much simpler, and therefore faster, approximation, where only a single ray is considered.

Performing depth map lookups is faster than ray casting, but the multiple samples required make Hery's method too slow for most real-time rendering applications. Green [447] proposes a faster approximation, shown on the right side of Figure 9.56. Instead of multiple samples along the refracted ray, a single depth map lookup is performed at the shaded point. Although this method is somewhat nonphysical, its results can be convincing. One problem is that details on the back side of the object can show through, since every change in object thickness will directly affect the shaded color. Despite this, Green's approximation is effective enough to be used by Pixar for films such as Ratatouille [460]; Pixar refers to this technique as Gummi Lights. Another problem (shared with Hery's implementation, but not Pixar's) is that the depth map should not contain multiple objects, or highly nonconvex objects. This is because it is assumed that the entire path between the shaded (blue) point and the red intersection point lies within the object.
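A sketch of Green's single-lookup approximation, assuming a light-space depth map that stores the distance from the light to the nearest (entry) surface; the names and the per-channel exponential falloff (per Section 9.4) are illustrative.

```cpp
// Sketch of Green's approximation: estimate the distance light travels inside
// the object as (distance from light to shaded point) - (depth stored in the
// light-space depth map), then attenuate exponentially per RGB channel.
#include <cmath>
#include <algorithm>

struct RGB { float r, g, b; };

// lightDepthAtShadedPoint: distance from the light to the shaded (exit) point.
// depthMapSample: distance from the light to the first (entry) surface, looked
// up in the light-space depth map at the shaded point's projected position.
RGB scatteredLight(float lightDepthAtShadedPoint, float depthMapSample,
                   RGB lightColor, RGB sigma /* per-channel extinction */) {
    float thickness = std::max(0.0f, lightDepthAtShadedPoint - depthMapSample);
    return {lightColor.r * std::exp(-sigma.r * thickness),
            lightColor.g * std::exp(-sigma.g * thickness),
            lightColor.b * std::exp(-sigma.b * thickness)};
}
```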

Modeling large-scale multiple scattering in real time is quite difficult, since each surface point can be influenced by light incoming from any other surface point. Dachsbacher and Stamminger [218] propose an extension of the light space depth map method, called translucent shadow maps, for modeling multiple scattering. Additional information, such as irradiance and surface normal, is stored in light space textures. Several samples are taken from these textures (as well as from the depth map) and combined to form an estimation of the scattered radiance. A modification of this technique was used in NVIDIA's system [246, 247, 248]. Mertens et al. [859] propose a similar method, but using a texture in screen space, rather than light space.

9.7.6 Other Methods

Several techniques assume that the scattering object is rigid and precalculate the proportion of light scattered among different parts of the object [499, 500, 757]. These are similar in principle to precomputed radiance transfer techniques (discussed in Section 9.11). Precomputed radiance transfer can be used to model small- or large-scale multiple scattering on rigid objects, under low-frequency distant lighting. Isidoro [593] discusses several practical issues relating to the use of precomputed radiance transfer for subsurface scattering, and also details how to combine it with other subsurface scattering methods, such as texture space diffusion.

Modeling large-scale multiple scattering is even more difficult in the case of deforming objects. Mertens et al. [860] present a technique based on a hierarchical surface representation. Scattering factors are dynamically computed between hierarchical elements, based on distance. A GPU implementation is not given. In contrast, Hoberock [553] proposes a GPU-based method derived from Bunnell's [146] hierarchical disk-based surface model, previously used for dynamic ambient occlusion.

9.8 Full Global Illumination

So far, this chapter has presented a "piecemeal" approach to solving the rendering equation. Individual parts or special cases of the rendering equation were solved with specialized algorithms. In this section, we will present algorithms that are designed to solve most or all of the rendering equation. We will refer to these as full global illumination algorithms.

In the general case, full global illumination algorithms are too computationally expensive for real-time applications. Why do we discuss them in a book about real-time rendering? The first reason is that in static or partially static scenes, full global illumination algorithms can be run as a preprocess, storing the results for later use during rendering. This is a very popular approach in games, and will be discussed in detail in Sections 9.9, 9.10, and 9.11.

The second reason is that under certain restricted circumstances, full global illumination algorithms can be run at rendering time to produce particular visual effects. This is a growing trend as graphics hardware becomes more powerful and flexible.

Radiosity and ray tracing were the first two algorithms introduced for global illumination in computer graphics, and they are still in use today. We will also present some other techniques, including some intended for real-time implementation on the GPU.

9.8.1 Radiosity

The importance of indirect lighting to the appearance of a scene was discussed in Chapter 8. Multiple bounces of light among surfaces cause a subtle interplay of light and shadow that is key to a realistic appearance. Interreflections also cause color bleeding, where the color of an object appears on adjacent objects. For example, walls will have a reddish tint where they are adjacent to a red carpet. See Figure 9.57.


Figure 9.57. Color bleeding. The light shines on the beds and carpets, which in turn bounce light not only to the eye but to other surfaces in the room, which pick up their color.

Radiosity was the first computer graphics technique developed to simulate bounced light between diffuse surfaces. There have been whole books written on this algorithm, but the basic idea is relatively simple. Light bounces around an environment; you turn a light on and the illumination quickly reaches equilibrium. In this stable state, each surface can be considered as a light source in its own right. When light hits a surface, it can be absorbed, diffusely reflected, or reflected in some other fashion (specularly, anisotropically, etc.).

(Anisotropy, informally, means that a material exhibits different properties in different directions; the English Wikipedia entry explains it well, with apt examples from many fields. For instance, a good metallic conductor conducts electricity no matter how it is connected, forward, reversed, or from any side, and its conductivity constant barely changes, so with respect to conduction it is isotropic. Other materials, such as certain resistive components, conduct well in one direction but insulate, or present a very high resistance, in the other; their conductivity constants differ greatly by direction, so with respect to conduction they are anisotropic. Whether a material is anisotropic or isotropic is an intrinsic property, closely tied to its scale, internal atomic arrangement, molecular interactions, and so on.)

https://www.zhihu.com/question/20583248/answer/15551035

Basic radiosity algorithms first make the simplifying assumption that all indirect light is from diffuse surfaces. This assumption fails for places with polished marble floors or large mirrors on the walls, but for most architectural settings it is a reasonable approximation. The BRDF of a diffuse surface is a simple, uniform hemisphere, so the surface's radiance in any direction is proportional purely to the irradiance multiplied by the reflectance of the surface. The outgoing radiance is then

$$L = \frac{\rho E}{\pi},$$

where E is the irradiance and ρ is the reflectance of the surface. Note that, though the hemisphere covers 2π steradians, the integration of the cosine term for surface irradiance brings this divisor down to π.

To begin the process, each surface is represented by a number of patches (e.g., polygons, or texels on a texture). The patches do not have to match one-for-one with the underlying polygons of the rendered surface. There can be fewer patches, as for a mildly curving spline surface, or more patches can be generated during processing, in order to capture features such as shadow edges.

To create a radiosity solution, the basic idea is to create a matrix of form factors among all the patches in a scene. Given some point or area on the surface (such as at a vertex or in the patch itself), imagine a hemisphere above it. Similar to environment mapping, the entire scene can be projected onto this hemisphere. The form factor is a purely geometric value denoting the proportion of how much light travels directly from one patch to another. A significant part of the radiosity algorithm is accurately determining the form factors between the receiving patch and each other patch in the scene. The area, distance, and orientations of both patches affect this value.

The basic form of a differential form factor, fij, between a surface point with differential area dai, to another surface point with daj, is

$$f_{ij} = h_{ij} \, \frac{\cos\theta_i \cos\theta_j}{\pi d^2} \, da_j,$$

where θi and θj are the angles between the ray connecting the two points and the two surface normals. If the receiving patch faces away from the viewed patch, or vice versa, the form factor is 0, since no light can travel from one to the other. Furthermore, hij is a visibility factor, which is either 0 (not visible) or 1 (visible). This value describes whether the two points can "see" each other, and d is the distance between the two points. See Figure 9.58.


Figure 9.58. The form factor between two surface points.
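The differential form factor maps directly to code when the patches are treated as points with small areas. A sketch, with a hypothetical `visible` query supplying hij; this point-to-point form is only a reasonable approximation when d is large relative to the patch sizes.

```cpp
// Sketch: point-to-point form factor approximation between patch i and patch j.
// ni and nj must be unit normals; visible(pi, pj) supplies h_ij (0 or 1).
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
bool visible(Vec3 a, Vec3 b);  // hypothetical occlusion query

// f_ij = h_ij * cos(theta_i) * cos(theta_j) / (pi * d^2) * da_j
float formFactor(Vec3 pi, Vec3 ni, Vec3 pj, Vec3 nj, float areaJ) {
    Vec3 dvec = pj - pi;
    float d2 = dot(dvec, dvec);
    float d = std::sqrt(d2);
    float cosThetaI = dot(ni, dvec) / d;   // angle at the receiving patch
    float cosThetaJ = -dot(nj, dvec) / d;  // angle at the viewed patch
    if (cosThetaI <= 0.0f || cosThetaJ <= 0.0f) return 0.0f; // facing away
    if (!visible(pi, pj)) return 0.0f;     // h_ij = 0
    return cosThetaI * cosThetaJ / (3.14159265f * d2) * areaJ;
}
```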
