http://ogldev.atspace.co.uk/www/tutorial46/tutorial46.html

in the previous tutorial we studied the screen space ambient occlusion algorithm.
we used a geometry buffer which contained the view space position of all the pixels as a first step in our calculations.
in this tutorial we are going to challenge ourselves by calculating the view space position directly from the depth buffer.
the advantage of this approach is that much less memory is required because we will only need one floating point value per pixel instead of three.
this tutorial relies heavily on the previous tutorial so make sure you fully understand it before going on.
the code here will be presented only as required changes over the original algorithm.

in the SSAO algorithm we scan the entire window pixel by pixel, generate random points around each pixel in view space, project them on the near clipping plane and compare their Z value with the actual pixel at that location.
the view space position is generated in a geometry pass at the start of the render loop.
in order to correctly populate the geometry buffer with the view space position we also need a depth buffer (otherwise pixels would be updated based on draw order rather than depth).
we can use that depth buffer alone to reconstruct the entire view space position vector, thus reducing the space required for it (though some more per-pixel math will be required).

let us do a short recap on the stages required to populate the depth buffer (if you need a more in-depth review please see tutorial 12).
we begin with the object space position of a vertex and multiply it by the WVP matrix, which is a combined transformation of local-to-world, world-to-view, and projection from view space onto the near clipping plane. the result is a 4D vector with the view space Z value in its fourth component.
we say that this vector is in clip space at this point.
the clip space vector goes into the gl_Position output vector from the vertex shader and the GPU clips its first three components between -W and W (W is the fourth component with the view space Z value).
next the GPU performs perspective divide which means that the vector is divided by W.
now the first three components are between -1 and 1 and the last component is simply 1.
we say that at this point the vector is in NDC space (normalized device coordinates).

usually the vertex is just one out of three vertices comprising a triangle, so the GPU interpolates between the three NDC vectors across the triangle face and executes the fragment shader on each pixel.

on the way out of the fragment shader the GPU updates the depth buffer with the Z component of the NDC vector (based on several state knobs that must be configured correctly, such as depth testing, depth write, etc).

an important point to remember is that before writing the Z value to the depth buffer the GPU transforms it from (-1,1) to (0,1).
we must handle this correctly or else we will get visual anomalies.

so this is basically all the math relevant to the z buffer handling.
now let us say that we have a Z value that we sampled for the pixel and we want to reconstruct the entire view space vector from it.
everything we need in order to retrace our steps is in the above description but before we dive any further let us see that math again only this time with numbers and matrices rather than words.

since we are only interested in the view space position we can look at the projection matrix rather than the combined WVP (because projection works on the view space position):
$$
\begin{pmatrix}
\frac{1}{ar\,\tan(\frac{FOV}{2})} & 0 & 0 & 0 \\
0 & \frac{1}{\tan(\frac{FOV}{2})} & 0 & 0 \\
0 & 0 & \frac{-n-f}{n-f} & \frac{2fn}{n-f} \\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix}
\frac{x}{ar\,\tan(\frac{FOV}{2})} \\
\frac{y}{\tan(\frac{FOV}{2})} \\
\frac{-n-f}{n-f}\,z + \frac{2fn}{n-f} \\
z
\end{pmatrix}
$$

what we see above is the projection of the view space vector to clip space (the result on the right). a few notations:

1/ ar = Aspect Ratio (width/height)
2/ FOV = Field of View
3/ n = near clipping plane
4/ f = far clipping plane

in order to simplify the next steps let us call the value in location (3,3) of the projection matrix ‘S’ and the value in location (3,4) ‘T’.
this means that the value of the Z in NDC is (remember perspective divide):
$$ Z_{ndc} = \frac{S\,z + T}{z} = S + \frac{T}{z} $$
and since we need to transform the NDC value from (-1,1) to (0,1), the actual value written to the depth buffer is:
$$ Z_{depth} = \frac{Z_{ndc} + 1}{2} = \frac{S + \frac{T}{z} + 1}{2} $$
it is now easy to see that we can extract the view space Z from the above formula.
i have not specified all the intermediate steps because you should be able to do them yourself. the final result is:

$$ z = \frac{T}{2\,Z_{depth} - 1 - S} $$
