I just finished changing the per-pixel bump mapping in my program yet again. This time I'm using a cube-map normalizer and attenuation, like in ATI's PointLight Shader demo and NitroGL's DOT3 Falloff 2 Light demo. I've noticed it isn't a very accurate method on my 2-triangle quad compared to my 200-triangle one. When I take out attenuation, the effect of the light on the surface moves up and down in a sine wave as I move the light from left to right. This isn't quite as noticeable when attenuation is on, but it got me wondering again how to do true per-pixel bump mapping.
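For reference, here's a minimal sketch of the setup those demos use, written in modern GLSL for readability (the demos themselves use ATI's fragment shader extension, and a cube-map lookup where I've written normalize()). All the names here are my own placeholders, not from any demo.

// Fragment shader: per-vertex tangent-space light vector,
// renormalized per fragment (the demos do this with a cube-map
// lookup instead of normalize()).
#version 330 core

in vec3 lightVecTS;   // tangent-space light vector, computed per vertex
in vec2 uv;

uniform sampler2D normalMap;

out vec4 fragColor;

void main()
{
    // Renormalizing only fixes the *length* of the interpolated
    // vector; its *direction* is still linearly interpolated between
    // vertices, which is why a 2-triangle quad shows artifacts that
    // a 200-triangle one hides.
    vec3 L = normalize(lightVecTS);
    vec3 N = texture(normalMap, uv).rgb * 2.0 - 1.0;  // unpack [0,1] -> [-1,1]
    float diffuse = max(dot(N, L), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}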
I had a method where I could calculate the light vector per pixel in the fragment shader rather than per vertex in the vertex shader. I calculated the position of the light relative to the polygon surface and passed it in as a 3D texture coordinate. This coordinate was scaled exactly to the texture coordinates on the polygon and gave the same result for each vertex (it had to, in order to work). The result was very accurate, but somehow it didn't look quite right.
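As far as I can reconstruct it, that approach boils down to something like the sketch below: interpolate the fragment's position rather than a light vector (a position interpolates exactly across a planar polygon), then subtract per fragment. Again in modern GLSL, with hypothetical names.

// Fragment shader: reconstruct the light vector per fragment.
// fragPosTS and lightPosTS are expressed in the same tangent-space
// frame; because position interpolation is exact across a planar
// triangle, L is exact at every pixel.
#version 330 core

in vec3 fragPosTS;    // surface position, interpolated per fragment
in vec2 uv;

uniform vec3 lightPosTS;   // light position in the same space
uniform sampler2D normalMap;

out vec4 fragColor;

void main()
{
    vec3 L = normalize(lightPosTS - fragPosTS);  // exact per-pixel light dir
    vec3 N = texture(normalMap, uv).rgb * 2.0 - 1.0;
    float diffuse = max(dot(N, L), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}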
My question is: is there a true per-pixel bump-mapping technique at all? I can't consider the cube-map normalizer to give true per-pixel results, though it is certainly a lot better than per-vertex calculations.
Another question concerns multiple lights shining on one surface. Using a cube-map normalizer and attenuation, a single light takes four operations in the fragment shader, and NitroGL's DOT3 Falloff 2 Light demo takes 10 operations for 2 lights. If the cost keeps growing anything like that, there's no way 8 lights could be calculated in a single pass. How would that be done, then? Would it require several fragment shader programs to be written and run sequentially? If so, how?
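To make the "sequential" idea concrete, here's what the per-light sum looks like as a single-pass loop in later GLSL. My understanding is that on hardware where the lights don't fit in one program, the same sum is instead accumulated one pass per light (or per pair of lights) with additive blending, i.e. glBlendFunc(GL_ONE, GL_ONE). The attenuation model and all names below are my assumptions, not taken from the demos.

// Fragment shader: several lights in one pass by looping.
// On hardware that can't fit all lights in one program, the same
// sum would be built up over multiple passes, one light (or two)
// per pass, accumulated with additive blending.
#version 330 core

#define NUM_LIGHTS 8

in vec3 fragPosTS;
in vec2 uv;

uniform vec3 lightPosTS[NUM_LIGHTS];
uniform vec3 lightColor[NUM_LIGHTS];
uniform float lightRadius[NUM_LIGHTS];
uniform sampler2D normalMap;

out vec4 fragColor;

void main()
{
    vec3 N = texture(normalMap, uv).rgb * 2.0 - 1.0;
    vec3 total = vec3(0.0);
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        vec3 toLight = lightPosTS[i] - fragPosTS;
        vec3 L = normalize(toLight);
        // Simple quadratic falloff clamped at the light radius,
        // roughly what an attenuation texture encodes.
        float att = max(1.0 - dot(toLight, toLight)
                              / (lightRadius[i] * lightRadius[i]), 0.0);
        total += lightColor[i] * att * max(dot(N, L), 0.0);
    }
    fragColor = vec4(total, 1.0);
}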