BZ-Next: A modern OpenGL BZFlag client
Re: BZ-Next: A modern OpenGL BZFlag client
Some progress towards getting shadow mapping and a custom shader pipeline working: here is a depth map rendered from the light source's point of view, ready to be used for shadow mapping.
Re: BZ-Next: A modern OpenGL BZFlag client
Even basic shadow mapping (beyond just drawing them on the ground like we do now) would be a huge visual improvement, I imagine… can’t wait to see it.
Re: BZ-Next: A modern OpenGL BZFlag client
I am also looking forward to seeing it working. It is quite a challenge, but I think once shadow mapping is implemented, it will prove that the infrastructure is in place to do basically anything, graphics-wise.
The challenge with shadow mapping, and what it implies for the engine, is basically:
- Supporting multiple cameras (render a depth map from the POV of the light source in orthographic projection)
- Supporting lights as scene objects (need to be able to feed light position and transformation into shader)
- Supporting multi-stage rendering with arbitrary shaders
Point 3 is done, and point 1 is semi-done. Point 2 is a bit of a headache (getting all the various transformations straight can be confusing: am I in the world's coordinate system, or the camera's? What data does the shader need? How does it need to be transformed? How do you get the transformation out of the scene graph and into the shader without making the code a mess?)
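To make point 2 a bit more concrete, here is a hypothetical sketch of the plumbing involved (using GLM and GLEW purely for illustration; the engine actually goes through Magnum's math and GL wrappers, and the uniform name here is made up): build the light's view/projection matrix and hand it to the material shader.
[code]
// Hypothetical sketch: compute a directional light's view/projection matrix and
// upload it as a uniform. GLM/GLEW stand in for the engine's actual wrappers.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// sunDir: unit vector pointing toward the sun. worldExtent: half the world size,
// so the orthographic volume covers the whole map.
glm::mat4 lightSpaceMatrix(const glm::vec3& sunDir, float worldExtent)
{
    glm::mat4 proj = glm::ortho(-worldExtent, worldExtent,
                                -worldExtent, worldExtent,
                                1.0f, 4.0f * worldExtent);
    // Camera placed out along the sun direction, looking at the map center.
    // (Pick a different up vector if the sun is directly overhead.)
    glm::mat4 view = glm::lookAt(sunDir * 2.0f * worldExtent,
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 0.0f, 1.0f)); // BZFlag worlds are z-up
    return proj * view;
}

void setLightUniform(GLuint program, const glm::mat4& lightMat)
{
    // Assumes `program` is currently bound with glUseProgram().
    GLint loc = glGetUniformLocation(program, "lightSpaceMatrix");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(lightMat));
}
[/code]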
Really, this work is forcing a general cleanup of the way the scene graph is handled, which is for the best.
I am thinking, the scene graph should have two types of objects, generally:
- Named objects, that can be looked up in a global map using a string. They should delete themselves from the map on destruction (or when their parent is destroyed).
- Unnamed objects, that just exist in the scene and may be manipulated by other code that keeps a reference to them. This is the default type of scene object in Magnum, and is probably good enough for most things.
Named objects would be things like "SceneRoot", "WorldManipulator", "Tanks", "Sun", "Moon", anything that groups objects on a large scale or represents an entity that a lot of code might want to mess with.
Unnamed objects would be things like the individual wheels on Tank#0.
All objects are part of the same tree. If the scene root is deleted, so are all the contained objects, and render lists etc are automatically cleaned up.
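As a rough illustration of the named-object idea (class and member names are invented here, and the real thing would hang off Magnum's scene graph object type so that destroying a parent also destroys and unregisters its children):
[code]
// Hypothetical sketch of a global string -> object map whose entries remove
// themselves on destruction. Names are illustrative only.
#include <string>
#include <unordered_map>

class NamedSceneObject;

class SceneObjectRegistry {
public:
    static std::unordered_map<std::string, NamedSceneObject*>& map() {
        static std::unordered_map<std::string, NamedSceneObject*> m;
        return m;
    }
    static NamedSceneObject* find(const std::string& name) {
        auto it = map().find(name);
        return it == map().end() ? nullptr : it->second;
    }
};

class NamedSceneObject {
public:
    explicit NamedSceneObject(std::string name) : _name(std::move(name)) {
        SceneObjectRegistry::map()[_name] = this;   // register on construction
    }
    ~NamedSceneObject() {
        SceneObjectRegistry::map().erase(_name);    // unregister on destruction
    }
    const std::string& name() const { return _name; }
private:
    std::string _name;
};

// Usage: any code can then look up large-scale entities by name, e.g.
// NamedSceneObject* tanks = SceneObjectRegistry::find("Tanks");
[/code]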
Probably, the SceneRenderer will just handle lists of cameras and lights and stuff, since those objects will have lots of scene-specific properties anyway. (For example, the light camera's ortho projection matrix probably needs to be tailored to the world size using some geometry math, so that shadow mapping reaches all corners of the map.)
I would like to have it so that all the rendering functions take a camera to operate from, so that you can render the scene from multiple perspectives at once. It'll keep things flexible for other rendering techniques down the line, too. But I'd like to, for instance, be able to follow two separate tanks around at once, in two ImGUI windows. That'd be neat.
Re: BZ-Next: A modern OpenGL BZFlag client
Basic shadow mapping now works. The code needs to be cleaned up a bit, but all the moving pieces are there now to dynamically cast shadows on world geometry based on the position of the sun.
With shadows, you can keep investing time and code into making them better, so the question really is: how good do they have to be?
Currently, my strategy is to just render one big depth map when the sun changes position, and then that can be re-used to render the map until the sun changes position again.
Fancier strategies might only render a depth map for the area that the player can currently see, render different LODs of depth map based on how far away things are, etc. But that implies a bunch of infrastructure would need to be written, so I'd rather keep it simpler...
Re: BZ-Next: A modern OpenGL BZFlag client
Another few neat screenshots:
There's still a lot to optimize, but it's promising. It's neat that tanks can cast shadows on themselves.
Re: BZ-Next: A modern OpenGL BZFlag client
The sun really shouldn't move, it makes map visibility worse unless you design for all light levels and means that you have to recalculate building shadows mid-game...
Re: BZ-Next: A modern OpenGL BZFlag client
Performance-wise, in order to have tanks and other dynamic things cast shadows, we need to update shadows every frame anyway.
The advantage here, though, is that it is all done on the GPU in a really simple rendering pass.
For playability of older maps, it might be good to have an option where things only cast shadows on the ground, and not on each other. That should be pretty similar to the classic shadow result.
The real trick here is figuring out how to use the depth map texture for best results. I'm thinking I could compute the bounding box of the world and the current camera view frustum. Then, using those as input, compute an orthographic projection matrix for the depth map render that only encapsulates what the player camera might see. That way, more of the valuable depth map texture space will be used for things that might actually affect the render. That'd be simple enough that I could actually do it, and I think it'd be an improvement.
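A rough sketch of that idea, again with GLM standing in for the engine's actual math types: transform the player frustum corners into light space and size the ortho volume around them (in practice you'd also intersect with the world bounding box so off-map geometry doesn't waste depth map resolution).
[code]
// Sketch only: fit the shadow ortho projection to the player camera's frustum.
#include <limits>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 fitShadowProjection(const glm::mat4& cameraViewProj, const glm::mat4& lightView)
{
    // NDC cube corners -> world space via the inverse of the camera view-projection.
    glm::mat4 invVP = glm::inverse(cameraViewProj);
    glm::vec3 mn(std::numeric_limits<float>::max());
    glm::vec3 mx(-std::numeric_limits<float>::max());
    for (int x = -1; x <= 1; x += 2)
      for (int y = -1; y <= 1; y += 2)
        for (int z = -1; z <= 1; z += 2) {
            glm::vec4 corner = invVP * glm::vec4(float(x), float(y), float(z), 1.0f);
            corner /= corner.w;                    // world-space frustum corner
            glm::vec4 ls = lightView * corner;     // into light view space
            mn = glm::min(mn, glm::vec3(ls));
            mx = glm::max(mx, glm::vec3(ls));
        }
    // In GL view space the scene lies along -Z, hence the sign flip for near/far.
    return glm::ortho(mn.x, mx.x, mn.y, mx.y, -mx.z, -mn.z);
}
[/code]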
Once that's working, I'd like to look into deferred rendering, HDR, and bloom, since I don't think those would be too hard to add at this point (simpler than shadow mapping, I believe). Then perhaps SSAO after that, if that goes well.
HDR and bloom would be nice for adding some zest to emissive materials that already exist in bzflag, but currently don't do anything except look bright. It'd also be cool for shots and lasers. I imagine an auto-expose feature, where the camera exposure dynamically adjusts based on the amount of light, so being blasted with a glowing bullet would cause the rest of the world to dim in comparison to the bullet. Could be cool.
Normal mapping should be really straightforward, the only barrier is that the map materials don't currently support it. Perhaps a quick fix would be to assume that texture0 on a material is always diffuse, and texture1, if it exists, is always a normal map. But since existing maps don't have that data, I am deprioritizing it for now, even though it would be simple.
Re: BZ-Next: A modern OpenGL BZFlag client
There's now a cool sun adjust widget so you can move the sun around and see how the shadows change:
It'd be cool to apply a sunset color to the sunlight at high angles for evening/morning.
Re: BZ-Next: A modern OpenGL BZFlag client
Gamma correction results in gentler-looking shadows, which is more in line with shadows in the current version of BZFlag.
With the new system, rendering to intermediate textures and setting up a shader pipeline is really easy. I currently have an HDR render target texture that is working. Applying tone mapping and gamma correction to the HDR buffer should give a slightly better result still.
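For reference, the raw-GL shape of such an HDR render target looks roughly like this (the engine actually goes through Magnum's wrappers, and GLEW here just stands in for whatever loader is in use; on WebGL2, rendering to a float color buffer additionally needs EXT_color_buffer_float):
[code]
// Minimal sketch of an RGBA16F render target: render the scene into a float
// color buffer, then tone map / gamma correct into the default framebuffer.
#include <GL/glew.h>

struct HDRTarget {
    GLuint fbo = 0, color = 0, depth = 0;
};

HDRTarget createHDRTarget(int width, int height)
{
    HDRTarget t;
    glGenTextures(1, &t.color);
    glBindTexture(GL_TEXTURE_2D, t.color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_HALF_FLOAT, nullptr);
    // Mipmapped min filter so the average-luminance trick below can work;
    // call glGenerateMipmap(GL_TEXTURE_2D) on it each frame before tone mapping.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &t.depth);
    glBindRenderbuffer(GL_RENDERBUFFER, t.depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &t.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, t.color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, t.depth);
    // Check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE before use.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return t;
}
[/code]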
The idea is to have the final HDR step set an exposure uniform that is used to set the camera exposure during the conversion from RGB16F->RGB8U. The exposure uniform will be based on the average luminance of the HDR buffer through some mechanism (moving average or similar).
Average luminance can be roughly calculated by generating mipmaps for the HDR buffer. The ideal way would be to use a compute shader and do some fancy histogram math stuff, but compute shaders are not available on webgl, so it's best to avoid them.
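A sketch of what that exposure/tone-mapping step could look like as a fragment shader, sampling the top mip for the average luminance. The uniform names are invented, the exposure formula is just one simple choice, and the temporal moving average would live outside this shader (CPU-side or via a small feedback texture) to avoid flicker.
[code]
// Tone map / gamma correct the HDR buffer, with crude auto-exposure from the
// top mip level (works on WebGL2, unlike a compute-shader histogram).
static const char* toneMapFragSrc = R"GLSL(
#version 300 es
precision highp float;
uniform sampler2D uHdr;    // HDR scene color, mipmaps generated each frame
uniform float uMaxLod;     // log2(max(width, height)) of the HDR texture
uniform float uGamma;      // typically 2.2
in vec2 vUv;
out vec4 fragColor;

void main() {
    // Average scene color ~= top mip level; convert to luminance (Rec. 709 weights).
    vec3 avg = textureLod(uHdr, vec2(0.5), uMaxLod).rgb;
    float avgLum = dot(avg, vec3(0.2126, 0.7152, 0.0722));
    // Aim for a mid-grey average; smooth this over time outside the shader.
    float exposure = 0.18 / max(avgLum, 1e-4);

    vec3 hdr = texture(uHdr, vUv).rgb * exposure;
    vec3 mapped = hdr / (hdr + vec3(1.0));          // Reinhard tone mapping
    fragColor = vec4(pow(mapped, vec3(1.0 / uGamma)), 1.0);
}
)GLSL";
[/code]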
Once that is working, adding bloom should be relatively easy (simple shader that peels off bright regions, blurs them, and then mixes the blurred data back into the image.)
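The "peels off bright regions" step is essentially a thresholding pass, something like this (uniform names and the threshold value are illustrative):
[code]
// Bloom bright pass: keep only pixels above a threshold; a later pass blurs the
// result and adds it back onto the image.
static const char* brightPassFragSrc = R"GLSL(
#version 300 es
precision highp float;
uniform sampler2D uHdr;
uniform float uThreshold;   // e.g. 1.0 in HDR units
in vec2 vUv;
out vec4 fragColor;

void main() {
    vec3 color = texture(uHdr, vUv).rgb;
    float lum = dot(color, vec3(0.2126, 0.7152, 0.0722));
    fragColor = vec4(lum > uThreshold ? color : vec3(0.0), 1.0);
}
)GLSL";
[/code]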
I'd then like to add the sun as a blindingly bright object in the sky, as a test. Adding clouds would be cool, also. Maybe there's a cool cloud shader that can be ripped from shadertoy...
Re: BZ-Next: A modern OpenGL BZFlag client
Getting there with a test of fancy clouds: a raymarching cloud shader ripped from shadertoy:
The idea is to just render this behind the world for a quick sky/cloud/sun ensemble.
Re: BZ-Next: A modern OpenGL BZFlag client
Well, it's certainly rough around the edges. The cloud shader needs to be adapted to play nice with the scene, but I was able to hack it to make it "pretty close" to correct. At least it demonstrates that you can pull in shaders from other sources and plug them into the renderer.
Here's an image of urban with these kinda rough looking raymarched clouds: The clouds move across the sky in real time.
Another shot of mw2:
Fairground:
With tanks:
With the (currently hacky) sky shader and gamma corrected shadows, I feel it's approaching a much higher degree of realism, without introducing stuff that would affect gameplay.
I feel like SSAO will really be the last piece of the puzzle in making the game look a lot better. It would slightly darken corners and creases in the map, which would make the lighting look more realistic.
Re: BZ-Next: A modern OpenGL BZFlag client
The cloud rendering is fixed now, so clouds don't look all weird and blocky.
There is still a bit of work to do to sync the sky render with the world camera position (the shader takes an eye position and a look_at direction and fov, and does ray casting. So it's important to get the shader's internal "camera" in sync with the game camera and world transformation, so that it looks like part of the scene when you move/rotate the view.)
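The sync itself boils down to deriving the eye position and look direction from the inverse view matrix and feeding them to the shader each frame; a hypothetical sketch (GLM/GLEW for illustration, uniform names made up):
[code]
// Keep the sky shader's internal "camera" in sync with the scene camera.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void syncSkyCamera(GLuint skyProgram, const glm::mat4& view, float fovYRadians)
{
    glm::mat4 camToWorld = glm::inverse(view);
    glm::vec3 eye     = glm::vec3(camToWorld[3]);   // camera position in world space
    glm::vec3 forward = -glm::vec3(camToWorld[2]);  // camera looks down -Z in view space
    glm::vec3 lookAt  = eye + forward;

    glUseProgram(skyProgram);
    glUniform3fv(glGetUniformLocation(skyProgram, "uEye"),    1, glm::value_ptr(eye));
    glUniform3fv(glGetUniformLocation(skyProgram, "uLookAt"), 1, glm::value_ptr(lookAt));
    glUniform1f(glGetUniformLocation(skyProgram, "uFov"), fovYRadians);
}
[/code]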
Re: BZ-Next: A modern OpenGL BZFlag client
The shadows implementation at first looks like a subtle change, and then on closer inspection you can see what a huge improvement that is. The gamma adjustment was another big improvement over the initial screenshots. Very nice work indeed.
Re: BZ-Next: A modern OpenGL BZFlag client
Thanks
I finally got the cloud/atmosphere shader properly synchronized with the raster pipeline, so that the projection used by both matches, and there are no weird artifacts when transforming the scene.
Just as a test I loaded up The Duck Who Knows' phantom map, which has mountains hard-baked into the map, to see what it'd look like with the default mountains in place:
I think the mountains could be improved a lot. It's good to have something or other in the scene at the horizon to improve realism, but these mountains detract from the overall visual quality, I feel.
Another shot of essentially how the map is embedded into a scene with the atmosphere renderer:
And of course UJ: In comparison, in the current release client:
I think it's a definite improvement.
The sun in the cloud/atmosphere shader is now synchronized with the world sun position, which is neat, so if you move the sun around, you can see the sun move around behind the cloud layer.
These clouds are essentially raytraced, which gives a lot of easy visual appeal, but might be slow for users with less capable machines. So it'd be good to have a basic textured cloud fallback for less capable machines. I need to make a quick and dirty render options dialog that you can use to enable/disable features and set quality settings.
For the atmosphere shader, the remaining things to do are to add sunrise/sunset effects (sky coloration, light coloration), perhaps some atmosphere effects (if they aren't too expensive / complicated), and the ability to set the sky color (used by some maps).
Aside from that, the other thing to do is to plug everything into an HDR pipeline that applies exposure settings and gamma correction at the end of rendering. (We'll see if this really adds anything, if it doesn't, I'll just leave it be.)
Once that's working, it could be nice to add bloom.
After that, generally moving everything over to a deferred rendering setup is the next step. That would allow all bullets to be true point light sources that could illuminate the scene, which would be a nice improvement over the previous implementation that only allowed bullets to illuminate certain objects.
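For illustration only (nothing like this exists in the code yet), the lighting pass of such a deferred setup could accumulate every bullet as a point light roughly like this; buffer layout, names, and the falloff are all invented:
[code]
// Hypothetical deferred lighting pass: sample the G-buffer and accumulate
// bullet point lights.
static const char* deferredPointLightFragSrc = R"GLSL(
#version 300 es
precision highp float;
#define MAX_BULLET_LIGHTS 64
uniform sampler2D uGPosition;   // world-space position G-buffer
uniform sampler2D uGNormal;     // world-space normal G-buffer
uniform sampler2D uGAlbedo;
uniform int  uNumLights;
uniform vec3 uLightPos[MAX_BULLET_LIGHTS];
uniform vec3 uLightColor[MAX_BULLET_LIGHTS];
in vec2 vUv;
out vec4 fragColor;

void main() {
    vec3 pos    = texture(uGPosition, vUv).xyz;
    vec3 normal = normalize(texture(uGNormal, vUv).xyz);
    vec3 albedo = texture(uGAlbedo, vUv).rgb;
    vec3 result = vec3(0.0);
    for (int i = 0; i < MAX_BULLET_LIGHTS; ++i) {
        if (i >= uNumLights) break;
        vec3 toLight = uLightPos[i] - pos;
        float dist = length(toLight);
        float atten = 1.0 / (1.0 + 0.1 * dist * dist);   // simple quadratic falloff
        float ndotl = max(dot(normal, toLight / dist), 0.0);
        result += albedo * uLightColor[i] * ndotl * atten;
    }
    fragColor = vec4(result, 1.0);
}
)GLSL";
[/code]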
Then, adding billboards and adding shots/lasers and stuff back into the renderer...
Lots to do, but the way forward is pretty clear.
Re: BZ-Next: A modern OpenGL BZFlag client
One thing that is generally bothering me is the idea of copyright assignment. If in the future this code gets merged as "BZFlag", the current copyright assignment scheme would make it impossible to include code or assets from other sources that are permissively licensed.
So far, I have made sure any code or assets I have used have been licensed in a way that is compatible with LGPL and MPL terms. IIRC all 3rd party code is MIT licensed. The only requirement this imposes is attribution, there is no noncommercial stipulation as there would be with certain CC licenses and so on.
It would be unreasonable to be prevented from pulling assets and code from other permissively licensed open source projects and libraries.
Even just starting a new class or a new build target based on an MIT-licensed example from a library would imply incompatibility with the current scheme...
Perhaps there is something I am not understanding about the current scheme. I understand the importance of getting this sort of thing right, which is why it bothers me, and why I would like some more clarity on this policy.
I also wish to make it clear that I have not committed to copyright assignment at this time. I am not opposed to it, just, these details would have to be worked out before committing to such a thing.
Re: BZ-Next: A modern OpenGL BZFlag client
BZFlag distributions already contain code (misc/shtool for example) that does not have copyright assigned to Tim Riker. Such code must be clearly marked with appropriate copyright statements and we need to have the compatible license well attributed. This is an extra burden on the project and we prefer not to do it, but it isn't necessarily a deal breaker. We expect you to assign copyright for your original code.
Re: BZ-Next: A modern OpenGL BZFlag client
Okay, thank you for the clarification, that makes sense to me.
There are a few things (mostly ImGUI and shader related) that are based on MIT licensed stuff. I'll do another pass over the various components and update the license info to accurately reflect the current state of things.
I have zero reservations about the process otherwise, just thinking ahead.
Re: BZ-Next: A modern OpenGL BZFlag client
If you want to exactly match the camera position/angle between both the stock and experimental clients, there is a /roampos command in BZFlag that you could replicate in your experimental client.
/roampos {reset|send|angle|x y z [theta [phi [zoom]]]}
Manipulate the observer camera (only useful in Roaming and Tracking modes). Without arguments, it shows a usage message and the current camera location. reset resets the camera's location to the center of the map and send sends information about the camera to the server. angle moves the camera outside the map at a certain angle, looking towards its center. x, y and z are used to set the camera's location, theta defines the camera's horizontal angle, phi defines its vertical angle and zoom sets the camera's zoom level. All angles are defined in degrees.
Re: BZ-Next: A modern OpenGL BZFlag client
Oh awesome! I'll definitely try that out for future comparison shots. (Also adjusting the fov to match bzflag would be a good idea too).
Re: BZ-Next: A modern OpenGL BZFlag client
You can try out the new shadow mapping and clouds in the browser here: https://bz-next.github.io/mapviewer5/mapviewer.html
There's a new menu: Scene, which allows you to enable/disable shadows, change the shadow map size, and enable/disable cloud rendering, so you can see how it performs in your browser.
You can see the shadow map for the world in the Pipeline Texture Browser under Debug.
There will be a new release of the map viewer shortly that adds better navigation (rather than the simple arcball used now). When that is ready, I'll release it under mapviewer6 and update the related thread.
This is really the advantage of the mapviewer target and the wasm build -- it's first and foremost a check that whatever is developed supports the web out of the box, to keep that compatibility alive as more dev work is done, and to catch bugs. It's already helped me catch a few...
Re: BZ-Next: A modern OpenGL BZFlag client
This has bothered me as well, at times. Specifically, when I was porting the game to OpenGL ES, I re-used some code from another one of my projects, and there was a question of “exclusive” versus “non-exclusive” copyright and the implications on my rights over my code. There was also the principle of the copyright holder being basically AFK for 15 years and unresponsive to important discussions (even those related to copyright). Ultimately, I kept that fork in a separate repository, and indicated in the documentation that I was keeping copyright of my own work.
I would not worry at this point about potentially merging this into our main repository. It seems to stand on its own, other than perhaps you are using BZFlag’s textures at the moment (and who knows where those came from). There may very well be better options when it comes time to consider that.
Re: BZ-Next: A modern OpenGL BZFlag client
New update: https://bz-next.github.io/2024-03-21-shaders-release/
Windows build here: https://github.com/bz-next/bz-next/rele ... erelease_3
Online viewer release here: https://bz-next.github.io/mapviewer6/mapviewer.html
Finally got around to finishing the port of functionality to webgl2.
Mouse navigation is enhanced. You can hold shift to pan. Press numpad 0 to reset the view to default. (This was trickier than one would expect to get working in webgl...)
Shader and shadow code has been merged into the main branch and integrated into the emscripten, windows, and linux builds.
There is a lot of optimization possible. Some branching was used in a shader to simplify some logic for debug, so eliminating that could help.
Rendering the sky / ground plane in a shader could be made disableable, and perhaps turned off by default for the online map viewer for performance.
Shadow map bias likely needs to be configurable, since on webgl shadow artifacts seem to be more likely to occur.
FOV should be fed to the sky shader as a uniform so that it can be made to match a configurable FOV for the scene, rather than hard-coded.
I've noticed some weirdness on mobile wrt the skydome shader not matching the perspective of the scene properly. Normally, the sky and the scene are synced when a viewport event is triggered, but my guess is that on mobile, no event is triggered and it silently chugs along with some weird incorrect default. Seems to work OK on desktop though... who knows.
On my phone, shadow maps work properly up to 4096x4096. If I try to set an 8192x8192 shadow map, I am assuming it exceeds the max texture size, and just renders the depth map as a black square. In the scene, it just makes everything look like it is in shadow -- not a catastrophic failure. The solution is, obviously, to query GL capabilities and set limits based on those, but that is rather uninteresting and unenlightening work, and may be in the vein of premature optimization, so I'll leave that for later. It'll be more obvious what to query, and where, with more experience and more code that actually needs limits, so it's not yet the most productive time to do that work.
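The eventual fix is basically a one-liner: query the driver's limit and clamp the requested size, along the lines of:
[code]
#include <GL/glew.h>
#include <algorithm>

// Clamp the requested shadow map resolution to what the driver supports,
// instead of letting an oversized depth texture fail silently.
int clampShadowMapSize(int requested)
{
    GLint maxTex = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex);
    return std::min(requested, static_cast<int>(maxTex));
}
[/code]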
For the webapp, I keep all published versions hosted indefinitely. It's a convenient way to check progress and check for performance regressions. Versions are published at:
https://bz-next.github.io/mapviewer<N>/mapviewer.html
where mapviewer<N> is mapviewer, mapviewer2, ... mapviewer6.
From V.4 to V.6, I think the performance regression is mostly due to the sky shader. There may be something related to changing the default framebuffer format to support 8-bit stencil too (needed for the horrifying depth sampling hack for mouse navigation in webgl...). In any case, in the next release, I will make all the eye-candy disabled by default, since it is secondary to the purpose of the tool.
The real improvement in this release is mostly under the hood. Shader infrastructure is in place and is used throughout the app. Shaders are used for the following:
- Shadow mapping: A shader renders a depth map of the scene from the perspective of the light source (with ortho projection). This is fed into the main material shader as a texture, along with a matrix describing the view/projection transformation of the scene from the light source's perspective. This data is used to draw shadows on geometry (see the sketch after this list).
- Cloud shader: Renders sky and ground plane to a quad mesh that is hard-locked to maximum depth, so it is always in the background.
- Depth map preview: Demo shader that reads in the depth map (16-bit depth buffer single-channel texture) and renders it to a black-and-white RGBA8 texture for presentation.
- Enhanced Phong: Phong shader + shadow mapping support. Supports rendering to an HDR texture buffer internally. Ultimately, everything can be rendered to an HDR buffer, and then quantized at the end of the rendering pipeline.
- Basic Textured Shader: A demo shader that just renders textured geometry in screen-space, without applied transformation.
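And here is roughly what the shadow lookup in the material shader does with that depth map and light matrix, including the bias knob mentioned earlier; names are illustrative, not the engine's actual ones:
[code]
// Sketch of the shadow test inside the material fragment shader.
static const char* shadowLookupGlsl = R"GLSL(
// vShadowCoord = lightSpaceMatrix * worldPosition, computed in the vertex shader.
float shadowFactor(sampler2D shadowMap, vec4 shadowCoord, float bias)
{
    vec3 proj = shadowCoord.xyz / shadowCoord.w;   // light-space NDC
    proj = proj * 0.5 + 0.5;                       // to [0,1] texture/depth range
    if (proj.z > 1.0) return 1.0;                  // beyond the far plane: treat as lit
    float closest = texture(shadowMap, proj.xy).r; // depth stored in the shadow map
    return proj.z - bias > closest ? 0.0 : 1.0;    // 0 = in shadow, 1 = lit
}
)GLSL";
[/code]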