OpenGL: Mesh shaders in the current year
supergoodcode.com | 169 points by pjmlp 3 days ago
AMD, do support for NV_shader_buffer_load next! Shader Buffer Load brought "Buffer Device Address" / pointers to OpenGL/GLSL long before Vulkan was even a thing. It's the best thing since sliced bread: it easily lets you access all your vertex data with pointers, i.e., you don't need to bind any vertex buffers anymore. It also easily lets you draw the entire scene in a single draw call, since vertex shaders can just load data from wherever the pointers lead them. It makes GLSL vertex shaders look like this:
#version 460  // gl_DrawID is core in GLSL 4.60
#extension GL_NV_shader_buffer_load : require
// The Node layout is an assumption; the original comment elides it.
struct Node {
    vec3* position;
    vec2* uv;
};
uniform Node* u_nodes;
void main() {
    Node node = u_nodes[gl_DrawID];
    vec3 pos = node.position[gl_VertexID];
    vec2 uv = node.uv[gl_VertexID];
    // ...
}
This is the real killer feature of Vulkan/DX12; it makes writing a generalized renderer so much easier because you don't need to batch draw calls by the vertex layout of individual meshes. Personally, I also use Buffer Device Address for connecting Multidraw Indirect calls to mesh definitions to materials.
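A minimal sketch of what that MDI-to-mesh-to-material indirection can look like, assuming hypothetical DrawData/Mesh/Material layouts (none of these names come from the comment above):

#version 460
#extension GL_NV_shader_buffer_load : require
// All struct names and layouts below are illustrative assumptions.
struct Material { vec4 baseColor; };
struct Mesh     { vec3* positions; vec2* uvs; };
struct DrawData { Mesh* mesh; Material* material; };
// One DrawData per record in the MultiDrawElementsIndirect command buffer.
uniform DrawData* u_draws;
void main() {
    DrawData d = u_draws[gl_DrawID];
    Mesh mesh = d.mesh[0]; // pointer indexing, as in the snippet above
    vec3 pos = mesh.positions[gl_VertexID];
    gl_Position = vec4(pos, 1.0); // transforms omitted for brevity
}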
I just wish there were more literature about this, especially about the perf implications. Also, synchronization is very painful, which may be why this is hard to do at the driver level inside OpenGL.
Maybe I’m missing something, but isn’t this the norm in Metal as well? You can bind buffers individually, or use a single uber-buffer that all vertex shaders can access.
But I haven’t written OpenGL since Metal debuted over a decade ago.
VK_EXT_descriptor_buffer?
If you are using Slang, then you just access everything as standard pointers to chunks of GPU memory.
And it's mostly Intel and mobile dragging their feet on VK_EXT_descriptor_buffer ...
I'm talking about OpenGL. Vulkan is too hard for my small mind to understand, so I'm still using OpenGL. And the extension that allows this in OpenGL came out in 2010, so long before Vulkan.
No one at the big companies is developing OpenGL anymore and their support for the GLSL compiler has dwindled to nothing.
If you want that extension you're going to have better luck convincing the Zink folks:
https://docs.mesa3d.org/drivers/zink.html "The Zink driver is a Gallium driver that emits Vulkan API calls instead of targeting a specific GPU architecture. This can be used to get full desktop OpenGL support on devices that only support Vulkan."
However, you're still probably going to have to come off of GLSL and use Slang or HLSL. The features you want are simply not going to get developed in the GLSL compiler at this point.
> The features you want are simply not going to get developed in the GLSL compiler at this point.
They exist in GLSL on Nvidia devices. If other vendors refuse to implement them, then I will be an Nvidia-only developer. Fine by me. I no longer care about other vendors if they completely ignore massive quality of life features.
You could do a similar thing with SSBOs, I think?
That is for SSBOs. u_nodes is a pointer to an SSBO in this case. That SSBO then has lots more pointers to various different SSBOs that contain the vertex data.
I'm thinking of declaring an array of SSBOs that contain arrays of data structs. An address would be represented by the index of the SSBO binding and an offset within that buffer. Though that limits the maximum number of SSBOs usable within a draw call to GL_MAX_VERTEX_SHADER_STORAGE_BLOCKS.
To my knowledge you can't have an array of SSBOs in OpenGL. You could have one SSBO for everything, but that makes other things very difficult, like how to deal with dynamically growing scenes, loading and unloading models, etc.
From https://registry.khronos.org/OpenGL/extensions/ARB/ARB_shade...:
(3) "Do we allow arrays of shader storage blocks?
RESOLVED: Yes; we already allow arrays of uniform blocks, where each
block instance has an identical layout but is backed by a separate
buffer object. It seems like we should do this here for consistency.
PS: You could also access data through bindless textures, though you would need to deal with ugly wrappers to unpack structs from image formats.

Do you have an example for that? I can't find any.
Regarding bindless textures, they're really ugly to use. Shader buffer load is so much better, being able to access everything with simple pointers.
Here's some code: https://github.com/KhronosGroup/OpenGL-API/issues/46 But yeah, GL_MAX_VERTEX_SHADER_STORAGE_BLOCKS limits the usefulness of that.
I wanted to say that with some compiler hacking it should be possible to lower SPIR-V using GL_EXT_buffer_reference into bindless image loads, but SPIR-V doesn't have standardized bindless texture, duh!
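For what it's worth, here is a minimal GLSL sketch of that block-array addressing idea; the names, the block count, and the std430 layout are assumptions, and the block-array index has to be dynamically uniform:

#version 430
struct NodeData {
    vec4 position;
    vec2 uv;
    vec2 _pad;
};
// Four separate buffer objects bound to consecutive binding points 0..3;
// the array size is capped by GL_MAX_VERTEX_SHADER_STORAGE_BLOCKS.
layout(std430, binding = 0) buffer NodeBlock {
    NodeData items[];
} u_blocks[4];
// A "pointer" becomes (block index, element offset); a uniform is
// dynamically uniform, so indexing the block array with it is legal.
uniform ivec2 u_address;
void main() {
    NodeData node = u_blocks[u_address.x].items[u_address.y];
    gl_Position = node.position;
}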
A little bit off topic but: GL_LINES doesn't have a performant analog on lots of other platforms, even Unity. Drawing a line properly requires turning the two endpoint vertices into a quad and optionally adding endcaps which are at least triangular but can be polygons. From my understanding, that requires a geometry shader since we're adding virtual/implicit vertices. Does anyone know if mesh shaders could accomplish the same thing?
Also I wish that GL_LINES was open-sourced for other platforms. Maybe it is in the OpenGL spec and I just haven't looked. I've attempted some other techniques like having the fragment shader draw a border around each triangle, but they all have their drawbacks.
To draw lines instead of a geometry shader you can use instancing, since you know how many vertices you need to represent a line segment's bounding box. Have one vertex buffer that just contains N vertices (the actual attribute data doesn't matter, but you can shove UVs or index values in there) and bind it alongside a buffer containing your actual line information (start, end, color, etc). The driver+GPU will replicate the 'line vertex buffer' vertices for every instance in the 'line instance buffer' that you bound.
This works for most other regular shapes too, like a relatively tight bounding box for circles if you're drawing a bunch of them.
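A minimal vertex shader sketch of that instancing trick; the attribute names, locations, and the screen-space-pixels convention are my assumptions:

#version 330 core
layout(location = 0) in vec2 a_corner;    // per-vertex: (0,0),(1,0),(0,1),(1,1)
layout(location = 1) in vec4 a_endpoints; // per-instance: xy = start, zw = end (pixels)
layout(location = 2) in vec4 a_color;     // per-instance
uniform vec2 u_viewportSize; // viewport size in pixels
uniform float u_thickness;   // line width in pixels
out vec4 v_color;
void main() {
    vec2 dir = normalize(a_endpoints.zw - a_endpoints.xy);
    vec2 normal = vec2(-dir.y, dir.x);
    // Pick start or end from a_corner.x, then offset sideways for thickness.
    vec2 pos = mix(a_endpoints.xy, a_endpoints.zw, a_corner.x)
             + normal * (a_corner.y - 0.5) * u_thickness;
    gl_Position = vec4(pos / u_viewportSize * 2.0 - 1.0, 0.0, 1.0);
    v_color = a_color;
}

Set glVertexAttribDivisor(1, 1) and glVertexAttribDivisor(2, 1) on the per-instance attributes, then issue a single glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, lineCount).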
In my experience, drawing quads with GL_POINTS in OpenGL was way faster than drawing quads with instancing in DirectX. That was noticeable with the DirectX vs. OpenGL backends for WebGL, where switching between the two resulted in widely different performance.
drawing using GL_LINES is old-school fixed-function pipeline, and it's not how modern graphics hardware works. If you want a single line, draw a small rectangle between V1 and V2 using geometry. The thickness is the distance between P1 and P2 / P3 and P4 of the rectangle. A line has no thickness, as it's one-dimensional.
Draw in screen space based on projected points in world space.
set gl_Color to your desired color vec and bam, line.
I'm not sure exactly what you mean, but you can either output line primitives directly from the mesh shader or output mitered/capped extruded lines via triangles.
As far as other platforms go, there's VK_EXT_line_rasterization, which is a port of OpenGL line drawing functionality to Vulkan.
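As a rough illustration, a mesh shader that emits raw line primitives might look like this, using the GL_EXT_mesh_shader shading-language extension; the buffer layout and names are made up, and one workgroup per line is deliberately simplistic:

#version 460
#extension GL_EXT_mesh_shader : require
layout(local_size_x = 1) in; // one (wasteful) workgroup per line, for clarity
layout(lines, max_vertices = 2, max_primitives = 1) out;
struct Line { vec4 start; vec4 end; }; // clip-space endpoints, assumed layout
layout(std430, binding = 0) readonly buffer Lines { Line u_lines[]; };
void main() {
    Line l = u_lines[gl_WorkGroupID.x];
    SetMeshOutputsEXT(2, 1); // 2 vertices, 1 line primitive
    gl_MeshVerticesEXT[0].gl_Position = l.start;
    gl_MeshVerticesEXT[1].gl_Position = l.end;
    gl_PrimitiveLineIndicesEXT[0] = uvec2(0, 1);
}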
hundredrabbits' game Verreciel uses a reimplementation of webgl-lines, to pretty good effect, if I may say so:
https://github.com/mattdesl/webgl-lines
https://hundredrabbits.itch.io/verreciel
PS— I still play Retro, and dream of resuscitating it :)
Why is Minecraft mentioned several times in the post?
The post links to this: https://github.com/MCRcortex/nvidium
nvidium uses GL_NV_mesh_shader, which is only available on NVIDIA cards. This mod is the only game/mod I know of that uses mesh shaders and is OpenGL, so the new GL extension will let users of other vendors run the mod if it gets updated to use the new extension.
Presumably because Minecraft is the only application which still uses OpenGL but would use the extension
Pretty sure the base Minecraft rendering engine is still using OpenGL, and most of the improvement mods also just use OpenGL, so exposing this extension to them is probably important for a game where it's 50 billion simple cubes being rendered.
Is Minecraft the only thing using OpenGL anymore?
What is the current state of OpenGL, I thought it had faded away?
It's officially deprecated in favor of Vulkan, but it will likely live on for decades to come due to legacy CAD software and a bunch of older games still using it. I don't share the distaste many have for it, it's good to have a cross-platform medium-complexity graphics API for doing the 90% of rendering that isn't cutting-edge AAA gaming.
> It's officially deprecated in favor of Vulkan
Can you provide a reference for this? I work in the GPU driver space (not on either of these APIs), but from my understanding Vulkan wasn't meant to replace OpenGL; it was only introduced to give developers the chance to get lower-level in the hardware (still agnostic of the hardware, at least compared to compiling PTX/CUDA or against AMD's PAL directly; many still think they failed at that). I would still highly advocate for developers using OpenGL or DX11 if their game/software doesn't need the capabilities of Vulkan or DX12. And even if you did, you might be able to get away with interop: do the small parts that need it with the lower-level API and leave everything else in the higher-level one.
I will admit I don't like the trend of all the fancy new features only getting introduced into Vulkan and dx12, but I'm not sure how to change that trend.
I think Vulkan was originally called OpenGL Next. Furthermore, Vulkan's verbosity allows for a level of control over the graphics pipeline you simply can't have with OpenGL, on top of having built-in support for things like dynamic rendering, bindless descriptors, push constants, etc.
Those are the main reasons IMO why most people say it's deprecated.
I only play with this stuff as a hobbyist, but OpenGL is way simpler than Vulkan, I think. Vulkan is really, really complicated to get some basic stuff going.
Which is as-designed. Vulkan (and DX12, and Metal) is a much more low-level API, precisely because that's what professional 3D engine developers asked for.
Closer to the hardware, more control, fewer workarounds because the driver is doing something "clever" hidden behind the scenes. The tradeoff is greater complexity.
Mere mortals are supposed to use a game engine, or a scene graph library (e.g. VulkanSceneGraph), or stick with OpenGL for now.
The long-term future for OpenGL is to be implemented on top of Vulkan (specifically the Mesa Zink driver that the blog post author is the main developer of).
> Closer to the hardware
To what hardware? Ancient desktop GPUs vs modern desktop GPUs? Ancient smartphones? Modern smartphones? Consoles? Vulkan is an abstraction of a huge set of diverging hardware architectures.
And a pretty bad one, in my opinion. If you need to make an abstraction due to fundamentally different hardware, then at least make an abstraction that isn't terribly overengineered for little to no gain.
Closer to AMD and mobile hardware. We got abominations like monolithic pipelines and layout transitions thanks to the former, and render passes thanks to the latter. Luckily all of these are out or on their way out.
Not really, other than on desktops, because as we all know mobile hardware gets the drivers it gets on release date, and that's it.
Hence why on Android, even with Google nowadays enforcing Vulkan, if you want a less painful experience with driver bugs, you're better off sticking with OpenGL ES, outside of Pixel and Samsung phones.