In the very early days of this crate, we had a trait and a heterogeneous list of shaders. The idea was to allow shaders to be chained together to produce a complete set of render passes. See: https://docs.rs/pixels/0.0.4/pixels/trait.RenderPass.html
The `RenderPass` trait was replaced in v0.1 with the `Pixels::render_with()` method in #95 and #96. This API simplified the design substantially, at the cost of making certain things more difficult and some things impossible.
This issue tracks the reintroduction of this feature, allowing shaders to be chained programmatically.
Rationale
Shaders that need to operate in pixel buffer space (as opposed to screen space) cannot be implemented easily today because the default renderer is hardcoded to accept the pixel buffer texture view as its only input. (See #285.) To build these kinds of shaders, one must reimplement the scaling renderer on the user side and ignore the renderer passed to `render_with()` via `PixelsContext`.
Chaining shaders should be a first-class experience. The examples in `pixels/examples/custom-shader/src/main.rs` (lines 55 to 65 in 0a85025)

```rust
let render_result = pixels.render_with(|encoder, render_target, context| {
    let noise_texture = noise_renderer.get_texture_view();
    context.scaling_renderer.render(encoder, noise_texture);

    noise_renderer.update(&context.queue, time);
    time += 0.01;

    noise_renderer.render(encoder, render_target, context.scaling_renderer.clip_rect());

    Ok(())
});
```
and `pixels/examples/fill-window/src/main.rs` (lines 56 to 63 in 00f774a)

```rust
let render_result = pixels.render_with(|encoder, render_target, context| {
    let fill_texture = fill_renderer.get_texture_view();
    context.scaling_renderer.render(encoder, fill_texture);

    fill_renderer.render(encoder, render_target);

    Ok(())
});
```
are not ideal; chaining is entirely ad hoc and by convention. Each custom renderer has a bespoke method to create a texture view, which is used to pass the result from the scaling renderer to the custom renderer.
A more unified API would treat these renderers as being "of the same class," where the interface itself offers a seamless way to chain renderers together.
The existing API also forces users to synchronize the inverse scaling matrix to handle mouse coordinates across multiple passes, to set the scissor rect correctly, and so on. See #262.
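To make the coordinate bookkeeping concrete, here is a minimal sketch of the inverse mapping users currently have to maintain themselves: taking a window-space mouse position back into pixel buffer space, given the letterbox clip rect. The function name, the `(x, y, w, h)` clip-rect shape, and the tuple types are illustrative assumptions, not the API pixels exposes.

```rust
/// Map window coordinates to pixel-buffer coordinates, given the clip
/// rect (x, y, width, height) that the scaling renderer letterboxes
/// into, and the pixel buffer dimensions.
///
/// This is the "inverse scaling matrix" bookkeeping, written out as
/// plain arithmetic. Hypothetical helper; not part of the pixels API.
fn window_to_buffer(
    mouse: (f32, f32),
    clip: (f32, f32, f32, f32),
    buffer: (u32, u32),
) -> Option<(u32, u32)> {
    let (mx, my) = mouse;
    let (cx, cy, cw, ch) = clip;

    // Outside the letterboxed area there is no buffer pixel under the cursor.
    if mx < cx || my < cy || mx >= cx + cw || my >= cy + ch {
        return None;
    }

    // Undo the scaling transform: normalize within the clip rect, then
    // rescale to buffer dimensions.
    let bx = (mx - cx) / cw * buffer.0 as f32;
    let by = (my - cy) / ch * buffer.1 as f32;
    Some((bx as u32, by as u32))
}
```

Every extra pass that changes the clip rect or scale forces this computation to be kept in sync by hand, which is the pain point #262 describes.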
API Sketch
This is very incomplete pseudocode, but I want to put my thoughts down now. (And maybe it will help someone make progress here if they feel inclined.)
```rust
pub trait Renderer {
    fn chain(&mut self, next: Box<dyn Renderer>) -> Option<Box<dyn Renderer>>;
    fn resize(&mut self, width: u32, height: u32);
    fn render(&self, render_pass: &mut wgpu::RenderPass);
}
```
This API will allow a `Renderer` to consume another `Renderer` for chaining purposes. In other words, the `next` arg becomes a child of `self` after chaining. The method returns the existing child if it needs to be replaced. The `render` method takes a mutable `RenderPass`, which each renderer in the chain can recursively render to.
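A minimal sketch of those ownership and recursion semantics, using a `Vec<String>` as a stand-in for `wgpu::RenderPass` so the example runs without a GPU. `NamedRenderer` and everything else here is illustrative, not a proposed pixels type.

```rust
// A Vec<String> stands in for &mut wgpu::RenderPass: each renderer
// "draws" by pushing its name, so the chain order is observable.
trait Renderer {
    /// Consume `next` as this renderer's child; return the previous
    /// child, if any, so the caller can re-chain or drop it.
    fn chain(&mut self, next: Box<dyn Renderer>) -> Option<Box<dyn Renderer>>;
    fn resize(&mut self, width: u32, height: u32);
    fn render(&self, pass: &mut Vec<String>);
}

struct NamedRenderer {
    name: String,
    child: Option<Box<dyn Renderer>>,
}

impl NamedRenderer {
    fn new(name: &str) -> Self {
        Self { name: name.to_string(), child: None }
    }
}

impl Renderer for NamedRenderer {
    fn chain(&mut self, next: Box<dyn Renderer>) -> Option<Box<dyn Renderer>> {
        // `next` becomes a child of `self`; the old child is handed back.
        self.child.replace(next)
    }

    fn resize(&mut self, width: u32, height: u32) {
        // Resizing propagates down the chain.
        if let Some(child) = &mut self.child {
            child.resize(width, height);
        }
    }

    fn render(&self, pass: &mut Vec<String>) {
        // Render self, then recursively render the rest of the chain
        // into the same pass.
        pass.push(self.name.clone());
        if let Some(child) = &self.child {
            child.render(pass);
        }
    }
}

fn render_chain() -> Vec<String> {
    let mut scaling = NamedRenderer::new("scaling");
    let noise = NamedRenderer::new("noise");
    // First chain: no previous child to replace.
    assert!(scaling.chain(Box::new(noise)).is_none());

    let mut pass = Vec::new();
    scaling.render(&mut pass);
    pass
}
```

The key design choice is that `chain()` transfers ownership: the parent drives its child's lifecycle (`resize`, `render`), which is exactly what the ad hoc examples above do by hand.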
Unresolved Questions
- How does chaining "connect" the render target of one renderer to the input texture view of the next? The `RenderPass` only has one color attachment, decided by the caller (which is pixels itself, and it only uses the `Surface` texture view as the color attachment).
  - Alternatively, we can continue using the "pass `&mut wgpu::CommandEncoder` and `&wgpu::TextureView` as the render target" pattern that we have today. This requires renderers to create a new `RenderPass` for themselves, and chaining would be performed by each `Renderer` with a method like `fn update_input(&mut self, texture_view: wgpu::TextureView)`, e.g. called by `resize()`.
- Is there anything else we can learn from other users of `wgpu`?
  - This wiki page has some ideas for how to implement `wgpu` middleware: https://github.com/gfx-rs/wgpu/wiki/Encapsulating-Graphics-Work
  - `glyph_brush` doesn't use the middleware pattern as defined in the link above. Instead, its `draw_queued()` method resembles our `ScalingRenderer::render()` method as it is today.
  - It looks pretty much the same with `iced`.