Shaders : second stage

The second part in a series on understanding shaders, covering how data gets sent between shaders and your app, how shaders are created and more.


Other parts:
- here is part one
- you are viewing part two


I wrote a post about shaders recently - it was a primer, a "What the heck are shaders?" type of introduction. You should read it if you haven't, as this post is a continuation of the series. This article is a little deeper down the rabbit hole, a bit more technical but also a high level overview of how shaders are generally made, fit together, and communicated with.

As before, this post will reference WebGL/OpenGL specific shaders but this article is by no means specific to OpenGL - the concepts apply to many rendering APIs.

sidenote:
I was overwhelmed by the positive response and continued sharing of the article, and I want you to know that I appreciate it.

brief “second stage” overview

This article will cover the following topics:

  • The road from text source code to active GPU program
  • Communication between each stage, and to and from your application

How shaders are created

Most rendering APIs share a common pattern when it comes to programming the GPU. The pattern consists of the following :

  • Compile a vertex shader from source code
  • Compile a fragment shader from source code*
  • Link them together - this is your shader program
  • Use this program ID to enable the program

*Intentionally keeping it simple - there are other stages, but this series is for those who are learning, and that is ok.

For a simple example, have a look at how WebGL would do it. I am not going to get TOO specific about it, just show the process in a real world use case.

There are some implied variables here: vertex_stage_source and fragment_stage_source are assumed to contain the shader source code itself.
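For illustration only, here is a rough sketch of what those implied variables might hold - a minimal pass-through pair, not the shaders used later in this article:

   //minimal stage sources, assumed purely for illustration -
   //in a real app these usually come from script tags or loaded files

var vertex_stage_source = [
    "attribute vec3 vertexPosition;",
    "void main(void) {",
    "    gl_Position = vec4(vertexPosition, 1.0);",
    "}"
].join("\n");

var fragment_stage_source = [
    "precision mediump float;",
    "void main(void) {",
    "    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);",
    "}"
].join("\n");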

1 - Create the stages first

var vertex_stage = gl.createShader(gl.VERTEX_SHADER);  
var fragment_stage = gl.createShader(gl.FRAGMENT_SHADER);  

2 - Give the source code to each stage

gl.shaderSource(vertex_stage, vertex_stage_source);  
gl.shaderSource(fragment_stage, fragment_stage_source);  

3 - Compile the shader code, this checks for syntax errors and such.

gl.compileShader(vertex_stage);  
gl.compileShader(fragment_stage);  

Now that we have the stages compiled, we link them together to create a single program that can be used to render with.

   //this is your actual program you use to render
var the_shader_program = gl.createProgram();

   //It's empty though, so we attach the stages we just compiled
gl.attachShader(the_shader_program, vertex_stage);  
gl.attachShader(the_shader_program, fragment_stage);

   //Then, link the program. This will also check for errors!
gl.linkProgram(the_shader_program);

Finally, when you are ready to use the program you created, you simply use it :

gl.useProgram(the_shader_program);  

Simple complexity

This seems like a lot of code for something so fundamental, and it can be a lot of boilerplate - but remember that programming is built around repeating tasks. Make a function to generate your shader objects and the boilerplate goes away; you only need to write it once. As long as you understand how it fits together, you are in full control of how much boilerplate you have to write.
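Here is one hedged sketch of such a helper, reusing the exact WebGL calls from above - the only new pieces are the gl.getShaderParameter / gl.getProgramParameter checks used to report errors:

   //a small helper that wraps the compile + link boilerplate,
   //returning a linked program, or null if something failed

function create_program(gl, vertex_source, fragment_source) {

    function compile(type, source) {
        var stage = gl.createShader(type);
        gl.shaderSource(stage, source);
        gl.compileShader(stage);
            //check for compile errors, and log them if any
        if(!gl.getShaderParameter(stage, gl.COMPILE_STATUS)) {
            console.error(gl.getShaderInfoLog(stage));
            return null;
        }
        return stage;
    }

    var vertex_stage = compile(gl.VERTEX_SHADER, vertex_source);
    var fragment_stage = compile(gl.FRAGMENT_SHADER, fragment_source);
    if(!vertex_stage || !fragment_stage) return null;

    var program = gl.createProgram();
    gl.attachShader(program, vertex_stage);
    gl.attachShader(program, fragment_stage);
    gl.linkProgram(program);
        //linking can fail too, so check that as well
    if(!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        console.error(gl.getProgramInfoLog(program));
        return null;
    }

    return program;
}

   //usage
var the_shader_program = create_program(gl, vertex_stage_source, fragment_stage_source);
gl.useProgram(the_shader_program);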

Pipeline communications

As discussed in part one - the pipeline for a GPU program consists of a number of stages that are executed in order, feeding information from one stage to the next and returning information along the way.

The next most frequent question I come across when dealing with shaders is how information travels - between your application and the shaders, and between the different stages themselves.

The way it works is a little confusing at first - it's very much a black box. This confusion is amplified by "built in" values that magically exist, and even more so by deprecated values that should never be used, yet still show up in every second article. So when someone shows you "the most basic shader", it's basically 100% unknowns at first.

Aside from these things though, like the rest of the shading pipeline, a lot of it is very simple in concept and likely something you will grasp pretty quickly.

Let's start with the built in values, because these are the easiest.

Built in functions

All shader languages have built in features to complement programming on the graphics hardware. For example, GLSL has a function called mix - a linear interpolation function (often called lerp) that is very useful when programming on the GPU. The point is that you should look these up: depending on your platform/shader language, there are many functions that may be new concepts to you, as they don't really occur by default in other disciplines.
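As a quick sketch of what mix does (warm_color and cool_color are assumed vec4 values, not tied to any shader in this article):

    //mix(a, b, t) blends from a to b by the factor t,
    //equivalent to a * (1.0 - t) + b * t
vec4 blend = mix(warm_color, cool_color, 0.25);   //a quarter of the way from warm to cool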

Another important note about the built in functions - these functions are often handled by the graphics hardware intrinsically, meaning that they are optimized and streamlined for use. Barring any wild driver bugs or hardware issues, they are often faster than rolling your own code for the same task - so you should familiarize yourself with them before hand writing small maths functions and the like.

Built in variables

Built in variables are different from the functions - they store values from the state of the program/rendering pipeline, rather than operating on values. A simple example: when you are creating a pixel shader, gl_FragCoord exists, and contains the window-relative coordinates of the current fragment. As with the function list, they are well documented and there are many to learn, so don't worry if there seem to be a lot - in practice you learn about them and use them only when you need to. Every shader programmer I know remembers a subset by heart and has a reference on hand at all times.

These values are implicit connections between the pipeline and code you write.
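As a small, hedged example of one of those implicit connections (the screen_size uniform here is an assumption - your code would have to supply it):

precision mediump float;

   //assumed uniform, set from application code
uniform vec2 screen_size;

void main() {

        //gl_FragCoord is built in : the window-relative coordinates of this fragment.
        //dividing by the screen size paints a 0..1 gradient across the window

    gl_FragColor = vec4(gl_FragCoord.xy / screen_size, 0.0, 1.0);
}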

staying on track

To avoid the “traps” of deprecated functions, as with any API for any programming language, you just have to read the documentation. It's the same principle as targeting bleeding edge features - you check the status in the API level you want to support, you make sure your requirements are met, and you avoid things that are clearly marked as deprecated for that API level and above. It's irrelevant that they were changed and swapped before that - focus only on what you need, and forget its history.

Edit: Since posting this, an amazing resource has come up for figuring out the availability and usage of OpenGL features: http://docs.gl/

Most APIs provide really comprehensive "quick reference" sheets, jam packed with every little detail you would need to know, including version, deprecation, and signatures. Below are some examples from the OpenGL 4.4 quick reference card.

OpenGL 4.4 built in variables

OpenGL 4.4 built in functions


Information between stages

Also mentioned in part one, the stages can and do send information to the next stage.

Across a few years and different versions of the APIs, newer revisions were released that drastically improved on the initial confusing names. This means that, across major versions of an API, you will come across multiple approaches for the same thing.

Remember : The important thing here is the concepts and principles. The naming/descriptions may be specific, but that is simply to ground the concept in an existing API. These concepts apply in other APIs, and differ only in use, rather than in concept.

stage outputs

vertex stage

In OpenGL, the concept of the vertex shader sending information to the fragment shader was named varying. To use it, you would:

  • create a named varying variable inside of the vertex shader
  • create the same named varying variable inside of the fragment shader

This allowed OpenGL to know that you meant "make this value available in the next stage, please". In other shader languages the same concept applies, where explicit connections are created, by you, to signify outputs from the vertex shader.

An implicit connection exists for gl_Position which you return to the pipeline for the vertex position.

In newer OpenGL versions, these were renamed to out:

out vec2 texcoord;  
out vec4 vertcolor;  

fragment stage

We already saw that the fragment shader uses gl_FragColor as an output to return the color. This is an implicit connection. In newer GL versions, an out variable is used in place of gl_FragColor:

out vec4 final_color;  

It can also be noted that there are other built in variables (like gl_FragColor) that act as outputs, feeding back into the pipeline. One example is the depth value, which can be written to from the fragment shader.

stage inputs

Also in OpenGL, you would "reference" the variable value from the previous stage using varying or, in newer APIs, in. This is an explicit connection, as you are architecting the shader.

in vec2 texcoord;  
in vec4 vertcolor;  

The second type of explicit input connection is between your code and the rendering pipeline. These inputs are set through API functions in the application code, which submit data through them for use in the shaders.

In the OpenGL API, these were named uniform, attribute and sampler, among others. attribute is vertex specific, sampler is fragment specific. In newer OpenGL versions these can take on the form of more expressive structures, but for the purpose of the concept, we will only look at the principle :

vertex stage

Attributes are handed into the shader from your code, into the first stage :

attribute vec4 vertex_position;  
attribute vec2 vertex_tcoord;  
attribute vec4 vertex_color;  

This stage can forward that information to the fragments, modified, or as is.

The vertex stage can take uniforms as well - the difference is that attributes are per-vertex data, while a uniform holds the same value for every vertex and fragment in the draw.

fragment stage

uniform vec4 tint_color;  
uniform float radius;  
uniform vec2 screen_position;  

Notice that these variables are whatever I want them to be. I am making explicit connections from my code, like a game, into the shader. The above example could be for a simple lantern effect, lighting up a radius area, with a specific color, at a specific point on screen.

That is application domain information, submitted to the shader, by me.

Another explicit type of connection is a sampler. Images on the graphics card are sampled, and can be read inside of the fragment shader. Take note that the value passed in is not the texture ID, and not a texture pointer - it is the active texture slot. Texturing is usually a state, like use this shader, then use this texture, and then draw. Texture slots allow multiple textures to co-exist and be used by the shaders:

  • set active slot 0
  • bind texture A
  • set active slot 1
  • bind texture B

The value you give the sampler uniform is the slot number - the shader will always read from whichever texture is bound to the slot you hand it.
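In WebGL, that flow could look roughly like this (a sketch - tex_a, tex_b and the sampler uniform locations are assumed to already exist):

   //select texture slot 0, then bind texture A to it
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tex_a);

   //select texture slot 1, then bind texture B to it
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, tex_b);

   //the sampler uniforms receive the slot numbers, not the texture objects
gl.uniform1i(tex0_loc, 0);
gl.uniform1i(tex1_loc, 1);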

fundamental shaders

The most basic shaders you will come across simply take information, and use it to present the information as it is. Below, we can look at how "default shaders" would fit together, based on the knowledge we now have.

This will be using WebGL shaders again, for reference only. These concepts are described above, so they should hopefully make sense now.

As you recall - geometry is a set of vertices. Vertices hold (in this example) :

  • a color
  • a position
  • a texture coordinate

This is per-vertex data - geometry - so these values will go into vertex attributes and be sent to the vertex stage.

The texture itself is color information. It will be applied in the fragment shader, so we pass the active texture slot we want to use as a shader uniform.

The other information in the shader below is for camera transforms. These are sent as uniforms because they are not vertex specific data - they are just data that I want to use to apply a camera.

You can ignore the projection code for now, as this is simply about moving data around from your app, into the shader, between shaders, and back again.

Basic Vertex shader

//vertex specific attributes, for THIS vertex

attribute vec3 vertexPosition;  
attribute vec2 vertexTCoord;  
attribute vec4 vertexColor;

//generic data = uniforms, the same between each vertex!
//this is why the term uniform is used, it's "fixed" between
//each fragment, and each vertex that it runs across. It's 
//uniform across the whole program.

uniform mat4 projectionMatrix;  
uniform mat4 modelViewMatrix;

//outputs, these are sent to the next stage.
//they vary from vertex to vertex, hence the name "varying".

varying vec2 tcoord;  
varying vec4 color;

void main(void) {

        //work out the position of the vertex, 
        //based on its local position, affected by the camera

    gl_Position = projectionMatrix * 
                  modelViewMatrix * 
                  vec4(vertexPosition, 1.0);

        //make sure the fragment shader is handed the values for this vertex

    tcoord = vertexTCoord;
    color = vertexColor;

} 

Basic fragment shader

If we have no textures, only vertices, like a rectangle that only has a color, this is really simple :

Untextured

//make sure we accept the values we passed from the previous stage

varying vec2 tcoord;  
varying vec4 color;

void main() {

        //return the color of this fragment based on the vertex 
        //information that was handed into the varying value!

        // in other words, this color can vary per vertex/fragment

    gl_FragColor = color;

}

Textured

   //from the vertex shader
varying vec2 tcoord;  
varying vec4 color;

   //sampler == texture slot
   //these can be named anything you like, as explained later

uniform sampler2D tex0;

void main() {

        //use the texture coordinate from the vertex, 
        //passed in from the vertex shader,
        //and read from the texture sampler, 
        //what the color would be at this texel
        //in the texture map

    vec4 texcolor = texture2D(tex0, tcoord);

        //crude colorization using modulation,
        //use the color of the vertex, and the color 
        //of the texture to determine the fragment color

    gl_FragColor = color * texcolor;

}

Binding data to the inputs

Now that we know how inputs are sent and stored, we can look at how they get connected from your code. This pattern is very similar again, across all major APIs.

finding the location of the inputs

There are two ways :

  1. Set the attribute name to a specific location OR
  2. Fetch the attribute/uniform/sampler location by name

This location is a shader-program-specific value, assigned by the compiler. You have control over the assignment: you can either ask for a location by name, or force a name to be assigned to a specific location.

Put in simpler terms :

“radius”, I want you to be at location 0.
vs
Compiler, where have you placed “radius”?

If you use the second way, requesting the location, you should cache this value. You can request all the locations once you have linked your program successfully, and reuse them when assigning values to the inputs.
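A rough sketch of that caching idea, run once after linking (the lookup calls used here are covered in more detail just below; the locations object is simply an assumption for illustration):

   //option 1 : force a name to a specific location, before gl.linkProgram is called
gl.bindAttribLocation(the_shader_program, 0, "vertexPosition");

   //option 2 : after linking, ask where everything ended up, once, and reuse it each frame
var locations = {
    vertex_position : gl.getAttribLocation(the_shader_program, "vertexPosition"),
    tint_color      : gl.getUniformLocation(the_shader_program, "tint_color"),
    radius          : gl.getUniformLocation(the_shader_program, "radius"),
    screen_position : gl.getUniformLocation(the_shader_program, "screen_position")
};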

Assigning a value to the inputs

This is often application language specific, but again the principle is universal : The API will offer a means to set a value of an input from code.

vertex attributes

Let's use WebGL as an example again, with a single attribute - the vertex position - to locate and set.

var vertex_pos_loc = gl.getAttribLocation(the_shader_program, "vertexPosition");  

Notice the name? I am asking the compiler where it assigned the named variable I declared in the shader. Now we can use that location to give it some array of vertex position data.

First, because we are going to use attribute arrays, we want to enable them. Read in simple terms, this code says "enable a vertex attribute array for this location", where the location refers to "vertexPosition".

gl.enableVertexAttribArray(vertex_pos_loc);  

To focus on what we are talking about here, some variables are implied :

   //this simply sets the vertex buffer (list of vertices) 
   //as active, so subsequent commands use this buffer

gl.bindBuffer( gl.ARRAY_BUFFER, rectangle_vertices_buffer );

   //and this line points the buffer at the location, or "vertexPosition",
   //reading 3 floats (x, y, z) per vertex - note the 3 matches the vec3 attribute

gl.vertexAttribPointer(vertex_pos_loc, 3, gl.FLOAT, false, 0, 0);  

There, now we have:

  • taken a list of vertex positions, stored them in a vertex buffer
  • located the vertexPosition variable location in the shader
  • enabled attribute arrays, because we are using arrays
  • set the buffer as active,
  • and finally pointed our location at this buffer.

What happens now is that the vertexPosition value in the shader is associated with the list of vertices from the application code. Details on vertex buffers are well covered online, so we will continue with shader specifics here.

uniform values

As with attributes, we need to know the location.

var radius_loc = gl.getUniformLocation(the_shader_program, "radius");  

As this is a simple float value, we use gl.uniform1f. This varies by API in syntax, but the concept is the same across the APIs.

gl.uniform1f(radius_loc, 4.0);  

This tells OpenGL that the uniform value for "radius" is 4.0, and we can call this multiple times to update it each render.
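To tie this back to the lantern example from earlier, the other uniform types follow the same pattern (a sketch only, assuming the locations were fetched and cached as described above, with made-up values):

   //vec4 tint_color, vec2 screen_position, float radius,
   //matching the uniforms declared in the fragment stage earlier
gl.uniform4f(tint_color_loc, 1.0, 0.85, 0.6, 1.0);
gl.uniform2f(screen_position_loc, 640.0, 360.0);
gl.uniform1f(radius_loc, 96.0);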

Conclusion

As this article was already getting quite long, I will continue further in the next part.

As much of this is still about understanding the theory, it can seem like a lot to take in before digging into programming actual shaders - but remember there are many places to look at real shaders and try to understand how they fit together :

Playing around with shaders : recap
Here are some links to some sandbox sites where you can see examples, and create your own shaders with minimal effort directly in your browser.

https://www.shadertoy.com/
http://glsl.heroku.com/
http://www.mrdoob.com/projects/glsl_sandbox/

An important factor here is understanding what your framework is doing to give you access to the shaders, which allows you to interact with the framework in more powerful ways - like drawing a million particles in your browser by passing information through textures, and encoding values into color information and vertex attributes.


The delay between post one and two was way too long, as I have been busy, but the next two posts are hot on the heels of this one.

Tentative topics for the next posts :

shaders stage three

  • A brief discussion on architectural implications of shaders, or "How do I fit this into a rendering framework" and "How to do more complex materials".
  • Understanding and integrating new shaders into existing shader pipelines
  • Shader generation tools and their output

shaders stage four

  • Deconstructing a shader with a live example
  • Constructing a basic shader on your own
  • A look at a few frameworks' shader approaches
  • series conclusion

Follow ups

If you would like to suggest a specific topic to cover, or know when the next installment is ready, you can subscribe to this blog (top right of post), or follow me on Twitter, as I will tweet about the articles there as I write them. You can find the rest of my contact info on my home page.

I welcome questions, feedback and topic suggestions.
As before, I hope this helps someone in their journey, and look forward to seeing what you create.

Primer : Shaders

A common theme I run into when talking to some developers is that they wish they could wrap their head around shaders. Shaders always seem to solve a lot of problems, and often are referenced as the solution to the task at hand.

But just as often they are seen as a sort of enigma or black box - one that is so shrouded in complexity that it makes learning them from “basic” examples near impossible.

Hopefully, this primer will help those who aren't well versed make the transition into using shaders, where applicable.


Other parts:
- you are viewing part one
- here is part two

What are shaders?

When you draw something on screen, it is generally submitted as some “geometry” - like a polygon or a group of triangles. Even drawing a sprite is drawing some geometry with an image applied.

Geometry is a set of points (vertices) describing the layout which is sent to the graphics card for drawing. A sprite, like a player or a platform is usually a “quad”, and is often sent as two triangles arranged in a rectangle shape.

When you send geometry to the graphics card to be drawn, you can tell the graphics card to use custom shaders that will be applied to the geometry, before it shows up on the render.

There are two kinds of shaders to understand for now - vertex and fragment shaders. You can think of a shader as a small function that is run over each vertex, and every fragment (a fragment is like a pixel) when rendering. If you look at the code for a shader, it would resemble a regular function :

void main() {  
   //this code runs on each fragment, or vertex.
}

It should be noted as well that the examples below reference the OpenGL Shading Language, referred to as GLSL, but the concepts apply to the programmable pipeline in general and are not specific to any rendering API. This information applies to almost any platform or API.


The vertex shader


As mentioned, there are vertices sent to the hardware to draw a sprite. Two triangles - and each triangle has 3 vertices, making a total of 6 vertices sent to be drawn.

When these 6 vertices reach the rendering pipeline in the hardware, there is a small program (a shader) that can run on each and every vertex. Remember the graphics hardware is built for this, so it runs many of these at once, in parallel, which makes it really fast.

That program really only cares about one thing: the vertex shader mainly cares about the position the vertex will end up at (there is a footnote in the conclusion). This means that we can manipulate (or calculate) the correct position for the vertex. Very often this includes camera calculations, and determines how and where the vertex ends up before being drawn.

Let's visualise this below, by shifting the sprite 10 units to the left :

vertex shader

If you wanted to, you could apply sin waves, or random noise or any number of calculations on a per vertex level to manipulate the geometry.
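As a hedged sketch of that idea (the time uniform here is an assumption - your application would have to pass it in, and attributes and uniforms are covered properly in part two):

attribute vec3 vertexPosition;

   //assumed uniform : elapsed time in seconds, supplied by your code
uniform float time;

void main(void) {

    vec3 pos = vertexPosition;

        //shift the whole sprite 10 units to the left
    pos.x -= 10.0;

        //and wobble each vertex up and down on a sin wave
    pos.y += sin(time + pos.x) * 0.5;

    gl_Position = vec4(pos, 1.0);
}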

Practical example
This can be used to generate waves that move the vertices according to patterns that look like lava or water. All of the following examples were provided by Tyler Glaiel from Bombernauts

The lava (purple) area geometry, bunches of vertices!

Lava Area

How it looks when a vertex shader moves it around (notice how the vertices are pushed up and down and around like water, this is the vertex shader at work)

You can have a look at how it looks when it ripples on the blog post here, at the Bombernauts development blog.


The fragment shader


After the vertices are done moving about, they are sent to the next stage of the pipeline to be "rasterized" - that means converted into fragments that end up as pixels on screen.

During this rasterization stage, the geometry is broken up into fragments, and each fragment is handed to the fragment shader. These are also sometimes referred to as pixel shaders, because some people associate the fragments with pixels on screen, but there is a difference.

Here is a gif from an excellent presentation on Acko.net which usefully demonstrates how sampling works, which is part of the rasterization process. It should help you understand how the vector geometry becomes pixels in the end.

rasterization

Now, the fragment shader, much like the vertex shader, is run on every single fragment. Again, it is good at doing this really quickly, but it is important to understand that a single line of code in a shader can cause drastic performance cost due to the sheer number of times the code will be run! (See the note at the end of this section for some interesting numbers).

The fragment shader mainly cares about what the resulting color of the fragment becomes. It will also interpolate (blend) values from each vertex, based on the fragment's location between them. Let's visualize this below :

fragment shader

When I say interpolated, here is what I mean : Given a rectangle with 4 corners (arranged as 2 triangles) and the corner vertices colors set to red, green, blue and white - the result is a rectangle that is blended between the colors automatically.

Interpolated colors sourced from open.gl

Practical example
A fragment shader can be used to blur some or all of the screen before drawing it, like in this example, some blur was applied to the map screen below the UI to obtain a tilt shift effect. This is from a game I was working on for a while, and the tilt shift shader came from Martin Jonasson.
For the curious, here is the source for the tilt shift shader along with some notes about separating the x and y passes for a blur, since that has come up a bunch.

tilt shift

An important note on numbers

A game rendered at 1080p, a resolution of 1920x1080 pixels, would be 1920 * 1080 = 2,073,600 pixels.

That is per frame - and games usually run at 30 or 60 frames per second. That means (1920 x 1080) x 60 for one second of time, a total of 124,416,000 pixels each second - and that is for a single frame buffer; games usually have multiple buffers as well, for special effects and all kinds of rendering needs.

This is important because you can do a lot with fragment shaders, especially since the hardware is exceptionally good at running them - but when you are chasing performance problems, it often comes down to how quickly the hardware can process the fragments, and shaders can easily become a bottleneck if you aren't paying attention.

Playing around with shaders

Playing with shaders can be fun, here are some links to some sandbox sites where you can see examples, and create your own shaders with minimal effort directly in your browser.

https://www.shadertoy.com/
http://glsl.heroku.com/
http://www.mrdoob.com/projects/glsl_sandbox/

Conclusion


Recap : Shaders are the parts of a GPU program that, when the program is enabled, are applied to geometry as it is submitted to be drawn.

Vertex shaders : first, applied to every vertex when enabled, each render, and mainly care about the end position of the vertex.

Fragment shaders : second, applied to every fragment when enabled, each render, and mainly care about the resulting color of the fragment.

Because shaders are so versatile by nature, there are many, many things that you can do with them. From complex 3D lighting algorithms down to simple image distortion or coloring, you can do a huge range of things with the rendering pipeline.

Hopefully this post has helped you better understand shaders, and let you explore the possibilities without being completely confused by what they are and how they work going into it.

footnote
It should be said there is more that you can do with vertex shaders, like vertex colors and uv coordinates, and there is a lot more you can do with fragment shaders as well but to keep this post a primer, that is for a future post.


Notes on the term “Shaders”
The term “Shader” is often called out as a bit of a misnomer (but only sort of), so be aware of the differences. This post is really about the “programmable pipeline”, as mentioned in bold really early on. The pipeline has stages, and you can run your own code for certain stages. A GPU program is made up of code from each programmable stage (vertex, fragment, etc), compiled into a single unit and then run over the entire pipeline while geometry is submitted for drawing, if that program is enabled.

Each stage does a little communicating between the stages (like the vertex stage hands the vertex color to the fragment stage), and the vertex and fragment stages are the most important to understand first.

I personally feel like the term shader comes from the fact that 99.9% of the time you spend working with the programmable pipeline will be spent on shading things, while the vertex and other stages are often a fraction of the day to day use of your average application or game.
