Haxe Entry Point

A title with two meanings, what have we here!

Recently I wrote about Haxe from 1000ft, which looks at the way Haxe fits together, how its flexibility makes it difficult to explain, and how an onlooker might better understand it.

This post is a follow up, and discusses what happens if you were interested in using Haxe for something, and were curious about the entry point from a user perspective.

To tackle the basic usage and understanding of the Haxe environment, we will write an example command line tool using Haxe.

This is a continuation of a series on Haxe itself; you can find all related and future posts under the “haxe” tag.

haxe.org

Here is part one.
This is part two.


User entry

When you install Haxe, it installs a few command line tools: the compiler, haxe, the package manager, haxelib, and some other tools that it needs.

Let's start with haxelib, usually the first step.

haxelib

haxelib is a tool that manages haxe libraries, which is where the name comes from. In Python you would be familiar with pip, Ruby has gems, node.js has npm - if you've used any package manager before you should be quite at home.

haxelib repository
haxelib has a central repository of libraries. Libraries are distributed in zip form and are easily installed/removed from your computer at your discretion. Libraries are submitted by third party developers and offer a diverse range of tools, frameworks, and more through a simple command line interface.

haxelib install
If you want to install a library, it's as simple as saying haxelib install hxcpp. That would install a library named "hxcpp", which you will need in order to develop against the Haxe C++ target.

haxelib run
A library is allowed to include a "run script", which is a small neko module (mentioned here before) that can do certain tasks - perhaps some post-install configuration, or it may be how you interact with the library entirely. For example - flow - the build tool used by snowkit libraries - is used entirely via the haxelib run command. To run the script, you would call haxelib run flow, which would show its options.

This varies by library, so you would usually want to check the docs or readme about the library before executing the command blindly.

haxelib config and versions
A lib downloads and unzips into a location you can set by calling haxelib setup, or view by calling haxelib config. On OS X for example the default should be /usr/lib/haxe/lib/, and within that folder it would put all your libraries.

Libraries have versions, and are unpacked into a folder structure which looks like this: hxcpp/3,1,68/ (the dots in a version become commas on disk). To install a specific version, I would call haxelib install hxcpp 3.1.48, which would install that version alongside the existing ones.

switching library versions
To switch between multiple versions of the same library you use haxelib set. To switch to an older version of hxcpp, I would call haxelib set hxcpp 3.1.48, which would make that the active version that's used by subsequent builds.

To see which versions you have installed, you can run haxelib list which shows all library versions. If you wanted to know where exactly the current version is located, you would use haxelib path hxcpp (which also for some reason includes some additional lib definitions).

haxelib updates
To update to a newer version of a library (if there is one) you would run haxelib update hxcpp.

You can also try haxelib upgrade, which would check and ask about upgrading every installed library - but be aware that you should check what was updated against your projects. You don't want to accidentally update to an incompatible version without realizing. I find it easier to be intentional about it, so I can switch to a version that is compatible again later if needed.
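
To recap, a typical version-management session might look like this (version numbers illustrative):

haxelib install hxcpp           # install the latest version
haxelib install hxcpp 3.1.48    # install a specific version alongside it
haxelib set hxcpp 3.1.48        # make that the active version
haxelib list                    # show installed libraries and versions
haxelib path hxcpp              # print where the active version lives
haxelib update hxcpp            # update this one library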

dev libraries
Sometimes a library won't be uploaded to the haxelib repository yet, or you might want to make your own folder into a haxelib library under a given name. To do this you can use the haxelib dev command, which points a library name at a specific folder on your computer.

haxelib dev mylibrary /Users/Sven/dev/mylibrary/ would "install" this folder as a haxelib called "mylibrary", allowing me to use -lib mylibrary when building haxe projects later.

You can also install a library from a zip package, maybe you manually downloaded hxcpp 3.1.39 at work and wanted to install it offline at home. To do this, you would use haxelib local file.zip, which would be the same as if it were downloaded. This is also useful for testing your own libraries while developing them.

Let's say I was editing hxcpp for something I was testing, I could make a local copy, and point to it with dev, haxelib dev hxcpp ~/dev/hxcpp/. This allows me to say which hxcpp is used for all projects. If I wanted to go back to using the stable version, I would just call haxelib dev hxcpp which would stop using the dev version again.

git haxelib repositories
Another great use case is installing a dev version directly from a git URL. This is useful as many libraries might not yet be ready for haxelib releases, due to active development. Or, sometimes libraries get stuck in limbo waiting for a new haxe release.

The usage would be the same as the dev command, except it would use git to pull down the repository. It would use git to update it as well, when running update/upgrade. Using hxcpp as a continued example, if you wanted to try the bleeding edge version, you could run haxelib git hxcpp https://github.com/HaxeFoundation/hxcpp.git which would pull down the latest git version for you. To switch back to stable, you would just run haxelib dev hxcpp as before.

other haxelib info
More information about haxelib can be found on the Haxe website or by running haxelib with no arguments.

hxml

Now that we know how to install and manage libraries, how do we use them?

The Haxe intro post mentions hxml briefly, but here we will consider it as step two, in getting started with Haxe.

hxml = haxe markup language
The meaning of hxml is a guess on my part (but it would make sense). It's a very simple markup file that acts as command line arguments to the Haxe compiler - and that's it.

Let's take the simplest example, the haxe -version command. That second bit - the argument - we can put inside a text file, one argument per line.

# haxe_version_example.hxml
-version

It helps to use the hxml extension, because then people know what to do with the file.

using hxml files
To use the hxml file, you simply hand it to haxe as an argument:
haxe ./haxe_version_example.hxml

The results would be identical to the above haxe -version command. This seems silly for one command, but for large projects with many flags, defines, library dependencies and targets it would quickly become impractical to use the command line directly. hxml files solve this easily.

hxml in practice
Now we know where to put the hxml file, what do we put inside it? There are a LOT of options actually - if you run haxe -help you will see a listing of some of the possible arguments to Haxe.

To continue on our path though (we are at step two), we will only look at a few useful pieces of hxml to move forward.

  • -main ClassName
    • The class where the literal entry point will be
    • The entry point is a static function main() { }
  • a target to build to
    • Could be one of many, in our example -neko
  • -lib libraryname
    • Specify a haxelib dependency
  • -D mydefine
    • Specify a conditional define for #if mydefine within code
  • --next
    • Split between subsequent sections of the same hxml
    • You could consider this as starting a new hxml, within the existing one. This allows multiple targets or steps to be executed in order.
  • -cmd
    • Runs an arbitrary command after previous commands complete - both --next and -cmd appear in the sketch after this list.
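
As a small sketch of how these combine (names illustrative), here is an hxml that builds two targets in order, then runs the neko result:

# build the js version
-main App
-js app.js

--next

# then build the neko version and run it
-main App
-neko app.n
-cmd neko app.n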

Example: “haxelibcounter”

To further our journey, we will ground it in practice. To do this, we will write a simple tool to count the number of haxelib libraries using the knowledge above. Here's what we'll do :

  • Use a library to parse command line arguments
  • Use hxml to build our program into a neko binary
  • Turn this command line tool into a haxelib of its own

Note: The code is in snippets below, but the entire code is embedded at the end for clarity, and is also at the github repo in full.

This seems trivial, but it's the basis of every haxe game, application, website or tool. We are going to be using the haxe neko target (more about that here) to generate a small, cross platform binary that haxelib will use to run our tool from a user perspective, as we discussed above.

step one: dependencies
So to begin, if you were paying attention, we need our lib dependencies! We are going to need the library we want to use to parse arguments. For the sake of example I have used my own simple library called “arguable” which I use in my own tools to handle command line parameters. It's very simple in itself, so it might serve as an interesting place to poke at if you're curious.

First, we need it installed. From a command line, we run haxelib install arguable.

There - the install log shows us what was inside the library. It has no run.n file, so we can't use haxelib run arguable, and we can see it has a test/build.hxml file - if we wanted to build the test, we could pass that file to haxe.

The rest doesn't matter for now,

step two: create a hxml file

To do this we create a blank file, and fill it with Haxe command line arguments. We will name our file build.hxml, as I find this easy to spot as the way to (re)build something.

The minimum we would be doing is specifying an entry point (-main) and a target (-neko), but we are also using a library, adding a define, and running the tool while we develop (this gives us quick turn around time).

  • Specify the entry point:
    In the hxml file, we start with -main HaxelibCounter. This tells Haxe to look for a file called HaxelibCounter.hx - and inside it, look for a static function main() {} so that it knows where to start.

  • Specify a target:
    Since we want to build to the neko target with haxe, the docs tell us that we use -neko file_to_create.n. Our exact usage is -neko run.n, because we want to use this library from haxelib run.
    This file is platform independent - as long as you have neko, you can run it. This is why it's used for the haxelib run command, it works anywhere Haxe is installed. If you wanted to create a platform dependent binary/exe, you would use nekotools boot file.n and what you would get out of that is file or file.exe, depending on the platform.

  • Specify the dependency:
    -lib arguable will tell Haxe that we are using haxelib to depend on this library. What this does is tell Haxe where the code from arguable will be on your computer, via haxelib. This way, when you reference code from within an arguable package, it can find it (more on this below).

  • Specify a define:
    This is just an example, so we will make a define called haxelibcounter_verbose which we will use to output more information in a debug type of build.

  • Tell neko to run the file it just built:
    As mentioned above, the -cmd argument will execute something for us. We can add -cmd neko run.n, immediately running the result of the build (if it succeeded).

Onward
This is our starting hxml file, for this project:
(# is a comment line, it's ignored by Haxe)
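
# build the HaxelibCounter tool
-main HaxelibCounter
-lib arguable
-neko run.n

# uncomment for extra logging output
# -D haxelibcounter_verbose

# uncomment for a debug build
# -debug

# run the result right away
-cmd neko run.n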

Writing the tool

Now we get to the code. We create a blank file, name it HaxelibCounter.hx, and put it in the same folder with the build.hxml. Inside the file, we need a class named HaxelibCounter, and we need a function within that class called static function main. This is the Haxe entry point.

class HaxelibCounter {

    static function main() {
        trace("This will do stuff soon!");
    } //main

} //HaxelibCounter

First run

Now we have :

  • A file to build
  • A file to tell Haxe how to build it
  • And that will run the program when it's done

We can run our application using Haxe \o/
From within the same folder from a command line, run: haxe build.hxml

And you should see:

> haxe build.hxml
HaxelibCounter.hx:6: This will do stuff soon!

Importing code from the library

Now we need to reference some code from arguable. We do this using the import statement. Import statements must come before everything else in the file, so they go right at the top, above our class.

Class paths
The import statement basically checks a list of locations called the class path to see if the module you are looking for is in there. Our example: import arguable.ArgParser;

If you look at the haxelib install log above, you will notice that arguable/ArgParser.hx is actually just a file within a folder. This is no coincidence! A . within the import statement is stepping one depth lower into the class path. This is true for a folder (a package), or a Module (multiple types within one hx file) - more on that below.

By using -lib arguable, we add the location of that folder that haxelib is keeping track of - to the class path. Now, when some Haxe code says import arguable.ArgParser;, the modules within the class path are checked. Since a single file on disk can contain multiple types, the file that contains them is called a Module. hint: You can also look up the -cp command, it adds a package folder manually.

Since that's all we need to know about that for now,

What about packages?
A package is a folder within the class path. arguable. is the package, ArgParser is the module. import arguable.ArgParser; tells Haxe to look inside the arguable package folder(s) and find a module called ArgParser. Simple enough!

This brings up an important rule of understanding in Haxe:

  • Modules/Classes MUST start with a Capital Letter
  • packages/folders MUST start with lower case

This is how you tell the difference between game.Game and Game.game.
Small letter: package (reaching into a folder).
Capital letter: A haxe class (module).

In this obtuse example, Game.game is a static variable on the class called Game, and game.Game implies that the Game class is in a folder called game/. Ideally you wouldn't name things in such convoluted ways, but with this simple rule you can always tell the difference.
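
On disk, that difference might look like this (layout illustrative):

src/
    Game.hx      - the Game module, at the top of the class path
    game/        - the game package folder
        Game.hx  - the module referred to as game.Game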

after import
So, we imported the ArgParser module, which is a Haxe class - this gives us a shortcut name for arguable.ArgParser. We no longer have to explicitly say arguable.ArgParser every time we want to refer to that class in our code - we have already told Haxe which ArgParser we mean.

Using the ArgParser class

Hopefully the library you are using is documented, as that's usually where you check for how to use things. If not, you should probably file an issue for documentation with the developer as, well, that's a major issue.

arguable is a very simple library, so the documentation is housed in the readme file. To use it, we simply give it the list of system arguments from the haxe std api.

The Sys.args() function gives us the command line arguments to our haxe program on language targets that have access to Sys (neko does). Note again, Capital = Class. There is also a sys package, which contains various other useful modules but we are using the Top Level class called Sys.

Let's change our code to the following, and run it again with haxe build.hxml:

static function main() {

    var args = ArgParser.parse( Sys.args() );
    trace(args);

} //main

We get
HaxelibCounter.hx:10: { any => false, length => 0, valid => [], invalid => [] }

So far so good, we didn't give it any arguments, so that result makes sense.

Using our define

Since we are making a tool we might want to print a lot of information during development to help us understand what path the tool is taking. We made a define to allow us to turn this on and off easily, so it doesn't burden the user unless they need it.

The defines are used with #if define_name, and allow opposites with ! (not), like #if !neko. You can combine them with ( ), like #if (cpp && neko && !mydefine). The Haxe manual covers these further - that's all we need. Let's make a debug function to do some logging:
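
A minimal version of such a debug function, using our define, might look like this:

static function debug(value:Dynamic) {

    #if haxelibcounter_verbose
        trace(value);
    #end

} //debug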

Now our code looks like this:

var args = ArgParser.parse( Sys.args() );  
debug('> found args $args');  

And without that define (commented out in the hxml) we don't see that output. This will come in handy.

Implementation details

What we really want to do in this example is:

  • If the user provided no arguments, show the usage information.
  • If the user requested we show the count, do that.
  • Additionally, if the user asks, print the names while we are at it.

We can now use arguable to do the simple handling, and add the printing of the usage. We'll fill in the do_count function next :
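
As a sketch, the handling might look like the following. The any field comes from the trace output we saw earlier; treat args.has() as a stand-in for whatever lookup arguable actually provides - check its readme for the exact call:

static function main() {

    var args = ArgParser.parse( Sys.args() );
    debug('> found args $args');

        //no arguments given? show the usage info
    if(!args.any) {
        display_usage();
        return;
    }

        //args.has is an assumed stand-in for the arguable lookup
    if(args.has('count')) {
        do_count( args.has('show-names') );
    } else {
        display_usage();
    }

} //main

static function display_usage() {
    trace('Usage: --count [--show-names]');
} //display_usage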

do_count()
What we are doing next is pretty straightforward: we ask haxelib list for all installed libraries, we split that up, one per line, and then print the total.

Some notes: I have intentionally specified the 'full name' for sys.io.Process. This is an alternative to using import, and is important when you might have two modules/classes with overlapping names. I like to keep haxe API names explicit, so myself and others can easily look them up.
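
With that in mind, a sketch of do_count (using trace for now - we tidy the output up later):

static function do_count( show_names:Bool ) {

        //ask haxelib for the list of installed libraries
    var process = new sys.io.Process('haxelib', ['list']);
    var output = process.stdout.readAll().toString();

    process.close();

        //split it up, one library per line, ignoring blank lines
    var lines = output.split('\n').filter(function(line) {
        return StringTools.trim(line).length > 0;
    });

        //print the total; show_names is used in the next step
    trace('found ${lines.length} haxelibs');

} //do_count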

and --show-names

For the last bit of functionality, we can display the names, since we already have the list.

To do that, we can add:
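
Something along these lines, at the end of do_count:

    if(show_names) {
        for(line in lines) {
            trace(StringTools.trim(line));
        }
    }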

Let's run it again. You'll probably find it only ever returns the usage info! We need to pass the arguments via our hxml file for now. You can try it out by just removing or adding the arguments here:

-cmd neko run.n --count --show-names

Now my output says:

Tidying up

One thing you'll notice is the HaxelibCounter.hx:48: stuff, that comes from trace, which is really important for debugging effectively. But since our tool will be user facing, we want to remove that from the output. For this, Sys.println is available, which will print to the command line without any prefixes.

We can use a built in define for this: #if debug. Haxe has a -debug command line option which enables more debug related features, depending on the target. It also defines the debug conditional, which we can use to swap between clean logging and debug logging. When we want to debug, we uncomment the -debug line inside our hxml (shown later) and in the code :
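
For example, a log helper can switch on the debug define like this (a sketch):

static function log(value:Dynamic) {

    #if debug
        trace(value);
    #else
        Sys.println(value);
    #end

} //log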

Turning this into a haxelib for haxelib run

We already know that a haxelib needs a json file - and optionally a run.n file (which we have!) so let's create the haxelib.json, which is pretty much copied from the template in the documentation:
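
A sketch of its contents - the url, tags and contributor name here are illustrative placeholders:

{
    "name": "haxelibcounter",
    "url": "https://github.com/underscorediscovery/haxe-entry-point-example",
    "license": "MIT",
    "tags": ["utility", "commandline"],
    "description": "Counts the number of installed haxelib libraries",
    "version": "1.0.0",
    "releasenote": "initial release",
    "contributors": ["underscorediscovery"]
}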

Now we can tell haxelib that it exists. From the project folder, we tell it to use "this folder" (./):

haxelib dev haxelibcounter ./

Now, it should show up in haxelib list, and we can try haxelib run. I've left debug and verbose flags enabled, for testing:

Notice anything new?

haxelib run adds a needed argument

When running from haxelib, the last argument is always the path the command is being run from! That's useful, because we often need to know that. What haxelib is doing is basically neko run.n [your arguments] path/that/this/is/running/from. Keep that in mind when writing tools, and how it affects your argument processing.
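
In practice, that means popping the extra path off before parsing the user arguments - a sketch:

    //haxelib run appends the calling path as the last argument
var raw_args = Sys.args();
var run_path = raw_args.pop();

    //often you want to operate relative to where the user ran the command
Sys.setCwd(run_path);

var args = ArgParser.parse(raw_args);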

Let's try with arguments to haxelib run. This time I have disabled -debug and verbose logging, so what we get is the end result we were looking for (I've trimmed the output, obviously):

The keen observer might notice the number went from 42 -> 43 haxelibs - that's because I added this one along the way!

Submitting to haxelib for other users


Please note : Don't submit this tutorial yourself. I have submitted it already, and you can install it with haxelib install haxelibcounter to mess around with it. But submitting a bunch of copies of the same thing is of no real use to anyone.

First we add documentation. Since ours is really small we will just add it to the README.md file, but it's crucial.

Then, we need to make our project into a zip file, and name the zip file with the same version name for clarity. I called mine haxelibcounter-1.0.0.zip.

Now I just run haxelib submit haxelibcounter-1.0.0.zip. Since haxelib.json lists the contributors, it will ask me for my user password (it reads the file and knows who I am), then upload the package and make it available.

Conclusion

Hopefully this guide helps you further understand the underlying Haxe toolkit and its ecosystem through a practical example. Once you start using big libraries and frameworks that abstract you away from the hxml and the command line, it can easily become unclear how things fit together.

My hope in writing this is that you see how the Haxe part fits together without all that, so when you are writing tools, games, applications, websites or whatever else using Haxe (either manually, or through a framework) - you have a good understanding of what happens behind the scenes. The more you understand about the tools you're using, the better equipped you'll be to tackle any problem. You'll also probably start to see Haxe in situations you didn't before, because of its versatility.

Follow and find me on Twitter if you have feedback or questions, and as always I really appreciate the number of people that share these articles with others!


If you would like something to try further, try listing the active version next to the name using --show-version or similar. You could also try compiling to a different target - but haxelib run only works with a run.n file and neko, and since we used sys APIs, these won't be available on some targets.

All haxe related posts.


Full code listing:

All code is available directly here:
https://github.com/underscorediscovery/haxe-entry-point-example

build.hxml

HaxelibCounter.hx

Haxe from 1000ft

I often run into people confused as to how to use Haxe, or where exactly it fits into a project pipeline.

For the newcomer, Haxe is a high level modern programming toolkit that compiles to a multitude of languages (c#, c++, java, js, python, php, etc). You can visit Haxe.org for a closer look - in the meantime, here is a broad strokes view of what Haxe is and can do.

haxe.org

This is part one.
Here is part two.


The versatility confusion matrix

Haxe is an extremely versatile toolkit for cross platform and cross target development (more on this soon). It is a tool that fits many uses, which by nature makes it very hard to pin down to a list of "What is it" or "What is it for". It's difficult to say - because you can do basically anything using it.

Let's take a language people don't get “confused” by (or do they?) and ask the same questions : What is C#? What can I use it for? What about C++? Or Java? Or JS? Or Python? Every single one of these languages is versatile, and lets you :

  • write a console application
  • write a game / game engine
  • write a web page/app/backend
  • write a high level utility script

Since Haxe is a programming language, it does all of these things as well. Simple enough! Bonus: because Haxe becomes other languages (including the ones in our example), it can do everything those languages do as well.

The Haxe Toolkit

Now that we see that Haxe includes a programming language, what else does the Haxe Toolkit include? (More details on haxe.org.)

  • The programming language
  • The (cross) compiler
  • The standard library

The programming language

As mentioned, it's a programming language. It's modern, it's high level, it's strictly typed and includes access to the underlying target languages and native platforms.

The cross compiler

What do you do with some code written in the Haxe programming language? Feed it to the compiler!

What does that compiler generate? Code in a target language and depending on the target, a binary of sorts. Let's look at a simple example.

Given App.hx as a Haxe class, let's see an example of how to get a target language file from a Haxe file:

  • haxe -main App -python app.py
  • haxe -main App -js app.js

Edit: I was reminded that although python has been available as a target for almost a year in repo, and is listed on the main home page, that it is in fact a 3.2 feature and is not available in 3.1.3 stable. It will be shipping sometime in the near future - Sorry for the lack of clarity on that!

This is covered in more detail later on, with more examples of real world uses and other targets. The compiler includes many other utilities as well :

notable compiler utilities

There are so many to list, because of the wide number of targets, but I'll short list some of my favourite ones :

  • Insanely fast. Building complex projects in < 1s usually, which of course varies by the target language and the complexity of the output code, your disk write speeds, etc - but the compiler itself is so fast you'll often wonder if it worked because it blinked by.
  • Documentation xml output can be generated directly from Haxe itself, which will return the information about every type and all the documentation within the files for consumption. This data is pretty brutal, but can be coerced fairly easily into other data like JSON, using the Haxe xml parser itself. From it, though, really useful and fully in depth documentation can be generated. Of course the Haxe API itself uses it.
  • Code completion is handled by the compiler itself. You tell it a file location and the command line options, and it returns xml information about the completion at that point. This allows third party plugins and tools to fully utilize the features of the language in their interface, if they provide the capability to do so. Included in the compiler is a code completion cache server, which stays running and caches types and classes for projects, ensuring there are never delays in completion. A quick sketch of driving both of these features follows this list.
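
Both of those features are driven by compiler flags - for example (file names illustrative):

# generate documentation xml while building
haxe -main App -js app.js -xml docs.xml

# ask the compiler for completion data at byte position 120 of App.hx
haxe -main App -js app.js --display App.hx@120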

The standard library

When you're writing code in general, and especially cross platform code, it's really helpful (and often important) to have code that works the same across all targets.

This is what the standard library is for. There are a myriad of classes in there, way too many to list here, but let's pick a few as examples of the kinds of things that are "built in" to the language and are often available on every target.

The full Haxe api listing is here

common cross platform examples

  • JSON
    • var object = haxe.Json.parse(str);
    • var string = haxe.Json.stringify(object,null,4);
  • Base64
    • var encoded = haxe.crypto.Base64.encode(source);
    • There is also CRC, md5, sha1, sha256 etc
  • XML
    • var xml = haxe.xml.Parser.parse(source, strict);
    • for(node in xml) var attribute = node.get('example');
  • Serialization
    • var str = haxe.Serializer.run(object);
    • var object = haxe.Unserializer.run(str);
  • Http
    • var data = haxe.Http.requestUrl('http://example.com');
    • available on cpp, cs, java, js, macro, neko, php, python

target specific utilities

On top of common tools and types and utilities, the standard library includes a lot of options for each target individually, allowing maximum use of the underlying language features.

Some easy examples : File/FileSystem access. These are only available on certain platforms where it makes sense. When compiling to other targets, these types are not available.

Also remember : these are all strongly typed API's.

python example
Say you were writing something for the python target, and you needed access to the python standard library.

import python.lib.Glob;  
...
var list:Array<String> = Glob.glob('path/');  

javascript example
If you're writing a web page, you might want access to the native api, like window or document:

js.Browser.window.alert('hello from haxe');
js.Browser.document.getElementById('id');

Maybe you wanted to create an element:

var element = new js.html.DivElement();
element.style.width = '99px';
js.Browser.document.body.appendChild(element);

sys target example
A good number of platforms support the sys types.

var data = sys.io.File.getContent('path.txt');

In many of these cases you might want to use #if python or #if sys and so on, to prevent code from trying to compile for the wrong target and throwing errors your way.

Using the generated code

As this is a large and open ended topic, I will only give a few simple examples, so you can understand how to use the low level code output in another project or as part of a project. This is ALWAYS going to be dependent on the actual language output you are targeting. For example, C# can use dlls really easily, or the C++ target can generate a static library, etc. Each target's usage differs based on where you will use it.

js example
JS is straight forward, you simply load it in an HTML file in the browser or in node.js or other interpreters.

There are two main use cases here: one is that your application boots itself up (using the regular static function main entry point in the -main class), and the second is that you're complementing existing js code with some js code from Haxe.

In the first case, you would just include the generated js file as usual, with <script type="text/javascript" src="app.js"> </script> and the entry point will be called for you.

The second case would require you to call into or out of the Haxe code. This is a js specific problem, and is solved with js specific solutions. You could, for example, store a value in the global scope from Haxe, or call global functions that the non-Haxe code exposes.
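
One common pattern - a sketch, with MyApi as an illustrative name - is exposing a Haxe class to the page's global scope using the @:expose metadata:

@:expose('MyApi')
class MyApi {

        //non-Haxe code on the page can now call MyApi.greet('world')
    public static function greet(name:String):String {
        return 'hello $name';
    }

}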

C# example

With Haxe code written, say, as a library for use in C# tools (like Unity, Mono, etc) you can directly generate a .dll file for immediate use as a reference. The code can also be included in the project manually, and the -D no-root flag will keep all the Haxe code in its own namespace to avoid conflicts.

You can also use Haxe with Unity directly.

Java example

A good example of a Java use case would be an Android plugin. If you add the android.jar library to your class path in the hxml file or command line, you can compile immediately usable jar files from Haxe code that call the native android APIs directly.
Here's an example class, written in Haxe, that opens a URL in the device default browser as a new Activity. This Haxe class is compiled into a .jar file, and is then usable in any android project, thanks to the Haxe cross compiler and the Haxe java backend.
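
A sketch of the idea - treat the exact extern names as assumptions, generated for you from android.jar:

import android.content.Intent;
import android.net.Uri;

class UrlOpener {

        //assumes android.jar was added to the class path, which lets
        //Haxe generate externs for the android API automatically
    public static function open_url(activity:android.app.Activity, url:String) {
        var intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
        activity.startActivity(intent);
    }

} //UrlOpener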


Haxe is a low level toolkit

Most people encounter Haxe through a framework using the Haxe toolkit to deploy to multiple targets - this is because Haxe is designed to be a lower level set of tools to efficiently and adeptly handle cross platform development.

This also means that there are things you will need a framework to achieve. The question "how do I draw graphics with Haxe" is a bit of a misnomer - the toolkit (and the language) cannot answer this for you.

Let's look at js:

  • You could draw with Canvas/HTML5
  • You could draw with WebGL
  • You could draw with Processing.js, Pixi.js, phaser.js etc

Every single target has a plethora of options available for every specific need.

Thus, frameworks, bindings, native externs and cross target implementations are written by third party developers, to give you access to these things in your Haxe code.

Continuing with the js target example, how about JQuery? What about node.js? Pixi.js? Phaser? What about some audio for the js web target, like Howler.js.

All of these offer strongly typed, ready to use bindings for just one of the many targets.

Certain targets like C++ and C# are a bit more complicated due to the nature of different compilers and runtimes, but there are still many frameworks and bindings available to you for any number of things.

Notes about the backends

Often a target has its own implementation of all of the backend details, and standard lib implementation specifics. These are an automatic dependency when targeting that platform, as they make all the generated code work, but the process is transparent to you provided you have them installed through haxelib. For example, haxelib install hxjava would install the java backend; hxcs and hxcpp are the C# and C++ backends respectively.

Notes about neko

Neko is a VM (Virtual Machine) that runs code written in the neko programming language. The neko language is not intended to be programmed manually, but rather is a generated language that allows running the code (often bytecode) across multiple platforms through the virtual machine. You can think along the lines of lua or other VMs that run bytecode interpreted at runtime.

So why do you see neko mentioned in Haxe discussions?

  • Neko is a Haxe target
    Haxe code can be compiled directly to neko bytecode, which can then be loaded into a neko VM. A good example: mod_neko, which runs on the Apache web server, can run Haxe code on the server side - much like you would install mod_php and run php files, mod_neko runs neko files. Lots of frameworks and developers use neko on the server, and code their backend in Haxe. The haxelib server and site are all written in Haxe.
  • Neko bytecode is cross platform
    This makes it a great candidate for using Haxe to write cross platform command line utilities. The reason you see neko when Haxe is running, is because the Haxe toolkit includes neko (and its tools) and uses it for many things. Again, haxelib, the Haxe package manager, allows packages to include a run.n file, a neko file compiled from Haxe, that will run when a user calls haxelib run yourpackage. This is powerful because you only need one file, and it will run on every target that you support in your code, with full access to the standard library and more.

Neko is useful for these types of tools and includes features like nekotools boot bytecode.n, which will generate a binary for a platform (like a windows exe or mac binary), should you want to distribute the tool standalone. Neko also compiles really quickly, because Haxe is fast at compiling and the generated bytecode has no further compile step.

Finally, because Haxe includes neko in the installer and is dependent on it, it's a reliable tool that many frameworks and developers lean on to do their bidding. If you're going to be using Haxe and writing tools, it's a great option for scripting user facing utilities that will have no dependencies and be cross platform.

notes about platform vs target

  • A platform is a host environment that runs code.
    • web/mac/windows/linux/ios/android
  • A target is a language target that haxe compiles to
    • c++/c#/java/js/python/etc

The important distinction is that a single language target can and will run on multiple platforms. For example, the c++ backend will run on iOS, Android, Mac, Windows, Linux, Blackberry, Tizen and quite a few others. It also supports custom toolchains for things like consoles and native cross compilers, and makes it easy to compile the c++, using its build tools, to any C++ supported platform.

JS output can run on node.js or other interpreter based platforms. Python can run on any platform with a python interpreter. PHP as well. The targets and platforms are not the same thing.

On the shoulders of a giant

Because Haxe is this flexible, because it's a low level toolkit, many frameworks are built on top of Haxe in order to achieve some goal. A few examples of these, some of which you have probably heard of:

  • NME (native media engine)
    • long standing media and game framework loosely based on flash API's
    • the original backbone of many frameworks and tools
  • OpenFL (Open Flash Library)
    • forked from NME originally to align closer to Flash API
    • general purpose framework based on Flash 2D API
    • many game frameworks built on top of it, HaxeFlixel, HaxePunk, etc
    • many tools built dependent on it like haxeui
  • UFront
    • large, powerful MVC web backend/frontend framework
    • compiles to php/neko
  • Flambe
    • Flash/WebGL/HTML5/Android/iOS 2D game engine using AIR
  • Nape physics
    • cross target high performance 2D physics engine

Frameworks determine workflow

Now that we are higher up, there are workflows determined by the higher level frameworks you'll probably be using, which means you might not have to call the Haxe command line directly much at all (if ever).

IDEs like FlashDevelop integrate Haxe as a relatively first class part of the editor, and the features and decisions around the tooling are often informed by the IDE that implements them. For tools and frameworks, you'll find that a variety of options exist under different circumstances, meeting different needs in the ecosystem.

know your tools
Having frameworks built on frameworks using tools that are built into IDE's made by various third parties can lead to layers of indirection. The best approach is to pick a framework or tool or work directly with Haxe itself and try to understand the stack you are using to achieve your goals. There is no way around it, and this applies to any language/platform/programming tool. Try and know more about how things fit together to serve your needs.

platform and target considerations
Framework/tool specific choices can also lead to isolating a specific subset of targets, so of course you shouldn't assume that because Haxe can target so many languages, every framework automatically works on all of those options. A lot of times that doesn't even make sense (there is no concept of a console application on web or mobile etc).

These choices are expected and good - they are often goal oriented, constrained by manpower, or grounded in practical reasoning.

A great example is Flambe, which primarily targets the swf/js Haxe targets. It then leverages AIR for mobile/desktop and HTML5/WebGL for browsers. This maximizes the focus and provides something that does what it sets out to do well, while leveraging the power of the Haxe toolkit to achieve it.

Flambe uses npm (node package manager) to install, and has its own command line utilities to build and run the games, including automatic asset updates while the game is running in the browser and lots more. This means tight integration with the framework, and the best possible options being available to the end user - things that the Haxe toolkit couldn't (and shouldn't) provide.

If you wanted to jump to using another framework, it will probably (unless it is built on Flambe) have a completely different workflow. This is great, and normal/expected, and allows a rich set of tools across a variety of preferences, goals, target languages and platforms to cover a wide range of use cases.

Conclusion

As you'll notice, the number of use cases spirals rapidly when you consider that the output code can be used in a multitude of ways. This is a valuable and powerful thing about Haxe, and what makes it quite hard to describe.

Hopefully this helps someone starting out with Haxe or explains why I choose it for everything I write, why I created http://snowkit.org and use Haxe for my games and engines.

You can view all my ramblings about Haxe here.

haxe: compile time macros

Haxe is a really great language for me. It does cross platform in a sensible way - by compiling and generating code to a target language. The best part is that it's not just converting, it's properly compiling the code - so all errors are caught by the compiler itself long before the generated code even gets there.

One of its most powerful features is the macro system which allows you to run haxe code at compile time, to augment and empower your existing haxe code. It sounds crazy - so let's dig in.

haxe.org


In this post, the example output code will be shown using javascript for simplicity - just one of the many language targets that Haxe supports: c++, c#, php, java, python, js.

Haxe in Haxe at compile time

Haxe manages to get macros right: it uses its own language at compile time to alter the compilation state. This means you can inject expressions and code, remove code, throw errors, and generally make code do things not usually possible that are specific to your code base or target. Even better, during this compilation phase you have the full power of the language behind you to do so.

In luxe for example - there is a concept available for using a Component/Entity system. Sometimes, a user would accidentally try to use the entity parent property in the constructor of the Component, long before it was assigned. This wasn't their fault - it's just the nature of the way the system works, and that was something that would have to be learnt. But not with macros around!

One of the first macros I decided to write was based on this problem - I made an @:autoBuild macro happen on every descendant of Component - which, at compile time, has a look at the expressions within the constructor of the given component. If it finds you touching the field named entity - it throws a neat and clearly marked error message to warn you. This saves oodles of time on things being null and obscure crashes, and gives a massive boost to usability when you can design for that explicitly.

The exact code is actually not complete right now - but the ability to do this type of thing is far more helpful than it first seems.

complex code rejection

Because you can alter the expressions in the macros at compile time, you can reject code from ever existing in the output. This is possible through #if loglevel > 1 using Haxe already - but what if the condition was far more complex? What if the condition was based on where the code is being built - like on a continuous integration server? What about environment variables? Or git commit revisions? Basically - any condition you can program, a macro can handle. Since a macro is just haxe code, it has the full capability of the Haxe language and compiler to do its bidding at compile time.

log code rejection

One simple example is logging code, using log levels to define what level of logging is present in a build. I like really dense, detailed logs, because I can write a parser for them and visualize them in ways that aid debugging complex systems quickly. This can take a large toll on a code base if the log code ends up in the output, because every logged string has to be stored and allocated, adding to the final build size and sometimes the runtime cost.

The macro rejecting the expression means the final code does not include the logging at all. Haxe already has a concept like this built in as a build flag --no-traces, which removes trace() calls - the built in debugging print command - but the concept applies not only to logging but more expensive and intricate systems like profiling and instrumentation.

profiling and instrumentation

Haxe macros let me add instrumentation code to my heart's content without it ever affecting runtime release builds - something I have been wanting an elegant solution for for quite some time. The next section is an even better option - what about deep profiling all functions automatically? Or each block, or each expression of each block?

complex code injection

Since you can emit code expressions from a macro, you can inject code as well. You can construct entire classes and types dynamically - at compile time.

Let's take the profiling example one step further and devise a conceptual macro for automatically profiling every block expression within a given class. Notice below I have tagged my class for a "build" macro - I want this class to be handled by my Profiling macro apply function at compile time. Since I only care about the update function right now in this example - let's tag that code for profiling only using custom metadata @:profiling. Note that @:build is from haxe, the custom one is ours.

Also take note that I separated logic into blocks { } of expressions - because I can use this to my advantage in the macros at compile time.

@:build(luxe.macros.Profiling.apply())
class Player {

    @:profiling
    function update(dt:Float) {
        {
            //update_ai
            ...
        }
        {
            //update_stats
            ...
        }
    }
}

automatic injection

Now I have everything I need - my macro will run at compile time on the class I am interested in measuring, my macro will check all methods in the class for @:profiling - if it finds it, it will look for each root block { } expression and automatically insert a start and end measurement at runtime so the final code would in pseudo code look like

profiler.start('update_ai');
{
    //update ai
    ...
}
profiler.end('update_ai');

For now - I won't be posting the code (this system is not even finished being coded heh) but the important thing is to understand the potential from macros and their ability to empower the code base for the development process to be quicker, more friendly, more streamlined in the output and more expressive.

continuing

This of course has downsides - code is being executed at compile time. While the haxe compiler is incredibly fast, you can slow it to a crawl with a single compile time macro. If your macro introduces network latency by pinging a server or something - you will be waiting for that too.

The other thing to consider is that macros are quite complex and are the most advanced feature in haxe - so it often appears unapproachably difficult. Often this is not the case, and patience and examples will get you using them in no time. Haxe 3 made massive strides in simplifying their usage - they still have some things that are fairly difficult to wrap your head around that WILL take time to get used to.

This is not something you can fast track - the easiest way I have found is to learn by doing. That's why I am making this post, to hopefully inspire you to think of really simple, really easy macros that help you get your feet wet.

Simple concrete example

Most times a unique build id is useful in determining which version or specific build is being executed on a test machine or users machine for debugging purposes. To that end, our simple example will generate a unique-ish static string value for a build id. Since this code happens at compile time, a far more complex algorithm can be used to ensure uniqueness if required, but for the most part this code will do fine.

// MIT License
//https://github.com/underscorediscovery/haxe-macro-examples | notes.underscorediscovery.com

import haxe.macro.Expr;  
import haxe.macro.Context;

import haxe.crypto.Md5;  
import haxe.Timer.stamp;  
import Math.random;

class BuildID {

        /** Generate a unique enough string */
    public static function unique_id() : String {
        return Md5.encode(Std.string( stamp()*random() ));
    }

        /** Generates a unique string id at compile time only */
    macro public static function get() {
        return macro $v{ unique_id() };
    }

} //BuildID

Take note of the functions here - one is for generating a string ID at runtime - a regular public static function. You can use this any time from your program. Then, there is a macro function; these are compile time functions, and can use the macro context to emit expressions. I won't dig too much into the specifics of the expressions themselves - but $v{ } generates an expression from a value if it's a primitive type. Our case is a string, but this is covered in the Haxe manual if you want more insight.

Let's look at what the using code would look like, and the resulting output target javascript code. This class is stand alone and can be used with the Haxe compiler to have a look at the results yourself, using the older documentation here as the new manual is still working on these introductions.

Basically to use this example at home, run haxe build.hxml from the compile_time_buildid/ folder of the repo.

// MIT License
//https://github.com/underscorediscovery/haxe-macro-examples | notes.underscorediscovery.com

class TestID {

    public static var build_id : String = BuildID.get();

    public function new() {
        trace( build_id );
        trace( 'running build ${build_id}' );
    }

        //called automatically as the entry point
    static function main() {
        new TestID();
    }

}

resulting output

There - now we have a unique value for the ID. The ... represents some haxe specifics that aren't useful in this example but notice how the build id is hardcoded into the output source file. This value will change with every build you run.

(function () { "use strict";
var TestID = function() {  
    console.log(TestID.build_id);
    console.log("running build " + TestID.build_id);
};
TestID.main = function() {  
    new TestID();
};
...
TestID.build_id = "cf30a1a97db5628b91535dfd3a972ea6";  
TestID.main();  
})();

even more hardcoded

Notice console.log(TestID.build_id); and the line below it? This is printing the value of a variable called build_id. It's a fixed value because it's hardcoded into the file - but then why do we even need the variable access, when we could replace every mention of TestID.build_id with the exact id string? Haxe allows this too, using inline static access.

Let's change :
public static var build_id : String = BuildID.get();
to
public inline static var build_id : String = BuildID.get();

For strings this is not that great, since it will generate a lot more strings, but for numbers, constants and the like it can really cut out a lot of code, and even optimize the output significantly by doing away with superfluous values at compile time.

Now that we have changed it to inline, this is the output - every mention of the variable build_id is gone and is now hardcoded into the file directly.

(function () { "use strict";
var TestID = function() {  
    console.log("10e594bd858844cb16a1577c61309b49");
    console.log("running build " + "10e594bd858844cb16a1577c61309b49");
};
TestID.main = function() {  
    new TestID();
};
...
TestID.main();  
})();

code example

The complete code for the above can be found here for convenience :

Github Repository

Links and tutorials

The official Haxe manual
Getting better all the time, this is the definitive guide though quite meaty and requires a good couple of passes before things make sense. Learn from simple examples and practicing, use the manual as a reference.

Andi Li: Everything in Haxe is an expression
This guide is really helpful for understanding that everything is an expression in haxe - which makes macros make a lot more sense.

Mark Knol: custom autocompletion with macros
An example of using macros to populate code for the compiler so it can code complete things that it wouldn't be able to otherwise.

Mark Weber: lots of simple macro snippet examples
A simple and useful reference for getting some ideas and introduction to the concepts behind the macro code, the different macro contexts and their uses.

Dan Korostelev writes about macros for tolerant JSON code
A look at "Using haxe macros as syntax-tolerant, position-aware json parser" with example code.

Lots of these blogs and links include many great posts about haxe, and there are many more online if you search.


Good luck - and I hope to post more about haxe macros specifically in the near future as well.

Shaders : second stage

The second part in a series on understanding shaders, covering how data gets sent between shaders and your app, how shaders are created and more.


Other parts:
- here is part one
- you are viewing part two


I wrote a post about shaders recently - it was a primer, a "What the heck are shaders?" type of introduction. You should read it if you haven't, as this post is a continuation of the series. This article is a little deeper down the rabbit hole, a bit more technical but also a high level overview of how shaders are generally made, fit together, and communicated with.

As before, this post will reference WebGL/OpenGL specific shaders but this article is by no means specific to OpenGL - the concepts apply to many rendering APIs.

sidenote:
I was overwhelmed by the positive response and continued sharing of the article, and I want you to know that I appreciate it.

brief “second stage” overview

This article will cover the following topics:

  • The road from text source code to active GPU program
  • Communication between each stage, and to and from your application

How shaders are created

Most rendering APIs share a common pattern when it comes to programming the GPU. The pattern consists of the following :

  • Compile a vertex shader from source code
  • Compile a fragment shader from source code*
  • Link them together, this is your shader program
  • Use this program ID to enable the program

*Intentionally keeping it simple, there are other stages etc. This series is for those learning and that is ok.

For a simple example, have a look at how WebGL would do it. I am not going to get TOO specific about it, just show the process in a real world use case.

There are some implied variables here, like vertex_stage_source, and fragment_stage_source are assumed to contain the shader code itself.

1 - Create the stages first

var vertex_stage = gl.createShader(gl.VERTEX_SHADER);  
var fragment_stage = gl.createShader(gl.FRAGMENT_SHADER);  

2 - Give the source code to each stage

gl.shaderSource(vertex_stage, vertex_stage_source);  
gl.shaderSource(fragment_stage, fragment_stage_source);  

3 - Compile the shader code, this checks for syntax errors and such.

gl.compileShader(vertex_stage);  
gl.compileShader(fragment_stage);  

Now we have the stages compiled, we link them together to create a single program that can be used to render with.

   //this is your actual program you use to render
var the_shader_program = gl.createProgram();

   //It's empty though, so we attach the stages we just compiled
gl.attachShader(the_shader_program, vertex_stage);  
gl.attachShader(the_shader_program, fragment_stage);

   //Then, link the program. This will also check for errors!
gl.linkProgram(the_shader_program);

Finally, when you are ready to use the program you created, you simply use it :

gl.useProgram(the_shader_program);  

Simple complexity

This seems like a lot of code for something so fundamental, and it can be a lot of boilerplate but remember that programming is built around the concept of repeating tasks. Make a function to generate your shader objects and your boilerplate goes away, you only need to do it once. As long as you understand how it fits together, you are in full control of how much boilerplate you have to write.
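
For example, a small helper that wraps the whole create/compile/link flow, with error checks added:

function create_program(gl, vertex_source, fragment_source) {

    function compile(type, source) {
        var stage = gl.createShader(type);
        gl.shaderSource(stage, source);
        gl.compileShader(stage);
            //surface syntax errors instead of failing silently
        if(!gl.getShaderParameter(stage, gl.COMPILE_STATUS)) {
            throw gl.getShaderInfoLog(stage);
        }
        return stage;
    }

    var program = gl.createProgram();

    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertex_source));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragment_source));
    gl.linkProgram(program);

    if(!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        throw gl.getProgramInfoLog(program);
    }

    return program;
}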

Pipeline communications

As discussed in part one - the pipeline for a GPU program consists of a number of stages that are executed in order, feeding information from one stage to the next and returning information along the way.

The next most frequent question I come across when dealing with shaders, is how information travels between your application and between the different stages.

The way it works is a little confusing at first - it's very much a black box. This confusion is also amplified by "built in" values that magically exist. It's even more confusing because every second article uses deprecated values that should never be used. So when someone shows you "the most basic shader", it's basically 100% unknowns at first.

Aside from these things though, like the rest of the shading pipeline, a lot of it is very simple in concept, and likely something you will grasp pretty quickly.

Let's start with the built in values, because these are the easiest.

Built in functions

All shader languages have built in language features to complement programming on the graphics hardware. For example, GLSL has a function called mix - a linear interpolation function (often called lerp) that is very useful when programming on the GPU. The point is that you should look these up. Depending on your platform/shader language, there are many functions that may be new concepts to you, as they don't really occur by default in other disciplines.

Another important note about the built in functions - these functions often are handled by the graphics hardware intrinsically, meaning that they are optimized and streamlined for use. Barring any wild driver bugs or hardware issues, these are often faster than rolling your own code for the functions they offer - so you should familiarize yourself with them before hand writing small maths functions and the like.
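
For example, blending two colors with the built in mix in GLSL is a one liner - the 0.25 here is just an arbitrary blend amount:

   //inside main(), for example :
vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
vec4 blue = vec4(0.0, 0.0, 1.0, 1.0);

   //25% of the way from red, toward blue
vec4 blended = mix(red, blue, 0.25);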

Built in variables

Built in variables are different from the functions: they store values from the state of the program/rendering pipeline, rather than operating on values. A simple example: when you are creating a pixel shader, gl_FragCoord exists, and contains the window-relative coordinates of the current fragment. As with the function list, they are often documented and there are many to learn, so don't worry if there seem to be a lot. In practice you learn about them, and use them, only when you need to. Every shader programmer I know remembers a subset by heart and has a reference on hand at all times.
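
As a tiny sketch of gl_FragCoord in use - tinting each fragment by where it sits on screen. The resolution uniform is a made up name you would pass in yourself:

precision mediump float;

uniform vec2 resolution;

void main() {
       //gl_FragCoord is in window coordinates, so dividing
       //by the resolution gives a 0..1 gradient across the screen
    gl_FragColor = vec4(gl_FragCoord.xy / resolution, 0.0, 1.0);
}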

These values are implicit connections between the pipeline and code you write.

staying on track

To avoid the “traps” of deprecated functions, as with any API for any programming language, you just have to read the documentation. It's the same principle as targeting bleeding edge features - you check the status in the API level you want to support, you make sure your requirements are met, and you avoid things that are clearly marked as deprecated for that API level and above. It's irrelevant that they were changed and swapped before that - focus only on what you need, and forget its history.

Edit: Since posting these, an amazing resource has come up for figuring out the availability and usage of OpenGL features: http://docs.gl/

Most APIs provide really comprehensive "quick reference" sheets, jam packed with every little detail you would need to know, including version, deprecation, and signatures. Below are some examples from the OpenGL 4.4 quick reference card.

OpenGL 4.4 built in variables

OpenGL 4.4 built in functions


Information between stages

Also mentioned in part one, the stages can and do send information to the next stage.

APIs get revised over the years, and newer versions improved drastically on the initial, confusing names. This means that, across major versions of an API, you will come across multiple approaches to the same thing.

Remember : The important thing here is the concepts and principles. The naming/descriptions may be API specific, but that's simply to ground the concept in an existing API. These concepts apply in other APIs as well, and differ only in use, not in concept.

stage outputs

vertex stage

In OpenGL, the concept of the vertex shader sending information to the fragment shader was named varying. To use it, you would:

  • create a named varying variable inside of the vertex shader
  • create the same named varying variable inside of the fragment shader

This allowed OpenGL to know that you meant "make this value available in the next stage, please". In other shader languages the same concept applies, where explicit connections are created, by you, to signify outputs from the vertex shader.

An implicit connection exists for gl_Position which you return to the pipeline for the vertex position.

In newer OpenGL versions, these were renamed to out:

out vec2 texcoord;  
out vec4 vertcolor;  

fragment stage

We already saw that the fragment shader uses gl_FragColor as an output to return the color. This is an implicit connection. In newer GL versions, out is used in place of gl_FragColor:

out vec4 final_color;  

It can also be noted that there are other built in variables (like gl_FragColor) that are outputs. These feed back into the pipeline. One example is the depth value, which can be written to from the fragment shader.
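
In desktop GLSL that looks like the line below - note this is a sketch, and in WebGL 1 it needs the EXT_frag_depth extension (where it is named gl_FragDepthEXT):

   //override the depth of this fragment, pushing it further away
gl_FragDepth = 0.9;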

stage inputs

Also in OpenGL, you would "reference" the variable value from the previous stage using varying or, in newer APIs, in. This is an explicit connection, as you are architecting the shader.

in vec2 texcoord;  
in vec4 vertcolor;  

The second type of explicit input connection is between your application code and the rendering pipeline. These are set through API functions in the application code, which submit data for use in the shaders.

In the OpenGL API, these were named uniform, attribute and sampler, among others. attribute is vertex specific, sampler is fragment specific. In newer OpenGL versions these can take on the form of more expressive structures, but for the purpose of concept, we will only look at the principle :

vertex stage

Attributes are handed into the shader from your code, into the first stage :

attribute vec4 vertex_position;  
attribute vec2 vertex_tcoord;  
attribute vec4 vertex_color;  

This stage can forward that information to the fragments, modified, or as is.

The vertex stage can take uniforms as well - the difference is that an attribute holds per-vertex data, while a uniform holds a single value that stays the same across the whole draw.

fragment stage

uniform vec4 tint_color;  
uniform float radius;  
uniform vec2 screen_position;  

Notice that these variables are whatever I want them to be. I am making explicit connections from my code, like a game, into the shader. The above example could be for a simple lantern effect, lighting up a radius area, with a specific color, at a specific point on screen.

That is application domain information, submitted to the shader, by me.

Another explicit type of connection is a sampler. Images on the graphics card are sampled, and can be read inside of the fragment shader. Take note that the value passed in is not the texture ID, and not a texture pointer - it is the active texture slot. Texturing is usually a state, like use this shader, then use this texture, and then draw. The texture slot allows multiple textures to co-exist, and be used by the shaders :

  • set active slot 0
  • bind texture A
  • set active slot 1
  • bind texture B

The slot index is the value the shader wants - the shader will always read from whichever texture is bound to the slot its sampler points at. In code, that looks something like the sketch below.
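
A minimal WebGL sketch of that flow - texture_a and texture_b are assumed to already exist, and tex0 is the sampler2D uniform name used in the textured shader further down:

   //select slot 0, and bind texture A into it
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture_a);

   //select slot 1, and bind texture B into it
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture_b);

   //point the sampler uniform at slot 0 - note that we
   //hand it the slot index, not the texture itself
var tex0_loc = gl.getUniformLocation(the_shader_program, "tex0");
gl.uniform1i(tex0_loc, 0);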

fundamental shaders

The most basic shaders you will come across simply take information, and use it to present the information as it is. Below, we can look at how "default shaders" would fit together, based on the knowledge we now have.

This will be using WebGL shaders again, for reference only. These concepts are described above, so they should hopefully make sense now.

As you recall - geometry is a set of vertices. Vertices hold (in this example) :

  • a color
  • a position
  • a texture coordinate

This is vertex data - geometry - so these values will go into vertex attributes and be sent to the vertex stage.

The texture itself is color information. It will be applied in the fragment shader, so we pass the active texture slot we want to use as a shader uniform.

The other information in the shader below is for the camera transforms. These are sent as uniforms because they are not vertex specific data - they are just data that I want to use to apply a camera.

You can ignore the projection code for now, as this is simply about moving data around from your app, into the shader, between shaders, and back again.

Basic Vertex shader

//vertex specific attributes, for THIS vertex

attribute vec3 vertexPosition;  
attribute vec2 vertexTCoord;  
attribute vec4 vertexColor;

//generic data = uniforms, the same between each vertex!
//this is why the term uniform is used, it's "fixed" between
//each fragment, and each vertex that it runs across. It's 
//uniform across the whole program.

uniform mat4 projectionMatrix;  
uniform mat4 modelViewMatrix;

//outputs, these are sent to the next stage.
//they vary from vertex to vertex, hence the name.

varying vec2 tcoord;  
varying vec4 color;

void main(void) {

        //work out the position of the vertex, 
        //based on its local position, affected by the camera

    gl_Position = projectionMatrix * 
                  modelViewMatrix * 
                  vec4(vertexPosition, 1.0);

        //make sure the fragment shader is handed the values for this vertex

    tcoord = vertexTCoord;
    color = vertexColor;

} 

Basic fragment shader

If we have no textures, only vertices, like a rectangle that only has a color, this is really simple :

Untextured

//make sure we accept the values we passed from the previous stage

varying vec2 tcoord;  
varying vec4 color;

void main() {

        //return the color of this fragment based on the vertex 
        //information that was handed into the varying value!

        // in other words, this color can vary per vertex/fragment

    gl_FragColor = color;

}

Textured

   //from the vertex shader
varying vec2 tcoord;  
varying vec4 color;

   //sampler == texture slot
   //these are named anything, as explained later

uniform sampler2D tex0;

void main() {

        //use the texture coordinate from the vertex, 
        //passed in from the vertex shader,
        //and read from the texture sampler, 
        //what the color would be at this texel
        //in the texture map

    vec4 texcolor = texture2D(tex0, tcoord);

        //crude colorization using modulation,
        //use the color of the vertex, and the color 
        //of the texture to determine the fragment color

    gl_FragColor = color * texcolor;

}

Binding data to the inputs

Now that we know how inputs are sent and stored, we can look at how they get connected from your code. This pattern is very similar again, across all major APIs.

finding the location of the inputs

There are two ways :

  1. Set the attribute name to a specific location OR
  2. Fetch the attribute/uniform/sampler location by name

This location is a shader program specific value, assigned by the compiler. You have control over the assignments by name, or, by forcing a name to be assigned at a specific location.

Put in simpler terms :

“radius”, I want you to be at location 0.
vs
Compiler, where have you placed “radius”?

If you use the second way, requesting the location, you should cache this value. You can request all the locations once you have linked your program successfully, and reuse them when assigning values to the inputs.
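
A sketch of that caching, right after a successful link - the _loc names are just my own convention, and the uniforms match the lantern example from earlier:

   //fetch the locations once, after gl.linkProgram succeeds
var tint_color_loc = gl.getUniformLocation(the_shader_program, "tint_color");
var radius_loc = gl.getUniformLocation(the_shader_program, "radius");
var screen_position_loc = gl.getUniformLocation(the_shader_program, "screen_position");

   //then reuse the stored locations whenever you assign values,
   //instead of asking the compiler again every frame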

Assigning a value to the inputs

This is often application language specific, but again the principle is universal : The API will offer a means to set a value of an input from code.

vertex attributes

Let's use WebGL as an example again, and let's use a single attribute, for the vertex position, to locate and set the position.

var vertex_pos_loc = gl.getAttribLocation(the_shader_program, "vertexPosition");  

Notice the name? I am asking the compiler where it assigned the named variable I declared in the shader. Now we can use that location to give it some array of vertex position data.

First, because we are going to use attribute arrays, we want to enable them. Read in simple terms, this code says "enable a vertex attribute array for this location", where the location refers to "vertexPosition".

gl.enableVertexAttribArray(vertex_pos_loc);  

To focus on what we are talking about here, some variables are implied :

   //this simply sets the vertex buffer (list of vertices) 
   //as active, so subsequent commands use this buffer

gl.bindBuffer( gl.ARRAY_BUFFER, rectangle_vertices_buffer );

   //and this line points the buffer to the location, or "vertexPosition"

   //3, because vertexPosition is a vec3 - three floats per vertex
gl.vertexAttribPointer(vertex_pos_loc, 3, gl.FLOAT, false, 0, 0);  

There, now we have:

  • taken a list of vertex positions, stored them in a vertex buffer
  • located the vertexPosition variable location in the shader
  • enabled attribute arrays, because we are using arrays
  • we set the buffer as active,
  • and finally pointed our location to this buffer.

What happens now is that the vertexPosition value in the shader is associated with the list of vertices from the application code. Details on vertex buffers are well covered online, so we will continue with shader specifics here.

uniform values

As with attributes, we need to know the location.

var radius_loc = gl.getUniformLocation(the_shader_program, "radius");  

As this is a simple float value, we use gl.uniform1f. This varies by API in syntax, but the concept is the same across the APIs.

gl.uniform1f(radius_loc, 4.0);  

This tells OpenGL that the uniform value for "radius" is 4.0, and we can call this multiple times to update it each render.
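
The other lantern values from earlier are set the same way, with the setter matching the type declared in the shader - this sketch assumes the cached _loc values from before:

   //screen_position is a vec2 in the shader
gl.uniform2f(screen_position_loc, 320.0, 240.0);

   //tint_color is a vec4 - a warm lantern color
gl.uniform4f(tint_color_loc, 1.0, 0.8, 0.5, 1.0);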

Conclusion

As this article was already getting quite long, I will continue further in the next part.

As much of this is still understanding the theory, it can seem like a lot to get around before digging into programming actual shaders, but remember there are many places to have a look at real shaders, and try to understand how they fit together :

Playing around with shaders : recap
Here are some links to some sandbox sites where you can see examples, and create your own shaders with minimal effort directly in your browser.

https://www.shadertoy.com/
http://glsl.heroku.com/
http://www.mrdoob.com/projects/glsl_sandbox/

An important factor here is understanding what your framework is doing to give you access to the shaders, which allows you to interact with the framework in more powerful ways - like drawing a million particles in your browser by passing information through textures, encoding values into color information and vertex attributes.


The delay between post one and two was way too long, as I have been busy, but the next two posts are hot on the heels of this one.

Tentative topics for the next posts :

shaders stage three

  • A brief discussion on architectural implications of shaders, or "How do I fit this into a rendering framework" and "How to do more complex materials".
  • Understanding and integrating new shaders into existing shader pipelines
  • Shader generation tools and their output

shaders stage four

  • Deconstructing a shader with a live example
  • Constructing a basic shader on your own
  • A look at a few frameworks shader approach
  • series conclusion

Follow ups

If you would like to suggest a specific topic to cover, or know when the next installment is ready, you can subscribe to this blog (top right of post), or follow me on twitter, as I will tweet about the articles there as I write them. You can find the rest of my contact info on my home page.

I welcome questions, feedback and topic suggestions.
As before, I hope this helps someone in their journey, and look forward to seeing what you create.

]]>
<![CDATA[Primer : Shaders]]>A common theme I run into when talking to some developers is that they wish they could wrap their head around shaders. Shaders always seem to solve a lot of problems, and are often referenced as the solution to the task at hand.

But just as often they are

]]>
https://notes.underscorediscovery.com/shaders-a-primer/a23707ac-c0b8-42fa-9ae6-25899e96f94eThu, 03 Apr 2014 22:13:35 GMTA common theme I run into when talking to some developers is that they wish they could wrap their head around shaders. Shaders always seem to solve a lot of problems, and are often referenced as the solution to the task at hand.

But just as often they are seen as a sort of enigma or black box - one that is so shrouded in complexity that it makes learning them from ”basic” examples near impossible.

Hopefully, this primer will help those that aren't well versed and help transition into using shaders, where applicable.


Other parts:
- you are viewing part one
- here is part two

What are shaders?

When you draw something on screen, it is generally submitted as some “geometry”. Like, a polygon or a group of triangles. Even drawing a sprite is drawing some geometry with an image applied.

Geometry is a set of points (vertices) describing the layout which is sent to the graphics card for drawing. A sprite, like a player or a platform is usually a “quad”, and is often sent as two triangles arranged in a rectangle shape.

When you send geometry to the graphics card to be drawn, you can tell the graphics card to use custom shaders that will be applied to the geometry, before it shows up on the render.

There are two kinds of shaders to understand for now - vertex and fragment shaders. You can think of a shader as a small function that is run over each vertex, and every fragment (a fragment is like a pixel) when rendering. If you look at the code for a shader, it would resemble a regular function :

void main() {  
   //this code runs on each fragment, or vertex.
}

It should be noted as well that the examples below reference OpenGL Shader Language, referred to as GLSL, but the concepts apply to the programmable pipeline in general and are not for any specific rendering API. This information applies to almost any platform or API.


The vertex shader


As mentioned, there are vertices sent to the hardware to draw a sprite. Two triangles - and each triangle has 3 vertices, making a total of 6 vertices sent to be drawn.

When these 6 vertices reach the rendering pipeline in the hardware, there is a small program (a shader) that can run on each and every vertex. Remember the graphics hardware is built for this, so it does many of these at once in parallel so it is really fast.

That program really only cares about one thing : the position that the vertex will end up at (there is a footnote in the conclusion). This means that we can manipulate (or calculate) the correct position that the vertex should be at. Very often this includes camera calculations, and determines how and where the vertex ends up before being drawn.

Let's visualise this below, by shifting the sprite 10 units to the left :

vertex shader

If you wanted to, you could apply sin waves, or random noise or any number of calculations on a per vertex level to manipulate the geometry.
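
As a sketch in GLSL, ignoring cameras entirely - the time uniform is a made up name you would feed in from your application each frame:

attribute vec3 position;
uniform float time;

void main() {
    vec3 p = position;
       //ripple the vertex up and down, based on
       //where it is horizontally, and the current time
    p.y += sin(p.x * 0.1 + time) * 4.0;
    gl_Position = vec4(p, 1.0);
}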

Practical example
This can be used to generate waves that work to move vertices according to patterns that look like lava or water. All of the following examples were provided by Tyler Glaiel from Bombernauts

The lava (purple) area geometry, bunches of vertices!

Lava Area

How it looks when a vertex shader moves it around (notice how the vertices are pushed up and down and around like water, this is the vertex shader at work)

You can have a look at how it looks when it ripples on the blog post here, at the Bombernauts development blog.


The fragment shader


After the vertices are done moving about, they are sent to the next stage of the pipeline to be "rasterized" - that is, converted into fragments that end up as pixels on screen.

During this rasterizing stage, the geometry is broken into fragments, and each fragment is given to the fragment shader. These are also sometimes referred to as pixel shaders, because some people associate the fragments with pixels on screen, but there is a difference.

Here is a gif from an excellent presentation on Acko.net which usefully demonstrates how sampling works, which is part of the rasterization process. It should help understand how the vector geometry becomes pixels in the end.

rasterization

Now, the fragment shader, much like the vertex shader, is run on every single fragment. Again, it is good at doing this really quickly, but it is important to understand that a single line of code in a shader can cause drastic performance cost due to the sheer number of times the code will be run! (See the note at the end of this section for some interesting numbers).

The fragment shader mainly cares about what the resulting color of the fragment becomes. It also receives values interpolated (blended) from each vertex, based on its location between them. Let's visualize this below :

fragment shader

When I say interpolated, here is what I mean : Given a rectangle with 4 corners (arranged as 2 triangles) and the corner vertices colors set to red, green, blue and white - the result is a rectangle that is blended between the colors automatically.

Interpolated colors sourced from open.gl

Practical example
A fragment shader can be used to blur some or all of the screen before drawing it, like in this example, some blur was applied to the map screen below the UI to obtain a tilt shift effect. This is from a game I was working on for a while, and the tilt shift shader came from Martin Jonasson.
For the curious, here is the source for the tilt shift shader along with some notes about separating the x and y passes for a blur, since that has come up a bunch.

tilt shift

An important note on numbers

A game rendered at 1080p, a resolution of 1920x1080 pixels, would be 1920 * 1080 = 2,073,600 pixels.

That is per frame - usually games run at 30 or 60 frames per second. That means (1920 x 1080) x 60 for one second of time, a total of 124,416,000 pixels each second - and that is for a single frame buffer; games usually have multiple buffers as well, for special effects and all kinds of rendering needs.

This is important because you can do a lot with fragment shaders, especially since the hardware is exceptionally good at running them - but when you are chasing performance problems, it often comes down to how quickly the hardware can process the fragments, and shaders can easily become a bottleneck if you aren't paying attention.

Playing around with shaders

Playing with shaders can be fun, here are some links to some sandbox sites where you can see examples, and create your own shaders with minimal effort directly in your browser.

https://www.shadertoy.com/
http://glsl.heroku.com/
http://www.mrdoob.com/projects/glsl_sandbox/

Conclusion


Recap : Shaders are built into a program that consists of stages, and when that program is enabled, it applies to geometry as it is submitted to be drawn.

Vertex shaders : first, applied to every vertex when enabled, each render, and mainly care about the end position of the vertex.

Fragment shaders : second, applied to every fragment when enabled, each render, and mainly care about the resulting color of the fragment.

Because shaders are so versatile, there are many, many things that you can do with them. From complex 3D lighting algorithms down to simple image distortion or coloring, you can do a huge range of things with the rendering pipeline.

Hopefully this post has helped you better understand shaders, and lets you explore the possibilities without being completely confused by what they are and how they work going into it.

footnote
It should be said there is more that you can do with vertex shaders, like vertex colors and uv coordinates, and there is a lot more you can do with fragment shaders as well but to keep this post a primer, that is for a future post.


Notes on the term “Shaders”
The term “Shader” is often called out as a bit of a misnomer (but only sort of), so be aware of the differences. This post is really about the ”programmable pipeline”, as mentioned in bold really early on. The pipeline has stages, and you can supply code for certain of them. A GPU program is made up of code from each programmable stage (vertex, fragment, etc), compiled into a single unit and then run over the entire pipeline while geometry is submitted for drawing, if that program is enabled.

Each stage does a little communicating between the stages (like the vertex stage hands the vertex color to the fragment stage), and the vertex and fragment stages are the most important to understand first.

I personally feel the term shader comes from the fact that 99.9% of the time you spend working with the programmable pipeline will be spent on shading things, while the vertex and other stages are often a fraction of the day to day use of your average application or game.

]]>
<![CDATA[Understanding Realtime Multiplayer]]>An article I wrote about understanding realtime multiplayer. Includes theory, plenty of links and diagrams, and a working demo on github written in HTML5, with client and server for 1 vs 1 realtime multiplayer.


http://buildnewgames.com/real-time-multiplayer/

demo

Enjoy!

]]>
https://notes.underscorediscovery.com/understanding-realtime-multiplayer/2b7fc2cd-b7b3-47bd-8f52-4a03f0bb1a37Wed, 08 Jan 2014 01:45:05 GMTAn article I wrote about understanding realtime multiplayer. Includes theory, plenty of links and diagrams, and a working demo on github written in HTML5, with client and server for 1 vs 1 realtime multiplayer.


http://buildnewgames.com/real-time-multiplayer/

demo

Enjoy!

]]>
<![CDATA[L-systems and procedural generation]]>L-systems are a generation system that uses a simple descriptor to define fractal patterns that can be useful for many things, like trees, streets and more. This post goes over the way they look and how they work.


Axioms

If you have never seen or heard of L-Systems, it is

]]>
https://notes.underscorediscovery.com/l-systems-and-procedural-generation/21890d88-a0c3-42ec-89f5-efd5d516ee48Wed, 08 Jan 2014 01:35:50 GMTL-systems are a generation system that uses a simple descriptor to define fractal patterns that can be useful for many things, like trees, streets and more. This post goes over the way they look and how they work.


Axioms

If you have never seen or heard of L-Systems: it is a generator that takes a number of axioms and expands them recursively, in a fractal approach. It can almost be described as a simple language that describes what happens to some lines as they spread out. Let's see how this works, here is an example axiom set :

Instructions (axiom)
+A-B

A set of items
A = FFFF[--AE]F[+++AE]FFF
B = FFFF

This looks a bit like variables in programming, A and B contain values, and they are replaced in the generation step by their contents.

We start at a single point and "step" outward, following the instructions in the axiom above.

Here is the output, which I will explain below.

+FFFF[--FFFF[--AE]F[+++AE]FFF[---F][--FF][-FFF][+++F][++FF][+FFF]FFF]F[+++FFFF[--AE]F[+++AE]FFF[---F][--FF][-FFF][+++F][++FF][+FFF]FFF]FFF-FFFF

We have control over the angle at which the step lines are drawn, and the number of times to run the expansion step. Remember this is fractal, so things repeat on themselves. We have a small set of rules which determine what happens at each step. For example, a - step means that the angle at which we draw the line changes. It's like a small language saying "add to the direction, then move and draw, then move and draw, then create a leaf, then move and draw".

The point of an l-system generator is to produce a final 'instruction list', which we can loop over and use for drawing or other structures. A sketch of that expansion step follows below.
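
This is a minimal sketch of the generation step, assuming only the A and B rules - note that the output shown above also expands an E rule that isn't listed here, so the result won't match it exactly:

var rules = {
    'A' : 'FFFF[--AE]F[+++AE]FFF',
    'B' : 'FFFF'
};

function expand(axiom, iterations) {

    var result = axiom;

    for(var i = 0; i < iterations; ++i) {
        var next = '';
           //walk the current string, swapping any symbol
           //that has a rule with the contents of that rule
        for(var j = 0; j < result.length; ++j) {
            var symbol = result.charAt(j);
            next += (rules[symbol] || symbol);
        }
        result = next;
    }

    return result;
}

   //two iterations over the axiom from above
var instructions = expand('+A-B', 2);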

Instructions?

The little 'language' the above is using is relatively simple. For example, a - means the direction the line is moving in should be rotated by -angle, and + means it should be rotated by +angle (the angle being the one we specify in the system).

A lowercase letter (between a and z) will move the line point but will not draw, leaving gaps in the system.

An uppercase letter will move the line point, and draw the line.

A ' will change the color you are drawing with.

A [ will create a “root node” that can have children, and a ] terminates the current root node. In other words, the [] makes a tree branch that can have its own children.

l-systems 1

What does it look like?

Remember now that this is a fractal system, and it can iterate recursively (making smaller leaves on branches, and even smaller ones under those), all using the same system. This example gives the system the following parameters : 

axiom : +A-B
angle : 10
iterations : 2
linelength : 16 (pixels) 

This generates the following : 

l-system 2

Next, I simply changed the angle to 30 : 

And then 60 : 

What did I do with this?

Long ago, I was working on a small 2D game with a city component. I wanted the city streets to be generated dynamically so that the city will be somewhat interesting and unique. Take a look at the outputs when you set the angle to 90, and what do you see? Looks like streets to me!

Conclusion

The nice thing about the system is that you have complete control over the patterns (using the simple axioms), and can even generate those procedurally.
This in turn seeds the city streets, and in fact, the rest of the city.

Below are some examples with the parameters changed, all using the same axiom. The grid size changes, maybe generating blocks/business districts can be used with a larger step in the grids,        

Or maybe it can be used to determine density in population,

But for now, it is just a simple street system,

Results

The outcome is used as streets, along these lines :

Here are some resources I used to get here :

Sol Graphics Tutorials on L-Systems
In browser canvas generator with parameters
An amazing city generated with similar systems (subversion/introversion)

]]>
<![CDATA[Pathing excursions]]>I enjoy messing with path finding algorithms and finding interesting ways to obtain the results, this is about a few more recent attempts.


Paths and "I hate grids"

This post covers approaches I have been messing with to confront some of the issues I have with path finding on grids,

]]>
https://notes.underscorediscovery.com/pathing-excursions/1223694f-e1ec-415d-b15b-1aca84f2638cTue, 07 Jan 2014 23:57:04 GMTI enjoy messing with path finding algorithms and finding interesting ways to obtain the results, this is about a few more recent attempts.


Paths and "I hate grids"

This post covers approaches I have been messing with to confront some of the issues I have with path finding on grids - usually that the path results look really rigid and unnatural. Even with path smoothing and all that, you still get these less than human looking movement patterns that have always bothered me.

Graph theory

Graphs are interesting, and so is the theory surrounding them.

When it comes to path finding, like A-star, you can apply the algorithm to graphs as well. You can see a really nice live demo of graph based path finding on this Polygonal lab page, until I have what I am making in demo form. Essentially it is about connecting points to each other.

This makes traversing the nodes quite a bit cheaper than grid based path finding, because there are fewer nodes (though there can be more, obviously). The other benefit is that the neighbors and their nodes can be determined ahead of time, and using other algorithms - like Voronoi diagrams - you can insert/remove cells from the landscape dynamically.

Voronoi Diagrams and Delaunay Triangulation

While I was getting ready to move to Canada I wanted to distract myself from thinking too much and worked on implementing pathfinding across large areas using zones, using Voronoi diagrams.

Voronoi Manhattan Distance

You may know what they look like, and their uses and scope are outside of this article (though the link to Wikipedia is fairly thorough) - but they have a lot of fascinating properties including Delaunay Triangulation by connecting their center points.

Delaunay Triangulation

Back to paths!

What I was working on was a small procedural stealth prototype using some code from ctrlr (a post about this generation is forthcoming), to generate a large random space to explore with guards and cameras for interest sake. A procedural stealth playground.

Procedural Stealth Playground

Looking closer you can see there are some guards and some cameras and the building shapes are all random.

Stealthy

Stealhier

The pathing setup

So, having a guard meaningfully run from one side of the world to the other was an interesting challenge. There is of course more than one viable approach - I went with zones (larger cells, like a broadphase) and then a narrower phase, using the graphs.

At first I tried making the areas a single mesh, but there were just too many nodes for my liking and on larger areas this just got slower and slower. Not ideal.

Voronoi graph and Delaunay triangle mesh

The first step would be to section off areas, and do path finding on the large scale grid, when travelling across boundaries. Like this :

Boundary Broadphase paths

Now the AI can path first on a coarse grid, find the sub grids to navigate, and only ask those sub grids for a path on arriving at their boundary.

This also means that when a destination changes, it is often not significant enough to affect their sub grid path, so they won't "suddenly change their mind" and run in a different direction - they will continue until they reach their next boundary, unless that boundary location itself changes.

Now we have split section boundaries, a decent amount of nodes to generate paths with, and "global" vs "local" navigation running.

gray boxes

Determine your cell location

To work out which cell a click (or any exact location) falls in, in order to find a path, I also used the broadphase cell split approach. Break each local section down into a grid, calculate which grid square you are in, and at creation of the grid, store every cell that has a point or center point inside that grid square in a list. I stored these as lists of vertices (to use with a "point in polygon" algorithm).

So when you click near the bottom, you get something like row 6, column 2. Work out the position in world space using the cell sizes (simple grid math), and check the stored list of cells for the one the point lands inside.

Cell selection

The blue lines are any cell that has a vertex inside of the chosen grid location (the third one from the bottom left). The vertices highlighted show the polygon selected, and the white point is the click position. On to the path finding!
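
The point in polygon test itself is the classic ray casting / even odd check - a sketch over a list of {x, y} vertices:

function point_in_polygon(point, vertices) {

    var inside = false;

       //walk each edge of the polygon, j trailing i around it
    for(var i = 0, j = vertices.length - 1; i < vertices.length; j = i++) {

        var vi = vertices[i];
        var vj = vertices[j];

           //does a horizontal ray from the point cross this edge?
        var crosses = ((vi.y > point.y) != (vj.y > point.y)) &&
            (point.x < (vj.x - vi.x) * (point.y - vi.y) / (vj.y - vi.y) + vi.x);

           //an odd number of crossings means we are inside
        if(crosses) inside = !inside;
    }

    return inside;
}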

Neighbors and heuristics

A-star is simple in that it works on a lot of stuff - it's just an algorithm, and can be applied to multiple situations. Here we have a start cell and an end cell, and thanks to Voronoi cells being the basis, we also have the Delaunay triangles (connecting the center of each cell to the other center points) - which means we already have the information we need to find a path: the neighbors, and the distances between them.

Path Directions

So as you can see, we have a possible direction to go from here. Voronoi can be tweaked to have fewer or more cell sides, depending on the complexity of your diagram and source points (which is covered later on). The key here is that everything you need to find a path is sort of inherent to having it built as a graph in the first place!

Finally, a path

Here is what a path looks like from A to B on this grid (A bit dark, sorry but that was because I was jamming on the code and just taking screenshots along the way).

Path


The drunken stealth guard

So cool, we have a path. It looks a whole lot more natural than a normal grid based path, to me. It is quite rough still, but you could increase the fidelity of the graph and make it higher resolution so that the path is smoother. Like this :

Neat natural paths

That's one approach, but what if you had ... almost a grid? And what if that almost-grid used the same graph theory, just on a more concise set of points? Well, this is what happens :

Natural Paths

I REALLY like this path compared to grids. It's basically a grid! But the results are so much better, and cheaper (thanks to the graph theory having all the data on hand for me).

I still like the results with a messier grid :

Voronoi Paths

But I really like the results of the slightly neater grid as well :

Voronoi Paths

Generating the graph

To generate these spaces, for the Voronoi looking cells - simply place points randomly in the area, then run a smoothing pass over the points using something similar to Lloyd's Algorithm. This neatens up the really random placements into more uniform placements.

Before smoothing :
unsmoothed graph

After smoothing :
smoothed graph

To make the "almost grid", it is as it sounds: generate a set of points on a uniform grid, then add a slight amount of randomness to each position.

point.x += (-0.5 + Math.random()) * noise_scale;
point.y += (-0.5 + Math.random()) * noise_scale;

This would give you ~2 pixels of total jitter if noise_scale was 2 (the y axis gets the same treatment). The -0.5 makes it + / - instead of just +, so that the randomness is not shifting the grid to the right and downward. For those unfamiliar, Math.random returns a random number between 0 and 1 (like 0.23953148096).

Other pathing experiments

I have a few more path experiments that I have been exploring over time, some of which I stumbled upon due to buggy Heap algorithm code, and followed the rabbit hole because the results were pretty interesting.

The interesting thing about the following experiments is that they all still reach the destination (the pathing is 100% ok), but the resulting paths are fascinating.

The path starts at the top left, and ends bottom right. The orange is the grid, the black are obstacles, the white is the path.

Crazy Path 1

Crazy Path 2

Crazy Path 3

Crazy Path 4

Some of the comments on these images were interesting as well, mentioning that the last one looks like an optimal tower defense layout. Once I dug into the results, my thoughts were around guard patrol routes - where, as a stealth security patrol, instead of being completely predictable you could be more interesting and varied while still reaching your destination.

Explosions

Things don't always go according to plan (the above is one of those times it works out for the better) but these are some images of when the pathing or graphs were going crazy.

Bad point in cell collection

point fail

Failed "radius" point-in-cell check

Looked neat though!
point fail

Failed Voronoi diagram

voronoi fail

uhhh... what

voronoi fail

Conclusion / resources

I also really like the paths from Theta* path finding - it is a nice way to generate smoother, more sensible looking paths.

Alex May linked to an interesting video using tree like paths. This makes me wonder about using L-system generated paths as well... Something I may mess with in future.

]]>