Tuesday, August 16, 2016

Render Config Extensions

The rendering pipe in Stingray is completely data-driven, meaning that everything from which GPU buffers (render targets, etc.) are needed to compose the final rendered frame, to the actual flow of the frame, is described in the render_config file - a human readable json file. I have covered this in various presentations [1,2] over the years, so I won't be going into more detail about it in this blog post; instead I'd like to focus on a new feature that we are rolling out in Stingray v1.5 - Render Config Extensions.

As Stingray is growing to cater to more industries than game development, we see lots of feature requests that don't necessarily fit with our ideas of what should go into the default rendering pipe that we ship with Stingray. This has made it apparent that we need a way of doing deep integrations of new rendering features without having to duplicate the entire render_config file.

This is where the render_config_extension files come into play. A render_config_extension is very similar to the main render_config, except that instead of having to describe the entire rendering pipe, it appends and inserts different json blocks into the main render_config.

When the engine starts the boot ini-file specifies what render_config to use as well as an array of render_config_extensions to load when setting up the renderer.

render_config = "core/stingray_renderer/renderer"
render_config_extensions = ["clouds-resources/clouds", "prism/prism"]

The array describes the initialization order of the extensions, which makes it possible for the project author to control how the different extensions stack on top of each other. It also makes it possible to build extensions that depend on other extensions.

A render_config_extension consists of two root blocks: append and insert_at:

append

The append block is used for everything that is order independent and allows you to append data to the following root blocks of the main render_config:

  • shader_libraries – lists additional shader_libraries to load
  • render_settings – add more render_settings (quality settings, debug flags, etc.)
  • shader_pass_flags – add more shader_pass_flags (used by shader system to dynamically turn on/off passes)
  • global_resources – additional global GPU resources to allocate on boot
  • resource_generators – expose new resource_generators
  • viewports – expose new viewport templates
  • lookup_tables – append to the list of resource_generators to execute when booting the renderer (mainly used for generating lookup tables)

One thing to note about extending these blocks is that we currently do not do any kind of name collision checking, so using a prefix to mimic a namespace for your extension is probably a good idea.

// example append block from JPs volumetric clouds plugin
append = {
  render_settings = {
    clouds_enabled = true
    clouds_raw_data_visualization = false
    clouds_weather_data_visualization = false
  }

  shader_libraries = [
    "clouds-resources/clouds"       
  ]

  global_resources = [
    // Clouds modelling resources:
    { name="clouds_result_texture1" type="render_target" image_type="image_3d" width=256 height=256 layers=256 format="R8G8B8A8" }
    { name="clouds_result_texture2" type="render_target" image_type="image_3d" width=64 height=64 layers=64 format="R8G8B8A8" }
    { name="clouds_result_texture3" type="render_target" image_type="image_2d" width=128 height=128 format="R8G8B8A8" }
    { name="clouds_weather_texture" type="render_target" image_type="image_2d" width=256 height=256 format="R8G8B8A8" }
  ]
}

insert_at

The insert_at block allows you to insert layers and modifiers into already existing layer_configurations and resource_generators, belonging either to the main render_config file or to a render_config_extension listed earlier in the render_config_extensions array of the engine boot ini-file.

// example insert_at block from JPs volumetric clouds plugin
insert_at = {
  post_processing_development = {
    modifiers = [
      { type="dynamic_branch" render_settings={ clouds_weather_data_visualization=true }
        pass = [
          { type="fullscreen_pass" shader="debug_weather" input=["clouds_weather_texture"] output=["output_target"]  }
        ]
      }
    ]
  }

  skydome = {
    layers = [
      { resource_generator="clouds_modifier" profiling_scope="clouds" }
    ]
  }
}

The object names under the insert_at block refer to extension_insertion_points listed in the main render_config file or in one of the previously loaded render_config_extension files. We've chosen not to allow extensions to inject anywhere they like (using line numbers or similar craziness); instead we expose a bunch of extension "hooks" at various places in the main render_config file. By doing this we hope to have a somewhat better chance of not breaking existing extensions as we continue to develop and potentially do bigger refactorings of the default render_config file.
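To give a flavor of what such a hook might look like, here is a hypothetical sketch (not necessarily the exact shipped syntax) of a main render_config marking a spot that the insert_at blocks above can target:

// Hypothetical sketch -- the insert_at names refer to hooks like this:
layer_configurations = {
  default = {
    layers = [
      { extension_insertion_point="skydome" }
    ]
  }
}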

Future work

This extension mechanism is somewhat of an experiment and we might need to rethink parts of it in a later version of Stingray. We’ve briefly discussed a potential need for dealing with versioning, i.e. allowing extensions to explicitly list what versions of Stingray they are compatible with (and maybe also allow extensions to have deviating implementations depending on version). Some kind of enforced name spacing and more aggressive validation to avoid name collisions have also been debated.

In the end we decided to ignore these potential problems for now and instead push for getting a first version out in 1.5 to unblock plugin developers and internal teams wanting to do efficient “deep” integrations of various rendering features. Hopefully we won’t regret this decision too much later on. ;)

References

  • [1] Flexible Rendering for Multiple Platforms (Tobias Persson, GDC 2012)
  • [2] Benefits of data-driven renderer (Tobias Persson, GDC 2011)

Sunday, July 31, 2016

Volumetric Clouds

There has been a lot of progress made recently with volumetric clouds in games. The folks from Reset have posted a great article regarding their custom dynamic clouds solution, Egor Yusov published Real-time Rendering of Physics-Based Clouds using Precomputed Scattering in GPU Pro 6, last year Andrew Schneider presented Real-time Volumetric Cloudscapes of Horizon: Zero Dawn, and just last week Sébastien Hillaire presented Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite. Inspired by all this latest progress we decided to implement a Stingray plugin to get a feel for the challenge that is real time clouds rendering.

Note: This article isn't an introduction to volumetric cloud rendering but more of a small log of the development process of the plugin. Also, you can try it out for yourself or look at the code by downloading the Stingray plugin. Feel free to contribute!

Modeling

The modeling of our clouds is heavily inspired by the Real-time Volumetric Rendering Course Notes and Real-time Volumetric Cloudscapes of Horizon: Zero Dawn. It uses a set of 3d and 2d noises that are modulated by a coverage and altitude term to generate the 3d volume to be rendered.

I was really impressed by the shapes that can be created from such simple building blocks. While you can definitely see cases where some tiling occurs, it's not as bad as you would imagine. Once the textures are generated, the tough part is to find the right sampling spaces and the scales at which the textures should be sampled in the atmosphere. It's difficult to strike a good balance between tiling artifacts and getting enough high frequency detail in the clouds. On top of that, cache hits are greatly affected by the sampling scale used, so that is another factor to consider.

Finding good sampling scales for all of these textures and choosing by how much the extrusion texture should affect the low frequency clouds is very time consuming. With some time you eventually build intuition for what will look good in most scenarios but it’s definitely a difficult part of the process.

We also generate some curl noise which is used to perturb and animate the clouds slightly. I've found that adding noise to the sampling position also reduces linear filtering artifacts that can arise when ray marching these low resolution 3d textures.

One thing that often bothered me is the oddly shaped cumulus clouds that can arise from tiled 3d noise. Those cases are particularly noticeable for distant clouds. Adding extra cloud coverage for lower altitude sampling positions minimizes this artifact.

Raymarching the volume at full resolution is too expensive even for high end graphics cards. So, as suggested by Real-time Volumetric Cloudscapes of Horizon: Zero Dawn, we reconstruct a full frame over 16 frames. I've found that to retain enough high frequency details of the clouds, we need a fairly high number of samples. We are currently using 256 steps when raymarching.

We offset the starting position of the ray by a 4x4 Bayer matrix pattern to reduce banding artifacts that might appear due to undersampling. Mikkel Gjoel shared some great tips for banding reduction while presenting The Rendering Of Inside and encouraged the use of blue noise to remove banding patterns. While blue noise gives better results, there is a nice advantage to using a 4x4 pattern here: since we are rendering interleaved pixels, when rendering one frame we are rendering all pixels with the same Bayer offset. This yields a significant improvement in cache coherency compared to using a random noise offset per pixel.

We also use an animated offset which allows us to gather a few extra samples through time. We use a 1d Halton sequence of 8 values and, instead of using 100% of the 16ᵗʰ frame, we use something like 75% to absorb the Halton samples.
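As a reference for the curious, here is a minimal JavaScript sketch of the 4x4 Bayer pattern idea (the plugin's actual indexing and normalization may differ):

// The standard 4x4 Bayer matrix. Frame n of the 16-frame sequence renders
// the interleaved pixels whose Bayer value equals n.
var bayer4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5]
];

// Offset the ray start along its direction by a fraction of one step
// length, so neighboring pixels start at staggered depths and banding
// from undersampling is broken up.
function rayStartOffset(x, y, stepLength) {
    return (bayer4[y % 4][x % 4] / 16.0) * stepLength;
}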

To re-project the cloud volume we try to find a good approximation of the cloud's world position. While raymarching we track a weighted sum of the absorption position and generate a motion vector from it.

This allows us to reproject clouds with some degree of accuracy. Since we only build one full resolution frame every 16 frames, it's important to track the samples as precisely as possible. This is especially true when the clouds are animated. Finding the right number of temporal samples to integrate over time is a compromise between getting a smoother signal for trackable pixels and having a noisier signal for invalidated pixels.
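Here is a minimal sketch of that weighted tracking, simplified to track a depth along the ray rather than a world-space position; sampleDensity is a hypothetical callback returning cloud density at distance t:

function weightedAbsorptionDepth(sampleDensity, rayLength, numSteps) {
    var stepLength = rayLength / numSteps;
    var transmittance = 1.0;
    var weightedDepth = 0.0;
    var weightSum = 0.0;
    for (var i = 0; i < numSteps; ++i) {
        var t = (i + 0.5) * stepLength;
        var density = sampleDensity(t);
        // Light absorbed in this step (Beer-Lambert), weighted by the
        // transmittance that survived to reach it.
        var absorbed = transmittance * (1.0 - Math.exp(-density * stepLength));
        weightedDepth += absorbed * t;
        weightSum += absorbed;
        transmittance *= Math.exp(-density * stepLength);
    }
    // The weighted average depth approximates where the ray "hit" the cloud;
    // reprojecting this point with last frame's camera yields a motion vector.
    return weightSum > 0.0 ? weightedDepth / weightSum : rayLength;
}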

Lighting

To light the volume we use the "Beer-Powder" term described by Real-time Volumetric Cloudscapes of Horizon: Zero Dawn. It's a nice model since it simulates some of the out-scattering that occurs at the edges of the clouds. We discovered early on that it was going to be difficult to find terms that looked good for both close and distant clouds. So (for now anyways) a lot of the scattering and extinction coefficients are view dependent. This proved to be a useful way of building intuition for how each term affects the lighting of the clouds.

We also added the ambient term described by the Real-time Volumetric Rendering Course Notes which is very useful to add detail where all light is absorbed by the volume.

The ambient function described takes three parameters: sampling altitude, bottom color and top color. Instead of using constant values, we calculate these values by sampling the atmosphere at a few key locations. This means our ambient term is dynamic and will reflect the current state of the atmosphere. We use two pairs of samples perpendicular to the sun vector and average them to get the bottom and top ambient colors respectively.
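A rough sketch of that idea, assuming a hypothetical sampleAtmosphere(dir) callback that returns an [r, g, b] color for a world-space direction (the plugin's actual sample directions may differ):

function cross(a, b) {
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}

function normalize(v) {
    var l = Math.hypot(v[0], v[1], v[2]);
    return [v[0]/l, v[1]/l, v[2]/l];
}

function avgColor(a, b) {
    return [(a[0]+b[0])/2, (a[1]+b[1])/2, (a[2]+b[2])/2];
}

// Average two sample pairs perpendicular to the sun vector to get the top
// and bottom ambient colors. Assumes the sun is not exactly at the zenith.
function ambientColors(sunDir, sampleAtmosphere) {
    var side = normalize(cross(sunDir, [0, 1, 0])); // perpendicular to the sun vector
    var up = normalize(cross(side, sunDir));        // perpendicular, roughly zenith-facing
    var down = [-up[0], -up[1], -up[2]];
    return {
        top: avgColor(sampleAtmosphere(up), sampleAtmosphere(side)),
        bottom: avgColor(sampleAtmosphere(down), sampleAtmosphere(side))
    };
}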

Since we already calculated an approximate absorption position for the reprojection, we use this position to change the absorption color based on the absorption altitude.

Finally, we can reduce the alpha term by a constant amount to skew the absorption color towards the overlaid atmospheric color. By default this is disabled, but it can be interesting for creating some very hazy skyscapes. If this hack is used, it's important to protect the scattering highlight colors somewhat.

Animation

The animation of the clouds consists of a 2d wind vector, a vertical draft amount and a weather system.

We dynamically calculate a 512x512 weather map which consists of 5 octaves of animated Perlin noise. We remap the noise value differently for each rgb component. This weather map is then sampled during the raymarch to update the coverage, cloud type and wetness terms of the current cloud sample. Right now we resample this weather term at each ray step, but a possible optimization would be to sample the weather data at the start and end positions of the ray and interpolate between these values at each step. All of the weather terms come in sunny/stormy pairs so that we can lerp between them based on a probability-of-rain percentage. This allows the weather system to have storms coming in and out.
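As a hedged sketch (all remap ranges here are made up for illustration), deriving the per-sample weather terms from a weather map texel might look something like this:

function remap(v, oldMin, oldMax, newMin, newMax) {
    return newMin + (v - oldMin) * (newMax - newMin) / (oldMax - oldMin);
}

function lerp(a, b, t) {
    return a + (b - a) * t;
}

// `weather` is an {r, g, b} sample from the 512x512 weather map; the sunny
// and stormy variants of each term are lerped by the probability of rain.
function weatherTerms(weather, rainProbability) {
    var coverageSunny  = remap(weather.r, 0.0, 1.0, 0.2, 0.7);
    var coverageStormy = remap(weather.r, 0.0, 1.0, 0.6, 1.0);
    return {
        coverage: lerp(coverageSunny, coverageStormy, rainProbability),
        cloud_type: weather.g,
        wetness: weather.b * rainProbability
    };
}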

The wetness term is used to update a structure of terms which defines how the clouds look based on how much humidity they carry. This is a very expensive lerp which happens per ray march step and should be reduced to the bare minimum (the raymarch is instruction bound, so each removed lerp is a big win optimization-wise). But for the current exploratory phase it's proving useful to be able to tweak a lot of these terms individually.

Future work

I think that as hardware gets more powerful realtime cloudscape solutions will be used more and more. There is tons of work left to do in this area. It is absolutely fascinating, challenging and beautiful. I am personally interested in improving the sense of scale the rendered clouds can have. To do so, I feel that the key is to reveal more and more of the high frequency details that shape the clouds. I think smaller cloud features are key to put in perspective the larger cloud features around them. But extracting higher frequency details usually comes at the cost of increasing the sampling rate.

We also need to think of how to handle shadows and reflections. We've done some quick tests by updating a 512x512 opacity shadow map which seemed to work ok. Since it is not a view frustum dependent term we can absorb the cost of updating the map over a much longer period of time than 16 frames. Also, we could generate this map by taking fewer samples in a coarser representation of the clouds. The same approach would work for generating a global specular cubemap.

I hope we continue to see more awesome presentations at GDC and Siggraph in the coming years regarding this topic!


Friday, April 1, 2016

The Poolroom

Figure 1 : Poolroom Pool Table

The poolroom was my first attempt at creating a truly rich environmental experience with Stingray. Most architectural visualization scenes you see are antiseptically clean and uncomfortably modern. I wanted to break away from that. I wanted an environment I would feel at home with, not one that a movie star would buy for sheer resale value to another movie star. I also wanted the challenge of working with natural and texturally rich materials. Not white on white, as is generally the case.

Figure 2 : Poolroom Clock

To this end, I started looking for cozy but luxurious spaces on Google and eventually came across a nice reference photo I could work with. Warm rich woods, lots of games, a bar, and well... those all speak to me. For better or worse, I felt this room was one I would personally feel comfortable in. So I took on the challenge of re-creating that environment in 3D inside Stingray.

The challenges


The poolroom gave me some major challenges. Some I knew would be trouble from the start, but some I didn’t realize until I started rendering lightmaps. Most of my difficulties came down to handling materials properly.

Figure 3 : Poolroom Bar

Coming to grips with physically based shaders

In addition to being my first complete Arch-Viz scene in Stingray, this was also my first real stab at using physically based shading (PBS). Although physically based shading is similar in many regards to traditional texturing, it has its own set of tricks and gotchas. I actually had to re-do the scene's materials more than once as I learned the proper way to do things.

For example, my scene was predominantly dark woods. With dark woods, you really have to be sure you get the albedo material in the correct luminosity range or you end up with difficulties when you light the scene. In my first attempts, I found my light being just eaten up by the darkness of the wood's color map. I kept cranking up the light intensities, but this would flood the scene and lead to harsh and broken light bakes.

Figure 4 : Arcade Game

Eventually, once I understood the effect of the color map’s luminosity and got the values in line, I started getting great results with normalized light intensities. My lighting began responding favorably with deep, rich lightmap bakes. When you get the physical properties of the materials right, Stingray’s light baker is both fast and very good. But I can’t stress enough: with PBS, you must ensure that your luminosity values are accurate.

Reference photo was HDR

When I was building out the scene and trying to mimic the reference photo’s lighting, I realized that the original image was made using some high-dynamic range techniques. I couldn’t seem to get the same level of exposure and visual detail in the shadowed areas of my scene.

Figure 5 : Before Ambient Fills

Figure 6 : After Ambient Fills

Because of this, I had to do some pretty fun trickery with my scene lighting. In the end, I got it by placing some subtle, non-shadow casting lights in key areas to bring up the brightness a little in those areas.

Figure 7 : Soft Controlled Lighting

All in all, the scene took a lot of lighting work to get just right. I have to say that I was very happy with how closely I was able to match the lighting, given that the original photo was HDR.

Lived-in but not dirty

The last big challenge was also related to materials. I had to find that fine balance of a room that is clean and tidy but also obviously lived-in. So often I find Arch-Viz work feels unnaturally smooth and clean, which can destroy the believability of the space. I really wanted my scene to break through the uncanny valley and feel real.

I handled this mostly by creating some very simple grunge maps, and applying them to the roughness maps using a simple custom shader. This was easy to build in Stingray’s node-based shader graph:

Figure 8 : Simple RMA style shader with tiling and grunge map with adjustment.

I have this shader set up so I can control the tiling of the color map, normals and other textures. The grunge map, on the other hand, is sampled using UV coordinates from the lightmap channel. This helps to hide the tiling over large areas like the walls, because the grunge value that gets multiplied in to the roughness is always different each time the other textures repeat.
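In code terms, the idea boils down to something like this hypothetical sketch (sampleTex stands in for a texture sampler; this is not the actual shader graph):

// `sampleTex(texture, uv)` is an assumed callback returning a scalar sample.
function finalRoughness(sampleTex, roughnessTex, grungeTex, uv, lightmapUv, tiling) {
    // The roughness map repeats with the material tiling...
    var base = sampleTex(roughnessTex, [uv[0] * tiling, uv[1] * tiling]);
    // ...but the grunge is sampled in lightmap UV space, which never repeats
    // at the same period, so the product hides the tiling.
    var grunge = sampleTex(grungeTex, lightmapUv);
    return Math.min(1.0, Math.max(0.0, base * grunge));
}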

Balancing the grunge properly was the biggest challenge here, but in the end, some still shots even get me doing a double-take. When that happens, I know I’m doing well. I also posted progress along the way on my Facebook page — when I had friends saying, “whoa, when can I come visit?” I knew I was nailing it.

3D modeling


Figure 9 : Record Player Model in Maya LT

I don’t have much that’s special to say about the 3D modeling process. I simply modeled all my assets the same way anyone would. Attention to detail is really the trick, and making sure that I created hand-made lightmap UVs for every object was critical to ensure the best light baking. Otherwise it was just simple modeling.

Figure 10 : Poolroom Model in MayaLT

One thing to note, however, is that I only used 3D tools that came with the Stingray package, except for Substance Designer and a little Photoshop. I did the entire scene’s modeling in MayaLT. Sometimes people think cheap is not good, but I believe this proves otherwise. MayaLT is incredible. I am super happy with the results and speed at which you can work with it. Best of all, it’s part of the package, so no additional costs.

Material design


Laying out the materials in the scene was pretty straightforward for the most part. At one point, I experimented with using more species of wood, but the different parts of the room started to feel disconnected. I started removing materials from my list, and eventually, when I was down to only a small handful, the room came together as you see it.

Figure 11 : Record Player Material Design in Substance

I guess something else I should mention is performance shaders. Stingray comes with a great, flexible standard shader, but I wanted to eke out every little bit of performance I could on this scene while keeping the quality very high. Without much trouble, I created a library of my own purpose-built shaders (like the one mentioned earlier). I used these for various tasks. Simple colors, RMA (roughness-metallic-ambient occlusion), RMA-tiling shaders and a few others came together really quickly. From this handful of shaders, I was able to increase performance while simplifying my design process. I find it comforting how Stingray deals with shaders… it is just very easy to iterate and save a version. Much better usability than other systems I have tried.

Figure 12 : Shader Library

Fun stuff


Well, most game dev is hard work; the fun comes at the end when you finally get to relax and see your efforts pay off. But there were definitely some really fun parts of making the poolroom.

One was the clock. It's a small, almost easter-egg kind of thing, but I programmed the clock fully. Meaning, its hands move, the pendulum swings, and it also rings the hour. So if you are exploring the poolroom and it happens to be when the hour changes in your system clock, the clock in the game rings the hour for you. So two o'clock rings two times, four o'clock rings four times, etc. The half-hour always strikes once. I modeled the clock after one that my father gave me, so I put some extra love into it. It is basically exactly the clock that hangs in my living room.

Figure 13 : Clock Model in MayaLT

Figure 14 : Clock Model in Stingray

I also gave the record player some extra attention, because my good friend Mathew Harwood was kind enough to do all the audio for the project. I felt the music really set the scene, and he even worked on it over my twitch stream so we could get feedback from some people who were watching. So yeah, press + or - in the game to start and stop the record player, complete with animated tone arm. Nothing super crazy, just a nice little touch.

Figure 15 : Record Player in Stingray

Community effort


One thing I found really neat about this project was that I streamed the entire creation process on my Twitch channel. I have never streamed much before this project, but it made the process much more fun. I had people to talk with, and often my viewers were helpful to me in suggesting ideas and noticing things I had not noticed. It was very collaborative and a great learning exercise for me and for my viewers. We got to learn from each other, which is the dream!

For example, the record player likely would not have been done to the level I did it had one of my viewers not pushed me to make a really detailed player. Because of this push, it ended up being a focus of the level, and even has some animation and basic controls a user can interact with.

Stop by my Twitch channel sometime at twitch.tv/paulkind3d and say hi, I’d love to meet you.

Sunday, January 31, 2016

Hot Reloadable JavaScript, Batman!

JavaScript is my new favorite prototyping language. Not because the language itself is fantastic. I mean, it's not too bad. It actually has a lot of similarity to Lua, but it's hidden under a heavy layer of WAT!?, like:

  • Browser incompatibilities!?
  • Semi-colons are optional, but you "should" put them there anyway!?
  • Propagation of null, undefined and NaN until they cause an error very far from where they originated!?
  • Weird type conversions!? "0" == false!?
  • Every function is also an object constructor!? x = new add(5,7)!?
  • Every function is also a method!?
  • You must check everything with hasOwnProperty() when iterating over objects!?

But since Lua is a work of genius and beauty, being a half-assed version of Lua is still pretty good. You could do worse, as languages go.

And JavaScript is actually getting better. Browser compatibility is improving, with automatic updates being a big factor in this. And if your goal is just to prototype and play, as opposed to building robust web applications, you can just pick your favorite browser, go with that and not worry about compatibility. The ES6 standard also adds a lot of nice little improvements, like let, const, class, lexically scoped this (for arrow functions), etc.

But more than the language, the nice thing about JavaScript is that it comes with a lot of the things you need to do interesting stuff -- a user interface, 2D and 3D drawing, a debugger, a console REPL, etc. And it's ubiquitous -- everybody has a web browser. If you do something interesting and want to show it to someone else, it is as easy as sending a link.

OK, so it doesn't have file system access (unless you run it through node.js), but who cares? What's so fun about reading and writing files anyway? The 60's called, they want their programming textbooks back!

I mean in JavaScript I can quickly whip up a little demo scene, add some UI controls and then share it with a friend. That's more exciting. I'm sure someone will tell me that I can do that in Ruby too. I'm sure I could, if I found the right gems to install, picked what UI library I wanted to use and learned how to use that, found some suitable bundling tools that could package it up in an executable, preferably cross-platform. But I would probably run into some annoying and confusing error along the way and just give up.

With increasing age I have less and less patience for the sysadmin part of programming. Installing libraries. Making sure that the versions work together. Converting a configure.sh script to something that works with our build system. Solving PATH conflicts between multiple installed cygwin and mingw based toolchains. Learning the intricacies of some weird framework that will be gone in 18 months anyway. There is enough of that stuff that I have to deal with, just to do my job. I don't need any more. When I can avoid it, I do.

One thing I've noticed since I started to prototype in JavaScript is that since drawing and UI work is so simple to do, I've started to use programming for things that I previously would have done in other ways. For example, I no longer do graphs like this in a drawing program:

Instead I write a little piece of JavaScript code that draws the graph on an HTML canvas (code here: pipeline.js).

JavaScript canvas drawing can not only replace traditional drawing programs, but also Visio (for process diagrams), Excel (graphs and charts), Photoshop and Graphviz. And it can do more advanced forms of visualization and styling that are not possible in any of these programs.

For simple graphs, you could ask if this really saves any time in the long run, as compared to using a regular drawing program. My answer is: I don't know and I don't care. I think it is more important to do something interesting and fun with time than to save it. And for me, using drawing programs stopped being fun some time around when ClarisWorks was discontinued. If you ask me, so-called "productivity software" has just become less and less productive since then. These days, I can't open a Word document without feeling my pulse race. You can't even print the damned things without clicking through a security warning. Software PTSD. Programmers, we should be ashamed of ourselves. Thank god for Markdown.

Another thing I've stopped using is slide show software. That was never any fun either. Keynote was at least tolerable, which is more than you can say about Powerpoint. Now I just use Remark.js instead and write my slides directly in HTML. I'm much happier and I've lost 10 pounds! Thank you, JavaScript!

But I think for my next slide deck, I'll write it directly in JavaScript instead of using Remark. That's more fun! Frameworks? I don't need no stinking frameworks! Then I can also finally solve the issue of auto-adapting between 16:9 and 4:3 so I don't have to letterbox my entire presentation when someone wants me to run it on a 1995 projector. Seriously, people!

This is not the connector you are looking for!

And I can put HTML 5 videos directly in my presentation, so I don't have to shut down my slide deck to open a video in a separate program. Have you noticed that this is something that almost every speaker does at big conferences? Because apparently they haven't succeeded in getting their million dollar presentation software to reliably present a video file! Software! Everything is broken!

Anyhoo... to get back off topic, one thing that surprised me a bit about JavaScript is that there doesn't seem to be a lot of interest in hot-reloading workflows. Online there is JSBin, which is great, but not really practical for writing bigger things. If you start googling for something you can use offline, with your own favorite text editor, you don't find that much. This is a bit surprising, since JavaScript is a dynamic language -- hot reloading should be a hot topic.

There are some node modules that can do this, like budo. But I'd like something that is small and hackable, that works instantly and doesn't require installing a bunch of frameworks. By now, you know how I feel about that.

After some experimentation I found that adding a script node dynamically to the DOM will cause the script to be evaluated. What is a bit surprising is that you can remove the script node immediately afterwards and everything will still work. The code will still run and update the JavaScript environment. Again, since this is only for my personal use I've not tested it on Internet Explorer 3.0, only on the browsers I play with on a daily basis, Safari and Chrome Canary.

What this means is that we can write a require function for JavaScript like this:

function require(s)
{
    var script = document.createElement("script");
    script.src = s + "?" + performance.now();
    script.type = "text/javascript";
    var head = document.getElementsByTagName("head")[0];
    head.appendChild(script);
    head.removeChild(script);
}

We can use this to load script files, which is kind of nice. It means we don't need a lot of <script> tags in the HTML file. We can just put one there for our main script, index.js, and then require in the other scripts we need from there.
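For example, index.js might start by pulling in the other files (hypothetical file names):

require("graph.js");
require("ui.js");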

Also note the deft use of + "?" + performance.now() to prevent the browser from caching the script files. That becomes important when we want to reload them.

Since for dynamic languages reloading a script is the same thing as running it, we can get automatic reloads by just calling require on our own script from a timer:

function reload()
{
    require("index.js");
    render();
}

if (!window.has_reload) {
    window.has_reload = true;
    window.setInterval(reload, 250);
}

This reloads the script every 250 ms.

I use the has_reload flag on the window to ensure that I set the reload timer only the first time the file is run. Otherwise we would create more and more reload timers with every reload, which in turn would cause even more reloads. If I had enough power in my laptop, the resulting chain reaction would vaporize the universe in under three minutes. Sadly, since I don't, all that will happen is that my fans will spin up a bit. Damnit, I need more power!

After each reload() I call my render() function to recreate the DOM, redraw the canvas, etc with the new code. That function might look something like this:

function render()
{
    var body = document.getElementsByTagName("body")[0];
    while (body.hasChildNodes()) {
        body.removeChild(body.lastChild);
    }

    var canvas = document.createElement("canvas");
    canvas.width = 650;
    canvas.height = 530;
    var ctx = canvas.getContext("2d");
    drawGraph(ctx);
    body.appendChild(canvas);
}

Note that I start by removing all the DOM elements under <body>. Otherwise each reload would create more and more content. That's still linear growth, so it is better than the exponential chain reaction you can get from the reload timer. But linear growth of the DOM is still pretty bad.

You might think that reloading all the scripts and redrawing the DOM every 250 ms would create a horrible flickering display. But so far, for my little play projects, everything works smoothly in both Safari and Chrome. Glad to see that they are double buffering properly.

If you do run into problems with flickering you could try using the Virtual DOM method that is so popular with JavaScript UI frameworks these days. But try it without that first and see if you really need it, because ugh frameworks, amirite?

Obviously it would be better to reload only when the files actually change and not every 250 ms. But to do that you would need to do something like adding a file system watcher connected to a web socket that could send a message when a reload was needed. Things would start to get complicated, and I like it simple. So far, this works well enough for my purposes.

As a middle ground you could have a small bootstrap script for doing the reload:

window.version = 23;
if (window.version != window.last_version) {
    window.last_version = window.version;
    reload();
}

You would reload this small bootstrap script every 250 ms. But it would only trigger a reload of the other scripts and a re-render when you change the version number. This avoids the reload spamming, but it also removes the immediate feedback loop -- change something and see the effect immediately -- which I think is really important.

As always with script reloads, you must be a bit careful with how you write your scripts to ensure they work nicely with the reload feature. For example, if you write:

class Rect
{
    ...
};

It works well in Safari, but Chrome Canary complains on the second reload that you are redefining a class. You can get around that by instead writing:

var Rect = class {

Now Chrome doesn't complain anymore, because obviously you are allowed to change the content of a variable.

To preserve state across reloads, I just put all the state in a global variable on the window:

window.state = window.state || {}

The first time this is run, we get an empty state object, but on future reloads we keep the old state. The render() function uses the state to determine what to draw. For example, for a slide deck I would put the current slide number in the state, so that we stay on the same page after a reload.
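For example, a minimal slide deck sketch might look like this, assuming a hypothetical drawSlide() helper:

window.state = window.state || {page: 0};

// Re-assigning the handler on each reload just replaces the old one,
// so this is reload-safe.
document.onkeydown = function (e) {
    if (e.key === "ArrowRight")
        window.state.page = window.state.page + 1;
    if (e.key === "ArrowLeft")
        window.state.page = Math.max(0, window.state.page - 1);
    render();
};

function render()
{
    // drawSlide() is a hypothetical helper that draws slide n into the DOM.
    drawSlide(window.state.page);
}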

Here is a GIF of the hot reloading in action. Note that the browser view changes as soon as I save the file in Atom:

(No psychoactive substances were consumed during the production of this blog post. Except caffeine. Maybe I should stop drinking coffee?)

Friday, January 29, 2016

Stingray Support -- Hello, I Am Someone Who Can Help


Hello, I am someone who can help. 


Here at the Autodesk Games team, we pride ourselves on supporting users of the Stingray game engine in the best ways possible – so to start, let’s cover where you can find information!



General Information Here!

Games Solutions Learning Channel on YouTube:
This is a series of videos about Stingray by the Autodesk Learning Team. They'll be updating the playlist with new videos over time. They're pretty responsive to community requests on the videos, so feel free to log in and comment if there's something specific you'd like to see.
Check out the playlist on YouTube.

Autodesk Stingray Quick Start Series, with Josh from Digital Tutors:
We enlisted the help from Digital Tutors to set up a video series that runs through the major sections of Stingray so you can get up and running quickly.
Check out the playlist on YouTube.

Autodesk Make Games learning site:
This is a site that we've made for people who are brand new to making games. If you've never made a game before, or never touched complex 3D tools or a game engine, this is a good place to start. We run you through Concept Art and Design phases, 3D content creation, and then using a game engine. We've also made a bunch of assets available to help brand new game makers get started.

Creative Market:
The Creative Market is a storefront where game makers can buy or sell 3D content. We've got a page set up just for Stingray, and it includes some free assets to help new game makers get started.

Stingray Online Help
Here you'll find more getting started movies, how-to topics, and references for the scripting and visual programming interfaces. We're working hard to get you all the info you need, and we're really excited to hear your feedback.

Forum Support Tutorial Channel on YouTube:
This is a series of videos that answers recurring forums questions by the Autodesk Support Team. They'll be updating the playlist with new videos over time. They're pretty responsive to community requests on the videos, so feel free to log in and comment if there's something specific you'd like to see.
Check out the playlist on YouTube.

You should also visit the Stingray Public Forums here, as there is a growing wealth of information and knowledge to search from.



Let's Get Started

Let’s get started. Hi, I’m Dan, nice to meet you. I am super happy to help you with any of your Stingray problems, issues, needs or general questions! However, I’m going to need to ask you to HELP ME, HELP YOU!!





It’s not always apparent when a user asks for help just exactly what that user is asking for. That being the case, here is some useful information on how to ask for help and what to provide us so that we can help you better and more quickly!
  • Make sure you are very clear on what your specific problem is and describe it as best you can.
    • Include any pictures or screenshots you may have.
  • Tell us how you came to have this problem.
    • Give us detailed reproduction steps on how to arrive at the issue you are seeing.
  • Attach your log files!
    • They can be found here: C:\Users\"USERNAME"\AppData\Local\Autodesk\Stingray\Logs
  • Attach any file that is a specific problem (zip it so it attaches to the forum post).
  • Make sure to let us know your system specifications.
  • Make sure to let us know what Stingray engine version you are using.


On another note … traduire, traduzir, 翻, Übersetzen, þýða, переведите, ਅਨੁਵਾਦ, and ... translate! We use English as our main support language; however, these days translate.google.com is really, really good! If English is not your first language, please feel free to write your questions and issues in your native language and we will translate it and get back to you. I often find that it is easier to understand from a translation, and this helps us get you help just that much more quickly!





In Conclusion

So just to recap, make sure you are ready when you come to ask us a question! Have your issue sorted out, how to reproduce it, what engine version you are running, your system specs and attach your log files. This will help us, help you, just that much faster and we can get you on your way to making super awesome content in the Stingray game engine. Thanks!

Dan Matlack
Product Support Specialist – Games Solutions
Autodesk, Inc.





Wednesday, January 20, 2016

Introducing the Stingray Package Manager (spm)

The Stingray Package Manager, or spm, is a small Ruby program that is responsible for downloading specific versions of the external artifacts (libraries, sample projects, etc) that are needed to build the Stingray engine and tools. It's a small but important piece of what makes one-button builds possible.

By one-button builds I mean that it should be possible to build Stingray with a single console command and no human intervention. It should work for any version in the code history. It should build all tools, plugins and utilities that are part of the project (as well as meaningful subsets of those for faster builds). In addition, it should work for all target platforms, build configurations (debug, development, release) and options (enabling/disabling Steam, Oculus, AVX, etc).

Before you have experienced one-button builds it's easy to think: So what? What's the big deal? I can download a few libraries manually, change some environment variables when needed, open a few Visual Studio projects and build them. Sure, it is a little bit of work every now and then, but not too bad.

In fact, there are big advantages to having a one-button build system in place:

  • New developers and anyone else interested in the code can dive right in and don't have to spend days trying to figure out how to compile the damned thing.

  • Build farms don't need as much baby sitting (of course build farms always need some baby sitting).

  • All developers build the engine in the same way, the results are repeatable and you don't get bugs from people building against the wrong libraries.

  • There is a known way to build any previous version of the engine, so you can fix bugs in old releases, do bisect tests to locate bad commits, etc.

But more than these specific things, having one-button builds also gives you one less thing to worry about. As programmers we are always trying to fit too much stuff into our brains. We should just come to terms with the fact that as a species, we're really not smart enough to be doing this. That is why I think that simplicity is the most important virtue of programming. Any way we can find to reduce cognitive load and context switching will allow us to focus more on the problem at hand.

In addition to spm there are two other important parts of our one-button build system:

  • The cmake configuration files for building the various targets.

  • A front-end ruby script (make.rb) that parses command-line parameters specifying which configuration to build and makes the appropriate calls to spm and cmake.

But let's get back to spm. As I said at the top, the responsibility of spm is to download and install external artifacts that are needed by the build process. There are some things that are important:

  • Exact versions of these artifacts should be specified so that building a specific version of the source (git hash) will always use the same exact artifacts and yield a predictable result.

  • Since some of these libraries are big, hundreds of megabytes, even when zipped (computers are a sadness), it is important not to download more than absolutely necessary for making the current build.

  • For the same reason we also need control over how we cache older versions of the artifacts. We don't want to evict them immediately, because then we have to download hundreds of megabytes every time we switch branch. But we don't want to keep all old versions either, because then we would pretty soon run out of space on small disks.

The last two points are the reason why something like git-lfs doesn't solve this problem out-of-the box and some more sophisticated package management is needed.

spm takes inspiration from popular package managers like npm and gem and offers a similar set of sub commands: spm install to install a package, spm uninstall to uninstall, etc. At its heart, what spm does is a pretty simple operation:

Upon request, spm downloads a specific artifact version (identified by a hash) from an artifact repository. We support multiple artifact repositories, such as S3, git and Artifactory. The artifact is unzipped and stored in a local library folder where it can be accessed by the build scripts. As specific artifact versions are activated and deactivated we move them in and out of the local artifact cache.

We don't use unique folder names for artifact versions. So the folder name of an artifact (e.g., luajit-2.1.0-windows) doesn't tell us the exact version (y0dqqY640edvzOKu.QEE4Fjcwxc8FmlM). spm keeps track of that in internal data structures.

There are advantages and disadvantages to this approach:

  • We don't have to change the build scripts when we do minor fixes to a library, only the version hash used by spm.
  • We avoid ugly looking hashes in the folder names and don't have to invent our own version numbering scheme, in addition to the official one.
  • We can't see at a glance which specific library versions are installed without asking spm.
  • We can't have two versions of the same library installed simultaneously, since their names could collide, so we can't run parallel builds that use different library versions.
  • If library version names were unique we wouldn't even need the cache folder, we could just keep all the versions in the library folder.

I'm not 100 % sure we have made the right choice; it might be better to enforce unique names. But it is not a huge deal, so unless there is a big impetus for change we will stay on the current track.

spm knows which versions of the artifacts to install by reading configuration files that are checked in as part of the source code. These configuration files are simple JSON files with entries like this:

cmake = {
    groups = ["cmake", "common"]
    platforms = ["win64", "win32", "osx", "ios", "android", "ps4", "xb1", "webgl"]
    lib = "cmake-3.4.0-r1"
    version = "CZRgSJOqdzqVXey1IXLcswEuUkDtmwvd"
    source =  {
        type = "s3"
        bucket = "..."
        access-key-id = "..."
        secret-access-key = "..."
    }
}

This specifies the name of the package (cmake), the folder name to use for the install (cmake-3.4.0-r1), the version hash and how to retrieve it from the artifact repository (these source parameters can be re-used between different libraries).

To update a library, you simply upload the new version to the repository, modify the version hash and check in the updated configuration file.

The platforms parameter specifies which platforms this library is used on and groups is used to group packages together in meaningful ways that make spm easier to use. For example, there is an engine group that contains all the packages needed to build the engine runtime and a corresponding editor group for building the editor.

So if you want to install all libraries needed to build the engine on Xbox One, you would do:

spm install-group -p xb1 engine

This will install only the libraries needed to build the engine for Xbox One and nothing else. For each library, spm will:

  • If the library is already installed -- do nothing.
  • If the library is in the cache -- move it to the library folder.
  • Otherwise -- download it from the repository.

Downloads are done in parallel, for maximum speed, with a nice command-line based progress report.

The cache is a simple MRU cache that can be pruned either by time (throw away anything I haven't used in a month) or by size (trim the cache down to 10 GB, keeping only the most recently used stuff).

Of course, you usually never even have to worry about calling spm directly, because make.rb will automatically call it for you with the right arguments, based on the build parameters you have specified to make.rb. It all happens behind the scenes.

Even the cmake binary itself is installed by the spm system, so the build is able to bootstrap itself to some extent. Unfortunately, the bootstrap process is not 100 % complete -- there are still some things that you need to do manually before you can start using the one-button builds:

  • Install Ruby (for running spm.rb and make.rb).
  • Specify the location of your library folder (with an SR_LIB_DIR environment variable).
  • Install a suitable version of Visual Studio and/or XCode.
  • Install the platform specific SDKs and toolchains for the platforms you want to target.

I would like to get rid of all of this and have a zero-configuration bootstrap procedure. You sync the repository, give one command and bam -- you have everything you need.

But some of these things are a bit tricky. Without Ruby we need something else for the initial step that at least is capable of downloading and installing Ruby. We can't put restricted software in public repositories and it might be considered hostile to automatically run installers on the users' behalf. Also, some platform SDKs need to be installed globally and don't offer any way of switching quickly between different SDK versions, thwarting any attempt to support quick branch switching.

But we will continue to whittle away at these issues, taking the simplifications where we can find them.

Friday, December 18, 2015

Data Driven Rendering in Stingray


We’re all familiar with the benefits that a data driven architecture brings to gameplay: code is decoupled from data, enabling live linking and rapid iteration. Placing new objects in the editor or modifying the speed of a character has an immediate effect on a live game instance. Really speeds up the development process as you fine tune scripts, gameplay and other content.

What about graphics programming?  It turns out that the same architecture and associated benefits apply to Stingray’s renderer.

Just by modifying configuration files (albeit somewhat complex configuration files) we can implement new shader programs, post-processing effects and even different cascading shadow map implementations. All in real time, on a live game instance. That is a big win for graphics programmers: you can try out new ideas and fine tune shaders, all with real-time feedback. No more of that long edit/compile/run/debug cycle. And this applies to the entire rendering pipeline: everything from the object-space-to-world-space transforms to shadow casting and the final rendering pass is exposed as config file data, not as C++ code as with traditional architectures.

I gave a presentation on this topic a while back which has now found its way to our YouTube channel:

https://www.youtube.com/channel/UC0fIe6XV1PjilADTei9JMOA

By the way, there’s a lot of other great Stingray content up there so please check it out! The renderer presentation can be found under “Stingray Render Config Tutorial.”

The details as well as a PowerPoint can be found there. The code changes to add a trivial greyscale post-processing effect involve:

settings.ini: 

The render_config variable points to the renderer.render_config file. Settings.ini also provides a section to override default settings found in the next file, renderer.render_config.

core/stingray_renderer/renderer.render_config:

Points to our shader libraries, text files containing the actual shader programs. A section called global_resources allocates graphics buffers, such as scratch buffers for the cascading shadow maps and G-buffers for deferred rendering, along with the main framebuffer. Most of the actual rendering is invoked in the resource_generators section. Again, more details are in the YouTube video, though a surprising amount can be learned just by grepping through the various config files and playing with the settings. Which is easy to do since it's all data driven!
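For a flavor of what the greyscale effect looks like in config form, a resource_generator entry along these lines could invoke the pass (a hypothetical sketch patterned on the fullscreen_pass examples earlier on this page; names like "greyscale" and "hdr0" are made up):

resource_generators = {
  greyscale_filter = {
    modifiers = [
      { type="fullscreen_pass" shader="greyscale" input=["hdr0"] output=["hdr0"] }
    ]
  }
}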

core/stingray_renderer/shader_libraries/development.shader_source:

One of several shader libraries. While shader code can be entered as text here, Stingray also provides a graphical node-based shader editor. And we support ShaderFX materials from Max or Maya. It’s often easier (and more portable) to implement shaders graphically.

But whatever method you choose to implement shaders in, the key point is that Stingray's entire rendering pipeline is fully accessible through configuration files. With our data driven architecture, making complex rendering changes, while still non-trivial, is a whole lot faster and easier (and portable!) than working with platform-specific C++ code.