New Year, New Focus

After a complete break over the Christmas holiday period, with ~460km between me and my development machine, work is now back in full swing. While the game project is progressing well, my immediate focus is now on releasing the standalone sky rendering Unity package: DeepSky.

As this is intended for pushing out into the world at large, the requirements for usability and robustness are much higher than if it was simply part of the game. This week has therefore been about boring but essential project planning and creating the framework code that allows DeepSky to work nicely in Unity’s editor. So this post is a grab-bag of useful scripting bits and bobs discovered this week.

ScriptableObjects And Custom UI

Often you just need to store a block of data, without any associated logic or having to add it as a game object component. Unity provides a handy base class to derive from that allows you to store data as an asset – the ScriptableObject. This also allows you to create data assets that play nicely with the serialization pipeline and makes loading/saving data easy.
DeepSky uses a ScriptableObject to hold all the data about what state the sky is in: all the cloud modelling, lighting, time and weather parameters. By chucking all this in a separate object to the actual DeepSky component, it’s trivial to save/load sky presets and hot-swap skies on the fly.
While this is great, it does present a very minor issue. Typically, the Inspector window will list all the serializable fields of a component (fields that are either public or tagged with the [SerializeField] attribute). This still works fine for a field referencing a ScriptableObject, but the UI for it isn’t particularly nice – you have to expand the field and then the UI is just a little ‘meh’. Normally you’d create a custom editor for a component to draw the Inspector however you’d like, but for a custom type like a ScriptableObject you can go one better. Enter the PropertyDrawer class.
In the same way you can use the [CustomEditor(type)] attribute on an Editor-derived type to override the Inspector UI (and other things) for a component, you can use a custom PropertyDrawer to override the UI display for a custom field type. This means even if you don’t have a custom editor for a component, any fields of your custom type will still automatically display nicely in the Inspector. Neat.
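The wiring itself is minimal – here's a rough sketch of the skeleton (I'm assuming the field type is the DS_Context used later in this post):

using UnityEditor;

// Minimal sketch: the CustomPropertyDrawer attribute ties a drawer to a field type.
// DS_Context is assumed here from the field example shown later in this post.
[CustomPropertyDrawer(typeof(DS_Context))]
public class DS_ContextDrawer : PropertyDrawer
{
    // The GetPropertyHeight and OnGUI overrides discussed below live in here.
}
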
One thing that took a little while to figure out, however: if you just go ahead and draw some custom UI, it's assumed to fit within the confines of a normal field (ie. one row in the Inspector). If you're customizing the look of a reference to a ScriptableObject containing multiple fields, you need to override the GetPropertyHeight function so Unity knows how much space the UI is actually going to take up – otherwise it assumes the default height and other controls will draw all over yours.

public override float GetPropertyHeight(SerializedProperty property, GUIContent label)
{
    // The size of a normal field, plus however much extra you need.
    return base.GetPropertyHeight(property, label) + EditorGUIUtility.singleLineHeight * 22;
}

One slightly annoying issue is that you can't use the 'Layout' version of the EditorGUI class (EditorGUILayout) in a PropertyDrawer to automatically handle the sizing of UI controls. This means you have to calculate the Rect required by each control manually. I do this by simply incrementing an offset for each control, to avoid having to adjust lots of numbers by hand. Note the use of EditorGUIUtility.singleLineHeight to get the standard vertical size of a control.

public override void OnGUI(Rect position, SerializedProperty property, GUIContent label)
{
    EditorGUI.BeginProperty(position, label, property);

    int offset = 0;
    float lineHeight = EditorGUIUtility.singleLineHeight;

    // Calculate all the Rects upfront.
    Rect lightingRect = new Rect(position.x, position.y + lineHeight * offset++, position.width, lineHeight);
    Rect extinctionRect = new Rect(position.x, position.y + lineHeight * offset++, position.width, lineHeight);
    Rect inScatteringRect = new Rect(position.x, position.y + lineHeight * offset++, position.width, lineHeight);
    // --- You get the idea... ---

    // Draw the GUI.
    EditorGUI.LabelField(lightingRect, "Lighting:", EditorStyles.boldLabel);
    EditorGUI.PropertyField(extinctionRect, property.FindPropertyRelative("m_Extinction"));
    EditorGUI.PropertyField(inScatteringRect, property.FindPropertyRelative("m_InScattering"));
    // --- You get the idea, again... ---

    EditorGUI.EndProperty();
}

Also note the use of property.FindPropertyRelative – this is the UI for the ‘parent’ field, ie. the reference to the ScriptableObject, so to display the actual fields from the ScriptableObject we have to find them by name.

Now any time a component has a field such as:

public DS_Context myDeepSkyContext;

it will automatically be drawn in the Inspector, custom component editor or not, like so:

Note how the other fields continue drawing as normal before and after our ScriptableObject. Hooray.

Using Unity’s Object Picker Pop-up Properly

For DeepSky, I need the functionality to save and load presets, and ideally I wanted to use the standard object picker for consistency and to, you know, avoid work. When you click the little circle next to a field in the Inspector of one of Unity’s object types (eg. GameObject, Texture2D etc.), Unity pops up a little object picker to allow selection from objects in either the scene or the entire project. It’s a nice little dialog that is filtered to only show valid types for that field – wouldn’t it be great if you could use it for your own custom types? Turns out, you can. You can show it pretty easily with something like this (for my custom ScriptableObject type):

if (GUILayout.Button("Load Preset"))
{
    int ctrlID = GUIUtility.GetControlID(FocusType.Passive);
    EditorGUIUtility.ShowObjectPicker<DS_ScriptableContext>(null, false, "", ctrlID);
}

Note we need to get a new control ID to pass in – this lets you bring up more than one picker at a time and distinguish between them. So far, so not too difficult. But how do you actually get the result? The docs are, unfortunately, not too clear on this as there’s no example. You can get the object using EditorGUIUtility.GetObjectPickerObject(), and the picker window uses events to inform the calling OnGUI function when it’s OK to do so. To do something useful with the picker, we need to listen out for those events during our OnInspectorGUI function (or whichever OnGUI you called ShowObjectPicker from). You can update as you go by listening for ObjectSelectorUpdated; I just need the final result, so I’m only listening for ObjectSelectorClosed, like so:

// Check for messages returned by the object picker.
if (Event.current.commandName == "ObjectSelectorClosed")
{
    DS_ScriptableContext cxtSO = EditorGUIUtility.GetObjectPickerObject() as DS_ScriptableContext;
    if (cxtSO != null)
    {
        LoadFromContextPreset(cxtSO.Context);
    }
}

So that’s my custom type displaying nicely in the editor, just as if Unity natively knew what to do with it. Epic.

Clouds, By Perlin And Worley

Work continues on bringing things out of the ‘prototype’ phase and into something more production-worthy. One of the main visual features sitting in the ‘proven but not yet looking pretty’ camp was the volumetric cloud rendering tech. While most of the hard stuff was done, I decided to spend some time on the 3D noise textures driving the cloud shapes, which has led to revisiting the noise generator tool I wrote some time ago.

The noise generator is a little tool for making 3D textures filled with procedural noise that (hopefully) look like clouds. For the initial tests it was using the classic Improved Perlin 3D noise, with some hacks to make it tile seamlessly. These hacks were, well, hacky and not very robust. Also, while Perlin noise is a great basis, it’s not capable of creating the most realistic shapes on its own. My ex-colleagues in Guerrilla Amsterdam conveniently released their SIGGRAPH paper on cloud rendering for Horizon: Zero Dawn a couple of months ago, and one of the most interesting points was the additional use of Worley noise to create their cloud shapes.

Worley Noise

The core idea behind Worley noise is to set the intensity of a pixel based on its distance to the nearest randomly generated ‘feature point’. This creates gradients in ‘blobby’ shapes, with the highest intensities found at the boundaries between equally near feature points (corresponding to the Voronoi diagram, for the interested).

Using some ‘clever maths’, rather than pre-compute all the feature points, the algorithm finds the cube containing the pixel and generates the feature points using a pseudo-random number generator seeded with the cube’s hashed coordinates. The cube is aligned to integer boundaries (so a point p(3.65, 1.27) sits in cube c(3, 1)) and using its coordinates for the RNG ensures we get the same feature points each time and don’t need to store anything.

The shapes created by inverting the output values look closer to the puffy, rounded shapes of the various types of cumuliform clouds.
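
As a rough illustration of the idea, here's my own simplified 2D sketch with an arbitrary hash – not Worley's reference implementation:

using UnityEngine;

public static class WorleySketch
{
    // Single-octave 2D Worley: intensity is the distance to the nearest feature
    // point, inverted so the 'blobs' read as bright, puffy shapes.
    public static float Sample(float x, float y)
    {
        int cellX = Mathf.FloorToInt(x);
        int cellY = Mathf.FloorToInt(y);
        float minDist = float.MaxValue;

        // The nearest feature point may sit in a neighbouring cell, so check
        // the surrounding 3x3 block of cells as well.
        for (int j = -1; j <= 1; j++)
        {
            for (int i = -1; i <= 1; i++)
            {
                int cx = cellX + i;
                int cy = cellY + j;

                // Seed a PRNG with the (hashed) cell coordinates so the same
                // cell always yields the same feature point - nothing is stored.
                var rng = new System.Random((cx * 73856093) ^ (cy * 19349663));
                float fx = cx + (float)rng.NextDouble();
                float fy = cy + (float)rng.NextDouble();

                float dx = fx - x;
                float dy = fy - y;
                minDist = Mathf.Min(minDist, Mathf.Sqrt(dx * dx + dy * dy));
            }
        }

        // Invert: feature points become bright centres, cell boundaries dark.
        return 1f - Mathf.Clamp01(minDist);
    }
}

The 3D version just adds a third loop over the neighbouring cubes; making it tile comes a little later.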

So thanks to Ken Perlin and Steven Worley, we can create pretty convincing clouds entirely procedurally. The only drawback is both original algorithms are designed to be evaluated ‘on the fly’ during rendering, so baking the results into a texture suitable for use at different scales requires a few tweaks.

Repeating Noise. Repeating Noise. Repeating Noise.

Volume textures are pretty memory heavy, so they need to be small, but we still want lots of detail at small scales – both aspects require tiling textures at different frequencies. I found various bits of information around the Interwebs on making Perlin tileable, but most of it relates to the 2D case and simply uses existing 4D implementations and an arcane mapping of circles through the hypercube (incidentally, if anyone’s wondering what the Tesseract is from the Avengers movies – that’s the name for a 4D hypercube). To use the same technique here would require expanding both noise bases to a 6-dimensional hypercube and, while mathematically robust, that sounds like too much power for a mere mortal to wield.

Fortunately after getting my head around the actual algorithms for both Perlin and Worley, there’s a much easier way. The key is in the way both algorithms employ an integer grid of cubes.

Perlin noise, in a lengthy nutshell, works like this: for a point p(x, y, z), the algorithm finds the eight vectors from p to each vertex of the integer cube surrounding it. Each vertex also has a randomly assigned gradient vector, and it’s the dot products between these two vectors for each vertex that are interpolated to produce the final noise value. The interpolation is based on how far p is from the cube’s origin, passed through a curve to give a smoother result. It makes more sense in an image:

Perlin_Noise_Exp

As with Worley, a hash is generated from the cube’s coordinates meaning we get the same gradients for each point in that particular cube. The gradients themselves are the vectors from the center of the cube to each of the edges (12, with some repetition to pad it to 16). The actual theory is pretty straight-forward, but the original implementation uses some crazy bit-shifting maths, making it quite hard to understand by just looking at Perlin’s code. But then he did provide one of the foundations of procedural texture generation and win a Technical Achievement Academy Award, so he gets a free pass.
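
For anyone who’d rather read plain code than bit-shifts, here’s roughly the same structure spelled out – my own simplified sketch, with a cheap integer hash standing in for Perlin’s permutation table (and no tiling yet):

using UnityEngine;

public static class PerlinSketch
{
    public static float Sample(float x, float y, float z)
    {
        // Integer cube containing the point, and the point's position within it.
        int X = Mathf.FloorToInt(x), Y = Mathf.FloorToInt(y), Z = Mathf.FloorToInt(z);
        float fx = x - X, fy = y - Y, fz = z - Z;

        // Smoothed interpolation weights.
        float u = Fade(fx), v = Fade(fy), w = Fade(fz);

        // Dot product of each corner's gradient with the vector from that corner
        // to the point, then trilinear interpolation of the eight results.
        float c000 = GradDot(Hash(X,     Y,     Z    ), fx,      fy,      fz);
        float c100 = GradDot(Hash(X + 1, Y,     Z    ), fx - 1f, fy,      fz);
        float c010 = GradDot(Hash(X,     Y + 1, Z    ), fx,      fy - 1f, fz);
        float c110 = GradDot(Hash(X + 1, Y + 1, Z    ), fx - 1f, fy - 1f, fz);
        float c001 = GradDot(Hash(X,     Y,     Z + 1), fx,      fy,      fz - 1f);
        float c101 = GradDot(Hash(X + 1, Y,     Z + 1), fx - 1f, fy,      fz - 1f);
        float c011 = GradDot(Hash(X,     Y + 1, Z + 1), fx,      fy - 1f, fz - 1f);
        float c111 = GradDot(Hash(X + 1, Y + 1, Z + 1), fx - 1f, fy - 1f, fz - 1f);

        float x00 = Mathf.Lerp(c000, c100, u);
        float x10 = Mathf.Lerp(c010, c110, u);
        float x01 = Mathf.Lerp(c001, c101, u);
        float x11 = Mathf.Lerp(c011, c111, u);
        return Mathf.Lerp(Mathf.Lerp(x00, x10, v), Mathf.Lerp(x01, x11, v), w);
    }

    // Perlin's 6t^5 - 15t^4 + 10t^3 ease curve.
    private static float Fade(float t) { return t * t * t * (t * (t * 6f - 15f) + 10f); }

    // Cheap stand-in for the permutation table lookup.
    private static int Hash(int x, int y, int z) { return (x * 73856093) ^ (y * 19349663) ^ (z * 83492791); }

    // Pick one of the 12 edge gradients (padded to 16) and dot it with the offset.
    private static float GradDot(int hash, float dx, float dy, float dz)
    {
        int h = hash & 15;
        float a = h < 8 ? dx : dy;
        float b = h < 4 ? dy : (h == 12 || h == 14 ? dx : dz);
        return ((h & 1) == 0 ? a : -a) + ((h & 2) == 0 ? b : -b);
    }
}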

What all this faff means is: we can make the noise tile by manipulating the cube coordinates to make sure they wrap around and generate the same hash values. So as you approach the edge of the image we need to make sure the gradients being interpolated (for Perlin) and the feature points being generated (for Worley) are the same as the cubes on the opposite side. All of which is accomplished by passing the cube coordinates through this little bit of code:

// For Perlin:
private static int wrap(int n, int period)
{
    n++;
    return period > 0 ? n % period : n;
}

// For Worley
private static int wrap(int n, int period)
{
    return n >= 0 ? n % period : period + n;
}

The period is defined as the integer range over which the noise should loop, and the pixel coordinates are scaled to fit within it. So for a texture with a width of 128 pixels and a period of 6 the x coordinate for noise lookup is x * (period / 128). By adjusting the period we can change the size of the resulting noise. The period also needs to be adjusted for each octave of added noise, otherwise the looping can be seen. In my case that simply means doubling the period along with the frequency.
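
To make the scaling concrete, here’s a rough sketch of the octave loop as I think of it – the noise delegate and names are placeholders, not the generator’s actual code:

using System;

public static class NoiseLookupSketch
{
    // Rough sketch of scaling pixel coordinates into noise space and doubling the
    // period with each octave. 'noise' stands in for the tileable Perlin or Worley
    // sampler (x, y, z, period) - a placeholder, not the actual generator code.
    public static float Sample(Func<float, float, float, int, float> noise,
                               int px, int py, int pz,
                               int texSize = 128, int basePeriod = 6, int octaves = 4)
    {
        float sum = 0f;
        float amplitude = 1f;
        int period = basePeriod;

        for (int o = 0; o < octaves; o++)
        {
            // 'period' noise cubes span the whole texture, so scale by period / texSize.
            float scale = (float)period / texSize;
            sum += noise(px * scale, py * scale, pz * scale, period) * amplitude;

            // Double the period along with the frequency so every octave still tiles.
            period *= 2;
            amplitude *= 0.5f;
        }
        return sum; // normalized later so the texture uses the full [0, 1] range
    }
}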

Pictures Already!

So here’s the new spangly noise generator, showing a blend of Perlin and Worley noise:

NoiseGen_Perlin_Worley_Blended

Both are generated separately and then combined in the preview so I can adjust the mix without needing to regenerate the whole texture. Same for the brightness/contrast adjustments. There are also some normalization factors in there to make sure the output uses the full [0, 1] brightness range; when combining multiple octaves of noise it can tend to flatten the image otherwise.

And here’s a little GIF of the clouds in action using a new texture. This is also showing the new sky lighting on the clouds, using the dynamic ambient light probe, but that’s for another day!

Cloud_Animated

To Tell A Story Part 2

With a Game Manager system in place, it was time to hook up the Story Manager and make progression actually link to something meaningful. This required a few changes to the story XML format and the integration of a localization system. The game may well end up only in English, but I’m building in the flexibility now for additional language support – just in case.

Localization is one of those boring (to me!) issues that is very tempting to put off, but can really only become more complicated as the project progresses. Luckily, thanks to the Unity Asset Store I don’t need to roll my own localization system, which has taken a huge amount (well, all) of the work out of it. I’m currently using the free version of Smart Localization 2.2.1 for testing and so far it’s been rather ace.

I’d already included support for localization keys in the XML definition of a story, but a little re-jigging was needed to tidy it up and generally improve the schema. It also became apparent that each ‘chapter’ would need to support multi-line subtitles rather than just trying to dump all the text to screen at once, so the text element is now split into ‘phrases’ that display in sequence while the audio is playing.
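
Purely for illustration, a phrase-based chapter might look something along these lines (element and key names here are made up – this isn’t the final schema):

<chapter locationID="LIGHTHOUSE" month="SEPT">
    <audio>Audio Resource ID</audio>
    <phrases>
        <phrase locKey="STORY1_CH1_P1" />
        <phrase locKey="STORY1_CH1_P2" />
        <phrase locKey="STORY1_CH1_P3" />
    </phrases>
</chapter>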

StoryXMLNotepad

Listening To @NeilHimself

Anyone looking closely at the screenshot above (and seriously, why wouldn’t you look closely at a picture of text?) might notice my story of choice for testing is ‘The Price’ by the rather wonderfully awesome (and beardy) Neil Gaiman. It’s a good placeholder, as I have both the text and the audio read by the man himself to hand, so I can check the full process of splitting the audio track, setting up the story XML and playing it back in-game*. The only downside is occasionally forgetting I’m trying to work and just listening to him tell me a tale…

I’m afraid there aren’t any pretty pictures this week as it’s been all programmery stuff – but hey, it’s progress.

*Note for the legal types – this is for personal testing purposes only and is NOT included in any builds or otherwise distributed. All stories in the final game will be original. Which is a shame really, because I’d love to play a game written and narrated by Neil Gaiman.

Getting In The Game Flow

After a bit of hair-pulling, the main game flow is now up and running. You can progress from a title screen, through a rudimentary main menu to each scene in the game (with fantastic placeholder art), with support for pausing as well. That’s basically the whole game ‘scaffolding’ in place now.

Once everything started coming together there were inevitably some issues that required a bit of refactoring. The most difficult was working out exactly what the separation between game states and Unity scenes should be.

Getting In A Right State

The abstract game flow is implemented as a pretty standard Finite State Machine (FSM), with discrete states for things like the front end, intro, non-interactive sequences, and each monthly chapter. But these don’t necessarily relate directly to Unity scenes; the environment is split into ‘acts’ (no prizes for guessing there are three of them…), with several monthly chapters taking place within each area.

Why is this an issue? Well, Unity likes to keep everything neatly within scenes, and on loading a new scene it chucks everything out. While you can flag objects as persistent (using the DontDestroyOnLoad function) or use additive scene loading, it can get a little awkward to manage with a lot of stuff. So this really suggests a complete separation between ‘what state is the game in right now?’ and ‘which area and data are we playing in?’.

Each of my Unity scenes now has an object that deals with player progression within that scene – and makes requests to the game manager to change state accordingly. As the game manager exists outside of the scene, it is persistent and can deal with the overall game flow, while all the scene-specific stuff is handled by the ‘in-scene’ manager for whichever scene is currently loaded. Here’s a diagram, because diagrams are fun:

Game_Manager_Act_Scene_Flow

The Game Manager doesn’t care how progression happens, and the Act Scene doesn’t care that it’s just one part of a larger game.
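
For reference, the ‘exists outside of the scene’ part is just the usual DontDestroyOnLoad singleton pattern – a minimal sketch, not the actual GameManager:

using UnityEngine;

// Minimal sketch of a persistent singleton manager - not the actual GameManager,
// just the DontDestroyOnLoad pattern it's built on.
public class PersistentManager : MonoBehaviour
{
    public static PersistentManager Instance { get; private set; }

    private void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject); // a copy already survived an earlier scene load
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject); // survive scene changes
    }
}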

Cleaning Up

With that working, I’ve spent a bit of time refactoring to make things a bit tidier so my future self doesn’t get angry. All the Unity scenes that make up the different acts can be paused, so I shifted that functionality out into a base class. Nothing fancy – it just provides a single place to access the pause menu and tell the Game Manager we want to pause. I’ve also made the standard Awake(), Start() and Update() private; these instead call into AwakeScene(), StartScene() and UpdateScene(), which can be overridden without accidentally hiding the standard calls (as I will probably forget to call base from the derived classes). Ooo, code:

namespace LightBulbBox.Scenes
{
    [AddComponentMenu("")]
    public abstract class PausableScene : MonoBehaviour
    {
        protected GameManager m_GameManager;

        private void Awake()
        {
            m_GameManager = GameManager.Instance;
            AwakeScene();
        }

        // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        // SOME OTHER STUFF LIVES HERE
        // (the private Start() that calls StartScene(), pause menu access, etc.)
        // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

        private void Update()
        {
            if (Input.GetButtonUp("Pause"))
            {
                m_GameManager.m_IsPaused = !m_GameManager.m_IsPaused;
                if (m_GameManager.m_IsPaused)
                {
                    Time.timeScale = 0;
                }
                else
                {
                    Time.timeScale = 1;
                }
            }
            UpdateScene();
        }

        public abstract void AwakeScene();
        public abstract void UpdateScene();
        public abstract void StartScene();
    }
}

Pausing does nothing amazing at the moment, just notifies the Game Manager and sets the time scale.

The various game states used by the FSM are now based on C# generics with a standard interface as well, which has moved all the shared code into one nice tidy place while maintaining individual types. The Game Manager just sees the IGameState interface, so again, doesn’t care what the states actually do as long as they implement Enter, Exit and Update methods.
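
Roughly speaking, the shape of it is something like this – a simplified sketch with illustrative names, not the actual classes:

// Simplified sketch with illustrative names - not the actual game code.
public interface IGameState
{
    void Enter();
    void Update();
    void Exit();
}

// Shared plumbing (timers, transition requests, logging, etc.) lives in the
// generic base, while each concrete state keeps its own type.
public abstract class GameStateBase<TState> : IGameState where TState : GameStateBase<TState>
{
    public virtual void Enter() { }
    public virtual void Update() { }
    public virtual void Exit() { }
}

public class FrontEndState : GameStateBase<FrontEndState>
{
    public override void Enter() { /* show the title screen */ }
}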

Future me is sighing with relief.

Pictures and Post-Its

A bit of everything seems to have happened this week. The initial implementation of the Game Manager system went in, followed by lots of learning, some sketching and an explosion of Post-It notes.

Although this isn’t the most complex game in terms of systems, I’m still mindful of not making everything too ‘one-off’, without considering the potential for future re-use and extension. I’m also wary of having to invest too much time in refactoring later on, given that I’m a one-man band and time is precious. So a lot of this week has been reading up on design patterns and brushing up on some areas of C#.

This description of Dependency Injection on Code Project in particular caught my eye for being nice and to the point.

Going Post(it)al

Armed with the Power of Knowledge then, the main game flow and support systems started to take shape. When designing, I like to step away from the computer and actually write things down with, like, a pen and stuff – which results in a Post-It¹ note wall:

Design_Postits

Except I don’t have smooth walls, so my District 9 one-sheet has to make do. I also appear to have inadvertently followed the shape of the spaceship.

Top tip: buy a pack with multiple colours to help with clarity – here the green ones are abstract descriptions of the general game flow, yellow the actual game states (as used in the Game Manager state machine), orange are systems classes and the disgusting hot pink are the interfaces between them.

There’s plenty more fleshing out to do, but the major systems are all represented on the wall now.

¹other brands of sticky notes are available.

Sketchy Ideas

To even out the sides of my brain being used, I also started sketching out ideas for the story locations. As each story relates to one or more locations in the environment, these are the landmarks that will populate Red’s world. The first story, and where the game begins, revolves around a lighthouse and attached cottage:

Lighthouse_Sketch

At this stage just trying to get a feel for the landscape shapes. Composition is… suspect.

The next couple of weeks will be very code-heavy, so I’ll probably start going into more detail on specific systems like the Game Manager.

Which will be super fun!

To Tell A Story

This is going to be a largely technical post documenting the story management system so I’ll apologise in advance for its lack of pictures. But hey, that’s one of the major game systems implemented now so that’s nice.

The world Red explores contains a number of stories to be discovered, each linked to a location and revealing themselves in chapters as the months progress. As a technical artist, I’m all about the workflows and pipelines, so naturally I’ve tried to come up with a system that makes it really easy to drop stories into the game. Making the whole lot data-driven was important to minimize the amount of custom code needed to get a story working, and as it happens the structure of my game doesn’t require a massively complex system anyway. The goal was to be able to write the stories in external tools and simply save them somewhere in the Unity project for the game to automatically pick up.

Brilliantly, it works!

Serializing A Serial

I’ve not really touched on Serialization/Deserialization before in my dabblings with tools code (which is kind of surprising in hindsight), but for my story needs it seemed the way to go. I wanted to be able to write the stories in a simple XML format, with extra attributes and bits to tell the game what to do with it, as well as the actual text and audio resources. Something like this (simplified for blogability):

<story title="Story Title">
    <chapters>
        <chapter locationID="LIGHTHOUSE" month="SEPT">
            <text>Tell an amazing story here.</text>
            <audio>Audio Resource ID</audio>
        </chapter>
    </chapters>
</story>

This is backed up by an XML schema to highlight missing attributes/fields and generally tell me if I’m being forgetful. I’ve also made a template file, so adding a new story is simply a case of copying it and filling in the required bits. All pretty simple stuff, but helps with the flow and avoids errors later on.

To actually write the stories, I’m using XML Notepad and Notepad++ with the XML Tools plugin – I haven’t decided which I prefer yet, but XML Notepad seems perfect for my needs.

The great thing about this approach is the XML layout corresponds directly to C# objects. The Story Manager loads all the XML files it finds in the story directory and the whole lot simply gets deserialized on loading. I don’t have to do anything much at all:

// Needs System.Xml.Serialization and System.IO.
XmlSerializer xmlSer = new XmlSerializer(typeof(Story));
using (FileStream fStream = new FileStream(strPath, FileMode.Open))
{
    return xmlSer.Deserialize(fStream) as Story;
}

For reference, part of the Story object is defined like so:

    [XmlRoot("story")]
    public class Story
    {
        private Chapter[] m_Chapters; // backing field, exposed via the Chapters property below
        private string m_Title;
        private int m_CurrentChapter;
        private bool m_Finished;

        [XmlArray("chapters"), XmlArrayItem("chapter")]
        public Chapter[] Chapters
        {
            get { return m_Chapters; }
            set { m_Chapters = value; }
        }

        [XmlAttribute("title")]
        public string Title
        {
            get { return m_Title; }
            set { m_Title = value; }
        }

Telling A Tale

For the stories to progress, there are trigger objects around the key locations that inform the Story Manager when Red has entered them. To avoid unnecessary checks, only the locations that are actually going to be used during a particular month will be enabled. Once the story chapter has been triggered, the manager disables the location object until it’s needed in a later month.

These location triggers are simple GameObjects with a public location ID that corresponds to the IDs used in the story XMLs. To help avoid spelling errors, on loading a story the manager checks that there is in fact a GameObject in the scene with the required location ID and dumps warnings if not.
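
For illustration, such a trigger could look something like this (a rough sketch – the StoryManager call and names are made up):

using UnityEngine;

// Rough sketch of a location trigger - the ID matches the locationID attributes
// used in the story XML. StoryManager.Instance.OnLocationEntered is made up here
// to stand in for however the manager is actually notified.
public class StoryLocationTrigger : MonoBehaviour
{
    public string m_LocationID = "LIGHTHOUSE";

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            // Tell the Story Manager that Red has reached this location; it
            // decides whether a chapter should actually play this month.
            StoryManager.Instance.OnLocationEntered(m_LocationID);

            // Disable until the location is needed again in a later month.
            gameObject.SetActive(false);
        }
    }
}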

Once the Story Manager has established that a story should be ‘read’, it asks the story for the current chapter text and audio resources, updating the UI and making noise accordingly. This will get a little more complicated if/when I have to deal with localization, but it’s pretty simple so extending it will be no bother.

Brain dump over. As usual, it’s good to get another techy bit out the way.