Friday 18 November 2011

Water spreading, and other videos

Example of a small planet:
Editing Terrain Example:



And finally, here it is: spreading water! Milcho did a great job and you can see the results in the video below. Sorry for the poor quality and editing - I was in a hurry and made the video for debugging purposes, but I thought I'd share it with you. Once a better version is ready I will re-edit this post and write some more about the water too.


Friday 28 October 2011

Is it Oblivion?

For those who (maybe) are waiting for some news, here are two screenshots from the latest build of our engine. Yes, you see it right - we got water and tall grass implemented, and the whole terrain received some visual updates. There is still a lot of work to do, but I just wanted to let you know we're working, and we'll soon be posting some interesting technical posts about our latest achievements :)

Update: New download link posted (see on the left), and new video!




Monday 10 October 2011

Triplanar Texturing and Normal Mapping

Normal mapping was something we had talked about for a while, but there were a number of other texturing tasks to finish first.

For our texturing, we had no choice but to use tri-planar texture mapping - since we generate an actual planet, the terrain can be oriented in any direction. Combined with the fact that the terrain is diggable, this meant the texturing had to adapt to any angle. Triplanar mapping was the perfect solution.

Doing normal mapping on top of triplanar mapping may seem hard at first, but it's just a little harder than triplanar texture mapping.

To obtain the final fragment color for triplanar mapping, you basically sample the same texture as though it were projected along each of the three axis-aligned planes (see diagram on the right).

Once you have a sample from each of these planar projections, you combine the three samples depending on the normal vector of the fragment. The normal vector essentially tells you how closely the surface aligns with each projection plane. So for a mostly horizontal surface, the normal vector is nearly vertical, and you sample mostly from the horizontal (top-down) projection.
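To make the weighting concrete, here is a small C++ sketch of the blend. The real version lives in our GLSL fragment shader; the names and the absolute-value weighting here are illustrative (the shader snippet further down shows the exact weights we use):

    #include <cmath>

    struct Color { float r, g, b; };

    // Blend three planar projections of the same texture according to the fragment
    // normal (nx, ny, nz). sampleXY/sampleXZ/sampleYZ are the colors already sampled
    // from the texture as projected onto the XY, XZ and YZ planes.
    Color TriplanarBlend(Color sampleXY, Color sampleXZ, Color sampleYZ,
                         float nx, float ny, float nz)
    {
        // The closer the normal is to a plane's axis, the more that projection contributes.
        // Taking the absolute value makes opposite-facing surfaces blend the same way.
        float weightXY = std::fabs(nz);
        float weightXZ = std::fabs(ny);
        float weightYZ = std::fabs(nx);

        // Normalize so the three weights sum to 1.
        float total = weightXY + weightXZ + weightYZ;
        weightXY /= total;  weightXZ /= total;  weightYZ /= total;

        Color out;
        out.r = weightXY * sampleXY.r + weightXZ * sampleXZ.r + weightYZ * sampleYZ.r;
        out.g = weightXY * sampleXY.g + weightXZ * sampleXZ.g + weightYZ * sampleYZ.g;
        out.b = weightXY * sampleXY.b + weightXZ * sampleXZ.b + weightYZ * sampleYZ.b;
        return out;
    }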

This same principle can be used to compute the normal from a normal map. Instead of sampling from the diffuse texture, you sample from the normal map; the RGB color you get gives you the normal vector as seen in that plane's projection. Then you combine these three normals using the same weights used to blend the texture samples.

Basically you obtain three normal vectors, one per plane, each expressed in a coordinate system aligned with the texture on that side.

In the picture on the right, the red, green and blue arrows are the axes of each projection of the texture, while the dark purple arrow is a sample normal vector. As you can imagine, the closer the fragment's normal is to a plane's axis, the more it samples from that plane. One difference from plain texture mapping is that when the fragment's normal is close to a projection's axis but faces the opposite direction, you have to flip the result sampled from that normal map.

This is what the code for combining the three projected normal-map samples into one normal looks like in our terrain shader:

// sample the normal map along each projection plane (XY, XZ, YZ)
vec4 bump1 = texture2DArray(normalArray, vec3(coordXY.xy, index));
vec4 bump2 = texture2DArray(normalArray, vec3(coordXZ.xy, index));
vec4 bump3 = texture2DArray(normalArray, vec3(coordYZ.xy, index));

// reorient each sampled normal (RGB) so its components line up with the world axes of its projection
vec3 bumpNormal1 = bump1.r * vec3(1, 0, 0) + bump1.g * vec3(0, 1, 0) + bump1.b * vec3(0, 0, 1);
vec3 bumpNormal2 = bump2.r * vec3(0, 0, 1) + bump2.g * vec3(1, 0, 0) + bump2.b * vec3(0, 1, 0);
vec3 bumpNormal3 = bump3.r * vec3(0, 1, 0) + bump3.g * vec3(0, 0, 1) + bump3.b * vec3(1, 0, 0);

// blend the three normals with the same planar weights used for the diffuse texture
return vec3(weightXY * bumpNormal1 + weightXZ * bumpNormal2 + weightYZ * bumpNormal3);

Where weightXY, weightXZ and weightYZ are determined like so from the normal that's calculated at that fragment:
weightXY = fNormal.z;
weightXZ = fNormal.y;
weightYZ = fNormal.x;

I realize it sounds a bit counter-intuitive that we need a normal before we can calculate the per-fragment normals, but this base normal can simply be obtained by other means, such as per-vertex normal calculations. (We obtain it from density difference calculations on the voxels.)
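For completeness, a density-difference normal is essentially the negated gradient of the density field. A minimal sketch, where density(x, y, z) is a hypothetical accessor into the stored voxel densities (positive inside terrain), could look like this:

    #include <cmath>

    // Hypothetical accessor into the stored voxel densities (positive inside terrain).
    float density(int x, int y, int z);

    struct Vec3 { float x, y, z; };

    // Approximate the surface normal at a voxel as the negated gradient of the density
    // field, estimated with central differences over the neighbouring voxels.
    Vec3 NormalFromDensity(int x, int y, int z)
    {
        Vec3 n;
        n.x = density(x - 1, y, z) - density(x + 1, y, z);
        n.y = density(x, y - 1, z) - density(x, y + 1, z);
        n.z = density(x, y, z - 1) - density(x, y, z + 1);

        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }   // guard against flat regions
        return n;
    }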

Finally, to get good results you need a good normal texture. We only had time to create one (neither of us is a graphics designer), so here's a video of the rock triplanar normal map, with a short day length on our planet:

Friday 16 September 2011

In the desert...

Hello there whoever is reading our blog!

(by the way if you have any questions, don't be shy and post comments here)

I'm writing a short report in between some more informational posts to let you know we're working hard on terrain LOD, volumetric clouds and tweaking sky shaders. Once we're happy with the results we're planning to move on to objects and fluids. 

Below you can see some shots from a day spent in our desert biome, which also received some work, so it looks more like a desert. Hope you enjoy it - even if it's just the beginning, with very basic shaders. Stay tuned!
Sunrise
Almost midday
Sunset

Sunday 11 September 2011

Day / night on the planet, part 2

The next task, after introducing the sun, was dealing with the sky.

So far, our sky has been left a dull solid color. It would be trivial to make the sky sphere shader change the color of the sky, but the color needed to correspond to the sun's position in the sky as seen by the player.

There are several good solutions posted on the web. All attempt to simulate atmospheric scattering. However, most of those methods involve heavy calculations that are perhaps not best suited for a real-time sky. Figuring we would need something simpler, I decided to tackle the problem on my own.

The solution involves simply drawing a sphere around the player, and rendering the sky on that. Since the sky can be thought of as an incredibly large sphere that's far away, this approach suited us. And since we have no plans to allow the player to leave the planet, there is no need to simulate the atmosphere as seen from outside it.

Consider the following diagram:
This image (which is not to scale) shows the rough idea of how we draw the sky - it is drawn on the tiny sphere (the light blue circle in the diagram) that surrounds the player everywhere he goes.
The three main things that determine the color of the sky are the direction to the sun, the 'up' direction (used for horizon calculations), and the direction to the fragment being drawn.

Given these three vectors in the fragment shader, we can compute the three dot products between them - dotSunUp, dotSunFrag, dotFragUp. Since the vectors are normalized, each dot product is a value in the [-1, 1] range, and together they give us enough information to draw the sky.

The main check here is the position of the sun in the sky. There are three cases - the sun is high in the sky, the sun is near the horizon, or the sun is below the horizon. The three cases correspond to daytime, sunrise/sunset and nighttime.
Here's a brief explanation of the cases.

1. if dotSunUp > TransitionConstant
In this case the sun is high up in the sky, so the entire sky can be drawn blue. It's also possible to lighten the near-horizon fragments for a more realistic look. This lightening is done using the dotFragUp value, which determines how close to the up direction (and the horizon) a fragment is.

2. if dotSunUp > NightTransitionConstant
In this case we are in a sunrise or sunset phase. The sky color is blended between a day and a night color, depending on the dotSunUp value. Here we also start to add a glow that surrounds the sun. This glow is calculated using the dotSunFrag value, which determines how close to the direction of the sun a fragment is.

The near-horizon fragments can also be tinted differently here to increase realism.

3. All other cases
In this case it is nighttime. The sky color is simply set to the night color, again with possible variation for the near-horizon fragments.

The two constants mentioned above can be tuned to lengthen or shorten the transition between night and day, as well as to change how long the glow persists after the sun has gone below the horizon. (Note that a dotSunUp value < 0 means the sun is below the horizon, and the NightTransitionConstant can be set to a negative value - in the current setting it's -0.2.)

Finally, after all these checks are done, an additional check can be performed using dotSunFrag, to see if the fragment being drawn is really close to the sun's position - in which case the fragment is part of the sun's disk. A similar method is used to set the sun's glow.
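Putting the three cases together, here is a rough CPU-side C++ sketch of the same logic. The colors, the constants and the glow falloff are placeholders, and the near-horizon tint and sun-disk check are omitted - the real version lives in the sky sphere's fragment shader:

    struct Color { float r, g, b; };

    // Linear interpolation between two colors, t in [0, 1].
    static Color Lerp(Color a, Color b, float t)
    {
        return { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    }

    // Placeholder palette and transition constants.
    const Color kDayColor     = { 0.35f, 0.55f, 0.95f };
    const Color kNightColor   = { 0.02f, 0.02f, 0.08f };
    const Color kSunGlowColor = { 1.00f, 0.75f, 0.40f };
    const float kTransition      =  0.2f;   // sun counts as "high" above this
    const float kNightTransition = -0.2f;   // below this it is full night

    // dotSunUp and dotSunFrag are dot products of unit vectors, so they lie in [-1, 1].
    Color SkyColor(float dotSunUp, float dotSunFrag)
    {
        if (dotSunUp > kTransition)
            return kDayColor;                    // case 1: sun high in the sky

        if (dotSunUp > kNightTransition)
        {
            // case 2: sunrise / sunset - blend between night and day by sun height
            float t = (dotSunUp - kNightTransition) / (kTransition - kNightTransition);
            Color base = Lerp(kNightColor, kDayColor, t);

            // add a glow around the sun, strongest where the fragment looks toward the sun
            float glow = dotSunFrag > 0.0f ? dotSunFrag * dotSunFrag : 0.0f;
            return Lerp(base, kSunGlowColor, glow);
        }

        return kNightColor;                      // case 3: night
    }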

Finally, here's a short video of the sunrise and sunset on the planet. There's still some tweaking to be done on the colors, as well as the mentioned near-horizon color changes. Considering the fragment shader is based entirely on dot products and linear interpolation (both very fast on a GPU), the results are very satisfactory for a real-time implementation.




Monday 29 August 2011

Day / night on the planet, part 1

Problem: vertices with normals pointing in the sun's direction are lit even when the sun is on the other side of the planet.
If you looked at our Screenshots section you probably noticed something strange about those screenshots - most of them are really dark and have some small areas randomly lit. There is a reason for this behavior and I'm going to reveal it now - our world is actually a true planet! Before, we just had regular flat terrain, and our sun was set somewhere in the sky above and lit the terrain correctly. Once we switched to a planet, we ended up with one part of it being lit properly and another being mostly dark. Because of the nature of a planet, we just couldn't have all the terrain lit at once anymore. What we needed was a day/night cycle.

Now, a few words about the planet - we decided to go that way because we thought it would add a nice realistic touch to how players experience the world - for example, they can actually walk all the way around it and arrive back at the place from which they departed. There are a lot of other cool things about having a planet, but it also introduces new problems. One of them is how diffuse sunlight works.

The first problem was easily solved - our sun was static and didn't really move. It turned out that all the code for moving it had already been written by Milcho, but the Update() function for the planet wasn't being called. Once I fixed that, we got a day/night cycle with the sun circling around the planet (that's of course not how it happens in the real world, but moving the planet around the sun would be much harder and not really worth the effort, so we use the usual simplistic model). One major problem remained - because some vertex normals pointed downwards towards the planet core, those vertices were lit by the sun while it was on the other side of the planet! This is the reason for the strangely lit holes on our screenshot page.

After a quick brainstorm we found a solution that had a chance of success - we needed a "horizon" of some kind. Because we no longer had flat terrain, we couldn't just hardcode the horizon to be, for example, the XZ plane. For each vertex on the planet's surface the horizon is different. This is what we came up with:


Problem: vertex is lit even at night because its normal points towards the sun
Solution: adding a "fake" horizon that determines the area where the sun is "active" for a given vertex

Our horizon is a plane that goes through the vertex position and is perpendicular to the vertex's up vector. We compute the dot product of the up vector (U) and the vertex-to-sun vector (VS). If the result of the dot product is negative, the sun is below our horizon (night); if it's positive or zero, it's above the horizon (day).


The pseudo-code in the GLSL vertex shader looks like this:



    vec3 upVector = normalize(vertexPosition - corePosition);
    vec3 vertexToSun = normalize(sunPosition - vertexPosition);
    float sunInHorizon = dot(upVector, vertexToSun);

    float sunIntensity = 1.0;
    float sunFade = 0.2;   // sunlight fades when the dot product is less than or equal to 0.2

    if (sunInHorizon < 0.0)
    {
        // sun below the horizon - no diffuse light
        sunIntensity = 0.0;
    }
    else if (sunInHorizon <= sunFade)
    {
        // sun near the horizon - fade the sunlight
        sunIntensity = sunInHorizon / sunFade;
    }

    diffuse = sunIntensity * diffuseLight;

This way we solved the problem of lighting vertices whose normals point in the sun's direction while the sun is on the wrong side of the planet, and achieved a nice day/night transition over the whole planet. A timelapse of this effect can be seen in this video:


It's probably still far from perfect, but it works OK for now. In the next part of the day/night series I'd like to optimize and correct some of the things presented today, and also add day/night effects to our skydome, which right now stays white regardless of the time of day.

Wednesday 24 August 2011

Bottlenecks and smooth noise functions

Over the past few days, we've been trying to improve the current bottlenecks that slow down the engine.

The first bottleneck was simply a graphics one - how many polygons can be drawn on screen. The standard solution is frustum culling. In our case this was a fairly easy task, since all our visual data is separated into 'visual blocks'. Each one is simply a cube, 12 m on each side, which holds the visual data for that region. Frustum culling then simply means checking whether any of a block's corners is inside the view frustum; if none of them are, drawing it is skipped.
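As a rough sketch of that test (the plane and vector types here are simplified placeholders, not our actual classes), a block is drawn only if at least one of its eight corners lies inside all six frustum planes:

    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 normal; float d; };   // point p is "inside" when dot(normal, p) + d >= 0

    // True if the point lies on the inner side of all six frustum planes.
    static bool PointInFrustum(const Plane frustum[6], const Vec3& p)
    {
        for (int i = 0; i < 6; ++i)
        {
            const Plane& pl = frustum[i];
            if (pl.normal.x * p.x + pl.normal.y * p.y + pl.normal.z * p.z + pl.d < 0.0f)
                return false;
        }
        return true;
    }

    // A visual block is drawn only if at least one of its eight corners is visible.
    bool VisualBlockVisible(const Plane frustum[6], const Vec3& minCorner, float size)
    {
        for (int i = 0; i < 8; ++i)
        {
            Vec3 corner = { minCorner.x + ((i & 1) ? size : 0.0f),
                            minCorner.y + ((i & 2) ? size : 0.0f),
                            minCorner.z + ((i & 4) ? size : 0.0f) };
            if (PointInFrustum(frustum, corner))
                return true;
        }
        return false;
    }

(A corner-only test can occasionally cull a block whose corners are all outside the frustum while part of its volume is still visible, but with relatively small blocks that case is rare.)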

Another improvement to graphics was reducing the sheer number of draw calls. As you can imagine, with one visual block being a 12x12x12 m cube, we need quite a few of them to get a good view distance. One way to decrease the number of draw calls is to add only the non-empty blocks to a list when rebuilding, and to draw only that list. The downside is that in order to keep that list accurate, a rather large number of blocks have to be rechecked when the player moves. This is an acceptable trade-off, for the most part at least.

But another, even worse bottleneck is the sheer amount of data that has to be covered. 21*21*21 = 9261 visual blocks give us a little over 120 m of view distance on each side of the player (21 blocks * 12 m = 252 m across), and cover a huge volume of about 16 million cubic meters - which means we need about 16 million data samples to construct that volume!

This presents a generation bottleneck - the terrain generation function must be fast enough to generate that much data in a reasonable time. Of course, once generated, the data is stored to disk and can later be read from there.
For generating data, we were able to obtain decent-looking terrain at approx. 16 datablock generations per second, which works out to roughly 27,000 data points generated per second. When loading datablocks from disk, we reached speeds between 100 and 200 datablocks/second - an average of about 259,000 data points per second.

This leads to the major problem in this fiasco: the speed at which the density function operates. For our project we currently use libnoise, with something like 7 independent one-octave modules and two two-octave modules. Four of these (including one of the two-octave ones) are combined at different frequencies and amplitudes to generate the terrain, while the other three are used for picking biomes and determining oceans, seas and mountain areas.
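For readers unfamiliar with how such modules get combined, the idea is simply a weighted sum of noise evaluated at different frequencies and amplitudes. A generic sketch (noise3 here is a hypothetical stand-in for a libnoise module returning values in roughly [-1, 1]):

    // Hypothetical smooth 3D noise in roughly [-1, 1] (e.g. one libnoise Perlin module).
    float noise3(float x, float y, float z);

    // Sum several octaves: each octave doubles the frequency and halves the amplitude,
    // adding progressively finer detail to the terrain.
    float FractalNoise(float x, float y, float z, int octaves)
    {
        float sum = 0.0f;
        float frequency = 1.0f;
        float amplitude = 1.0f;
        for (int i = 0; i < octaves; ++i)
        {
            sum += amplitude * noise3(x * frequency, y * frequency, z * frequency);
            frequency *= 2.0f;
            amplitude *= 0.5f;
        }
        return sum;
    }

Every octave is another full set of gradient lookups and interpolations, which is why the total number of noise evaluations, rather than the choice of noise algorithm, ends up dominating generation time.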

For those familiar with how Perlin noise works, this means a huge number of lookups and trilinear interpolations, which makes it slow, slow, slow... Unfortunately, the slowness is largely due to the sheer number of points generated, not only the speed of the noise. A quick switch to an implementation of Simplex noise (also authored by Ken Perlin) showed no significant improvement in speed (the measured speed depends not only on the generation itself but on a number of other factors). Simplex noise may be faster in higher dimensions, but we only use 3D noise anyway, and while straightforward speed tests may show it faster in lower dimensions, it is still comparable to Perlin noise when used in a real-world application. Its speed isn't the only thing it improves over Perlin, of course, but for now it won't help our bottleneck.

The only way to help the generation bottleneck is to redesign the density function to use fewer noise modules, and to combine them in more ingenious ways.

The good thing is that, while this is a bottleneck, it only applies when the terrain is first generated. Most of the time players will move around already-generated terrain, which, as mentioned, is much faster to load. There's also the possibility of pre-generating terrain while nothing else is pending - but that has the drawback of storing excessive data, so it too must be balanced.

In other words, without a more major overhaul, the speed of the density function is not likely to improve significantly just by switching noise generators.

Sunday 21 August 2011

The importance of data and visuals separation

At the start of the project, some four months ago, the idea was only for procedural terrain, without editing. At that point, what we now call 'visual blocks' was the only data structure around.

Visual blocks are simply a collection of cubes with density values on each corner. These cubes naturally share corners - at most 8 cubes per corner. Visual blocks were generated on the fly, extracting a surface from the density function using an implementation of the Marching Cubes Algorithm (now patent-free). This happened without storing any data to disk, as the terrain was not editable and thus the density function described it perfectly at all times.

When the other guy joined the project, we decided to go for editable terrain. This meant that we had to store terrain data on disk, since once edited, the density function alone would no longer be an accurate representation. In the previous post, I mentioned how we store densities discretely.

Now, the other problem when storing data is somewhat tricky. Neighboring visual blocks share densities along their border. These densities can be considered part of both visual blocks. So the question was: do we store this data as part of both visual blocks, or just one?

Storing it in just one visual block would require loading the whole neighboring block to obtain this data, and even worse, could lead to recursive calls: the newly loaded visual block would require the next one, and so on. There are ways of avoiding this, but none of them are particularly pretty.


Storing the densities as part of both visual blocks seems wasteful. Considering that a visual block is cubical and all of its sides border another block, a large number of densities would be duplicated.

And perhaps most importantly, storing the data on a per-visual-block basis ties the underlying data structure to the basic unit of display. If we wanted larger visual blocks (say, for LoD), we couldn't have them.

The solution was to completely detach the underlying data from the visuals. In something called the Terrain Data Manager (which is described to some extent in the project wiki here), we manage both writing/reading from disk and the data currently in memory. The TDM stores datablocks - which are completely unrelated and independent from visual blocks. Datablocks are non-overlapping but adjacent, and are the basic atomic unit of data that can get loaded or generated.
The picture on the right illustrates how a visual block can overlap parts of several datablocks.
Since the datablock is the smallest unit of data we load, it made sense for its size to be small, but due to the cost of reading/writing to disk, it could not be too small. The current implementation holds the datablock size at 20 voxels along each axis (in other words a 20 m cube, since voxels are stored every 1 m), while the visual block size is still being experimented with. The separation of these two ideas gives us a large amount of freedom.

A visual block can be of any size and range, while the underlying data management remains untouched. The visual blocks don't even have to be of a size that's a multiple of - or anywhere close to - that of the datablock. This allows tweaking both the visual block size and the datablock size independently, to optimize the time required for building and generating. It also eliminates the problem of storing duplicates or causing recursive calls.

Further, the datablocks are the only data that actually gets generated - and stored. Visual blocks no longer concern themselves with whether data comes from disk or is freshly generated - they only request data from the TDM. The TDM hides all the internals and only returns certain things, like the density at a specific point, or the voxel type (dirt, sand, etc.) at a point.
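To give a feel for this separation (the class and method names below are illustrative, not the actual TDM interface), a visual block only ever asks for the density at a point, and the manager works out which datablock that point falls into:

    #include <map>

    const int kDatablockSize = 20;   // voxels (and meters) along each axis

    struct BlockIndex
    {
        int x, y, z;
        bool operator<(const BlockIndex& o) const
        {
            if (x != o.x) return x < o.x;
            if (y != o.y) return y < o.y;
            return z < o.z;
        }
    };

    struct Datablock
    {
        signed char density[kDatablockSize][kDatablockSize][kDatablockSize];
    };

    class TerrainDataManager
    {
    public:
        // The only thing visual blocks ever call: density at an integer voxel position.
        signed char DensityAt(int x, int y, int z)
        {
            BlockIndex idx = { FloorDiv(x), FloorDiv(y), FloorDiv(z) };
            Datablock& block = GetBlock(idx);   // find (or create) the block in memory
            return block.density[Mod(x)][Mod(y)][Mod(z)];
        }

    private:
        // Floor division / remainder so negative coordinates map into the right block.
        static int FloorDiv(int v)
        {
            return (v >= 0) ? (v / kDatablockSize) : ((v - (kDatablockSize - 1)) / kDatablockSize);
        }
        static int Mod(int v)
        {
            int m = v % kDatablockSize;
            return m < 0 ? m + kDatablockSize : m;
        }

        Datablock& GetBlock(const BlockIndex& idx)
        {
            // Sketch only: a real implementation would check the disk cache and fall back
            // to the terrain generator before inserting a block into memory.
            return blocks_[idx];
        }

        std::map<BlockIndex, Datablock> blocks_;
    };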

The introduction of the TDM was one of the major steps in making the editable terrain behave correctly, and it allowed us a lot of freedoms in experimenting with visuals.

Saturday 20 August 2011

Density Function & Saving discrete densities

What is a density function? Well, basically it's a function that returns a value at any given x,y,z coordinate (that's the 3D version; there are 2D versions as well, but we use 3D).
The values of that density function are then interpreted as either inside or outside the terrain, the simplest convention being that if the value is negative the point is outside the terrain, and if it's positive it's inside.
The simplest density function, one that generates flat ground, is this:

DensityFunction(x,y,z)
{
  return -y;
}

Anywhere above y = 0 this function returns a negative value, and below it a positive one. This information can be used by something like the Marching Cubes Algorithm to extract a surface.
Now, the problem is that the function returns an incredibly huge range of densities. Obviously we don't need them all, since whether a point has a density of -100 or -10000, it is clearly outside the terrain.

Section of the voxel grid used to generate terrain.
Yellow cubes are sand voxels, transparent ones are air.

What we wanted was a way to store this density to disk in a way that would make it editable. This disqualifies storing just the density function parameters (although these also need to be stored for future generation of terrain), since the density function itself is not editable. The easiest example: if I wanted to make a hole in the terrain somewhere, I'd have to modify all the surrounding space to have a negative density. It is possible to store this information and force the function to check whether it's near that hole and modify its output.

However, this wouldn't scale to an indefinitely large number of holes.

A better solution is to sample the density function on a discrete grid, modify it at the grid points, and store these values back to the grid. This allows modifications on a large scale, since no matter what, there is only a finite number of grid points to store.

Now the problem is storage space. The original version of the density function used a double as its output. The size of a double is 8 bytes. This is pretty tiny, but if we consider storing just a 100x100x100 meter cube, we get:

100*100*100*8 = 8,000,000 bytes, or a little under 8 MB. That is just for a relatively small block 100 m on a side. If we wanted to store a 1 km x 1 km x 1 km block, the required storage would be about 7.5 GB.

Obviously, a cube 1 km on each side is a lot of volume, but it is still incredibly small compared to how far a player can walk. The double just wasn't going to cut it.

So, what if we were to store a single byte per density? That would cut storage by a factor of 8, and while still significant, it would give us more wiggle room. That's a fine idea, but how do you map the output of type double down to a byte? A byte can only hold 256 values, after all!

The idea for the solution was mentioned earlier - whether a density is -100 or -1000 doesn't make much of a difference. The results we obtained are highly dependent on the frequency of the density function used; with ours aimed at generating a good amount of roughness at a scale of around 1 meter, we found that we could take the output value of the density function and map it like this:

  • If the value is less than -1, return -128 as a byte
  • If the value is greater than 1, return 127 as a byte (since the range includes 0, a signed byte runs from -128 to 127)
  • If the value is between -1 and 1, map it onto the discrete range [-128, 127].
With this, we were able to store 1 byte per data point and, using the marching cubes algorithm, still obtain smooth terrain.
The data points are stored every 1 m, making it convenient to refer to them by integer locations. For our purposes this unit became known as a Voxel, even though it doesn't quite match the traditional definition of a voxel.
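The mapping itself is tiny. One straightforward way to write it in C++ (the function name and the exact scaling are illustrative; the idea matches the rules above):

    // Compress a raw density value (a double with a huge range) into one signed byte.
    // Only the [-1, 1] band around the surface keeps any precision - everything further
    // from the surface is clamped, since "far inside" and "far outside" look the same
    // to the surface extraction.
    signed char PackDensity(double density)
    {
        if (density < -1.0) return -128;
        if (density >  1.0) return  127;
        // Map [-1, 1] linearly onto the 256 possible byte values.
        return (signed char)(density * 127.5 - 0.5);
    }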

Friday 19 August 2011

The story so far...

In brief, this started as one guy working on some procedural terrain generation at the beginning of 2011. Some months later, he joined forces with another guy who had also been working on a similar project.
And now they're working together, as the title says, on a Procedural Editable Terrain - aiming to create a dynamic world, which is both realistic, and completely changeable by the user.

In short, the terrain is based on a density function and an implementation of the Marching Cubes Algorithm. Storing variable densities alongside the MCA allows for smooth, good-quality editable terrain.

The project is being developed in C++, with OpenGL for graphics, and various supporting libraries.

What's been done so far (more of this will be elaborated in later posts):
  • Generation of density-based terrain from a combination of Perlin noise functions
  • Discrete storage and saving of the density based terrain in so-called 'voxels'
  • Meaningful material assignment to voxels (such as dirt, snow, rock etc.)
  • Triplanar texture mapping and blending between any two given textures
  • Digging of terrain
  • Basic 'tool' implementation, including several debugging tools
  • De-coupling of the data and visuals
  • Variable distance drawing based on data
  • On-the-fly generation and storage of necessary data
  • Basic biomes, supporting different terrain features and blending between said features
  • Occlusion culling
This is a short list of the features currently implemented. Some of them will be expanded upon in later posts - hopefully with screenshots of the features at work.