Posted on September 1st, 2017

Hi, everyone. This is John Jackson, one of the engine devs at Nerd Kingdom who is currently responsible for working on the latest iteration of our material system. For those of you who are new to the concept, materials are a collection of textures, properties, and shaders that, when used in combination, describe how a particular object is to be rendered on screen. I’m going to discuss very briefly how our previous material system worked, what it was lacking, what we ultimately wanted out of our materials, and how that’s being implemented.

Previous Material System

As stated before, materials are a collection of various items working in tandem to describe how any given game object should be rendered to the screen. In the simplest terms possible, for each object or collection of objects (in the case of instancing), we grab all of the textures and material specific properties to be used, “bind” the shader to be used with this particular material and submit to the graphics card to render our game object. As our artists follow a PBR workflow (Physically Based Rendering), we had a fairly clear understanding early on what our materials would need regarding all of these items.

Originally, all of our materials consisted of the following base properties:

  • Array of textures for each of the given properties corresponding to a typical PBR material
    • Albedo – the base color of a given material absent any lighting information
    • Normal – the perturbed, per-pixel normal information of the material
    • Metallic – the “metalness” of a material at any given pixel (typically either 0.0 or 1.0)
    • Roughness – describes the microfacet variations across a surface and is used to calculate the diffusion of light (the rougher the surface, the more diffuse)
    • Ambient Occlusion – how much or how little ambient light is to affect the surface
    • Emissive – how much self-illumination the surface emits
  • A 1:1 corresponding array of colors that can be used to adjust any of these textures to alter the final result
  • Shared shaders between materials:
    • Static Deferred Geometry
    • Instanced Deferred Geometry
    • Skinned Geometry
    • Transparent Geometry (Forward Rendered)

With just these colors, textures, and shared shaders and not much else (a few other properties are not listed here), it was fairly easy to satisfy most of the requirements given by the artists for the game objects they needed to represent – anything from rocks to glowing swords to glass windows. Of course, there were still many limitations to this system that became more and more apparent as time went on.

Issues / What was lacking

Firstly, regardless of the needs of a particular material, every single one was required to have these given properties. You have no need for metallic, roughness, or emissive maps and a constant value defined in a shader will suffice? Tough. Your material will still have these arrays for textures and colors and we will still have to allocate memory for it.

This might not seem like too big a deal at first, but as an example to demonstrate the concern this causes an engine programmer, let’s assume we have a simple material and all it needs are Albedo and Normal texture maps to achieve the desired effect. Using this current material system, we’ve just wasted space due to 4 other pre-allocated texture slots as well as 6 pre-allocated colors.
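
To put rough numbers on that, here is a hedged sketch of what such a fixed-layout material can look like in C++. The type names, handle sizes, and byte math are illustrative assumptions, not our actual classes:

    // Hypothetical fixed-layout material: every instance carries every slot,
    // whether the material uses it or not.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    enum class TextureSlot : std::uint8_t
    {
        Albedo, Normal, Metallic, Roughness, AmbientOcclusion, Emissive, Count
    };

    struct Color { float r, g, b, a; };        // 16 bytes
    using TextureHandle = std::uint32_t;       // 4 bytes (assumed handle type)

    struct FixedMaterial
    {
        // One texture handle and one tint color per slot, always allocated.
        std::array<TextureHandle, static_cast<std::size_t>(TextureSlot::Count)> textures{};
        std::array<Color,         static_cast<std::size_t>(TextureSlot::Count)> colors{};
        std::uint32_t sharedShaderId = 0;      // index into the shared shader set
    };

    // An albedo + normal material still pays for the 4 unused texture slots and
    // all 6 pre-allocated colors: 4 * 4 + 6 * 16 = 112 bytes wasted per material
    // instance, under these made-up sizes.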

Secondly, as versatile as this set-up potentially is for most materials, it’s still limited due to being a basic surface material. What do I mean by that? If you remember, all of these materials are restricted to a few shared shaders that the engine team has written for the artists, and these shaders are ultimately responsible for telling your graphics card how to take all of the input textures and properties given to it and ultimately draw the object to the screen. What’s the problem with this? Well, what if none of the shaders have instructions for implementing a very custom, specific effect one of the artists requests? For example, what if I want to have a material that emits a blinking light based on a sine wave, or animates its vertices using a noise texture, or tiles a material’s albedo texture depending on how close the camera is to a given pixel, or…?

Okay, hopefully it’s obvious to you that we’re missing out on some cool stuff now. So what do we do about it?

Well, if this previous material setup is to continue to be used and these desired effects are to be implemented, we have two basic options to choose from:

  1. Add the desired properties to the materials and shared shaders that have already been written.
  2. Derive a new material type from our base material that will hold all of these properties and create a completely different shader for this specific effect.

Both of these, while feasible and certain to work in the short term, have fairly significant problems in the end.

If the first option is chosen, our materials have now become even more wasteful than before. For instance, simply wanting to scroll a texture in a material requires the material to hold a variable for panning speed – a two-float vector. Even this small variable means that all of our materials are now inflated by another 8 bytes, which obviously doesn’t scale well at all when you consider just how many more variables you’ll start to add on for other effects.

The second option is actually what we originally implemented once certain effects were being requested. We have specific implementations written to handle animated materials, flipbook materials, dissolve materials and even added parameters for wind controls. Each of these materials derives from our base graphics material class, they each hold specific properties required for the effect, and each corresponds to its own specific, handwritten shader that handles how it is to be rendered.

For a small number of materials for specific effects, this is a perfectly acceptable solution and has worked for us for a while. But as the number of requested effects continues to grow and experimentation becomes more and more desirable for different materials, this solution becomes very restrictive and time consuming. Again, just as with the first option, this simply doesn’t scale to meet the desires of our team.

Inspiration / Design for new system

Sometimes finding guidance for how to implement a new system can be a challenging task, especially if you’ve never worked on anything similar to it before. Luckily, there are plenty of great engines out there that serve as a source of inspiration regarding design and features, so it didn’t take long to do some research regarding how other engines handle their own material implementations and compile a list of features that we wanted ours to have.

After discussions amongst the team and a preliminary planning/research stage, we had a fairly decent idea what we wanted our material system to be:

  • Artist / Designer friendly material creation system that allowed for easy iteration and experimentation, preferably using a node-based editor that they are used to using in other design/creation suites
  • More memory efficient than previous implementation (no unnecessary properties held in materials that didn’t need to use them)
  • Clear workflow that is easily scalable and preferably as data-driven as possible

New System Overview

Essentially, the new system works like this:

  • A shader graph, which is the backbone of the entire system, is created and edited using a custom shader graph editing tool. This tool is a node-based editor very similar to how Unreal Engine’s Material Editor or Stingray’s Shader Graph Editor work.
  • Upon creation, shader graphs are compiled and are responsible for parsing their node network to generate the required shader code as well as any corresponding properties for the material to be rendered correctly.
  • Once successfully compiled, a shader graph can be assigned to a material. Any number of materials can share the same shader graph.
  • Upon material creation, materials are assigned a default shader graph by the engine, which can either be made unique for editing or simply be replaced by an existing shader graph.
  • Once assigned, the material has the option to use the default properties given by the shader graph or “override” the properties with its own.

Shader Graphs

All shader graphs are defined by a collection of nodes that together describe how a particular material and its shaders are to be created. Each of these nodes has its own functionality and is ultimately responsible for outputting some snippet of predefined shader code. As there was a strong desire to make this system as data-driven as possible, all of the nodes are defined in a node template file, which is used by the engine to fill out specific, unique data for each evaluated node in the graph. This makes it very easy to tweak the behavior of already established nodes or create new templates very quickly.

Node Templates

Template definition for a ConstantVec2Node.

Shown above is an example of what one of these node templates looks like. Most of the properties should be fairly self-explanatory, such as “NumberInputs”, but fields like “VariableDeclaration” and “VariableDefinition” require some explanation.

All nodes used in the shader graph, once evaluated, boil down to variables of the type defined by their output. For instance, a ConstantVec2Node will evaluate to some variable of type vec2, as is illustrated above under the “Output” section of the template.

For each node to be evaluated, we must declare its variable equivalent in the header of our generated shader as well as define this variable in the main body of our shader. This is what these two sections are responsible for doing. Obviously simply declaring and defining a vec2 is trivial, but using this system it is possible to define whole blocks of complicated shader code under the “VariableDeclaration” and “VariableDefinition” sections of the template.

Node Template Mark-up

Also notable are the sections of the node template that contain custom-defined markup, which is replaced upon evaluation of the shader graph. For instance, every node has a unique name that is also used as its shader variable name, so anytime the system sees the markup #NODE_NAME it knows to replace this text with the given name for that particular node. The #INPUT() markup looks at the “Inputs” field for the template and uses the specified input fields in the parentheses as replacements (in this instance, the “X” and “Y” fields respectively). There are many others, such as #G_WORLD_TIME for the engine’s global world time, as well as markup for vertex, camera, and pixel information.
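
To give a rough idea of how that evaluation can work, here is a simplified C++ sketch of the markup substitution for a ConstantVec2Node. The template strings, node name, and replacement logic are illustrative guesses, not our actual implementation:

    // Simplified sketch of node-template markup substitution. The real template
    // fields and markup set are richer than what is shown here.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Replace every occurrence of 'token' in 'text' with 'value'.
    static void ReplaceAll(std::string& text, const std::string& token, const std::string& value)
    {
        for (std::size_t pos = text.find(token); pos != std::string::npos; pos = text.find(token, pos))
        {
            text.replace(pos, token.size(), value);
            pos += value.size();
        }
    }

    int main()
    {
        // Hypothetical template strings for a ConstantVec2Node.
        std::string declaration = "vec2 #NODE_NAME;";
        std::string definition  = "#NODE_NAME = vec2(#INPUT(X), #INPUT(Y));";

        // Per-node data filled in when the graph is evaluated.
        const std::string nodeName = "constantVec2_0";
        const std::unordered_map<std::string, std::string> inputs = { { "X", "0.5" }, { "Y", "1.0" } };

        ReplaceAll(declaration, "#NODE_NAME", nodeName);
        ReplaceAll(definition,  "#NODE_NAME", nodeName);
        for (const auto& input : inputs)
            ReplaceAll(definition, "#INPUT(" + input.first + ")", input.second);

        // Prints: vec2 constantVec2_0;
        //         constantVec2_0 = vec2(0.5, 1.0);
        std::cout << declaration << "\n" << definition << "\n";
    }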

Material Size

All materials following this new system have now been reduced to two main properties (sketched in code below):

  • The shader graph that it references for all of its default properties and shaders to be used
  • A hashmap of property overrides
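
In code, that boils down to something like the following sketch. The types and names here are assumptions for illustration, not the actual engine classes:

    // Hypothetical slimmed-down material: a shared shader graph reference plus
    // only the properties this particular material actually overrides.
    #include <cstdint>
    #include <memory>
    #include <string>
    #include <unordered_map>
    #include <variant>

    struct Vec2 { float x, y; };
    struct Vec4 { float x, y, z, w; };
    using TextureHandle = std::uint32_t;
    using PropertyValue = std::variant<float, Vec2, Vec4, TextureHandle>;

    struct ShaderGraph
    {
        // Generated shader source plus the default value for every exposed property.
        std::string generatedShaderSource;
        std::unordered_map<std::string, PropertyValue> defaultProperties;
    };

    struct Material
    {
        std::shared_ptr<const ShaderGraph> graph;                 // shared by any number of materials
        std::unordered_map<std::string, PropertyValue> overrides; // only what differs from the defaults

        // Resolve a property: use the override if present, otherwise the graph default.
        const PropertyValue* Resolve(const std::string& name) const
        {
            if (auto it = overrides.find(name); it != overrides.end())
                return &it->second;
            if (graph)
                if (auto it = graph->defaultProperties.find(name); it != graph->defaultProperties.end())
                    return &it->second;
            return nullptr;
        }
    };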

Editor

The shader graph editor, though still in a very early stage, can be used to edit and create new shader graphs used for materials. The final output for the material is the “MainNode”, which can be seen in the picture below.

Example shader graph in editor. Two input nodes on left connecting to MainNode on right.

Here we have an example of a shader graph that creates the simple material that I was describing at the beginning of this post and requires only texture inputs for the albedo and normal channels. All other channels will use default values that will be constants within the shader and therefore not require any extra storage.

Small collection of available node templates.

Examples of Graphs and Materials

My hope is that moving forward with this new material system will allow our artists and designers to explore and iterate on more and more creative options.

Below are some examples of material effects that I threw together to show off what can be done with the new material system. What’s most important to me is that iterating on these was very easy and took only a small amount of time.

 

Animated emissive material applied to rock

 

Scrolling material with texture tiling

 

Floating cloud with vertex animations based on the object’s world space position and a sine wave

 

Have a great weekend!


Posted on July 28th, 2017

Greetings! 5ubtlety here, systems designer at Nerd Kingdom and the programmer behind the player character controller. I’m here to discuss our approach to designing and implementing this critical component that tethers the player to the world. There are simply too many low-level implementation details to fully expound upon in this post. Instead, we’re going to get a high-level overview of the considerations to take into account and challenges that can be faced when developing a character controller.


So what is a character controller? The character controller is simply responsible for moving the player avatar through the game world. This entails translating player input into movement mechanics and processing any game-specific movement dynamics. A character controller will also need to be able to detect and react to its environment, check for whether the character is on the ground or airborne, manage character state, and integrate with other game systems, including physics, animation, and the camera.

There are two main types of character controllers, dynamic (entirely physics-driven) and kinematic (non-reactive collisions). Character controller implementation is highly game-specific, but most opt for kinematic controllers. Very few games aim for complete physical realism. Most have stylized physical reactions tailor-made to feel great within the context of their gameplay environments.

The character controller is modeled as a capsule-shaped rigid body. The rounded capsule shape helps the character controller slide off of surfaces as it moves through the environment. It’s affected by gravity and constrained by terrain and other colliders by the physics engine. The orientation of the capsule is locked to an upright position, but may be manually adjusted in special cases, such as handling acceleration tilt, which pivots the character around its center of mass based on acceleration. Unless handled by the physics engine, movement will need to be projected onto the ground plane so the character can properly move up and down terrain.
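
As a rough illustration of that last point, here is a minimal sketch of projecting a move direction onto the ground plane. The vector type and helpers are assumptions rather than our actual math library:

    // Remove the component of the desired move direction that points into (or
    // away from) the ground so the character follows slopes instead of fighting them.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  operator-(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    static float Dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

    static Vec3 Normalize(Vec3 v)
    {
        const float len = std::sqrt(Dot(v, v));
        return len > 0.0001f ? v * (1.0f / len) : Vec3{ 0.0f, 0.0f, 0.0f };
    }

    // 'groundNormal' comes from whatever ground sensing the controller performs.
    Vec3 ProjectOntoGround(Vec3 moveDir, Vec3 groundNormal, float speed)
    {
        const Vec3 alongGround = moveDir - groundNormal * Dot(moveDir, groundNormal);
        return Normalize(alongGround) * speed; // constant speed along the slope
    }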

Raycasts (and other geometric casts) are your main tool for sensing the environment immediately around the character controller so you may react properly. These casts give information such as direction and distance to the nearby colliders as well as their surface normals.


In open-world games, movement is typically the principal mechanic that consumes the player’s time. Therefore, movement needs to feel great as the player navigates through the world. Minimally, it needs to be functional, responsive, and intuitive. Depending on the game, you may have secondary goals such as allowing for greater player expression, or aiming for a high degree of player control and precision, such as in a fast-paced platformer. Often, trade-offs will need to be made, so there is no universal solution to these matters. For example, consider the following graphic, in which the left figure has a flat velocity while the right figure applies acceleration. The left figure allows for a higher level of precision in movement, while the right is more realistic and may look and feel better in certain game contexts.

Image Credit: Marius Holstad, Source

Every game is going to have specific movement affordances, such as the ability to sprint, swim, double jump, wall-run, grab ledges, climb ladders, swing from ropes, etc. Every new verb added to this list can vastly expand the player’s mobility. Defining these is just the beginning though. There is much nuance in how they are used, how they feel, and how they interact with other game elements.

Even if all your character can do is simply move side to side and jump, you’re quickly going to run into “The Door Problem” of Game Design. Here are a few of the questions you might start asking:

  • How fast should the player move? What are the maximum and minimum movement speeds? Can the player choose to move at intermediate values?
  • Can the player stop and pivot on a dime?
  • Should the player accelerate and decelerate over time? How quickly?
  • Will your game have different kinds of terrain that affect player movement, such as quicksand or ice?
  • How do the character controller and animation system interact with one another?
  • What size and shape should the character’s collider be?
  • Can the character push or be pushed by other objects when they press against one another?
  • What kind of environmental geometry does your world feature? Sharp and flat edges, or organic, bumpy terrain?
  • Is the player able to walk up slopes? What are the minimum and maximum inclines?
  • How about steps? What are the minimum and maximum heights?
  • Is movement speed slower when walking uphill?
  • Is controller input supported? How will input be handled differently between a keyboard and analogue stick?
  • How does the camera follow the player?
  • How high can the player jump? Are running jumps higher?
  • Is momentum conserved when jumping?
  • What should the force of gravity be? Is this the only factor that determines the player’s fall speed?
  • Is there air friction (drag)?
  • Should the character have a terminal velocity?
  • Does the character have a momentary hang-time at the jump’s apex, or does it immediately begin decelerating downwards?
  • Can the player jump higher by holding the jump button longer?
  • Does the player have any amount of air control, or is mid-air input simply ignored?

This is just the beginning. As development progresses, new questions and issues will arise as environmental variables impose new constraints on the initial design. You should develop your controller gradually, making steady incremental improvements. In our case, we developed a playground scene where we can test our iterative approach in a consistent, controlled environment. Spoiler Alert: Most of your development time is going to be addressing engine and game-specific edge cases!


The following are some features we explored while prototyping our character controller. Note that not all of these elements will be relevant to every game.

Camera-Relative Movement

In most 3rd-person perspective games, movement is relative to the camera rather than the avatar, which is more intuitive for the player to process. Some games intentionally break this convention to impart a feeling of vulnerability.

Motion Alignment

When moving, the pawn automatically pivots over time (with some smoothing) to align with the movement direction.

Image Credit: Marius Holstad, Source

Jump Input Buffering and Latency Forgiveness

This helps with jump timing in cases where the player presses the jump button a few frames before actually reaching the ground. Additionally, it permits the player to execute a jump even if they pressed the button immediately after walking off a ledge and consequently entered the airborne state. This pattern can be applied to other kinds of character input as well.
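
A minimal sketch of the idea might look like this. The window lengths and structure are illustrative assumptions, not our tuned values:

    // Jump buffering plus "late jump" forgiveness (often called coyote time).
    struct JumpForgiveness
    {
        float bufferWindow = 0.15f; // seconds a jump press is remembered before landing
        float coyoteWindow = 0.12f; // seconds after leaving the ground a jump is still allowed

        float timeSincePress    = 1e9f;
        float timeSinceGrounded = 1e9f;

        // Call once per frame. Returns true when a jump should actually fire.
        bool Update(float dt, bool grounded, bool jumpPressedThisFrame)
        {
            timeSincePress    = jumpPressedThisFrame ? 0.0f : timeSincePress + dt;
            timeSinceGrounded = grounded             ? 0.0f : timeSinceGrounded + dt;

            const bool buffered = timeSincePress    <= bufferWindow;  // pressed recently
            const bool forgiven = timeSinceGrounded <= coyoteWindow;  // grounded recently

            if (buffered && forgiven)
            {
                // Consume both timers so one press cannot trigger two jumps.
                timeSincePress    = 1e9f;
                timeSinceGrounded = 1e9f;
                return true;
            }
            return false;
        }
    };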

Air Control

This allows the player to adjust their airborne velocity, but with reduced effect.

Animation

  • Animation Blending
  • Upper/Lower Body Animation Layers
  • Root Motion Control
    • Adjust capsule position and/or orientation as a result of playing certain animations.
  • Inverse Kinematic Limb Placement
    • Place feet when walking/running. Particularly useful for steps and slopes.
    • Place hands when climbing or interacting with game objects.
  • Intelligent Ragdolls

Spline-Stepping

This assists in elevating the character up detected steps by smoothing movement with a curved spline over a period of time.

Here is a prototype of our character controller walking up some stairs in our playground scene.

Ground Normal Smoothing

This will eliminate anomalies in ground normal calculation by performing multiple raycasts at various sample points at the base of the player’s capsule and averaging the results to calculate the final ground normal. The resultant vector is then smoothed between consecutive frames.

Here is a prototype of our character controller walking over rounded surfaces in our playground scene.
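
Here is a hedged sketch of that smoothing step. The ray cast interface, sample points, and blend factor below are stand-ins for illustration, not our actual physics API or tuned values:

    // Average several downward ray casts under the capsule, then blend the
    // result with last frame's normal to avoid popping between frames.
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3  operator+(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    static float Dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  Normalize(Vec3 v)
    {
        const float len = std::sqrt(Dot(v, v));
        return len > 0.0001f ? v * (1.0f / len) : Vec3{ 0.0f, 1.0f, 0.0f };
    }

    // Stub standing in for the physics engine's downward ray cast.
    static bool RaycastDown(const Vec3& /*origin*/, float /*maxDistance*/, Vec3& outNormal)
    {
        outNormal = Vec3{ 0.0f, 1.0f, 0.0f }; // pretend we always hit flat ground
        return true;
    }

    Vec3 SmoothedGroundNormal(const std::vector<Vec3>& samplePoints, // points around the capsule base
                              const Vec3& previousNormal,
                              float blendFactor = 0.25f)             // 0 = keep old, 1 = use new
    {
        Vec3 sum{ 0.0f, 0.0f, 0.0f };
        int  hits = 0;
        for (const Vec3& point : samplePoints)
        {
            Vec3 normal;
            if (RaycastDown(point, 0.5f, normal))
            {
                sum = sum + normal;
                ++hits;
            }
        }
        if (hits == 0)
            return previousNormal; // airborne or no ground found: keep the last value

        const Vec3 averaged = Normalize(sum * (1.0f / static_cast<float>(hits)));

        // Exponential smoothing between consecutive frames.
        return Normalize(previousNormal * (1.0f - blendFactor) + averaged * blendFactor);
    }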

Slope Fatigue System

Any slope above a certain threshold incline will induce “slope fatigue” in the player over a short period of time. The more fatigued the player is, the more slowly he will ascend the surface in the upward direction of the incline. After a certain amount of fatigue has accumulated, based on slope steepness, the player will begin sliding down the slope. Slope fatigue will recover once the player is on a more level surface.

Wall Avoidance

Automatic wall avoidance allows for smoother steering behavior when walking around walls and corners. The character controller raycasts ahead in the direction of movement to detect walls and other obstructions that would block movement. If detected, and the angle of incidence is shallow, the player is steered away from the surface. On the left side of the following image, the player sticks to the wall as he brushes against it. On the right side of the image, the player gently slides off the surface as his steering is adjusted.

Credit: Marius Holstad, Source
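
A simplified sketch of that steering adjustment could look like the following. The threshold and math helpers are assumptions for illustration, not our actual controller code:

    // On a shallow (grazing) wall hit detected by a ray cast ahead of the
    // character, redirect movement along the wall instead of into it.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  operator-(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    static float Dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  Normalize(Vec3 v)
    {
        const float len = std::sqrt(Dot(v, v));
        return len > 0.0001f ? v * (1.0f / len) : Vec3{ 0.0f, 0.0f, 0.0f };
    }

    // 'wallNormal' comes from the ray cast ahead along 'moveDir'.
    Vec3 SteerAroundWall(Vec3 moveDir, bool hitWall, Vec3 wallNormal)
    {
        if (!hitWall)
            return moveDir;

        const float intoWall = Dot(moveDir, wallNormal); // negative when moving toward the wall
        const float headOn   = -intoWall;                // 1 = straight into the wall, 0 = parallel

        const float shallowThreshold = 0.4f;             // adjust only grazing approaches (assumed value)
        if (headOn > 0.0f && headOn < shallowThreshold)
        {
            // Slide along the wall: remove the component pushing into the surface.
            return Normalize(moveDir - wallNormal * intoWall);
        }
        return moveDir;
    }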

Analogue Input Processing

Analogue movement input from a thumbstick is a very different approach to controlling direction and speed than the keyboard’s 8-way digital input. In order to sanitize this raw axis data and map it to movement inputs the controller can read, we filter it through dead zones and interpolate the results.

Inner Dead Zone

Outer Dead Zone

Radial Dead Zone

Range Mapping

Non-Linear Interpolation

Image Credit: Ryan Juckett, Source
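
To make the radial dead zone and range mapping steps above a little more concrete, here is a hedged sketch. The dead-zone sizes and response curve are placeholder values, not what we actually ship:

    // Radial inner/outer dead zone followed by range mapping so the usable band
    // of thumbstick deflection maps back onto [0, 1].
    #include <cmath>

    struct Stick2D { float x, y; };

    Stick2D ProcessStick(Stick2D raw,
                         float innerDeadZone = 0.20f,  // ignore noise near the center
                         float outerDeadZone = 0.95f)  // treat near-max deflection as full
    {
        const float magnitude = std::sqrt(raw.x * raw.x + raw.y * raw.y);
        if (magnitude < innerDeadZone)
            return { 0.0f, 0.0f };

        // Range-map [inner, outer] onto [0, 1], clamping past the outer edge.
        float mapped = (magnitude - innerDeadZone) / (outerDeadZone - innerDeadZone);
        mapped = mapped < 0.0f ? 0.0f : (mapped > 1.0f ? 1.0f : mapped);

        // Optional non-linear response curve for finer control near the center.
        mapped = mapped * mapped;

        // Keep the original direction, scaled to the remapped magnitude.
        const float scale = mapped / magnitude;
        return { raw.x * scale, raw.y * scale };
    }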


Hopefully this post provided some insight into the design and implementation of character controllers and some of the considerations to take into account when developing one. The bottom line is that there is no one right solution that works in all situations. Every game’s needs are very different and developing a solid character controller is going to largely be an iterative process of discovery and polish. The final 10% is what separates a clumsy, buggy controller from a responsive one that works well and immerses the player. This is one of the game’s most critical components that the player continually interfaces with during gameplay. It can easily make or break an entire game, so take the proper time and make it feel great!

 


Posted on June 16th, 2017

Hello everyone! The name is Duane and I am the Animation Director here at Nerd Kingdom.

During my lengthy career in game development, I have certainly been here before.  Well, not really here, as here is a bit different.  However, in some aspects, it is almost entirely the same.  The outstanding difference is the Eternus Game Engine currently under development at Nerd Kingdom.  Built from the ground up, Eternus holds the promise of a groundbreaking game development engine and tools upon which its flagship product will be developed.  So, it’s deja vu all over again…or is it?

The first time I heard the term “virtual reality” was when I began my career in 1994 as a Lead Animator at Virtual World Entertainment (VWE). The Chicago game studio made two products, a first-person shooter (FPS) “walking tank” game called Battletech and a “hovercraft racing” title called Red Planet.



Each product was built from the ground up on a proprietary game engine, completely unique to the requirements of gameplay for a multiplayer FPS and space-based racing title respectively.  Each engine included its own set of development tools and export processes, designed and built with essential integration toward the support of an efficient iterative creative process.  Nothing was borrowed or modded, and middleware was non-existent.  All of it was brand new, completely from scratch.  (Ok, truth be told, some code between Battletech and Red Planet was recycled.  But, I’m trying to make a point here.)

Fresh out of college, I was the studio’s first and only Lead Animator and it fell to me to collaborate with a newly hired Junior Programmer to design, test, and implement an integrated LOD Texturing tool.  The sky was the limit and… “What the hell is an LOD anyway?”

So, there I was, tasked with one of the most important art tools for Battletech’s and Red Planet’s CG art development.  Not because I was particularly suited for the role, but because I was “the new guy” and no one else wanted the job.

If you’ve ever wanted to make games for a living but knew nothing about the process, that was exactly me when I began my career.  Lucky for me, this first challenge was a remarkable Art Tools design experience and quite an education.

Trial by fire, I learned how to make LODs by hand expeditiously, a method of reducing an object or character’s total number of polygons while maintaining its shape and silhouette.  I made four Levels of Detail (LOD) for each of the 20+ Mechs (aka “walking tanks”) and 12+ VTOL (“vertical take-off and landing”) racing craft.  That’s 128 LODs plus the original 32+ models.

Then, I learned about creating UV Maps followed by applying textures via Planar Projection mapping for the many texture groups within a single model.  At the time, Planar Projection mapping was all that this tool would provide.

The number of texture groups per model was enormous.  I had to rotate and place each Planar Projection, an intermediate object represented by a 3D Plane, over every single polygon group or group of facets (aka “face group”).  It was meticulous work.  But then, that’s why we were developing the LOD Texturing tool in the first place, to expedite this laborious process.  Ultimately, our efforts allowed Artists to texture any 3D model and all of its LODs based solely on the original model’s UV textures.  It was a profound success and increased my passion for making games and inventing game development technologies, in general.


By the way, is it really work if you love what you do for a living?  For me personally, animating for games is truly a dream come true.  I remember when a Tippett Studios’ VP at Siggraph once said, “These guys will work for nothing and do it all night long.  They love it!  They’re gamers and artists.”  I thought, “Holy sh*t, she knows our secret!”  But, it’s true.  Game developers will work long after their salaries have exhausted a full day’s work.  We are habitual over-achievers with a relentless work ethic.  Like some kind of digital junkie, looking forward to that next first moment of realized innovation in VR immersion.  It’s addictive!  That’s why most of us look the way we do…trying to score that next (top-selling) digital hit.  Thank God mobile game development offers the same euphoric effects in smaller doses.  And, with the recent debates over VR/AR/MR (virtual reality, augmented reality, and mixed reality respectively), the digital chug-wagon continues.

I remember when I was in college, learning Alias|Wavefront software on a Thompson Digital Image machine back in the early 90’s.  No one knew what they were doing.  The teachers that were teaching the 3D Art and Animation curriculum at Columbia College Chicago had no clue what 3D was or even how to teach it.  Every student dove into the manuals and surpassed their instructors before the end of the second week, too impatient to watch some “old dude” struggle to understand the poorly written tutorials.

Anyway, I digress, back to the topic at hand.


Other things that haven’t changed in game development for decades?  How ’bout the division of labor across three main groups – Programmers, Designers, and Artists.  At VWE, I learned about five disparate teams the studio employed in their game development process – Owner/Managers, Programmers, Designers, Artists/Animators, and Testers.  And that right there was the pecking order by status and salary.  How little has changed in the industry as a whole.

Each of these teams worked in silos as focused but independent specialists prior to pre-production and were brought together as one homogenized unit as the pre-production “vertical slice” neared completion.  No, “vertical slice” has nothing to do with bread or ninja skills – Google it.

Over the years, the terminology for “development meetings with prioritized schedules or milestones” mutated into words like Sprint, Scrum, Agile, and Agile/Scrum.  Call it what you like, it has been the same process since the dawn of game development.  In its most basic form, it goes something like this – create a series of meetings based on a prioritized schedule of milestones around the topics of concepts/game ideas, dev, design, art, scope, and schedules.  Then, build and test the plethora of advancing software.  This is usually followed by cycles of wash/rinse/repeat.  Critical to the successful development of this cycle is smart, honest decisions by talented and experienced key team members…and yadda, yadda, yadda – it’s boring stuff, but absolutely necessary.

Another enduring oddity in game development is something called “studio culture”.  Here’s a checklist of things that, in my experience, have existed in every studio I’ve ever worked for:

  • Very smart, technical/analytical problem-solving academics who love games and are “kids at heart”
  • A fascination with technology trends, games, movies & music, art & animation, and science fiction/fantasy
  • Communal eating spaces/kitchens with free drinks – a game developer’s divine right
  • Tattoos, piercings, long hair.  Occasional bad hygiene?  Perhaps.
  • Action figures
  • Nerf guns
  • Darkened work spaces that are quiet, but at times rowdy on a good day (aka a productive day)
  • Flexible 8-hour work schedules
  • Casual clothes – bare feet (aka sandals or flip-flops), bare legs (aka shorts), baseball caps, and enigmatic t-shirts
  • The mention of manga/anime, Weird Al (Yankovic) for some reason, and anything sci-fi…most likely a Star Wars reference

And then, there’s the “proximity task”.  Happens all the time in game development.  It can usually fall to the person who is simply absent at the wrong time during a formal team meeting.  But when it’s an informal discussion, simply sitting at your desk near one can get you saddled with a task that no one wants.  Like today, for example, when I was asked to write this blog.  Happy reading!

By the way, if you’ve made it this far into the article, then bless you for your unwarranted attention.  You are a saint!  Take heed, I’m almost done.

One last thing that is ever present in this industry is the abundance of proprietary processes developed and never shared by the multitude of game developers the world over.  With most new games, and especially with innovative immersive AR/VR experiences on new hardware, a new engine, SDK, and game product are under simultaneous development.  In my experience, the lineage of this simultaneous development started on PC, followed by the original Xbox console, then Xbox 360, Kinect, HoloLens, and Magic Leap.

And now, finally, “Back to Eternus”.  Sounds like a great sci-fi epic, doesn’t it?

Here at Nerd Kingdom, I ran into an old friend of mine not mentioned above, good ol’ Mister Frame Rate.  “How have you been, Old Chum?  It’s been awhile.  Wife and kids?  Goooood.”  Ever the divisive arbiter of quality graphics versus render speed, Frame Rate could often be an elusive collaborator.  But last week, he sauntered up to me with a drink, “Here, knock this back.  Oh, I forgot. You don’t drink. (Chug! Slurp.)  Let’s talk, shall we?”

So, after closing time, there we were, old Frame Rate and I, talkin’ ’bout the Good Ol’ Days and the mischief he put me through as a Director of Animation under fire for the largest memory footprint that character animation had ever occupied in VWE’s history.  Now, I can’t say that I remember those days with as rosy a resplendent recall, but I do remember the relief I felt when we were able to solve the issue with a technical art solution, an animation export tool, that we could all agree upon.

Allow me to blather on in detail about this very familiar topic.  In the early days of game development, when you would export a character animation for a game, whether authored in Maya, 3D Studio Max, or some other CG software of choice, the animation asset was exported as a linear keyframe for every frame of motion exhibited by each joint or node in a character’s skeletal hierarchy, regardless of whether its value changed or not, for the duration of the motion.

Well, as we research a popular export format, we are seeing a similar result – a keyframe on every frame.  And so, it’s not surprising that discussions about frame rates and reducing file sizes have stirred this air of frame rate nostalgia.  Suffice it to say, there is a lot of keyframe data that can be filtered and omitted from animation assets, which will reduce the size of every animation file, thereby reducing its memory footprint and load times, and in turn increasing frame rate.

The last time I helped solve this puzzle, we decided upon a proprietary export tool that would allow the Technical Animator or Animator to provide an overall attribute value, as well as an attribute value per joint (per axis) to influence the total number of keyframes that would be generated along a curve.  These attribute values would then generate a range of results, interpreting the motion (based on angle deviation) as “a keyframe every frame” to “a reduced or filtered key set based on the degree of change (by angle deviation) along a curve” to “omitting keyframes completely”.

Said differently, the algorithm inspected the curve and re-created it as a slimmer version of itself (in bits).  Where there were more changes in value, more keyframes were exported or maintained along that portion of the curve.  Where there were fewer changes in value, the placement of keyframes was farther apart.  Whatever solution is devised for Eternus, we are certain to surpass the current state of our technology as of this writing.  And, I can’t wait to revisit that feeling of overwhelming accomplishment when the motion in-game is identical at less than half its original file size.
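
For illustration only, here is a hedged sketch of that kind of keyframe filtering.  It is a simplified take on the idea, not the original VWE tool and not necessarily what we will build for Eternus:

    // Drop any key that linear interpolation between its kept neighbors already
    // reproduces within a per-joint, per-axis tolerance (in degrees).
    // Assumes key times are strictly increasing.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Key { float time; float angleDegrees; };

    std::vector<Key> FilterKeys(const std::vector<Key>& keys, float toleranceDegrees)
    {
        if (keys.size() <= 2)
            return keys;                  // nothing worth filtering

        std::vector<Key> kept;
        kept.push_back(keys.front());     // always keep the end points

        for (std::size_t i = 1; i + 1 < keys.size(); ++i)
        {
            const Key& previous = kept.back();
            const Key& next     = keys[i + 1];

            // Value a straight line between the surrounding keys would give here.
            const float t         = (keys[i].time - previous.time) / (next.time - previous.time);
            const float predicted = previous.angleDegrees + t * (next.angleDegrees - previous.angleDegrees);

            // Keep the key only where the curve actually deviates from that line.
            if (std::fabs(keys[i].angleDegrees - predicted) > toleranceDegrees)
                kept.push_back(keys[i]);
        }

        kept.push_back(keys.back());
        return kept;
    }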

Oh, the nostalgia for innovative thinking.  All of it, in pursuit of making great gaming experiences with Eternus that will entertain and occupy the masses.  I guess you can go home again.

All that’s old is new again – for the first time.  May you enjoy playing our product in its many pre-launch versions.  And may the God of Shipped Titles smile upon us as we run head-long into the many game development cycles of deja vu and repeated timelines.  Wash. Rinse. Repeat. Game.

Have a wonderful weekend!


Posted on April 14th, 2017

Hey all! This is Northman from Nerd Kingdom here to share some AI bytes with you. Specifically I’d like to talk about Pathfinding and how it relates to TUG and our characters.

Pathfinding is the act of finding the best path between two points in the world. The key word there is “best.” The definition of best depends on the type of game you are making and the type of character you are trying to find a path for. I think a small thought experiment helps to clarify the set of problems pathfinding tries to solve:

Imagine a mountain goat and a human facing a mountain that extends as far to their left and right as their eyes can see. Directly in front of them is a door. On the door is a sign that reads: “Tunnel to the Other Side”. The human does not have any climbing gear and the mountain is far too steep for the human to scale it without proper equipment. If both the human and the goat want to get to the other side of the mountain, what do they do? The goat does not have hands to open doors nor the ability to read. However, the goat is a sure-footed climber and does not have any problem scaling the mountain, so it goes on its goaty way over the top of the mountain. Conversely, the human does have hands and can read, so they take the tunnel. The paths the goat and the human found are each the best path they can muster by their own definition of best, even if they are very different.

 

 


image

By Darklich14 (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

*Insert funny remark about goats here.*

“Ropes? We don’t  need no stinkin’ ropes!”


 

 

So now that we have defined Pathfinding and talked about what “best” means, we can look at what tools we have for finding paths. Most pathfinding techniques break up the world into spatial subsections (nodes) and store information about how those nodes are connected (edges). In Computer Science we call a set of nodes and edges a “graph”. Graphs are cool because they have been studied by mathematicians since the 18th century (check out the Seven Bridges of Königsberg). What this means for us is that there are well-known techniques, also known as algorithms, for dealing with graphs and finding best paths on them (see Pathfinding on Wikipedia). One of the most common algorithms used in games for pathfinding is A*. I won’t get into the details of A* in this post because it gets technical very quickly, but the image below provides a good visual representation of a typical A* search.

 

 


image

 

By Subh83 (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

An illustration of an A* search. The initial red node indicates the starting position and the green node indicates the goal position. The gray polygon represents an obstacle. The goal is to find the shortest-distance path from the starting node to the goal node while avoiding the obstacle. The path found is highlighted in green at the end of the animation.


 

 

We are currently exploring algorithms and graph representations for our world in TUG but so far we have implemented a navigational grid. In graph terms the grid is made up of nodes that all represent the same amount of space (one square meter) and each node has edges to its immediate neighboring nodes (grid cells): top left, top, top right, right, bottom right, bottom, bottom left, and left. These cells can be blocked by obstacles (shown in the image below in red) or open (shown in the image below in blue). This allows us to run A* searches to find best paths for our characters that avoid obstacles.

image

Blue Cells: Areas a character can navigate in.

Red Cells: Areas blocked by an object.
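
For the curious, here is a minimal, hedged sketch of what an A* search over a grid like the one above can look like. The costs, heuristic, and data layout are illustrative choices rather than our actual implementation:

    // A* over an 8-connected grid. 'blocked' holds width * height cells, true
    // where an obstacle fills the cell. Assumes start and goal are in bounds.
    #include <algorithm>
    #include <cmath>
    #include <cstdlib>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Cell { int x, y; };

    struct Node
    {
        int   index; // cell index = y * width + x
        float f;     // cost so far plus heuristic estimate to the goal
        bool operator>(const Node& other) const { return f > other.f; }
    };

    // Octile distance: admissible for 8-way movement with diagonal cost sqrt(2).
    static float Heuristic(Cell a, Cell b)
    {
        const float dx = static_cast<float>(std::abs(a.x - b.x));
        const float dy = static_cast<float>(std::abs(a.y - b.y));
        return (dx + dy) + (1.41421356f - 2.0f) * std::min(dx, dy);
    }

    // Returns the path from start to goal (inclusive), or an empty vector if none exists.
    std::vector<Cell> FindPath(const std::vector<bool>& blocked, int width, int height, Cell start, Cell goal)
    {
        const auto toIndex = [width](Cell c) { return c.y * width + c.x; };

        std::vector<float> gCost(blocked.size(), 1e30f); // best known cost from the start
        std::vector<int>   cameFrom(blocked.size(), -1); // parent links for rebuilding the path
        std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;

        gCost[toIndex(start)] = 0.0f;
        open.push({ toIndex(start), Heuristic(start, goal) });

        while (!open.empty())
        {
            const Node current = open.top();
            open.pop();
            const Cell cell{ current.index % width, current.index / width };

            if (cell.x == goal.x && cell.y == goal.y)
            {
                std::vector<Cell> path; // walk the parent links back to the start
                for (int i = current.index; i != -1; i = cameFrom[i])
                    path.push_back({ i % width, i / width });
                std::reverse(path.begin(), path.end());
                return path;
            }

            // Expand the eight neighboring grid cells.
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    if (dx == 0 && dy == 0)
                        continue;
                    const Cell next{ cell.x + dx, cell.y + dy };
                    if (next.x < 0 || next.y < 0 || next.x >= width || next.y >= height)
                        continue;
                    if (blocked[toIndex(next)])
                        continue;

                    const float stepCost  = (dx != 0 && dy != 0) ? 1.41421356f : 1.0f;
                    const float tentative = gCost[current.index] + stepCost;
                    if (tentative < gCost[toIndex(next)])
                    {
                        gCost[toIndex(next)]    = tentative;
                        cameFrom[toIndex(next)] = current.index;
                        open.push({ toIndex(next), tentative + Heuristic(next, goal) });
                    }
                }
        }
        return {}; // no path found
    }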

 

 

I hope you have enjoyed this introduction to pathfinding. Pathfinding is a large topic with many different techniques available depending on the pathfinding problem at hand. If you have any questions please feel free to email me at “northman at nerdkingdom dot com”. We are working hard to refine our pathfinding approaches for our characters and look forward to sharing more with you soon!

Have a great weekend!


Posted on March 31st, 2017

Hey everyone! I hope everyone is having a splendid week.

We are still hammering away on the engine and tools, so there are not many visuals to show yet. However, we do want to keep you all updated with more goods, so I will be sharing videos from our tech demo Fridays. Until then, here are a few WIP shots of some of our new tools.

 

image

[Biome Tool Graph]

 

image

[Animweb Graph]

 

image

[Biome Generated #1]

 

image

[Biome Generated #2]

 

[Early rough video of our floating island with sample of edges]

 

[8 blend target]


In a previous blog, we answered many of the planning and lore questions from the community. Today, we’ll focus on the TUGv2 and engine remake questions.

TUGv2

  1. Can you give an overview of the new TUG in a couple of short paragraphs, as if to new potential customers? What’s the “elevator pitch” description of the game?
    1. The game takes place in procedurally generated worlds filled with floating islands that are enriched with resources, populated with diverse cultures and held together by a deeply rooted narrative.
    2. Players slowly discover the beginnings of their universe through interactions and relationships developed within the game as well as uncovering the history of their own purpose for being.
    3. Through a series of relationships and a string of choices our players determine their friends, their enemies and their eventual place with their creators.
  2. With regards to features:
    1. Will we see any of the Kickstarter stretch goals being developed?
      1. Yes, without question; however, timing will depend on where they have the most impact early in development for the community to experience.
    2. It’s been mentioned that manipulation of voxels in v2 will differ from that of v1? How much so, and what will the new system be like?
      1. Voxel manipulation will be available in our creative mode. During gameplay the focus shifts more to surviving within the world and building up a home world using prefab objects. Though this may become more robust once we have fully developed the game and advanced some concepts of usability. Still, this is largely TBD.
    3. What other features are likely to change from v1 (crafting, combat, building etc.)?
      1. Generally speaking, each feature has undergone revisions to create a more simplified / optimized approach to how they worked and to improve the user experience. We have done our best to keep the player in the game and not in the menus.
    4. What are the initial focuses of TUGs use of AI?
      1. AI plays a vital role in the experience players will have. Our focus on AI is one that feels alive with respect to how creatures behave in the wild, how companions seek to help the player as they build their worlds, and how enemies respond to our actions during combat. At each interaction we are setting a goal that keeps the player immersed in the world and the event that they are a part of.
    5. Are there any features which investors have decided on?
      1. Investors have played a supportive role in our goals and vision. They have allowed us the freedom to design and develop the game based on our recommendation for what works best for the world of TUG.
    6. How much input will the community get on new and current features? How will Nerd Kingdom receive this input?
      1. TBD. We will be leveraging a lot of contribution from the community through our exposed tools, as a way to gain real insights from play and to reward those that assist in experimentation. This is a large motivational factor in why we have been pushing harder on ease of use for our tools and platform.
  3. With regards to modding:
    1. Can we add more complicated custom models to build with, e.g., a statue/non-voxel based?
      1. You will have the ability to add custom models of all shapes and sizes. Though to work best with the game there will be a list of guidelines for those that want to add their own custom content.
    2. This Update showed off lots of what NK has been working on over the last year; is there anything you’d like to give an extra highlight or detail on?
      1. As excited as we are about the game we’re building on top of a great set of technologies, we are super excited about being able to develop the tools and pipeline that will empower our users with the ability to quickly and easily mod everything that we do.
    3. Are there any plans to help push community driven development?
      1. Along with the tools to support and broaden community development we also have in the works a platform that simplifies and organizes everything for a community to be built around.
    4. Are weekly or monthly builds planned? (like minecraft snapshots)
      1. TBD
  4. Will there be documentation/tutorials/guidance for new players, or will that mostly be left to player-made tutorials/wikis?
    1. We plan to support players with:
      1. In-game tutorials
    2. We plan to support developers with:
      1. Documentation
      2. Tutorials (written/video)

The Engine Remake

  1. There is a lot to be said about what the engine does, and can do, but I’ll let Josh speak more to that in an update early next year.
    1. When is early next year?
      1. TBD
    2. Could you please get into specifics i.e. how the engine handles voxels, networking, memory management, mesh and collision mesh construction etc.
      1. TBD; this can be addressed in an engine write-up.
    3. If possible some statistics on before and after measures.
      1. TBD. We can share this information with the community in a write-up on our data and research, with a post-mortem on what we have learned and will be applying as we develop the game further.
  2. At a time when updates were starting to become a regular thing (0.8.9), a decision was made to remake the engine. Why?
    1. During development of TUG we reached a point where there were several opportunities to create a new game engine which could be supported by a full suite of tools for modding and game development. Recognizing the scope of our plans for providing a platform for players and creators, we took the chance and are now beginning to use the tools and tech that drove us to depart from the old and get started with the new. It was a big decision then and one that will continue to support all of our adventures moving forward.
  3. With regards to hardware specs:
    1. What are the expected hardware requirements of TUG at release?
      1. TBD, actively testing and working towards lowest specs possible, but as we have stated for years, it’s a balancing act
  4. What are the new capabilities of the engine that the old one could not do?
      1. TBD, but we can offer a write-up from the tech team in an upcoming update with some samples of work and capabilities.
    1. How easy will it be to modify the engine going forwards, both from a developer’s and a modder’s perspective?
      1. TBD. With recent implementations of the development pipeline we have made modding and development significantly faster, which impacts our ability to create content and the ability of others within the community.

Have a great weekend!

@Cambo


Posted on March 3rd, 2017

Hey everybody! Cambo here and Happy Friday!

I’ll go ahead and update everyone since Ino is lost in the world of Zelda today. The team has been very busy the past month working on our engine tools, gameplay features, and core experiences. I will personally be working on having more frequent blog posts, dev updates, and videos when they are available. I am aiming for at least twice a month, plus video snippets from our demo Fridays. Importantly, I hope to shift our blog posts to our website in the future.

In the last blog, we mentioned that we would take note of the QA document that was compiled from the community. I got to work and nagged the team leads to answer them – nagging is what I do best. You can find the questions here. Thanks for organizing and bringing this to our attention @Rawr! The leads answered most of the questions, but we will wrap up the rest over the next couple of blog posts as we have more concrete answers. Get ready for the wall of text!

Planning

  1. Could we please have a roadmap? I.e.:
    1. Lists of features you’re working towards
      1. We are currently working towards a dynamic list of features to support an exciting experience with our sandbox adventure style game. A general list of features would include:
        1. Building, Resource Gathering, Crafting, Combat / Weapons, Multiplayer, Natural AI, Companions, Large Multi Biome Worlds, Portals, Questing
    2. Up and coming dev milestones.
      1. By mid-June we expect to have our “vertical slice”, as our core gameplay experience is due to be submitted to our team and investors.
    3. Dates we can expect to hear from you guys (which can literally be an announcement of an announcement)
      1. Mid-June will be our next big announcement. We are stepping up our interaction with the community over the next four months with developer blogs and videos twice a month to showcase progress and let you guys see all the fun stuff that we are all excited about!
    4. When should we be hearing about the “plan for coms”.
      1. You just heard about them in the last question. =)
        1. 2 x a month with developer blogs and videos
  2. With regards to the roadmap:
    1. What features are expected to be in the alpha? When will it be released?
      1. Our focus for alpha is a strong, compulsive game loop built around our core features – explore, collect, create, and engage in combat – all within a multiplayer experience.
    2. What features are expected to be in the beta? When will it be released?
      1. As we progress to beta we will deepen and broaden our core features as well as add support for questing, portals and logic.
    3. What features are you aiming for with full release? When?
      1. Creating a robust feature set that supports an adventure sandbox game would not be complete without beginning to delve into the lore that brings our world to life. We will have a write-up from gameplay and design by the end of next month on what to expect.
    4. Will OSX/Linux usability be part of that release?
      1. Unlikely. The core parts of the engine that make it Windows-specific have been abstracted behind layers which will allow us to more easily approach OSX and Linux, and supporting OpenGL also moves us towards that goal. Unfortunately, as with many things in development, you cannot be 100% confident in anything unless you can see it working. We do not actively test on OSX or Linux.
    5. When will people start getting access to v2? Does anyone outside of NK have access already?
      1. A select group of testers will be using the software during development to assist with bugs and feedback. Please note, this is not an early access release, this is a development release. We are looking forward to getting the software ready for our early access users and as we get closer to being able to provide information on a time frame you all will be the first to know.
    6. Will you need to sign up for the Alpha?
      1. Over the next few months we will be better positioned to start putting a date to when we can expect to have a closed release to the public. Our team is excited to bring people on board for the experience and just as soon as we can, we’ll let everyone know. 
  3. You mentioned “Playtest Fridays” on your youtube a few years back; do you still plan to do something like this?
    1. As we move closer to our Alpha we can get back to our playtest Fridays. In the interim, we have demo day Fridays where developers have the opportunity to showcase their work to the team. These works will make their way to the community in the coming weeks and months and give you all a better insight to what we are currently working on.

Lore

  1. How much has the core lore/story concept changed since the kickstarter?
    1. Not significantly at all; in fact, we actively keep to the framework of the lore when defining design and various aspects of gameplay.
  2. Are the entities (gods?) from this post still part of the game/story?
    1. There are still god-like characters that the player will learn about, and we’ll write more about them in a future post.
  3. Why was the gem removed from seed’s hands?
    1. As the lore has changed, removing the gem became necessary. More will be revealed at a later date.
  4. Is Tuggles still going to be used?
    1. Not likely.

image

Radial Menu UI prototype

Have a great weekend and don’t forget to follow the leads!

Cambo

Engine Programmer Lead @JoshuaBrookover

Gameplay Lead @Scriptslol

