Posted on September 1st, 2017

Hi, everyone. This is John Jackson, one of the engine devs at Nerd Kingdom who is currently responsible for working on the latest iteration of our material system. For those of you who are new to the concept, materials are a collection of textures, properties, and shaders that, when used in combination, describe how a particular object is to be rendered on screen. I’m going to discuss very briefly how our previous material system worked, what it was lacking, what we ultimately wanted out of our materials, and how that’s being implemented.

Previous Material System

As stated before, materials are a collection of various items working in tandem to describe how any given game object should be rendered to the screen. In the simplest terms possible, for each object or collection of objects (in the case of instancing), we grab all of the textures and material-specific properties to be used, “bind” the shader to be used with this particular material, and submit everything to the graphics card to render our game object. As our artists follow a PBR (Physically Based Rendering) workflow, we had a fairly clear understanding early on of what our materials would need regarding all of these items.

Originally, all of our materials consisted of the following base properties (a rough sketch of this layout follows the list):

  • Array of textures for each of the given properties corresponding to a typical PBR material
    • Albedo – the base color of a given material absent any lighting information
    • Normal – the perturbed, per-pixel normal information of the material
    • Metallic – the “metalness” of a material at any given pixel (typically either 0.0 or 1.0)
    • Roughness – describes the microfacet variations across a surface and is used to calculate the diffusion of light (the rougher the surface, the more diffuse)
    • Ambient Occlusion – how much or how little ambient light is to affect the surface
    • Emissive – how much self-illumination the surface emits
  • A 1:1 corresponding array of colors that can be used to adjust any of the above textures to alter the final result
  • Shared shaders between materials:
    • Static Deferred Geometry
    • Instanced Deferred Geometry
    • Skinned Geometry
    • Transparent Geometry (Forward Rendered)
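
To make that memory layout concrete, here is a rough C++ sketch of what a fixed-layout material like this might look like. The type and field names are hypothetical illustrations, not our actual engine code.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a fixed-layout PBR material: every material carries
// every slot, whether the asset actually needs it or not.
enum class TextureSlot : std::uint8_t {
    Albedo, Normal, Metallic, Roughness, AmbientOcclusion, Emissive, Count
};

struct Color { float r, g, b, a; };

struct FixedMaterial {
    // One texture handle per PBR property, always allocated.
    std::array<std::uint32_t, static_cast<std::size_t>(TextureSlot::Count)> textures{};
    // 1:1 tint colors used to adjust each texture's contribution.
    std::array<Color, static_cast<std::size_t>(TextureSlot::Count)> colors{};
    // Index into the small set of shared shaders (static, instanced, skinned, forward).
    std::uint32_t sharedShaderId = 0;
};
```

Every material pays for all of the texture slots and colors up front, which is exactly the waste discussed in the next section.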

With just these colors, textures, and shared shaders and not much else (a few other properties are not listed here), it was fairly easy to satisfy most of the requirements given by the artists for the game objects they needed to represent – anything from rocks to glowing swords to glass windows. Of course, there were still many limitations to this system that became more and more apparent as time went on.

Issues / What was lacking

Firstly, regardless of the needs of a particular material, every single one was required to have these given properties. You have no need for metallic, roughness, or emissive maps and a constant value defined in a shader will suffice? Tough. Your material will still have these arrays for textures and colors and we will still have to allocate memory for it.

This might not seem like too big a deal at first, but as an example to demonstrate the concern this causes an engine programmer, let’s assume we have a simple material and all it needs are Albedo and Normal texture maps to achieve the desired effect. Using this current material system, we’ve just wasted space due to 4 other pre-allocated texture slots as well as 6 pre-allocated colors.

Secondly, as versatile as this set-up potentially is for most materials, it’s still limited due to being a basic surface material. What do I mean by that? If you remember, all of these materials are restricted to a few shared shaders that the engine team has written for the artists, and these shaders are ultimately responsible for telling your graphics card how to take all of the input textures and properties given to it and draw the object to the screen. What’s the problem with this? Well, what if none of the shaders have instructions for implementing a very custom, specific effect one of the artists requests? For example, what if I want a material that emits a blinking light based on a sine wave, or animates its vertices using a noise texture, or tiles its albedo texture depending on how close the camera is to a given pixel, or…?

Okay, hopefully it’s obvious to you that we’re missing out on some cool stuff now. So what do we do about it?

Well, if this previous material setup is to continue to be used and these desired effects are to be implemented, we have two basic options to choose from:

  1. Add the desired properties to the materials and shared shaders that have already been written.
  2. Derive a new material type from our base material that will hold all of these properties and create a completely different shader for this specific effect.

Both of these, while feasible and certain to work in the short term, have fairly significant problems in the long run.

If the first option is chosen, our materials have now become even more wasteful than before. For instance, simply wanting to scroll a texture in a material requires the material to hold a variable for panning speed, which is a two-float vector. Even this small variable means that all of our materials are now inflated by another 8 bytes, which obviously doesn’t scale well at all when you consider just how many more variables you’ll start to add for other effects.

The second option is actually what we originally implemented once certain effects were being requested. We have specific implementations written to handle animated materials, flipbook materials, and dissolve materials, and we even added parameters for wind controls. Each of these materials derives from our base graphics material class, each holds the specific properties required for its effect, and each corresponds to its own specific, handwritten shader that handles how it is to be rendered.

For a small number of materials for specific effects, this is a perfectly acceptable solution and has worked for us for a while. But as the number of requested effects continues to grow and experimentation becomes more and more desirable for different materials, this solution becomes very restrictive and time-consuming. Just as with the first option, this simply doesn’t scale to meet the desires of our team.

Inspiration / Design for new system

Sometimes finding guidance for how to implement a new system can be a challenging task, especially if you’ve never worked on anything similar to it before. Luckily, there are plenty of great engines out there that serve as a source of inspiration regarding design and features, so it didn’t take long to do some research regarding how other engines handle their own material implementations and compile a list of features that we wanted ours to have.

After discussions amongst the team and a preliminary planning/research stage, we had a fairly decent idea of what we wanted our material system to be:

  • Artist / Designer friendly material creation system that allowed for easy iteration and experimentation, preferably using a node-based editor that they are used to using in other design/creation suites
  • More memory efficient than previous implementation (no unnecessary properties held in materials that didn’t need to use them)
  • Clear workflow that is easily scalable and preferably as data-driven as possible

New System Overview

Essentially, the new system works like this:

  • A shader graph, which is the backbone of the entire system, is created and edited using a custom shader graph editing tool. This tool is a node-based editor very similar to how Unreal Engine’s Material Editor or Stingray’s Shader Graph Editor work.
  • Upon creation, shader graphs are compiled and are responsible for parsing their node network to generate the required shader code as well as any corresponding properties for the material to be rendered correctly.
  • Once successfully compiled, a shader graph can be assigned to a material. Any number of materials can share the same shader graph.
  • Upon material creation, materials are assigned a default shader graph by the engine, which can either be made unique and edited or simply be replaced by an existing shader graph.
  • Once assigned, the material has the option to use the default properties given by the shader graph or “override” the properties with its own.

Shader Graphs

All shader graphs are defined by a collection of nodes that together describe how a particular material and its shaders are to be created. Each of these nodes has its own functionality and is ultimately responsible for outputting some snippet of predefined shader code. As there was a strong desire to make this system as data-driven as possible, all of the nodes are defined in a node template file which is used by the engine to fill out specific, unique data for each evaluated node in the graph. This makes it very easy to tweak the behavior of already established nodes or create new templates very quickly.

Node Templates

Template definition for a ConstantVec2Node.

Shown above is an example of what one of these node templates looks like. Most of the properties should be fairly self-explanatory, such as “NumberInputs”, but fields like “VariableDeclaration” and “VariableDefinition” require some explanation.

All nodes used in the shader graph, once evaluated, boil down to variables of the type defined by their output. For instance, a ConstantVec2Node will evaluate to some variable of type vec2, as is illustrated above under the “Output” section of the template.

For each node to be evaluated, we must declare its variable equivalent in the header of our generated shader as well as define this variable in the main body of our shader. This is what these two sections are responsible for doing. Obviously simply declaring and defining a vec2 is trivial, but using this system it is possible to define whole blocks of complicated shader code under the “VariableDeclaration” and “VariableDefinition” sections of the template.
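
Since the template image itself may not come through here, below is a rough sketch of how those fields might map to an in-memory structure on the engine side. The struct layout and the ConstantVec2Node strings are my own illustration with hypothetical names, not the engine’s actual template format.

```cpp
#include <string>
#include <vector>

// Hypothetical in-memory form of a node template. The markup tokens
// (#NODE_NAME, #INPUT(...)) are placeholders replaced at graph-evaluation time.
struct NodeTemplate {
    std::string name;                    // e.g. "ConstantVec2Node"
    int numberInputs;                    // "NumberInputs"
    std::vector<std::string> inputs;     // "Inputs", e.g. {"X", "Y"}
    std::string output;                  // "Output" type, e.g. "vec2"
    std::string variableDeclaration;     // emitted into the generated shader's header
    std::string variableDefinition;      // emitted into the generated shader's main body
};

// Illustrative ConstantVec2Node template: declares a vec2 in the header and
// defines it from the node's two scalar inputs in the body.
const NodeTemplate kConstantVec2Node{
    "ConstantVec2Node",
    2,
    {"X", "Y"},
    "vec2",
    "vec2 #NODE_NAME;",
    "#NODE_NAME = vec2(#INPUT(X), #INPUT(Y));",
};
```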

Node Template Mark-up

Also notable are the sections of the node template that contain a custom-defined markup language that is replaced upon evaluation of the shader graph. For instance, every node has a unique name that is also used as its shader variable name, so any time the system sees the markup #NODE_NAME it knows to replace this text with the given name for that particular node. The #INPUT() markup looks at the “Inputs” field of the template and uses the specified input fields in the parentheses as replacements (in this instance, the “X” and “Y” fields respectively). There are many others, such as #G_WORLD_TIME for the engine’s global world time, as well as markup for vertex, camera, and pixel information.
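
To make that replacement pass concrete, here is a small, self-contained sketch (hypothetical code, not our actual evaluator) that expands the definition line of a ConstantVec2Node-style template for a node named uvOffset whose X and Y inputs evaluate to constants:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>

// Replace every occurrence of `token` in `text` with `value`.
static void ReplaceAll(std::string& text, const std::string& token,
                       const std::string& value) {
    for (std::size_t pos = text.find(token); pos != std::string::npos;
         pos = text.find(token, pos + value.size())) {
        text.replace(pos, token.size(), value);
    }
}

int main() {
    // Definition line from the template, with markup still in place.
    std::string definition = "#NODE_NAME = vec2(#INPUT(X), #INPUT(Y));";

    // Values the graph evaluation would supply for this particular node.
    const std::unordered_map<std::string, std::string> replacements = {
        {"#NODE_NAME", "uvOffset"},
        {"#INPUT(X)", "0.5"},
        {"#INPUT(Y)", "1.0"},
    };

    for (const auto& [token, value] : replacements) {
        ReplaceAll(definition, token, value);
    }
    std::cout << definition << "\n";  // prints: uvOffset = vec2(0.5, 1.0);
}
```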

Material Size

All materials following this new system have now been reduced to two main properties (sketched below):

  • The shader graph that it references for all of its default properties and shaders to be used
  • A hashmap of property overrides
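
As a rough sketch with hypothetical type names, not our actual classes, that amounts to something like:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <variant>

// Hypothetical property value: a scalar or a texture handle (real materials
// would support more types, e.g. vectors and colors).
using PropertyValue = std::variant<float, std::uint32_t>;

// The slimmed-down material: a reference to its shader graph plus only the
// properties it actually overrides.
struct Material {
    std::uint64_t shaderGraphId = 0;                          // supplies shaders + defaults
    std::unordered_map<std::string, PropertyValue> overrides; // sparse overrides only
};
```

A material that overrides nothing pays only for the graph reference; every overridden property is an explicit entry in the map rather than a pre-allocated slot.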

Editor

The shader graph editor, though still at a very early stage, can already be used to create and edit the shader graphs used for materials. The final output for the material is the “MainNode”, which can be seen in the picture below.

Example shader graph in editor. Two input nodes on left connecting to MainNode on right.

Here we have an example of a shader graph that creates the simple material I was describing at the beginning of this post, requiring only texture inputs for the albedo and normal channels. All other channels will use default values that are constants within the shader and therefore don’t require any extra storage.

Small collection of available node templates.

Examples of Graphs and Materials

My hope is that moving forward with this new material system will allow our artists and designers to explore and iterate on more and more creative options.

Below are some examples of material effects that I threw together to show off what can be done with the new material system. What’s most important to me is that iterating on these was very easy and each took only a small amount of time to create.

 

Animated emissive material applied to rock

 

Scrolling material with texture tiling

 

Floating cloud with vertex animations based on object’s world space position and sin wave

 

Have a great weekend!


Posted on July 28th, 2017

Greetings! 5ubtlety here, systems designer at Nerd Kingdom and the programmer behind the player character controller. I’m here to discuss our approach to designing and implementing this critical component that tethers the player to the world. There are simply too many low-level implementation details to fully expound upon in this post. Instead, we’re going to get a high-level overview of the considerations to take into account and challenges that can be faced when developing a character controller.


So what is a character controller? The character controller is simply responsible for moving the player avatar through the game world. This entails translating player input into movement mechanics and processing any game-specific movement dynamics. A character controller will also need to be able to detect and react to its environment, check for whether the character is on the ground or airborne, manage character state, and integrate with other game systems, including physics, animation, and the camera.

There are two main types of character controllers: dynamic (entirely physics-driven) and kinematic (non-reactive collisions). Character controller implementation is highly game-specific, but most games opt for kinematic controllers. Very few games aim for complete physical realism. Most have stylized physical reactions tailor-made to feel great within the context of their gameplay environments.

The character controller is modeled as a capsule-shaped rigid body. The rounded capsule shape helps the character controller slide off of surfaces as it moves through the environment. It’s affected by gravity and constrained by terrain and other colliders by the physics engine. The orientation of the capsule is locked to an upright position, but may be manually adjusted in special cases, such as handling acceleration tilt, which pivots the character around its center of mass based on acceleration. Unless handled by the physics engine, movement will need to be projected onto the ground plane so the character can properly move up and down terrain.

Raycasts (and other geometric casts) are your main tool for sensing the environment immediately around the character controller so you may react properly. These casts give information such as direction and distance to the nearby colliders as well as their surface normals.
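
As a minimal sketch of how those cast results get used, assuming a hypothetical GroundHit struct filled in by whatever query your physics engine exposes, a grounded check and the ground-plane projection mentioned above might look like this:

```cpp
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

inline float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Result of a downward ray/shape cast from the capsule base, as reported by
// the physics query (names here are illustrative).
struct GroundHit { bool hit; float distance; Vec3 normal; };

// Grounded if the cast hit something within a small tolerance below the feet.
bool IsGrounded(const GroundHit& ground, float groundedTolerance = 0.05f) {
    return ground.hit && ground.distance <= groundedTolerance;
}

// Project the desired move onto the ground plane so the character follows
// slopes instead of ploughing into them or briefly leaving the ground.
Vec3 ProjectOntoGround(const Vec3& desiredMove, const Vec3& groundNormal) {
    return desiredMove - groundNormal * Dot(desiredMove, groundNormal);
}
```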


In open-world games, movement is typically the principal mechanic that consumes the player’s time. Therefore, movement needs to feel great as the player navigates through the world. Minimally, it needs to be functional, responsive, and intuitive. Depending on the game, you may have secondary goals such as allowing for greater player expression, or aiming for a high degree of player control and precision, such as in a fast-paced platformer. Often, trade-offs will need to be made, so there is no universal solution to these matters. For example, consider the following graphic, in which the left figure has a flat velocity while the right figure applies acceleration. The left figure allows for a higher level of precision in movement, while the right is more realistic and may look and feel better in certain game contexts.

Image Credit: Marius Holstad, Source

Every game is going to have specific movement affordances, such as the ability to sprint, swim, double jump, wall-run, grab ledges, climb ladders, swing from ropes, etc. Every new verb added to this list can vastly expand the player’s mobility. Defining these is just the beginning though. There is much nuance in how they are used, how they feel, and how they interact with other game elements.

Even if all your character can do is simply move side to side and jump, you’re quickly going to run into “The Door Problem” of Game Design. Here are a few of the questions you might start asking:

  • How fast should the player move? What are the maximum and minimum movement speeds? Can the player choose to move at intermediate values?
  • Can the player stop and pivot on a dime?
  • Should the player accelerate and decelerate over time? How quickly?
  • Will your game have different kinds of terrain that affect player movement, such as quicksand or ice?
  • How do the character controller and animation system interact with one another?
  • What size and shape should the character’s collider be?
  • Can the character push or be pushed by other objects when they press against one another?
  • What kind of environmental geometry does your world feature? Sharp and flat edges, or organic, bumpy terrain?
  • Is the player able to walk up slopes? What are the minimum and maximum inclines?
  • How about steps? What are the minimum and maximum height?
  • Is movement speed slower when walking uphill?
  • Is controller input supported? How will input be handled differently between a keyboard and analogue stick?
  • How does the camera follow the player?
  • How high can the player jump? Are running jumps higher?
  • Is momentum conserved when jumping?
  • What should the force of gravity be? Is this the only factor that determines the player’s fall speed?
  • Is there air friction (drag)?
  • Should the character have a terminal velocity?
  • Does the character have a momentary hang-time at the jump’s apex, or does it immediately begin decelerating downwards?
  • Can the player jump higher by holding the jump button longer?
  • Does the player have any amount of air control, or is mid-air input simply ignored?

This is just the beginning. As development progresses, new questions and issues will arise as environmental variables impose new constraints on the initial design. You should develop your controller gradually, making steady incremental improvements. In our case, we developed a playground scene where we can test our iterative approach in a consistent, controlled environment. Spoiler Alert: Most of your development time is going to be addressing engine and game-specific edge cases!


Following are some features we explored while prototyping our character controller. Note that not all of these elements will be relevant to every game.

Camera-Relative Movement

In most 3rd-person perspective games, movement is relative to the camera rather than the avatar, which is more intuitive for the player to process. Some games intentionally break this convention to impart a feeling of vulnerability.
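
A minimal sketch of the idea, assuming a Y-up world and a yaw angle supplied by the camera system; the axis and sign conventions here are illustrative and depend on your coordinate system:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Turn 2D stick/key input into a world-space move direction relative to the
// camera's yaw, so pushing "up" always moves away from the camera.
Vec3 CameraRelativeMove(float inputX, float inputY, float cameraYawRadians) {
    // Camera basis vectors flattened onto the ground plane.
    const Vec3 forward{std::sin(cameraYawRadians), 0.0f, std::cos(cameraYawRadians)};
    const Vec3 right{forward.z, 0.0f, -forward.x};
    return {right.x * inputX + forward.x * inputY,
            0.0f,
            right.z * inputX + forward.z * inputY};
}
```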

Motion Alignment

When moving, the pawn automatically pivots over time (with some smoothing) to align with the movement direction.

Image Credit: Marius Holstad, Source
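
One simple way to express motion alignment (an illustrative sketch rather than our exact implementation) is to turn the current yaw toward the yaw of the move direction at a clamped rate each frame:

```cpp
#include <algorithm>
#include <cmath>

// Wrap an angle into the [-pi, pi] range so we always turn the short way around.
inline float WrapAngle(float a) {
    while (a >  3.14159265f) a -= 6.28318531f;
    while (a < -3.14159265f) a += 6.28318531f;
    return a;
}

// Rotate currentYaw toward the yaw of the movement direction, limited by a
// maximum turn rate (radians per second). Call once per frame while moving.
float AlignToMotion(float currentYaw, float moveDirX, float moveDirZ,
                    float turnSpeedRadPerSec, float dt) {
    const float targetYaw = std::atan2(moveDirX, moveDirZ);
    const float delta = WrapAngle(targetYaw - currentYaw);
    const float maxStep = turnSpeedRadPerSec * dt;
    return WrapAngle(currentYaw + std::clamp(delta, -maxStep, maxStep));
}
```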

Jump Input Buffering and Latency Forgiveness

This helps with jump timing in the case the player presses the jump button a few frames before actually reaching the ground. Additionally, this permits the player to execute a jump even if they pressed the button immediately after walking off a ledge and consequently entered the airborne state. This pattern can be applied to other kinds of character input as well.
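
Both forgiveness windows can be expressed with two simple timers. The sketch below is illustrative; the window lengths are placeholder values, not our tuned numbers.

```cpp
// Jump input buffering plus a short "coyote time" window: the jump fires if the
// button was pressed shortly before landing, or pressed shortly after the
// character walked off a ledge and became airborne.
struct JumpHelper {
    float bufferTime = 0.15f;   // seconds a press stays valid before landing
    float coyoteTime = 0.10f;   // seconds after leaving the ground a jump still counts
    float timeSincePressed = 1e9f;
    float timeSinceGrounded = 1e9f;

    // Call once per frame; returns true when a jump should be executed.
    bool Update(bool jumpPressedThisFrame, bool grounded, float dt) {
        timeSincePressed  = jumpPressedThisFrame ? 0.0f : timeSincePressed + dt;
        timeSinceGrounded = grounded             ? 0.0f : timeSinceGrounded + dt;

        if (timeSincePressed <= bufferTime && timeSinceGrounded <= coyoteTime) {
            // Consume both windows so one press can't trigger two jumps.
            timeSincePressed = 1e9f;
            timeSinceGrounded = 1e9f;
            return true;
        }
        return false;
    }
};
```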

Air Control

This allows the player to adjust their airborne velocity, but with reduced effect.

Animation

  • Animation Blending
  • Upper/Lower Body Animation Layers
  • Root Motion Control
    • Adjust capsule position and/or orientation as a result of playing certain animations.
  • Inverse Kinematic Limb Placement
    • Place feet when walking/running. Particularly useful for steps and slopes.
    • Place hands when climbing or interacting with game objects.
  • Intelligent Ragdolls

Spline-Stepping

This assists in elevating the character up detected steps by smoothing movement along a curved spline over a short period of time.

Here is a prototype of our character controller walking up some stairs in our playground scene.

Ground Normal Smoothing

This will eliminate anomalies in ground normal calculation by performing multiple raycasts at various sample points at the base of the player’s capsule and averaging the results to calculate the final ground normal. The resultant vector is then smoothed between consecutive frames.

Here is a prototype of our character controller walking over rounded surfaces in our playground scene.
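
A rough sketch of the averaging and frame-to-frame blending, assuming the individual raycast results are gathered elsewhere; the names and the blend style are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Given the surface normals returned by several downward raycasts around the
// base of the capsule, average them, then blend the result toward last frame's
// normal to damp frame-to-frame jumps.
Vec3 SmoothedGroundNormal(const std::vector<Vec3>& hitNormals,
                          const Vec3& previousNormal, float blend /* 0..1 */) {
    Vec3 target{0.0f, 1.0f, 0.0f};  // default to "flat ground" if nothing was hit
    if (!hitNormals.empty()) {
        Vec3 sum{0.0f, 0.0f, 0.0f};
        for (const Vec3& n : hitNormals) { sum.x += n.x; sum.y += n.y; sum.z += n.z; }
        const float inv = 1.0f / static_cast<float>(hitNormals.size());
        target = {sum.x * inv, sum.y * inv, sum.z * inv};
    }
    // Move partway from last frame's normal toward the new average, then re-normalize.
    Vec3 v{previousNormal.x + (target.x - previousNormal.x) * blend,
           previousNormal.y + (target.y - previousNormal.y) * blend,
           previousNormal.z + (target.z - previousNormal.z) * blend};
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : target;
}
```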

Slope Fatigue System

Any slope above a certain threshold incline will induce “slope fatigue” in the player over a short period of time. The more fatigued the player is, the more slowly he will ascend the surface in the upward direction of the incline. After a certain amount of fatigue has accumulated, based on slope steepness, the player will begin sliding down the slope. Slope fatigue will recover once the player is on a more level surface.
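
Sketched very roughly, with placeholder constants rather than our tuned values, the bookkeeping might look like this:

```cpp
#include <algorithm>

// Illustrative slope-fatigue bookkeeping: above a threshold incline, fatigue
// accumulates over time and scales down uphill speed; past a limit the
// character starts sliding, and fatigue recovers on more level ground.
struct SlopeFatigueResult { float uphillSpeedScale; bool shouldSlide; };

struct SlopeFatigue {
    float fatigue = 0.0f;            // 0 = fresh, 1 = fully fatigued
    float thresholdDegrees = 35.0f;  // inclines below this never fatigue
    float accumulatePerSecond = 0.5f;
    float recoverPerSecond = 0.8f;

    SlopeFatigueResult Update(float slopeDegrees, float dt) {
        if (slopeDegrees > thresholdDegrees) {
            // Steeper slopes fatigue faster.
            const float steepness =
                (slopeDegrees - thresholdDegrees) / (90.0f - thresholdDegrees);
            fatigue += accumulatePerSecond * (1.0f + steepness) * dt;
        } else {
            fatigue -= recoverPerSecond * dt;
        }
        fatigue = std::clamp(fatigue, 0.0f, 1.0f);
        // Fully fatigued means no uphill progress and the start of a slide.
        return {1.0f - fatigue, fatigue >= 1.0f};
    }
};
```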

Wall Avoidance

Automatic wall avoidance allows for smoother steering behavior when walking around walls and corners. The character controller raycasts ahead in the direction of movement to detect walls and other obstructions that would block movement. If detected, and the angle of incidence is shallow, the player is steered away from the surface. On the left side of the following image, the player sticks to the wall as he brushes against it. On the right side of the image, the player gently slides off the surface as his steering is adjusted.

Credit: Marius Holstad, Source
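
A simplified, ground-plane-only sketch of that steering adjustment; the wall query itself and the thresholds are assumptions, not our actual values:

```cpp
#include <cmath>

struct Vec2 { float x, z; };  // direction on the ground plane

inline float Dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.z * b.z; }

// If a forward raycast reported a wall and the angle of incidence is shallow,
// bend the move direction away from the surface so the character slides along
// it instead of sticking. moveDir and wallNormal are assumed to be unit length.
Vec2 SteerAroundWall(const Vec2& moveDir, bool wallAhead, const Vec2& wallNormal,
                     float shallowAngleCosine = 0.35f) {
    if (!wallAhead) return moveDir;
    // How directly we are heading into the wall: 1 = head on, 0 = parallel.
    const float into = -Dot(moveDir, wallNormal);
    if (into <= 0.0f || into > shallowAngleCosine) return moveDir;  // head-on: let collision handle it
    // Remove the into-the-wall component, keeping the along-the-wall component.
    Vec2 slide{moveDir.x + wallNormal.x * into, moveDir.z + wallNormal.z * into};
    const float len = std::sqrt(slide.x * slide.x + slide.z * slide.z);
    return len > 0.0f ? Vec2{slide.x / len, slide.z / len} : moveDir;
}
```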

Analogue Input Processing

Analogue movement input from a thumbstick is a very different way of controlling direction and speed than the keyboard’s 8-way digital input. In order to sanitize this raw axis data and map it to movement inputs the controller can read, we filter it through dead zones and interpolate the results; a rough sketch of this processing follows the list below.

  • Inner Dead Zone
  • Outer Dead Zone
  • Radial Dead Zone
  • Range Mapping
  • Non-Linear Interpolation

Image Credit: Ryan Juckett, Source
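
As a rough sketch of that kind of processing, here is a radial dead zone with range re-mapping and a simple non-linear response curve; the dead-zone sizes and the curve are illustrative, not our exact values:

```cpp
#include <algorithm>
#include <cmath>

struct Stick { float x, y; };

// Radial dead zone with range re-mapping: input inside the inner dead zone is
// ignored, input beyond the outer dead zone counts as full deflection, and
// everything in between is rescaled to 0..1 along the stick's own direction.
Stick ProcessStick(Stick raw, float innerDeadZone = 0.2f, float outerDeadZone = 0.95f) {
    const float magnitude = std::sqrt(raw.x * raw.x + raw.y * raw.y);
    if (magnitude < innerDeadZone) return {0.0f, 0.0f};

    // Re-map [inner, outer] onto [0, 1] and clamp anything past the outer zone.
    const float mapped = std::clamp(
        (magnitude - innerDeadZone) / (outerDeadZone - innerDeadZone), 0.0f, 1.0f);

    // Simple non-linear response curve for finer control near the center.
    const float curved = mapped * mapped;

    // Rescale along the original direction.
    const float scale = curved / magnitude;
    return {raw.x * scale, raw.y * scale};
}
```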


Hopefully this post provided some insight into the design and implementation of character controllers and some of the considerations to take into account when developing one. The bottom line is that there is no one right solution that works in all situations. Every game’s needs are very different and developing a solid character controller is going to largely be an iterative process of discovery and polish. The final 10% is what separates a clumsy, buggy controller from a responsive one that works well and immerses the player. This is one of the game’s most critical components that the player continually interfaces with during gameplay. It can easily make or break an entire game, so take the proper time and make it feel great!

 


Posted on June 30th, 2017

Hey everyone!

This update will be short as I don’t have many screenshots or prototype videos to share today.  However, we did prepare a 30-minute playthrough video for you all! Since the last progress update, we’ve been working hard on polishing our current features and systems. We’ve made incredible progress this month but we still have a long way to go. Have a great weekend and 4th of July!


Posted on June 16th, 2017

Hello everyone! The name is Duane and I am the Animation Director here at Nerd Kingdom.

During my lengthy career in game development, I have certainly been here before.  Well, not really here, as here is a bit different.  However, in some aspects, it is almost entirely the same.  The outstanding difference is the Eternus Game Engine currently under development at Nerd Kingdom.  Built from the ground up, Eternus holds the promise of a groundbreaking game development engine and tools upon which its flagship product will be developed.  So, it’s deja vu all over again…or is it?

The first time I heard the term “virtual reality” was when I began my career in 1994 as a Lead Animator at Virtual World Entertainment (VWE). The Chicago game studio made two products, a first-person shooter (FPS) “walking tank” game called Battletech and a “hovercraft racing” title called Red Planet.



Each product was built from the ground up on a proprietary game engine, completely unique to the requirements of gameplay for a multiplayer FPS and space-based racing title respectively.  Each engine included its own set of development tools and export processes, designed and built with essential integration toward the support of an efficient iterative creative process.  Nothing was borrowed or modded, and middleware was non-existent.  All of it was brand new, completely from scratch.  (Ok, truth be told, some code between Battletech and Red Planet was recycled.  But, I’m trying to make a point here.)

Fresh out of college, I was the studio’s first and only Lead Animator and it fell to me to collaborate with a newly hired Junior Programmer to design, test, and implement an integrated LOD Texturing tool.  The sky was the limit and… “What the hell is an LOD anyway?”

So, there I was, tasked with one of the most important art tools for Battletech’s and Red Planet’s CG art development.  Not because I was particularly suited for the role, but because I was “the new guy” and no one else wanted the job.

If you’ve ever wanted to make games for a living while knowing nothing about the process, then you know exactly where I was when I began my career.  Lucky for me, this first challenge was a remarkable Art Tools design experience and quite an education.

Trial by fire, I learned how to make LODs by hand expeditiously, a method of reducing an object or character’s total number of polygons while maintaining its shape and silhouette.  I made four Levels of Detail (LOD) for each of the 20+ Mechs (aka “walking tanks”) and 12+ VTOL (“vertical take-off and landing”) racing craft.  That’s 128 LODs plus the original 32+ models.

Then, I learned about creating UV Maps followed by applying textures via Planar Projection mapping for the many texture groups within a single model.  At the time, Planar Projection mapping was all that this tool would provide.

The number of texture groups per model was enormous.  I had to rotate and place each Planar Projection, an intermediate object represented by a 3D Plane, over every single polygon group or group of facets (aka “face group”).  It was meticulous work.  But then, that’s why we were developing the LOD Texturing tool in the first place: to expedite this laborious process.  Ultimately, our efforts allowed Artists to texture any 3D model and all of its LODs based solely on the original model’s UV textures.  It was a profound success and increased my passion for making games and inventing game development technologies in general.


By the way, is it really work if you love what you do for a living?  For me personally, animating for games is truly a dream come true.  I remember when a Tippett Studios’ VP at Siggraph once said, “These guys will work for nothing and do it all night long.  They love it!  They’re gamers and artists.”  I thought, “Holy sh*t, she knows our secret!”  But, it’s true.  Game developers will work long after their salaries have exhausted a full day’s work.  We are habitual over-achievers with a relentless work ethic.  Like some kind of digital junkie, looking forward to that next first moment of realized innovation in VR immersion.  It’s addictive!  That’s why most of us look the way we do…trying to score that next (top-selling) digital hit.  Thank God mobile game development offers the same euphoric effects in smaller doses.  And, with the recent debates over VR/AR/MR, virtual reality, augmented reality, and mixed reality respectively, the digital chug-wagon continues.

I remember when I was in college, learning Alias|Wavefront software on a Thompson Digital Image machine back in the early 90’s.  No one knew what they were doing.  The teachers that were teaching the 3D Art and Animation curriculum at Columbia College Chicago had no clue what 3D was or even how to teach it.  Every student dove into the manuals and surpassed their instructors before the end of the second week, too impatient to watch some “old dude” struggle to understand the poorly written tutorials.

Anyway, I digress, back to the topic at hand.


Other things that haven’t changed in game development for decades?  How ’bout the division of labor across three main groups – Programmers, Designers, and Artists.  At VWE, I learned about five disparate teams the studio employed in their game development process – Owner/Managers, Programmers, Designers, Artists/Animators, and Testers.  And that right there was the pecking order by status and salary.  How little has changed in the industry as a whole.

Each of these teams worked in silos as focused but independent specialists prior to pre-production and were brought together as one homogenized unit as the pre-production “vertical slice” neared completion.  No, “vertical slice” has nothing to do with bread or ninja skills – Google it.

Over the years, the terminology for “development meetings with prioritized schedules or milestones” mutated into words like Sprint, Scrum, Agile, and Agile/Scrum.  Call it what you like, it has been the same process since the dawn of game development.  In its most basic form, it goes something like this – create a series of meetings based on a prioritized schedule of milestones around the topics of concepts/game ideas, dev, design, art, scope, and schedules.  Then, build and test the plethora of advancing software.  This is usually followed by cycles of wash/rinse/repeat.  Critical to the successful development of this cycle is smart, honest decisions by talented and experienced key team members…and yadda, yadda, yadda – it’s boring stuff, but absolutely necessary.

Another enduring oddity in game development is something called “studio culture”.  Here’s a checklist of things that, in my experience, have existed in every studio I’ve ever worked for:

  • Very smart, technical/analytical, problem-solving academics who love games and are “kids at heart”
  • A fascination with technology trends, games, movies & music, art & animation, and science fiction/fantasy
  • Communal eating spaces/kitchens with free drinks – a game developer’s divine right
  • Tattoos, piercings, long hair.  Occasional bad hygiene?  Perhaps.
  • Action figures
  • Nerf guns
  • Darkened work spaces that are quiet, but at times rowdy on a good day (aka a productive day)
  • Flexible 8-hour work schedules
  • Casual clothes – bare feet (aka sandals or flip-flops), bare legs (aka shorts), baseball caps, and enigmatic t-shirts
  • Mentions of manga/anime, Weird Al (Yankovic) for some reason, and anything sci-fi…most likely a Star Wars reference

And then, there’s the “proximity task”.  Happens all the time in game development.  It can usually fall to the person who is simply absent at the wrong time during a formal team meeting.  But when it’s an informal discussion, simply sitting at your desk near one can get you saddled with a task that no one wants.  Like today, for example, when I was asked to write this blog.  Happy reading!

By the way, if you’ve made it this far into the article, then bless you for your unwarranted attention.  You are a saint!  Take heed, I’m almost done.

One last thing that is ever present in this industry is the abundance of proprietary processes developed and never shared by the multitude of game developers the world over.  With most new games, and especially with innovative immersive AR/VR experiences on new hardware, a new engine, SDK, and game product are under simultaneous development.  In my experience, the lineage of this simultaneous development started on PC, followed by the original Xbox console, then Xbox 360, Kinect, HoloLens, and Magic Leap.

And now, finally, “Back to Eternus”.  Sounds like a great sci-fi epic, doesn’t it?

Here at Nerd Kingdom, I ran into an old friend of mine not mentioned above, good ol’ Mister Frame Rate.  “How have you been, Old Chum?  It’s been awhile.  Wife and kids?  Goooood.”  Ever the divisive arbiter of quality graphics versus render speed, Frame Rate could often be an elusive collaborator.  But last week, he sauntered up to me with a drink, “Here, knock this back.  Oh, I forgot. You don’t drink. (Chug! Slurp.)  Let’s talk, shall we?”

So, after closing time, there we were, old Frame Rate and I, talkin’ ’bout the Good Ol’ Days and the mischief he put me through as a Director of Animation under fire for the largest memory footprint that character animation had ever occupied in VWE’s history.  Now, I can’t say that I remember those days with as rosy a resplendent recall, but I do remember the relief I felt when we were able to solve the issue with a technical art solution, an animation export tool, that we could all agree upon.

Allow me to blather on in detail about this very familiar topic.  In the early days of game development, when you would export a character animation for a game, whether authored in Maya, 3D Studio Max, or some other CG software of choice, the animation asset was exported as a linear keyframe for every frame of motion exhibited by each joint or node in a character’s skeletal hierarchy, regardless of whether its value changed or not, for the duration of the motion.

Well, as we research a popular export format, we find that it produces a similar result – a keyframe on every frame.  And so, it’s not surprising that discussions about frame rates and reducing file sizes have stirred this air of frame rate nostalgia.  Suffice it to say, there is a lot of keyframe data that can be filtered and omitted from animation assets, which will reduce the size of every animation file, thereby reducing its memory footprint and load times and, in turn, increasing frame rate.

The last time I helped solve this puzzle, we decided upon a proprietary export tool that allowed the Technical Animator or Animator to provide an overall attribute value, as well as an attribute value per joint (per axis), to influence the total number of keyframes generated along a curve.  These attribute values would then produce a range of results, interpreting the motion (based on angle deviation) as anything from “a keyframe every frame” to “a reduced or filtered key set based on the degree of change along a curve” to “omitting keyframes completely”.

Said differently, the algorithm inspected the curve and re-created it as a slimmer version of itself (in bits).  Where there were more changes in value, more keyframes were exported or maintained along that portion of the curve.  Where there were fewer changes in value, the placement of keyframes was farther apart.  Whatever solution is devised for Eternus, we are certain to surpass the current state of our technology as of this writing.  And, I can’t wait to revisit that feeling of overwhelming accomplishment when the motion in-game is identical at less than half its original file size.
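
As a simplified sketch of that style of filter (not the original tool’s code), a per-joint, per-axis pass might keep only the keys that deviate from a straight line through their neighbours by more than a tunable threshold:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One rotation keyframe for a single joint axis. Keys are assumed to be sorted
// by strictly increasing time.
struct Key { float time; float angleDegrees; };

// Keep the first and last keys, plus any key whose value differs from the
// straight line between the last kept key and the following key by more than
// the threshold. A threshold of 0 keeps every frame; larger values filter
// more aggressively.
std::vector<Key> FilterKeys(const std::vector<Key>& keys, float angleThresholdDegrees) {
    if (keys.size() <= 2) return keys;
    std::vector<Key> kept;
    kept.push_back(keys.front());
    for (std::size_t i = 1; i + 1 < keys.size(); ++i) {
        const Key& prev = kept.back();
        const Key& next = keys[i + 1];
        // Value a straight line between the kept neighbour and the next key
        // would predict at this key's time.
        const float t = (keys[i].time - prev.time) / (next.time - prev.time);
        const float predicted = prev.angleDegrees + (next.angleDegrees - prev.angleDegrees) * t;
        if (std::fabs(keys[i].angleDegrees - predicted) > angleThresholdDegrees) {
            kept.push_back(keys[i]);  // the curve bends enough here to matter
        }
    }
    kept.push_back(keys.back());
    return kept;
}
```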

Oh, the nostalgia for innovative thinking.  All of it, in pursuit of making great gaming experiences with Eternus that will entertain and occupy the masses.  I guess you can go home again.

All that’s old is new again – for the first time.  May you enjoy playing our product in its many pre-launch versions.  And may the God of Shipped Titles smile upon us as we run head-long into the many game development cycles of deja vu and repeated timelines.  Wash. Rinse. Repeat. Game.

Have a wonderful weekend!


Posted on June 2nd, 2017

Hello everyone and happy Friday!

Today we’re excited to share the progress we have made in the past few weeks. Development of Eternus 2 is making great strides as we are now starting to streamline how integration works for the Art and Gameplay teams. For example, we can now bind directly to an Art-authored UI layout instead of Programmer placeholders. Our radial menu is now implemented and is going through more polishing as we continue to test it. You can check out the building prototype and new radial menu in the video below.

Importantly, it’s been almost 1 year since Eternus 2 development started! We have learned a lot as a team and will continue to grow as we keep moving forward.



 


Have a great weekend!

-Cambo


Posted on May 19th, 2017

Hi everyone!

Jake (theFlying3.14) here, Lead of Tool Development at Nerd Kingdom. Several powerful systems have begun to come online in the Eternus engine recently. To support these systems we’ve designed several tool prototypes to aid designers in creating content. Today I’d like to share one of the more important ones, a system that is being reused in multiple instances to provide a comprehensive, functional experience going forward: the Visual Node Programming platform, or VNP.

VNP is a node programming platform that allows users to script functionality across different aspects of the game. The system is already being used in a few early tool prototypes: the biome tool, the animation web, and an AI behavior scripter. Future tools such as the material editor, shader creator, and quest editor are planned for VNP implementations.

We developed VNP from the MIT-licensed ThreeNodes.js – a WebGL shader tool – and heavily reworked the basic data structures and assumptions built into the library. Although there is still a lot we would like to do with it, what we’ve ended up with gives us great scalability.

The Visual Node Programming platform exists as an abstract application that we employ within each tool implementation, customizing it to fit the context. This means that when you open the biome tool, you will be greeted with an experience similar to the animation web. In reality, however, each tool might need to operate slightly differently. For example, the biome system reads the node graph from right to left, whereas the animation system reads “state strings” from left to right. To accommodate this, each implementation of VNP has its own overrides of several fundamental objects: nodes, connections, and workspaces. This allows great flexibility when developing and updating tools built with VNP.

“So great, another node programming tool…”

Obviously we are not the first to do this. There are, however, benefits to a node programming system being used alongside Eternus that you don’t see in many other places. First, all of our current prototypes, including VNP, are written in JavaScript/TypeScript. This allows for extreme extensibility and accessibility versus platforms written in lower-level languages. Another aspect of node programming we wanted to tackle was large groups of functionality – trying to make large graphs manageable. To do this we completely redesigned how groups worked in the original library, providing the ability to group nodes on the fly and reuse those groups in multiple webs across a project. We hope this significantly cuts down on development time.

 

Over the past several months we have gotten to experiment with a few different approaches to VNP integration. The first approach we took was to build the node graph, save the data models needed specifically for the node graph (like node.x and node.y, etc.), then grab just the data we needed for the engine resource and send it in one big packet. Of course, this worked until we started building big graphs. Once the save packet got too big to pass from the frontend to the backend, we smartened up.

The animweb tool took a different approach: each time a node is connected to the graph, the system evaluates where it is and dynamically adds it to the resource. This resulted in live editing – being able to edit a resource’s node graph and see it change immediately. It also resulted in a lot of edge cases that are still giving me nightmares. For example, deleting nodes, or removing one connection from a node that’s still connected to another field, becomes really tedious.

Our overall goal for user-facing tools is to create simple interfaces that developers at any skill level will be able to leverage. VNP provides a familiar interface for designers, as similar platforms are used in engines like Unreal and Unity. While programming with nodes can be easier than scripting, this is not our final destination. We decided to tackle VNP first to provide us with a clear, functional foundation of what designers need. Since nodal programming lends itself to so many situations, we can provide a consistent-feeling experience across the game development workflow. Later, we can develop more specialized tools to streamline certain common practices and make things easier for less experienced devs.

I hope you enjoyed this look at our Visual Node Programming platform, and I’m excited to get our tool suite ready for feedback from our awesome community.

