Hi, everyone. This is John Jackson, one of the engine devs at Nerd Kingdom, currently responsible for the latest iteration of our material system. For those of you who are new to the concept, materials are a collection of textures, properties, and shaders that, when used in combination, describe how a particular object is to be rendered on screen. I’m going to discuss very briefly how our previous material system worked, what it was lacking, what we ultimately wanted out of our materials, and how that’s being implemented.
Previous Material System
As stated before, materials are a collection of various items working in tandem to describe how any given game object should be rendered to the screen. In the simplest terms possible, for each object or collection of objects (in the case of instancing), we grab all of the textures and material-specific properties to be used, “bind” the shader to be used with this particular material, and submit to the graphics card to render our game object. As our artists follow a PBR (Physically Based Rendering) workflow, we had a fairly clear understanding early on of what our materials would need regarding all of these items.
Originally, all of our materials consisted of the following base properties:
With just these colors, textures, and shared shaders and not much else (a few other properties are not listed here), it was fairly easy to satisfy most of the requirements given by the artists for the game objects they needed to represent – anything from rocks to glowing swords to glass windows. Of course, there were still many limitations to this system that became more and more apparent as time went on.
Issues / What was lacking
Firstly, regardless of the needs of a particular material, every single one was required to have these given properties. You have no need for metallic, roughness, or emissive maps and a constant value defined in a shader will suffice? Tough. Your material will still have these arrays for textures and colors and we will still have to allocate memory for it.
This might not seem like too big a deal at first, but as an example to demonstrate the concern this causes an engine programmer, let’s assume we have a simple material and all it needs are Albedo and Normal texture maps to achieve the desired effect. Using this current material system, we’ve just wasted space due to 4 other pre-allocated texture slots as well as 6 pre-allocated colors.
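To put a rough number on that waste, here is a quick back-of-the-envelope sketch. The sizes used (4-byte texture handles, 16-byte RGBA float colors) are illustrative assumptions, not our actual engine layout:

```python
# Rough cost of unused, pre-allocated material slots. The sizes below are
# illustrative assumptions (4-byte texture handles, 16-byte RGBA float
# colors), not the engine's actual layout.
TEXTURE_HANDLE_BYTES = 4
COLOR_BYTES = 4 * 4  # four 32-bit floats per RGBA color

def wasted_bytes(unused_texture_slots: int, unused_colors: int) -> int:
    """Bytes allocated for slots this material never reads."""
    return (unused_texture_slots * TEXTURE_HANDLE_BYTES
            + unused_colors * COLOR_BYTES)

# The albedo + normal material above leaves 4 texture slots and 6 colors unused.
print(wasted_bytes(4, 6))  # 112 bytes wasted per material instance
```

Multiply that by thousands of material instances and the waste adds up quickly.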
Secondly, as versatile as this set-up potentially is for most materials, it’s still limited due to being a basic surface material. What do I mean by that? If you remember, all of these materials are restricted to a few shared shaders that the engine team has written for the artists, and these shaders are ultimately responsible for telling your graphics card how to take all of the input textures and properties given to it and draw the object to the screen. What’s the problem with this? Well, what if none of the shaders have instructions for implementing a very custom, specific effect one of the artists requests? For example, what if I want to have a material that emits a blinking light based on a sine wave, or animates its vertices using a noise texture, or tiles a material’s albedo texture depending on how close the camera is to a given pixel, or…?
Okay, hopefully it’s obvious to you that we’re missing out on some cool stuff now. So what do we do about it?
Well, if this previous material setup is to continue to be used and these desired effects are to be implemented, we have two basic options to choose from:
Both of these options, while feasible and certain to work in the short term, have fairly significant problems in the end.
If the first option is chosen, our materials become even more wasteful than before. For instance, simply wanting to scroll a texture in a material requires the material to hold a variable for panning speed, a two-float vector. Even this small variable means that all of our materials are now inflated by another 8 bytes, which obviously doesn’t scale well at all when you consider just how many more variables you’ll start to add for other effects.
The second option is actually what we originally implemented once certain effects were being requested. We have specific implementations written to handle animated materials, flipbook materials, dissolve materials and even added parameters for wind controls. Each of these materials derives from our base graphics material class, they each hold specific properties required for the effect, and each corresponds to its own specific, handwritten shader that handles how it is to be rendered.
For a small number of materials for specific effects, this is a perfectly acceptable solution and has worked for us for a while. But as the number of requested effects continues to grow and experimentation becomes more and more desirable for different materials, this solution becomes very restrictive and time consuming. Again, just as the first option, this simply doesn’t scale to meet the desires of our team.
Inspiration / Design for new system
Sometimes finding guidance for how to implement a new system can be a challenging task, especially if you’ve never worked on anything similar to it before. Luckily, there are plenty of great engines out there that serve as a source of inspiration regarding design and features, so it didn’t take long to do some research regarding how other engines handle their own material implementations and compile a list of features that we wanted ours to have.
After discussions amongst the team and a preliminary planning/research stage, we had a fairly decent idea what we wanted our material system to be:
New System Overview
Essentially, the new system works like this:
All shader graphs are defined by a collection of nodes that together describe how a particular material and its shaders are to be created. Each of these nodes has its own functionality and is ultimately responsible for outputting some snippet of predefined shader code. As there was a strong desire to make this system as data-driven as possible, all of the nodes are defined in a node template file which is used by the engine to fill out specific, unique data for each evaluated node in the graph. This makes it very easy to tweak the behavior of already established nodes or create new templates very quickly.
Shown above is an example of what one of these node templates looks like. Most of the properties should be fairly self-explanatory, such as “NumberInputs”, but fields like “VariableDeclaration” and “VariableDefinition” require some explanation.
All nodes used in the shader graph, once evaluated, boil down to variables of the type defined by their output. For instance, a ConstantVec2Node will evaluate to some variable of type vec2, as is illustrated above under the “Output” section of the template.
For each node to be evaluated, we must declare its variable equivalent in the header of our generated shader as well as define this variable in the main body of our shader. This is what these two sections are responsible for doing. Obviously simply declaring and defining a vec2 is trivial, but using this system it is possible to define whole blocks of complicated shader code under the “VariableDeclaration” and “VariableDefinition” sections of the template.
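In case the template image doesn’t come through, here is a hypothetical reconstruction of what such a node template might look like. The field names come from the description above; the exact file format and syntax shown here are illustrative only:

```json
{
  "Name": "ConstantVec2Node",
  "NumberInputs": 2,
  "Inputs": [ "X", "Y" ],
  "Output": { "Type": "vec2" },
  "VariableDeclaration": "vec2 #NODE_NAME;",
  "VariableDefinition": "#NODE_NAME = vec2(#INPUT(X), #INPUT(Y));"
}
```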
Node Template Mark-up
What’s also notable are sections of the node template that contain custom-defined markup language that is replaced upon evaluation of the shader graph. For instance, all nodes have a unique name that is used for its shader variable name as well, so anytime the system sees the markup #NODE_NAME it knows to replace this text with the given name for that particular node. The #INPUT() markup looks at the “Inputs” field for the template and uses the specified input fields in the parentheses as replacements (in this instance, the “X” and “Y” fields respectively). There are many others, such as #G_WORLD_TIME for global world time of the engine as well as markup for vertex, camera, and pixel information.
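As a rough sketch of how this replacement step might work (the markup tags are the ones described above; the expansion logic itself is an illustrative guess, not the engine’s actual code):

```python
# A minimal sketch of expanding node-template markup when a node is
# evaluated. The tags (#NODE_NAME, #INPUT(), #G_WORLD_TIME) match the
# post; the expansion mechanics here are assumed for illustration.
import re

def expand_template(template: str, node_name: str, inputs: dict,
                    globals_: dict) -> str:
    # #INPUT(X) -> the evaluated variable or literal feeding input slot "X"
    out = re.sub(r"#INPUT\((\w+)\)", lambda m: inputs[m.group(1)], template)
    # #NODE_NAME -> this node's unique shader variable name
    out = out.replace("#NODE_NAME", node_name)
    # Global engine values such as #G_WORLD_TIME
    for tag, value in globals_.items():
        out = out.replace(tag, value)
    return out

src = "#NODE_NAME = vec2(#INPUT(X), #INPUT(Y)) * #G_WORLD_TIME;"
print(expand_template(src, "constVec2_0",
                      {"X": "0.5", "Y": "1.0"},
                      {"#G_WORLD_TIME": "u_worldTime"}))
# constVec2_0 = vec2(0.5, 1.0) * u_worldTime;
```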
All materials following this new system have now been reduced to two main properties:
The shader graph editor, even though in a very early stage, can be used to edit and create new shader graphs used for materials. The final output for the material is the “MainNode”, which can be seen in the picture below.
Here we have an example of a shader graph that creates the simple material that I was describing at the beginning of this post and requires only texture inputs for the albedo and normal channels. All other channels will use default values that will be constants within the shader and therefore not require any extra storage.
Examples of Graphs and Materials
My hope is that moving forward with this new material system will allow our artists and designers to explore and iterate on more and more creative options.
Below are some examples of some material effects that I threw together to show off what can be done with the new material system. What’s most important to me is that iteration on these was very easy and took only a small amount of time to create.
Have a great weekend!
Greetings! 5ubtlety here, systems designer at Nerd Kingdom and the programmer behind the player character controller. I’m here to discuss our approach to designing and implementing this critical component that tethers the player to the world. There are simply too many low-level implementation details to fully expound upon in this post. Instead, we’re going to get a high-level overview of the considerations to take into account and challenges that can be faced when developing a character controller.
So what is a character controller? The character controller is simply responsible for moving the player avatar through the game world. This entails translating player input into movement mechanics and processing any game-specific movement dynamics. A character controller will also need to be able to detect and react to its environment, check for whether the character is on the ground or airborne, manage character state, and integrate with other game systems, including physics, animation, and the camera.
There are two main types of character controllers: dynamic (entirely physics-driven) and kinematic (non-reactive collisions). Character controller implementation is highly game-specific, but most games opt for kinematic controllers. Very few games aim for complete physical realism. Most have stylized physical reactions tailor-made to feel great within the context of their gameplay environments.
The character controller is modeled as a capsule-shaped rigid body. The rounded capsule shape helps the character controller slide off of surfaces as it moves through the environment. It’s affected by gravity and constrained by terrain and other colliders by the physics engine. The orientation of the capsule is locked to an upright position, but may be manually adjusted in special cases, such as handling acceleration tilt, which pivots the character around its center of mass based on acceleration. Unless handled by the physics engine, movement will need to be projected onto the ground plane so the character can properly move up and down terrain.
Raycasts (and other geometric casts) are your main tool for sensing the environment immediately around the character controller so you may react properly. These casts give information such as direction and distance to the nearby colliders as well as their surface normals.
In open-world games, movement is typically the principal mechanic that consumes the player’s time. Therefore, movement needs to feel great as the player navigates through the world. Minimally, it needs to be functional, responsive, and intuitive. Depending on the game, you may have secondary goals such as allowing for greater player expression, or aiming for a high degree of player control and precision, such as in a fast-paced platformer. Often, trade-offs will need to be made, so there is no universal solution to these matters. For example, consider the following graphic, in which the left figure has a flat velocity, while the right figure applies acceleration. The left figure allows for a higher level of precision in movement, while the right is more realistic and may look and feel better in certain game contexts.
Image Credit: Marius Holstad, Source
Every game is going to have specific movement affordances, such as the ability to sprint, swim, double jump, wall-run, grab ledges, climb ladders, swing from ropes, etc. Every new verb added to this list can vastly expand the player’s mobility. Defining these is just the beginning though. There is much nuance in how they are used, how they feel, and how they interact with other game elements.
Even if all your character can do is simply move side to side and jump, you’re quickly going to run into “The Door Problem” of Game Design. Here are a few of the questions you might start asking:
This is just the beginning. As development progresses, new questions and issues will arise as environmental variables impose new constraints on the initial design. You should develop your controller gradually, making steady incremental improvements. In our case, we developed a playground scene where we can test our iterative approach in a consistent, controlled environment. Spoiler Alert: Most of your development time is going to be addressing engine and game-specific edge cases!
Following are some features we explored while prototyping our character controller. Note that not all of these elements will be relevant to every game.
In most 3rd-person perspective games, movement is relative to the camera rather than the avatar, which is more intuitive for the player to process. Some games intentionally break this convention to impart a feeling of vulnerability.
When moving, the pawn automatically pivots over time (with some smoothing) to align with the movement direction.
Image Credit: Marius Holstad, Source
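One common way to implement that pivot (a sketch, not our exact implementation) is to step the facing angle toward the movement direction by a capped amount each frame, always taking the shorter way around the circle:

```python
# Step the pawn's facing angle toward its movement direction by at most
# max_step radians per frame. Exponential smoothing (lerping by a
# per-frame factor) is a common alternative.
import math

def turn_toward(current: float, target: float, max_step: float) -> float:
    """Move an angle (radians) toward target, taking the short way around."""
    diff = math.atan2(math.sin(target - current), math.cos(target - current))
    if abs(diff) <= max_step:
        return target
    return current + math.copysign(max_step, diff)
```

Calling this once per frame with `max_step = turn_speed * dt` gives a frame-rate-independent pivot.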
Jump Input Buffering and Latency Forgiveness
This helps with jump timing in the case the player presses the jump button a few frames before actually reaching the ground. Additionally, this permits the player to execute a jump even if they pressed the button immediately after walking off a ledge and consequently entered the airborne state. This pattern can be applied to other kinds of character input as well.
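A sketch of how the buffering and forgiveness windows might be tracked; the timing constants are invented tuning values, and the second window is often called “coyote time”:

```python
# Jump input buffering plus ledge forgiveness ("coyote time").
# The window lengths are made-up tuning values, not shipped numbers.
JUMP_BUFFER = 0.1   # seconds a jump press stays valid before landing
COYOTE_TIME = 0.1   # seconds after leaving a ledge a jump still counts

class JumpHelper:
    def __init__(self):
        self.since_jump_pressed = float("inf")
        self.since_grounded = float("inf")

    def update(self, dt, grounded, jump_pressed):
        """Call once per frame; returns True when a jump should start."""
        self.since_jump_pressed = (0.0 if jump_pressed
                                   else self.since_jump_pressed + dt)
        self.since_grounded = 0.0 if grounded else self.since_grounded + dt
        if (self.since_jump_pressed <= JUMP_BUFFER
                and self.since_grounded <= COYOTE_TIME):
            # Consume the buffered press so one tap yields one jump.
            self.since_jump_pressed = float("inf")
            return True
        return False
```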
This allows the player to adjust their airborne velocity, but with reduced effect.
This assists in elevating the character up detected steps by smoothing movement with a curved spline over a period of time.
Ground Normal Smoothing
This will eliminate anomalies in ground normal calculation by performing multiple raycasts at various sample points at the base of the player’s capsule and averaging the results to calculate the final ground normal. The resultant vector is then smoothed between consecutive frames.
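The averaging and frame-to-frame smoothing might be sketched like this, with the per-sample normals supplied by whatever raycast query the physics engine offers:

```python
# Average several sampled ground normals, then blend the result with the
# previous frame's normal. A sketch of the technique, not engine code.
import math

def average_normals(normals):
    """Sum the sampled hit normals and re-normalize."""
    summed = [sum(c) for c in zip(*normals)]
    length = math.sqrt(sum(c * c for c in summed))
    return tuple(c / length for c in summed)

def smooth_normal(previous, current, blend=0.2):
    """Blend toward the new normal; re-normalize the lerped result."""
    mixed = tuple(p + (c - p) * blend for p, c in zip(previous, current))
    length = math.sqrt(sum(c * c for c in mixed))
    return tuple(c / length for c in mixed)
```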
Slope Fatigue System
Any slope above a certain threshold incline will induce “slope fatigue” in the player over a short period of time. The more fatigued the player is, the more slowly he will ascend the surface in the upward direction of the incline. After a certain amount of fatigue has accumulated, based on slope steepness, the player will begin sliding down the slope. Slope fatigue will recover once the player is on a more level surface.
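A minimal sketch of such a fatigue accumulator; all thresholds and rates here are invented tuning values, not the shipped numbers:

```python
# Slope fatigue accumulator: steep slopes build fatigue (faster the
# steeper they are), level ground recovers it, and enough fatigue
# triggers sliding. All constants are illustrative tuning values.
FATIGUE_SLOPE = 40.0   # degrees above which fatigue builds
SLIDE_FATIGUE = 1.0    # accumulated fatigue at which sliding starts

def update_fatigue(fatigue, slope_deg, dt,
                   build_rate=0.5, recover_rate=1.0):
    if slope_deg > FATIGUE_SLOPE:
        # Steeper slopes tire the player out faster.
        fatigue += build_rate * dt * (slope_deg / FATIGUE_SLOPE)
    else:
        fatigue = max(0.0, fatigue - recover_rate * dt)
    sliding = fatigue >= SLIDE_FATIGUE
    return fatigue, sliding
```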
Automatic wall avoidance allows for smoother steering behavior when walking around walls and corners. The character controller raycasts ahead in the direction of movement to detect walls and other obstructions that would block movement. If detected, and the angle of incidence is shallow, the player is steered away from the surface. On the left side of the following image, the player sticks to the wall as he brushes against it. On the right side of the image, the player gently slides off the surface as his steering is adjusted.
Credit: Marius Holstad, Source
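The shallow-angle test and the slide along the wall might be sketched like this (2D top-down view; `max_glance_deg` is an invented tuning value):

```python
# Wall avoidance sketch: if a forward raycast hits a wall at a glancing
# angle, redirect movement to slide along the wall instead of into it.
import math

def steer_around_wall(move_dir, wall_normal, max_glance_deg=30.0):
    """move_dir and wall_normal are unit 2D vectors (top-down view)."""
    into = -(move_dir[0] * wall_normal[0] + move_dir[1] * wall_normal[1])
    if into <= 0.0:
        return move_dir  # already moving away from the wall
    angle_to_surface = math.degrees(math.asin(min(1.0, into)))
    if angle_to_surface > max_glance_deg:
        return move_dir  # hitting too head-on; let collision handle it
    # Glancing hit: project movement onto the wall so the player slides.
    d = move_dir[0] * wall_normal[0] + move_dir[1] * wall_normal[1]
    slid = (move_dir[0] - d * wall_normal[0],
            move_dir[1] - d * wall_normal[1])
    length = math.hypot(*slid) or 1.0
    return (slid[0] / length, slid[1] / length)
```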
Analogue Input Processing
Analogue movement input from a thumbstick is a very different approach to controlling direction and speed than the keyboard’s 8-way digital input. In order to sanitize this raw axis data and map it to movement inputs the controller can read, we filter it through dead zones and interpolate the results.
Image Credit: Ryan Juckett, Source
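A common way to sanitize stick input is a radial dead zone with rescaling, sketched below; the zone sizes are illustrative:

```python
# Radial dead-zone filtering: discard input inside the inner zone and
# rescale the rest so speed ramps smoothly from zero at the zone's edge.
# Zone sizes are illustrative tuning values.
import math

def filter_stick(x, y, inner=0.2, outer=0.95):
    magnitude = math.hypot(x, y)
    if magnitude < inner:
        return (0.0, 0.0)
    # Rescale so inner -> 0.0 and outer -> 1.0, clamped at full deflection.
    scaled = min(1.0, (magnitude - inner) / (outer - inner))
    return (x / magnitude * scaled, y / magnitude * scaled)
```

Filtering on the vector’s magnitude (rather than per axis) avoids the “snap to diagonal” artifacts that axial dead zones introduce.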
Hopefully this post provided some insight into the design and implementation of character controllers and some of the considerations to take into account when developing one. The bottom line is that there is no one right solution that works in all situations. Every game’s needs are very different and developing a solid character controller is going to largely be an iterative process of discovery and polish. The final 10% is what separates a clumsy, buggy controller from a responsive one that works well and immerses the player. This is one of the game’s most critical components that the player continually interfaces with during gameplay. It can easily make or break an entire game, so take the proper time and make it feel great!
Hello everyone! The name is Duane and I am the Animation Director here at Nerd Kingdom.
During my lengthy career in game development, I have certainly been here before. Well, not really here, as here is a bit different. However, in some aspects, it is almost entirely the same. The outstanding difference is the Eternus Game Engine currently under development at Nerd Kingdom. Built from the ground up, Eternus holds the promise of a groundbreaking game development engine and tools upon which its flagship product will be developed. So, it’s deja vu all over again…or is it?
The first time I heard the term “virtual reality” was when I began my career in 1994 as a Lead Animator at Virtual World Entertainment (VWE). The Chicago game studio made two products, a first-person shooter (FPS) “walking tank” game called Battletech and a “hovercraft racing” title called Red Planet.
Each product was built from the ground up on a proprietary game engine, completely unique to the requirements of gameplay for a multiplayer FPS and space-based racing title respectively. Each engine included its own set of development tools and export processes, designed and built with essential integration toward the support of an efficient iterative creative process. Nothing was borrowed or modded, and middleware was non-existent. All of it was brand new, completely from scratch. (Ok, truth be told, some code between Battletech and Red Planet was recycled. But, I’m trying to make a point here.)
Fresh out of college, I was the studio’s first and only Lead Animator and it fell to me to collaborate with a newly hired Junior Programmer to design, test, and implement an integrated LOD Texturing tool. The sky was the limit and… “What the hell is an LOD anyway?”
So, there I was, tasked with one of the most important art tools for Battletech’s and Red Planet’s CG art development. Not because I was particularly suited for the role, but because I was “the new guy” and no one else wanted the job.
If you’ve ever wanted to make games for a living but knew nothing about the process, that was exactly my situation when I began my career. Lucky for me, this first challenge was a remarkable Art Tools design experience and quite an education.
Trial by fire, I quickly learned how to make LODs by hand – a method of reducing an object or character’s total number of polygons while maintaining its shape and silhouette. I made four Levels of Detail (LODs) for each of the 20+ Mechs (aka “walking tanks”) and 12+ VTOL (“vertical take-off and landing”) racing craft. That’s 128 LODs plus the original 32+ models.
Then, I learned about creating UV Maps followed by applying textures via Planar Projection mapping for the many texture groups within a single model. At the time, Planar Projection mapping was all that this tool would provide.
The number of texture groups per model was enormous. I had to rotate and place each Planar Projection, an intermediate object represented by a 3D Plane, over every single polygon group or group of facets (aka “face group”). It was meticulous work. But then, that’s why we were developing the LOD Texturing tool in the first place: to expedite this laborious process. Ultimately, our efforts allowed Artists to texture any 3D model and all of its LODs based solely on the original model’s UV textures. It was a profound success and increased my passion for making games and inventing game development technologies in general.
By the way, is it really work if you love what you do for a living? For me personally, animating for games is truly a dream come true. I remember when a Tippett Studios’ VP at Siggraph once said, “These guys will work for nothing and do it all night long. They love it! They’re gamers and artists.” I thought, “Holy sh*t, she knows our secret!” But, it’s true. Game developers will work long after their salaries have exhausted a full day’s work. We are habitual over-achievers with a relentless work ethic. Like some kind of digital junkie, looking forward to that next first moment of realized innovation in VR immersion. It’s addictive! That’s why most of us look the way we do…trying to score that next (top-selling) digital hit. Thank God mobile game development offers the same euphoric effects at smaller doses. And, with the recent debates over VR/AR/MR – virtual reality, augmented reality, and mixed reality respectively – the digital chug-wagon continues.
I remember when I was in college, learning Alias|Wavefront software on a Thompson Digital Image machine back in the early 90’s. No one knew what they were doing. The teachers that were teaching the 3D Art and Animation curriculum at Columbia College Chicago had no clue what 3D was or even how to teach it. Every student dove into the manuals and surpassed their instructors before the end of the second week, too impatient to watch some “old dude” struggle to understand the poorly written tutorials.
Anyway, I digress, back to the topic at hand.
Other things that haven’t changed in game development for decades? How ’bout the division of labor across three main groups – Programmers, Designers, and Artists. At VWE, I learned about five disparate teams the studio employed in their game development process – Owner/Managers, Programmers, Designers, Artists/Animators, and Testers. And that right there was the pecking order by status and salary. How little has changed in the industry as a whole.
Each of these teams worked in silos as focused but independent specialists prior to pre-production and were brought together as one homogenized unit as the pre-production “vertical slice” neared completion. No, “vertical slice” has nothing to do with bread or ninja skills – Google it.
Over the years, the terminology for “development meetings with prioritized schedules or milestones” mutated into words like Sprint, Scrum, Agile, and Agile/Scrum. Call it what you like, it has been the same process since the dawn of game development. In its most basic form, it goes something like this – create a series of meetings based on a prioritized schedule of milestones around the topics of concepts/game ideas, dev, design, art, scope, and schedules. Then, build and test the plethora of advancing software. This is usually followed by cycles of wash/rinse/repeat. Critical to the successful development of this cycle is smart, honest decisions by talented and experienced key team members…and yadda, yadda, yadda – it’s boring stuff, but absolutely necessary.
Another enduring oddity in game development is something called “studio culture”. Here’s a checklist of things that, in my experience, have existed in every studio I’ve ever worked for:
⦁ Very smart, technical/analytical problem-solving academics who love games and are “kids at heart”
⦁ A fascination with technology trends, games, movies & music, art & animation, and science fiction/fantasy.
⦁ Communal eating spaces/kitchens with free drinks – a game developer’s divine right.
⦁ Tattoos, piercings, long hair. Occasional bad hygiene? Perhaps.
⦁ Action figures
⦁ Nerf guns
⦁ Darkened work spaces that are quiet, but at times rowdy on a good day (aka a productive day).
⦁ Flexible 8 hour work schedules
⦁ Casual clothes – bare feet (aka sandals or flip-flops), bare legs (aka shorts), baseball caps, and enigmatic t-shirts.
⦁ The mention of manga/anime, Weird Al (Yankovic) for some reason, and anything sci-fi…most likely a Star Wars reference.
And then, there’s the “proximity task”. Happens all the time in game development. It can usually fall to the person who is simply absent at the wrong time during a formal team meeting. But when it’s an informal discussion, simply sitting at your desk near one can get you saddled with a task that no one wants. Like today, for example, when I was asked to write this blog. Happy reading!
By the way, if you’ve made it this far into the article, then bless you for your unwarranted attention. You are a saint! Take heed, I’m almost done.
One last thing that is ever present in this industry is the abundance of proprietary processes developed and never shared by the multitude of game developers the world over. With most new games, and especially with innovative immersive AR/VR experiences on new hardware, a new engine, SDK, and game product are under simultaneous development. In my experience, the lineage of this simultaneous development started on PC, followed by the original Xbox console, then Xbox 360, Kinect, HoloLens, and Magic Leap.
And now, finally, “Back to Eternus”. Sounds like a great sci-fi epic, doesn’t it?
Here at Nerd Kingdom, I ran into an old friend of mine not mentioned above, good ol’ Mister Frame Rate. “How have you been, Old Chum? It’s been awhile. Wife and kids? Goooood.” Ever the divisive arbiter of quality graphics versus render speed, Frame Rate could often be an elusive collaborator. But last week, he sauntered up to me with a drink, “Here, knock this back. Oh, I forgot. You don’t drink. (Chug! Slurp.) Let’s talk, shall we?”
So, after closing time, there we were, old Frame Rate and I, talkin’ ’bout the Good Ol’ Days and the mischief he put me through as a Director of Animation under fire for the largest memory footprint that character animation had ever occupied in VWE’s history. Now, I can’t say that I remember those days with as rosy a resplendent recall, but I do remember the relief I felt when we were able to solve the issue with a technical art solution, an animation export tool, that we could all agree upon.
Allow me to blather on in detail about this very familiar topic. In the early days of game development, when you would export a character animation for a game, whether authored in Maya, 3D Studio Max, or some other CG software of choice, the animation asset was exported with a linear keyframe for every frame of motion exhibited by each joint or node in a character’s skeletal hierarchy, regardless of whether its value changed, for the duration of the motion.
Well, as we research a popular export format, we are seeing it create a similar result – a keyframe on every frame. And so, it’s not surprising that discussions about frame rates and reducing file sizes have stirred this air of frame rate nostalgia. Suffice it to say, there is a lot of keyframe data that can be filtered and omitted from animation assets, reducing the size of every animation file and thereby its memory footprint and load times, which in turn increases frame rate.
The last time I helped solve this puzzle, we decided upon a proprietary export tool that would allow the Technical Animator or Animator to provide an overall attribute value, as well as an attribute value per joint (per axis) to influence the total number of keyframes that would be generated along a curve. These attribute values would then generate a range of results, interpreting the motion (based on angle deviation) as “a keyframe every frame” to “a reduced or filtered key set based on the degree of change (by angle deviation) along a curve” to “omitting keyframes completely”.
Said differently, the algorithm inspected the curve and re-created it as a slimmer version of itself (in bits). Where there were more changes in value, more keyframes were exported or maintained along that portion of the curve. Where there were fewer changes in value, the placement of keyframes was farther apart. Whatever solution is devised for Eternus, we are certain to surpass the current state of our technology as of this writing. And, I can’t wait to revisit that feeling of overwhelming accomplishment when the motion in-game is identical at less than half its original file size.
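The core of that reduction idea can be sketched as follows; a real exporter (with the per-joint, per-axis attributes described above) would be considerably more involved:

```python
# Keyframe filtering in the spirit described above: keep a key only where
# the curve's direction changes by more than a tolerance angle. A sketch
# of the core idea, not the actual export tool.
import math

def reduce_keys(values, angle_tolerance_deg=1.0):
    """values: one sample per frame. Returns the (frame, value) keys kept."""
    if len(values) <= 2:
        return list(enumerate(values))
    keys = [(0, values[0])]
    for i in range(1, len(values) - 1):
        # Angle of the curve segment entering and leaving this frame.
        incoming = math.atan2(values[i] - values[i - 1], 1.0)
        outgoing = math.atan2(values[i + 1] - values[i], 1.0)
        if abs(math.degrees(outgoing - incoming)) > angle_tolerance_deg:
            keys.append((i, values[i]))
    keys.append((len(values) - 1, values[-1]))
    return keys

# A straight ramp collapses to its endpoints; a bend keeps the corner key.
print(reduce_keys([0.0, 1.0, 2.0, 3.0, 3.0, 3.0]))
# [(0, 0.0), (3, 3.0), (5, 3.0)]
```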
Oh, the nostalgia for innovative thinking. All of it, in pursuit of making great gaming experiences with Eternus that will entertain and occupy the masses. I guess you can go home again.
All that’s old is new again – for the first time. May you enjoy playing our product in its many pre-launch versions. And may the God of Shipped Titles smile upon us as we run head-long into the many game development cycles of deja vu and repeated timelines. Wash. Rinse. Repeat. Game.
Have a wonderful weekend!
Jake (theFlying3.14) here, Lead of Tool Development here at Nerd Kingdom. Several powerful systems have begun to come online in the Eternus engine recently. To support these systems we’ve designed several tool prototypes to aid designers in creating content. Today I’d like to share one of the more important systems, one that is being reused in multiple instances to provide a comprehensive functional experience going forward: the Visual Node Programming platform, or VNP.
VNP is a node programming platform that allows users to script functionality across different aspects of the game. The system is already being used in a few early tool prototypes: the biome tool, the animation web, and an AI behavior scripter. Future tools such as the material editor, shader creator, and quest editor are planned for VNP implementations.
We developed it from the MIT-licensed ThreeNodes.js – a WebGL shader tool – heavily reworking the basic data structures and assumptions built into the library. Although there is still a lot we would like to do with it, what we’ve ended up with gives us great scalability.
The Visual Node Programming platform exists as an abstract application that we employ within each tool implementation, customizing it to fit the context. This means when you open the biome tool, you will be greeted with an experience similar to the animation web. However, in reality, each tool might need to operate slightly differently. For example, the biome system reads the node graph from right to left, whereas the animation system reads “state strings” from left to right. To accommodate this, each implementation of VNP has its own override of several fundamental objects: nodes, connections, and workspaces. This allows great flexibility when developing and updating tools built with VNP.
“So great, another node programming tool….”
Over the past several months we have gotten to experiment with a few different approaches to VNP integration. The first approach we took was to build the node graph, save the data models needed specifically for the node graph (like node.x and node.y, etc.), then grab just the data we needed for the engine resource and send it in one big packet. Of course, this worked until we started building big graphs. Once the save packet got too big to pass from the frontend to the backend, we smartened up.
The animweb tool took a different approach: each time a node is connected to the graph, the system evaluates where it is and dynamically adds it to the resource. This resulted in live coding – being able to edit a resource’s node graph and see it change immediately. It also resulted in a lot of edge cases that are still giving me nightmares. For example, deleting nodes or removing one connection from a node that’s still connected to another field became really tedious.
Our overall goal for user-facing tools is to create simple interfaces that developers at any skill level will be able to leverage. VNP provides a familiar interface for designers, as similar platforms are used in engines like Unreal and Unity. While programming with nodes can be easier than scripting, this is not our final destination. We decided to tackle VNP first to provide us with a clear functional foundation of what designers need. Since nodal programming lends itself to so many situations, we can provide a consistent-feeling experience across the game development workflow. Then later we can develop more specialized tools to streamline certain common practices and make it easier for less experienced devs.
I hope you enjoyed this look at our Visual Node Programming platform, and I’m excited to get our tool suite ready for feedback from our awesome community.
Hey all! This is Northman from Nerd Kingdom here to share some AI bytes with you. Specifically I’d like to talk about Pathfinding and how it relates to TUG and our characters.
Pathfinding is the act of finding the best path between two points in the world. The key word there is “best”. The definition of best depends on the type of game you are making and the type of character you are trying to find a path for. I think a small thought experiment helps to clarify the set of problems pathfinding tries to solve:
Imagine a mountain goat and a human facing a mountain that extends as far to their left and right as their eyes can see. Directly in front of them is a door. On the door is a sign that reads: “Tunnel to the Other Side”. The human does not have any climbing gear, and the mountain is far too steep for the human to scale without proper equipment. If both the human and the goat want to get to the other side of the mountain, what do they do? The goat does not have hands to open doors nor the ability to read. However, the goat is a sure-footed climber and does not have any problem scaling the mountain, so it goes on its goaty way over the top. Conversely, the human does have hands and can read, so they take the tunnel. The paths the goat and the human found are both the best paths they can muster by their own definitions of best, even though they are very different.
By Darklich14 (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons
*Insert funny remark about goats here.*
“Ropes? We don’t need no stinkin’ ropes!”
So now that we have defined pathfinding and talked about what “best” means, we can look at what tools we have for finding paths. Most pathfinding techniques break up the world into spatial subsections (nodes) and store information about how those nodes are connected (edges). In computer science we call a set of nodes and edges a “graph”. Graphs are cool because they have been studied by mathematicians since the 18th century (check out the Seven Bridges of Königsberg). What this means for us is that there are well-known techniques, also known as algorithms, for dealing with graphs and finding best paths on them (see Pathfinding on Wikipedia). One of the most common algorithms used in games for pathfinding is A*. I won’t get into the details of A* in this post because it gets technical very quickly, but the image below provides a good visual representation of a typical A* search.
By Subh83 (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons
An illustration of an A* search. The initial red node indicates the starting position and the green node indicates the goal position. The gray polygon represents an obstacle. The goal is to find the shortest-distance path from the starting node to the goal node while avoiding the obstacle. The path found is highlighted in green at the end of the animation.
We are currently exploring algorithms and graph representations for our world in TUG but so far we have implemented a navigational grid. In graph terms the grid is made up of nodes that all represent the same amount of space (one square meter) and each node has edges to its immediate neighboring nodes (grid cells): top left, top, top right, right, bottom right, bottom, bottom left, and left. These cells can be blocked by obstacles (shown in the image below in red) or open (shown in the image below in blue). This allows us to run A* searches to find best paths for our characters that avoid obstacles.
Blue Cells: Areas a character can navigate in.
Red Cells: Areas blocked by an object.
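To make the grid concrete, here is a rough sketch (not our engine code, which lives in C++) of A* running on exactly this kind of 8-neighbor navigation grid: each cell is a node, blocked cells are impassable, and each open cell has edges to its eight neighbors, with diagonal steps costing √2.

```python
# Sketch of A* on an 8-neighbor navigation grid. Cells are (x, y) tuples,
# `blocked` is a set of impassable cells, and the heuristic is the
# straight-line distance to the goal. Illustrative only, not engine code.
import heapq
import math

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def astar(blocked, width, height, start, goal):
    """Return a list of cells from start to goal, or None if unreachable."""
    def h(cell):  # straight-line (Euclidean) distance heuristic
        return math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    # Priority queue of (estimated total cost, cost so far, cell, parent).
    open_set = [(h(start), 0.0, start, None)]
    came_from = {}            # cell -> parent on the best path found
    best_cost = {start: 0.0}  # cheapest known cost to reach each cell
    while open_set:
        _, cost, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:
            path = []  # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dx, dy in NEIGHBORS:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue  # off the grid
            if nxt in blocked:
                continue  # red cell: obstacle
            step = math.hypot(dx, dy)  # 1 straight, sqrt(2) diagonal
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    open_set, (new_cost + h(nxt), new_cost, nxt, cell))
    return None  # goal is walled off
```

Because every node remembers its parent, reconstructing the final path is just walking backwards from the goal once it is popped off the queue.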
I hope you have enjoyed this introduction to pathfinding. Pathfinding is a large topic with many different techniques available depending on the pathfinding problem at hand. If you have any questions please feel free to email me at “northman at nerdkingdom dot com”. We are working hard to refine our pathfinding approaches for our characters and look forward to sharing more with you soon!
Have a great weekend!
Happy Friday! Cambo here to kick the weekend off by bringing back the dev tech blogs. We plan to keep them coming at least once a month as we make more progress on development. In regards to the Q&A, most of them are answered and I’ll be sure to post it in our next update blog. For now, here’s our infrastructure dude, Maylyon!
Hey everybody! Maylyon here with a new non-game related, non-engine related, non-tools related tech blog! Hint: this is your tune-out point if those are the topics you are looking for. 3 … 2 … 1 … Still here? Excellent!
After literally years of silence about Devotus, I wanted to follow up with a snapshot of where Devotus is today. If your memory is a little rusty, Devotus will be our mod content distribution pipeline, helping mod authors create and manage their home-brewed content and deliver it to end-users. For context on this blog entry, you should definitely read those first two blogs. Without further ado, the “what has been happening?” (aka: “you guys still work on that thing?”).
In case you didn’t know, there were mods on Devotus’s developmental servers from a TUG v1 ModJam in early 2016. Don’t go rushing to find them now; they’re gone. They were sacrificed to the binary gods in order to make way for…
Set aside the objection that the term “serverless” is a lie because there are always servers somewhere, and play along for a bit. The old Devotus architecture was built on AWS EBS-backed EC2 instances running a mix of Node.js, C++, and MongoDB. It looked a little bit like this:
The primary detriments to this approach were:
1. Paying for these servers (even extremely small servers) when nobody was using them,
2. Scalability at each layer of the stack would incur even more financial cost and contribute to…
3. Complexity of the implementation.
Moving to this setup allows us to:
1. Greatly reduce the costs associated with Devotus (especially when nobody is using it),
2. Offload most of the scalability problem to AWS (less work = more naps),
3. Synergize our implementation with the other microservices we have been developing on the Infrastructure team.
Devotus now allows mod authors to create git repositories on GitLab in addition to GitHub. This has actually been in place for a while, but it wasn’t there in the last blog I wrote. By supporting GitLab and their awesome pricing model, Devotus allows a mod author to choose whether they want their mod’s git repository to be public or private at mod creation time. This choice does not apply to mods created on GitHub because their pricing model is less awesome (but still pretty awesome) and I’m cheap (see previous section for proof).
In the “bad old” days (read as “a month ago”), mod download count was just an unsigned integer. Download request comes in, number gets incremented by one. Commence spamming download of your own mod to falsely inflate its popularity! Everybody wins! … Except for the people who want to use the system.
Now, in the “brave new world” days, mod downloads are tracked per-user, per-version. This allows mod authors to track their mod’s popularity throughout its release history and allows end-users to trust that a mod’s popularity is probably because of an amazing mod author rather than a mod author’s amazing spam-bot.
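The idea behind per-user, per-version tracking can be sketched with a simple data structure. This is purely illustrative, not the actual Devotus implementation (which presumably lives behind the AWS services described above); the `DownloadTracker` class and its methods are hypothetical.

```python
# Illustrative sketch (not the real Devotus code) of per-user, per-version
# download tracking. Repeat downloads by the same user of the same version
# no longer inflate the count, which defeats the spam-bot trick.
from collections import defaultdict

class DownloadTracker:
    def __init__(self):
        # version string -> set of user ids that downloaded that version
        self._downloads = defaultdict(set)

    def record(self, user_id, version):
        # A set makes repeat downloads by the same user idempotent.
        self._downloads[version].add(user_id)

    def count(self, version):
        """Unique downloaders of a single version."""
        return len(self._downloads[version])

    def total_unique_users(self):
        """Unique users across the mod's whole release history."""
        if not self._downloads:
            return 0
        return len(set().union(*self._downloads.values()))
```

Counting distinct users per version is what lets an author see popularity across the release history while the spam-bot’s thousand downloads collapse into one.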
That’s all I have for this installment. I (or somebody from my team) will be back with future Infrastructure updates as we get new and/or exciting things to share. In the meantime, be sure to jot down all those cool mod ideas you have kicking around in your brain into a little leather-bound notebook so that WHEN TUG v2 is launched and WHEN Devotus is client-facing, you will be ready!
Have a great weekend!