Posted by on July 28th, 2017

Greetings! 5ubtlety here, systems designer at Nerd Kingdom and the programmer behind the player character controller. I’m here to discuss our approach to designing and implementing this critical component that tethers the player to the world. There are simply too many low-level implementation details to fully expound upon in this post. Instead, we’re going to get a high-level overview of the considerations to take into account and challenges that can be faced when developing a character controller.


So what is a character controller? The character controller is simply responsible for moving the player avatar through the game world. This entails translating player input into movement mechanics and processing any game-specific movement dynamics. A character controller will also need to detect and react to its environment, check whether the character is on the ground or airborne, manage character state, and integrate with other game systems, including physics, animation, and the camera.

There are two main types of character controllers: dynamic (entirely physics-driven) and kinematic (non-reactive collisions). Character controller implementation is highly game-specific, but most games opt for kinematic controllers. Very few games aim for complete physical realism. Most have stylized physical reactions tailor-made to feel great within the context of their gameplay environments.

The character controller is modeled as a capsule-shaped rigid body. The rounded capsule shape helps the character controller slide off of surfaces as it moves through the environment. The physics engine applies gravity to the capsule and constrains it against terrain and other colliders. The orientation of the capsule is locked to an upright position, but may be manually adjusted in special cases, such as handling acceleration tilt, which pivots the character around its center of mass based on acceleration. Unless handled by the physics engine, movement will need to be projected onto the ground plane so the character can properly move up and down terrain.
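As a rough illustration of that last point, here is a minimal sketch of projecting a desired move vector onto the ground plane. The vector helpers and function names are illustrative, not Eternus API calls:

```typescript
// A minimal sketch, not engine code: small vector helpers plus ground-plane projection.
type Vec3 = { x: number; y: number; z: number };

const dot = (a: Vec3, b: Vec3) => a.x * b.x + a.y * b.y + a.z * b.z;
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const length = (a: Vec3) => Math.hypot(a.x, a.y, a.z);
const normalize = (a: Vec3): Vec3 => {
  const len = length(a);
  return len > 1e-6 ? scale(a, 1 / len) : a;
};

// Remove the component of `move` along the ground normal, then restore the original
// speed so the character neither slows down nor speeds up when following an incline.
function projectOnGroundPlane(move: Vec3, groundNormal: Vec3): Vec3 {
  const flattened = sub(move, scale(groundNormal, dot(move, groundNormal)));
  return scale(normalize(flattened), length(move));
}
```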

Raycasts (and other geometric casts) are your main tool for sensing the environment immediately around the character controller so you may react properly. These casts give information such as direction and distance to the nearby colliders as well as their surface normals.


In open-world games, movement is typically the principal mechanic that consumes the player’s time. Therefore, movement needs to feel great as the player navigates through the world. Minimally, it needs to be functional, responsive, and intuitive. Depending on the game, you may have secondary goals such as allowing for greater player expression, or aiming for a high degree of player control and precision, as in a fast-paced platformer. Often, trade-offs will need to be made, so there is no universal solution to these matters. For example, consider the following graphic in which the left figure has a flat velocity, while the right figure applies acceleration. The left figure allows for a higher level of precision in movement, while the right is more realistic and may look and feel better in certain game contexts.

Image Credit: Marius Holstad, Source

Every game is going to have specific movement affordances, such as the ability to sprint, swim, double jump, wall-run, grab ledges, climb ladders, swing from ropes, etc. Every new verb added to this list can vastly expand the player’s mobility. Defining these is just the beginning though. There is much nuance in how they are used, how they feel, and how they interact with other game elements.

Even if all your character can do is simply move side to side and jump, you’re quickly going to run into “The Door Problem” of Game Design. Here are a few of the questions you might start asking:

  • How fast should the player move? What are the maximum and minimum movement speeds? Can the player choose to move at intermediate values?
  • Can the player stop and pivot on a dime?
  • Should the player accelerate and decelerate over time? How quickly?
  • Will your game have different kinds of terrain that affect player movement, such as quicksand or ice?
  • How do the character controller and animation system interact with one another?
  • What size and shape should the character’s collider be?
  • Can the character push or be pushed by other objects when they press against one another?
  • What kind of environmental geometry does your world feature? Sharp and flat edges, or organic, bumpy terrain?
  • Is the player able to walk up slopes? What are the minimum and maximum inclines?
  • How about steps? What are the minimum and maximum heights?
  • Is movement speed slower when walking uphill?
  • Is controller input supported? How will input be handled differently between a keyboard and analogue stick?
  • How does the camera follow the player?
  • How high can the player jump? Are running jumps higher?
  • Is momentum conserved when jumping?
  • What should the force of gravity be? Is this the only factor that determines the player’s fall speed?
  • Is there air friction (drag)?
  • Should the character have a terminal velocity?
  • Does the character have a momentary hang-time at the jump’s apex, or does it immediately begin decelerating downwards?
  • Can the player jump higher by holding the jump button longer?
  • Does the player have any amount of air control, or is mid-air input simply ignored?

This is just the beginning. As development progresses, new questions and issues will arise as environmental variables impose new constraints on the initial design. You should develop your controller gradually, making steady incremental improvements. In our case, we developed a playground scene where we can test our iterative approach in a consistent, controlled environment. Spoiler alert: most of your development time is going to be spent addressing engine and game-specific edge cases!


The following are some features we explored while prototyping our character controller. Note that not all of these elements will be relevant to every game.

Camera-Relative Movement

In most 3rd-person perspective games, movement is relative to the camera rather than the avatar, which is more intuitive for the player to process. Some games intentionally break this convention to impart a feeling of vulnerability.
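For a sense of how this is typically computed, here is a small sketch (names and axis conventions are illustrative, not our engine’s API) that rotates 2D stick input by the camera’s yaw so “up” on the stick always means “away from the camera”:

```typescript
// A minimal sketch, not engine code: interpret stick input relative to the camera's yaw.
function cameraRelativeMove(inputX: number, inputY: number, cameraYaw: number) {
  const sin = Math.sin(cameraYaw);
  const cos = Math.cos(cameraYaw);
  return {
    x: inputX * cos + inputY * sin, // stick left/right, rotated into world space
    z: inputY * cos - inputX * sin, // stick up/down becomes "away from the camera"
  };
}
```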

Motion Alignment

When moving, the pawn automatically pivots over time (with some smoothing) to align with the movement direction.

Image Credit: Marius Holstad, Source
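A minimal sketch of the idea (assumed helper names, not engine code): cap how far the character can turn toward its movement heading each frame, which is what produces the smoothed pivot.

```typescript
// Turn toward the movement heading at a capped rate each frame.
function alignToMovement(currentYaw: number, targetYaw: number, turnSpeed: number, dt: number): number {
  // Wrap the angular difference into [-PI, PI] so we always turn the short way around.
  const delta = Math.atan2(Math.sin(targetYaw - currentYaw), Math.cos(targetYaw - currentYaw));
  // Limit how far we can rotate this frame; the cap is what gives the smoothing effect.
  const maxStep = turnSpeed * dt;
  return currentYaw + Math.max(-maxStep, Math.min(maxStep, delta));
}
```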

Jump Input Buffering and Latency Forgiveness

This helps with jump timing in the case the player presses the jump button a few frames before actually reaching the ground. Additionally, this permits the player to execute a jump even if they pressed the button immediately after walking off a ledge and consequently entered the airborne state. This pattern can be applied to other kinds of character input as well.
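Here is one common way to structure both the buffer and the post-ledge grace window, as a small sketch. The timing constants and field names are illustrative, not our actual values:

```typescript
// A minimal sketch: buffer early jump presses and allow a short grace window after
// leaving the ground, so slightly-early or slightly-late presses still register.
const JUMP_BUFFER_TIME = 0.15; // seconds a press stays valid before landing
const COYOTE_TIME = 0.1;       // seconds a jump is still allowed after leaving the ground

class JumpHelper {
  private timeSincePressed = Infinity;
  private timeSinceGrounded = Infinity;

  update(dt: number, grounded: boolean, jumpPressedThisFrame: boolean): boolean {
    this.timeSincePressed = jumpPressedThisFrame ? 0 : this.timeSincePressed + dt;
    this.timeSinceGrounded = grounded ? 0 : this.timeSinceGrounded + dt;

    const buffered = this.timeSincePressed <= JUMP_BUFFER_TIME;
    const canJump = this.timeSinceGrounded <= COYOTE_TIME;
    if (buffered && canJump) {
      // Consume the press so a single tap can't trigger two jumps.
      this.timeSincePressed = Infinity;
      this.timeSinceGrounded = Infinity;
      return true;
    }
    return false;
  }
}
```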

Air Control

This allows the player to adjust their airborne velocity, but with reduced effect.
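A one-axis sketch of the idea (constants are illustrative; a real controller would apply this per horizontal axis): airborne steering uses only a fraction of the grounded acceleration.

```typescript
// Steer airborne velocity toward the input direction with reduced acceleration.
const GROUND_ACCEL = 40; // m/s^2 available while grounded (illustrative)
const AIR_CONTROL = 0.3; // fraction of that acceleration available in the air

function applyAirControl(velocityX: number, wishDirX: number, wishSpeed: number, dt: number): number {
  const accel = GROUND_ACCEL * AIR_CONTROL;
  const target = wishDirX * wishSpeed;
  const maxChange = accel * dt;
  // Move toward the desired velocity, but never by more than the air acceleration allows.
  const delta = Math.max(-maxChange, Math.min(maxChange, target - velocityX));
  return velocityX + delta;
}
```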

Animation

  • Animation Blending
  • Upper/Lower Body Animation Layers
  • Root Motion Control
    • Adjust capsule position and/or orientation as a result of playing certain animations.
  • Inverse Kinematic Limb Placement
    • Place feet when walking/running. Particularly useful for steps and slopes.
    • Place hands when climbing or interacting with game objects.
  • Intelligent Ragdolls

Spline-Stepping

This assists in elevating the character up detected steps by smoothing its movement along a curved spline over a short period of time.

Here is a prototype of our character controller walking up some stairs in our playground scene.

Ground Normal Smoothing

This eliminates anomalies in the ground normal calculation by performing multiple raycasts at various sample points around the base of the player’s capsule and averaging the results to produce the final ground normal. The resultant vector is then smoothed between consecutive frames.

Here is a prototype of our character controller walking over rounded surfaces in our playground scene.
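The sketch below shows the general shape of this technique. The ground-probe callback and the blend factor are assumptions standing in for whatever query the physics engine actually exposes:

```typescript
// A minimal sketch, not engine code: average hit normals from several downward probes,
// then exponentially smooth the result across frames.
type Vec3 = { x: number; y: number; z: number };
type GroundProbe = (origin: Vec3, maxDistance: number) => { normal: Vec3 } | null;

const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const normalize = (a: Vec3): Vec3 => {
  const len = Math.hypot(a.x, a.y, a.z);
  return len > 1e-6 ? scale(a, 1 / len) : { x: 0, y: 1, z: 0 };
};
const lerp = (a: Vec3, b: Vec3, t: number): Vec3 => add(scale(a, 1 - t), scale(b, t));

let smoothedNormal: Vec3 = { x: 0, y: 1, z: 0 };

function updateGroundNormal(probe: GroundProbe, samplePoints: Vec3[], maxDistance: number, blend: number): Vec3 {
  let sum: Vec3 = { x: 0, y: 0, z: 0 };
  let hits = 0;
  for (const point of samplePoints) {
    const hit = probe(point, maxDistance);             // one downward cast per sample point
    if (hit) { sum = add(sum, hit.normal); hits++; }
  }
  if (hits > 0) {
    const average = normalize(scale(sum, 1 / hits));   // averaged normal for this frame
    smoothedNormal = normalize(lerp(smoothedNormal, average, blend)); // smooth between frames
  }
  return smoothedNormal;
}
```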

Slope Fatigue System

Any slope above a certain threshold incline will induce “slope fatigue” in the player over a short period of time. The more fatigued the player is, the more slowly he will ascend the surface in the upward direction of the incline. After a certain amount of fatigue has accumulated, based on slope steepness, the player will begin sliding down the slope. Slope fatigue will recover once the player is on a more level surface.
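As a rough sketch of how such a system might be structured (all constants are illustrative, not our tuned values):

```typescript
// Accumulate "slope fatigue" on steep ground, slow uphill movement as it builds,
// trigger sliding past a threshold, and recover on level ground.
const FATIGUE_SLOPE_DEG = 35; // slopes steeper than this start to fatigue the player
const FATIGUE_RATE = 0.5;     // fatigue gained per second on a steep slope
const RECOVERY_RATE = 1.0;    // fatigue recovered per second on gentler ground
const SLIDE_THRESHOLD = 1.0;  // fatigue level at which the player starts sliding

let fatigue = 0;

function updateSlopeFatigue(slopeDegrees: number, dt: number): { uphillSpeedScale: number; sliding: boolean } {
  if (slopeDegrees > FATIGUE_SLOPE_DEG) {
    // Steeper slopes fatigue the player faster.
    fatigue += FATIGUE_RATE * (slopeDegrees / FATIGUE_SLOPE_DEG) * dt;
  } else {
    fatigue = Math.max(0, fatigue - RECOVERY_RATE * dt);
  }
  return {
    uphillSpeedScale: Math.max(0, 1 - fatigue), // more fatigue = slower ascent
    sliding: fatigue >= SLIDE_THRESHOLD,
  };
}
```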

Wall Avoidance

Automatic wall avoidance allows for smoother steering behavior when walking around walls and corners. The character controller raycasts ahead in the direction of movement to detect walls and other obstructions that would block movement. If detected, and the angle of incidence is shallow, the player is steered away from the surface. On the left side of the following image, the player sticks to the wall as he brushes against it. On the right side of the image, the player gently slides off the surface as his steering is adjusted.

Credit: Marius Holstad, Source
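In sketch form, the steering adjustment amounts to removing the into-wall component of the movement direction when the approach is shallow. The raycast callback and threshold below are assumptions, not our engine’s API:

```typescript
// A minimal sketch, not engine code: probe ahead and slide along shallow obstructions.
type Vec3 = { x: number; y: number; z: number };
type Probe = (origin: Vec3, direction: Vec3, distance: number) => { normal: Vec3 } | null;

const dot = (a: Vec3, b: Vec3) => a.x * b.x + a.y * b.y + a.z * b.z;
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });

const GRAZING_THRESHOLD = 0.5; // below this the approach counts as shallow (illustrative)

function steerAroundWalls(probe: Probe, position: Vec3, moveDir: Vec3, probeDistance: number): Vec3 {
  const hit = probe(position, moveDir, probeDistance);
  if (!hit) return moveDir;                        // nothing ahead, keep steering as-is
  const headOn = -dot(moveDir, hit.normal);        // 1 = straight into the wall, 0 = grazing
  if (headOn >= GRAZING_THRESHOLD) return moveDir; // head-on: let the collision response handle it
  // Shallow approach: strip the into-wall component so the character slides along the surface.
  return sub(moveDir, scale(hit.normal, dot(moveDir, hit.normal)));
}
```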

Analogue Input Processing

Analogue movement input from a thumbstick is a very different approach to controlling direction and speed than the keyboard’s 8-way digital input. In order to sanitize this raw axis data and map it to movement inputs the controller can read, we filter it through dead zones and interpolate the results.

  • Inner Dead Zone
  • Outer Dead Zone
  • Radial Dead Zone
  • Range Mapping
  • Non-Linear Interpolation

Image Credit: Ryan Juckett, Source
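A compact sketch of that processing chain (the thresholds and response curve are illustrative): apply a radial dead zone, range-map the remaining magnitude, then square it for a gentler low-speed response.

```typescript
// A minimal sketch: radial dead zone + range mapping + non-linear response curve.
const INNER_DEAD_ZONE = 0.2;  // ignore small, noisy deflections near center
const OUTER_DEAD_ZONE = 0.95; // treat near-maximum deflection as full

function processStick(rawX: number, rawY: number): { x: number; y: number } {
  const magnitude = Math.hypot(rawX, rawY);
  if (magnitude < INNER_DEAD_ZONE) return { x: 0, y: 0 };
  // Range-map the magnitude from [inner, outer] to [0, 1].
  const t = Math.min(1, (magnitude - INNER_DEAD_ZONE) / (OUTER_DEAD_ZONE - INNER_DEAD_ZONE));
  // Non-linear interpolation: squaring gives finer control at low deflections.
  const curved = t * t;
  const rescale = curved / magnitude;
  return { x: rawX * rescale, y: rawY * rescale };
}
```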


Hopefully this post provided some insight into the design and implementation of character controllers and some of the considerations to take into account when developing one. The bottom line is that there is no one right solution that works in all situations. Every game’s needs are very different and developing a solid character controller is going to largely be an iterative process of discovery and polish. The final 10% is what separates a clumsy, buggy controller from a responsive one that works well and immerses the player. This is one of the game’s most critical components that the player continually interfaces with during gameplay. It can easily make or break an entire game, so take the proper time and make it feel great!

 


Posted by on June 16th, 2017

Hello everyone! The name is Duane and I am the Animation Director here at Nerd Kingdom.

During my lengthy career in game development, I have certainly been here before.  Well, not really here, as here is a bit different.  However, in some aspects, it is almost entirely the same.  The outstanding difference is the Eternus Game Engine currently under development at Nerd Kingdom.  Built from the ground up, Eternus holds the promise of a groundbreaking game development engine and tools upon which its flagship product will be developed.  So, it’s deja vu all over again…or is it?

The first time I heard the term “virtual reality” was when I began my career in 1994 as a Lead Animator at Virtual World Entertainment (VWE). The Chicago game studio made two products, a first-person shooter (FPS) “walking tank” game called Battletech and a “hovercraft racing” title called Red Planet.



Each product was built from the ground up on a proprietary game engine, completely unique to the requirements of gameplay for a multiplayer FPS and space-based racing title respectively.  Each engine included its own set of development tools and export processes, designed and built with essential integration toward the support of an efficient iterative creative process.  Nothing was borrowed or modded, and middleware was non-existent.  All of it was brand new, completely from scratch.  (Ok, truth be told, some code between Battletech and Red Planet was recycled.  But, I’m trying to make a point here.)

Fresh out of college, I was the studio’s first and only Lead Animator and it fell to me to collaborate with a newly hired Junior Programmer to design, test, and implement an integrated LOD Texturing tool.  The sky was the limit and… “What the hell is an LOD anyway?”

So, there I was, tasked with one of the most important art tools for Battletech’s and Red Planet’s CG art development.  Not because I was particularly suited for the role, but because I was “the new guy” and no one else wanted the job.

If you’ve ever wanted to make games for a living but knew nothing about the process, that was exactly me when I began my career.  Lucky for me, this first challenge was a remarkable Art Tools design experience and quite an education.

Trial by fire, I learned how to make LODs by hand expeditiously, a method of reducing an object or character’s total number of polygons while maintaining its shape and silhouette.  I made four Levels of Detail (LOD) for each of the 20+ Mechs (aka “walking tanks”) and 12+ VTOL (“vertical take-off and landing”) racing craft.  That’s 128 LODs plus the original 32+ models.

Then, I learned about creating UV Maps followed by applying textures via Planar Projection mapping for the many texture groups within a single model.  At the time, Planar Projection mapping was all that this tool would provide.

The number of texture groups per model was enormous.  I had to rotate and place each Planar Projection, an intermediate object represented by a 3D Plane, over every single polygon group or group of facets (aka “face group”).  It was meticulous work.  But then, that’s why we were developing the LOD Texturing tool in the first place: to expedite this laborious process.  Ultimately, our efforts allowed Artists to texture any 3D model and all of its LODs based solely on the original model’s UV textures.  It was a profound success and increased my passion for making games and inventing game development technologies in general.


By the way, is it really work if you love what you do for a living?  For me personally, animating for games is truly a dream come true.  I remember when a Tippett Studios VP at Siggraph once said, “These guys will work for nothing and do it all night long.  They love it!  They’re gamers and artists.”  I thought, “Holy sh*t, she knows our secret!”  But, it’s true.  Game developers will work long after their salaries have exhausted a full day’s work.  We are habitual over-achievers with a relentless work ethic.  Like some kind of digital junkie, looking forward to that next first moment of realized innovation in VR immersion.  It’s addictive!  That’s why most of us look the way we do…trying to score that next (top-selling) digital hit.  Thank God mobile game development offers the same euphoric effects in smaller doses.  And, with the recent debates over VR/AR/MR – virtual reality, augmented reality, and mixed reality respectively – the digital chug-wagon continues.

I remember when I was in college, learning Alias|Wavefront software on a Thompson Digital Image machine back in the early 90’s.  No one knew what they were doing.  The teachers that were teaching the 3D Art and Animation curriculum at Columbia College Chicago had no clue what 3D was or even how to teach it.  Every student dove into the manuals and surpassed their instructors before the end of the second week, too impatient to watch some “old dude” struggle to understand the poorly written tutorials.

Anyway, I digress, back to the topic at hand.


Other things that haven’t changed in game development for decades?  How ’bout the division of labor across three main groups – Programmers, Designers, and Artists.  At VWE, I learned about five disparate teams the studio employed in their game development process – Owner/Managers, Programmers, Designers, Artists/Animators, and Testers.  And that right there was the pecking order by status and salary.  How little has changed in the industry as a whole.

Each of these teams worked in silos as focused but independent specialists prior to pre-production and were brought together as one homogenized unit as the pre-production “vertical slice” neared completion.  No, “vertical slice” has nothing to do with bread or ninja skills – Google it.

Over the years, the terminology for “development meetings with prioritized schedules or milestones” mutated into words like Sprint, Scrum, Agile, and Agile/Scrum.  Call it what you like, it has been the same process since the dawn of game development.  In its most basic form, it goes something like this – create a series of meetings based on a prioritized schedule of milestones around the topics of concepts/game ideas, dev, design, art, scope, and schedules.  Then, build and test the plethora of advancing software.  This is usually followed by cycles of wash/rinse/repeat.  Critical to the successful development of this cycle are smart, honest decisions by talented and experienced key team members…and yadda, yadda, yadda – it’s boring stuff, but absolutely necessary.

Another enduring oddity in game development is something called “studio culture”.  Here’s a checklist of things that, in my experience, have existed in every studio I’ve ever worked for:

  • Very smart, technical/analytical problem-solving academics who love games and are “kids at heart”
  • A fascination with technology trends, games, movies & music, art & animation, and science fiction/fantasy.
  • Communal eating spaces/kitchens with free drinks – a game developer’s divine right.
  • Tattoos, piercings, long hair.  Occasional bad hygiene?  Perhaps.
  • Action figures
  • Nerf guns
  • Darkened work spaces that are quiet, but at times rowdy on a good day (aka a productive day).
  • Flexible 8-hour work schedules
  • Casual clothes – bare feet (aka sandals or flip-flops), bare legs (aka shorts), baseball caps, and enigmatic t-shirts.
  • The mention of manga/anime, Weird Al (Yankovic) for some reason, and anything sci-fi…most likely a Star Wars reference.

And then, there’s the “proximity task”.  Happens all the time in game development.  It can usually fall to the person who is simply absent at the wrong time during a formal team meeting.  But when it’s an informal discussion, simply sitting at your desk near one can get you saddled with a task that no one wants.  Like today, for example, when I was asked to write this blog.  Happy reading!

By the way, if you’ve made it this far into the article, then bless you for your unwarranted attention.  You are a saint!  Take heed, I’m almost done.

One last thing that is ever present in this industry is the abundance of proprietary processes developed and never shared by the multitude of game developers the world over.  With most new games, and especially with innovative immersive AR/VR experiences on new hardware, a new engine, SDK, and game product are under simultaneous development.  In my experience, the lineage of this simultaneous development started on PC, followed by the original Xbox console, then Xbox 360, Kinect, HoloLens, and Magic Leap.

And now, finally, “Back to Eternus”.  Sounds like a great sci-fi epic, doesn’t it?

Here at Nerd Kingdom, I ran into an old friend of mine not mentioned above, good ol’ Mister Frame Rate.  “How have you been, Old Chum?  It’s been a while.  Wife and kids?  Goooood.”  Ever the divisive arbiter of quality graphics versus render speed, Frame Rate could often be an elusive collaborator.  But last week, he sauntered up to me with a drink, “Here, knock this back.  Oh, I forgot. You don’t drink. (Chug! Slurp.)  Let’s talk, shall we?”

So, after closing time, there we were, old Frame Rate and I, talkin’ ’bout the Good Ol’ Days and the mischief he put me through as a Director of Animation under fire for the largest memory footprint that character animation had ever occupied in VWE’s history.  Now, I can’t say that I remember those days with as rosy a resplendent recall, but I do remember the relief I felt when we were able to solve the issue with a technical art solution, an animation export tool, that we could all agree upon.

Allow me to blather on in detail about this very familiar topic.  In the early days of game development, when you would export a character animation for a game, whether authored in Maya, 3D Studio Max, or some other CG software of choice, the animation asset was exported with a linear keyframe for every frame of motion exhibited by each joint or node in a character’s skeletal hierarchy, regardless of whether its value changed, for the duration of the motion.

Well, as we research a popular export format, we are seeing it produce a similar result – a keyframe on every frame.  And so, it’s not surprising that discussions about frame rates and reducing file sizes have stirred this air of frame rate nostalgia.  Suffice it to say, there is a lot of keyframe data that can be filtered and omitted from animation assets, which will reduce the size of every animation file, thereby reducing its memory footprint and load times, and in turn increasing frame rate.

The last time I helped solve this puzzle, we decided upon a proprietary export tool that would allow the Technical Animator or Animator to provide an overall attribute value, as well as an attribute value per joint (per axis) to influence the total number of keyframes that would be generated along a curve.  These attribute values would then generate a range of results, interpreting the motion (based on angle deviation) as “a keyframe every frame” to “a reduced or filtered key set based on the degree of change (by angle deviation) along a curve” to “omitting keyframes completely”.

Said differently, the algorithm inspected the curve and re-created it as a slimmer version of itself (in bits).  Where there were more changes in value, more keyframes were exported or maintained along that portion of the curve.  Where there were fewer changes in value, the placement of keyframes was farther apart.  Whatever solution is devised for Eternus, we are certain to surpass the current state of our technology as of this writing.  And, I can’t wait to revisit that feeling of overwhelming accomplishment when the motion in-game is identical at less than half its original file size.
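To make the idea concrete, here is a deliberately simplified sketch. It is not the original VWE tool, and it uses a simple per-key delta against the last kept key rather than true curve-deviation analysis:

```typescript
// Walk a sampled rotation curve and keep a keyframe only when the value has drifted
// from the last kept key by more than a per-joint angle threshold.
interface Key { frame: number; angleDegrees: number; }

function reduceKeys(sampled: Key[], angleThreshold: number): Key[] {
  if (sampled.length <= 2) return sampled.slice();
  const kept: Key[] = [sampled[0]];
  for (let i = 1; i < sampled.length - 1; i++) {
    const last = kept[kept.length - 1];
    // Keep this key only if the motion has changed enough since the last kept key.
    if (Math.abs(sampled[i].angleDegrees - last.angleDegrees) >= angleThreshold) {
      kept.push(sampled[i]);
    }
  }
  kept.push(sampled[sampled.length - 1]); // always keep the end of the motion
  return kept;
}

// A threshold of 0 keeps a key on every frame; a very large threshold keeps only the
// endpoints – matching the "every frame" to "omit completely" range described above.
```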

Oh, the nostalgia for innovative thinking.  All of it, in pursuit of making great gaming experiences with Eternus that will entertain and occupy the masses.  I guess you can go home again.

All that’s old is new again – for the first time.  May you enjoy playing our product in its many pre-launch versions.  And may the God of Shipped Titles smile upon us as we run head-long into the many game development cycles of deja vu and repeated timelines.  Wash. Rinse. Repeat. Game.

Have a wonderful weekend!


Posted by on May 19th, 2017

Hi everyone!

Jake (theFlying3.14) here, Lead of Tool Development at Nerd Kingdom. Several powerful systems have begun to come online in the Eternus engine recently. To support these systems we’ve designed several tool prototypes to aid designers in creating content. Today I’d like to share one of the more important systems, one that is being reused in multiple instances to provide a comprehensive functional experience going forward: the Visual Node Programming platform, or VNP.

VNP is a node programming platform that allows users to script functionality across different aspects of the game. The system is already being used in a few early tool prototypes: the biome tool, the animation web, and an AI behavior scripter. Future tools such as the material editor, shader creator, and quest editor are planned for VNP implementations.

Starting from the MIT-licensed ThreeNodes.js – a WebGL shader tool – we heavily reworked the basic data structures and assumptions built into the library. Although there is still a lot we would like to do with it, what we’ve ended up with gives us great scalability.

The Visual Node Programming platform exists as an abstract application that we employ within each tool implementation, customizing it to fit the context. This means when you open the biome tool, you will be greeted with an experience similar to the animation web. In reality, however, each tool might need to operate slightly differently. For example, the biome system reads the node graph from right to left, whereas the animation system reads “state strings” from left to right. To accommodate this, each implementation of VNP has its own override of several fundamental objects: nodes, connections, and workspaces. This allows great flexibility when developing and updating tools built with VNP.
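In sketch form (illustrative class names only; this is not the actual VNP source), the per-tool override pattern might look something like this:

```typescript
// Each tool subclasses the shared node type and supplies its own evaluation behavior,
// while the editor UI works against the common base class.
abstract class VnpNode {
  constructor(public readonly id: string) {}
  abstract evaluate(): void;
}

class BiomeNode extends VnpNode {
  // The biome tool resolves its graph right to left, pulling values from inputs.
  evaluate(): void { /* gather upstream values, then compute this node's output */ }
}

class AnimWebNode extends VnpNode {
  // The animation web instead reads its "state strings" left to right.
  evaluate(): void { /* emit the next state in the sequence */ }
}
```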

“So great another node programming tool….”

Obviously we are not the first to do this. There are, however, benefits from a node programming system being used alongside Eternus that you don’t see many other places. First, all of our current prototypes, including VNP, are written in JavaScript/TypeScript. This allows for extreme extensibility and accessibility versus platforms written in lower-level languages. Another aspect of node programming we wanted to tackle was large groups of functionality – trying to make large graphs manageable. To do this we completely redesigned how groups worked in the original library, providing the ability to group nodes on the fly and to use those groups in multiple webs across a project. We hope this significantly cuts down on development time.

 

Over the past several months we have gotten to experiment with a few different approaches to VNP integration. The first approach we took was to build the node graph, save the data models needed specifically for the node graph (like node.x and node.y, etc.), grab just the data we needed for the engine resource, and send it in one big packet. Of course, this worked until we started building big graphs. Once the save packet got too big to pass between the frontend and the backend, we smartened up.

The animweb tool took a different approach: each time a node is connected to the graph, the system evaluates where it is and dynamically adds it to the resource. This resulted in live coding – being able to edit a resource’s node graph and see it change immediately. It also resulted in a lot of edge cases that are still giving me nightmares. For example, deleting nodes, or removing one connection from a node that’s still connected to another field, became really tedious.

Our overall goal for user-facing tools is to create simple interfaces that developers at any skill level will be able to leverage. VNP provides a familiar interface for designers, as similar platforms are used in engines like Unreal and Unity. While programming with nodes can be easier than scripting, this is not our final destination. We decided to tackle VNP first to provide us with a clear functional foundation of what designers need. Since nodal programming lends itself to so many situations, we can provide a consistent-feeling experience across the game development workflow. Later we can develop more specialized tools to streamline certain common practices and make it easier for less experienced devs.

I hope you enjoyed this look at our Visual Node Programming platform, and I’m excited to get our tool suite ready for feedback from our awesome community.


Posted by on April 14th, 2017

Hey all! This is Northman from Nerd Kingdom here to share some AI bytes with you. Specifically I’d like to talk about Pathfinding and how it relates to TUG and our characters.

Pathfinding is the act of finding the best path between two points in the world. The key word there is “best”. The definition of best depends on the type of game you are making and the type of character you are trying to find a path for. I think a small thought experiment helps to clarify the set of problems pathfinding tries to solve:

Imagine a mountain goat and a human facing a mountain that extends as far to their left and right as their eyes can see. Directly in front of them is a door. On the door is a sign that reads: “Tunnel to the Other Side”. The human does not have any climbing gear, and the mountain is far too steep for the human to scale without proper equipment. If both the human and the goat want to get to the other side of the mountain, what do they do? The goat does not have hands to open doors nor the ability to read. However, the goat is a sure-footed climber and has no problem scaling the mountain, so it goes on its goaty way over the top. Conversely, the human does have hands and can read, so they take the tunnel. The paths the goat and the human found are each the best path they can muster by their own definition of best, even if the two are very different.

 

 



By Darklich14 (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)],  via Wikimedia Commons

*Insert funny remark about goats here.*

“Ropes? We don’t  need no stinkin’ ropes!”


 

 

So now that we have defined Pathfinding and talked about what “best” means we can look at what tools we have for finding paths. Most pathfinding techniques break up the world into spatial subsections (nodes) and store information about how those nodes are connected (edges). In Computer Science we call a set of nodes and edges a “graph”. Graphs are cool because they have been studied by mathematicians since the 18th century (check out the Seven Bridges of Königsberg). What this means to us is there are well known techniques, also known as algorithms, for dealing with graphs and finding best paths on them (see Pathfinding on Wikipedia). One of the most common algorithms used in games for pathfinding is A*. I won’t get into the details of A* in this post because it gets technical very quickly but the image below provides a good visual representation of a typical A* search.

 

 



 

By Subh83 (Own work) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

An illustration of an A* search. The initial red node indicates the starting position and the green node indicates the goal position. The gray polygon represents an obstacle. The goal is to find the shortest-distance path from the starting node to the goal node while avoiding the obstacle. The path found is highlighted in green at the end of the animation.


 

 

We are currently exploring algorithms and graph representations for our world in TUG but so far we have implemented a navigational grid. In graph terms the grid is made up of nodes that all represent the same amount of space (one square meter) and each node has edges to its immediate neighboring nodes (grid cells): top left, top, top right, right, bottom right, bottom, bottom left, and left. These cells can be blocked by obstacles (shown in the image below in red) or open (shown in the image below in blue). This allows us to run A* searches to find best paths for our characters that avoid obstacles.


Blue Cells:  Areas a character can navigate in.

Red Cells:  Areas blocked by an object.
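For readers who want something concrete, here is a compact sketch (not the TUG implementation) of A* over an 8-connected grid of one-meter cells, with blocked cells marked in a boolean map:

```typescript
type Cell = { x: number; y: number };

const NEIGHBORS = [
  { dx: -1, dy: -1 }, { dx: 0, dy: -1 }, { dx: 1, dy: -1 },
  { dx: -1, dy: 0 },                     { dx: 1, dy: 0 },
  { dx: -1, dy: 1 },  { dx: 0, dy: 1 },  { dx: 1, dy: 1 },
];

// Octile distance: an admissible heuristic for 8-way movement on a grid.
function heuristic(a: Cell, b: Cell): number {
  const dx = Math.abs(a.x - b.x);
  const dy = Math.abs(a.y - b.y);
  return Math.max(dx, dy) + (Math.SQRT2 - 1) * Math.min(dx, dy);
}

function findPath(blocked: boolean[][], start: Cell, goal: Cell): Cell[] | null {
  const height = blocked.length;
  const width = blocked[0].length;
  const key = (c: Cell) => c.y * width + c.x;

  const gScore = new Map<number, number>([[key(start), 0]]);
  const cameFrom = new Map<number, Cell>();
  // A sorted array stands in for a priority queue to keep the sketch short.
  const open: { cell: Cell; f: number }[] = [{ cell: start, f: heuristic(start, goal) }];

  while (open.length > 0) {
    open.sort((a, b) => a.f - b.f);               // expand the lowest f-score next
    const { cell } = open.shift()!;

    if (cell.x === goal.x && cell.y === goal.y) { // goal reached: rebuild the path
      const path: Cell[] = [cell];
      let current = cell;
      while (cameFrom.has(key(current))) {
        current = cameFrom.get(key(current))!;
        path.unshift(current);
      }
      return path;
    }

    for (const { dx, dy } of NEIGHBORS) {
      const next = { x: cell.x + dx, y: cell.y + dy };
      if (next.x < 0 || next.y < 0 || next.x >= width || next.y >= height) continue;
      if (blocked[next.y][next.x]) continue;      // red cell: impassable
      const stepCost = dx !== 0 && dy !== 0 ? Math.SQRT2 : 1;
      const tentative = gScore.get(key(cell))! + stepCost;
      if (tentative < (gScore.get(key(next)) ?? Infinity)) {
        gScore.set(key(next), tentative);
        cameFrom.set(key(next), cell);
        open.push({ cell: next, f: tentative + heuristic(next, goal) });
      }
    }
  }
  return null; // no path between start and goal
}
```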

 

 

I hope you have enjoyed this introduction to pathfinding. Pathfinding is a large topic with many different techniques available depending on the pathfinding problem at hand. If you have any questions please feel free to email me at “northman at nerdkingdom dot com”. We are working hard to refine our pathfinding approaches for our characters and look forward to sharing more with you soon!

Have a great weekend!


Posted by on March 17th, 2017

Happy Friday! Cambo here to kick the weekend off by bringing back the dev tech blogs. We plan to keep them coming at least once a month as we make more progress on development. As for the Q&A, most of the questions have been answered and I’ll be sure to post it in our next update blog. For now, here’s our infrastructure dude, Maylyon!

 


 

Hey everybody! Maylyon here with a new non-game related, non-engine related, non-tools related tech blog!  Hint: this is your tune-out point if those are the topics you are looking for.  3 … 2 … 1 …  Still here? Excellent!

After literally years of silence about Devotus, I wanted to follow up with a snapshot of where Devotus is today. If your memory is a little rusty, Devotus will be our mod content distribution pipeline to help mod authors create and manage their home-brewed content and deliver it to end-users.  To get context for this blog entry, you should definitely read those first two blogs.  Without further ado, the “what has been happening?” (aka: “you guys still work on that thing?”).

 

ModJam 2016 ARMAGEDDON!

In case you didn’t know, there were mods on Devotus’s developmental servers from a TUG v1 ModJam in early 2016.  Don’t go rushing to find them now; they’re gone.  They were sacrificed to the binary gods in order to make way for…

 

Going “Serverless”

Suspend your understanding that the term “serverless” is a lie because there are always servers somewhere and play along for a bit.  The old Devotus architecture was built on AWS EBS-backed EC2 instances running a mix of Node.js, C++, and MongoDB.  It looked a little bit like this:


The primary detriments to this approach were:

1.    Paying for these servers (even extremely small servers) when nobody was using them,

2.    Scalability at each layer of the stack would incur even more financial cost and contribute to…

3.    Complexity of the implementation.

Leveraging AWS API Gateway and AWS Lambda, we have moved to an architecture that looks like:


Moving to this setup allows us to:

1.    Greatly reduce the costs associated with Devotus (especially when nobody is using it),

2.    Offload most of the scalability problem to AWS (less work = more naps),

3.    Synergize our implementation with the other microservices we have been developing on the Infrastructure team.
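For the curious, a handler in this kind of API Gateway + Lambda setup generally takes the shape sketched below. This is illustrative only – it is not the Devotus source, and the route and payload are invented:

```typescript
// The general form of a Node.js Lambda handler behind API Gateway's proxy integration:
// a plain async function that receives the request event and returns a status and body.
export const handler = async (event: { pathParameters?: { modId?: string } }) => {
  const modId = event.pathParameters?.modId;
  if (!modId) {
    return { statusCode: 400, body: JSON.stringify({ error: "modId is required" }) };
  }
  // A real handler would look the mod up in a data store before responding.
  return { statusCode: 200, body: JSON.stringify({ modId, status: "ok" }) };
};
```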

 

Support for GitLab

Devotus now allows mod authors to create git repositories on GitLab in addition to GitHub.  It’s actually been there for a while but wasn’t there in the last blog I wrote. By supporting GitLab and their awesome pricing model, Devotus allows a mod author to choose whether they want their mod’s git repository to be public or private at mod creation time.  This choice does not apply to mods created on GitHub because their pricing model is less awesome (but still pretty awesome) and I’m cheap (see previous section for proof).

 

Improved Download Metrics

In the “bad old” days (read as “a month ago”), mod download count was just an unsigned integer.  Download request comes in, number gets incremented by one.  Commence spamming download of your own mod to falsely inflate its popularity!  Everybody wins!  … Except for the people who want to use the system.

Now, in the “brave new world” days, mod downloads are tracked per-user, per-version.  This allows mod authors to track their mod’s popularity throughout its release history and allows end-users to trust that a mod’s popularity is probably because of an amazing mod author rather than a mod author’s amazing spam-bot.

 

The Future Is…?

That’s all I have for this installment.  I (or somebody from my team) will be back with future Infrastructure updates as we get new and/or exciting things to share.  In the meantime, be sure to jot down all those cool mod ideas you have kicking around in your brain into a little leather-bound notebook so that WHEN TUG v2 is launched and WHEN Devotus is client-facing, you will be ready!

Have a great weekend!


Posted by on September 24th, 2015

Up in the sky! It’s a bird! It’s a plane! No! It’s a flying dessert?

Hey folks! Time for another new face: Flying3.14 is the name, full stack is the game. I wandered in a few months ago and have been banging on things behind the scenes; recently, I’ve been focusing on the user experience of the new modding system that Maylyon introduced us to in the last blog, Devotus.

Access to this new mod system comes in two parts: a web portal and the game launcher. This blog will focus on the development of the web portal, which will facilitate mod creation, versioning, and multi-author management. We’ll go over how to create a mod with Devotus, upload the source to GitHub, tag that code as your first version, and download the mod for the first time!

To create a mod, users must have both a Nerd Kingdom and a GitHub account. After providing some basic information about the mod, the front end sends the create request while the user is free to browse the rest of the portal. On the backend, Devotus is busy creating the GitHub repo and preparing the customizable marketing page. After a few moments the mod is created, a push notification is sent to the user, and the new mod is available from the My Mods page.


All mods are created with an empty git repository, just waiting for awesome code. To add source to your project you’ll need to clone the repo and commit as usual. If you are new to git or GitHub, here is a resource to get you started. Helpful links such as the mod’s GitHub page can be found on the Management page. Here you can edit the basic info entered earlier, add authors, add dependencies, add media, and publish updates.


As explained in the last blog, one of the common problems we wanted to solve was mods with multiple authors. The Authors tab within the Management page allows you to add multiple authors, and Devotus will do the footwork to make sure GitHub knows who can access the repository.


Once everyone is on board and has pushed their code, the Versioning tab will help you tag your release. Updates are made simply by creating a tag on any commit in the master branch. You can do this by visiting the GitHub Tags & Releases page via a link located in the Versioning tab. The tag must be formatted like so: vX.X.X-release. In most cases Devotus will be listening for these tags and will automatically start building the new download package in the background. In the event a manual check of the tags needs to be made, a link is provided in the Versioning tab.


Once Devotus is finished creating the download package, the status in the Management portal will update, and a ‘Test Download’ link will be available. Wa-lah! A single .zip package that contains your mod! Soon the Launcher will be collecting these, installing and managing the updates automagically!


Each mod comes with a marketing page that can be customized using the Nerd Kingdom Page Editor.  Share this page with players and followers on social media to give a detailed insight into what your mod provides. This tool is found on the Management page and runs off the same media and information provided throughout the portal. Change the look and feel through the theme menu, and add images or YouTube videos in the media manager. Entire new sections can be added containing multiple types of content including lists and tabs, allowing you to fully explain the features and usage of your mod. We believe by providing modders with an easily customizable interface to reach players, each mod’s presence can be a little larger than the typical profile page on a mod management service. To try out the Nerd Kingdom Page Editor head over to the ExampleMod page where you can fiddle to your heart’s desire.


The frontend is still a work in progress, but it demonstrates the direction modding is heading, and we are excited about all the opportunities that brings.

