Posted on July 28th, 2017

Greetings! 5ubtlety here, systems designer at Nerd Kingdom and the programmer behind the player character controller. I’m here to discuss our approach to designing and implementing this critical component that tethers the player to the world. There are too many low-level implementation details to fully expound upon in one post, so instead we’ll take a high-level look at the considerations and challenges involved in developing a character controller.


So what is a character controller? Simply put, the character controller is responsible for moving the player avatar through the game world. This entails translating player input into movement mechanics and processing any game-specific movement dynamics. A character controller also needs to detect and react to its environment, determine whether the character is grounded or airborne, manage character state, and integrate with other game systems, including physics, animation, and the camera.

There are two main types of character controllers: dynamic (entirely physics-driven) and kinematic (non-reactive collisions). Character controller implementation is highly game-specific, but most games opt for kinematic controllers. Very few games aim for complete physical realism; most have stylized physical reactions tailor-made to feel great within the context of their gameplay environments.

The character controller is modeled as a capsule-shaped rigid body. The rounded capsule shape helps the controller slide off surfaces as it moves through the environment. It’s affected by gravity and is constrained against terrain and other colliders by the physics engine. The capsule’s orientation is locked upright, but may be manually adjusted in special cases, such as acceleration tilt, which pivots the character around its center of mass based on acceleration. Unless the physics engine handles it, movement will need to be projected onto the ground plane so the character can properly move up and down terrain.
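As a rough illustration of that last point (a minimal TypeScript sketch, not our actual engine code), projecting a velocity onto the ground plane looks something like this; the vector helpers are inlined so the example is self-contained:

```typescript
// Minimal vector helpers so the sketch is self-contained.
type Vec3 = { x: number; y: number; z: number };

const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const scale = (v: Vec3, s: number): Vec3 => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const length = (v: Vec3): number => Math.sqrt(dot(v, v));
const normalize = (v: Vec3): Vec3 => scale(v, 1 / (length(v) || 1));

// Remove the component of the desired velocity along the ground normal so
// the character follows rising and falling terrain. Re-applying the original
// speed afterwards is a design choice; some games let slopes slow you down.
function projectOnGroundPlane(velocity: Vec3, groundNormal: Vec3): Vec3 {
  const flattened = sub(velocity, scale(groundNormal, dot(velocity, groundNormal)));
  return scale(normalize(flattened), length(velocity));
}
```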

Raycasts (and other geometric casts) are your main tool for sensing the environment immediately around the character controller so it can react properly. These casts provide information such as the direction and distance to nearby colliders, as well as their surface normals.
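For example, a grounded check built on a single downward cast might look like the following sketch; the `Raycast` signature here is a hypothetical stand-in for whatever your physics engine actually exposes:

```typescript
// Hypothetical raycast signature; most physics engines expose something similar.
type Vec3 = { x: number; y: number; z: number };
interface RaycastHit { distance: number; normal: Vec3; }
type Raycast = (origin: Vec3, direction: Vec3, maxDistance: number) => RaycastHit | null;

// Grounded check: cast a short ray down from the base of the capsule. The
// small tolerance keeps the result stable when the capsule floats a hair
// above the surface between physics steps.
function groundCheck(raycast: Raycast, capsuleBase: Vec3, tolerance = 0.1): RaycastHit | null {
  return raycast(capsuleBase, { x: 0, y: -1, z: 0 }, tolerance);
}
```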


In open-world games, movement is typically the principal mechanic that consumes the player’s time. Therefore, movement needs to feel great as the player navigates through the world. Minimally, it needs to be functional, responsive, and intuitive. Depending on the game, you may have secondary goals such as allowing for greater player expression, or aiming for a high degree of player control and precision, such as in a fast-paced platformer. Often, trade-offs will need to be made, so there is no universal solution to these matters. For example, consider the following graphic, in which the left figure has a flat velocity, while the right figure applies acceleration. The left figure allows for a higher level of precision in movement, while the right is more realistic and may look and feel better in certain game contexts.

Image Credit: Marius Holstad, Source
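In code, the difference between the two figures boils down to something like this simplified sketch (parameter names are ours, not from our engine):

```typescript
// Left figure: speed snaps straight to the target (precise, immediate).
function flatVelocity(input: number, maxSpeed: number): number {
  return input * maxSpeed;
}

// Right figure: speed eases toward the target over time (weightier feel).
function acceleratedVelocity(
  current: number, input: number, maxSpeed: number,
  acceleration: number, dt: number,
): number {
  const target = input * maxSpeed;
  const step = acceleration * dt;
  if (Math.abs(target - current) <= step) return target;
  return current + Math.sign(target - current) * step;
}
```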

Every game is going to have specific movement affordances, such as the ability to sprint, swim, double jump, wall-run, grab ledges, climb ladders, swing from ropes, etc. Every new verb added to this list can vastly expand the player’s mobility. Defining these is just the beginning though. There is much nuance in how they are used, how they feel, and how they interact with other game elements.

Even if all your character can do is simply move side to side and jump, you’re quickly going to run into “The Door Problem” of Game Design. Here are a few of the questions you might start asking:

  • How fast should the player move? What are the maximum and minimum movement speeds? Can the player choose to move at intermediate values?
  • Can the player stop and pivot on a dime?
  • Should the player accelerate and decelerate over time? How quickly?
  • Will your game have different kinds of terrain that affect player movement, such as quicksand or ice?
  • How do the character controller and animation system interact with one another?
  • What size and shape should the character’s collider be?
  • Can the character push or be pushed by other objects when they press against one another?
  • What kind of environmental geometry does your world feature? Sharp and flat edges, or organic, bumpy terrain?
  • Is the player able to walk up slopes? What are the minimum and maximum inclines?
  • How about steps? What are the minimum and maximum heights?
  • Is movement speed slower when walking uphill?
  • Is controller input supported? How will input be handled differently between a keyboard and analogue stick?
  • How does the camera follow the player?
  • How high can the player jump? Are running jumps higher?
  • Is momentum conserved when jumping?
  • What should the force of gravity be? Is this the only factor that determines the player’s fall speed?
  • Is there air friction (drag)?
  • Should the character have a terminal velocity?
  • Does the character have a momentary hang-time at the jump’s apex, or does it immediately begin decelerating downwards?
  • Can the player jump higher by holding the jump button longer?
  • Does the player have any amount of air control, or is mid-air input simply ignored?

This is just the beginning. As development progresses, new questions and issues will arise as environmental variables impose new constraints on the initial design. You should develop your controller gradually, making steady incremental improvements. In our case, we developed a playground scene where we can test our iterative approach in a consistent, controlled environment. Spoiler alert: most of your development time is going to be spent addressing engine- and game-specific edge cases!


The following are some features we explored while prototyping our character controller. Note that not all of these elements will be relevant to every game.

Camera-Relative Movement

In most 3rd-person perspective games, movement is relative to the camera rather than the avatar, which is more intuitive for the player to process. Some games intentionally break this convention to impart a feeling of vulnerability.
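A minimal sketch of camera-relative input, assuming a yaw-only camera angle (engine axis conventions vary, so treat the signs as illustrative):

```typescript
// Rotate 2D movement input by the camera's yaw so "up" on the stick always
// means "away from the camera". Exact axis conventions depend on the engine.
function cameraRelativeDirection(
  inputX: number, inputY: number, cameraYaw: number,
): { x: number; z: number } {
  const sin = Math.sin(cameraYaw);
  const cos = Math.cos(cameraYaw);
  return {
    x: inputX * cos + inputY * sin,
    z: inputY * cos - inputX * sin,
  };
}
```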

Motion Alignment

When moving, the pawn automatically pivots over time (with some smoothing) to align with the movement direction.

Image Credit: Marius Holstad, Source
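One common way to implement that smoothing, shown as a simplified sketch rather than our actual implementation, is to rotate the current yaw toward the movement yaw at a capped angular speed:

```typescript
// Turn the pawn's heading toward the movement direction at a capped angular
// speed (radians per second), producing the smoothed pivot described above.
function alignToMovement(
  currentYaw: number, moveYaw: number, turnSpeed: number, dt: number,
): number {
  // Shortest signed angle difference, wrapped into [-PI, PI].
  let delta = moveYaw - currentYaw;
  while (delta > Math.PI) delta -= 2 * Math.PI;
  while (delta < -Math.PI) delta += 2 * Math.PI;

  const step = turnSpeed * dt;
  if (Math.abs(delta) <= step) return moveYaw;
  return currentYaw + Math.sign(delta) * step;
}
```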

Jump Input Buffering and Latency Forgiveness

This helps with jump timing in case the player presses the jump button a few frames before actually reaching the ground. Additionally, this permits the player to execute a jump even if they pressed the button immediately after walking off a ledge and consequently entered the airborne state. This pattern can be applied to other kinds of character input as well.
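A compact sketch of both forgiveness windows (input buffering plus the post-ledge grace period, often called "coyote time"); the window durations below are placeholders, not our tuned values:

```typescript
// Two small forgiveness timers: one counts down after a jump press (input
// buffering), the other after leaving the ground (post-ledge grace period).
class JumpForgiveness {
  private bufferTimer = 0;
  private coyoteTimer = 0;

  constructor(private bufferWindow = 0.1, private coyoteWindow = 0.1) {}

  onJumpPressed(): void {
    this.bufferTimer = this.bufferWindow;
  }

  // Call once per frame; returns true on the frame a jump should fire.
  update(grounded: boolean, dt: number): boolean {
    this.coyoteTimer = grounded ? this.coyoteWindow : Math.max(0, this.coyoteTimer - dt);
    this.bufferTimer = Math.max(0, this.bufferTimer - dt);

    if (this.bufferTimer > 0 && this.coyoteTimer > 0) {
      this.bufferTimer = 0;
      this.coyoteTimer = 0; // consume both windows so the jump fires once
      return true;
    }
    return false;
  }
}
```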

Air Control

This allows the player to adjust their airborne velocity, but with reduced effect.
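A minimal sketch, shown per axis for brevity; `airControl` is a hypothetical tuning parameter that scales ground acceleration while airborne:

```typescript
// Steer airborne velocity toward the input target with reduced authority.
// airControl = 0 ignores mid-air input entirely; 1 matches ground handling.
function applyAirControl(
  velocity: number, target: number, groundAccel: number,
  airControl: number, dt: number,
): number {
  const delta = target - velocity;
  const step = Math.min(Math.abs(delta), groundAccel * airControl * dt);
  return velocity + Math.sign(delta) * step;
}
```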

Animation

  • Animation Blending
  • Upper/Lower Body Animation Layers
  • Root Motion Control
    • Adjust capsule position and/or orientation as a result of playing certain animations.
  • Inverse Kinematic Limb Placement
    • Place feet when walking/running. Particularly useful for steps and slopes.
    • Place hands when climbing or interacting with game objects.
    • Intelligent Ragdolls

Spline-Stepping

This assists in elevating the character up detected steps by smoothing the movement along a curved spline over a short period of time.
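Our exact spline is not shown here, but the idea can be sketched with a simple smoothstep curve standing in for it: the capsule’s vertical lift eases in and out over the step-up duration instead of snapping.

```typescript
// Smoothstep easing standing in for the spline (an assumption, not our curve).
const smoothstep = (t: number): number => t * t * (3 - 2 * t);

// Height the capsule should be lifted at `elapsed` seconds into the step-up.
function stepLiftHeight(stepHeight: number, elapsed: number, duration: number): number {
  const t = Math.min(Math.max(elapsed / duration, 0), 1);
  return stepHeight * smoothstep(t);
}
```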

Here is a prototype of our character controller walking up some stairs in our playground scene.

Ground Normal Smoothing

This eliminates anomalies in the ground normal calculation by performing multiple raycasts at various sample points around the base of the player’s capsule and averaging the results to produce the final ground normal. The resultant vector is then smoothed between consecutive frames.
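A simplified sketch of the two stages, averaging the sampled normals and then blending with the previous frame’s result; the blend factor is illustrative:

```typescript
type Vec3 = { x: number; y: number; z: number };

const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (v: Vec3, s: number): Vec3 => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const normalize = (v: Vec3): Vec3 => {
  const len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z) || 1;
  return scale(v, 1 / len);
};

// Average the normals returned by the raycast samples around the capsule
// base, then blend with last frame's result so the final ground normal
// changes smoothly between frames.
function smoothGroundNormal(samples: Vec3[], previous: Vec3, blend = 0.2): Vec3 {
  const averaged = normalize(samples.reduce(add, { x: 0, y: 0, z: 0 }));
  return normalize(add(scale(previous, 1 - blend), scale(averaged, blend)));
}
```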

Here is a prototype of our character controller walking over rounded surfaces in our playground scene.

Slope Fatigue System

Any slope above a certain threshold incline will induce “slope fatigue” in the player over a short period of time. The more fatigued the player is, the more slowly he will ascend the surface in the upward direction of the incline. After a certain amount of fatigue has accumulated, based on slope steepness, the player will begin sliding down the slope. Slope fatigue will recover once the player is on a more level surface.
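A sketch of how such a fatigue accumulator might be structured; every threshold and rate below is invented for illustration rather than taken from our tuning:

```typescript
// Fatigue accumulator in the shape described above. All constants illustrative.
class SlopeFatigue {
  private fatigue = 0; // 0 = rested, 1 = fully fatigued

  constructor(
    private thresholdDeg = 35,      // inclines steeper than this tire the player
    private gainPerSecond = 0.5,    // while ascending a steep slope
    private recoveryPerSecond = 0.8 // on level-enough ground
  ) {}

  update(slopeDeg: number, movingUphill: boolean, dt: number): void {
    if (slopeDeg > this.thresholdDeg && movingUphill) {
      this.fatigue = Math.min(1, this.fatigue + this.gainPerSecond * dt);
    } else {
      this.fatigue = Math.max(0, this.fatigue - this.recoveryPerSecond * dt);
    }
  }

  // Uphill movement speed shrinks as fatigue grows.
  uphillSpeedScale(): number {
    return 1 - this.fatigue;
  }

  // Fully fatigued players begin sliding back down the slope.
  isSliding(): boolean {
    return this.fatigue >= 1;
  }
}
```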

Wall Avoidance

Automatic wall avoidance allows for smoother steering behavior when walking around walls and corners. The character controller raycasts ahead in the direction of movement to detect walls and other obstructions that would block movement. If detected, and the angle of incidence is shallow, the player is steered away from the surface. On the left side of the following image, the player sticks to the wall as he brushes against it. On the right side of the image, the player gently slides off the surface as his steering is adjusted.

Image Credit: Marius Holstad, Source
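The steering adjustment can be sketched as follows (2D top-down for brevity); the shallow-angle threshold is a made-up value:

```typescript
type Vec2 = { x: number; z: number };

const dot2 = (a: Vec2, b: Vec2): number => a.x * b.x + a.z * b.z;
const norm2 = (v: Vec2): Vec2 => {
  const len = Math.sqrt(v.x * v.x + v.z * v.z) || 1;
  return { x: v.x / len, z: v.z / len };
};

// If the forward cast hit a wall at a grazing angle, remove the into-wall
// component of the move direction so the player slides along the surface.
// `wallNormal` points out of the wall toward the player.
function avoidWall(moveDir: Vec2, wallNormal: Vec2, headOnThreshold = 0.4): Vec2 {
  const facing = -dot2(norm2(moveDir), norm2(wallNormal)); // 1 = head-on, 0 = parallel
  if (facing >= headOnThreshold) return moveDir; // head-on enough: let the wall block

  const n = norm2(wallNormal);
  const into = dot2(moveDir, n);
  return { x: moveDir.x - n.x * into, z: moveDir.z - n.z * into };
}
```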

Analogue Input Processing

Analogue movement input from a thumbstick is a very different way of controlling direction and speed than the keyboard’s 8-way digital input. In order to sanitize this raw axis data and map it to movement inputs the controller can read, we filter it through dead zones and interpolate the results, using the following stages:

  • Inner Dead Zone
  • Outer Dead Zone
  • Radial Dead Zone
  • Range Mapping
  • Non-Linear Interpolation

Image Credit: Ryan Juckett, Source
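Combining these stages, a radial dead zone with range mapping and a non-linear response curve might look like the following sketch; all constants are illustrative defaults:

```typescript
// Radial dead zone with range mapping and a non-linear response curve.
// Stick vectors shorter than the inner dead zone read as zero, vectors past
// the outer dead zone read as full deflection, and the band in between is
// remapped to the full 0..1 range.
function processStick(
  x: number, y: number,
  innerDeadZone = 0.2, outerDeadZone = 0.9, exponent = 2,
): { x: number; y: number } {
  const magnitude = Math.sqrt(x * x + y * y);
  if (magnitude < innerDeadZone) return { x: 0, y: 0 };

  const clamped = Math.min(magnitude, outerDeadZone);
  const mapped = (clamped - innerDeadZone) / (outerDeadZone - innerDeadZone);
  const curved = Math.pow(mapped, exponent); // non-linear interpolation

  const rescale = curved / magnitude;
  return { x: x * rescale, y: y * rescale };
}
```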


Hopefully this post provided some insight into the design and implementation of character controllers and some of the considerations to take into account when developing one. The bottom line is that there is no one right solution that works in all situations. Every game’s needs are very different and developing a solid character controller is going to largely be an iterative process of discovery and polish. The final 10% is what separates a clumsy, buggy controller from a responsive one that works well and immerses the player. This is one of the game’s most critical components that the player continually interfaces with during gameplay. It can easily make or break an entire game, so take the proper time and make it feel great!


Posted on June 30th, 2017

Hey everyone!

This update will be short as I don’t have many screenshots or prototype videos to share today.  However, we did prepare a 30-minute playthrough video for you all! Since the last progress update, we’ve been working hard on polishing our current features and systems. We’ve made incredible progress this month but we still have a long way to go. Have a great weekend and 4th of July!


Posted on June 16th, 2017

Hello everyone! The name is Duane and I am the Animation Director here at Nerd Kingdom.

During my lengthy career in game development, I have certainly been here before.  Well, not really here, as here is a bit different.  However, in some aspects, it is almost entirely the same.  The outstanding difference is the Eternus Game Engine currently under development at Nerd Kingdom.  Built from the ground up, Eternus holds the promise of a groundbreaking game development engine and tools upon which its flagship product will be developed.  So, it’s deja vu all over again…or is it?

The first time I heard the term “virtual reality” was when I began my career in 1994 as a Lead Animator at Virtual World Entertainment (VWE). The Chicago game studio made two products, a first-person shooter (FPS) “walking tank” game called Battletech and a “hovercraft racing” title called Red Planet.

Each product was built from the ground up on a proprietary game engine, completely unique to the requirements of gameplay for a multiplayer FPS and space-based racing title respectively.  Each engine included its own set of development tools and export processes, designed and built with essential integration toward the support of an efficient iterative creative process.  Nothing was borrowed or modded, and middleware was non-existent.  All of it was brand new, completely from scratch.  (Ok, truth be told, some code between Battletech and Red Planet was recycled.  But, I’m trying to make a point here.)

Fresh out of college, I was the studio’s first and only Lead Animator and it fell to me to collaborate with a newly hired Junior Programmer to design, test, and implement an integrated LOD Texturing tool.  The sky was the limit and… “What the hell is an LOD anyway?”

So, there I was, tasked with one of the most important art tools for Battletech’s and Red Planet’s CG art development.  Not because I was particularly suited for the role, but because I was “the new guy” and no one else wanted the job.

If you’ve ever wanted to make games for a living while knowing nothing about the process, that’s exactly where I was when I began my career.  Lucky for me, this first challenge was a remarkable Art Tools design experience and quite an education.

Trial by fire, I quickly learned how to make LODs by hand, a method of reducing an object or character’s total number of polygons while maintaining its shape and silhouette.  I made four Levels of Detail (LOD) for each of the 20+ Mechs (aka “walking tanks”) and 12+ VTOL (“vertical take-off and landing”) racing craft.  That’s 128 LODs plus the original 32+ models.

Then, I learned about creating UV Maps followed by applying textures via Planar Projection mapping for the many texture groups within a single model.  At the time, Planar Projection mapping was all that this tool would provide.

The number of texture groups per model was enormous.  I had to rotate and place each Planar Projection, an intermediate object represented by a 3D Plane, over every single polygon group or group of facets (aka “face group”).  It was meticulous work.  But then, that’s why we were developing the LOD Texturing tool in the first place: to expedite this laborious process.  Ultimately, our efforts allowed Artists to texture any 3D model and all of its LODs based solely on the original model’s UV textures.  It was a profound success and increased my passion for making games and inventing game development technologies, in general.


By the way, is it really work if you love what you do for a living?  For me personally, animating for games is truly a dream come true.  I remember when a Tippett Studios’ VP at Siggraph once said, “These guys will work for nothing and do it all night long.  They love it!  They’re gamers and artists.”  I thought, “Holy sh*t, she knows our secret!”  But, it’s true.  Game developers will work long after their salaries have exhausted a full day’s work.  We are habitual over-achievers with a relentless work ethic.  Like some kind of digital junkie, looking forward to that next first moment of realized innovation in VR immersion.  It’s addictive!  That’s why most of us look the way we do…trying to score that next (top-selling) digital hit.  Thank God mobile game development offers the same euphoric effects in smaller doses.  And, with the recent debates over VR/AR/MR – virtual reality, augmented reality, and mixed reality respectively – the digital chug-wagon continues.

I remember when I was in college, learning Alias|Wavefront software on a Thomson Digital Image machine back in the early 90’s.  No one knew what they were doing.  The teachers that were teaching the 3D Art and Animation curriculum at Columbia College Chicago had no clue what 3D was or even how to teach it.  Every student dove into the manuals and surpassed their instructors before the end of the second week, too impatient to watch some “old dude” struggle to understand the poorly written tutorials.

Anyway, I digress, back to the topic at hand.


Other things that haven’t changed in game development for decades?  How ’bout the division of labor across three main groups – Programmers, Designers, and Artists.  At VWE, I learned about five disparate teams the studio employed in their game development process – Owner/Managers, Programmers, Designers, Artists/Animators, and Testers.  And that right there was the pecking order by status and salary.  How little has changed in the industry as a whole.

Each of these teams worked in silos as focused but independent specialists prior to pre-production and were brought together as one homogenized unit as the pre-production “vertical slice” neared completion.  No, “vertical slice” has nothing to do with bread or ninja skills – Google it.

Over the years, the terminology for “development meetings with prioritized schedules or milestones” mutated into words like Sprint, Scrum, Agile, and Agile/Scrum.  Call it what you like, it has been the same process since the dawn of game development.  In its most basic form, it goes something like this – create a series of meetings based on a prioritized schedule of milestones around the topics of concepts/game ideas, dev, design, art, scope, and schedules.  Then, build and test the plethora of advancing software.  This is usually followed by cycles of wash/rinse/repeat.  Critical to the successful development of this cycle are smart, honest decisions by talented and experienced key team members…and yadda, yadda, yadda – it’s boring stuff, but absolutely necessary.

Another enduring oddity in game development is something called “studio culture”.  Here’s a checklist of things that, in my experience, have existed in every studio I’ve ever worked for:

  • Very smart, technical/analytical problem-solving academics who love games and are “kids at heart”
  • A fascination with technology trends, games, movies & music, art & animation, and science fiction/fantasy
  • Communal eating spaces/kitchens with free drinks – a game developer’s divine right
  • Tattoos, piercings, long hair.  Occasional bad hygiene?  Perhaps.
  • Action figures
  • Nerf guns
  • Darkened work spaces that are quiet, but at times rowdy on a good day (aka a productive day)
  • Flexible 8-hour work schedules
  • Casual clothes – bare feet (aka sandals or flip-flops), bare legs (aka shorts), baseball caps, and enigmatic t-shirts
  • The mention of manga/anime, Weird Al (Yankovic) for some reason, and anything sci-fi…most likely a Star Wars reference

And then, there’s the “proximity task”.  Happens all the time in game development.  It can usually fall to the person who is simply absent at the wrong time during a formal team meeting.  But when it’s an informal discussion, simply sitting at your desk near one can get you saddled with a task that no one wants.  Like today, for example, when I was asked to write this blog.  Happy reading!

By the way, if you’ve made it this far into the article, then bless you for your unwarranted attention.  You are a saint!  Take heed, I’m almost done.

One last thing that is ever present in this industry is the abundance of proprietary processes developed and never shared by the multitude of game developers the world over.  With most new games, and especially with innovative immersive AR/VR experiences on new hardware, a new engine, SDK, and game product are under simultaneous development.  In my experience, the lineage of this simultaneous development started on PC, followed by the original Xbox console, then Xbox 360, Kinect, HoloLens, and Magic Leap.

And now, finally, “Back to Eternus”.  Sounds like a great sci-fi epic, doesn’t it?

Here at Nerd Kingdom, I ran into an old friend of mine not mentioned above, good ol’ Mister Frame Rate.  “How have you been, Old Chum?  It’s been a while.  Wife and kids?  Goooood.”  Ever the divisive arbiter of quality graphics versus render speed, Frame Rate could often be an elusive collaborator.  But last week, he sauntered up to me with a drink, “Here, knock this back.  Oh, I forgot. You don’t drink. (Chug! Slurp.)  Let’s talk, shall we?”

So, after closing time, there we were, old Frame Rate and I, talkin’ ’bout the Good Ol’ Days and the mischief he put me through as a Director of Animation under fire for the largest memory footprint that character animation had ever occupied in VWE’s history.  Now, I can’t say that I remember those days with as rosy a resplendent recall, but I do remember the relief I felt when we were able to solve the issue with a technical art solution, an animation export tool, that we could all agree upon.

Allow me to blather on in detail about this very familiar topic.  In the early days of game development, when you would export a character animation for a game, whether authored in Maya, 3D Studio Max, or some other CG software of choice, the animation asset was exported with a linear keyframe for every frame of motion exhibited by each joint or node in a character’s skeletal hierarchy, regardless of whether its value changed, for the duration of the motion.

Well, a popular export format we are researching creates a similar result – a keyframe on every frame.  And so, it’s not surprising that discussions about frame rates and reducing file sizes have stirred this air of frame rate nostalgia.  Suffice it to say, there is a lot of keyframe data that can be filtered and omitted from animation assets, reducing the size of every animation file – and thereby its memory footprint and load times – which in turn increases frame rate.

The last time I helped solve this puzzle, we decided upon a proprietary export tool that allowed the Technical Animator or Animator to provide an overall attribute value, as well as an attribute value per joint (per axis), to influence the total number of keyframes generated along a curve.  These attribute values produced a range of results, from “a keyframe on every frame” to “a reduced key set filtered by the degree of change (angle deviation) along the curve” to “omitting keyframes completely”.

Said differently, the algorithm inspected the curve and re-created it as a slimmer version of itself (in bits).  Where there were more changes in value, more keyframes were exported or maintained along that portion of the curve.  Where there were fewer changes in value, the placement of keyframes was farther apart.  Whatever solution is devised for Eternus, we are certain to surpass the current state of our technology as of this writing.  And, I can’t wait to revisit that feeling of overwhelming accomplishment when the motion in-game is identical at less than half its original file size.
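For readers curious what such a filter looks like in practice, here is a greatly simplified TypeScript sketch. It drops keys that straight-line interpolation already reproduces; the original tool’s per-joint, per-axis attributes and exact angle-deviation metric are not shown here, so treat every name and threshold as illustrative.

```typescript
interface Keyframe { time: number; value: number; } // one joint, one axis

// Drop a keyframe when linearly interpolating between the previously kept
// key and the following raw key already reproduces it within `tolerance`.
// Keys survive where the curve bends; flat stretches collapse.
function reduceKeyframes(keys: Keyframe[], tolerance: number): Keyframe[] {
  if (keys.length <= 2) return keys.slice();

  const kept: Keyframe[] = [keys[0]];
  for (let i = 1; i < keys.length - 1; i++) {
    const prev = kept[kept.length - 1];
    const next = keys[i + 1];
    const t = (keys[i].time - prev.time) / (next.time - prev.time);
    const interpolated = prev.value + (next.value - prev.value) * t;
    if (Math.abs(keys[i].value - interpolated) > tolerance) kept.push(keys[i]);
  }
  kept.push(keys[keys.length - 1]);
  return kept;
}
```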

Oh, the nostalgia for innovative thinking.  All of it, in pursuit of making great gaming experiences with Eternus that will entertain and occupy the masses.  I guess you can go home again.

All that’s old is new again – for the first time.  May you enjoy playing our product in its many pre-launch versions.  And may the God of Shipped Titles smile upon us as we run head-long into the many game development cycles of deja vu and repeated timelines.  Wash. Rinse. Repeat. Game.

Have a wonderful weekend!


Posted on June 2nd, 2017

Hello everyone and happy Friday!

Today we’re excited to share the progress we have made in the past few weeks. Development of Eternus 2 is making great strides as we are now starting to streamline how integration works for the Art and Gameplay teams. For example, we can now directly bind to an Art-authored UI layout instead of Programmer placeholders. Our radial menu is now implemented and is going through more polishing as we continue to test it. You can check out the building prototype and new radial menu in the video below.

Importantly, it’s been almost 1 year since Eternus 2 development started! We have learned a lot as a team and will continue to grow as we keep moving forward.

Have a great weekend!

-Cambo


Posted on May 19th, 2017

Hi everyone!

Jake (theFlying3.14) here, Lead of Tool Development at Nerd Kingdom. Several powerful systems have recently begun to come online in the Eternus engine. To support these systems we’ve designed several tool prototypes to aid designers in creating content. Today I’d like to share one of the more important systems, one that is being reused across multiple tools to provide a comprehensive, functional experience going forward: the Visual Node Programming platform, or VNP.

VNP is a node programming platform that allows users to script functionality across different aspects of the game. The system is already being used in a few early tool prototypes: the biome tool, the animation web, and an AI behavior scripter. Future tools such as the material editor, shader creator, and quest editor are planned for VNP implementations.

We developed VNP from the MIT-licensed ThreeNodes.js – a WebGL shader tool – heavily reworking the basic data structures and assumptions built into the library. Although there is still a lot we would like to do with it, what we’ve ended up with gives us great scalability.

The Visual Node Programming platform exists as an abstract application that we employ within each tool implementation, customizing it to fit the context. This means that when you open the biome tool, you will be greeted with an experience similar to the animation web. In reality, however, each tool might need to operate slightly differently. For example, the biome system reads the node graph from right to left, whereas the animation system reads “state strings” from left to right. To accommodate this, each implementation of VNP has its own override of several fundamental objects: nodes, connections, and workspaces. This allows great flexibility when developing and updating tools built with VNP.
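To make the override idea concrete, here is a minimal TypeScript sketch of the pattern; the class and method names are invented for illustration and are not the actual VNP API.

```typescript
// The abstract platform defines the fundamental objects; each tool
// subclasses them to change behavior.
abstract class VNPNode {
  constructor(public id: string) {}
  abstract evaluate(): void;
}

class BiomeNode extends VNPNode {
  // The biome tool reads the graph right to left: pull from inputs first.
  evaluate(): void { /* resolve upstream nodes, then compute this one */ }
}

class AnimationNode extends VNPNode {
  // The animation web reads "state strings" left to right instead.
  evaluate(): void { /* emit this node's state string downstream */ }
}
```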

“So great another node programming tool….”

Obviously we are not the first to do this. There are, however, benefits to a node programming system used alongside Eternus that you don’t see in many other places. First, all of our current prototypes, including VNP, are written in JavaScript/TypeScript. This allows for extreme extensibility and accessibility versus platforms written in lower-level languages. Another aspect of node programming we wanted to tackle was large groups of functionality – trying to make large graphs manageable. To do this we completely redesigned how groups worked in the original library, providing the ability to group nodes on the fly and reuse those groups in multiple webs across a project. We hope this significantly cuts down on development time.


Over the past several months we have gotten to experiment with a few different approaches to VNP integration. The first approach we took was to build the node graph, save the data models needed specifically for the node graph (like node.x and node.y, etc.), then grab just the data we needed for the engine resource and send it in one big packet. Of course, this worked until we started building big graphs. Once the save packet got too big to pass between the frontend and the backend, we smartened up.

The animweb tool took a different approach: each time a node is connected to the graph, the system evaluates where it is and dynamically adds it to the resource. This resulted in live coding – being able to edit a resource’s node graph and see it change immediately. It also resulted in a lot of edge cases that are still giving me nightmares. For example, deleting nodes, or removing one connection from a node that’s still connected to another field, becomes really tedious.

Our overall goal for user-facing tools is to create simple interfaces that developers at any skill level will be able to leverage. VNP provides a familiar interface for designers, as similar platforms are used in engines like Unreal and Unity. While programming with nodes can be easier than scripting, this is not our final destination. We decided to tackle VNP first to provide a clear functional foundation of what designers need. Since nodal programming lends itself to so many situations, we can provide a consistent-feeling experience across the game development workflow. Later, we can develop more specialized tools to streamline common practices and make things easier for less experienced devs.

I hope you enjoyed this look at our Visual Node Programming platform, and I’m excited to get our tool suite ready for feedback from our awesome community.


Posted on May 5th, 2017

Happy Friday everyone!

I apologize for the delayed update as I was out with the flu last week. It is good to feel alive again! Getting back on topic, our engine team is cranking away on the Eternus engine and tools. Our gameplay team is improving our building methods, islands, and more. The art team has been hard at work creating new assets for TUG. You can see some images and video examples below.

Meanwhile, @x_nekochu_x is hiding away in the sound booth working on sound production stuff. I have been nagging him a lot to let me help out with sound effects. There’s just something extremely fun about making sound effects. For example, breaking things… many things.

We also updated our website and relocated our blog to the main site. You may find a few older posts with broken links and images but there is not much we can do to fix them.

The community questions have been finalized and we hope to have more details in the future. If you missed out, don’t worry! We can do another round in the near future. We apologize for some of the TBD answers, as those are still under discussion.

Marketing/Business

  1. With regards to the game’s business model:
    1. F2P was mentioned, although you last stated you were leaning away from this idea. Are we any closer to a definitive business model, and where are the “grey areas” currently?
      1. TBD
    2. What is TUG likely to cost? (If F2P, how will you be making the money you need?)
      1. TBD
    3. You mentioned the core engine was to be made available for developers to make their own games from, can you expand on that?
      1. Our goal initially is to provide access to the tools, documentation, tutorials and examples for devs to tie into our engine and develop their own mods and games.
    4. What is the plan for the kickstarter rewards? (Pets, mounts, wisps, etc…)
      1. TBD
  2. How do you plan to deal with the negative reviews and discussions on steam? Have you thought about removing the old steam page?
    1. We have had discussions about that, but at this point we are waiting to move the product further along before we make any adjustments to community discussions
  3. If you remain on Steam, but remove the original game listing, will you be handing out keys to all original players?
    1. TBD
  4. What are your current plans to draw in and maintain a large user base?
    1. We hope the game we are developing will be fun and engaging for players, and just as important is the platform we are creating for modders and developers to be a part of. We believe the combination of a great game and great tools will draw in and maintain users in large numbers.
  5. Do you have a community manager / QA member at NK? Do you have plans to get one on the timeline?
    1. As we are beginning to make efforts to refocus on our community, we have moved resources to fill the roles for managing it. They will be providing updates twice a week and participating in the forums as well.
    2. In regards to PR, stop saying “early next year” or “soon,” etc. Either give a timeframe or don’t talk about it yet/in that way.
      1. Noted
  6. How many people currently work at NK?
    1. 40+
    2. How many are from the original kickstart team?
      1. Our original team size was about 10 developers. From then till now we have grown a lot and 8 of the original guys are still here on the team, or actively collaborating from other projects they have joined.
    3. Do you have multiple development teams? What are they working on?
      1. Our full focus is on our single product and the development of the technologies to support it. The teams working towards this effort encompass Infrastructure, Tools, Engine, Gameplay, Data Sciences, and Art.

Data Science

  1. What kind of data is going to be gathered?
    1. Our focus on data is about how players play. We are looking at when and where players go in the world, what things they gather, etc.
    2. What will the data be used for?
      1. Gathering this data enables us to improve the experience for players based on a more scientific approach
    3. Will the data be anonymous?
      1. Yes
    4. Who will have access to view this data? (Eg.. all, devs, or sold)
      1. Nerd Kingdom and developers will have access, as well as modders, and various academic institutions that are open to publishing findings publicly, but never without consent of players.
  2. Regarding inter-player activities and conflicts:
    1. What’s the plan for managing Griefing and the Kill-on-Sight attitude many have in survival games?
      1. TBD
    2. What’s the plan for managing harassment and other illegal activity?
      1. TBD
    3. If you’re running servers open to minors, will you have any special rules dealing with age differences among players?
      1. TBD
  3. What kind of in-game communication options/resources will players have?
    1. In game chat

Cambo

