Greetings! 5ubtlety here, systems designer at Nerd Kingdom and the programmer behind the player character controller. I’m here to discuss our approach to designing and implementing this critical component that tethers the player to the world. There are simply too many low-level implementation details to fully expound upon in this post. Instead, we’re going to get a high-level overview of the considerations to take into account and challenges that can be faced when developing a character controller.
So what is a character controller? Simply put, it is the component responsible for moving the player avatar through the game world. This entails translating player input into movement mechanics and processing any game-specific movement dynamics. A character controller also needs to detect and react to its environment, check whether the character is on the ground or airborne, manage character state, and integrate with other game systems, including physics, animation, and the camera.
There are two main types of character controllers: dynamic (entirely physics-driven) and kinematic (collisions detected and resolved manually, without physical reactions). Character controller implementation is highly game-specific, but most games opt for kinematic controllers. Very few games aim for complete physical realism; most have stylized physical reactions tailor-made to feel great within the context of their gameplay environments.
The character controller is modeled as a capsule-shaped rigid body. The rounded capsule shape helps the character controller slide off surfaces as it moves through the environment. It's affected by gravity and constrained against terrain and other colliders by the physics engine. The orientation of the capsule is locked to an upright position, but may be manually adjusted in special cases, such as handling acceleration tilt, which pivots the character around its center of mass based on acceleration. Unless handled by the physics engine, movement will need to be projected onto the ground plane so the character can properly move up and down terrain.
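As an illustration, projecting movement onto the ground plane can be sketched like this (a minimal standalone example; the vector helpers and function names are my own, not any engine's API):

```python
import math

def project_on_plane(v, n):
    """Remove from v its component along the unit normal n."""
    dot = sum(a * b for a, b in zip(v, n))
    return tuple(a - dot * b for a, b in zip(v, n))

def normalize(v):
    length = math.sqrt(sum(a * a for a in v))
    return tuple(a / length for a in v) if length > 0 else v

def ground_aligned_velocity(move_dir, speed, ground_normal):
    """Re-aim a movement direction along the ground plane, preserving
    the intended speed so slopes neither slow nor launch the character."""
    along_ground = normalize(project_on_plane(move_dir, ground_normal))
    return tuple(a * speed for a in along_ground)
```

On flat ground this is a no-op; on an incline, the velocity tilts to follow the surface instead of plowing into it or flying off it.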
Raycasts (and other geometric casts) are your main tool for sensing the environment immediately around the character controller so you may react properly. These casts give information such as direction and distance to the nearby colliders as well as their surface normals.
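As a toy example of the kind of data a cast returns, here is a ray intersected against an infinite plane. Real engines cast against arbitrary collider geometry; this is just the math in miniature, with names of my own choosing:

```python
def raycast_plane(origin, direction, plane_point, plane_normal):
    """Intersect a ray with an infinite plane.
    Returns (hit_distance, surface_normal), or None on a miss."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-8:
        return None  # ray runs parallel to the plane
    t = sum((p - o) * n for p, o, n in
            zip(plane_point, origin, plane_normal)) / denom
    if t < 0.0:
        return None  # plane is behind the ray origin
    return t, plane_normal
```

A grounded check, for instance, casts downward from the capsule base and compares the hit distance against a small threshold.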
In open-world games, movement is typically the principal mechanic that consumes the player's time. Therefore, movement needs to feel great as the player navigates the world. Minimally, it needs to be functional, responsive, and intuitive. Depending on the game, you may have secondary goals such as allowing for greater player expression, or aiming for a high degree of player control and precision, as in a fast-paced platformer. Often, trade-offs will need to be made, so there is no universal solution to these matters. For example, consider the following graphic, in which the left figure moves at a constant velocity while the right figure applies acceleration. The left allows for a higher level of precision in movement, while the right is more realistic and may look and feel better in certain game contexts.
Image Credit: Marius Holstad, Source
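To make the distinction concrete, here's a one-axis sketch (illustrative only): the "flat" approach snaps velocity straight to the target, while the accelerated approach moves toward it by a bounded step each frame.

```python
def flat_velocity(current, target):
    """'Flat' movement: velocity snaps to the target instantly."""
    return target

def accelerated_velocity(current, target, accel, dt):
    """Accelerated movement: velocity moves toward the target by at
    most accel * dt per frame, giving a sense of weight and momentum."""
    diff = target - current
    step = accel * dt
    if abs(diff) <= step:
        return target
    return current + step if diff > 0 else current - step
```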
Every game is going to have specific movement affordances, such as the ability to sprint, swim, double jump, wall-run, grab ledges, climb ladders, swing from ropes, etc. Every new verb added to this list can vastly expand the player’s mobility. Defining these is just the beginning though. There is much nuance in how they are used, how they feel, and how they interact with other game elements.
Even if all your character can do is simply move side to side and jump, you’re quickly going to run into “The Door Problem” of Game Design. Here are a few of the questions you might start asking:
This is just the beginning. As development progresses, new questions and issues will arise as environmental variables impose new constraints on the initial design. You should develop your controller gradually, making steady incremental improvements. In our case, we developed a playground scene where we can test our iterative approach in a consistent, controlled environment. Spoiler Alert: Most of your development time is going to be addressing engine and game-specific edge cases!
Following are some features we explored while prototyping our character controller. Note that not all of these elements will be relevant to every game.
Camera-Relative Movement

In most 3rd-person perspective games, movement is relative to the camera rather than the avatar, which is more intuitive for the player to process. Some games intentionally break this convention to impart a feeling of vulnerability.
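A minimal sketch of the conversion, assuming a y-up world and a camera described only by its yaw angle (names are illustrative):

```python
import math

def camera_relative_direction(input_x, input_y, camera_yaw):
    """Rotate raw 2D stick input by the camera's yaw so that pushing
    'up' always moves the avatar away from the camera."""
    sin_y, cos_y = math.sin(camera_yaw), math.cos(camera_yaw)
    move_x = input_x * cos_y + input_y * sin_y
    move_z = -input_x * sin_y + input_y * cos_y
    return move_x, move_z
```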
Automatic Rotation

When moving, the pawn automatically pivots over time (with some smoothing) to align with the movement direction.
Image Credit: Marius Holstad, Source
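That pivot-over-time behavior boils down to rotating the current yaw toward the movement direction's yaw by a capped amount each frame, always taking the shorter way around the circle. A rough sketch (constants and names are mine):

```python
import math

def turn_toward(current_yaw, target_yaw, turn_speed, dt):
    """Rotate current_yaw toward target_yaw by at most turn_speed * dt
    radians, taking the shorter way around the circle."""
    diff = (target_yaw - current_yaw + math.pi) % (2.0 * math.pi) - math.pi
    step = turn_speed * dt
    if abs(diff) <= step:
        return target_yaw
    return current_yaw + math.copysign(step, diff)
```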
Jump Input Buffering and Latency Forgiveness
This helps with jump timing in the case that the player presses the jump button a few frames before actually reaching the ground. Additionally, this permits the player to execute a jump even if they pressed the button immediately after walking off a ledge and entering the airborne state, a grace window commonly known as "coyote time". This pattern can be applied to other kinds of character input as well.
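One way to structure both grace windows, sketched with purely illustrative (untuned) constants:

```python
class JumpTimers:
    """Tracks two grace windows: a jump press buffered shortly before
    landing, and a post-ledge 'coyote time' window after leaving the
    ground. All window lengths are illustrative, not tuned values."""

    def __init__(self, buffer_time=0.15, coyote_time=0.1):
        self.buffer_time = buffer_time
        self.coyote_time = coyote_time
        self.buffer_left = 0.0
        self.coyote_left = 0.0

    def on_jump_pressed(self):
        # remember the press for a short while
        self.buffer_left = self.buffer_time

    def update(self, dt, grounded):
        self.buffer_left = max(0.0, self.buffer_left - dt)
        if grounded:
            self.coyote_left = self.coyote_time
        else:
            self.coyote_left = max(0.0, self.coyote_left - dt)

    def try_consume_jump(self):
        """True when a buffered press overlaps ground contact (or the
        coyote grace window); both windows are then spent."""
        if self.buffer_left > 0.0 and self.coyote_left > 0.0:
            self.buffer_left = 0.0
            self.coyote_left = 0.0
            return True
        return False
```

Each frame you'd call `update()` with the current grounded state, and fire a jump whenever `try_consume_jump()` returns True.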
Air Control

This allows the player to adjust their airborne velocity, but with reduced effect.
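The "reduced effect" typically comes down to a single scale factor on airborne acceleration. A minimal sketch, where the air-control factor is a tunable in [0, 1]:

```python
def apply_air_control(vel_x, vel_z, wish_x, wish_z, accel, air_control, dt):
    """Accelerate horizontal velocity toward the wished direction,
    scaled by an air-control factor (1.0 would mean full ground
    authority; lower values weaken steering while airborne)."""
    vel_x += wish_x * accel * air_control * dt
    vel_z += wish_z * accel * air_control * dt
    return vel_x, vel_z
```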
Step Smoothing

This helps elevate the character up detected steps by smoothing its vertical movement along a curved spline over a short period of time.
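For example, the capsule's height can be interpolated along an ease curve rather than teleported in a single frame. The text above mentions a curved spline; this sketch substitutes a simple smoothstep curve for brevity:

```python
def smoothstep(t):
    """Classic ease-in/ease-out curve on [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def stepped_height(start_y, target_y, elapsed, duration):
    """Interpolate the capsule's height up a detected step over
    `duration` seconds instead of snapping it in one frame."""
    t = smoothstep(elapsed / duration)
    return start_y + (target_y - start_y) * t
```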
Ground Normal Smoothing
This eliminates anomalies in the ground normal calculation by performing multiple raycasts at various sample points around the base of the player's capsule and averaging the results to produce the final ground normal. The resultant vector is then smoothed between consecutive frames.
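In sketch form, the averaging and frame-to-frame smoothing might look like this (helper names are mine, and the blend factor is a tunable):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def average_normal(sample_normals):
    """Average several per-raycast surface normals into one ground normal."""
    summed = [sum(axis) for axis in zip(*sample_normals)]
    return normalize(summed)

def smooth_normal(previous, current, blend):
    """Blend last frame's normal toward this frame's by `blend` in [0, 1],
    then re-normalize to keep it a unit vector."""
    mixed = tuple(p + (c - p) * blend for p, c in zip(previous, current))
    return normalize(mixed)
```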
Slope Fatigue System
Any slope above a certain threshold incline will induce “slope fatigue” in the player over a short period of time. The more fatigued the player is, the more slowly he will ascend the surface in the upward direction of the incline. After a certain amount of fatigue has accumulated, based on slope steepness, the player will begin sliding down the slope. Slope fatigue will recover once the player is on a more level surface.
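A rough sketch of such a system; every constant here is illustrative rather than a tuned value, and the exact fatigue/recovery curves are a matter of per-game tuning:

```python
class SlopeFatigue:
    """Accumulates fatigue on steep slopes; past a threshold the
    character starts sliding. All constants are illustrative."""

    def __init__(self, steep_angle=40.0, slide_fatigue=1.0,
                 gain_rate=0.5, recover_rate=1.0):
        self.steep_angle = steep_angle      # degrees; above this, fatiguing
        self.slide_fatigue = slide_fatigue  # fatigue level that triggers sliding
        self.gain_rate = gain_rate
        self.recover_rate = recover_rate
        self.fatigue = 0.0

    def update(self, slope_angle, dt):
        if slope_angle > self.steep_angle:
            # steeper slopes accumulate fatigue faster
            scale = slope_angle / self.steep_angle
            self.fatigue = min(self.slide_fatigue,
                               self.fatigue + self.gain_rate * scale * dt)
        else:
            # level enough ground lets fatigue recover
            self.fatigue = max(0.0, self.fatigue - self.recover_rate * dt)

    def speed_multiplier(self):
        """Uphill speed shrinks as fatigue grows."""
        return 1.0 - 0.75 * (self.fatigue / self.slide_fatigue)

    def is_sliding(self):
        return self.fatigue >= self.slide_fatigue
```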
Wall Avoidance

Automatic wall avoidance allows for smoother steering behavior when walking around walls and corners. The character controller raycasts ahead in the direction of movement to detect walls and other obstructions that would block movement. If detected, and the angle of incidence is shallow, the player is steered away from the surface. On the left side of the following image, the player sticks to the wall as he brushes against it. On the right side of the image, the player gently slides off the surface as his steering is adjusted.
Credit: Marius Holstad, Source
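The steering adjustment can be sketched as: find the into-wall component of the movement direction, and if the contact is shallow enough, redirect the movement along the surface. The angle threshold and names below are illustrative:

```python
import math

def steer_around_wall(move_dir, wall_normal, max_assist_angle=30.0):
    """If movement hits a wall at a shallow (grazing) angle, redirect
    it along the wall instead of into it. Vectors are unit length."""
    dot = sum(a * b for a, b in zip(move_dir, wall_normal))
    if dot >= 0.0:
        return move_dir  # moving away from the wall; nothing to do
    # angle between the motion and the wall surface itself
    grazing = math.degrees(math.asin(min(1.0, -dot)))
    if grazing > max_assist_angle:
        return move_dir  # head-on enough; let the collision stop us
    # remove the into-wall component, keeping motion along the surface
    slid = tuple(a - dot * b for a, b in zip(move_dir, wall_normal))
    length = math.sqrt(sum(c * c for c in slid))
    return tuple(c / length for c in slid) if length > 0 else move_dir
```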
Analogue Input Processing
Analogue movement input from a thumbstick offers a very different way of controlling direction and speed than the keyboard's 8-way digital input. To sanitize this raw axis data and map it to movement inputs the controller can read, we filter it through dead zones and interpolate the results.
Image Credit: Ryan Juckett, Source
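A common building block here is the radial dead zone: stick magnitudes under an inner threshold are ignored (hardware drift), magnitudes over an outer threshold count as full tilt, and everything in between is rescaled smoothly so there's no dead spot or sudden jump. The thresholds below are illustrative:

```python
import math

def radial_deadzone(x, y, inner=0.2, outer=0.95):
    """Remap raw stick axes: zero out drift below `inner`, saturate
    above `outer`, and rescale the range in between while preserving
    the stick's direction."""
    mag = math.hypot(x, y)
    if mag < inner:
        return 0.0, 0.0
    scaled = min(1.0, (mag - inner) / (outer - inner))
    return x / mag * scaled, y / mag * scaled
```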
Hopefully this post provided some insight into the design and implementation of character controllers and some of the considerations to take into account when developing one. The bottom line is that there is no one right solution that works in all situations. Every game’s needs are very different and developing a solid character controller is going to largely be an iterative process of discovery and polish. The final 10% is what separates a clumsy, buggy controller from a responsive one that works well and immerses the player. This is one of the game’s most critical components that the player continually interfaces with during gameplay. It can easily make or break an entire game, so take the proper time and make it feel great!