Building a third-person character controller involves more than simply moving an object around a 3D scene. Realistic movement, grounded physics, responsive jumping, and animation blending are essential for a polished feel. This article explores how these elements can be assembled, not through traditional hand coding, but via AI-assisted development using Bolt.new, a browser-based AI development tool that generates web code from natural language prompts, backed by the Claude 3.7 Sonnet and Claude 3.5 Sonnet LLMs. It provides a lightweight environment where developers can focus on describing functionality rather than writing boilerplate.
For this character controller, Bolt handled tasks like setting up physics, integrating animations, and managing input systems, making it easier to test ideas and iterate quickly without switching between tools or writing everything from scratch.
If you're curious to learn more, check out this article on Codrops, which also explores the platform's capabilities and showcases another real-world project built entirely with AI.
The final project is powered by React Three Fiber, Three.js, and Rapier, and showcases how a designer or developer can create complex, interactive 3D experiences by guiding AI, focusing on behavior and structure rather than syntax.
Step 1: Setting Up Physics with a Capsule and Ground
The character controller begins with a simple setup: a capsule collider for the player and a ground plane to interact with. Rapier, a fast and lightweight physics engine built in WebAssembly, handles gravity, rigid body dynamics, and collisions. This forms the foundation for player movement and world interaction.
The capsule shape was chosen for its stability when sliding across surfaces and climbing over small obstacles, a common pattern in real-time games.
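Below is a minimal sketch of what this setup can look like with React Three Fiber and @react-three/rapier; the collider sizes and positions are illustrative assumptions rather than the project's exact values.

```jsx
import { Canvas } from '@react-three/fiber'
import { Physics, RigidBody, CapsuleCollider, CuboidCollider } from '@react-three/rapier'

export function PhysicsScene() {
  return (
    <Canvas>
      <Physics gravity={[0, -9.81, 0]}>
        {/* Player: a dynamic capsule with rotations locked so it stays upright */}
        <RigidBody colliders={false} lockRotations position={[0, 2, 0]}>
          <CapsuleCollider args={[0.5, 0.35]} /> {/* [halfHeight, radius], assumed sizes */}
        </RigidBody>

        {/* Ground: a fixed (static) box acting as the floor */}
        <RigidBody type="fixed">
          <CuboidCollider args={[20, 0.1, 20]} /> {/* half-extents */}
        </RigidBody>
      </Physics>
    </Canvas>
  )
}
```

Locking rotations keeps the dynamic capsule from tipping over while still letting physics drive its position, which is the usual trade-off for character controllers.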
Step 2: Real-Time Tuning with a GUI
To enable rapid iteration and balance the gameplay feel, a visual GUI was introduced (using Leva.js). This panel exposes parameters such as:
- Player movement speed
- Jump force
- Gravity scale
- Follow camera offset
- Debug toggles
By integrating this directly into the experience, developers can tune the controller live without needing to edit or recompile code, speeding up testing and design decisions.
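With Leva, exposing these values takes a single hook call. The sketch below shows how such a panel might be wired up; the parameter names and ranges are assumptions:

```jsx
import { useControls } from 'leva'

function usePlayerSettings() {
  // Each entry becomes a live slider, vector, or toggle in the Leva panel
  return useControls('Player', {
    moveSpeed: { value: 4, min: 0, max: 12 },
    jumpForce: { value: 6, min: 0, max: 20 },
    gravityScale: { value: 1, min: 0, max: 3 },
    cameraOffset: { value: [0, 3, -6] },
    debug: false
  })
}
```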
Step 3: Ground Detection with Raycasting
A raycast is used to detect whether the player is grounded. This simple yet effective check prevents the character from jumping mid-air or triggering multiple jumps in sequence.
The logic runs every frame, casting a ray downward from the bottom of the capsule collider. When contact is confirmed, the jump input is enabled. This technique also allows smooth transitions between grounded and falling states in the animation system.
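One way to express this per-frame check with @react-three/rapier's useRapier hook is sketched below; the ray offset and maximum distance are assumed values tied to the capsule's dimensions:

```jsx
import { useRef } from 'react'
import { useFrame } from '@react-three/fiber'
import { useRapier } from '@react-three/rapier'

// Hypothetical hook: returns a ref that tracks whether the capsule is grounded
function useGrounded(bodyRef) {
  const { world, rapier } = useRapier()
  const grounded = useRef(false)

  useFrame(() => {
    const body = bodyRef.current
    if (!body) return
    const pos = body.translation()
    // Cast a short ray straight down from just below the capsule's center
    const ray = new rapier.Ray(
      { x: pos.x, y: pos.y - 0.8, z: pos.z }, // 0.8 ≈ capsule bottom, assumed offset
      { x: 0, y: -1, z: 0 }
    )
    const hit = world.castRay(ray, 0.25, true) // small max distance, solid hits only
    grounded.current = hit !== null
  })

  return grounded
}
```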
Step 4: Integrating a Rigged Character with Animation States
The visual character uses a rigged GLB model from Mixamo, with three key animations: Idle, Run, and Fall. These are integrated as follows:
- The GLB character is attached as a child of the capsule collider
- The animation state switches dynamically based on velocity and grounded status
- Transitions are handled via animation blending for a natural feel
This setup keeps the visuals in sync with the physics while preserving modular control over the physical capsule.
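A rough sketch of this state-driven blending, using drei's useGLTF and useAnimations, might look like the following; the model path and clip names are assumptions:

```jsx
import { useEffect } from 'react'
import { useGLTF, useAnimations } from '@react-three/drei'

// `state` is 'Idle' | 'Run' | 'Fall', derived elsewhere from velocity
// and the grounded check; '/character.glb' is an illustrative path
function Character({ state }) {
  const { scene, animations } = useGLTF('/character.glb')
  const { actions } = useAnimations(animations, scene)

  useEffect(() => {
    const action = actions[state]
    if (!action) return
    action.reset().fadeIn(0.2).play() // blend the new clip in
    return () => void action.fadeOut(0.2) // blend it out when the state changes
  }, [state, actions])

  return <primitive object={scene} />
}
```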
Step 5: World Building and Asset Integration
The environment was organized in Blender, then exported as a single .glb file and imported into the Bolt.new project scene. This approach allows for efficient scene composition while keeping asset management simple.
For the web, using .glb keeps geometry and textures bundled together. To maintain performance, it's recommended to keep textures at 1024×1024 resolution or other square power-of-two sizes (e.g. 256, 512, 2048). This ensures optimal GPU memory usage and faster load times across devices.
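One plausible way to drop such an exported environment into the scene and give it collision is a fixed rigid body with auto-generated trimesh colliders, as sketched below (the file name is illustrative):

```jsx
import { useGLTF } from '@react-three/drei'
import { RigidBody } from '@react-three/rapier'

// Trimesh colliders are generated automatically from the imported geometry
function Environment() {
  const { scene } = useGLTF('/environment.glb')
  return (
    <RigidBody type="fixed" colliders="trimesh">
      <primitive object={scene} />
    </RigidBody>
  )
}
```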
Special thanks to KayLousberg for the low-poly 3D kit used for prototyping.
Step 6: Cross-Platform Input Support
The controller was designed to work seamlessly across desktop, mobile, and gamepad platforms, all built using AI-generated logic through Bolt.
Gamepad support was added using the Gamepad API, allowing players to plug in a controller and play with analog input.
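Since the Gamepad API is polled rather than event-driven, a per-frame read like the following is typical; the axis and button indices assume the standard controller mapping:

```js
// Poll the first connected gamepad each frame (standard mapping assumed)
function readGamepad() {
  const pad = navigator.getGamepads()[0]
  if (!pad) return null
  return {
    moveX: pad.axes[0], // left stick, horizontal
    moveY: pad.axes[1], // left stick, vertical
    jump: pad.buttons[0].pressed // bottom face button ("A" on standard mapping)
  }
}
```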
On desktop, the controller uses standard keyboard input (WASD or arrow keys) and mouse movement for camera control.
On mobile, AI-generated code enabled an on-screen joystick and jump button, making the game fully touch-compatible.
All input types control the same physics-driven character, ensuring consistent behavior across devices, whether you're playing on a laptop, a touchscreen, or a game controller.
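Conceptually, every device reduces to the same normalized movement vector and jump flag, which then drive the rigid body. The glue below is a hypothetical sketch under that assumption, not the project's exact implementation:

```jsx
import { useFrame } from '@react-three/fiber'

// `input` is a normalized { x, z, jump } produced by whichever device is
// active; `grounded` is the ref returned by the raycast hook above
function usePlayerMovement(bodyRef, input, grounded, moveSpeed = 4, jumpForce = 6) {
  useFrame(() => {
    const body = bodyRef.current
    if (!body) return
    const vel = body.linvel()
    // Overwrite horizontal velocity from input, preserve vertical (gravity/jumps)
    body.setLinvel({ x: input.x * moveSpeed, y: vel.y, z: input.z * moveSpeed }, true)
    if (input.jump && grounded.current) {
      body.applyImpulse({ x: 0, y: jumpForce, z: 0 }, true)
    }
  })
}
```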
This cross-platform support was implemented entirely through natural language prompts, showcasing how AI can translate high-level intent into working input systems.
The Role of AI in the Workflow
What makes this controller unique isn't the mechanics; it's the process. Every system was generated by AI through descriptive prompts, allowing the developer to work more like a creative director than a traditional engineer.
AI handled the boilerplate, the physics setup, and the animation switching logic, all based on clear creative goals. This opens new doors for prototyping and interactive design, where iteration speed matters more than syntax.
This character controller demo includes:
- Capsule collider with physics
- Grounded detection via raycast
- State-driven animation blending
- GUI controls for tuning
- Environment interaction with static and dynamic objects
- Cross-platform input support
It's a strong starting point for creating browser-based games, interactive experiences, or prototyping new ideas, all with the help of AI.
Check out the full game built using this setup as a base: 🎮 Demo Game
Thanks for following along, and have fun building 😊