Inside GoalBot: Neural Networks vs. Heuristics

8 min read · AI / Game Design · 2026

GoalBot isn't a penalty kick simulator—it's a machine learning sandbox. You're a penalty taker facing an AI goalkeeper that adapts to your style. The twist: the AI's sophistication depends entirely on difficulty. On Legend, a TensorFlow.js neural network predicts your next shot. On Easy and Medium, it uses simple heuristics. Hard splits the difference. Every save is calculated; every miss is deliberate. TensorFlow.js is an open-source ML library by Google, used under the Apache 2.0 license.

How GoalBot Actually Works

GoalBot is a penalty kick shooter played in rounds (8–15, depending on difficulty). You swipe to shoot. The goalkeeper reacts. The AI doesn't learn your personality—it doesn't need to. Instead, it runs real-time pattern recognition on your last 10 shots to predict direction, injects strategic errors, and decides when to miss on purpose to hold the difficulty target (a 55% player success rate) steady.

  • Swipe-Based Shooting: Diagonal swipes bend the ball. Horizontal swipes shoot straight. The power bar charges, and the surge system unlocks at higher power levels (BOOST → HYPER → ULTRA → LEGEND), increasing shot complexity.
  • Adaptive Difficulty Scaling: Rolling 10-shot success window. Target = 55% player success. If you're winning too much, the keeper tightens (up to a +0.5 difficulty modifier). If you're losing, the AI relaxes (down to −0.5), with a 0.15 smoothing rate so shifts stay gradual.
  • Legend Difficulty Uses Real ML: Only on Legend does a 12-input neural network activate. Easy/Medium use biased random. Hard uses heuristics. This isn't lazy design—it's intentional. Easier modes should feel fast and fair, not engineered.
  • Player Skill Profiling: The system tracks your accuracy, preferred direction (L/C/R), power tendency, and composite entropy score. This data is logged for analytics only—not fed back into the AI decisions (by design).
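The skill-profiling data described above can be sketched as a small tracker. This is an illustrative reconstruction, not GoalBot's actual code: the function name, the shot record shape, and the use of Shannon entropy as the "entropy score" are all assumptions.

```javascript
// Hypothetical sketch of a player skill profile tracker. The shot record
// shape and Shannon-entropy choice are assumptions for illustration.
function buildProfile(shots) {
  // shots: array of { direction: 'L'|'C'|'R', scored: boolean, power: number }
  // Assumes at least one shot has been taken.
  const counts = { L: 0, C: 0, R: 0 };
  let goals = 0;
  let totalPower = 0;
  for (const s of shots) {
    counts[s.direction] += 1;
    if (s.scored) goals += 1;
    totalPower += s.power;
  }
  const n = shots.length;

  // Shannon entropy (bits) over the L/C/R distribution:
  // 0 = fully predictable shooter, log2(3) ≈ 1.585 = perfectly mixed.
  let entropy = 0;
  for (const dir of ['L', 'C', 'R']) {
    const p = counts[dir] / n;
    if (p > 0) entropy -= p * Math.log2(p);
  }

  const preferred = ['L', 'C', 'R'].reduce((a, b) =>
    counts[a] >= counts[b] ? a : b);

  return {
    accuracy: goals / n,
    preferredDirection: preferred,
    powerTendency: totalPower / n,
    entropy,
  };
}
```

A high entropy score means the player mixes directions well and is harder to predict; a low score means the preferred-direction stat alone is a strong predictor.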

The Three Components of Goalkeeper AI

GoalBot's goalkeeper system has three layers: a machine learning model, adaptive difficulty scaling, and a multi-step dive algorithm that ties it all together.

1. GoalkeeperML: The TensorFlow.js Neural Network

Architecture: 12 inputs → 32 hidden units (ReLU) → 16 hidden units (ReLU) → 3 outputs (softmax). The network only activates on Legend difficulty. On Easy and Medium, it's completely bypassed in favor of simpler random and heuristic approaches.

The 12 Input Features:

The network retrains every 3 shots when at least 5 training samples exist, running 8 epochs. When pattern detection fails (no clear alternating patterns), it falls back to statistical analysis. This hybrid approach means Legend mode has genuine adaptive intelligence without overfitting.

2. Adaptive Difficulty Scaling

Every match uses a rolling 10-shot success window. The target is a 55% player success rate. If your success rate exceeds this, the keeper adjusts difficulty upward (toward a +0.5 modifier); if you're below target, it eases (toward −0.5). The smoothing rate is 0.15, so difficulty shifts gradually rather than jerking between extremes.
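The loop above can be sketched in a few lines. The constants come from the article (10-shot window, 55% target, ±0.5 modifier cap, 0.15 smoothing rate); the function and parameter names are illustrative, not GoalBot's actual API.

```javascript
// Minimal sketch of the adaptive difficulty update, assuming the
// constants described in the article. Names are illustrative.
const WINDOW = 10;
const TARGET_SUCCESS = 0.55;
const MAX_MODIFIER = 0.5;
const SMOOTHING = 0.15;

function updateDifficulty(currentModifier, recentShots) {
  // recentShots: booleans (true = player scored), newest last.
  const window = recentShots.slice(-WINDOW);
  const successRate = window.filter(Boolean).length / window.length;

  // Above target → push the modifier toward +0.5; below → toward −0.5.
  const desired = successRate > TARGET_SUCCESS ? MAX_MODIFIER : -MAX_MODIFIER;

  // Move only a fraction of the way each update, so difficulty shifts
  // gradually instead of jumping between extremes.
  const next = currentModifier + SMOOTHING * (desired - currentModifier);
  return Math.max(-MAX_MODIFIER, Math.min(MAX_MODIFIER, next));
}
```

With an 80% success window and a neutral modifier, one update moves the modifier only 0.075 of the way toward the cap, which is what keeps difficulty changes feeling gradual.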

Target save rates by difficulty:

This ensures that regardless of difficulty, the match stays balanced around 55% expected player success. The AI isn't trying to beat you—it's trying to present a calibrated challenge.

3. The Multi-Step Dive Algorithm (_computeKeeperDive)

This is where prediction becomes action. The dive algorithm runs seven sequential steps:

  • Step 1: ML Prediction + Confidence. The neural network (or heuristics on lower difficulty) outputs a predicted shot direction. Confidence is scaled by difficulty: Easy gets 0.3x, Legend gets full 1.0x confidence weight. This means the keeper on Easy guesses; on Legend, it commits.
  • Step 2: Prediction Error Injection. Random chance to output a wrong direction. This prevents the keeper from being too perfect and keeps matches feeling organic.
  • Step 3: Direction → Position Mapping. Convert the predicted direction into a dive position with accuracy bias (some directions are easier to defend than others).
  • Step 4: Overcommit Chance. 5–22% chance the keeper dives too far in one direction, leaving the other side exposed. Range depends on difficulty.
  • Step 5: Position Jitter. Add ±15px random noise to the dive position so the keeper doesn't land in pixel-perfect spots every time.
  • Step 6: Intentional Miss Chance. 2–28% chance the keeper simply doesn't make the save, even though they could. Range depends on difficulty. This preserves the 55% target success rate without feeling rigged.
  • Step 7: Confidence-Driven Behavior. High confidence = fast dive, early commitment. Low confidence = conservative positioning, late reaction.

Technical Deep Dive: How the Network Works

The GoalkeeperML neural network is the heart of Legend difficulty. Here's how it works:

Network Architecture

A straightforward feedforward network optimized for mobile:

GoalkeeperML Architecture
Input Layer (12 features)
    ↓
Dense(32, activation='relu')    // Feature extraction
    ↓
Dense(16, activation='relu')    // Pattern compression
    ↓
Dense(3, activation='softmax')  // Predict: LEFT, CENTER, RIGHT
    ↓
Output: [P(left), P(center), P(right)]
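To make the shapes concrete, here is a dependency-free forward pass through the same 12 → 32 → 16 → 3 topology. The game itself uses TensorFlow.js; this toy version with random weights exists only to illustrate how a 12-feature input becomes a three-way probability distribution.

```javascript
// Toy forward pass matching the 12 → 32 → 16 → 3 architecture.
// Random, untrained weights — for shape illustration only.
const relu = (v) => v.map((x) => Math.max(0, x));

const softmax = (v) => {
  const m = Math.max(...v);
  const exps = v.map((x) => Math.exp(x - m)); // subtract max for stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
};

// Dense layer: output[j] = bias[j] + Σ_i input[i] * weights[i][j]
const dense = (input, weights, biases) =>
  biases.map((b, j) =>
    input.reduce((acc, x, i) => acc + x * weights[i][j], b));

function randomLayer(inDim, outDim) {
  const weights = Array.from({ length: inDim }, () =>
    Array.from({ length: outDim }, () => Math.random() - 0.5));
  const biases = Array.from({ length: outDim }, () => 0);
  return { weights, biases };
}

const l1 = randomLayer(12, 32);
const l2 = randomLayer(32, 16);
const l3 = randomLayer(16, 3);

function predict(features12) {
  const h1 = relu(dense(features12, l1.weights, l1.biases));
  const h2 = relu(dense(h1, l2.weights, l2.biases));
  return softmax(dense(h2, l3.weights, l3.biases)); // [P(L), P(C), P(R)]
}
```

Whatever the weights, softmax guarantees the three outputs are a valid probability distribution, which is what the dive algorithm consumes downstream.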

The 12 input features capture your shooting history:

Training & Retraining

The network trains incrementally. Every 3 shots, if at least 5 labeled examples exist, it retrains for 8 epochs using your shot history as ground truth. The output is a probability distribution over three zones (LEFT, CENTER, RIGHT). A softmax layer ensures probabilities sum to 1.
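The retraining gate above ("every 3 shots, if at least 5 labeled examples exist, 8 epochs") reduces to a tiny predicate. This is a sketch with assumed names, not the game's actual code:

```javascript
// Sketch of the incremental-retraining gate: retrain every 3rd shot,
// but only once at least 5 labeled samples exist. When it fires, the
// model would be fit for EPOCHS (8) epochs on the shot history.
const RETRAIN_EVERY = 3;
const MIN_SAMPLES = 5;
const EPOCHS = 8; // passed to the training call when the gate opens

function shouldRetrain(shotCount, sampleCount) {
  return shotCount % RETRAIN_EVERY === 0 && sampleCount >= MIN_SAMPLES;
}
```

Gating on both counters means the network never trains on a near-empty dataset, and never retrains so often that it thrashes on a single noisy shot.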

When pattern detection fails (no clear alternation, random shooting), the network falls back to statistical alternation detection: if you shot LEFT then CENTER, it predicts RIGHT. This hybrid approach prevents overfitting on noisy input.
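One reading of that fallback, sketched below: if the last two shots went to two different zones, guess the remaining third zone; otherwise guess the player's most frequent zone. The exact rule and all names here are assumptions based on the LEFT-then-CENTER example above.

```javascript
// Assumed sketch of the statistical alternation fallback.
const ZONES = ['LEFT', 'CENTER', 'RIGHT'];

function fallbackPredict(history) {
  const [prev, last] = history.slice(-2);
  if (prev && last && prev !== last) {
    // e.g. LEFT then CENTER → predict RIGHT (the unused zone)
    return ZONES.find((z) => z !== prev && z !== last);
  }
  // No alternation signal → fall back to the player's modal direction.
  const counts = {};
  for (const z of history) counts[z] = (counts[z] || 0) + 1;
  return ZONES.reduce((a, b) =>
    (counts[a] || 0) >= (counts[b] || 0) ? a : b);
}
```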

Difficulty Scaling & Confidence

The network's output is scaled by difficulty:

Confidence Scaling by Difficulty
confidence = network_probability[predicted_direction]

Easy:   confidence *= 0.3   // Keeper guesses, low commitment
Medium: confidence *= 0.6   // Keeper partially commits
Hard:   confidence *= 0.8   // Keeper mostly trusts the net
Legend: confidence *= 1.0   // Full ML weight

On Easy, the network might predict RIGHT with 0.8 probability, but 0.3× scaling drops confidence to 0.24. The keeper dives RIGHT weakly, still vulnerable to CENTER. On Legend, the keeper commits fully to the dive.
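As code, the scaling is a one-line lookup. The multipliers come from the article; the function name is illustrative.

```javascript
// Confidence scaling per difficulty (multipliers from the article).
const CONFIDENCE_SCALE = { easy: 0.3, medium: 0.6, hard: 0.8, legend: 1.0 };

function scaledConfidence(networkProbability, difficulty) {
  return networkProbability * CONFIDENCE_SCALE[difficulty];
}
```

This reproduces the worked example above: a 0.8 network probability on Easy yields only 0.24 commitment, while on Legend the full 0.8 carries through.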

The 7-Step Dive Execution

Once the network predicts a direction, the dive algorithm executes in sequence:

STEP 1: ML Prediction + Confidence (scaled by difficulty)
  ↓
STEP 2: Error Injection (random wrong direction)
  ↓
STEP 3: Direction → Position Mapping (with accuracy bias)
  ↓
STEP 4: Overcommit Check (5–22% chance to overextend)
  ↓
STEP 5: Position Jitter (±15px noise)
  ↓
STEP 6: Intentional Miss (2–28% chance to miss anyway)
  ↓
STEP 7: Confidence-Driven Behavior (fast vs. conservative)

Why 7 steps? Step 1 generates a prediction. Steps 2–5 inject realism (errors, noise, overcommits). Step 6 enforces the difficulty target (55% success rate). Step 7 makes the dive feel responsive to how confident the keeper actually is.
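The whole pipeline can be sketched end to end. Everything here is illustrative: the goal positions, parameter names, and the injected `rng` are assumptions; only the step order, the ±15px jitter, and the percentage ranges come from the article.

```javascript
// Illustrative sketch of the 7-step dive pipeline. `rng` is injected so
// the randomness (steps 2, 4, 5, 6) can be controlled in tests.
const GOAL_X = { LEFT: -120, CENTER: 0, RIGHT: 120 }; // hypothetical positions

function computeKeeperDive(prediction, params, rng = Math.random) {
  // prediction: { direction, probability }
  // params (per difficulty): { confidenceScale, errorChance,
  //                            overcommitChance, missChance }
  let { direction, probability } = prediction;

  // Step 1: scale the raw network probability into commitment confidence.
  const confidence = probability * params.confidenceScale;

  // Step 2: error injection — occasionally pick a wrong direction.
  if (rng() < params.errorChance) {
    const others = Object.keys(GOAL_X).filter((d) => d !== direction);
    direction = others[Math.floor(rng() * others.length)];
  }

  // Step 3: map the (possibly wrong) direction to a dive position.
  let x = GOAL_X[direction];

  // Step 4: overcommit — dive too far, exposing the other side.
  if (rng() < params.overcommitChance) x *= 1.5;

  // Step 5: ±15px jitter so landings aren't pixel-perfect.
  x += (rng() * 2 - 1) * 15;

  // Step 6: intentional miss keeps the 55% target honest.
  const intentionalMiss = rng() < params.missChance;

  // Step 7: confidence drives timing — commit early when confident.
  const diveSpeed = confidence > 0.5 ? 'fast' : 'conservative';

  return { x, confidence, intentionalMiss, diveSpeed };
}
```

Injecting the random source is a deliberate design choice in this sketch: it makes a probabilistic pipeline like this unit-testable, which matters when four of the seven steps roll dice.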

The result: saves that feel earned, not unfair. Misses that feel possible, not handed to you.

Why Legend Uses ML, Others Don't

Easy and Medium modes skip the neural network entirely.

This is intentional. Easier modes prioritize speed and fairness. Legend is where the ML shines. On Legend, you're not fighting randomness—you're fighting a genuine adaptive opponent.

What GoalBot Doesn't Have (And Why That Matters)

To be honest about what we built, here's what's NOT in GoalBot:

These aren't limitations—they're design choices. Keeping the system focused (penalty kicks only, pattern-based prediction, no personality matrix) makes it easier to test, balance, and iterate. Complexity has a cost in mobile game performance and maintainability.

What's Actually Impressive

Let's be honest about what works well:

The Real Takeaway: GoalBot's AI isn't trying to fool you into thinking it's smarter than it is. On Legend, it genuinely learns your patterns. On Easy, it's honest randomness with fairness built in. That's the design.

Experience GoalBot's Intelligent Goalkeeper

Face an AI opponent that adapts to your playing style across multiple difficulty levels, from simple heuristics to advanced neural networks.
