Vision Fusion & Filtration
In the FRC 2026: REBUILT game, absolute field localization is critical. MARSLib’s MARSVision subsystem combines data from multiple cameras and strictly filters out “hallucinations” using techniques pioneered by elite teams. We utilize AprilTags as the primary landmarks for global pose estimation.
1. Strict Rejection Filters
Vision poses are frequently wrong during high-speed gameplay. Before a measurement reaches the Pose Estimator, it must survive five strict boundary checks:
- Z-Height Hallucinations: If the pose estimates the robot is flying (Z > 0.5m), it’s rejected.
- Out of Bounds: If the pose is tracked outside the physical field, it’s rejected.
- Motion Blur: If the gyro reports > 120°/s yaw rate, vision is entirely blocked.
- Beaching (Pitch/Roll): If the robot rides over an obstacle and tilts > 15°, the pose is rejected.
- Ambiguity: Low quality single-tag solutions are discarded.
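The five checks above can be sketched as a single gate function. This is an illustrative sketch, not MARSLib's actual code: the field dimensions, thresholds, and the single-tag ambiguity cutoff are assumptions stated in the comments, and only the Z, yaw-rate, and tilt limits come from the text.

```python
from dataclasses import dataclass

@dataclass
class VisionMeasurement:
    x: float          # field-relative X, meters
    y: float          # field-relative Y, meters
    z: float          # height above the carpet, meters
    tag_count: int    # number of tags used in the solution
    ambiguity: float  # PhotonVision-style ratio, 0 = unambiguous

# Thresholds from the text; field size and ambiguity cutoff are illustrative.
FIELD_LENGTH_M = 16.54
FIELD_WIDTH_M = 8.21
MAX_Z_M = 0.5
MAX_YAW_RATE_DPS = 120.0
MAX_TILT_DEG = 15.0
MAX_SINGLE_TAG_AMBIGUITY = 0.2  # hypothetical cutoff

def accept_measurement(m: VisionMeasurement,
                       yaw_rate_dps: float,
                       pitch_deg: float,
                       roll_deg: float) -> bool:
    """Apply the five strict boundary checks in order; any failure rejects."""
    if abs(m.z) > MAX_Z_M:                        # 1. Z-height hallucination
        return False
    if not (0.0 <= m.x <= FIELD_LENGTH_M and
            0.0 <= m.y <= FIELD_WIDTH_M):         # 2. out of bounds
        return False
    if abs(yaw_rate_dps) > MAX_YAW_RATE_DPS:      # 3. motion blur
        return False
    if max(abs(pitch_deg), abs(roll_deg)) > MAX_TILT_DEG:  # 4. beaching
        return False
    if m.tag_count == 1 and m.ambiguity > MAX_SINGLE_TAG_AMBIGUITY:  # 5. ambiguity
        return False
    return True
```

Ordering the cheap scalar checks first means a bad frame is rejected before any pose math runs.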
2. Quadratic StdDev Scaling
Vision poses should never be trusted equally. A tag seen from 6 meters away covers only a handful of pixels on the camera sensor, so a single pixel of error dramatically shifts the calculated location.
MARSLib enforces Quadratic StdDev Scaling: the measurement standard deviations grow with the square of the distance to the tag, so trust in vision falls off rapidly at long range, stopping the robot from making violent odometry correction jumps based on far-away tags.
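A minimal sketch of the scaling rule, assuming the pose estimator accepts per-measurement (x, y, heading) standard deviations the way WPILib's pose estimators do. The base coefficients and the divide-by-tag-count refinement are illustrative, not MARSLib's tuned constants:

```python
def vision_std_devs(distance_m: float, tag_count: int,
                    base_xy: float = 0.02, base_theta: float = 0.06):
    """Scale standard deviations with the square of tag distance.

    base_xy / base_theta are illustrative per-meter coefficients.
    Multi-tag solutions are trusted more, so their deviations
    shrink with the tag count (an assumed refinement).
    """
    scale = (distance_m ** 2) / max(tag_count, 1)
    return (base_xy * scale,     # x std dev, meters
            base_xy * scale,     # y std dev, meters
            base_theta * scale)  # heading std dev, radians
```

With these numbers, a single tag at 1 m yields a 0.02 m deviation while the same tag at 6 m yields 0.02 * 36 = 0.72 m, so the filter barely nudges odometry on distant sightings.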
3. Simulating Imperfection
To ensure tuning works globally, the AprilTagVisionIOSim in MARSLib purposefully injects:
- Gaussian Noise: StdDev matched jitter scaled by distance.
- Dropped Frames: A 5% chance every frame that no pose is returned.
- Latency: Simulates a 10 ms-30 ms processing delay on every returned pose.
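The three injected imperfections can be combined into one frame-corruption step. This is a sketch of the behavior the bullets describe, not the AprilTagVisionIOSim API; the function name, the 2D pose shape, and the noise coefficient are assumptions:

```python
import random

def simulate_vision_frame(true_pose, distance_m, rng=None):
    """Corrupt a perfect simulated pose the way a real pipeline would.

    Illustrative sketch: injects the dropped frames, distance-scaled
    Gaussian noise, and latency described in the text.
    """
    rng = rng or random.Random()
    # Dropped frames: a 5% chance that no pose is returned this frame.
    if rng.random() < 0.05:
        return None
    # Gaussian noise: jitter scaled with distance, mirroring the
    # quadratic std-dev model used on the real robot.
    sigma = 0.02 * distance_m ** 2
    x, y = true_pose
    noisy = (x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
    # Latency: timestamp the result 10-30 ms in the past.
    latency_s = rng.uniform(0.010, 0.030)
    return noisy, latency_s
```

Because the sim noise follows the same quadratic model as the real filter's standard deviations, gains tuned in simulation carry over to the field.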
📖 Further Reading & External Resources
- Limelight MegaTag 2.0 Breakdown - Understanding how FRC 1690 style IMU yaw-seeding rejects rapid AprilTag noise variations.
- PhotonVision Hardware Guide - Best practices for illuminating targets globally.