Computer vision for autonomous navigation and real-time object detection from the air.
The Perception subteam gives the drone its "eyes." We develop the computer vision pipelines that let the drone see the world below, identify competition targets, and feed spatial information back to the autonomy stack.
We work with onboard cameras, train detection models, and implement real-time inference pipelines that must run reliably at altitude while the drone is flying autonomously.
Training and deploying object detection models (e.g., YOLO-based) to locate and classify competition targets (shapes, letters, colors, and objects) from aerial imagery. See the inference sketch after this list.
Computing GPS coordinates of detected targets by combining camera geometry, drone pose (GPS + attitude), and ground-plane projection. See the geolocation sketch after this list.
Detecting and localizing aerial obstacles (e.g., balloon obstacles in SUAS) using camera feeds so the autonomy stack can plan safe paths.
Building labeled datasets of aerial imagery and training detection models; we use simulation renders and real field data to improve model robustness (see the training sketch after this list). TO BE FINISHED LATER: detail specific approach.
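For the detection step, a minimal inference pass might look like the sketch below. It assumes the Ultralytics YOLO Python package; the weights file `targets.pt` and the input image are hypothetical placeholders, not published artifacts.

```python
# Minimal YOLO inference sketch (assumes the Ultralytics package is installed;
# "targets.pt" is a hypothetical weights file fine-tuned on competition targets).
import cv2
from ultralytics import YOLO

model = YOLO("targets.pt")

def detect_targets(frame, conf_threshold=0.5):
    """Run one inference pass; return (class_id, confidence, [x1, y1, x2, y2])."""
    result = model(frame, conf=conf_threshold, verbose=False)[0]
    return [
        (int(cls), float(conf), box.tolist())
        for cls, conf, box in zip(result.boxes.cls, result.boxes.conf, result.boxes.xyxy)
    ]

frame = cv2.imread("aerial_frame.jpg")  # placeholder input image
for class_id, confidence, box in detect_targets(frame):
    print(class_id, round(confidence, 2), box)
```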
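Under simplifying assumptions (nadir-pointing camera, flat ground, pinhole model, yaw measured clockwise from true north), the geolocation step reduces to scaling pixel offsets by altitude and rotating them into local east/north. A sketch, with all names and axis conventions illustrative rather than our actual pipeline API:

```python
# Ground-plane geolocation sketch. Assumes a nadir-pointing camera, flat ground,
# and a pinhole model with intrinsics fx, fy, cx, cy (focal lengths and
# principal point in pixels). Axis conventions below are assumptions.
import math

def pixel_to_gps(u, v, drone_lat, drone_lon, altitude_agl_m, yaw_rad,
                 fx, fy, cx, cy):
    # Back-project the pixel through the pinhole model and intersect the ray
    # with the ground; with a nadir camera this reduces to scaling by altitude.
    forward = -(v - cy) / fy * altitude_agl_m  # image "up" = drone forward (assumed)
    right = (u - cx) / fx * altitude_agl_m     # image "right" = drone right (assumed)

    # Rotate body-frame offsets into local east/north (yaw clockwise from north).
    east = forward * math.sin(yaw_rad) + right * math.cos(yaw_rad)
    north = forward * math.cos(yaw_rad) - right * math.sin(yaw_rad)

    # Local flat-earth conversion from meters to degrees.
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon
```

A fuller implementation would also fold in camera gimbal angles and the full attitude (roll/pitch) rather than assuming a perfectly nadir view.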
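On the training side, fine-tuning a pretrained checkpoint on a labeled dataset is short in the Ultralytics API; the dataset config and hyperparameters below are placeholders, not our real training setup.

```python
# Fine-tuning sketch (Ultralytics API; "dataset.yaml" describing our labeled
# aerial imagery and the hyperparameters here are placeholders).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained checkpoint as a starting point
model.train(data="dataset.yaml", epochs=100, imgsz=640)
metrics = model.val()  # evaluate on the validation split
```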
Below is a high-level overview of our perception pipeline from raw camera input to target geolocation output.
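In sketch form (every callable here is a stand-in for our actual camera, detector, pose, and telemetry interfaces), the loop is: capture a frame, sample the drone pose, detect targets, project each detection to GPS, and publish to the autonomy stack.

```python
# Structural sketch of the pipeline loop; all callables are illustrative
# stand-ins, not our real module boundaries.
def perception_loop(read_frame, read_pose, detect, geolocate, publish):
    while True:
        frame = read_frame()  # raw camera input
        pose = read_pose()    # drone GPS + attitude at capture time
        for class_id, confidence, (x1, y1, x2, y2) in detect(frame):
            u, v = (x1 + x2) / 2, (y1 + y2) / 2     # pixel center of the box
            lat, lon = geolocate(u, v, pose)        # ground-plane projection
            publish(class_id, confidence, lat, lon) # target geolocation output
```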
TO BE FINISHED LATER: add team member cards (name, major, year; Perception Lead first)
Computer Science · Focus: Detection models
Computer Science · Focus: Geolocation
Any major · Focus: Data & training
Perception meets for model development, dataset work, and pipeline integration.
TO BE FINISHED LATER
TO BE FINISHED LATER