Vision Tracking
Vision Tracking uses optical data acquired via a webcam or IP camera to control the robot, either directly or indirectly. We take images from the camera and send them through our pipeline, which processes them, filters out unnecessary information, and returns a set of contours. Each contour has x, y, width, and height values, which can be used to help position the robot.
These contours are then published onto the Network Tables. We use the contours table, posting each contour's x, y, width, height, area, and ratio (width/height). Our vision subsystems (VisionRocket and VisionHabitat) take this data, filter it even further, and work out what the contour data means in terms of movement. We have more than one subsystem because the meaning of each contour depends on what the robot is approaching.
The VisionRocket class is used to handle the contour information when approaching a Rocket. This class takes the data from the Network Tables and packages it into a 2D ArrayList of doubles. Each inner ArrayList holds one contour's data in this order:
index -> value
[0] -> x
[1] -> y
[2] -> width
[3] -> height
[4] -> area
[5] -> ratio
You shouldn't need to interact with these ArrayLists outside of the VisionRocket class.
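The packaging step might look roughly like the following sketch, assuming the pipeline publishes each attribute as a parallel array of doubles with one element per contour; the class and method names here are hypothetical, not the actual VisionRocket code:

```java
import java.util.ArrayList;

public class ContourPackager {
    // Sketch: combine parallel attribute arrays (one value per contour) into
    // the 2D ArrayList layout described above: [x, y, width, height, area, ratio].
    public static ArrayList<ArrayList<Double>> packageContours(
            double[] x, double[] y, double[] width, double[] height,
            double[] area, double[] ratio) {
        ArrayList<ArrayList<Double>> contours = new ArrayList<>();
        for (int i = 0; i < x.length; i++) {
            ArrayList<Double> contour = new ArrayList<>();
            contour.add(x[i]);      // [0] x
            contour.add(y[i]);      // [1] y
            contour.add(width[i]);  // [2] width
            contour.add(height[i]); // [3] height
            contour.add(area[i]);   // [4] area
            contour.add(ratio[i]);  // [5] ratio
            contours.add(contour);
        }
        return contours;
    }
}
```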
Once the contours have been packaged into a more usable format, we can find "good pairs" of contours - contours that most likely correspond to a pair of reflective tape strips. We do this by first sorting the list of all the contours by their y value, low (higher up on the screen) to high (further down on the screen). We then compare each contour's y with every other contour's y and compute the ratio between the two. If the ratio is at least 0.9 (a 90% or better similarity in y value), we consider them a "good pair". This function returns as many good pairs as it can find, but in most cases there will be only one, which is most likely the target.
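The good-pair search described above can be sketched as follows. This is an illustrative reimplementation, not the actual VisionRocket code, and it assumes the 2D ArrayList layout listed earlier (y at index 1):

```java
import java.util.ArrayList;
import java.util.Comparator;

public class PairFinder {
    static final int Y = 1; // index of the y value within each contour's ArrayList

    // Sort contours by y (low to high), then flag any two contours whose
    // y values are within 90% of each other as a "good pair".
    public static ArrayList<ArrayList<ArrayList<Double>>> findGoodPairs(
            ArrayList<ArrayList<Double>> contours) {
        ArrayList<ArrayList<Double>> sorted = new ArrayList<>(contours);
        sorted.sort(Comparator.comparingDouble((ArrayList<Double> c) -> c.get(Y)));
        ArrayList<ArrayList<ArrayList<Double>>> pairs = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            for (int j = i + 1; j < sorted.size(); j++) {
                double lowY = sorted.get(i).get(Y);
                double highY = sorted.get(j).get(Y);
                // The list is sorted, so lowY <= highY and the ratio is <= 1.
                if (highY != 0 && lowY / highY >= 0.9) {
                    ArrayList<ArrayList<Double>> pair = new ArrayList<>();
                    pair.add(sorted.get(i));
                    pair.add(sorted.get(j));
                    pairs.add(pair);
                }
            }
        }
        return pairs;
    }
}
```

With three contours at y values 95, 100, and 300, only the first two pass the 0.9 ratio check (95/100 = 0.95), so the function returns a single pair.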
We will shortly be adding functions to the drivetrain so that you don't have to interact with the contour data directly when programming the semi-autonomous functions.
The VisionHabitat class is used to handle the contour information when approaching the Habitat module.
We have not yet programmed this class.