dalethomas81 edited this page Apr 16, 2024 · 9 revisions

Theory of Operation

The vision system in ArfBotOS uses OpenCV from Python. The current implementation is based on 'template matching'. The algorithms used for this are a port of the Fast Template Matching C++ project by DennisLiu1993.

ArfBotOS runs Python scripts from the Codesys runtime via sockets. The basic architecture is a Python socket server running on the Pi that receives commands from the client (the Codesys runtime), such as sudo python /var/opt/codesys/PlcLogic/Application/Vision/TemplateMatch.py -s /var/opt/codesys/PlcLogic/visu/outputimage.jpg -t /var/opt/codesys/PlcLogic/Application/Vision/Template3.JPG -i 2 -j 0.0 -k 0.8 -l 90.0 -d true -w 2016 -h 2000.
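On the client side, a command like the one above is simply a string assembled from the configured parameters and sent over the socket. A sketch with a hypothetical helper (build_vision_command is not part of ArfBotOS):

```python
# Hypothetical helper that assembles the command string the client
# sends to the Python socket server.
def build_vision_command(script, source, template, max_parts, max_overlap,
                         min_score, angle, debug, width, height):
    return (f"sudo python {script} -s {source} -t {template} "
            f"-i {max_parts} -j {max_overlap} -k {min_score} -l {angle} "
            f"-d {str(debug).lower()} -w {width} -h {height}")

cmd = build_vision_command(
    "/var/opt/codesys/PlcLogic/Application/Vision/TemplateMatch.py",
    "/var/opt/codesys/PlcLogic/visu/outputimage.jpg",
    "/var/opt/codesys/PlcLogic/Application/Vision/Template3.JPG",
    2, 0.0, 0.8, 90.0, True, 2016, 2000)
print(cmd)
```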

This tells the server to run the TemplateMatch.py vision program, output the result image to outputimage.jpg, and search the captured image for the Template3.JPG template. The remaining parameters are:

  • -i 2 maximum parts to find
  • -j 0.0 maximum overlap of the parts
  • -k 0.8 minimum score to be counted as a found image
  • -l 90 angle to search in both directions, i.e. +90 degrees and -90 degrees
  • -d true debug is on
  • -w 2016 image capture width
  • -h 2000 image capture height
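The flags above could be parsed in TemplateMatch.py along these lines; this is an illustrative sketch, not the script's actual code. One detail worth noting: because -h is used for image height, argparse's built-in help option (which normally claims -h) has to be disabled.

```python
import argparse

# Sketch of how TemplateMatch.py might parse the flags listed above.
# add_help=False frees up -h so it can carry the image capture height.
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("-s", dest="source", help="result image path")
parser.add_argument("-t", dest="template", help="template image path")
parser.add_argument("-i", dest="max_parts", type=int, help="maximum parts to find")
parser.add_argument("-j", dest="max_overlap", type=float, help="maximum overlap of the parts")
parser.add_argument("-k", dest="min_score", type=float, help="minimum score to count a match")
parser.add_argument("-l", dest="angle", type=float, help="search angle in both directions")
parser.add_argument("-d", dest="debug", help="debug flag, 'true' or 'false'")
parser.add_argument("-w", dest="width", type=int, help="image capture width")
parser.add_argument("-h", dest="height", type=int, help="image capture height")

args = parser.parse_args(
    "-s out.jpg -t Template3.JPG -i 2 -j 0.0 -k 0.8 "
    "-l 90.0 -d true -w 2016 -h 2000".split())
print(args.max_parts, args.min_score, args.height)
```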

The server will run the vision program and return results to the client in the form LOC obj:0 cx:123.45 cy:678.90 a:55.94 s:0.95, which the PLC program interprets as:

  • obj:0 these are the results of the first part found (see the -i parameter above)
  • cx:123.45 this is the x coordinate of the found part
  • cy:678.90 this is the y coordinate of the found part
  • a:55.94 this is the rotation angle of the found part
  • s:0.95 this is the scaling of the found part
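In the PLC this parsing happens in the Codesys program, but the field format is simple enough to sketch in Python (parse_loc is a hypothetical helper, not ArfBotOS code):

```python
# Parse one result line of the form the server returns, mirroring how
# the PLC interprets each field.
def parse_loc(line):
    head, *fields = line.split()
    assert head == "LOC"
    out = {}
    for field in fields:
        key, value = field.split(":")
        out[key] = int(value) if key == "obj" else float(value)
    return out

result = parse_loc("LOC obj:0 cx:123.45 cy:678.90 a:55.94 s:0.95")
print(result)  # {'obj': 0, 'cx': 123.45, 'cy': 678.9, 'a': 55.94, 's': 0.95}
```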

The vision command processor parses the results and stores them in a vision register per your program command's designation.

Note: since the vision system uses sockets, it can easily be run on a different Raspberry Pi. It is recommended to run the vision system on a separate Raspberry Pi 5 with 64-bit architecture.

Remote Vision System

To get the best performance from the vision system, you will want to install it on a Raspberry Pi 5 with 64-bit architecture. Once this is done, you can change the IP address of the vision system on the Programming HMI.

To install the vision system on a separate Raspberry Pi, you can follow the vision-related setup in the installation manual under Software Installation, including the following:

  1. Install Raspberry Pi OS 64-bit Lite (Lite is sufficient as the desktop is not used)
  2. Enable i2c from raspi-config.
  3. Configure Arducam.
  4. Install OpenCV.
  5. Install the Vision Command Service and copy over vision files. (You will need to manually create directories).
  6. Install the Vision Web Utility Service and copy over the server files.

Calibration

Create a program that will calibrate the vision system. Here, the first step moves the robot to the pose 2 position. Then it moves to the camera position (in this case the camera is attached to the robot). Finally, the program runs the calibration command.

Place a calibration checkerboard in the region of interest (this is where you will use the vision system to search for parts), then run the program from the Main HMI. Once the program has finished, and if the calibration was successful, you will see the results in the vision window with the mapping of the checkerboard overlaid on the resulting image.

The origin and x/y directions are determined by the orientation of the checkerboard. This is very important to know when setting the offset for the part coordinate systems PC1 or PC2. For example, in this calibration the origin is at the top left of the image and the x+ direction traverses left to right across the top row of checkers. Additionally, the y+ direction traverses top to bottom across the left column of checkers. You will want to calibrate the part coordinate system such that this orientation matches.
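What this mapping means in practice can be sketched with a small example. Assume (this is an illustration only; the actual contents of cal.yaml are not documented here) that calibration yields a 3x3 homography H taking pixel coordinates into the checkerboard frame, with the origin at one corner and x+/y+ running along the checker rows and columns as described above:

```python
import numpy as np

# Example homography, NOT values from a real cal.yaml: scale 0.05
# units/pixel with the board origin offset from the image origin.
H = np.array([[0.05, 0.0, -10.0],
              [0.0, 0.05, -5.0],
              [0.0, 0.0, 1.0]])

def pixel_to_board(u, v, H):
    """Map a pixel (u, v) into the checkerboard coordinate frame."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

bx, by = pixel_to_board(400.0, 300.0, H)
print(bx, by)  # (10.0, 10.0) with the example H above
```

If the checkerboard is laid down rotated, H rotates with it, which is why the part coordinate system must be set up to match the board's orientation.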

In this example, the checkerboard is oriented differently. As such, the origin is near the bottom right of the image, with the x+ and y+ directions extending away from the checkerboard.


Calibration Files

The results of the calibration are stored at /var/opt/codesys/PlcLogic/Application/Vision/cal.yaml. Additionally, there is a file in this location called roi.yaml. This file narrows down the region of interest that the vision program searches (this helps speed up the pattern matching algorithm). As of now this file is static, but the ability to set it is on the roadmap. For now, the default ROI is set for a 640 x 400 image (you will need to change this manually if you want to capture images of a different size).
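When changing the capture size, the ROI has to be rescaled to match. The exact keys inside roi.yaml are not documented here, so the following sketch assumes a hypothetical x/y/width/height layout purely for illustration:

```python
# Hypothetical ROI layout; the real roi.yaml keys may differ.
def scale_roi(roi, from_size, to_size):
    """Rescale an ROI defined for one image size to another size."""
    sx = to_size[0] / from_size[0]
    sy = to_size[1] / from_size[1]
    return {"x": int(roi["x"] * sx), "y": int(roi["y"] * sy),
            "width": int(roi["width"] * sx), "height": int(roi["height"] * sy)}

default_roi = {"x": 100, "y": 50, "width": 400, "height": 300}  # example values
# Scale from the default 640 x 400 image to a 2016 x 2000 capture.
scaled = scale_roi(default_roi, (640, 400), (2016, 2000))
print(scaled)  # {'x': 315, 'y': 250, 'width': 1260, 'height': 1500}
```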

Templates

To use template matching, you first need a template. You can manually place template files in /var/opt/codesys/PlcLogic/Application/Vision/Templates, or you can create a template using the Template Utility of ArfBotOS (note: this method is highly experimental and unrefined).

Navigate to this site in your browser to access the Template Utility: http://arfbot.local:5000/. Alternatively you can access the utility from the Vision HMI button.

The Template Utility lets you capture an image and crop it to create the image to search for. You can then give the template a name and it will be saved locally (see the file path mentioned above).

Once you have captured an image, left-click and drag to crop it. When you release the mouse button, a new image will be displayed in the bottom right. You can then name this image in the Template Name box at the bottom.
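Under the hood, the crop step reduces to array slicing: the drag rectangle (x1, y1)-(x2, y2) selects rows y1:y2 and columns x1:x2 of the captured image. A minimal sketch (the coordinates here are examples, not anything the utility hard-codes):

```python
import numpy as np

# Stand-in for a 640 x 400 RGB capture; a real capture would come
# from the camera.
image = np.zeros((400, 640, 3), dtype=np.uint8)
x1, y1, x2, y2 = 100, 50, 260, 170  # example drag rectangle
template = image[y1:y2, x1:x2].copy()  # rows = y range, columns = x range
print(template.shape)  # (120, 160, 3)
```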

Note: the width and height of the captured image are important, as this must also be the size of the image you configure the Vision Processor to capture. This way, the dimensions of the template are proportional to the image searched during template matching.

Once you have saved the template, you can navigate to the Files page using the link at the top of the page. It lists each template saved in the Template folder and allows you to view them by clicking on them, or to delete them.

Testing

Now that the vision system is calibrated and a template has been created, a Locate program can be created using the Vision Processor command.

Click on a program step in your test program and press the Vision button. This brings up the Vision Processor dialog and allows configuration. Fill in the template name that was saved earlier and choose a result index (the vision register where the vision results will be stored). The results can be used to pick the object with the robot, provided either the PC1 or PC2 coordinate space is configured for the calibrated vision coordinate space - more on how to test that later.

When ready to test, select the Program from the Main HMI screen and press START. Once the vision program is complete, the resulting image will be displayed in the lower-right corner.

The vision register will contain the results of the position of the part (x, y, angle of rotation, and scale). The I/O register will contain the number of parts found (this number can be used to know if a part was found and whether a pick should take place). If multiple parts are found, the results increment through the vision registers, with each additional object stored in the next register.
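The register layout described above can be sketched as follows. This is a hypothetical illustration of the bookkeeping, not ArfBotOS code: each LOC line fills the next register starting at the configured result index, and the count goes to the I/O register.

```python
# Hypothetical sketch: store multiple found parts into consecutive
# vision registers, starting at the configured result index.
def store_results(lines, registers, start_index):
    count = 0
    for offset, line in enumerate(lines):
        fields = dict(f.split(":") for f in line.split()[1:])  # skip "LOC"
        registers[start_index + offset] = {
            "x": float(fields["cx"]), "y": float(fields["cy"]),
            "angle": float(fields["a"]), "scale": float(fields["s"])}
        count += 1
    return count  # number of parts found, as reported in the I/O register

registers = {}
found = store_results(
    ["LOC obj:0 cx:123.45 cy:678.90 a:55.94 s:0.95",
     "LOC obj:1 cx:200.00 cy:100.00 a:-10.00 s:1.00"],
    registers, start_index=3)
print(found, registers[4]["x"])  # 2 200.0
```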
