diff --git a/README.md b/README.md index 595e8af..975e903 100644 --- a/README.md +++ b/README.md @@ -1,35 +1,142 @@ # Eurobot-2026-VisionOnBoard -This repository contains the vision system for the Eurobot 2026 competition. -## Start with Docker -### Build image and container + +Vision system for Eurobot 2026 competition. Supports 4 RealSense cameras with ArUco marker detection. + +## Quick Start + +### 1. Start Docker ```bash cd docker/ bash start_docker.sh ``` -### Launch realsense node and hazelnut detector (objector_detector.cpp) together +### 2. Launch Vision System + +**Option A: tmux (recommended for debugging)** +```bash +./start_vision_tmux.sh ``` -bash start_vision_tmux.sh +- Window 0: 4 camera panes (front, back, left, right) +- Window 1: Detector + Scanner nodes +- Window 2: RViz + +**Option B: Single launch file** +```bash +# Inside Docker container +source /opt/ros/humble/setup.bash +colcon build +source install/setup.bash +ros2 launch aruco_object multi_camera.launch.py ``` -### Run Object Detector with Parameters -You can run the object detector node with customizable parameters using the launch file: +## Launch Parameters + +| Parameter | Default | Description | +|-----------|---------|-------------| +| `default_active_camera` | `right` | Initial active camera (front/back/left/right) | +| `team_color` | `blue` | Team color (blue/yellow) | +| `log_level` | `info` | ROS log level | + +Example: ```bash -ros2 launch aruco_object object_detector.launch.py +ros2 launch aruco_object multi_camera.launch.py default_active_camera:=front team_color:=yellow ``` -This will load parameters from `src/aruco_ros/aruco_object/config/params.yaml`. -You can also override parameters from the command line: +## Switch Active Camera + +Only one camera processes at a time. 
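Each node maps its `camera_position` string to a side index and ignores frames unless the published side matches its own. A minimal Python sketch of that gate (the side mapping is the one used throughout this PR; the helper function name is illustrative, not part of the nodes):

```python
# Side mapping used by the detector/scanner nodes: 0=front, 1=right, 2=back, 3=left
POSITION_TO_SIDE = {"front": 0, "right": 1, "back": 2, "left": 3}

def is_active(camera_position: str, dock_side: int) -> bool:
    """A node processes images only while /robot/dock_side matches its side."""
    return POSITION_TO_SIDE[camera_position] == dock_side

# Publishing dock_side = 2 activates the back camera and leaves the rest dormant
assert is_active("back", 2)
assert not is_active("front", 2)
```

The same check appears in both `object_detector.cpp` and `aruco_row_scanner_node.cpp` as an early return in the image callback, so dormant nodes stay subscribed but do no processing.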
Publish to `/robot/dock_side` (Int16):
+
+| Side | Value |
+|------|-------|
+| Front | 0 |
+| Right | 1 |
+| Back | 2 |
+| Left | 3 |
+
+```bash
+ros2 topic pub /robot/dock_side std_msgs/msg/Int16 "data: 2" -1  # Switch to back
+```
+
+## RViz
+
 ```bash
-ros2 launch aruco_object object_detector.launch.py cluster_radius:=0.5
+xhost +local:docker  # Run BEFORE entering the container
+ros2 run rviz2 rviz2
+```
+
+## Architecture
+
+```
+start_vision_tmux.sh / multi_camera.launch.py
+├── 4× RealSense cameras (640x480@15fps, depth disabled)
+├── 4× object_detector nodes (ArUco pose detection)
+└── 4× aruco_row_scanner nodes (hazelnut flip detection)
 ```
-### Run rviz
-Before going in the container, run this in terminal to allow allow GUI window inside container
+All nodes subscribe to `/robot/dock_side`; only the camera on the matching side processes images.
+
+## Individual Node Launch
+
+Individual camera, detector, and scanner nodes can be launched with an explicit camera position:
+
+### Camera Node (RealSense)
+
+Start Docker first, then enter the container:
 ```
-xhost +local:docker
+docker exec -it vision-ws bash
 ```
-Open RViz to visualize camera images and TF frames:
+
+### Object Detector Node
 ```bash
-ros2 run rviz2 rviz2
+ros2 run aruco_object aruco_detector_node --ros-args \
+  -p camera_position:=left
+```
+
+### Row Scanner Node
+```bash
+ros2 run aruco_object aruco_row_scanner --ros-args \
+  -p camera_position:=front \
+  -p team_color:=blue
+```
+
+### Multi-Node Launch Files
+```bash
+# Launch all 4 detector nodes
+ros2 launch aruco_object detector_multi.launch.py default_dock_side:=1
+
+# Launch all 4 scanner nodes
+ros2 launch aruco_object scanner_multi.launch.py default_dock_side:=1 team_color:=blue
+```
+
+## Camera Serial Numbers
+Use `rs-enumerate-devices` to list the serial numbers of the connected cameras:
+- left: `218622276534`
+- right: `218622276687`
+- back: `419122270813`
+- front (D435): `313522070126`
+
+### Launch with Serial Number
+```
+ros2 launch realsense2_camera rs_launch.py \
+  camera_namespace:=right \
+  camera_name:=right \
serial_no:="'218622276687'" \ + rgb_camera.color_profile:=640,480,15 \ + enable_depth:=false +``` +``` +ros2 launch realsense2_camera rs_launch.py \ + camera_namespace:=back \ + camera_name:=back \ + serial_no:="'419122270813'" \ + rgb_camera.color_profile:=640,480,15 \ + enable_depth:=false +``` +``` +ros2 launch realsense2_camera rs_launch.py \ + camera_namespace:=left \ + camera_name:=left \ + serial_no:="'218622276534'" \ + rgb_camera.color_profile:=640,480,15 \ + enable_depth:=false ``` diff --git a/docker/compose.yaml b/docker/compose.yaml index 34b848b..12dae6c 100644 --- a/docker/compose.yaml +++ b/docker/compose.yaml @@ -5,7 +5,7 @@ services: dockerfile: Dockerfile args: USER_UID: ${USER_UID:-1000} - image: eurobot-2026-vision:latest + image: eurobot-2026-vision-onboard:latest container_name: vision-ws stdin_open: true tty: true @@ -21,4 +21,4 @@ services: - ROS_WORKSPACE=/home/vision/vision_ws volumes: - /dev:/dev - - ../:/home/vision/vision_ws \ No newline at end of file + - ../:/home/vision/vision_ws diff --git a/inside_start_vision_tmux.sh b/inside_start_vision_tmux.sh new file mode 100644 index 0000000..abb99c9 --- /dev/null +++ b/inside_start_vision_tmux.sh @@ -0,0 +1,101 @@ +#!/bin/bash + +SESSION_NAME="vision_inside" + +# Check if the session exists +tmux has-session -t $SESSION_NAME 2>/dev/null + +if [ $? == 0 ]; then + # Session exists, kill it to start fresh + echo "Session $SESSION_NAME already exists. Killing it." + tmux kill-session -t $SESSION_NAME +fi + +# Create a new detached session +tmux new-session -d -s $SESSION_NAME + +# Get the window ID to be robust against base-index settings +WIN_ID=$(tmux list-windows -t $SESSION_NAME -F "#{window_index}" | head -n 1) + +# === Window 0: Vision System (2x2 grid) === +# Start with Pane 0 (Top Left) + +# 1. Split vertically: Top (0) and Bottom (1) +tmux split-window -v -t ${SESSION_NAME}:${WIN_ID} + +# 2. Split Top (0) horizontally: TL (0) and TR (1). Old Bottom becomes 2. 
+tmux split-window -h -t ${SESSION_NAME}:${WIN_ID}.0 + +# 3. Split Bottom (2) horizontally: BL (2) and BR (3). +tmux split-window -h -t ${SESSION_NAME}:${WIN_ID}.2 + +# Ensure tiled layout +tmux select-layout -t ${SESSION_NAME}:${WIN_ID} tiled + +# Pane 0 (Top Left): All Cameras +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.0 "export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash && ros2 launch realsense2_camera rs_multi_camera_launch.py" C-m + +# Pane 1 (Top Right): Detector +DETECTOR_CMD=' +export ROS_DOMAIN_ID=13 +source /opt/ros/humble/setup.bash +source install/setup.bash + +echo "=== [Detector] Waiting for BACK, LEFT, RIGHT camera topics... ===" +while true; do + BACK=$(ros2 topic list 2>/dev/null | grep -E "/back/back/color/camera_info") + LEFT=$(ros2 topic list 2>/dev/null | grep -E "/left/left/color/camera_info") + RIGHT=$(ros2 topic list 2>/dev/null | grep -E "/right/right/color/camera_info") + if [ -n "$BACK" ] && [ -n "$LEFT" ] && [ -n "$RIGHT" ]; then + echo "=== [Detector] All 3 cameras found! ===" + break + fi + echo "[Detector] Waiting... (back:${BACK:-missing} left:${LEFT:-missing} right:${RIGHT:-missing})" + sleep 2 +done +sleep 2 +echo "=== [Detector] Launching ArUco Detector ===" +ros2 launch aruco_object detector_multi.launch.py +' +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.1 "$DETECTOR_CMD" C-m + +# Pane 2 (Bottom Left): Scanner +SCANNER_CMD=' +export ROS_DOMAIN_ID=13 +source /opt/ros/humble/setup.bash +source install/setup.bash + +echo "=== [Scanner] Waiting for BACK, LEFT, RIGHT camera topics... ===" +while true; do + BACK=$(ros2 topic list 2>/dev/null | grep -E "/back/back/color/camera_info") + LEFT=$(ros2 topic list 2>/dev/null | grep -E "/left/left/color/camera_info") + RIGHT=$(ros2 topic list 2>/dev/null | grep -E "/right/right/color/camera_info") + if [ -n "$BACK" ] && [ -n "$LEFT" ] && [ -n "$RIGHT" ]; then + echo "=== [Scanner] All 3 cameras found! 
===" + break + fi + echo "[Scanner] Waiting... (back:${BACK:-missing} left:${LEFT:-missing} right:${RIGHT:-missing})" + sleep 2 +done +sleep 2 +echo "=== [Scanner] Launching ArUco Scanner ===" +ros2 launch aruco_object scanner_multi.launch.py +' +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.2 "$SCANNER_CMD" C-m + +# Pane 3 (Bottom Right): Shell / Utils +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && source install/setup.bash' C-m +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'echo "Ready to use. Run: ros2 topic list"' C-m +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'ros2 topic list' C-m + +# Function to kill the session when script exits +cleanup() { + tmux kill-session -t $SESSION_NAME +} + +# Trap EXIT signal (happens when script ends or user detaches) +trap cleanup EXIT + +# Attach to the tmux session to view the output +tmux select-window -t ${SESSION_NAME}:${WIN_ID} +tmux attach-session -t $SESSION_NAME diff --git a/src/aruco_ros/aruco_object/config/combined_params.yaml b/src/aruco_ros/aruco_object/config/combined_params.yaml index 44d0230..99897bf 100644 --- a/src/aruco_ros/aruco_object/config/combined_params.yaml +++ b/src/aruco_ros/aruco_object/config/combined_params.yaml @@ -4,15 +4,11 @@ object_detector: cluster_radius: 0.4 blue_id: 36 yellow_id: 47 - # Camera position: "front", "back", "left", or "right" - camera_position: "back" # Smoothing factor for pose filter (0.0-1.0, lower = more smoothing) smooth_alpha: 0.3 aruco_row_scanner: ros__parameters: - # Team color: "yellow" or "blue" (determines which IDs are targets) - team_color: "blue" blue_id: 36 yellow_id: 47 # Voting system: how many consistent frames needed diff --git a/src/aruco_ros/aruco_object/include/aruco_cluster_detect/object_detector_node.hpp b/src/aruco_ros/aruco_object/include/aruco_cluster_detect/object_detector_node.hpp index 64d1b57..6647eda 100644 --- 
a/src/aruco_ros/aruco_object/include/aruco_cluster_detect/object_detector_node.hpp
+++ b/src/aruco_ros/aruco_object/include/aruco_cluster_detect/object_detector_node.hpp
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -11,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include "aruco_cluster_detect/process_logic.hpp"
 
 class CameraOnBoardNode : public rclcpp::Node {
@@ -25,6 +27,11 @@ class CameraOnBoardNode : public rclcpp::Node {
   rclcpp::Subscription<sensor_msgs::msg::CameraInfo>::SharedPtr camera_info_sub_;
   rclcpp::Publisher<geometry_msgs::msg::PoseStamped>::SharedPtr object_pose_publisher_;
   rclcpp::Publisher<visualization_msgs::msg::Marker>::SharedPtr marker_pub_;
+  rclcpp::Subscription<std_msgs::msg::Int16>::SharedPtr dock_side_sub_;
+
+  int16_t my_side_ = -1;      // This node's side number
+  int16_t active_side_ = -1;  // Currently active side from /robot/dock_side
+  std::map<std::string, int16_t> position_to_side_;
 
   std::shared_ptr<tf2_ros::TransformBroadcaster> tf_broadcaster_;
   std::unique_ptr<tf2_ros::Buffer> tf_buffer_;
@@ -40,7 +47,7 @@
   int YELLOW_ID_;
   std::string CAMERA_POSITION_;
   double SMOOTH_ALPHA_;
-  bool intrinsics_received_ = false; // because we only need to send intrinsic once
+  bool intrinsics_received_ = false;
 
   ProcessLogic logic_;
diff --git a/src/aruco_ros/aruco_object/include/aruco_cluster_detect/process_logic.hpp b/src/aruco_ros/aruco_object/include/aruco_cluster_detect/process_logic.hpp
index f7f4f8e..66fd3ec 100644
--- a/src/aruco_ros/aruco_object/include/aruco_cluster_detect/process_logic.hpp
+++ b/src/aruco_ros/aruco_object/include/aruco_cluster_detect/process_logic.hpp
@@ -25,8 +25,7 @@ class ProcessLogic {
   geometry_msgs::msg::PoseStamped compute_perpendicular_pose_from_floor_points(
       const std::vector<cv::Point2d> &floor_points,
       const cv::Point2d &camera_in_base,
-      const rclcpp::Time &now,
-      rclcpp::Publisher<visualization_msgs::msg::Marker>::SharedPtr marker_pub);
+      const rclcpp::Time &now);
 
  private:
   double marker_length_;
diff --git a/src/aruco_ros/aruco_object/launch/__pycache__/multi_camera.launch.cpython-312.pyc 
b/src/aruco_ros/aruco_object/launch/__pycache__/multi_camera.launch.cpython-312.pyc new file mode 100644 index 0000000..1412b99 Binary files /dev/null and b/src/aruco_ros/aruco_object/launch/__pycache__/multi_camera.launch.cpython-312.pyc differ diff --git a/src/aruco_ros/aruco_object/launch/detector_multi.launch.py b/src/aruco_ros/aruco_object/launch/detector_multi.launch.py new file mode 100644 index 0000000..c304e49 --- /dev/null +++ b/src/aruco_ros/aruco_object/launch/detector_multi.launch.py @@ -0,0 +1,51 @@ +from launch import LaunchDescription, LaunchContext +from launch_ros.actions import Node +from launch.substitutions import LaunchConfiguration +from launch.actions import DeclareLaunchArgument, ExecuteProcess, OpaqueFunction +import os +from ament_index_python.packages import get_package_share_directory + + +def launch_setup(context: LaunchContext): + """Launch 4 object_detector nodes""" + # Side mapping: 0=front, 1=right, 2=back, 3=left + camera_positions = ['front', 'back', 'left', 'right'] + default_side = int(LaunchConfiguration('default_dock_side').perform(context)) + + if default_side not in [0, 1, 2, 3]: + raise Exception(f"[ERROR] Invalid dock_side: '{default_side}'. 
Must be 0-3.") + + pkg_share = get_package_share_directory('aruco_object') + config_file = os.path.join(pkg_share, 'config', 'combined_params.yaml') + log_level = LaunchConfiguration('log_level').perform(context) + + nodes = [] + + for position in camera_positions: + node = Node( + package='aruco_object', + executable='aruco_detector_node', + name=f'object_detector_{position}', + parameters=[config_file, {'camera_position': position}], + output='screen', + arguments=['--ros-args', '--log-level', log_level] + ) + nodes.append(node) + + # Publish initial dock_side + publish_cmd = ExecuteProcess( + cmd=['bash', '-c', + f'sleep 2 && ros2 topic pub /robot/dock_side std_msgs/msg/Int16 "data: {default_side}" -1'], + output='screen' + ) + nodes.append(publish_cmd) + + return nodes + + +def generate_launch_description(): + return LaunchDescription([ + DeclareLaunchArgument('log_level', default_value='info'), + DeclareLaunchArgument('default_dock_side', default_value='1'), # 0=front, 1=right, 2=back, 3=left + OpaqueFunction(function=launch_setup) + ]) diff --git a/src/aruco_ros/aruco_object/launch/object_detector.launch.py b/src/aruco_ros/aruco_object/launch/object_detector.launch.py deleted file mode 100644 index 3252286..0000000 --- a/src/aruco_ros/aruco_object/launch/object_detector.launch.py +++ /dev/null @@ -1,30 +0,0 @@ -from launch import LaunchDescription -from launch_ros.actions import Node -from launch.substitutions import LaunchConfiguration -from launch.actions import DeclareLaunchArgument -import os -from ament_index_python.packages import get_package_share_directory - -def generate_launch_description(): - pkg_share = get_package_share_directory('aruco_object') - config_file = os.path.join(pkg_share, 'config', 'combined_params.yaml') - - logger_arg = DeclareLaunchArgument( - 'log_level', - default_value='info', - description='Log level' - ) - - object_detector_node = Node( - package='aruco_object', - executable='aruco_detector_node', - name='object_detector', - 
parameters=[config_file], - output='screen', - arguments=['--ros-args', '--log-level', LaunchConfiguration('log_level')] - ) - - return LaunchDescription([ - logger_arg, - object_detector_node - ]) diff --git a/src/aruco_ros/aruco_object/launch/scanner.launch.py b/src/aruco_ros/aruco_object/launch/scanner.launch.py deleted file mode 100644 index c53c254..0000000 --- a/src/aruco_ros/aruco_object/launch/scanner.launch.py +++ /dev/null @@ -1,23 +0,0 @@ -from launch import LaunchDescription -from launch_ros.actions import Node -import os -from ament_index_python.packages import get_package_share_directory - -def generate_launch_description(): - pkg_name = 'aruco_object' - - config = os.path.join( - get_package_share_directory(pkg_name), - 'config', - 'combined_params.yaml' - ) - - return LaunchDescription([ - Node( - package=pkg_name, - executable='aruco_row_scanner', - name='aruco_row_scanner', - output='screen', - parameters=[config] - ) - ]) diff --git a/src/aruco_ros/aruco_object/launch/scanner_multi.launch.py b/src/aruco_ros/aruco_object/launch/scanner_multi.launch.py new file mode 100644 index 0000000..b44636c --- /dev/null +++ b/src/aruco_ros/aruco_object/launch/scanner_multi.launch.py @@ -0,0 +1,55 @@ +from launch import LaunchDescription, LaunchContext +from launch_ros.actions import Node +from launch.substitutions import LaunchConfiguration +from launch.actions import DeclareLaunchArgument, ExecuteProcess, OpaqueFunction +import os +from ament_index_python.packages import get_package_share_directory + + +def launch_setup(context: LaunchContext): + """Launch 4 aruco_row_scanner nodes""" + # Side mapping: 0=front, 1=right, 2=back, 3=left + position_to_side = {'front': 0, 'right': 1, 'back': 2, 'left': 3} + camera_positions = ['front', 'back', 'left', 'right'] + default_side = int(LaunchConfiguration('default_dock_side').perform(context)) + + pkg_share = get_package_share_directory('aruco_object') + config_file = os.path.join(pkg_share, 'config', 
'combined_params.yaml') + log_level = LaunchConfiguration('log_level').perform(context) + team_color = LaunchConfiguration('team_color').perform(context) + + valid_team_colors = ['blue', 'yellow'] + if team_color not in valid_team_colors: + raise Exception(f"[ERROR] Invalid team_color: '{team_color}'") + + nodes = [] + + for position in camera_positions: + node = Node( + package='aruco_object', + executable='aruco_row_scanner', + name=f'aruco_row_scanner_{position}', + parameters=[config_file, {'camera_position': position, 'team_color': team_color}], + output='screen', + arguments=['--ros-args', '--log-level', log_level] + ) + nodes.append(node) + + # Publish initial dock_side after nodes start + publish_cmd = ExecuteProcess( + cmd=['bash', '-c', + f'sleep 2 && ros2 topic pub /robot/dock_side std_msgs/msg/Int16 "data: {default_side}" -1'], + output='screen' + ) + nodes.append(publish_cmd) + + return nodes + + +def generate_launch_description(): + return LaunchDescription([ + DeclareLaunchArgument('log_level', default_value='info'), + DeclareLaunchArgument('team_color', default_value='blue'), + DeclareLaunchArgument('default_dock_side', default_value='1'), # 0=front, 1=right, 2=back, 3=left + OpaqueFunction(function=launch_setup) + ]) diff --git a/src/aruco_ros/aruco_object/src/aruco_row_scanner_node.cpp b/src/aruco_ros/aruco_object/src/aruco_row_scanner_node.cpp index 029cfb1..f9f7ed1 100644 --- a/src/aruco_ros/aruco_object/src/aruco_row_scanner_node.cpp +++ b/src/aruco_ros/aruco_object/src/aruco_row_scanner_node.cpp @@ -1,162 +1,197 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -class ArucoRowScannerNode : public rclcpp::Node { -public: - ArucoRowScannerNode() : Node("aruco_row_scanner") { - // Parameters - this->declare_parameter("team_color", "yellow"); - this->declare_parameter("blue_id", 36); - this->declare_parameter("yellow_id", 47); - this->declare_parameter("votes_needed", 5); - - team_color_ = 
this->get_parameter("team_color").as_string(); - votes_needed_ = this->get_parameter("votes_needed").as_int(); - - // logic: if team_color is blue, use yellow_id, else use blue_id - int64_t id = (team_color_ == "blue") ? - this->get_parameter("yellow_id").as_int() : - this->get_parameter("blue_id").as_int(); - - target_ids_.push_back(static_cast(id)); - - RCLCPP_INFO(this->get_logger(), "Scanner started for team: %s. Votes needed: %d", team_color_.c_str(), votes_needed_); - - // Initialize ArUco dictionary - dictionary_ = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50); - detector_params_ = cv::aruco::DetectorParameters::create(); - - // Subscriber - image_sub_ = this->create_subscription( - "/camera/camera/color/image_rect_raw", 10, - std::bind(&ArucoRowScannerNode::image_callback, this, std::placeholders::_1)); - - // Publisher - mask_pub_ = this->create_publisher("target_flip_mask", 10); - } - -private: - void image_callback(const sensor_msgs::msg::Image::SharedPtr msg) { - if (task_completed_) return; - - cv_bridge::CvImagePtr cv_ptr; - try { - cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8); - } catch (cv_bridge::Exception &e) { - RCLCPP_ERROR(this->get_logger(), "cv_bridge exception: %s", e.what()); - return; + #include + #include + #include + #include + #include + #include + #include + #include + #include + #include + #include + + class ArucoRowScannerNode : public rclcpp::Node { + public: + ArucoRowScannerNode() : Node("aruco_row_scanner") { + // Declare camera_position FIRST (before using it for topics) + this->declare_parameter("camera_position", "front"); + camera_position_ = this->get_parameter("camera_position").as_string(); + // Build namespaced topic name based on camera position + // Topic format: /{camera_namespace}/{camera_name}/color/image_rect_raw + // Both camera_namespace and camera_name are set to camera_position in launch + std::string image_topic = "/" + camera_position_ + "/" + camera_position_ + 
"/color/image_rect_raw"; + RCLCPP_INFO(this->get_logger(), "Scanner [%s] (side=%d) subscribing to: %s", camera_position_.c_str(), my_side_, image_topic.c_str()); + + // Map camera position string to dock_side number + // dock_side: 0=front, 1=right, 2=back, 3=left + position_to_side_ = {{"front", 0}, {"right", 1}, {"back", 2}, {"left", 3}}; + my_side_ = position_to_side_[camera_position_]; + + // Parameters + this->declare_parameter("team_color", "yellow"); + this->declare_parameter("blue_id", 36); + this->declare_parameter("yellow_id", 47); + this->declare_parameter("votes_needed", 5); + + team_color_ = this->get_parameter("team_color").as_string(); + votes_needed_ = this->get_parameter("votes_needed").as_int(); + + // logic: if team_color is blue, use yellow_id, else use blue_id + int64_t id = (team_color_ == "blue") ? + this->get_parameter("yellow_id").as_int() : + this->get_parameter("blue_id").as_int(); + + target_ids_.push_back(static_cast(id)); + + // Initialize ArUco dictionary + dictionary_ = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50); + detector_params_ = cv::aruco::DetectorParameters::create(); + + // Image subscriber with namespaced topic + image_sub_ = this->create_subscription( + image_topic, 10, + std::bind(&ArucoRowScannerNode::image_callback, this, std::placeholders::_1)); + + // Publisher: /robot/vision/hazelnut/flip (Int32MultiArray with 5 elements) + flip_pub_ = this->create_publisher("/robot/vision/hazelnut/flip", 10); + + // Subscribe to dock_side topic for multi-camera activation control + dock_side_sub_ = this->create_subscription( + "/robot/dock_side", 10, + [this](std_msgs::msg::Int16::SharedPtr msg) { + active_side_ = msg->data; + bool is_active = (active_side_ == my_side_); + RCLCPP_INFO(this->get_logger(), "Dock side: %d -> %s", msg->data, is_active ? "ACTIVE" : "dormant"); + }); + + RCLCPP_INFO(this->get_logger(), "Scanner [%s] (side=%d) started for team: %s. 
Votes needed: %d", camera_position_.c_str(), my_side_, team_color_.c_str(), votes_needed_); } - std::vector ids; - std::vector> corners; - cv::aruco::detectMarkers(cv_ptr->image, dictionary_, corners, ids, detector_params_); + private: + void image_callback(const sensor_msgs::msg::Image::SharedPtr msg) { + // Multi-camera activation: dormant by default until dock_side matches our side + if (active_side_ != my_side_) { + return; + } - // Phase 2: Perfect 4 Check - if (ids.size() != 4) { - RCLCPP_INFO_THROTTLE(this->get_logger(), *this->get_clock(), 1000, "Seen %zu markers (need 4)", ids.size()); - return; // Ignore frame - } - struct Marker { - int id; - float y; - }; - - std::vector sorted_markers; - // calculate center y of each marker - for (size_t i = 0; i < ids.size(); ++i) { - float cy = 0; - for (const auto& p : corners[i]) cy += p.y; - cy /= 4.0; - sorted_markers.push_back({ids[i], cy}); - } - - // sort by ascending y (small to large) -> Top to Bottom in image - std::sort(sorted_markers.begin(), sorted_markers.end(), - [](const Marker& a, const Marker& b) { return a.y < b.y; }); + cv_bridge::CvImagePtr cv_ptr; + try { + cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8); + } catch (cv_bridge::Exception &e) { + RCLCPP_ERROR(this->get_logger(), "cv_bridge exception: %s", e.what()); + return; + } + + std::vector ids; + std::vector> corners; + cv::aruco::detectMarkers(cv_ptr->image, dictionary_, corners, ids, detector_params_); - // Generate Instant Array - std::vector current_vote; - bool all_valid_ids = true; + // Phase 2: Perfect 4 Check + if (ids.size() != 4) { + RCLCPP_INFO_THROTTLE(this->get_logger(), *this->get_clock(), 1000, "Seen %zu markers (need 4)", ids.size()); + return; // Ignore frame + } - // iterate through all marker - for (const auto& m : sorted_markers) { - bool is_target = false; - // iterate through all target id - for (int t_id : target_ids_) { - if (m.id == t_id) { - is_target = true; - break; + struct Marker { + int 
id; + float y; + }; + + std::vector sorted_markers; + // calculate center y of each marker + for (size_t i = 0; i < ids.size(); ++i) { + float cy = 0; + for (const auto& p : corners[i]) cy += p.y; + cy /= 4.0; + sorted_markers.push_back({ids[i], cy}); + } + + // sort by ascending y (small to large) -> Top to Bottom in image + std::sort(sorted_markers.begin(), sorted_markers.end(), + [](const Marker& a, const Marker& b) { return a.y < b.y; }); + + // Generate Instant Array (4 flip values) + std::vector current_vote; + + // iterate through all marker + for (const auto& m : sorted_markers) { + bool is_target = false; + // iterate through all target id + for (int t_id : target_ids_) { + if (m.id == t_id) { + is_target = true; + break; + } } + // 0 = NO NEED to flip, 1 = NEED to flip + current_vote.push_back(is_target ? 1 : 0); } - current_vote.push_back(is_target ? 1 : 0); - } - RCLCPP_ERROR(this->get_logger(), "Current vote: [%d, %d, %d, %d]", - current_vote[0], current_vote[1], current_vote[2], current_vote[3]); + RCLCPP_INFO(this->get_logger(), "Current vote: [%d, %d, %d, %d]", current_vote[0], current_vote[1], current_vote[2], current_vote[3]); - // Phase 3: Voting - vote_buffer_.push_back(current_vote); - if (vote_buffer_.size() > (size_t)votes_needed_) { - vote_buffer_.pop_front(); - } + // Phase 3: Voting + vote_buffer_.push_back(current_vote); + if (vote_buffer_.size() > (size_t)votes_needed_) { + vote_buffer_.pop_front(); + } - if (vote_buffer_.size() == (size_t)votes_needed_) { - if (check_consensus()) { - publish_result(vote_buffer_.front()); - // Optional: task_completed_ = true; - vote_buffer_.clear(); // Reset buffer after success to avoid rapid re-triggers - } else { - // A strict sliding window is better than clearing fully, TODO - if (!check_consensus()) { - vote_buffer_.clear(); + if (vote_buffer_.size() == (size_t)votes_needed_) { + if (check_consensus()) { + publish_result(vote_buffer_.front()); + vote_buffer_.clear(); // Reset buffer after success 
to avoid rapid re-triggers + } else { + // Clear buffer if no consensus + vote_buffer_.clear(); } } } - } - bool check_consensus() { - if (vote_buffer_.empty()) return false; - const auto& first = vote_buffer_.front(); - for (const auto& v : vote_buffer_) { - if (v != first) return false; + bool check_consensus() { + if (vote_buffer_.empty()) return false; + const auto& first = vote_buffer_.front(); + for (const auto& v : vote_buffer_) { + if (v != first) return false; + } + return true; } - return true; - } - void publish_result(const std::vector& result) { - std_msgs::msg::Int8MultiArray msg; - msg.data = result; - mask_pub_->publish(msg); - RCLCPP_INFO(this->get_logger(), "Consensus reached! Published mask: [%d, %d, %d, %d]", - result[0], result[1], result[2], result[3]); - } + void publish_result(const std::vector& result) { + // Format: [flip0, flip1, flip2, flip3, side] + std_msgs::msg::Int32MultiArray msg; + msg.data.resize(5); + msg.data[0] = result[0]; // hazelnut 0 + msg.data[1] = result[1]; // hazelnut 1 + msg.data[2] = result[2]; // hazelnut 2 + msg.data[3] = result[3]; // hazelnut 3 + msg.data[4] = my_side_; // which side to do the action + + flip_pub_->publish(msg); + RCLCPP_INFO(this->get_logger(), "Consensus reached! 
Published flip: [%d, %d, %d, %d, side=%d]", + result[0], result[1], result[2], result[3], my_side_); + } - std::string team_color_; - std::vector target_ids_; - int votes_needed_; - - cv::Ptr dictionary_; - cv::Ptr detector_params_; - - rclcpp::Subscription::SharedPtr image_sub_; - rclcpp::Publisher::SharedPtr mask_pub_; - - std::deque> vote_buffer_; - bool task_completed_ = false; // Could be used if we want single-shot behavior -}; - -int main(int argc, char **argv) { - rclcpp::init(argc, argv); - auto node = std::make_shared(); - rclcpp::spin(node); - rclcpp::shutdown(); - return 0; -} + std::string team_color_; + std::vector target_ids_; + int votes_needed_; + + cv::Ptr dictionary_; + cv::Ptr detector_params_; + // pub, sub + rclcpp::Subscription::SharedPtr image_sub_; + rclcpp::Publisher::SharedPtr flip_pub_; + + std::deque> vote_buffer_; + std::string camera_position_; + int16_t my_side_ = -1; // This node's side number + int16_t active_side_ = -1; // Currently active side from /robot/dock_side + std::map position_to_side_; + rclcpp::Subscription::SharedPtr dock_side_sub_; + }; + + int main(int argc, char **argv) { + rclcpp::init(argc, argv); + auto node = std::make_shared(); + rclcpp::spin(node); + rclcpp::shutdown(); + return 0; + } diff --git a/src/aruco_ros/aruco_object/src/object_detector.cpp b/src/aruco_ros/aruco_object/src/object_detector.cpp index 131ed11..7b8a7f4 100644 --- a/src/aruco_ros/aruco_object/src/object_detector.cpp +++ b/src/aruco_ros/aruco_object/src/object_detector.cpp @@ -13,16 +13,48 @@ CameraOnBoardNode::CameraOnBoardNode() : Node("aruco_detector_node"), logic_(/*default constructed*/) { using std::placeholders::_1; + + // Declare parameters FIRST (before using them for topics) + MARKER_LENGTH_ = this->declare_parameter("marker_length", 0.03); + CLUSTER_RADIUS_ = this->declare_parameter("cluster_radius", 0.3); + BLUE_ID_ = this->declare_parameter("blue_id", 36); + YELLOW_ID_ = this->declare_parameter("yellow_id", 47); + 
CAMERA_POSITION_ = this->declare_parameter("camera_position", "front");
+  SMOOTH_ALPHA_ = this->declare_parameter("smooth_alpha", 0.3);
+
+  // Map camera position string to dock_side number
+  // dock_side: 0=front, 1=right, 2=back, 3=left
+  position_to_side_ = {{"front", 0}, {"right", 1}, {"back", 2}, {"left", 3}};
+  my_side_ = position_to_side_[CAMERA_POSITION_];
+
+  // Build namespaced topic names based on camera position
+  // Topic format: /{camera_namespace}/{camera_name}/color/...
+  // Both camera_namespace and camera_name are set to CAMERA_POSITION_ in launch
+  std::string image_topic = "/" + CAMERA_POSITION_ + "/" + CAMERA_POSITION_ + "/color/image_rect_raw";
+  std::string camera_info_topic = "/" + CAMERA_POSITION_ + "/" + CAMERA_POSITION_ + "/color/camera_info";
+
+  RCLCPP_INFO(this->get_logger(), "Detector [%s] (side=%d) subscribing to: %s",
+              CAMERA_POSITION_.c_str(), my_side_, image_topic.c_str());
+
   subscription_ = this->create_subscription<sensor_msgs::msg::Image>(
-      "/camera/camera/color/image_rect_raw", 10,
+      image_topic, 10,
       std::bind(&CameraOnBoardNode::image_callback, this, _1));
 
   camera_info_sub_ = this->create_subscription<sensor_msgs::msg::CameraInfo>(
-      "/camera/camera/color/camera_info", 10,
+      camera_info_topic, 10,
       std::bind(&CameraOnBoardNode::camera_info_callback, this, _1));
 
   object_pose_publisher_ = this->create_publisher<geometry_msgs::msg::PoseStamped>("detected_dock_pose", 10);
-  marker_pub_ = this->create_publisher<visualization_msgs::msg::Marker>("visualization_marker", 10);
+
+  // Subscribe to dock_side topic for multi-camera activation control
+  dock_side_sub_ = this->create_subscription<std_msgs::msg::Int16>(
+      "/robot/dock_side", 10,
+      [this](std_msgs::msg::Int16::SharedPtr msg) {
+        active_side_ = msg->data;
+        bool is_active = (active_side_ == my_side_);
+        RCLCPP_INFO(this->get_logger(), "Dock side: %d -> %s",
+                    msg->data, is_active ? 
"ACTIVE" : "dormant"); + }); tf_broadcaster_ = std::make_shared(this); @@ -32,13 +64,6 @@ CameraOnBoardNode::CameraOnBoardNode() dictionary_ = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50); detector_params_ = cv::aruco::DetectorParameters::create(); - MARKER_LENGTH_ = this->declare_parameter("marker_length", 0.03); - CLUSTER_RADIUS_ = this->declare_parameter("cluster_radius", 0.3); - BLUE_ID_ = this->declare_parameter("blue_id", 36); - YELLOW_ID_ = this->declare_parameter("yellow_id", 47); - CAMERA_POSITION_ = this->declare_parameter("camera_position", "front"); - SMOOTH_ALPHA_ = this->declare_parameter("smooth_alpha", 0.3); - // configure logic with declared params logic_ = ProcessLogic(MARKER_LENGTH_, BLUE_ID_, YELLOW_ID_, CLUSTER_RADIUS_, CAMERA_POSITION_, SMOOTH_ALPHA_); @@ -51,10 +76,16 @@ CameraOnBoardNode::CameraOnBoardNode() cv::Point3f(-half_len, -half_len, 0) }; - RCLCPP_INFO(this->get_logger(), "Object detector started. cluster_radius: %.3f", CLUSTER_RADIUS_); + RCLCPP_INFO(this->get_logger(), "Object detector [%s] (side=%d) started. 
cluster_radius: %.3f", + CAMERA_POSITION_.c_str(), my_side_, CLUSTER_RADIUS_); } void CameraOnBoardNode::image_callback(const sensor_msgs::msg::Image::SharedPtr msg) { + // Multi-camera activation: dormant by default until dock_side matches our side + if (active_side_ != my_side_) { + return; + } + // get the camera's internal parameters if (!intrinsics_received_) { RCLCPP_WARN_THROTTLE(this->get_logger(), *this->get_clock(), 2000, "Waiting for camera intrinsics..."); @@ -78,7 +109,8 @@ void CameraOnBoardNode::image_callback(const sensor_msgs::msg::Image::SharedPtr // Optimize TF lookup: Get transform ONCE for this frame timestamp geometry_msgs::msg::TransformStamped cam_to_base; try { - cam_to_base = tf_buffer_->lookupTransform("base_footprint", "camera_color_optical_frame", + std::string camera_frame = CAMERA_POSITION_ + "_color_optical_frame"; + cam_to_base = tf_buffer_->lookupTransform("base_footprint", camera_frame, msg->header.stamp, rclcpp::Duration::from_seconds(0.1)); } catch (tf2::TransformException &ex) { @@ -156,7 +188,7 @@ void CameraOnBoardNode::image_callback(const sensor_msgs::msg::Image::SharedPtr cv::Point2d camera_in_base(cam_to_base.transform.translation.x, cam_to_base.transform.translation.y); geometry_msgs::msg::PoseStamped target_pose = logic_.compute_perpendicular_pose_from_floor_points( - floor_points, camera_in_base, this->get_clock()->now(), marker_pub_); + floor_points, camera_in_base, this->get_clock()->now()); if (!target_pose.header.frame_id.empty()) { // Log key results: number of selected markers, center and yaw diff --git a/src/aruco_ros/aruco_object/src/process_logic.cpp b/src/aruco_ros/aruco_object/src/process_logic.cpp index 020dabb..7e732f4 100644 --- a/src/aruco_ros/aruco_object/src/process_logic.cpp +++ b/src/aruco_ros/aruco_object/src/process_logic.cpp @@ -66,8 +66,7 @@ void ProcessLogic::select_clustered_aruco( geometry_msgs::msg::PoseStamped ProcessLogic::compute_perpendicular_pose_from_floor_points( const std::vector 
&floor_points, const cv::Point2d &camera_in_base, - const rclcpp::Time &now, - rclcpp::Publisher::SharedPtr marker_pub) { + const rclcpp::Time &now) { geometry_msgs::msg::PoseStamped result_pose; if (floor_points.empty()) return result_pose; @@ -78,31 +77,76 @@ geometry_msgs::msg::PoseStamped ProcessLogic::compute_perpendicular_pose_from_fl double center_x = sum_x / floor_points.size(); double center_y = sum_y / floor_points.size(); - // Simple Yaw Calculation: No PCA needed + // === PCA-based Wall Orientation === + // Use PCA to find the line direction of the markers (wall). + // The perpendicular to this line gives the wall normal (facing direction). + // Use camera_position to disambiguate which way the normal should point. + + cv::Mat data_pts((int)floor_points.size(), 2, CV_64F); + for (size_t i = 0; i < floor_points.size(); ++i) { + data_pts.at((int)i, 0) = floor_points[i].x; + data_pts.at((int)i, 1) = floor_points[i].y; + } + cv::PCA pca_analysis(data_pts, cv::Mat(), cv::PCA::DATA_AS_ROW); + + // First eigenvector = direction of the line (wall) + double line_dx = pca_analysis.eigenvectors.at(0, 0); + double line_dy = pca_analysis.eigenvectors.at(0, 1); + + // Perpendicular to line = wall normal candidates (two directions: +90° and -90°) + double perp_dx = -line_dy; + double perp_dy = line_dx; + + // === Disambiguate using camera_position === + // The wall normal should point TOWARDS the camera (i.e., away from the wall into the viewing area) + // Based on camera_position, we know which direction the camera is looking: + // - back: camera looks at -Y, so normal should point towards -Y (perp_dy < 0) + // - front: camera looks at +Y, so normal should point towards +Y (perp_dy > 0) + // - left: camera looks at -X, so normal should point towards -X (perp_dx < 0) + // - right: camera looks at +X, so normal should point towards +X (perp_dx > 0) + + if (camera_position_ == "back") { + // Camera looks at -Y, normal should point toward camera (negative Y) + if 
(perp_dy > 0) { perp_dx = -perp_dx; perp_dy = -perp_dy; } + } else if (camera_position_ == "front") { + // Camera looks at +Y, normal should point toward camera (positive Y) + if (perp_dy < 0) { perp_dx = -perp_dx; perp_dy = -perp_dy; } + } else if (camera_position_ == "left") { + // Camera looks at -X, normal should point toward camera (negative X) + if (perp_dx > 0) { perp_dx = -perp_dx; perp_dy = -perp_dy; } + } else if (camera_position_ == "right") { + // Camera looks at +X, normal should point toward camera (positive X) + if (perp_dx < 0) { perp_dx = -perp_dx; perp_dy = -perp_dy; } + } + + // === Calculate final yaw === // Robot coordinate system: Front = +Y, Right = +X - // The yaw is the angle from robot's +Y axis to the cluster center, - // adjusted by which camera is viewing it. + // The robot should orient so its front (+Y) points OPPOSITE to the wall normal + // (i.e., the robot faces the wall) - // atan2(x, y) gives angle from +Y axis (since front is +Y) - double angle_to_target = std::atan2(center_x, center_y); + // Wall normal angle (from +X axis in standard math coords) + double perp_angle = std::atan2(perp_dy, perp_dx); + // Convert to robot yaw: robot front (+Y) should face OPPOSITE to normal + // Robot yaw is measured from +Y axis, so we add 90° offset and flip double final_yaw; if (camera_position_ == "back") { - // Back camera: robot back (-Y) faces target, so robot front (+Y) points opposite - final_yaw = angle_to_target + M_PI; + // Robot back faces wall, so front points away (opposite to normal + 180°) + final_yaw = perp_angle + M_PI_2; // = perp_angle + PI/2 } else if (camera_position_ == "left") { - // Left camera (-X): target is to robot's left - final_yaw = angle_to_target + M_PI_2; + // Robot left faces wall + final_yaw = perp_angle + M_PI; // = perp_angle } else if (camera_position_ == "right") { - // Right camera (+X): target is to robot's right - final_yaw = angle_to_target - M_PI_2; + // Robot right faces wall + final_yaw = 
perp_angle; // = perp_angle - PI } else { - // Front camera (+Y): robot front faces target - final_yaw = angle_to_target; + // Robot front faces wall + final_yaw = perp_angle - M_PI_2; } - RCLCPP_INFO(rclcpp::get_logger("ProcessLogic"), "angle_to_target: %.2f deg, final_yaw: %.2f deg", - angle_to_target * (180.0 / M_PI), final_yaw * (180.0 / M_PI)); + RCLCPP_INFO(rclcpp::get_logger("ProcessLogic"), + "PCA perp_angle: %.2f deg, final_yaw: %.2f deg", + perp_angle * (180.0 / M_PI), final_yaw * (180.0 / M_PI)); // Yaw Consistency Check: DISABLED - Trust geometric check only // if (!first_pose_) { @@ -178,34 +222,5 @@ geometry_msgs::msg::PoseStamped ProcessLogic::compute_perpendicular_pose_from_fl // Store for next iteration last_pose_ = result_pose; - // // Publish Visualization Marker (Optional: Show the calculated approach vector) - // if (marker_pub) { - // visualization_msgs::msg::Marker arrow; - // arrow.header.frame_id = result_pose.header.frame_id; - // arrow.header.stamp = now; - // arrow.ns = "approach_vector"; - // arrow.id = 101; - // arrow.type = visualization_msgs::msg::Marker::ARROW; - // arrow.action = visualization_msgs::msg::Marker::ADD; - - // // Start point (Robot/Cluster Center) - // geometry_msgs::msg::Point p1, p2; - // p1.x = result_pose.pose.position.x; - // p1.y = result_pose.pose.position.y; - // p1.z = 0.0; - - // // End point (pointing into the wall) - // double len = 0.5; - // p2.x = result_pose.pose.position.x + perp_dx * len; - // p2.y = result_pose.pose.position.y + perp_dy * len; - // p2.z = 0.0; - - // arrow.points.push_back(p1); - // arrow.points.push_back(p2); - // arrow.scale.x = 0.05; arrow.scale.y = 0.1; arrow.scale.z = 0.1; - // arrow.color.r = 0.0f; arrow.color.g = 1.0f; arrow.color.b = 0.0f; arrow.color.a = 1.0f; - // marker_pub->publish(arrow); - // } - return result_pose; } diff --git a/src/object_feedback/CMakeLists.txt b/src/object_feedback/CMakeLists.txt new file mode 100644 index 0000000..8f03a38 --- /dev/null +++ 
b/src/object_feedback/CMakeLists.txt
@@ -0,0 +1,71 @@
+cmake_minimum_required(VERSION 3.8)
+project(object_feedback)
+
+find_package(ament_cmake REQUIRED)
+find_package(rclcpp REQUIRED)
+find_package(sensor_msgs REQUIRED)
+find_package(cv_bridge REQUIRED)
+find_package(OpenCV REQUIRED)
+find_package(tf2 REQUIRED)
+find_package(tf2_ros REQUIRED)
+find_package(tf2_geometry_msgs REQUIRED)
+find_package(tf2_eigen REQUIRED)
+find_package(geometry_msgs REQUIRED)
+find_package(std_msgs REQUIRED)
+find_package(visualization_msgs REQUIRED)
+
+
+include_directories(
+  include
+  ${OpenCV_INCLUDE_DIRS}
+)
+
+add_executable(object_feedback_node
+  src/object_feedback.cpp
+)
+
+add_executable(feedback_all_node
+  src/feedback_all.cpp
+)
+
+ament_target_dependencies(object_feedback_node
+  rclcpp
+  sensor_msgs
+  std_msgs
+  cv_bridge
+  tf2_ros
+  tf2_geometry_msgs
+  tf2_eigen
+  geometry_msgs
+  visualization_msgs
+)
+
+ament_target_dependencies(feedback_all_node
+  rclcpp
+  sensor_msgs
+  std_msgs
+  cv_bridge
+  tf2_ros
+)
+
+target_link_libraries(object_feedback_node
+  ${OpenCV_LIBRARIES}
+)
+
+install(TARGETS
+  object_feedback_node
+  feedback_all_node
+  DESTINATION lib/${PROJECT_NAME}
+)
+
+# Install headers
+# install(DIRECTORY include/
+#   DESTINATION include
+# )
+
+# Install launch and config
+install(DIRECTORY launch
+  DESTINATION share/${PROJECT_NAME}
+)
+
+ament_package()
diff --git a/src/object_feedback/launch/obj_fb_bringup.launch.py b/src/object_feedback/launch/obj_fb_bringup.launch.py
new file mode 100644
index 0000000..632d8b6
--- /dev/null
+++ b/src/object_feedback/launch/obj_fb_bringup.launch.py
@@ -0,0 +1,134 @@
+
+from launch import LaunchDescription
+from launch.actions import DeclareLaunchArgument
+from launch.substitutions import LaunchConfiguration
+from launch_ros.actions import Node
+
+ARGUMENTS = [
+    # Front feedback arguments
+    DeclareLaunchArgument(
+        "front_depth_topic",
+        default_value="/front/front/depth/image_rect_raw",
+        description="Front depth image topic subscribed by object_feedback_front",
+    ),
+    DeclareLaunchArgument(
+        "front_dist_topic",
+        default_value="/front/front/obj_distance",
+        description="Front distance topic published by object_feedback_front",
+    ),
+
+    # Right feedback arguments
+    DeclareLaunchArgument(
+        "right_depth_topic",
+        default_value="/right/right/depth/image_rect_raw",
+        description="Right depth image topic subscribed by object_feedback_right",
+    ),
+    DeclareLaunchArgument(
+        "right_dist_topic",
+        default_value="/right/right/obj_distance",
+        description="Right distance topic published by object_feedback_right",
+    ),
+
+    # Back feedback arguments
+    DeclareLaunchArgument(
+        "back_depth_topic",
+        default_value="/back/back/depth/image_rect_raw",
+        description="Back depth image topic subscribed by object_feedback_back",
+    ),
+    DeclareLaunchArgument(
+        "back_dist_topic",
+        default_value="/back/back/obj_distance",
+        description="Back distance topic published by object_feedback_back",
+    ),
+
+    # Left feedback arguments
+    DeclareLaunchArgument(
+        "left_depth_topic",
+        default_value="/left/left/depth/image_rect_raw",
+        description="Left depth image topic subscribed by object_feedback_left",
+    ),
+    DeclareLaunchArgument(
+        "left_dist_topic",
+        default_value="/left/left/obj_distance",
+        description="Left distance topic published by object_feedback_left",
+    ),
+
+    # Feedback all node arguments
+    DeclareLaunchArgument(
+        "success_topic",
+        default_value="/robot/vision/onTakeSuccess",
+        description="Success topic published by feedback_all_node",
+    ),
+    DeclareLaunchArgument(
+        "distance_threshold_mm",
+        default_value="190",
+        description="Distance threshold in mm for feedback_all_node to determine success",
+    ),
+]
+
+
+def generate_launch_description():
+
+    object_feedback_front = Node(
+        package="object_feedback",
+        executable="object_feedback_node",
+        name="object_feedback_front",
+        parameters=[
+            {"depth_topic": LaunchConfiguration("front_depth_topic")},
+            {"dist_topic": LaunchConfiguration("front_dist_topic")},
+        ],
+    )
+
+    object_feedback_right = Node(
+        package="object_feedback",
+        executable="object_feedback_node",
+        name="object_feedback_right",
+        parameters=[
+            {"depth_topic": LaunchConfiguration("right_depth_topic")},
+            {"dist_topic": LaunchConfiguration("right_dist_topic")},
+        ],
+    )
+
+    object_feedback_back = Node(
+        package="object_feedback",
+        executable="object_feedback_node",
+        name="object_feedback_back",
+        parameters=[
+            {"depth_topic": LaunchConfiguration("back_depth_topic")},
+            {"dist_topic": LaunchConfiguration("back_dist_topic")},
+        ],
+    )
+
+    object_feedback_left = Node(
+        package="object_feedback",
+        executable="object_feedback_node",
+        name="object_feedback_left",
+        parameters=[
+            {"depth_topic": LaunchConfiguration("left_depth_topic")},
+            {"dist_topic": LaunchConfiguration("left_dist_topic")},
+        ],
+    )
+
+    feedback_all = Node(
+        package="object_feedback",
+        executable="feedback_all_node",
+        name="feedback_all_node",
+        parameters=[
+            {"front_dist_topic": LaunchConfiguration("front_dist_topic")},
+            {"right_dist_topic": LaunchConfiguration("right_dist_topic")},
+            {"back_dist_topic": LaunchConfiguration("back_dist_topic")},
+            {"left_dist_topic": LaunchConfiguration("left_dist_topic")},
+            {"distance_threshold_mm": LaunchConfiguration("distance_threshold_mm")},
+            {"success_topic": LaunchConfiguration("success_topic")},
+        ],
+    )
+
+    ld = LaunchDescription(ARGUMENTS)
+
+    ld.add_action(object_feedback_front)
+    ld.add_action(object_feedback_right)
+    ld.add_action(object_feedback_back)
+    ld.add_action(object_feedback_left)
+    ld.add_action(feedback_all)
+
+    return ld
diff --git a/src/object_feedback/launch/single_feedback.launch.py b/src/object_feedback/launch/single_feedback.launch.py
new file mode 100644
index 0000000..952599a
--- /dev/null
+++ b/src/object_feedback/launch/single_feedback.launch.py
@@ -0,0 +1,37 @@
+from launch import LaunchDescription
+from launch.actions import DeclareLaunchArgument
+from 
launch.substitutions import LaunchConfiguration
+from launch_ros.actions import Node
+
+
+def generate_launch_description():
+
+    depth_topic_arg = DeclareLaunchArgument(
+        "depth_topic",
+        default_value="/camera/camera/depth/image_rect_raw",
+        description="Depth image topic subscribed by object_feedback_node",
+    )
+
+    dist_topic_arg = DeclareLaunchArgument(
+        "dist_topic",
+        default_value="/camera/camera/obj_distance",
+        description="Distance topic published by object_feedback_node",
+    )
+
+    # Launch single side feedback node
+    launch_fb = Node(
+        package="object_feedback",
+        executable="object_feedback_node",
+        name="object_feedback_node",
+        parameters=[
+            {"depth_topic": LaunchConfiguration("depth_topic")},
+            {"dist_topic": LaunchConfiguration("dist_topic")},
+        ],
+    )
+
+    ld = LaunchDescription()
+    ld.add_action(depth_topic_arg)
+    ld.add_action(dist_topic_arg)
+    ld.add_action(launch_fb)
+
+    return ld
\ No newline at end of file
diff --git a/src/object_feedback/package.xml b/src/object_feedback/package.xml
new file mode 100644
index 0000000..89e658a
--- /dev/null
+++ b/src/object_feedback/package.xml
@@ -0,0 +1,28 @@
+<?xml version="1.0"?>
+<package format="3">
+  <name>object_feedback</name>
+  <version>0.0.2</version>
+  <description>ROS2 C++ node for detecting ArUco markers and publishing object pose.</description>
+
+  <maintainer>Sean Chen</maintainer>
+  <license>Apache-2.0</license>
+
+  <buildtool_depend>ament_cmake</buildtool_depend>
+
+  <depend>rclcpp</depend>
+  <depend>sensor_msgs</depend>
+  <depend>std_msgs</depend>
+  <depend>cv_bridge</depend>
+  <depend>tf2</depend>
+  <depend>tf2_ros</depend>
+  <depend>tf2_geometry_msgs</depend>
+  <depend>tf2_eigen</depend>
+  <depend>geometry_msgs</depend>
+  <depend>visualization_msgs</depend>
+
+  <depend>opencv</depend>
+
+  <export>
+    <build_type>ament_cmake</build_type>
+  </export>
+</package>
diff --git a/src/object_feedback/src/feedback_all.cpp b/src/object_feedback/src/feedback_all.cpp
new file mode 100644
index 0000000..789bc6f
--- /dev/null
+++ b/src/object_feedback/src/feedback_all.cpp
@@ -0,0 +1,94 @@
+#include <rclcpp/rclcpp.hpp>
+#include <std_msgs/msg/int32.hpp>
+#include <std_msgs/msg/int32_multi_array.hpp>
+
+
+class FeedbackAll : public rclcpp::Node {
+public:
+  FeedbackAll() : Node("feedback_all_node") {
+
+    this->declare_parameter("left_topic", "/left/left/obj_distance");
+    this->declare_parameter("right_topic", "/right/right/obj_distance");
+    this->declare_parameter("front_topic", "/front/front/obj_distance");
+    this->declare_parameter("back_topic", "/back/back/obj_distance");
+    this->declare_parameter("success_topic", "/robot/vision/onTakeSuccess");
+    this->declare_parameter("distance_threshold_mm", 100);
+
+    left_topic_ = this->get_parameter("left_topic").as_string();
+    right_topic_ = this->get_parameter("right_topic").as_string();
+    front_topic_ = this->get_parameter("front_topic").as_string();
+    back_topic_ = this->get_parameter("back_topic").as_string();
+
+    distance_threshold_mm_ = this->get_parameter("distance_threshold_mm").as_int();
+
+    success_topic_ = this->get_parameter("success_topic").as_string();
+    success_publisher_ = this->create_publisher<std_msgs::msg::Int32MultiArray>(success_topic_, 10);
+
+    left_subscriber_ = this->create_subscription<std_msgs::msg::Int32>(
+        left_topic_, 10,
+        std::bind(&FeedbackAll::left_dist_callback, this, std::placeholders::_1));
+    right_subscriber_ = this->create_subscription<std_msgs::msg::Int32>(
+        right_topic_, 10,
+        std::bind(&FeedbackAll::right_dist_callback, this, std::placeholders::_1));
+    front_subscriber_ = this->create_subscription<std_msgs::msg::Int32>(
+        front_topic_, 10,
+        std::bind(&FeedbackAll::front_dist_callback, this, std::placeholders::_1));
+    back_subscriber_ = this->create_subscription<std_msgs::msg::Int32>(
+        back_topic_, 10,
+        std::bind(&FeedbackAll::back_dist_callback, this, std::placeholders::_1));
+  }
+
+private:
+  void left_dist_callback(const std_msgs::msg::Int32::SharedPtr msg) {
+    distances_[LEFT] = (msg->data < distance_threshold_mm_) ? 1 : 0;
+    // RCLCPP_INFO(this->get_logger(), "Received left distance: %d mm", msg->data);
+    check_success();
+  }
+
+  void right_dist_callback(const std_msgs::msg::Int32::SharedPtr msg) {
+    distances_[RIGHT] = (msg->data < distance_threshold_mm_) ? 1 : 0;
+    // RCLCPP_INFO(this->get_logger(), "Received right distance: %d mm", msg->data);
+    check_success();
+  }
+
+  void front_dist_callback(const std_msgs::msg::Int32::SharedPtr msg) {
+    distances_[FRONT] = (msg->data < distance_threshold_mm_) ? 1 : 0;
+    // RCLCPP_INFO(this->get_logger(), "Received front distance: %d mm", msg->data);
+    check_success();
+  }
+
+  void back_dist_callback(const std_msgs::msg::Int32::SharedPtr msg) {
+    distances_[BACK] = (msg->data < distance_threshold_mm_) ? 1 : 0;
+    // RCLCPP_INFO(this->get_logger(), "Received back distance: %d mm", msg->data);
+    check_success();
+  }
+
+  void check_success() {
+    std_msgs::msg::Int32MultiArray result_msg;
+    result_msg.data = distances_;
+    success_publisher_->publish(result_msg);
+    // RCLCPP_INFO(this->get_logger(), "Published success status: %d, %d, %d, %d", distances_[FRONT], distances_[RIGHT], distances_[BACK], distances_[LEFT]);
+  }
+
+  rclcpp::Subscription<std_msgs::msg::Int32>::SharedPtr left_subscriber_;
+  rclcpp::Subscription<std_msgs::msg::Int32>::SharedPtr right_subscriber_;
+  rclcpp::Subscription<std_msgs::msg::Int32>::SharedPtr front_subscriber_;
+  rclcpp::Subscription<std_msgs::msg::Int32>::SharedPtr back_subscriber_;
+  rclcpp::Publisher<std_msgs::msg::Int32MultiArray>::SharedPtr success_publisher_;
+  std::string left_topic_;
+  std::string right_topic_;
+  std::string front_topic_;
+  std::string back_topic_;
+  std::string success_topic_;
+  std::int32_t distance_threshold_mm_;
+  std::vector<std::int32_t> distances_{0, 0, 0, 0};  // front, right, back, left
+  enum Direction { FRONT = 0, RIGHT = 1, BACK = 2, LEFT = 3 };
+};
+
+int main(int argc, char **argv) {
+  rclcpp::init(argc, argv);
+  auto node = std::make_shared<FeedbackAll>();
+  rclcpp::spin(node);
+  rclcpp::shutdown();
+  return 0;
+}
\ No newline at end of file
diff --git a/src/object_feedback/src/object_feedback.cpp b/src/object_feedback/src/object_feedback.cpp
new file mode 100644
index 0000000..560b7b4
--- /dev/null
+++ b/src/object_feedback/src/object_feedback.cpp
@@ -0,0 +1,82 @@
+#include <rclcpp/rclcpp.hpp>
+#include <sensor_msgs/msg/image.hpp>
+#include <std_msgs/msg/int32.hpp>
+#include <cv_bridge/cv_bridge.h>
+#include <opencv2/opencv.hpp>
+
+
+class ObjectFeedback : public rclcpp::Node {
+public:
+  ObjectFeedback() : Node("object_feedback_node") {
+    this->declare_parameter("depth_topic", "/camera/camera/depth/image_rect_raw");
+    this->declare_parameter("dist_topic", "/camera/camera/obj_distance");
+    depth_topic_ = this->get_parameter("depth_topic").as_string();
+    dist_topic_ = this->get_parameter("dist_topic").as_string();
+    dist_publisher_ = this->create_publisher<std_msgs::msg::Int32>(dist_topic_, 10);
+    depth_subscriber_ = this->create_subscription<sensor_msgs::msg::Image>(
+        depth_topic_, 10,
+        std::bind(&ObjectFeedback::depth_img_callback, this, std::placeholders::_1));
+  }
+
+private:
+  void depth_img_callback(const sensor_msgs::msg::Image::SharedPtr msg) {
+    try {
+      cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(msg, msg->encoding);
+      cv::Mat depth_frame = cv_ptr->image;
+
+      int target_x = depth_frame.cols / 4;
+      double sum = 0.0;
+      int count = 0;
+
+      for (int v = 0; v < depth_frame.rows; ++v) {
+        if (msg->encoding == "16UC1") {
+          uint16_t depth_value = depth_frame.at<uint16_t>(v, target_x);
+          if (depth_value > 0) {
+            sum += depth_value;
+            count++;
+          }
+        }
+        else if (msg->encoding == "32FC1") {
+          float depth_value = depth_frame.at<float>(v, target_x);
+          if (depth_value > 0.0f) {
+            sum += depth_value;
+            count++;
+          }
+        }
+      }
+
+      if (count > 0) {
+        double average = sum / count;
+        std_msgs::msg::Int32 out_msg;
+        out_msg.data = static_cast<int>(average);
+        dist_publisher_->publish(out_msg);
+        // RCLCPP_INFO(this->get_logger(), "Average depth at column %d: %.2f mm", target_x, average);
+      }
+
+      // cv::Point pt1(depth_frame.cols / 4, 0);
+      // 
cv::Point pt2(depth_frame.cols / 4, depth_frame.rows);
+      // cv::Scalar line_color(255, 0, 0);
+      // int thickness = 2;
+      // cv::line(depth_frame, pt1, pt2, line_color, thickness);
+
+      // cv::imshow("Depth View", depth_frame * 100);
+      // cv::waitKey(1);
+
+    } catch (cv_bridge::Exception& e) {
+      RCLCPP_ERROR(this->get_logger(), "cv_bridge exception: %s", e.what());
+    }
+  }
+
+  rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr depth_subscriber_;
+  rclcpp::Publisher<std_msgs::msg::Int32>::SharedPtr dist_publisher_;
+  std::string depth_topic_;
+  std::string dist_topic_;
+};
+
+int main(int argc, char **argv) {
+  rclcpp::init(argc, argv);
+  auto node = std::make_shared<ObjectFeedback>();
+  rclcpp::spin(node);
+  rclcpp::shutdown();
+  return 0;
+}
\ No newline at end of file
diff --git a/src/realsense_ros/realsense2_camera/launch/rs_launch.py b/src/realsense_ros/realsense2_camera/launch/rs_launch.py
index 79a8e29..2ae96a5 100644
--- a/src/realsense_ros/realsense2_camera/launch/rs_launch.py
+++ b/src/realsense_ros/realsense2_camera/launch/rs_launch.py
@@ -47,7 +47,7 @@
         {'name': 'depth_module.infra_format', 'default': 'RGB8', 'description': 'infra0 stream format'},
         {'name': 'depth_module.infra1_format', 'default': 'Y8', 'description': 'infra1 stream format'},
         {'name': 'depth_module.infra2_format', 'default': 'Y8', 'description': 'infra2 stream format'},
-        {'name': 'depth_module.color_profile', 'default': '0,0,0', 'description': 'Depth module color stream profile for d405'},
+        {'name': 'depth_module.color_profile', 'default': '848,480,60', 'description': 'Depth module color stream profile for d405'},
         {'name': 'depth_module.color_format', 'default': 'RGB8', 'description': 'color stream format for d405'},
         {'name': 'depth_module.exposure', 'default': '8500', 'description': 'Depth module manual exposure value'},
         {'name': 'depth_module.gain', 'default': '16', 'description': 'Depth module manual gain value'},
@@ -139,19 +139,26 @@ def launch_setup(context, params, param_name_suffix=''):
         )
     ]

+# 0: '0.0', '0.125', '0.30', '1.571', '0.7854', '-1.571'   (the front camera z is probably wrong)
+# 1: '0.125', '0.0', '0.30', '0.0', '0.7854', '-1.571'
+# 2: '0.0', '-0.125', '0.30', '-1.571', '0.7854', '-1.571'
+# 3: '-0.125', '0.0', '0.30', '-3.142', '0.7854', '-1.571'
+def launch_map_transform_publisher_node(context: LaunchContext):
+    node = launch_ros.actions.Node(
+        name='map_transform_publisher',
+        package="tf2_ros",
+        executable="static_transform_publisher",
+        arguments=[
+            # new values for the red-black robot
+            '0.0', '-0.125', '0.30', '-1.571', '0.7854', '-1.571',
+            'base_footprint',
+            context.launch_configurations['camera_name'] + '_link'
+        ]
+    )
+    return [node]
+
 def generate_launch_description():
     return LaunchDescription(declare_configurable_parameters(configurable_parameters) + [
         OpaqueFunction(function=launch_setup, kwargs = {'params' : set_configurable_parameters(configurable_parameters)}),
-        launch_ros.actions.Node(
-            name='map_transform_publisher',
-            package="tf2_ros",
-            executable="static_transform_publisher",
-            arguments=[
-                # new red-black robot back camera
-                '0.0', '-0.125', '0.30',      # x, y, z
-                '-1.571', '0.7854', '-1.571', # yaw, pitch, roll
-                'base_footprint',
-                [LaunchConfiguration('camera_name'), '_link']
-            ]
-        )
+        OpaqueFunction(function=launch_map_transform_publisher_node)  # static transform publisher for the camera link
    ])
diff --git a/src/realsense_ros/realsense2_camera/launch/rs_multi_camera_launch.py b/src/realsense_ros/realsense2_camera/launch/rs_multi_camera_launch.py
index d7a2205..5c484dd 100644
--- a/src/realsense_ros/realsense2_camera/launch/rs_multi_camera_launch.py
+++ b/src/realsense_ros/realsense2_camera/launch/rs_multi_camera_launch.py
@@ -35,10 +35,15 @@
 sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
 import rs_launch

-local_parameters = [{'name': 'camera_name1', 'default': 'camera1', 'description': 'camera1 unique name'},
-                    {'name': 'camera_name2', 'default': 'camera2', 'description': 'camera2 unique name'},
-                    {'name': 'camera_namespace1', 'default': 'camera1', 'description': 'camera1 namespace'},
-                    {'name':
'camera_namespace2', 'default': 'camera2', 'description': 'camera2 namespace'},
+local_parameters = [{'name': 'camera_name1', 'default': 'back', 'description': 'camera1 unique name'},
+                    {'name': 'camera_name2', 'default': 'left', 'description': 'camera2 unique name'},
+                    {'name': 'camera_namespace1', 'default': 'back', 'description': 'camera1 namespace'},
+                    {'name': 'camera_namespace2', 'default': 'left', 'description': 'camera2 namespace'},
+                    {'name': 'serial_no1', 'default': '_419122270813', 'description': 'camera1 serial number'},
+                    {'name': 'serial_no2', 'default': '_218622276534', 'description': 'camera2 serial number'},
+                    {'name': 'camera_name3', 'default': 'right', 'description': 'camera3 unique name'},
+                    {'name': 'camera_namespace3', 'default': 'right', 'description': 'camera3 namespace'},
+                    {'name': 'serial_no3', 'default': '_218622276687', 'description': 'camera3 serial number'},
                    ]

 def yaml_to_dict(path_to_yaml):
@@ -53,6 +58,22 @@ def duplicate_params(general_params, posix):
     for param in local_params:
         param['original_name'] = param['name']
         param['name'] += posix
+        if param['original_name'] == 'enable_depth':
+            param['default'] = 'true'
+        elif param['original_name'] == 'initial_reset':
+            param['default'] = 'false'
+        elif param['original_name'] == 'enable_color':
+            param['default'] = 'true'
+        elif param['original_name'] == 'enable_infra':
+            param['default'] = 'false'
+        elif param['original_name'] == 'enable_infra1':
+            param['default'] = 'false'
+        elif param['original_name'] == 'enable_infra2':
+            param['default'] = 'false'
+        elif param['original_name'] == 'enable_gyro':
+            param['default'] = 'false'
+        elif param['original_name'] == 'enable_accel':
+            param['default'] = 'false'
     return local_params

 def launch_static_transform_publisher_node(context : LaunchContext):
@@ -67,24 +88,50 @@ def launch_static_transform_publisher_node(context : LaunchContext):
     log_message = "Launching as LifecycleNode" if use_lifecycle_node else "Launching as Normal ROS Node"

     # dummy static transformation from camera1 to camera2
-    node = node_action(
-        namespace="",
-        name="tf2_static_transform_publisher",
-        package = "tf2_ros",
-        executable = "static_transform_publisher",
-        arguments = ["0", "0", "0", "0", "0", "0",
-                     context.launch_configurations['camera_name1'] + "_link",
-                     context.launch_configurations['camera_name2'] + "_link"]
+    node2 = node_action(
+        namespace="",
+        name="tf2_static_transform_publisher",
+        package = "tf2_ros",
+        executable = "static_transform_publisher",
+        arguments=[
+            '0.0', '-0.125', '0.30', '-1.571', '0.7854', '-1.571',
+            'base_footprint',
+            context.launch_configurations['camera_name1'] + '_link'
+        ]
     )
-    return [LogInfo(msg=f"🚀 {log_message}"), node]
+    node3 = node_action(
+        namespace="",
+        name="tf2_static_transform_publisher",
+        package = "tf2_ros",
+        executable = "static_transform_publisher",
+        arguments=[
+            '-0.125', '0.0', '0.30', '-3.142', '0.7854', '-1.571',
+            'base_footprint',
+            context.launch_configurations['camera_name2'] + '_link'
+        ]
+    )
+    node4 = node_action(
+        namespace="",
+        name="tf2_static_transform_publisher",
+        package = "tf2_ros",
+        executable = "static_transform_publisher",
+        arguments=[
+            '0.125', '0.0', '0.30', '0.0', '0.7854', '-1.571',
+            'base_footprint',
+            context.launch_configurations['camera_name3'] + '_link'
+        ]
+    )
+    return [LogInfo(msg=f"🚀 {log_message}"), node2, node3, node4]

 def generate_launch_description():
     params1 = duplicate_params(rs_launch.configurable_parameters, '1')
     params2 = duplicate_params(rs_launch.configurable_parameters, '2')
+    params3 = duplicate_params(rs_launch.configurable_parameters, '3')
     return LaunchDescription(
         rs_launch.declare_configurable_parameters(local_parameters) +
         rs_launch.declare_configurable_parameters(params1) +
         rs_launch.declare_configurable_parameters(params2) +
+        rs_launch.declare_configurable_parameters(params3) +
         [
         OpaqueFunction(function=rs_launch.launch_setup,
                        kwargs = {'params' : set_configurable_parameters(params1),
@@ -92,5 +139,8 @@ def generate_launch_description():
         OpaqueFunction(function=rs_launch.launch_setup,
                        kwargs = {'params' : set_configurable_parameters(params2),
                                  'param_name_suffix': '2'}),
+        OpaqueFunction(function=rs_launch.launch_setup,
+                       kwargs = {'params' : set_configurable_parameters(params3),
+                                 'param_name_suffix': '3'}),
         OpaqueFunction(function=launch_static_transform_publisher_node)
     ])
diff --git a/start_docker.sh b/start_docker.sh
index c0d830e..c933b4b 100644
--- a/start_docker.sh
+++ b/start_docker.sh
@@ -2,5 +2,8 @@
 cd docker/ || { echo "Directory 'docker/' not found"; exit 1; }

-# Build AND then Up
+# Take down running containers and networks first
+docker compose down
+
+# Build and start fresh
 docker compose build && docker compose up -d
\ No newline at end of file
diff --git a/start_vision_tmux.sh b/start_vision_tmux.sh
index aab294c..68b1234 100644
--- a/start_vision_tmux.sh
+++ b/start_vision_tmux.sh
@@ -12,44 +12,101 @@
 if [ $? == 0 ]; then
     tmux kill-session -t $SESSION_NAME
 fi

-# Create a new tmux session named 'vision' but don't attach to it yet
+# Create a new detached session
 tmux new-session -d -s $SESSION_NAME

-# Send commands to the first pane (pane 0)
-# Enter the running docker container
-tmux send-keys -t $SESSION_NAME:0 'docker exec -it vision-ws bash' C-m
-# Wait a brief moment for the container shell to be ready
-sleep 1
-# Source ROS 2 Humble setup and local workspace setup
-# tmux send-keys -t $SESSION_NAME:0 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash && ros2 launch realsense2_camera rs_launch.py camera_name:=rb_back_camera' C-m
-tmux send-keys -t $SESSION_NAME:0 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash && ros2 launch realsense2_camera rs_launch.py' C-m
+# Get the window ID to be robust against base-index settings
+WIN_ID=$(tmux list-windows -t $SESSION_NAME -F "#{window_index}" | head -n 1)
+
+# === Window 0: Vision System 
(2x2 grid) ===
+# Start with Pane 0 (Top Left)
+
+# 1. Split vertically: Top (0) and Bottom (1)
+tmux split-window -v -t ${SESSION_NAME}:${WIN_ID}
+
+# 2. Split Top (0) horizontally: TL (0) and TR (1). Old Bottom becomes 2.
+tmux split-window -h -t ${SESSION_NAME}:${WIN_ID}.0
+
+# 3. Split Bottom (2) horizontally: BL (2) and BR (3).
+tmux split-window -h -t ${SESSION_NAME}:${WIN_ID}.2
-# Split the window horizontally to create a second pane
-tmux split-window -h -t $SESSION_NAME:0
+# Ensure tiled layout
+tmux select-layout -t ${SESSION_NAME}:${WIN_ID} tiled
-# Send commands to the second pane (pane 1)
-# Enter the running docker container
-tmux send-keys -t $SESSION_NAME:0.1 'docker exec -it vision-ws bash' C-m
-# Wait a brief moment
+# Common setup for all panes.
+# After the splits and the tiled layout, panes are indexed:
+#   0 -> TL, 1 -> TR, 2 -> BL, 3 -> BR
+# All pane targeting below relies on this ordering.
+ +# Pane 0 (Top Left): All Cameras +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.0 'docker exec -it vision-ws bash' C-m +sleep 1 +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.0 "export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash && ros2 launch realsense2_camera rs_multi_camera_launch.py" C-m + +# Pane 1 (Top Right): Detector +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.1 'docker exec -it vision-ws bash' C-m sleep 1 -# Source ROS 2 Humble setup and local workspace setup -tmux send-keys -t $SESSION_NAME:0.1 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash && ros2 launch aruco_object object_detector.launch.py' C-m +DETECTOR_CMD=' +export ROS_DOMAIN_ID=13 +source /opt/ros/humble/setup.bash +source install/setup.bash -# Split the detector pane (pane 1) vertically to create a third pane for the scanner -tmux split-window -v -t $SESSION_NAME:0.1 +echo "=== [Detector] Waiting for BACK, LEFT, RIGHT camera topics... ===" +while true; do + BACK=$(ros2 topic list 2>/dev/null | grep -E "/back/back/color/camera_info") + LEFT=$(ros2 topic list 2>/dev/null | grep -E "/left/left/color/camera_info") + RIGHT=$(ros2 topic list 2>/dev/null | grep -E "/right/right/color/camera_info") + if [ -n "$BACK" ] && [ -n "$LEFT" ] && [ -n "$RIGHT" ]; then + echo "=== [Detector] All 3 cameras found! ===" + break + fi + echo "[Detector] Waiting... 
(back:${BACK:-missing} left:${LEFT:-missing} right:${RIGHT:-missing})" + sleep 2 +done +sleep 2 +echo "=== [Detector] Launching ArUco Detector ===" +ros2 launch aruco_object detector_multi.launch.py +' +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.1 "$DETECTOR_CMD" C-m -# Send commands to the third pane (pane 2) -tmux send-keys -t $SESSION_NAME:0.2 'docker exec -it vision-ws bash' C-m +# Pane 2 (Bottom Left): Scanner +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.2 'docker exec -it vision-ws bash' C-m sleep 1 -tmux send-keys -t $SESSION_NAME:0.2 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash && ros2 launch aruco_object scanner.launch.py' C-m +SCANNER_CMD=' +export ROS_DOMAIN_ID=13 +source /opt/ros/humble/setup.bash +source install/setup.bash + +echo "=== [Scanner] Waiting for BACK, LEFT, RIGHT camera topics... ===" +while true; do + BACK=$(ros2 topic list 2>/dev/null | grep -E "/back/back/color/camera_info") + LEFT=$(ros2 topic list 2>/dev/null | grep -E "/left/left/color/camera_info") + RIGHT=$(ros2 topic list 2>/dev/null | grep -E "/right/right/color/camera_info") + if [ -n "$BACK" ] && [ -n "$LEFT" ] && [ -n "$RIGHT" ]; then + echo "=== [Scanner] All 3 cameras found! ===" + break + fi + echo "[Scanner] Waiting... 
(back:${BACK:-missing} left:${LEFT:-missing} right:${RIGHT:-missing})" + sleep 2 +done +sleep 2 +echo "=== [Scanner] Launching ArUco Scanner ===" +ros2 launch aruco_object scanner_multi.launch.py +' +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.2 "$SCANNER_CMD" C-m -# Create a new window for Rviz -tmux new-window -t $SESSION_NAME:1 -n 'rviz' -# Send commands to the new window -tmux send-keys -t $SESSION_NAME:1 'docker exec -it vision-ws bash' C-m +# Pane 3 (Bottom Right): Shell / Utils +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'docker exec -it vision-ws bash' C-m sleep 1 -tmux send-keys -t $SESSION_NAME:1 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && source install/setup.bash' C-m -tmux send-keys -t $SESSION_NAME:1 'ros2 run rviz2 rviz2' C-m +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'export ROS_DOMAIN_ID=13 && source /opt/ros/humble/setup.bash && source install/setup.bash' C-m +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'echo "Ready to use. Run: ros2 topic list"' C-m +tmux send-keys -t ${SESSION_NAME}:${WIN_ID}.3 'ros2 topic list' C-m # Function to kill the session when script exits cleanup() { @@ -60,5 +117,5 @@ cleanup() { trap cleanup EXIT # Attach to the tmux session to view the output -tmux select-window -t $SESSION_NAME:0 +tmux select-window -t ${SESSION_NAME}:${WIN_ID} tmux attach-session -t $SESSION_NAME diff --git a/white_back_camera.sh b/white_back_camera.sh new file mode 100644 index 0000000..32d69a2 --- /dev/null +++ b/white_back_camera.sh @@ -0,0 +1,67 @@ +#!/bin/bash + +SESSION="white_back_camera_test" + +# Check if session exists and kill it if it does +tmux has-session -t $SESSION 2>/dev/null +if [ $? -eq 0 ]; then + echo "Session $SESSION already exists. Killing it." 
+ tmux kill-session -t $SESSION
+fi
+
+# Create a new detached session
+tmux new-session -d -s $SESSION
+
+# Get the window ID to be robust against base-index settings
+WIN_ID=$(tmux list-windows -t $SESSION -F "#{window_index}" | head -n 1)
+
+# Split the window into 4 panes in a grid
+# Start with Pane 0 (Whole screen)
+
+# 1. Split vertically: Top (0) and Bottom (1)
+tmux split-window -v -t ${SESSION}:${WIN_ID}.0
+
+# 2. Split Top (0) horizontally: TL (0) and TR (1). Old Bottom becomes 2.
+tmux split-window -h -t ${SESSION}:${WIN_ID}.0
+
+# 3. Split Bottom (2) horizontally: BL (2) and BR (3).
+tmux split-window -h -t ${SESSION}:${WIN_ID}.2
+
+# Ensure tiled layout
+tmux select-layout -t ${SESSION}:${WIN_ID} tiled
+
+# Common commands for ALL panes
+for i in {0..3}; do
+ TARGET="${SESSION}:${WIN_ID}.${i}"
+ tmux send-keys -t $TARGET "docker exec -it vision-ws bash" C-m
+ tmux send-keys -t $TARGET "export ROS_DOMAIN_ID=11" C-m
+ tmux send-keys -t $TARGET "source /opt/ros/humble/setup.bash && colcon build" C-m
+ tmux send-keys -t $TARGET "source install/setup.bash" C-m
+ tmux send-keys -t $TARGET "clear" C-m
+done
+
+# Upper Left (Pane 0)
+tmux send-keys -t ${SESSION}:${WIN_ID}.0 "ros2 launch realsense2_camera rs_launch.py \
+ camera_namespace:=back \
+ camera_name:=back \
+ serial_no:=\"'218622278918'\" \
+ rgb_camera.color_profile:=640,480,15 \
+ enable_depth:=false" C-m
+
+# Upper Right (Pane 1)
+tmux send-keys -t ${SESSION}:${WIN_ID}.1 "ros2 run aruco_object aruco_row_scanner --ros-args \
+ -p camera_position:=back \
+ -p team_color:=yellow" C-m
+
+# Lower Left (Pane 2)
+tmux send-keys -t ${SESSION}:${WIN_ID}.2 "ros2 run aruco_object aruco_detector_node --ros-args \
+ -p camera_position:=back" C-m
+
+# Lower Right (Pane 3)
+tmux send-keys -t ${SESSION}:${WIN_ID}.3 "ros2 topic hz /detected_dock_pose" C-m
+
+# Attach to session
+tmux attach -t $SESSION
+
+# Kill session when detached (script continues here after user detaches)
+tmux kill-session -t $SESSION
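
The Detector and Scanner panes in `start_vision_tmux.sh` embed the same topic-polling loop twice. A minimal sketch of how that loop could be factored into one reusable function; `wait_for_topics` and `fake_topic_list` are illustrative names, not part of the repo (the stub stands in for `ros2 topic list` so the sketch runs without ROS):

```shell
#!/bin/bash

# Poll the given listing command until every requested topic appears.
# Usage: wait_for_topics <list_cmd> <topic> [<topic> ...]
wait_for_topics() {
    local list_cmd="$1"; shift
    local t all_up
    while true; do
        all_up=1
        for t in "$@"; do
            # -F: match the topic name literally, not as a regex
            if ! "$list_cmd" 2>/dev/null | grep -qF -- "$t"; then
                all_up=0
                break
            fi
        done
        [ "$all_up" -eq 1 ] && return 0
        sleep 2
    done
}

# Stub standing in for `ros2 topic list` in this self-contained sketch
fake_topic_list() {
    printf '%s\n' /back/back/color/camera_info \
                  /left/left/color/camera_info \
                  /right/right/color/camera_info
}

wait_for_topics fake_topic_list \
    /back/back/color/camera_info \
    /left/left/color/camera_info \
    /right/right/color/camera_info && echo "all cameras up"
```

Inside the container, the Detector pane would call `wait_for_topics "ros2 topic list" /back/back/color/camera_info …` before launching, keeping the two `*_CMD` strings short.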
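
The `export ROS_DOMAIN_ID && source && colcon build && source` chain is repeated in every pane of both tmux scripts. One way to keep it in a single place is a small command builder; `pane_setup_cmd` is a hypothetical helper, not something the repo currently defines:

```shell
#!/bin/bash

# Build the common per-pane setup command for a given ROS domain ID.
# The chain mirrors what the tmux scripts already send to each pane.
pane_setup_cmd() {
    local domain_id="$1"
    printf 'export ROS_DOMAIN_ID=%s && source /opt/ros/humble/setup.bash && colcon build && source install/setup.bash' "$domain_id"
}

# Illustrative use inside a tmux script:
#   tmux send-keys -t "$TARGET" "$(pane_setup_cmd 13)" C-m
echo "$(pane_setup_cmd 13)"
```

Changing the domain ID (13 in `start_vision_tmux.sh`, 11 in `white_back_camera.sh`) then stays a one-argument change instead of an edit in every `send-keys` line.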