
OpenCDA-Infra

TDG Attribution Non-Commercial Share Alike

RELEASE

OpenCDA-Infra is a multi-modality sensor configurator for CARLA.

Features

  • Multiple cameras
  • Multiple LiDAR-camera setups

How to use

Step 0

You need to install a prebuilt CARLA release first (e.g., 0.9.11 or 0.9.12).

Step 1 Install Package

git clone https://github.com/zhz03/OpenCDA-Infra.git
cd OpenCDA-Infra
conda create -n opencda_infra python=3.7 
conda activate opencda_infra
git checkout dev
pip install -r requirements.txt
python setup.py develop
export CARLA_HOME=/path/to/your/CARLA_ROOT # for example: /home/zzl/Carla/CARLA_0.9.12
export CARLA_VERSION=0.9.11 # or 0.9.12, depending on your CARLA release
. setup.sh
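
Before continuing, you can sanity-check that the environment variables point at a real CARLA installation. The helper below is ours, not part of OpenCDA-Infra; it only reconstructs the paths a prebuilt CARLA release is expected to ship:

```python
import os

# Hypothetical helper (not part of the repo): build the paths the setup
# script is expected to rely on, so CARLA_HOME can be checked up front.
def carla_paths(carla_home, carla_version):
    return {
        "launcher": os.path.join(carla_home, "CarlaUE4.sh"),
        # prebuilt releases ship the Python API as eggs under PythonAPI/carla/dist
        "egg_dir": os.path.join(carla_home, "PythonAPI", "carla", "dist"),
        "version": carla_version,
    }

paths = carla_paths(os.environ.get("CARLA_HOME", "/home/zzl/Carla/CARLA_0.9.12"),
                    os.environ.get("CARLA_VERSION", "0.9.12"))
print(paths["launcher"])
```

If `os.path.isfile(paths["launcher"])` is false, revisit the exports above before moving on.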

Step 2 Launch Carla

Launch CARLA with the following command:

cd /path/to/your/CARLA_ROOT
./CarlaUE4.sh

Step 3 Configure Yaml File

Modify your yaml file to create the scenario you want.

The yaml files are located in [config_yaml](opencda_infra/config_yaml); an annotated Town06 template follows:

description: |-
  Copyright 2021 <UCLA Mobility Lab>
  Author: Zhaoliang Zheng <zhz03@g.ucla.edu>
  Content: This is the template configuration files in Town06

file_save: true # The main switch controlling whether sensor data files are saved

world: # Tested all the parameters in world
  sync_mode: true
  town: 'Town06'
  client_port: 2000
  fixed_delta_seconds: &delta 0.1 # simulation step; frame rate = 1/delta, so 0.1 = 10 Hz and 0.05 = 20 Hz # // TESTED: 0.05, 0.1
  seed: 0
  spectator_pos: [-1.3813, 20.0, 60.0, -90.00, 0.00, 0.00] # //  TESTED: [-1.3813, 45.0, 60.0, -90.00, 0.00, 0.00], [-1.3813, 20.0, 60.0, -90.00, 0.00, 0.00]
  time_period: 20 # // TESTED: 20, 40
  weather:
    sun_altitude_angle: 0 # 90 is the midday and -90 is the midnight # // TESTED: 90, -90
    cloudiness: 0 # 0 is the clean sky and 100 is the thickest cloud # // TESTED: 0, 50, 100
    precipitation: 0 # rain, 100 is the heaviest rain # // TESTED: 0, 50, 100
    precipitation_deposits: 0 # Determines the creation of puddles. Values range from 0 to 100: 0 is none at all, 100 is a road completely capped with water. # // TESTED: 0, 50, 100
    wind_intensity: 0 # it will influence the rain
    fog_density: 0 # fog thickness, 100 is the largest  # // TESTED: 0, 50, 100
    fog_distance: 0  # Fog start distance. Values range from 0 to infinite.
    fog_falloff: 0 # Density of the fog (as in specific mass) from 0 to infinity. The bigger the value, the more dense and heavy it will be, and the fog will reach smaller heights
    wetness: 0 

# display_base:
#   grid_size:
#     row: 2
#     col: 3
#   width: 640 # width of single window
#   height: 480 

# Define the basic parameters of the road side unit
rsu_base: &rsu_base
  sensors: &sensors_base
    cameras: &camera_base
      attached: None
      visualize: [0] # list [0,1,2] means index of camera images need to be visualized. 0 means no visualization for camera
      num: 1 # how many cameras are mounted on the smart infrastructure.
      frame_rate: 20 # need to modify 
      custom_enable: false
      custom_camera_name: # only read when custom_enable == true
        - 'stereo-zed'
      positions: # (x,y,z,pitch,yaw,roll)
        - [5.74, 36.64, 7.52, -21.21, 90.00, 0.00]
      parameters: # per-camera intrinsic parameters
        - image_size_x: 800 
          image_size_y: 600
          fov: 120
      save: [0] # if save is an int (e.g., 0) instead of a list, save_path is not used
      save_path: '/home/zzl/zhaoliang/zhz03_github/Multi-Mod_Sensor_Config_Lib/code/camera_img' # modify this path if saving is enabled
    lidars: &lidar_base
      attached: None
      visualize: [0] # 0 or # [0,1,2]
      num: 1 # how many lidars are mounted on the smart infrastructure.
      frame_rate: 10 
      custom_enable: false # false means to use the default lidar provided by carla, true means to use the custom lidar
      custom_lidar_name: # only read when custom_enable == true; options: 'ouster-64', 'rs-ruby-128'
        - 'ouster-64'
      positions: # position of each lidar sensors
        - [5.74, 36.64, 7.52, 0.0, 0.00, 0.00]
      parameters:
        - channels: 32 # default: 32
          range: 100.0 # default: 10.0 
          points_per_second: 1000000 # default: 56000
          rotation_frequency: 10.0 # default: 10.0
          # upper_fov: 20.0 # default: 10.0
          # lower_fov: -25.0 # default: -30.0
          # atmosphere_attenuation_rate: 0.004 # default: 0.004
          # dropoff_general_rate: 0.3 # default: 0.45
          # dropoff_intensity_limit: 0.7 # default: 0.8
          # dropoff_zero_intensity: 0.4 # default: 0.4 
          # noise_stddev: 0.02 # default: 0.0
      save: 0 # if 0, save_path is not used
      # please modify the following path accordingly if saving is enabled
      save_path: '/home/zzl/zhaoliang/zhz03_github/Multi-Mod_Sensor_Config_Lib/data/lidar_pcd' 
    radars: &radar_base
      attached: None
      visualize: [0]
      num: 1 # how many radars are mounted on the smart infrastructure.
      frame_rate: 20 
      custom_enable: false
      custom_radar_name: # only read when custom_enable == true
        - 'SRIR144V3' # https://www.renesas.com/us/en/products/rf-products/automotive-radar-sensors/srir144v3-high-resolution-real-time-4d-imaging-radar-system-automotive
      positions:
        - [5.74, 36.64, 7.52, 0.0, 0.00, 0.00]
      parameters: &radar_param_base
        - channels: 32
          range: 40.0
          points_per_second: 10000
          horizontal_fov: 35
          vertical_fov: 20
      save: 0 # if false, then save_path will not be used  
      save_path: '/home/zzl/zhaoliang/zhz03_github/Multi-Mod_Sensor_Config_Lib/data/radar_pcd'

bg_veh_base: &bg_veh_base
  spawn_position: [12.00, 192.31, 0.3, 0, -90, 0]
  destination: [67.12, 150.2, 1.0]
  class: 'car' # 'car', 'truck', 'van', 'cyclist', 'bus' # not 'pedestrian'
  model_index: 0
  color: 'defualt' # 'defualt', 'random', 'white', 'black', 'red', 'green', 'blue', 'yellow', 'cyan'
  autopilot: true
  auto_lane_change: false
  ignore_lights_percentage: 0 # percentage of traffic lights to ignore: 0 obeys all, 100 ignores all
  ignore_signs_percentage: 0 # percentage of traffic signs to ignore: 0 obeys all, 100 ignores all
  distance_to_leading_vehicle: 5 # distance in meters to keep from the leading vehicle
  ignore_vehicles_percentage: 0 # percentage of other vehicles to ignore: 0 ignores none, 100 ignores all
  #  Carla default speed is 30 km/h, so -100 represents 60 km/h,
  # and 20 represents 24 km/h
  speed_perc: -50  
  route: # if autopilot is false, then route is required
    - [12.00, 192.31, 0.3]
    - [67.12, 150.2, 1.0]

bg_ped_base: &bg_ped_base
  ai_control: false # without AI control the pedestrian stays on the traffic road; with AI control it can cross the curb
  spawn_position: [12.00, 192.31, 0.3, 0, -90, 0]
  destination: [67.12, 150.2, 1.0]  
  model_index: 0  # model index of the pedestrian
  speed: 1.4 # speed of the pedestrian or max speed under ai control

rsu_list: # define the list of road side units
  - <<: *rsu_base
    id: 125
    center_pos: [-0.39227, 36.645, 2.5]
    sensors:
      <<: *sensors_base
      cameras:
        <<: *camera_base
        visualize: [0,1] # list [0,1,2] means index of camera images need to be visualized. 0 means no visualization for camera
        num: 2 # how many cameras are mounted on the smart infrastructure.
        positions:
          - [5.74, 36.64, 7.52, -21.21, 90.00, 0.00]
          - [-0.548747, 47.392021, 6.857137, -10.650908, -0.318420, 0.00]
        parameters:
          - image_size_x: 1920
            image_size_y: 1080
            fov: 120
          - image_size_x: 800
            image_size_y: 600
            fov: 60
        frame_rate: 10 # // TESTED: 10, 20 
        save: [1,1] # // TESTED: [0], [1]
      lidars:
        <<: *lidar_base
        frame_rate: 10 # // TESTED: 10
        save: [1]
      radars: 
        <<: *radar_base 
        positions:
          - [1.74, 136.64, 17.52, 0.0, 0.00, 0.00]
        parameters:
          - range: 20.0
            horizontal_fov: 35
            vertical_fov: 20
# define the background traffic control by carla
carla_traffic_manager:
  sync_mode: true # has to be same as the world setting
  global_distance: 5 # the minimum distance in meters that vehicles have to keep with the rest
  # Sets the difference the vehicle's intended speed and its current speed limit.
  #  Carla default speed is 30 km/h, so -100 represents 60 km/h,
  # and 20 represents 24 km/h, -30 means (1 + 0.3) * default speed
  global_speed_perc: -100
  set_osm_mode: true # Enables or disables the OSM mode.
  auto_lane_change: false
  ignore_lights_percentage: 0 # percentage of traffic lights to ignore: 0 obeys all, 100 ignores all
  random: true # whether to randomly select vehicles' color and model
  vehicle_list: ~  # a number or a list, defined per scenario; if set to ~, vehicles are spawned within the range below
  # Used only when vehicle_list is ~
  # x_min, x_max, y_min, y_max, x_step, y_step, vehicle_num
  range:
    - [ 2, 10, 0, 200, 3.5, 25, 30]

traffic_manager:
  sync_mode: true
  set_osm_mode: true # Enables or disables the OSM mode.
  # =================
  global_distance: 5 # the minimum distance in meters that vehicles have to keep with the rest
  # Sets the difference between the vehicle's intended speed and its current speed limit.
  # Carla default speed is 30 km/h, so -100 represents 60 km/h,
  # and 20 represents 24 km/h; -30 means (1 + 0.3) * default speed
  global_speed_perc: -100
  auto_lane_change: false
  ignore_lights_percentage: 0 # percentage of traffic lights to ignore: 0 obeys all, 100 ignores all
  # =================
  # 'list': use the veh_list and ped_list below
  # 'range': generate vehicles and pedestrians by range
  # 'mixed': generate vehicles and pedestrians by both range and list
  spawn_type: 'list'
  random: true # whether to randomly select vehicles' color and model
  # Used only when spawn_type is 'range'
  # x_min, x_max, y_min, y_max, x_step, y_step, vehicle_num
  range:
    - [ 2, 10, 0, 200, 3.5, 25, 30]
  veh_list:
    - <<: *bg_veh_base
      class: 'car'
      ignore_vehicles_percentage: 100
      ignore_lights_percentage: 100
      ignore_signs_percentage: 100
      spawn_position: [5.282584190368652, 66.81765747070312, 1.2122421264648438, 0, -90, 0]
      destination: [67.12, 150.2, 1.0]
      speed_perc: 0
    - <<: *bg_veh_base
      class: 'cyclist'
      ignore_vehicles_percentage: 100
      ignore_lights_percentage: 100
      ignore_signs_percentage: 100
      spawn_position: [6.050206184387207, 32.05589294433594, 0.8515968918800354, 0, -90, 0]
      destination: [67.12, 150.2, 1.0]
      speed_perc: -100
      model_index: 1
  ped_list:
    - << : *bg_ped_base
      ai_control: true
      spawn_position: [16.19649887084961, 29.52642059326172, 1.460180377960205, 0, -90, 0]
      destination: [15.181740760803223, 67.7674560546875, 1.1183170795440674, 0, -90, 0]
      model_index: 0
      speed: 1.5
    - << : *bg_ped_base
      ai_control: true
      spawn_position: [24.428836822509766, 59.77539825439453, 1.5682913064956665, 0, -90, 0]
      destination: [-21.417505264282227, 60.44722366333008, 1.2296725511550903, 0, 0, 0]
      model_index: 1
      speed: 1.4
    - << : *bg_ped_base
      ai_control: true
      spawn_position: [16.24782371520996, 59.50773620605469, 1.4464819431304932, 0, -90, 0]
      destination: [-1.8368762731552124, 73.67555236816406, 1.1498802185058594, 0, 0, 0]
      model_index: 1
      speed: 1.3
    - << : *bg_ped_base
      ai_control: false
      spawn_position: [-2.26867413520813, 22.841426849365234, 1.3622042894363403, 0, -90, 0]
      destination: [14.644152641296387, 43.73273849487305, 1.102763295173645, 0, -90, 0]
      model_index: 1
      speed: 1.2
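
The speed_perc and global_speed_perc comments in the config above follow CARLA's speed-difference convention: the value is the percentage by which a vehicle drives below the 30 km/h default, so negative values mean faster. A quick check of the numbers quoted in the comments:

```python
# Sketch of the speed_perc arithmetic used in the config comments above.
# speed_perc is the percentage below the default speed; negative = faster.
def target_speed_kmh(speed_perc, default_kmh=30.0):
    return default_kmh * (1.0 - speed_perc / 100.0)

print(target_speed_kmh(-100))  # 60.0 km/h, as the comments state
print(target_speed_kmh(20))    # 24.0 km/h
print(target_speed_kmh(-50))   # 45.0 km/h, the bg_veh_base setting
```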

Step 4 Launch Code

There are three modes to run the scenario code:

  • single scenario mode
  • batch scenario mode
  • multi scenario mode

Single Mode

For single scenario mode, you should run the following code:

conda activate opencda_infra
# in the root directory 
python scenario_runner.py -s {your_yaml_file_name}
  • {your_yaml_file_name} is the name of your yaml file, without the extension. For example, your command can be:

    python scenario_runner.py -s fourway_town10_dense
    

You should see a similar scenario in CARLA.
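
For reference, the -s argument is the yaml file name without its extension. A minimal sketch of how such a name could map to a config path (the helper is illustrative, not the repo's actual code):

```python
from pathlib import Path

# Illustrative only: map a scenario name passed via -s to its yaml file
# under the config directory mentioned in Step 3.
def resolve_yaml(name, config_dir="opencda_infra/config_yaml"):
    return Path(config_dir) / (name + ".yaml")

print(resolve_yaml("fourway_town10_dense"))
```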

Batch Mode

For batch scenario mode, you should run the following code:

conda activate opencda_infra
# in the root directory 
python scenario_runner.py -s {your_yaml_file_name} --batch
  • {your_yaml_file_name} is the name of your yaml file, without the extension. For example, your command can be:

    python scenario_runner.py -s fourway_town10_dense --batch
    
  • To run this mode, you need to provide sensor preset files in [sensor_presets](./opencda_infra/config_yaml/sensor_presets), and the preset file name must be the same as {your_yaml_file_name}. For example, in ./opencda_infra/config_yaml the file structure should look like:

    .
    ├── fourway_town10_dense.yaml
    └── sensor_presets
        └── fourway_town10_dense.py
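
The naming requirement above can be verified up front. A sketch using a hypothetical helper (has_matching_preset is ours, not part of the repo):

```python
from pathlib import Path
import tempfile

# Hypothetical helper: check that <name>.yaml and sensor_presets/<name>.py
# both exist with matching names, as batch mode requires.
def has_matching_preset(config_dir, name):
    config_dir = Path(config_dir)
    return ((config_dir / (name + ".yaml")).exists()
            and (config_dir / "sensor_presets" / (name + ".py")).exists())

# demo against a throwaway directory mirroring the layout above
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "sensor_presets").mkdir()
    (root / "fourway_town10_dense.yaml").touch()
    (root / "sensor_presets" / "fourway_town10_dense.py").touch()
    ok = has_matching_preset(root, "fourway_town10_dense")

print(ok)  # True when both files are in place
```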
    

Multi-Scenario Mode

For multi-scenario mode, you should run the following code:

conda activate opencda_infra
# in the root directory 
python scenario_runner.py -s {your_yaml_file_name1} {your_yaml_file_name2}

Step 5 Check Your Saved Data

Your data will be saved in [data_dumping](data_dumping) under the root directory of your repository.

  • Make sure you set the file_save parameter to true before running, otherwise no data will be saved.

Q&A

  • "RuntimeError: trying to create rpc server for traffic manager; but the system failed to create because of bind error."

    • The default port for CARLA is 2000, and the default port for the traffic manager used by the scenario runner is 8000. If port 8000 is occupied by another process, this error occurs.

    • Check what is occupying port 8000 with: sudo lsof -i:8000

      Then use sudo kill -9 PID to kill the process occupying port 8000.
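
The same check can be done from Python with the standard library; this sketch reports whether a port is already bound, under the assumption that a failed bind means another process holds it:

```python
import socket

# Try to bind the port locally; an OSError (address in use) means
# another process is already holding it.
def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

print(port_in_use(8000))  # True here means the traffic manager port is taken
```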
