
DOI standard-readme compliant

DeepCollision: Learning Configurations of Operating Environment of Autonomous Vehicles to Maximize their Collisions

To facilitate reviewing our proposed approach, reviewers are invited to refer to the corresponding data in this repository.
The replication package that supports the findings of this study is also available on Zenodo with the DOI: https://doi.org/10.5281/zenodo.5906634.

This repository contains:

  1. algorithms - The algorithm of DeepCollision, including the network architecture and the DQN hyperparameter settings;
  2. pilot-study - All the raw data and plots for the pilot study;
  3. formal-experiment - A dataset containing all the raw data for analysis and the scenarios with detailed demand values;
  4. rest-api - The REST API endpoints for environment configuration and an example showing the usage of the APIs;
  5. deepcollision-project - The source code of DeepCollision, the Random and Greedy baselines, and the REST APIs.

Contributions

  1. With the aim of testing ADSs, we propose a novel RL-based approach to learn operating environment configurations of autonomous vehicles, including formalizing environment configuration learning as an MDP and adopting DQN as the RL solution;
  2. To handle the environment configuration process of an autonomous vehicle, we present a lightweight and extensible DeepCollision framework providing 52 REST API endpoints to configure the environment and obtain states of both the autonomous vehicle and its operating environment; and
  3. We conducted an extensive empirical study with an industrial-scale ADS and simulator, and the results show that DeepCollision outperforms the baselines. Furthermore, we provide recommendations for configuring DeepCollision with the most suitable time interval setting for different road structures.

Overview of DeepCollision

DeepCollision learns environment configurations to maximize collisions of an Autonomous Vehicle Under Test (AVUT). As shown in the following figure, DeepCollision employs a Simulator (e.g., LGSVL) to simulate the Testing Environment comprising the AVUT and its operating environment. DeepCollision also integrates with an Autopilot Algorithm Platform (e.g., Baidu Apollo) deployed on the AVUT to enable its autonomous driving.

DeepCollision employs a DQN component to generate a set of actions to configure the environment of the AVUT, e.g., the weather condition and the time of day. At each time step t, the DQN component observes a state St describing the current states of the AVUT and its environment. Given this state, DeepCollision decides an action At based on the Q-network and its policy. With our developed Environment Configuration REST API, such an action At is issued as an HTTP request to the simulator to introduce new environment configurations.

After the AVUT has driven in the new environment for a fixed time period, both the AVUT and its environment enter a new state St+1. Based on the observed states of the AVUT and its environment, the Reward Calculator computes a reward Rt for taking At in St at time t+1. DQN then stores the transition (St, At, Rt, St+1) in the replay memory buffer.

Once the replay memory is full, the Q-network is updated by minimizing the loss function over a mini-batch randomly sampled from the replay memory. In addition, given St+1, the (updated) Q-network and its policy decide the next action At+1. In DeepCollision, an episode ends once the AVUT arrives at its destination or cannot move for a specific duration.
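For reference, the following is a minimal sketch of this loss, assuming the standard DQN formulation with a target network (parameters θ⁻), discount factor γ, and replay memory D; the exact form used in DeepCollision is described in the paper:

L(θ) = E_{(St, At, Rt, St+1) ~ D} [ ( Rt + γ · max_{a'} Q(St+1, a'; θ⁻) − Q(St, At; θ) )² ]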

At each time step t, the information about the AVUT (e.g., its driving and collision status) and its environment (e.g., its status and driving scenarios) is stored as Environment Configuration Logs for further analyses and collision replaying.
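To make the loop above concrete, below is a minimal, hedged sketch of one (truncated) episode driven through the Environment Configuration REST API. It only uses the endpoints shown later in the Usage steps; the two-action subset, the random action selection, and the constant placeholder reward are illustrative assumptions, not the actual DQN implementation in deepcollision-project.

import random
from collections import deque

import numpy as np
import requests

BASE = "http://101.35.135.164:5000/LGSVL"

# Hypothetical two-action subset of the 52 configuration endpoints, used here as the action space.
ACTIONS = ["/Control/Weather/Rain?rain_level=Light",
           "/Control/Weather/Rain?rain_level=Heavy"]

memory = deque(maxlen=1000)  # replay memory buffer

def get_state():
    # Observe the 12-dimensional state via the state endpoint shown in Step 3 below.
    a = requests.get(BASE + "/Status/Environment/State").json()
    keys = ["x", "y", "z", "rain", "fog", "wetness",
            "timeofday", "signal", "rx", "ry", "rz", "speed"]
    return np.array([a[k] for k in keys], dtype=float)

requests.post(BASE + "/LoadScene?scene=SanFrancisco&road_num=1")
state = get_state()
for t in range(30):                              # one episode, truncated to 30 time steps
    action = random.randrange(len(ACTIONS))      # placeholder for the DQN's policy-based choice
    requests.post(BASE + ACTIONS[action])        # At issued as an HTTP request
    next_state = get_state()                     # St+1 observed after the fixed time period
    reward = 0.0                                 # placeholder; the real Reward Calculator uses collision information
    memory.append((state, action, reward, next_state))
    # Once the memory is full, a mini-batch would be sampled here to update the Q-network.
    state = next_state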

More details of the DQN hyperparameters used in DeepCollision can be found in the hyperparameter settings.

DeepCollision Environment Configuration API

REST API List

To view all the implemented environment configuration REST API endpoints, please refer to the full list.

Usage

For detailed instructions on using the REST APIs, please refer to the rest-api-example.

Also, we provide a running example on Bilibili.

Prerequisite

Users can access our server, with Apollo and LGSVL deployed, through our provided REST APIs. To call the APIs from Python scripts, one needs to install requests:

$ python -m pip install requests

Visualization

We have integrated the REST APIs with LGSVL and Apollo and deployed them on an online server; users can see the effects of the environment configuration via this link.

Step 1: Load scene and generate AVUT's initial position

The LoadScene API takes two parameters: the first one is the map to load (scene), and the second one is the road on which the AVUT will drive (road_num).

import requests
requests.post("http://101.35.135.164:5000/LGSVL/LoadScene?scene=SanFrancisco&road_num=1")

Once the scene is loaded, the simulator will show the loaded SanFrancisco Map. See here.

Step 2: Configure the operating environment

Set rain level to light rain.

requests.post("http://101.35.135.164:5000/LGSVL/Control/Weather/Rain?rain_level=Light")

Once the rain is configured, it will rain in the simulator. See here.

Step 3: Get state returned

import numpy as np
import requests

# Query the current state of the AVUT and its operating environment.
r = requests.get("http://101.35.135.164:5000/LGSVL/Status/Environment/State")
a = r.json()

# State returned after one configuration action is executed:
# position (x, y, z), weather (rain, fog, wetness), time of day, traffic signal,
# rotation (rx, ry, rz), and speed of the AVUT.
state = np.zeros(12)
state[0] = a['x']
state[1] = a['y']
state[2] = a['z']
state[3] = a['rain']
state[4] = a['fog']
state[5] = a['wetness']
state[6] = a['timeofday']
state[7] = a['signal']
state[8] = a['rx']
state[9] = a['ry']
state[10] = a['rz']
state[11] = a['speed']

The returned state will be used as the new state St+1. Users can also use other GET methods to obtain states such as GPS data and the EGO vehicle status.
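As a purely illustrative sketch (the endpoint path below is an assumption, not a documented endpoint; consult the full list for the actual GET endpoints), querying one of these additional states would follow the same pattern:

import requests

# NOTE: hypothetical endpoint path, used only to illustrate the calling pattern;
# see the full REST API list for the real GPS / EGO-status endpoints.
r = requests.get("http://101.35.135.164:5000/LGSVL/Status/GPS")
gps_data = r.json()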

Related Efforts

  • LiveTCM: Restricted Natural Language and Model-based Adaptive Test Generation for Autonomous Driving
  • SPECTRE: Search-Based Selection and Prioritization of Test Scenarios for Autonomous Driving Systems

Paper

Lu, Chengjie, et al. "Learning Configurations of Operating Environment of Autonomous Vehicles to Maximize their Collisions." IEEE Transactions on Software Engineering (2022). https://doi.org/10.1109/TSE.2022.3150788.

Maintainers

@ChengjieLu, @YizeShi