🙌 Welcome to the AIR-DREAM (Decision-making Research for Empowered AI Methods) Lab code repository! AIR-DREAM is a research group at the Institute for AI Industry Research (AIR), Tsinghua University. Our research focuses on developing advanced, learning-based, data-driven decision-making theories and practical technologies that are robust, generalizable, and deployable for tackling real-world challenges. We work on fundamental learning algorithms, robust robotic control methods, optimization technologies for real-world AIoT systems, and data-driven decision-making tools & libraries.
Currently available offline RL/IL algorithms and tools/libraries in our code repository include:
Algorithms:
- IVM: Instruction-Guided Visual Masking
- DecisionNCE (ICML 2024): Embodied Multimodal Representations via Implicit Preference Learning
- QPA (ICLR 2024 spotlight): Query-Policy Misalignment in Preference-Based Reinforcement Learning
- ODICE (ICLR 2024 spotlight): Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update
- FISOR (ICLR 2024): Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model
- PROTO: Iterative Policy Regularized Offline-to-Online Reinforcement Learning
- OMIGA (NeurIPS 2023): Offline Multi-Agent RL with Implicit Global-to-Local Value Regularization
- TSRL (NeurIPS 2023): Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL
- SQL/EQL (ICLR 2023 oral): Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization
- DOGE (ICLR 2023): When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning
- RGM (ICLR 2023): Mind the Gap: Offline Policy Optimization for Imperfect Rewards
- H2O (NeurIPS 2022 spotlight): When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning
- POR (NeurIPS 2022 oral): A Policy-Guided Imitation Approach for Offline Reinforcement Learning
- DWBC (ICML 2022): Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations
- DMIL (CoRL 2022): Discriminator-Guided Model-Based Offline Imitation Learning (available from D2C)
- CPQ (AAAI 2022): Constraints Penalized Q-Learning for Safe Offline Reinforcement Learning
- DeepThermal (AAAI 2022): Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning
- MOPP (IJCAI 2022): Model-Based Offline Planning with Trajectory Pruning
Tools/Libraries: