Adversary Agnostic Robust Deep Reinforcement Learning

Deep reinforcement learning (DRL) policies have been shown to be deceived by perturbations (e.g., random noise or intentional adversarial attacks) on state observations that appear at test time but are unknown during training. To increase the robustness of DRL policies, previous approaches assume that explicit adversarial information can be added …
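
To make the failure mode concrete, the following is a minimal, self-contained sketch (not from the paper; the toy environment, policy, and noise scale are illustrative assumptions) of how a policy that behaves well on clean observations can degrade when random noise perturbs the observations it sees at test time.

import numpy as np

rng = np.random.default_rng(0)

def env_step(state, action):
    # Toy 1-D tracking task: the agent is rewarded for keeping the state near zero.
    next_state = state + 0.1 * action + 0.01 * rng.normal()
    reward = -abs(next_state)
    return next_state, reward

def policy(obs):
    # A "trained" policy that acts on whatever it observes: push opposite the sign.
    return -np.sign(obs)

def rollout(obs_noise_std=0.0, horizon=200):
    # Run one episode; obs_noise_std > 0 simulates a test-time observation
    # perturbation that was never seen during training.
    state, total_reward = 1.0, 0.0
    for _ in range(horizon):
        obs = state + rng.normal(scale=obs_noise_std)
        state, reward = env_step(state, policy(obs))
        total_reward += reward
    return total_reward

print(f"return with clean observations:     {rollout(obs_noise_std=0.0):.2f}")
print(f"return with perturbed observations: {rollout(obs_noise_std=0.5):.2f}")

Running this toy example, the return under perturbed observations is markedly worse than under clean observations, which is the kind of test-time fragility the abstract refers to.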