ElegantRL
latest
Home
HelloWorld
Hello, World!
Networks:
net.py
Agents:
agent.py
Environment:
env.py
Main:
run.py
Quickstart
Overview
Key Concepts and Features
Cloud-native Paradigm
Multi-level Parallelism
Tutorials
Example 1: LunarLanderContinuous-v2
Example 2: BipedalWalker-v3
How to create a VecEnv on GPUs
How to run worker parallelism: Isaac Gym
How to run learner parallelism: REDQ
How to learn stably: H-term
Cloud Example 1: Generational Evolution
Cloud Example 2: Tournament-based Ensemble Training
Algorithms
DQN
Double DQN
DDPG
TD3
SAC
A2C
PPO
REDQ
MADDPG
MATD3
QMix
VDN
MAPPO
RLSolver
Overview
API Reference
Configuration:
config.py
Run:
run.py
Worker:
worker.py
Replay Buffer:
replay_buffer.py
Evaluator:
evaluator.py
Other
FAQ
Index
A
Actor (class in elegantrl.agents.net)
ActorDiscretePPO (class in elegantrl.agents.net)
ActorPPO (class in elegantrl.agents.net)
ActorSAC (class in elegantrl.agents.net)
AgentA2C (class in elegantrl.agents.AgentA2C)
AgentDDPG (class in elegantrl.agents.AgentDDPG)
AgentDiscreteA2C (class in elegantrl.agents.AgentA2C)
AgentDiscretePPO (class in elegantrl.agents.AgentPPO)
AgentDQN (class in elegantrl.agents.AgentDQN)
AgentMADDPG (class in elegantrl.agents.AgentMADDPG)
AgentModSAC (class in elegantrl.agents.AgentSAC)
AgentPPO (class in elegantrl.agents.AgentPPO)
AgentSAC (class in elegantrl.agents.AgentSAC)
AgentTD3 (class in elegantrl.agents.AgentTD3)
B
build_env() (in module elegantrl.train.config)
C
Critic (class in elegantrl.agents.net)
CriticPPO (class in elegantrl.agents.net)
CriticTwin (class in elegantrl.agents.net)
D
device (elegantrl.train.replay_buffer.ReplayBuffer attribute)
E
Evaluator (class in elegantrl.train.evaluator)
explore_one_env() (elegantrl.agents.AgentDQN.AgentDQN method)
(elegantrl.agents.AgentMADDPG.AgentMADDPG method)
(elegantrl.agents.AgentPPO.AgentDiscretePPO method)
(elegantrl.agents.AgentPPO.AgentPPO method)
explore_vec_env() (elegantrl.agents.AgentDQN.AgentDQN method)
(elegantrl.agents.AgentPPO.AgentDiscretePPO method)
(elegantrl.agents.AgentPPO.AgentPPO method)
G
get_obj_critic_per() (elegantrl.agents.AgentDQN.AgentDQN method)
get_obj_critic_raw() (elegantrl.agents.AgentDQN.AgentDQN method)
K
kwargs_filter() (in module elegantrl.train.config)
P
per_beta (elegantrl.train.replay_buffer.ReplayBuffer attribute)
Q
QNet (class in elegantrl.agents.net)
QNetDuel (class in elegantrl.agents.net)
QNetTwin (class in elegantrl.agents.net)
QNetTwinDuel (class in elegantrl.agents.net)
R
ReplayBuffer (class in elegantrl.train.replay_buffer)
S
save_or_load_agent() (elegantrl.agents.AgentMADDPG.AgentMADDPG method)
select_actions() (elegantrl.agents.AgentMADDPG.AgentMADDPG method)
U
update_agent() (elegantrl.agents.AgentMADDPG.AgentMADDPG method)
update_net() (elegantrl.agents.AgentMADDPG.AgentMADDPG method)
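The index above lists ElegantRL's ReplayBuffer (elegantrl.train.replay_buffer) with `device` and `per_beta` attributes, suggesting GPU tensor storage and prioritized experience replay. As a rough illustration of what such a buffer does — a minimal sketch in plain Python, not ElegantRL's actual implementation — a uniform-sampling replay buffer looks like this:

```python
import random
from collections import deque

class MiniReplayBuffer:
    """Minimal uniform-sampling replay buffer (illustrative only;
    ElegantRL's ReplayBuffer additionally keeps data as tensors on a
    torch device and supports prioritized replay, hence per_beta)."""

    def __init__(self, max_size):
        # deque with maxlen drops the oldest transition automatically
        self.buffer = deque(maxlen=max_size)

    def append(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch; prioritized replay would instead
        # weight sampling by TD-error and correct the bias with per_beta.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = MiniReplayBuffer(max_size=100)
for t in range(10):
    buf.append(t, 0, 1.0, t + 1, False)
batch = buf.sample(4)
print(len(buf), len(batch))  # → 10 4
```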