- Feb 06, 2019
  - Jae Young Lee authored: Changed the number of hidden layers from 6 to 3.
  - Jae Young Lee authored
- Feb 05, 2019
  - Jae Young Lee authored: Merged "Retraining wait maneuver" (merge request !4).
  - Jae Young Lee authored: Also added the ManualWait class.
- Feb 04, 2019
  - Jae Young Lee authored
- Feb 01, 2019
  - Jae Young Lee authored
  - Aravind Balakrishnan authored: Merged "More improve follow and keep lane" (merge request !3).
  - Jae Young Lee authored
- Jan 31, 2019
  - Jae Young Lee authored
  - Jae Young Lee authored
- Jan 30, 2019
  - Jae Young Lee authored
  - Jae Young Lee authored
  - Ashish Gaurav authored: Merged "Improve follow" (merge request !2).
  - Jae Young Lee authored: Merge with conflicts in backends/kerasrl_learner.py, env/simple_intersection/simple_intersection_env.py, and options/simple_intersection/maneuvers.py.
  - Jae Young Lee authored
  - Jae Young Lee authored
  - Jae Young Lee authored: Merged "Improve and bugfix low and high level training" (merge request !1).
  - Jae Young Lee authored
- Jan 29, 2019
  - Jae Young Lee authored
- Jan 24, 2019
  - Jae Young Lee authored:
    - Added RestrictedEpsGreedyPolicy and RestrictedGreedyPolicy and used them as policy and test_policy in DQNLearner. The agent now never chooses an action whose Q-value is -inf as long as at least one action has a finite Q-value; if no action does, it chooses an action uniformly at random, which is necessary for compatibility with keras-rl (see the comments in select_action).
    - generate_scenario in SimpleIntersectionEnv now generates veh_ahead_scenario even when randomize_special_scenario = 1.
    - In EpisodicEnvBase, the terminal reward is now determined by the minimum by default.
    - Simplified the initiation_condition of EpisodicEnvBase.
  - Jae Young Lee authored
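The restricted selection rule described in the Jan 24 entry (never pick an action with a -inf Q-value while some action has a finite one; otherwise fall back to a uniform random choice for keras-rl compatibility) can be sketched roughly as follows. This is an illustrative sketch, not the actual RestrictedEpsGreedyPolicy code; the function name and signature are hypothetical.

```python
import numpy as np

def restricted_eps_greedy(q_values, eps, rng=np.random.default_rng(0)):
    """Illustrative sketch (not the repo's implementation) of a
    restricted epsilon-greedy selection: actions with -inf Q-values
    are excluded whenever at least one action has a finite Q-value;
    if none do, any action is chosen uniformly at random."""
    q_values = np.asarray(q_values, dtype=float)
    finite = np.flatnonzero(np.isfinite(q_values))
    if finite.size == 0:
        # No admissible action: fall back to a uniform random choice,
        # as the commit notes is needed for keras-rl compatibility.
        return int(rng.integers(len(q_values)))
    if rng.random() < eps:
        # Explore, but only among actions with finite Q-values.
        return int(rng.choice(finite))
    # Exploit: greedy over the finite-valued actions only.
    return int(finite[np.argmax(q_values[finite])])
```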
- Jan 22, 2019
  - Jae Young Lee authored: The high-level policy was trained without the changelane maneuver but with the immediatestop maneuver. Two problems remain: 1) the agent chooses the changelane maneuver too frequently; 2) before the stop region, the immediatestop maneuver works but was not chosen properly after 2.5m of high-level policy training...
- Jan 17, 2019
  - Jae Young Lee authored: Each low-level policy was retrained with better LTL conditions and rewards, parts of which are also designed to encourage exploration (to prevent the vehicle from stopping all the time).
- Nov 19, 2018
  - Aravind Balakrishnan authored: Merged "Formatting" (merge request !3).
  - Ashish Gaurav authored
  - Ashish Gaurav authored
  - Ashish Gaurav authored
- Nov 18, 2018
  - Ashish Gaurav authored
  - Ashish Gaurav authored: Merged "Final test" (merge request !2).
  - Unknown authored
  - Unknown authored
  - Jae Young Lee authored
  - Jae Young Lee authored
  - Jae Young Lee authored
  - Jae Young Lee authored
  - Unknown authored
  - Unknown authored
  - Jae Young Lee authored
  - Jae Young Lee authored
  - Jae Young Lee authored