Environment Catalog
Browse published RL environments. 6 available.
Drone Grid Navigation
3D drone obstacle avoidance. Navigate to the goal while avoiding randomly placed obstacles.
Morphing Grid Navigation
A 10x10 partially observable grid world where the agent navigates from the bottom-left corner to a dynamically relocating goal while collecting resources. After every action the environment stochastically morphs: each wall cell toggles with 30% probability, the goal teleports, and resources shift position. The agent receives a 5x5 local view of the walls plus relative vectors to the goal and the nearest resources. Anti-oscillation and stagnation penalties discourage reward hacking.
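A minimal sketch of the per-step morphing logic described above, assuming a NumPy boolean wall grid; the helper name morph and the grid layout are illustrative, not the published implementation:

import numpy as np

def morph(walls, rng, resources, toggle_p=0.3):
    """Stochastic morph applied after every action: toggle walls,
    teleport the goal, and shift resource positions."""
    flips = rng.random(walls.shape) < toggle_p        # 30% per cell by default
    walls = np.logical_xor(walls, flips)
    empty = np.argwhere(~walls)                       # candidate empty cells
    goal = tuple(empty[rng.integers(len(empty))])     # goal teleports
    resources = [tuple(empty[rng.integers(len(empty))]) for _ in resources]
    return walls, goal, resources

walls = np.zeros((10, 10), dtype=bool)
rng = np.random.default_rng(0)
walls, goal, resources = morph(walls, rng, resources=[(2, 2), (5, 5)])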
DynamicMazeWithSelfObservation
RESEARCH HYPOTHESIS: In dynamically changing environments whose state transitions after each agent action, agents that incorporate self-observation data (recent action sequences, reward histories, environment-change patterns, and strategy-effectiveness metrics) into their observation space will adapt better than agents that observe only the external environment state.
SUB-HYPOTHESES: H1: Agents with access to their own recent action history and corresponding rewards will adapt faster to dynamic environment changes than agents without this self-observation capability. H2: Agents that track environment-change patterns (how the environment responds to their actions) will develop more robust strategies in dynamic settings. H3: Agents that monitor their own strategy effectiveness (progress toward the goal over recent timesteps) will avoid repeating ineffective action sequences and converge to better policies.
DESIGN RATIONALE (from the original proposal): The hypothesis has three layers. (1) The environment must change dynamically: everything can change after each action, so the agent must continually adapt. (2) The agent must store experience explicitly: "in this state I did X, this happened, and the environment changed like this." (3) The agent must perform self-observation: beyond the external state (walls, goal, resources), it should also observe its own internal state, e.g. "what did I do over the last 5 steps, what happened, how did the environment change, did my strategy work?" Concretely, the observation space must include a summary of the agent's own experience: the action sequence over the last N steps (what I did), the reward sequence over the last N steps (what happened), an environment-change vector (how the environment changed, e.g. wall-toggle rate and goal displacement), strategy effectiveness (whether the agent moved closer to or farther from the goal over the last K steps), and experience patterns (which strategies worked under which kind of environment change). The observation thus encodes not only "what the world looks like now" but also "what I did, what happened, and how the environment responded."
DESIGN REQUIREMENT: The environment implements all aspects of the hypothesis, including the agent-side mechanisms (self-observation, experience storage, adaptive-behavior tracking), directly in the observation space and reward function, not only in the environment dynamics.
ENVIRONMENT SPECIFICATION:
OBSERVATION SPACE: 147-dimensional vector:
(1) Current position (x, y) [2 dims]
(2) Goal position (x, y) [2 dims]
(3) 10x10 grid maze layout (walls=1, empty=0) [100 dims]
(4) Last 5 actions taken (0=up, 1=down, 2=left, 3=right, 4=stay) [5 dims]
(5) Last 5 rewards received [5 dims]
(6) Environment-change vector: wall-toggle frequency in the 3x3 local area over the last 10 steps [9 dims]
(7) Goal displacement distance over the last 10 steps [10 dims]
(8) Strategy effectiveness: distance-to-goal change over the last 5 steps [5 dims]
(9) Action-pattern effectiveness: reward per action type over the last 10 steps [4 dims]
(10) Local exploration coverage: unique cells visited in the last 15 steps [5 dims]
ACTION SPACE: Discrete(5): up, down, left, right, stay.
TRANSITION DYNAMICS: After each action: (a) each wall in the 5x5 area around the agent toggles state with 30% probability; (b) the goal shifts by 1-2 cells in a random direction with 40% probability; (c) new walls may block the current path to the goal.
REWARD FUNCTION: +10 for reaching the goal; -0.1 per timestep; -1 for hitting a wall; +0.5 for moving closer to the goal; -0.5 for moving away; +1.0 bonus for reaching the goal in fewer actions than the previous 3 attempts (strategy-improvement reward); -0.2 penalty for repeating a failed action sequence (last 3 actions) that previously led to a wall collision.
EPISODE TERMINATION: Goal reached, 200 timesteps elapsed, or agent stuck (same position for 10 consecutive steps).
AGENT-SIDE REQUIREMENTS: The agent maintains internal buffers for action history and reward history and computes strategy-effectiveness metrics that feed into the observation space.
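A sketch of how the history buffers required by the AGENT-SIDE REQUIREMENTS could feed the observation vector, assuming collections.deque buffers and NumPy; the class and function names are illustrative, not the published implementation:

import numpy as np
from collections import deque

class SelfObsBuffers:
    """Illustrative buffers behind components (4), (5) and (8) of the
    147-dim observation."""
    def __init__(self):
        self.actions = deque([4] * 5, maxlen=5)    # last 5 actions (4 = stay)
        self.rewards = deque([0.0] * 5, maxlen=5)  # last 5 rewards
        self.dists = deque([0.0] * 6, maxlen=6)    # distance to goal, last 6 steps

    def record(self, action, reward, dist_to_goal):
        self.actions.append(action)
        self.rewards.append(reward)
        self.dists.append(dist_to_goal)

    def strategy_effectiveness(self):
        # Distance-to-goal change per step over the last 5 steps;
        # negative values mean progress toward the goal.
        return np.diff(np.asarray(self.dists, dtype=np.float32))

def build_observation(pos, goal, walls, buf):
    return np.concatenate([
        np.asarray(pos, dtype=np.float32),          # (1) position, 2 dims
        np.asarray(goal, dtype=np.float32),         # (2) goal, 2 dims
        walls.astype(np.float32).ravel(),           # (3) 10x10 layout, 100 dims
        np.asarray(buf.actions, dtype=np.float32),  # (4) last 5 actions
        np.asarray(buf.rewards, dtype=np.float32),  # (5) last 5 rewards
        buf.strategy_effectiveness(),               # (8) 5 dims
        # components (6), (7), (9), (10) would fill the remaining 28 dims
    ])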
AdaptiveResourceGatheringWithExperienceTracking
Built on the same research hypothesis, sub-hypotheses (H1-H3), and self-observation design requirements as DynamicMazeWithSelfObservation above, applied here to a resource-gathering and market-trading task.
ENVIRONMENT SPECIFICATION:
OBSERVATION SPACE: 99-dimensional vector:
(1) Agent position (x, y) [2 dims]
(2) Resource locations (5 resources, each with x, y, type, quantity) [20 dims]
(3) Agent inventory (4 resource types) [4 dims]
(4) Market prices for each resource type [4 dims]
(5) Last 8 actions (0=move_up, 1=move_down, 2=move_left, 3=move_right, 4=gather, 5=sell) [8 dims]
(6) Last 8 rewards [8 dims]
(7) Resource-regeneration pattern: quantity change per resource over the last 5 timesteps [25 dims]
(8) Price volatility: price change per resource over the last 4 timesteps [16 dims]
(9) Gathering efficiency: resources gathered per gather action over the last 6 attempts [6 dims]
(10) Market-timing effectiveness: profit per sell action over the last 4 sales [4 dims]
(11) Exploration diversity: number of distinct resource types interacted with in the last 10 actions [1 dim]
(12) Strategy-consistency score: correlation between action sequences and positive rewards over the last 15 actions [1 dim]
ACTION SPACE: Discrete(6): move in 4 directions, gather the resource at the current location, sell inventory at the market.
TRANSITION DYNAMICS: After each action: (a) resource quantities change by ±20-50% with 60% probability; (b) resource types may change (wood→stone, etc.) with 25% probability; (c) market prices fluctuate ±10-30% based on the agent's recent selling behavior; (d) new resources spawn randomly while others deplete.
REWARD FUNCTION: + sale value for selling resources (quantity × price); +2 for gathering rare resources; -0.05 per timestep; +3.0 bonus for selling when prices are in the top 25% of recent history (market timing); +1.5 bonus for maintaining a diverse resource portfolio; -1.0 penalty for repeating a failed gathering sequence (last 4 actions) that previously yielded zero resources.
EPISODE TERMINATION: 300 timesteps elapsed, total profit exceeds 100 units, or profit falls below -20 (bankruptcy).
AGENT-SIDE REQUIREMENTS: The agent tracks market-timing patterns and resource-availability changes and maintains an experience buffer linking action sequences to profitability outcomes.
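A sketch of the two experience-dependent reward terms (the market-timing bonus and the failed-sequence penalty), assuming price and action histories kept by the environment; thresholds and names are illustrative:

import numpy as np
from collections import deque

def market_timing_bonus(price, price_history, bonus=3.0):
    """+3.0 when selling at a price in the top 25% of recent history."""
    if len(price_history) == 0:
        return 0.0
    return bonus if price >= np.quantile(list(price_history), 0.75) else 0.0

def failed_sequence_penalty(last4, failed_sequences, penalty=-1.0):
    """-1.0 for repeating a 4-action gathering sequence that previously
    yielded zero resources."""
    return penalty if tuple(last4) in failed_sequences else 0.0

price_history = deque(maxlen=20)   # recent prices for one resource type
failed_sequences = set()           # 4-action tuples that gathered nothing

for p in (1.0, 1.2, 0.9, 1.5):
    price_history.append(p)
print(market_timing_bonus(1.5, price_history))                  # 3.0: top quartile
failed_sequences.add((0, 0, 4, 4))
print(failed_sequence_penalty([0, 0, 4, 4], failed_sequences))  # -1.0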
MorphingMazeAdaptiveIntelligence
A 10x10 grid maze with continuous morphing dynamics where walls toggle probabilistically after every action, the goal relocates periodically, and resources respawn. The 63-dimensional observation space includes external state (position, local wall view), experience storage (action/reward history), self-observation metrics (distance trends, collision rates, exploration efficiency), and meta-awareness signals (environment change magnitude, pattern familiarity). Designed to test whether self-observational capabilities improve adaptation speed in non-stationary environments.
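A sketch of the kind of self-observation metrics listed above (distance trends, collision rates, exploration efficiency) computed over a sliding window; the window size and class name are illustrative:

from collections import deque

class AdaptationMetrics:
    """Sliding-window self-observation metrics for a morphing maze."""
    def __init__(self, window=15):
        self.positions = deque(maxlen=window)   # (x, y) tuples, must be hashable
        self.collisions = deque(maxlen=window)  # 1 if the move hit a wall
        self.distances = deque(maxlen=window)   # distance to goal per step

    def record(self, pos, collided, dist):
        self.positions.append(pos)
        self.collisions.append(1 if collided else 0)
        self.distances.append(dist)

    def distance_trend(self):
        # Average per-step change in distance; negative = net progress.
        if len(self.distances) < 2:
            return 0.0
        return (self.distances[-1] - self.distances[0]) / (len(self.distances) - 1)

    def collision_rate(self):
        return sum(self.collisions) / max(len(self.collisions), 1)

    def exploration_efficiency(self):
        # Fraction of window steps spent in distinct cells.
        return len(set(self.positions)) / max(len(self.positions), 1)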
VariableMorphingComplexityEnvironment
A 12x12 morphing maze with parametric volatility (10-30% per-cell wall-toggle probability), a variable goal-relocation period (every 50-200 steps), and a dynamic resource count (2-5). Features a 75-dimensional observation space combining external state, experience storage, and enhanced self-observation metrics for studying adaptation to continuous environmental change. Tests whether self-observing agents outperform external-only agents under varying morphing intensities.
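A sketch of how the volatility parameters could be exposed as a sampled configuration, using the stated ranges as bounds; the field names are illustrative, not the published interface:

import random
from dataclasses import dataclass

@dataclass
class MorphConfig:
    """Parametric volatility knobs for the 12x12 morphing maze."""
    wall_toggle_p: float = 0.2  # per-cell toggle probability, 0.10-0.30
    goal_period: int = 100      # steps between goal relocations, 50-200
    n_resources: int = 3        # concurrent resources, 2-5

    @classmethod
    def sample(cls, rng):
        """Draw a random setting within the published ranges."""
        return cls(
            wall_toggle_p=rng.uniform(0.10, 0.30),
            goal_period=rng.randint(50, 200),
            n_resources=rng.randint(2, 5),
        )

cfg = MorphConfig.sample(random.Random(0))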