1,050 questions
-3 votes · 0 answers · 17 views
Gym's Box2D (OpenAI) doesn't install successfully [closed]
I've tried updating all these packages, but nothing worked.
brew update
brew install gcc
brew install swig
Both screenshots below show the errors I got while installing gym Box2D.
0 votes · 0 answers · 31 views
Reward not improving for a custom environment using PPO
I've been trying to train an agent on a custom environment I implemented with gym where the goal is to resolve voltage violations in a power grid by adjusting the active power (loads) at each node. I ...
-1 vote · 0 answers · 18 views
What is the best way to use JSON data for a Python auto-battler simulation: direct access, pre-conversion to Python classes, or something else? [closed]
I'm building a simulation for Hearthstone Battlegrounds in Python, focusing on creating an auto-battler system. I've already loaded card data from a JSON source, which includes details about minions ...
1 vote · 0 answers · 17 views
Install TensorFlow and Gym with Conda for usage with Jupyter Notebook in macOS Sonoma, Intel processor
I am trying to set up a virtual environment using Conda to code a lab activity regarding RL, but it is proving to be quite a nightmare due to incompatibilities among different library versions. The ...
0 votes · 0 answers · 12 views
BrokenPipeError during multiprocess AI training with SubprocVecEnv
I'm working on training a reinforcement learning agent to play Super Mario Bros using SubprocVecEnv to parallelize environments and speed up the process. However, I encounter a BrokenPipeError when ...
0 votes · 0 answers · 19 views
PPO clip AI issues
I have been trying to make a PPO-clip AI using OpenAI's pseudocode from a while ago; however, it is not very good. I am only trying a simple environment (cartpole), and the issue appears to be that after ...
0 votes · 0 answers · 15 views
Cartpole gym spawn point
How can I change the initial spawn point of the cartpole when resetting the environment? I have to use a custom reward in testing; the reward looks like:
def new_reward(state, x0):
s = state[0]
...
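A common workaround here (a minimal sketch, assuming gym's classic CartPole, whose internal state is (x, x_dot, theta, theta_dot)) is to overwrite env.unwrapped.state immediately after reset:

import gym
import numpy as np

env = gym.make("CartPole-v1")
obs, info = env.reset()  # gym >= 0.26 returns (obs, info); older versions return obs only
x0 = 1.0                 # hypothetical desired cart start position
env.unwrapped.state = np.array([x0, 0.0, 0.0, 0.0])
obs = np.array(env.unwrapped.state, dtype=np.float32)  # use this as the first observation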
1 vote · 1 answer · 37 views
Unable to use all (or most) of gym-retro games
My working env
System: Ubuntu 18.04
Versions: Python 3.6/3.7/3.8 (3 environments, all with the same result), gym 0.25.2, gym-retro 0.8.0
What did I do?
I followed this guide to install gym-retro, and I ...
0 votes · 0 answers · 27 views
Install gymnasium with atari games using conda
Sorry if this is a silly question, but I can't figure this one out.
I am trying to install gymnasium with Atari games using conda. Here is my setup.py:
from setuptools import find_packages
from ...
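For setups like this, note that pip-style extras have to appear inside the requirement string itself; a minimal setup.py sketch (the package name is hypothetical, and the accept-rom-license extra assumes an older gymnasium release):

from setuptools import find_packages, setup

setup(
    name="my_rl_project",  # hypothetical
    packages=find_packages(),
    install_requires=[
        # extras such as [atari] are written inside the requirement string;
        # accept-rom-license downloads the Atari ROMs automatically
        "gymnasium[atari,accept-rom-license]",
    ],
)

Note that conda itself does not resolve pip extras; these requirements are installed by pip inside the conda environment.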
0 votes · 0 answers · 21 views
Garage framework dimension or concatenation error
I am working on the Robosuite framework to simulate a robotic control task with a reinforcement learning (RL) algorithm. I'm implementing the Cross-Entropy Method (CEM) from the garage library. I need ...
0 votes · 0 answers · 33 views
Applying reinforcement learning to an environment combining continuous and discrete action spaces
I have a custom gym environment with 3 continuous action dimensions and 1 discrete one. I would like to apply a reinforcement learning algorithm; however, I am not sure what to use.
Below you can find ...
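One way to express such a hybrid space is a Dict action space (a sketch; support varies — RLlib handles Dict/Tuple action spaces, while Stable-Baselines3 generally does not, so with SB3 a common fallback is to add the discrete choice as an extra Box dimension and round it inside step()):

from gym import spaces
import numpy as np

action_space = spaces.Dict({
    "continuous": spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32),
    "discrete": spaces.Discrete(4),  # hypothetical number of discrete options
})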
1 vote · 1 answer · 857 views
Module 'numpy' has no attribute 'bool8' in CartPole problem, OpenAI Gym
I'm a beginner trying to run this simple code, but it gives me the exception "module 'numpy' has no attribute 'bool8'", as you can see in the screenshot below. Gym version is 0.26.2 & ...
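np.bool8 was an alias that NumPy later removed, while gym 0.26.2 still references it internally; restoring the alias before importing gym is a quick workaround (a sketch — pinning an older NumPy, or moving to gymnasium, is the cleaner fix):

import numpy as np

# Restore the removed alias so gym 0.26.2's internal checks keep working
if not hasattr(np, "bool8"):
    np.bool8 = np.bool_

import gym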
0 votes · 1 answer · 261 views
Error while running "pip install gym==0.21.0" in WSL 2: "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'"
I ran "pip install gym==0.21.0" and got this:
Collecting gym==0.21.0
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
...
0 votes · 0 answers · 37 views
error when trying to render a gym environment
I wrote and ran this snippet of code a few weeks ago, and it worked.
import gym
# Create predefined environment
env = gym.make('FrozenLake-v1')
# Print environment in terminal
env.render()
But ...
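In gym 0.26+ the render mode is fixed when the environment is created, which is the usual cause of this breakage; a sketch for terminal output:

import gym

# "ansi" returns the rendering as a string suitable for printing in a terminal
env = gym.make('FrozenLake-v1', render_mode='ansi')
env.reset()  # recent gym versions require a reset before render()
print(env.render())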
0 votes · 0 answers · 42 views
Render 'InvertedPendulumBulletEnv-v0' from different view - Pybullet, Gym
I trained an RL agent to control an inverted pendulum and wanted to create an animation of it and render the environment. I was able to create the animation and control the pendulum correctly, however ...
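For PyBullet-backed environments, the GUI viewpoint can be moved with the debug-visualizer camera (a sketch; the numbers are hypothetical, it assumes a GUI connection via p.connect(p.GUI), and for offscreen rgb frames p.computeViewMatrixFromYawPitchRoll plus p.getCameraImage is the usual route):

import pybullet as p

p.connect(p.GUI)
# Reposition the GUI camera around the pendulum
p.resetDebugVisualizerCamera(cameraDistance=2.0, cameraYaw=90,
                             cameraPitch=-20, cameraTargetPosition=[0, 0, 0.5])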
0 votes · 0 answers · 30 views
The reset() problem of RecurDyn simulation model with Gym for reinforcement learning training
I'm trying to integrate a RecurDyn simulation model with Gym for reinforcement learning training. The simulation model communicates with RecurDyn through an FMU file (FMI 2.0) in Python. However, when I'...
0 votes · 1 answer · 81 views
Is it fine to make an API call inside a reinforcement learning program?
I have made a game simulation with a REST API available, and I would like to create a reinforcement learning AI in Python using gym from OpenAI.
So, is it fine to make API calls inside the step ...
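Functionally this is fine; the main cost is that network latency then dominates training throughput. A minimal sketch of a step() that calls a REST endpoint (the URL and JSON schema are hypothetical, and the spaces are omitted for brevity):

import gym
import requests

class RemoteSimEnv(gym.Env):
    # action_space / observation_space omitted; old-gym 4-tuple step API assumed
    def step(self, action):
        resp = requests.post("http://localhost:8000/step",
                             json={"action": int(action)}).json()
        return resp["obs"], resp["reward"], resp["done"], resp.get("info", {})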
0 votes · 0 answers · 22 views
Model-based RL | How to convert a custom Gym environment with discrete actions to continuous
I am working on a project where I have a Gym environment with a discrete action space and a continuous observation space similar to the CartPole environment from Gym. I am trying to convert the ...
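One approach is an ActionWrapper that exposes a Box and bins it back to the original discrete actions (a sketch, assuming a Discrete(n) source space):

import gym
import numpy as np

class ContinuousActionWrapper(gym.ActionWrapper):
    def __init__(self, env):
        super().__init__(env)
        self.n = env.action_space.n
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def action(self, act):
        # map the continuous value in [-1, 1] onto one of the n discrete actions
        idx = int((act[0] + 1.0) / 2.0 * self.n)
        return int(np.clip(idx, 0, self.n - 1))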
0 votes · 0 answers · 38 views
DRL Agent is getting stuck on the lower bound of my action space
I'm new to DRL and I've set up an env for an RIS-assisted CoMP-NOMA network (this is unimportant, I believe), but there's a function here that gives me some "rates". I'm supposed to be ...
0 votes · 0 answers · 41 views
How to preprocess observations for a DQN agent in Stable Baselines3?
I'm working on a reinforcement learning project using the Stable Baselines3 library to train a DQN agent on the Breakout-v4 environment from OpenAI Gym. I've written some code to preprocess the ...
0 votes · 0 answers · 32 views
Replay buffer in StableBaselines3 for a Gymnasium environment
I'm creating a customized replay buffer class based on ReplayBuffer from stable_baselines3.common.buffers, using a gymnasium environment instead of the gym environment.
The return value of the env....
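The core API difference to handle here: gymnasium's reset() returns (obs, info) and its step() returns a 5-tuple, while SB3's buffers store a single done flag. A sketch of the conversion:

import gymnasium

env = gymnasium.make("CartPole-v1")
obs, info = env.reset()                    # gymnasium: (obs, info)
action = env.action_space.sample()
next_obs, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated             # collapse to the single flag SB3 buffers expect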
0 votes · 0 answers · 41 views
How to implement Robust Adversarial Reinforcement Learning using Stable Baselines 3?
My goal is to implement a custom version of the Robust Adversarial Reinforcement Learning (RARL) algorithm using the algorithms and functions already provided by Stable Baselines 3 (SB3).
The ...
0 votes · 1 answer · 95 views
TypeError: unsupported operand type(s) for >>: 'list' and 'int' in pokerenv
I am trying to use the pokerenv library for a reinforcement learning project, but even the example code provided by the documentation itself produces the following error:
------------------------------...
0 votes · 0 answers · 32 views
I have no idea why my DQN produces only 0 from q_values.argmax().item()
I used q_values.argmax().item() to select an action during training, but it keeps producing the same value, 0, even though my action space is 4001~8000. Maybe some of my code structure is not ...
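One likely culprit, independent of the rest of the code: argmax() returns a network output index in [0, N-1], never a raw environment action, so an action space of 4001~8000 needs an explicit offset (a sketch):

import torch

q_values = torch.randn(1, 4000)        # one Q-value per action in 4001..8000
idx = q_values.argmax(dim=1).item()    # index in [0, 3999]
action = 4001 + idx                    # map the index back to the environment's action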
2 votes · 2 answers · 505 views
OverflowError When Setting Up gym_super_mario_bros Environment in Python on JupyterLab
I’ve been following a tutorial on YouTube (https://youtu.be/2eeYqJ0uBKE?si=Vx6gybqKh3ApSfXV) to build a Super Mario AI model using Python. However, when setting up the gym_super_mario_bros environment,...
0 votes · 1 answer · 44 views
How do I log observations after reset in Stable_Baselines3?
I want to log each observation obtained after reset during training, while using SB3.
Based on this issue message, I decided to use the Monitor wrapper instead of a callback.
However, the Monitor ...
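A small wrapper is often simpler than Monitor for this; a sketch that records every observation returned by reset() (assuming a gymnasium-style API, as in SB3 >= 2.0):

import gymnasium as gym

class ResetLogger(gym.Wrapper):
    def __init__(self, env):
        super().__init__(env)
        self.reset_obs = []  # every observation returned by reset()

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.reset_obs.append(obs)
        return obs, info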
0 votes · 0 answers · 84 views
How to Solve mujoco-py Installation Error Caused by Windows Path Length Limitation?
When using the OpenAI Gym HalfCheetah-v4 environment on Windows, I encountered a path length limitation issue. The specific error is as follows:
could not create 'C:\Users\zchy\AppData\Local\Packages\...
0 votes · 1 answer · 60 views
requested array would exceed the maximum number of dimension of 1 issue in gym
Suppose we have the following code:
import gym
from stable_baselines3 import PPO
env = gym.make("CartPole-v1", render_mode="human")
model = PPO("MlpPolicy", env, ...
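A frequent cause of this particular error is mixing the legacy gym package with SB3 >= 2.0, which targets gymnasium; swapping the import is often enough (a sketch, assuming SB3 2.x is installed):

import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1", render_mode="human")
model = PPO("MlpPolicy", env, verbose=1)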
0 votes · 0 answers · 118 views
Overriding environment OpenAI gym registry error
To the best of my knowledge, I followed this documentation for gym_examples verbatim and received this error:
logger.warn(f"Overriding environment {new_spec.id} already in registry.")
I ...
0 votes · 1 answer · 229 views
Cython Compiler Error When Running GymEnv Library in Python
The error happens on the last line of this code section:
import warnings
warnings.filterwarnings("ignore")
from torch import multiprocessing
from collections import defaultdict
import ...
0 votes · 0 answers · 35 views
I'm getting an "invalid type 'ResourceVariable' must be a string or Tensor" error from the RL compile function when trying to train a DQN with Keras
I was trying to train a model on a gym environment when I encountered the following error:
TypeError: Argument `fetch` = <tf.Variable 'dense/kernel:0' shape=(4, 24) dtype=float32> has invalid ...
1 vote · 0 answers · 48 views
CUDAError: Not enough memory for an RL environment using Gymnasium
from TTset_main import load_TTset
# File paths for training and validation data
filted_tset_path = "path.csv"
filted_vset_path = "path.csv"
# Load training and validation sets
...
0 votes · 0 answers · 19 views
co-training of agents with message passing
Let's say I have two kinds of RL agents (A and B). I want to train 3 A's and 1 B together such that after each time-step a message passes from all A's to B and something from B to all A's. In other words ...
0 votes · 0 answers · 132 views
How to Build More Realistic simulation Models Using MESA Agent-Based model and Reinforcement Q-Learning Methods?
I am working on a project to compare different modeling techniques for optimizing the waiting time of consumers at a movie theater. Specifically, I have created simple models using:
Discrete Event ...
0 votes · 0 answers · 55 views
Unable to allocate memory for a Reinforcement Learning problem
import gym
from gym import spaces
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import ...
1 vote · 0 answers · 129 views
Integrating Dirichlet Distribution into PPO in Stable Baselines3
I'm working with the Stable Baselines3 library to train a Proximal Policy Optimization (PPO) model for a reinforcement learning project. I want to integrate a Dirichlet distribution for action ...
0 votes · 0 answers · 49 views
Keras RL DQNAgent error 'list' object has no attribute 'shape'
I am trying to train a multiheaded DQN agent with keras-rl. I created a custom environment with gym, and when I try to build the agent I get the error "'list' object has no attribute 'shape'"
I ...
0 votes · 0 answers · 94 views
AssertionError: Out of bounds observation when using action mask in Petting zoo env
I have a custom Petting Zoo parallel env, which worked fine until I added some action masks.
I followed the action_masking tutorial and tried to implement action masking this way (in a different class ...
0 votes · 0 answers · 57 views
How do I use the gym supermario v3 env as model input but show the supermario v0 env on the screen?
I want to run gym-super-mario-bros on my computer. I trained my agent on the supermario v3 env, but I want to watch it in the v0 version. Is there a good way to use my trained agent in the supermario v3 env but render the v0 ...
0 votes · 0 answers · 24 views
Why do I get a mismatch between Box space and tensor format in torch when I run this Ray RLlib code?
When I try to use Ray RLlib for training, line 228 in RLlib's complex input network raises an error:
SampleBatch.OBS: torch.reshape(
^^^^^^^^^^^^^^
TypeError: reshape(): argument 'input' (...
0 votes · 0 answers · 185 views
Environment "Breakout" not found in OpenAI Gym
I'm trying to create the "Breakout" environment using OpenAI Gym in Python, but I'm encountering an error stating that the environment doesn't exist. Here's my code:
import matplotlib.pyplot ...
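The Atari environments are not bundled with gym/gymnasium; they come from the ale-py package and live under the ALE namespace (a sketch, assuming something like pip install "gymnasium[atari,accept-rom-license]" has been run):

import gymnasium as gym

env = gym.make("ALE/Breakout-v5")  # the ALE/ prefix is required for ale-py envs
obs, info = env.reset()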
1 vote · 0 answers · 131 views
Building a Reinforcement Learning Model by combining outputs of many different ML models
Compared to many people, I am an amateur at coding, and I need some guidance.
After careful writing and tweaking to build a stock price predictor (hopefully a trading bot after that), I built 3 ...
0 votes · 0 answers · 211 views
AttributeError: 'NoneType' object has no attribute 'cuda'
I'm encountering an AttributeError when trying to run a PPO trainer on an OR-GYM environment for inventory management using Ray RLlib and PyTorch in a CPU-only setup. Despite explicitly setting ...
0 votes · 0 answers · 26 views
Expected type 'MyEnv', got 'Env' instead
I have created my custom environment with OpenAI Gym, and I'm getting the above warning on this line:
env: MyEnv= gym.make('gym_envs/MyEnv-v0')
When I remove MyEnv, I don't get the warning there but ...
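gym.make() is annotated as returning gym.Env, so the IDE's type checker flags the narrower annotation; an explicit cast silences it without changing runtime behavior (a sketch — the import path for MyEnv is hypothetical):

from typing import cast

import gym
from gym_envs import MyEnv  # hypothetical import path for your env class

env = cast(MyEnv, gym.make('gym_envs/MyEnv-v0'))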
0 votes · 0 answers · 258 views
IsaacGymEnvs: CUDA error: an illegal memory access was encountered --> FrankaCabinet task (modified)
I am creating a simpler FrankaCabinet task in IsaacGym in which I am replacing the cabinet with just a box with a cylinder to be reached to. For the same, I have deleted all the cabinet and props ...
0 votes · 0 answers · 88 views
start position gym minigrid
I am trying to modify the start position of the agent in MiniGrid, but "agent_pos" does not seem to work. Would anyone know what to do?
import gym
from gym_minigrid.minigrid import *
# ...
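Mutating agent_pos after construction does not stick because reset() regenerates the grid; the built-in envs take the start pose as a constructor argument instead (a sketch, assuming gym-minigrid's EmptyEnv):

from gym_minigrid.envs import EmptyEnv

# For custom layouts, subclass and set the pose inside _gen_grid instead
env = EmptyEnv(size=8, agent_start_pos=(2, 3), agent_start_dir=0)
env.reset()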
1 vote · 0 answers · 485 views
Keras Sequential unhashable type: 'list' error at import
I'm having problems running a deep Q-learning model with Keras-RL and OpenAI Gym in Python. In particular, I get an error when loading the Sequential model from the keras package.
The code is the ...
1 vote · 1 answer · 597 views
how to fix the wheel-building error when installing Box2D
I am attempting to write a Reinforcement learning model using Box2D and TensorFlow in Google Colab. I have a simple one line install command for everything as I've found that Colab breaks whenever I ...
0 votes · 0 answers · 38 views
Why is my DQN exhibiting performance decrease over the training cycle to solve the Travelling Salesman Problem?
I am currently trying to train a DQN (using gym and pytorch) to solve small instances of the Travelling salesman problem (for now I just want to solve a size 10 problem as I know it is capable of ...
0 votes · 0 answers · 69 views
How to fix ValueError: setting an array element with a sequence?
I was looking at code implementing the Q-learning algorithm in the gymnasium Pendulum-v1 environment and came across a GitHub repo with a similar implementation, but when I try to run the code I get an ...