
All Questions

1 vote
0 answers
17 views

"Install TensorFlow and Gym with Conda for usage with Jupyter Notebook in macOS Sonoma, Intel processor

I am trying to set up a virtual environment using Conda to code a lab activity regarding RL, but it is proving to be quite a nightmare due to incompatibilities among different library versions. The ...
Javier • 191
0 votes
0 answers
35 views

I'm getting "invalid type 'ResourceVariable' must be a string or Tensor" from the keras-rl compile function when trying to train a DQN with Keras

I was trying to train a model on a gym environment when I encountered the following error: TypeError: Argument `fetch` = <tf.Variable 'dense/kernel:0' shape=(4, 24) dtype=float32> has invalid ...
Jas Chawla
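
This error typically appears when the original keras-rl package (written for TF1 sessions) is paired with TF2's Keras. A minimal sketch of a DQN setup that avoids it, assuming the TF2-compatible keras-rl2 fork and a gym version older than 0.26 (whose reset() still returns a bare observation):

import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannQPolicy

env = gym.make("CartPole-v1")
n_actions = env.action_space.n

# keras-rl feeds batches shaped (batch, window_length, *obs_shape)
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(24, activation="relu"),
    Dense(24, activation="relu"),
    Dense(n_actions, activation="linear"),
])

agent = DQNAgent(model=model, nb_actions=n_actions,
                 memory=SequentialMemory(limit=50000, window_length=1),
                 policy=BoltzmannQPolicy(),
                 nb_steps_warmup=100, target_model_update=1e-2)
agent.compile(Adam(learning_rate=1e-3), metrics=["mae"])
agent.fit(env, nb_steps=10000, visualize=False, verbose=1)
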
1 vote
0 answers
132 views

Building a Reinforcement Learning Model by combining outputs of many different ML models

Compared to many people, I am an amateur at coding and I need some guidance. After careful writing and tweaking to build a stock price predictor (hopefully a trading bot after that), I built 3 ...
Kaan • 13
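
One common pattern for this: treat the pretrained predictors' outputs as the RL agent's observation, so the agent learns how to weigh them. A sketch of a custom Gym env built that way; the models list, the data array, and the reward logic are placeholders, not the asker's code:

import gym
import numpy as np
from gym import spaces

class EnsembleTradingEnv(gym.Env):
    """Observation = one prediction per pretrained model."""
    def __init__(self, models, data):
        super().__init__()
        self.models = models    # e.g. the 3 trained predictors
        self.data = data        # market features, one row per timestep
        self.t = 0
        self.action_space = spaces.Discrete(3)  # sell / hold / buy
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(len(models),), dtype=np.float32)

    def _obs(self):
        row = self.data[self.t].reshape(1, -1)
        preds = [float(np.ravel(m.predict(row))[0]) for m in self.models]
        return np.array(preds, dtype=np.float32)

    def reset(self):
        self.t = 0
        return self._obs()

    def step(self, action):
        self.t += 1
        reward = 0.0  # placeholder: plug in P&L from the chosen action
        done = self.t >= len(self.data) - 1
        return self._obs(), reward, done, {}
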
1 vote
0 answers
486 views

Keras Sequential "unhashable type: 'list'" error at import

I'm having problems with running a deep q-learning model with Keras-RL and OpenAI Gym in Python. In particular I get an error when loading the Sequential model from the keras package. The code is the ...
Albifer • 187
0 votes
0 answers
105 views

Error when creating DQNAgent with Keras-RL: "Keras symbolic inputs/outputs do not implement __len__"

I'm trying to create a DQNAgent using Keras-RL to train a CartPole-v0 environment. However, I'm encountering an issue related to the Keras symbolic inputs/outputs not implementing __len__. Here's the ...
Hassan Mslohi
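
This error usually comes from handing keras-rl a model built with the functional API (an explicit Input layer) under TF2. A workaround often suggested is to define the network with Sequential and a Flatten input layer instead; a sketch for CartPole-v0's 4-dimensional observation:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

def build_model(obs_shape, n_actions):
    model = Sequential()
    # keras-rl prepends the window_length dimension to the observation
    model.add(Flatten(input_shape=(1,) + obs_shape))
    model.add(Dense(24, activation="relu"))
    model.add(Dense(n_actions, activation="linear"))
    return model

model = build_model((4,), 2)  # CartPole-v0: 4 observations, 2 actions
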
0 votes
0 answers
93 views

OpenAI Gym 'CarRacing-v2' (discrete!): ValueError when wrapping in a TensorFlow PyEnvironment

I first instantiate the discrete version of the 'CarRacing-v2' environment and then want to wrap it in TensorFlow (TF): env = gym.make("CarRacing-v2", continuous = False) #discrete version ...
Rouven Rieger
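
Rather than wrapping a gym.make() result by hand, tf-agents can construct and wrap the env in one step; suite_gym.load accepts gym_kwargs, so the continuous=False flag passes through. A sketch, assuming a tf-agents/gym version pair that knows CarRacing-v2:

from tf_agents.environments import suite_gym, tf_py_environment

py_env = suite_gym.load("CarRacing-v2", gym_kwargs={"continuous": False})
tf_env = tf_py_environment.TFPyEnvironment(py_env)
print(tf_env.action_spec())                    # scalar integer spec when discrete
print(tf_env.time_step_spec().observation)
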
0 votes
1 answer
565 views

Deep Reinforcement Learning on Raspberry Pi

I am trying to run a deep reinforcement learning problem on a Raspberry Pi 4. The code runs successfully on Colab but shows the following error on my Pi: /home/pi/.local/lib/python3.9/site-packages/...
Sk D • 1
0 votes
1 answer
249 views

cannot import name '__version__' issue

I would like to implement a deep neural network + RL in Python. Here is my code: import random import gym from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense,...
data science
0 votes
0 answers
138 views

tf_agents reset environment using actor

I'm trying to understand how to use the Actor class in tf_agents. I am using DDPG (actor-critic, although this doesn't really matter per se). I am also learning off of the gym package, although again this ...
brian_ds • 367
0 votes
1 answer
101 views

Multiple errors in deep reinforcement learning project with keras and openai gym

I have copied code from a source about deep Q-learning to try to learn from it, but it is an older source, so there are many things going wrong with both Keras and OpenAI Gym. I have tried for ...
Skipper1504
0 votes
1 answer
551 views

A2C and stable_baselines3

I'm trying to use this code from a repo on GitHub (https://github.com/nicknochnack/Reinforcement-Learning-for-Trading-Custom-Signals/blob/main/Custom%20Signals.ipynb) in Point 3: model = A2C('...
Unagi71
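
The A2C call pattern from that notebook is standard stable_baselines3; a minimal sketch against a stock env (the notebook's custom trading env swaps in the same way), assuming an sb3/gym pairing that matches (sb3 1.x with gym, sb3 2.x with gymnasium):

import gym
from stable_baselines3 import A2C

env = gym.make("CartPole-v1")
model = A2C("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Predict through the model's own vectorized env to avoid API mismatches
vec_env = model.get_env()
obs = vec_env.reset()
action, _state = model.predict(obs, deterministic=True)
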
1 vote
2 answers
590 views

DQN, TF, nested spaces.Dict: how to deal with a variable-size observation space?

I am very new to RL and DQN, and I am trying to code an agent for my problem statement. I am using the replay buffer concept and trying to learn to code this agent manually. My observation space is a ...
Ai_Nebula
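
One common way to give DQN a fixed-size view of a variable-size observation: choose a maximum count, zero-pad up to it, and expose a validity mask next to the data in the spaces.Dict. MAX_ITEMS and the per-item feature width below are assumptions:

import numpy as np
from gym import spaces

MAX_ITEMS, FEATS = 16, 4
observation_space = spaces.Dict({
    "items": spaces.Box(-np.inf, np.inf, shape=(MAX_ITEMS, FEATS), dtype=np.float32),
    "mask":  spaces.Box(0, 1, shape=(MAX_ITEMS,), dtype=np.int8),
})

def pad_observation(items):
    # items: (n, FEATS) array with n <= MAX_ITEMS
    obs = np.zeros((MAX_ITEMS, FEATS), dtype=np.float32)
    mask = np.zeros(MAX_ITEMS, dtype=np.int8)
    obs[:len(items)] = items
    mask[:len(items)] = 1
    return {"items": obs, "mask": mask}
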
0 votes
1 answer
557 views

Attribute error in PPO algorithm for Cartpole gym environment

I'm trying to run the code from here (GitHub link on this page): https://keras.io/examples/rl/ppo_cartpole/ I'm getting an attribute error in the training section from observation = observation....
Max • 13
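
The keras.io PPO example predates gym 0.26, where env.reset() started returning an (observation, info) tuple and env.step() five values; calling observation.reshape(...) on that tuple raises exactly this kind of AttributeError. A sketch of the unpacking newer gym versions expect:

import gym

env = gym.make("CartPole-v1")
observation, info = env.reset()                  # gym >= 0.26
observation = observation.reshape(1, -1)

action = env.action_space.sample()
observation, reward, terminated, truncated, info = env.step(action)
done = terminated or truncated
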
0 votes
2 answers
337 views

I'm having problems adding an observation space to a custom Gym environment

I am using this code. I have modified it to work with a car (0 Left, 1 Straight, 2 Right). I would like to add some observations, such as Destination (XY), Car Location (XY) bearing (angle), ...
BBC Basic
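
A sketch of one way to expose destination, car location, and bearing as a single Box observation; the bounds and field order are assumptions based on the question:

import numpy as np
from gym import spaces

# [dest_x, dest_y, car_x, car_y, bearing_deg]
low  = np.array([0, 0, 0, 0, 0], dtype=np.float32)
high = np.array([1000, 1000, 1000, 1000, 360], dtype=np.float32)
observation_space = spaces.Box(low=low, high=high, dtype=np.float32)

def make_observation(dest, car, bearing):
    return np.array([dest[0], dest[1], car[0], car[1], bearing], dtype=np.float32)
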
0 votes
1 answer
2k views

cannot import name 'load_model' from 'keras.engine.saving'

Issue: I am trying to use the gym connect4 env. When running this sample code: import gym from gym_connect_four import RandomPlayer, ConnectFourEnv env: ConnectFourEnv = gym.make("ConnectFour-...
Axel P
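
keras.engine.saving was an internal module that later Keras releases removed; the public import path has long been the models module. If the failing import is in your own code, switching paths fixes it; if it lives inside a pinned dependency (gym_connect_four's keras-rl requirement, for instance), pinning an older Keras is the usual answer. The public path:

from tensorflow.keras.models import load_model  # instead of keras.engine.saving

model = load_model("my_model.h5")  # hypothetical file name
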
0 votes
0 answers
80 views

KerasRl ValueError: Error when checking input: expected input_3 to have 3 dimensions, but got array with shape (1, 1, 9, 9)

I made an env with Gym for a Sudoku puzzle and I want to train an AI on it using Keras-RL (I've removed the step, reset, and render methods of the environment so as not to have too much code for Stack Overflow). I ...
Lucas 'Snufkin' Gautier
2 votes
0 answers
201 views

I need to install TensorFlow 1.x and the code worked until last week on Google Colab. Now it does not work

I need to install TensorFlow 1.x to solve my problem. When I ran the code below a week back on Google Colab (Python version 3.8), it installed successfully, and since then I haven't changed ...
monir zaman
1 vote
1 answer
2k views

How do you use OpenAI Gym 'wrappers' with a custom Gym environment in Ray Tune?

How do you use OpenAI Gym 'wrappers' with a custom Gym environment in Ray Tune? Let's say I built a Python class called CustomEnv (similar to the 'CartPoleEnv' class used to create the OpenAI Gym ...
hackr • 81
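
With Ray, the usual route is to register a factory that builds the env and applies the wrappers, then refer to it by name in the Tune config. A sketch, with a stand-in CustomEnv and an arbitrary wrapper choice (assuming a gym version whose TimeLimit matches the env's old-style step API):

import gym
import numpy as np
from ray.tune.registry import register_env

class CustomEnv(gym.Env):
    # stand-in for the question's CustomEnv
    observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
    action_space = gym.spaces.Discrete(2)
    def reset(self):
        return self.observation_space.sample()
    def step(self, action):
        return self.observation_space.sample(), 0.0, False, {}

def env_creator(env_config):
    env = CustomEnv()
    return gym.wrappers.TimeLimit(env, max_episode_steps=500)

register_env("custom_env_wrapped", env_creator)
# then e.g. tune.run("PPO", config={"env": "custom_env_wrapped", ...})
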
2 votes
2 answers
383 views

tf-agents Actor/Learner: TFUniformReplayBuffer dimensionality issue - invalid shape of replay buffer vs. actor update

I am trying to adapt this tf-agents actor<->learner DQN Atari Pong example to my Windows machine using a TFUniformReplayBuffer instead of the ReverbReplayBuffer, which only works on Linux machines ...
Sch_Stef
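
A sketch of the usual TFUniformReplayBuffer substitution: build it from the agent's collect spec (agent here stands for an already-constructed DqnAgent) and sample with a time dimension of 2 so one-step TD targets line up:

from tf_agents.replay_buffers import tf_uniform_replay_buffer

replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,   # agent: your DqnAgent instance
    batch_size=1,                        # one trajectory stream from the driver
    max_length=100_000)

dataset = replay_buffer.as_dataset(
    sample_batch_size=64, num_steps=2, num_parallel_calls=3).prefetch(3)
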
0 votes
1 answer
282 views

TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices

Before I start, I know there are a lot of questions with the same error but none of them solved the issue for me. I have a PPO implementation for playing the CarRacing-v2 environment from gym (gym==0....
brownie • 171
0 votes
1 answer
351 views

tf_agents and reverb produce incompatible tensor

I'm trying to implement a DDPG using tf_agents and reverb, but I can't figure out how to make both libraries work together. For this, I'm trying to use the code from the DQL tutorial from tf_agents with my ...
RobinW • 322
0 votes
1 answer
267 views

How to get two arrays as output from gym.Env to fit a DQN NN

I can't figure out how to make the gym.Env put out two separate arrays; it just seems to combine them into one array containing two arrays. But fitting the DQN NN expects two arrays. I'm hoping to put the ...
Cam Worrall
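
keras-rl ships a MultiInputProcessor for exactly this situation: the env returns a tuple of arrays and the Keras model takes one Input per array, with the processor routing each part to its matching input. A sketch with assumed array sizes (4 and 6) and action count:

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Concatenate, Dense, Flatten
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.processors import MultiInputProcessor

n_actions = 2
in_a = Input(shape=(1, 4))   # (window_length, size of first array)
in_b = Input(shape=(1, 6))   # (window_length, size of second array)
h = Concatenate()([Flatten()(in_a), Flatten()(in_b)])
h = Dense(32, activation="relu")(h)
out = Dense(n_actions, activation="linear")(h)
model = Model([in_a, in_b], out)

agent = DQNAgent(model=model, nb_actions=n_actions,
                 memory=SequentialMemory(limit=50000, window_length=1),
                 processor=MultiInputProcessor(nb_inputs=2))
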
0 votes
1 answer
129 views

ValueError: Error when checking input: expected flatten_input to have shape... but got the shape

When trying to implement a DQN with Tensorflow/Keras, on an openai-gym environment, I'm encountering this error: ValueError: Error when checking input: expected flatten_input to have shape (1, 4) but ...
ayuval • 81
3 votes
6 answers
4k views

ValueError: Error when checking input: expected flatten_input to have shape (1, 4) but got array with shape (1, 2)

I'm fairly new to RL and I can't really understand why I'm getting this error. import random import numpy as np from tensorflow.keras.models import Sequential from tensorflow.keras....
Pedro Carvalho
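
keras-rl stacks window_length observations in front of the observation shape, so with window_length=1 the network must accept (1, obs_dim); a "(1, 4) but got (1, 2)" complaint means the model was sized for one env's observations and fed another's. Deriving the shape from the env avoids the mismatch:

import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

env = gym.make("MountainCar-v0")   # 2-dimensional observations, for instance
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(24, activation="relu"),
    Dense(env.action_space.n, activation="linear"),
])
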
1 vote
1 answer
463 views

TensorFlow with custom gym environment: Layer "dense_6" expects 1 input(s), but it received 2 input tensors

I am trying to use TF to solve a custom gym environment, all within Google Colab. The main script is the TF "DQN Tutorial" available here. In place of env_name = "CartPole-v0" I ...
LarrySnyder610
1 vote
2 answers
268 views

Why does my model not learn? Very high loss

I built a simulation model where trucks collect garbage containers based on their fill level. I used OpenAI Gym and TensorFlow/Keras to create my deep reinforcement learning model... But my training ...
Michele Raso
2 votes
0 answers
966 views

OMNeT++ with Reinforcement Learning Tools [ML]

I am currently failing to find an easy and modular framework to link OpenAI Gym or TensorFlow or Keras with OMNeT++ in such a way that I can produce communication between each tool and have online ...
lionyouko
0 votes
1 answer
785 views

Real-time Keras-RL DQN predictions

Hello everyone. I followed this tutorial https://www.youtube.com/watch?v=hCeJeq8U0lo&list=PLgNJO2hghbmjlE6cuKMws2ejC54BTAaWV&index=2 to train a DQN agent, and everything works: env = gym.make('...
abdelmoumen
0 votes
0 answers
413 views

Error: DQN expects a model that has one dimension for each action, in this case (1, 2, 1, 0)

I am building an RL agent for which the model is defined: def build_model(states, actions): azioni = list(actions) model = Sequential() model.add(Dense(4, activation='relu', input_shape=[len(azioni)]))...
Michele Raso
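
keras-rl's DQNAgent requires the model to end in a flat vector with one Q-value per discrete action; the snippet above instead builds the input layer from len(azioni), conflating the action count with the observation shape. A sketch of the expected layout:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

def build_model(n_states, n_actions):
    model = Sequential()
    model.add(Flatten(input_shape=(1, n_states)))     # observations in
    model.add(Dense(24, activation="relu"))
    model.add(Dense(n_actions, activation="linear"))  # one Q-value per action
    return model
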
1 vote
0 answers
1k views

TypeError: 'Sequential' object is not subscriptable when using the model to predict something

I've tried to create a model and a function that plays one step of reinforcement learning in Lunar Lander. import gym env = gym.make("LunarLander-v2") — this is the environment. keras.backend....
bfdd • 21
2 votes
1 answer
715 views

ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (1, 1, 2)

Edit: Problem solved; solution below. Attempting to build an RL model to handle a task. There are two inputs: x and y, both measured on an integer scale of 1 to 100. Based on these two inputs there ...
nick • 35
0 votes
0 answers
131 views

Keras-RL error when training agent for tic tac toe game: "expected dense_16_input to have 2 dimensions, but got array with shape (1, 1, 3, 3)"

I just recently tried using Keras-RL to train an agent in a tic-tac-toe game I made, to practice making custom environments for my final third-year project, which involves doing this but on a much larger ...
Sultan Al-Rashed
0 votes
1 answer
308 views

Solving BipedalWalker from OpenAI Gym using TensorFlow

I am trying to solve BipedalWalker from OpenAI. The problem is that I always get the error: The shape of the output should be 4 values between -1 and 1 (like: [ 0.45099565 -0.7659952 -0.01972992 ...
Malte Rothkamm
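
BipedalWalker's action space is Box(-1, 1, (4,)), so the network's output layer should emit exactly 4 values squashed into [-1, 1]; a tanh head is the simplest fit. A sketch of just the network side:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(64, activation="relu", input_shape=(24,)),  # 24 = observation size
    Dense(64, activation="relu"),
    Dense(4, activation="tanh"),                      # 4 torques in [-1, 1]
])
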
5 votes
0 answers
661 views

Alternatives to Stable Baselines3

Can you suggest some alternatives to Stable Baselines that I can use to train my agent in reinforcement learning? P.S. I'm using the gym MiniGrid environment, so please suggest ones that work with it.
Kunal Rawat
2 votes
2 answers
2k views

Error: DQN expects a model that has one dimension for each action

I am building an RL agent for which the model is defined: def build_model(height,width,channels,actions): model = Sequential() model.add(Conv2D(32,(8,8),strides=(4,4),activation='relu',...
Shriman Keshri
0 votes
1 answer
246 views

I get horrible results with my DDPG model in TF2

Hello, my DDPG model, which I have implemented in TF2, gets horrible results on every OpenAI Gym env that has continuous actions, and I need help finding the problem. I run this on my GPU. On env ...
Ajan16 • 41
0 votes
1 answer
629 views

How do I get Target Q-values in Bipedalwalker-v3 in openai-gym, reinforcement learning?

I am new to reinforcement learning and I was trying to solve BipedalWalker-v3 using deep Q-learning. However, I found out that env.action_space.sample() is a NumPy array with 4 elements, and I am ...
jigar • 247
0 votes
1 answer
7k views

ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (1, 1, 15)

I am trying to make a custom Gym environment so that I can use it in a Keras network. But there is a problem when I try to fit the neural network: ValueError: Error when ...
Manu Jiménez
0 votes
1 answer
496 views

How to use a trained RL model to make a prediction?

I would like to use my trained RL model for a discrete test prediction. This is how the model is built: model = Sequential() model.add(Dense(60, activation='relu', input_shape=states)) model.add(Dense(...
Vincent Roye • 2,821
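
For a one-off discrete prediction, the trained network alone is enough: batch a single observation the way it was batched during training and take the argmax over the Q-values. A sketch, where state and model stand for the asker's test observation and trained network:

import numpy as np

obs = np.asarray(state, dtype=np.float32).reshape(1, -1)
# use .reshape(1, 1, -1) instead if the net was trained with keras-rl's
# extra window_length dimension
q_values = model.predict(obs)
action = int(np.argmax(q_values[0]))
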
3 votes
1 answer
968 views

How to build a DQN that outputs 1 discrete and 1 continuous value as a pair?

I am building a DQN for an OpenAI Gym environment. My observation space is only 1 discrete value, but my actions are: self.action_space = (Discrete(3), Box(-100, 100, (1,))) ex: [1,56], [0,24], [2,-78]....
Vincent Roye • 2,821
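
A plain DQN only covers a single discrete action, so a (Discrete(3), Box(-100, 100, (1,))) pair needs either discretizing the Box or a two-headed network trained with a policy-gradient style method. A sketch of the two-headed Keras model only (the training loop is a separate problem):

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

obs_in = Input(shape=(1,))                   # the single discrete observation
h = Dense(32, activation="relu")(obs_in)
discrete_head = Dense(3, activation="softmax", name="choice")(h)
continuous_head = Dense(1, activation="tanh", name="amount")(h)  # scale by 100
model = Model(obs_in, [discrete_head, continuous_head])
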
2 votes
1 answer
1k views

Simple DQN too slow to train

I have been trying to solve the OpenAI Lunar Lander game with a DQN taken from this paper: https://arxiv.org/pdf/2006.04938v2.pdf The issue is that it takes 12 hours to train 50 episodes, so something ...
Marc • 16.5k
0 votes
1 answer
249 views

Python neural network with Keras runs on CPU, but crashes on the GPU

I implemented a neural network that learns to play Pac-Man using gym, box2d, and gym[atari] with Keras models. The training was very slow, so I tried to make it run on my GTX 1060 Max-Q. I installed the ...
nnenthusiast
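
A frequent cause of GPU-only crashes with Keras is TensorFlow reserving all GPU memory at startup; enabling memory growth before any model is built often resolves it, assuming the CUDA/cuDNN installation itself matches the TF version:

import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
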
3 votes
0 answers
3k views

Reinforcement learning agent build: DQNAgent causes unknown problem

I wanted to get into reinforcement learning a bit, so I started with the fairly simple example "Cartpole" by following a hands-on tutorial. GitHub link to the tutorial source code (identical ...
Born4Pizzas
0 votes
2 answers
248 views

Deep Reinforcement Learning Motion in Observation

I am trying to implement a DRL (Deep Reinforcement Learning) Agent for self-driving vehicles. I am currently teaching my agent not to bump on other cars, using a simple camera. There are many ways to ...
LazyAnalyst
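
A standard way to put motion into a camera observation is to stack the last few frames so velocities become visible to the network; gym ships a wrapper for this. A sketch on CarRacing-v2 (any image env works the same way, given a gym version that includes gym.wrappers.FrameStack):

import gym

env = gym.make("CarRacing-v2")
env = gym.wrappers.FrameStack(env, num_stack=4)
print(env.observation_space.shape)   # frames stacked along a new leading axis
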
0 votes
1 answer
450 views

Saving an RL agent with pickle fails because of _thread.RLock -- what is the source of this error?

I am trying to save my reinforcement learning agent class after training, for further training later on, by pickling it. The script used is: with open('agent.pickle','wb') as agent_file: pickle....
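
The unpicklable lock usually belongs to a Keras model (or TF session) stored on the agent; a common workaround is to exclude it from pickling via __getstate__ and save the network's weights separately. A sketch, with a stand-in Agent class:

import pickle

class Agent:
    # stand-in for the question's agent; self.model would be a Keras model
    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("model", None)       # drop the unpicklable Keras model
        return state

agent = Agent()
# agent.model.save_weights("agent_weights.h5")  # persist the network itself
with open("agent.pickle", "wb") as agent_file:
    pickle.dump(agent, agent_file)
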
0 votes
1 answer
978 views

How to use own environment for DDPG without gym

I'm using Keras to build a DDPG model. I followed the official instructions from the Keras site, but I want to use my own environment, not gym. Here is my own environment: class Environment1:...
William • 4,020
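
The Keras DDPG example only calls a small surface of the env: reset(), step(action), and the state/action sizes and bounds. A gym-free class that mimics that surface slots in, assuming you also replace the observation_space/action_space lookups with the class attributes; the dynamics below are placeholders:

import numpy as np

class Environment1:
    num_states = 3
    num_actions = 1
    lower_bound, upper_bound = -1.0, 1.0

    def reset(self):
        self.state = np.zeros(self.num_states, dtype=np.float32)
        return self.state

    def step(self, action):
        # placeholder dynamics: the state drifts toward the applied action
        self.state = 0.9 * self.state + 0.1 * float(action[0])
        reward = -float(np.abs(self.state).sum())
        return self.state, reward, False, {}
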
1 vote
0 answers
134 views

Proximal policy gradient TensorFlow Pendulum issue

import gym import numpy as np import tensorflow as tf class Memory(object): def __init__(self): self.ep_obs, self.ep_act, self.ep_rwd, self.ep_neglogp = [], [], [], [] def ...
okay All Right
1 vote
1 answer
1k views

Why does DQN for the cartpole game have an ascending reward while the loss is not descending?

I wrote a DQN to play the OpenAI gym cart pole game with TensorFlow and tf_agents. The code looks like the following: def compute_avg_return(environment, policy, num_episodes=10): total_return = 0....
Tianhao Zhou
1 vote
0 answers
238 views

Failed to find data adapter that can handle input: <class 'numpy.uint8'>, <class 'NoneType'> - Keras reinforcement learning in OpenAI Gym

I have been trying to train an agent to play Atari games from the OpenAI Gym environment using deep Q-networks, but I am running into an error when trying to use the current state given by the ...
H_Boofer • 453
0 votes
0 answers
918 views

How to solve "KeyError: '/conv2d_1/kernel:0'"

I am trying to use Colab to run the gym package with Pac-Man, since Colab's specs are more powerful than my notebook's. This program simulates successfully in Jupyter on my notebook, which uses ...
Koh PIN WAI