
All Questions

1 vote
0 answers
176 views

Ray Tune logs results in two folders

I am using Ray Tune to fine-tune parameters. Based on the docs, Ray Tune logs the results of each trial to a sub-folder under a specified local dir, which defaults to ~/ray_results. To change ray_results ...
LearnToGrow · 1,750
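For context, a minimal sketch of redirecting the trial output directory, assuming the classic tune.run API; my_trainable and the target path are placeholders:

```python
from ray import tune

def my_trainable(config):
    # placeholder objective; reports a single metric
    tune.report(score=config["lr"] * 2)

# local_dir overrides the default ~/ray_results output location;
# each trial still writes to its own sub-folder underneath it.
analysis = tune.run(
    my_trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    local_dir="/data/ray_results",  # assumed path, adjust to your storage
    num_samples=4,
)
```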
0 votes
1 answer
364 views

Where should I put reuse_actors=True?

After running the code below, it says: INFO trainable.py:172 -- Trainable.setup took 2940.989 seconds. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor ...
Animesh Kumar Paul
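For reference, reuse_actors is a flag on the run call itself, not on the trainable. A minimal sketch assuming the classic tune.run API with a class-based Trainable (the newer Tuner API exposes the same flag via tune.TuneConfig(reuse_actors=True)); note that actor reuse with a class Trainable also needs reset_config:

```python
from ray import tune

class MyTrainable(tune.Trainable):
    def setup(self, config):
        # imagine an expensive model/dataset build here
        self.lr = config["lr"]

    def step(self):
        return {"score": self.lr}

    def reset_config(self, new_config):
        # required for actor reuse: re-initialize cheap state in place
        self.lr = new_config["lr"]
        return True

# reuse_actors=True keeps trial actors alive between trials, so the
# slow setup() runs once per actor instead of once per trial.
analysis = tune.run(
    MyTrainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    reuse_actors=True,
    stop={"training_iteration": 1},
    num_samples=8,
)
```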
0 votes
0 answers
436 views

PyTorch and Ray are used to tune hyperparameters, but the code raises an error

[Error-message screenshots] Those are the error messages I received. I want to write Python code for tuning ...
김윤도
1 vote
0 answers
165 views

Ray Tune samples more than one value for the same model in a multi-agent environment

So I have this weird behavior of Ray Tune that I can't make sense of. What I'm trying to do: I have set up a custom RLlib multi-agent env with two agents. Both agents have different observation and ...
Pat396 · 39
0 votes
1 answer
661 views

ray tune batch_size should be a positive integer value, but got batch_size=<ray.tune.search.sample.Categorical object

I am trying to tune a neural network using Ray. I followed the standard flow to get it running on MNIST data. Data loading: trainset = torchvision.datasets.MNIST( root='../data', train=True, ...
S.Dasgupta
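A common cause of this error, sketched below under the assumption of a Ray 2.x-era API (the error path ray.tune.search.sample points at Ray 2.x): tune.choice objects are only resolved into concrete values when Tune launches a trial, so batch_size must be read from the config the trainable receives, not from the raw search-space dict. The dataset and names here are illustrative:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from ray import tune
from ray.air import session

def train_mnist(config):
    # here config["batch_size"] is already a plain int sampled by Tune
    dataset = TensorDataset(torch.randn(256, 1, 28, 28),
                            torch.randint(0, 10, (256,)))
    loader = DataLoader(dataset, batch_size=config["batch_size"], shuffle=True)
    # ... training loop would go here ...
    session.report({"loss": 0.0})  # placeholder metric

# keep the search-space objects here, not inside the DataLoader call
search_space = {"batch_size": tune.choice([32, 64, 128])}

tuner = tune.Tuner(train_mnist, param_space=search_space)
results = tuner.fit()
```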
0 votes
1 answer
184 views

NaN reward after hyperparameter optimization (Ray, Gym)

I launched a HyperOpt algorithm on a custom Gym environment. This is my code: config = { "env": "affecta", "sgd_minibatch_size": 1000, ...
Clm28 · 1
0 votes
1 answer
204 views

Why does my HyperOpt algorithm return a bad 'best configuration' with the same parameters written several times?

I recently worked on hyperparameter optimization with a search algorithm. The purpose is to train an agent in an OpenAI Gym environment. The problem is the following: when I perform a ...
Clm28 · 1
2 votes
1 answer
1k views

Ray Tune for hyper-parameter tuning with specified session directory (!=/tmp/ray/)

I am using Ray Tune to tune the hyperparameters of a PyTorch model. The storage capacity where the default Ray session directory is located (/tmp/ray) is limited, so I want to specify the session ...
stillsen
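One way to move the session directory, sketched under the assumption that calling ray.init yourself before tuning is acceptable: _temp_dir relocates the session_* folders (logs, spill files), while local_dir in the classic tune.run API relocates the trial results. The paths and objective are placeholders:

```python
import ray
from ray import tune

# Redirect Ray's session files away from /tmp/ray.
ray.init(_temp_dir="/data/ray_tmp")  # assumed path with enough free space

def objective(config):
    tune.report(loss=config["x"] ** 2)

# local_dir additionally moves the trial results away from ~/ray_results.
tune.run(
    objective,
    config={"x": tune.uniform(-1.0, 1.0)},
    local_dir="/data/ray_results",
)
```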
0 votes
1 answer
930 views

How to optimize the hyperparameters of a PPO for training in a Gym environment

I would like to use an optimization algorithm (HyperOptSearch) with ray.tune. In the official documentation, they use this syntax: tuner = tune.Tuner( objective, tune_config=tune.TuneConfig(...
Clm28 · 1
1 vote
0 answers
338 views

Running multiple Ray Tune jobs in parallel using a search algorithm

I want to queue 200+ tuning jobs on my Ray cluster. Each needs to be guided by a search algorithm, as my actual objective function has 40+ parameters. I can do this for a single job like this: ...
XiB · 710
0 votes
0 answers
175 views

Ray Tune | Find optimal network hidden size using PBT

I intend to develop a model to test whether PBT is working correctly or not and want to find the optimal hidden layer size via PBT in ray tune, but the hidden layer sizes found by PBT are not optimal. ...
Arman Asgharpoor
0 votes
1 answer
450 views

Disable Ray Tune parallel hyperparameter tuning

I have an issue with running hyperparameter optimization on my language model because my setup requires about 20GB of GPU memory to train. Without working in a distributed fashion, I keep getting ...
Leo · 89
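A minimal sketch of forcing trials to run one at a time, assuming the classic tune.run API: requesting a full GPU per trial means only one trial fits on a single-GPU machine (the newer Tuner API can do the same via tune.TuneConfig(max_concurrent_trials=1)). The trainable is a placeholder:

```python
from ray import tune

def train_lm(config):
    # placeholder for the ~20GB language-model training step
    tune.report(loss=config["lr"])

tune.run(
    train_lm,
    config={"lr": tune.loguniform(1e-5, 1e-3)},
    # With one GPU available and 1 GPU requested per trial,
    # Tune can only schedule one trial at a time.
    resources_per_trial={"cpu": 4, "gpu": 1},
    num_samples=10,
)
```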
1 vote
1 answer
795 views

Select the best training result from all history using Ray

I'm trying to find optimal hyperparameters with Ray: tuner = tune.Tuner( train, param_space=hyperparams1, tune_config=tune.TuneConfig( num_samples=200, metric="score",...
ckorzhik · 788
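For reference, the older ExperimentAnalysis interface exposes a scope argument that looks across all reported results rather than only each trial's last report. A sketch assuming the classic tune.run API; the trainable and metric names below are illustrative:

```python
from ray import tune

def train(config):
    for step in range(10):
        tune.report(score=config["x"] * step)

analysis = tune.run(train, config={"x": tune.uniform(0, 1)}, num_samples=20)

# scope="all" picks the best trial by its best-ever reported "score",
# not just the value from the final report of each trial.
best_trial = analysis.get_best_trial(metric="score", mode="max", scope="all")
print(best_trial.config)
```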
1 vote
2 answers
316 views

Ray | AttributeError: 'BroadModel' object has no attribute 'model'

I am using Ray Tune to find the optimal hyperparameter values for this model: class BroadModel(tune.Trainable): os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' def build_model(self, config): ...
Arman Asgharpoor
0 votes
1 answer
2k views

Ray Tune | ValueError: Trial returned a result which did not include the specified metric(s) `mse` that `tune.TuneConfig()` expects

I am trying to optimize a Keras model (selecting the best hidden size for the first layer), but I got this error: ValueError: Trial returned a result which did not include the specified metric(s) mse ...
Arman Asgharpoor
0 votes
1 answer
812 views

Ray Tune: How to optimize one metric but schedule (early stop) based on a different one?

I'd like to use Ray Tune to optimize for metric_slow, but, since that takes a long time before it is available, to use ASHA to early stop based on metric_fast_but_rough. I tried to do this by giving ...
SRobertJames · 9,139
0 votes
2 answers
272 views

How can I define the activation function as a hyperparameter in PyTorch through RAY.Tune?

This is the link to the main page of the topic I want. https://docs.ray.io/en/latest/tune/examples/tune-pytorch-cifar.html#tune-pytorch-cifar-ref But unfortunately, there is no good documentation to ...
Arash Sajjadi
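A common pattern for this, sketched with illustrative names: put a string key in the search space via tune.choice and map it to the corresponding torch.nn module inside the model.

```python
import torch.nn as nn
from ray import tune

# map search-space strings to activation modules
ACTIVATIONS = {"relu": nn.ReLU, "tanh": nn.Tanh, "gelu": nn.GELU}

class Net(nn.Module):
    def __init__(self, activation: str = "relu"):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(28 * 28, 128),
            ACTIVATIONS[activation](),   # instantiate the chosen activation
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.body(x)

# in the search space, the activation is just another hyperparameter
config = {
    "activation": tune.choice(["relu", "tanh", "gelu"]),
    "lr": tune.loguniform(1e-4, 1e-1),
}
# inside the trainable: model = Net(activation=config["activation"])
```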
1 vote
0 answers
377 views

Memory usage on node keeps increasing while training a model with Ray Tune

This is the first time I am using Ray Tune to look for the best hyperparameters for a DL model, and I am experiencing some problems related to memory usage. The memory usage on this node keeps ...
Benjamin Cretois
0 votes
1 answer
745 views

Python Ray Tune unable to stop trial or experiment

I am trying to make Ray Tune with wandb stop the experiment under certain conditions: stop the whole experiment if any trial raises an exception (so I can fix the code and resume); stop if my score gets -...
user670186 · 2,810
0 votes
1 answer
1k views

Hyperparameter tuning of DeepAREstimator from gluonTS with Ray

I want to create forecasting models using the DeepAREstimator from the gluonTS package. How can I use Ray for hyperparameter tuning? Here is sample code. !pip install --upgrade mxnet-cu101==1.6.0....
Fisseha Berhane
1 vote
0 answers
93 views

Which configuration leads to Invalid beta parameter: 1.1203716642316337 - should be in [0.0, 1.0) error?

I am running hyperparameter tuning using the Ray Tune integration (1.9.2) and the Hugging Face Transformers framework (4.15.0). This is the code that is responsible for the procedure (based on this example):...
Maxim Kirilov
3 votes
0 answers
438 views

Unable to pass multiple-value parameters in the config to my model when using Ray with PyTorch

I am new to PyTorch and Ray. I was trying to tune my lightning model's hyperparameters using Ray, but when I passed multiple value parameters in the config dictionary, I got an error like this: ...
Zahra Salarian
2 votes
0 answers
505 views

Ray[tune]: FileNotFoundError: [WinError 2] The system cannot find the file specified

I am trying to tune a list of parameters using Ray[Tune]. When I run it, I get the following error: FileNotFoundError: [WinError 2] The system cannot find the file specified I am using the bayesopt as ...
Samaresh Bera
0 votes
1 answer
657 views

Ray Tune Stuck with multiple runs

Hi, I am trying hyperparameter optimization with Ray Tune. Below is my code implementation. However, it gets stuck and I can't get the result back, even though there aren't any error messages. @ray.remote ...
Dongri · 1
2 votes
1 answer
625 views

When using Ray Tune, a value defined in the config returns a non-float value

I'm new to Ray Tune. I defined my Ray config as below: ray_config = { "estimator/dropout_rate": tune.uniform(0.0, 0.3), "estimator/d_model": tune.choice([64]), ...
Ashikandi
-1 votes
1 answer
587 views

Why is Ray Tune only using one worker?

Ray is starting only one worker, even though enough GPUs and CPUs are available to launch more workers. How can I increase the number of workers?
Tom Dörr · 1,029
1 vote
1 answer
1k views

Add More Metrics to Ray Tune Status Table (Python, PyTorch)

When running tune.run() on a set of configs to search, is it possible to add more metric columns (e.g. a, b, etc.) to the status table being printed out? tune.track.log(a=metric1, b=metric2) will ...
Nyxynyx · 63.5k
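For reference, a sketch of adding columns via a CLIReporter, assuming the metrics a and b are reported from the trainable (the trainable below is a placeholder):

```python
from ray import tune
from ray.tune import CLIReporter

def trainable(config):
    for step in range(5):
        # metrics only show up as columns if they are actually reported
        tune.report(a=config["x"] * step, b=step)

# metric_columns controls which reported metrics appear in the status table
reporter = CLIReporter(metric_columns=["a", "b", "training_iteration"])

tune.run(
    trainable,
    config={"x": tune.uniform(0, 1)},
    progress_reporter=reporter,
)
```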
5 votes
0 answers
1k views

How to organize Ray Tune Trainable class calculations in a k-fold CV setup?

It looks like the Ray Tune docs push you to write a Trainable class with a _train method that does incremental training and reports metrics as a dict. There is some persistence of state through the ...
safetyduck · 6,814
1 vote
0 answers
446 views

Ray Tune: combine Population Based Training scheduler with HyperOpt

Are Population Based Training (PBT) and HyperOpt search combinable? The AsyncHyperBandScheduler is used in the HyperOpt example of ray.tune. Here, config sets some parameters for the run() function ...
Alexander Vocaet
-1 votes
1 answer
151 views

Where does Ray.Tune create the model vs. implement the perturbed hyperparameters?

I am new to using ray.tune. I already have my network written in a modular format and now I am trying to incorporate ray.tune, but I do not know where to initialize the model (vs updating the ...
LaMaster90
0 votes
1 answer
394 views

Dataset enumeration (epoch and batchSize) when implementing Ray.Tune PBT hyperparameter optimization

This is my first time trying to use Ray.Tune for hyperparameter optimization. I am confused as to where in the Ray code I should initialize the dataset as well as where to put the for-loops for ...
LaMaster90
1 vote
1 answer
2k views

Ray Tune: How do schedulers and search algorithms interact?

It seems to me that the natural way to integrate hyperband with a bayesian optimization search is to have the search algorithm determine each bracket and have the hyperband scheduler run the bracket. ...
user2663116
1 vote
0 answers
2k views

Failing to run Ray Tune with TensorFlow and GPU

OS Platform and Distribution: Linux Ubuntu 16.04; Ray installed from (source or binary): binary; Ray version: 0.6.5; Python version: 3.6. I am trying to use Ray with TensorFlow following the tutorial (...
Boooooooooms
3 votes
1 answer
219 views

How to define SearchAlgorithm-agnostic, high-dimensional search space in Ray Tune?

I have two questions concerning Ray Tune. First, how can I define a hyperparameter search space independently of the particular SearchAlgorithm used? For instance, HyperOpt uses something like '...
Rylan Schaeffer
7 votes
2 answers
5k views

Purpose of 'num_samples' in Tune of the Ray package for hyperparameter optimization

I am trying to perform a hyperparameter optimization task for an LSTM (pure TensorFlow) with Tune. I followed their example for the HyperOpt algorithm. In the example, they have used the line below ...
Suleka_28 · 2,899
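To illustrate the semantics: num_samples is the number of times Tune samples a configuration from the search space, i.e. the number of trials it launches (with tune.grid_search entries it multiplies the grid instead). A minimal sketch with a placeholder objective:

```python
from ray import tune

def objective(config):
    tune.report(loss=(config["lr"] - 0.01) ** 2)

# num_samples=10 -> 10 trials, each with its own lr drawn from the distribution
tune.run(
    objective,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=10,
)
```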
3 votes
1 answer
4k views

ap_uniform_sampler() missing 1 required positional argument: 'high' in Ray Tune package for python

I am trying to use the Ray Tune package for hyperparameter tuning of an LSTM implemented in pure TensorFlow. I used the HyperBand scheduler and the HyperOptSearch algorithm for this, and I am also using ...
Suleka_28 · 2,899
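That error usually points at a HyperOpt space entry where hp.uniform was given only one bound; it needs a label plus both a low and a high value. A sketch of a well-formed space for HyperOptSearch, assuming a Ray 1.x-era import path and illustrative parameter names:

```python
from hyperopt import hp
from ray.tune.suggest.hyperopt import HyperOptSearch

# every hp.uniform needs a label plus BOTH bounds: hp.uniform(label, low, high)
space = {
    "lr": hp.uniform("lr", 1e-4, 1e-1),
    "dropout": hp.uniform("dropout", 0.0, 0.5),
}

# the search algorithm is then passed to tune.run(..., search_alg=algo)
algo = HyperOptSearch(space, metric="loss", mode="min")
```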
2 votes
1 answer
2k views

can't pickle _thread.RLock objects when running Tune of the Ray package for Python (hyperparameter tuning)

I am trying to do hyperparameter tuning with the Tune package of Ray. Shown below is my code: # Disable linter warnings to maintain consistency with tutorial. # pylint: disable=invalid-name # ...
Suleka_28 · 2,899