0 votes · 0 answers · 4 views
PyTorch PruningCallback not pruning
Expected behavior: I have implemented a simple example where I want to do hyperparameter tuning using Optuna, and each trial is spawned over two GPUs (as my original data is huge). I am not looking for parallelizing ...
DEEPAK KUMAR POKKALLA

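A minimal sketch of the report/prune loop that Optuna's pruning integrations drive under the hood; train_one_epoch is a hypothetical helper, and with DDP (mp.spawn) the reports must come from the single process that owns the trial object:

```python
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    val_loss = float("inf")
    for epoch in range(10):
        val_loss = train_one_epoch(lr, epoch)  # hypothetical training step
        trial.report(val_loss, step=epoch)     # feed the pruner each epoch
        if trial.should_prune():               # pruner vetoes weak trials
            raise optuna.TrialPruned()
    return val_loss

study = optuna.create_study(direction="minimize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```
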
0 votes · 0 answers · 12 views
How to extract specific metrics (mAP) from YOLOv7's train function?
I am using the train function from the file https://github.com/WongKinYiu/yolov7/blob/main/train_aux.py to train a YOLOv7 model on a custom dataset. I would like to optimize the hyperparameters (...
Skywalker · 652

0 votes · 2 answers · 48 views
Using Optuna for CatBoost with batches: got NaN on second trial
I am trying to tune CatBoost's hyperparameters using Optuna. I need to train my CatBoost model using batches, because the training data is too big. Here is my code: def expand_embeddings(df, embedding_col=...
Angelika · 216

0 votes · 0 answers · 29 views
How to use Keras with Optuna tuning and an sklearn Pipeline
I am developing a model using Keras and use Optuna for the hyperparameter tuning. I need to use the K-fold method for the development. However, I cannot successfully run it. Please help. Here is the code: ...
HappyFish

-1 votes · 0 answers · 25 views
Hydra + Optuna sweeper along with PyTorch + DDP. Any tutorials?
I am trying to use Hydra + Optuna Sweeper with my PyTorch + DDP (mp.spawn) setup. I have the PyTorch + DDP part running properly. However, when I try to run the Hydra sweeper, it's not working - as in,...
DEEPAK KUMAR POKKALLA

0 votes · 0 answers · 14 views
Which file indicates the completion of YOLOv7 model training? [Assumption: I am unable to view the training logs]
I am using Optuna to optimize the hyperparameters of a YOLOv7 model. Because of the way Optuna operates, I cannot examine the output after the training is complete. Therefore, I would like to know if ...
Skywalker · 652

0 votes · 0 answers · 24 views
Can we use Optuna to optimize YOLOv7's hyperparameters on a custom dataset?
I would like to use Optuna to optimize YOLOv7's hyperparameters like the learning rate, momentum, weight_decay, iou_t, etc. Is this possible? I tried writing an objective function that would call the ...
Skywalker · 652

0 votes · 2 answers · 58 views
XGBoost Early Stopping Rounds
My code below keeps blowing up and I can't work out what is going on: import optuna import xgboost as xgb from sklearn.model_selection import train_test_split from sklearn.metrics import ...
CraigBreezey

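A hedged sketch of the usual pattern on synthetic regression data: pass early_stopping_rounds to xgb.train and return the booster's best validation score, so each Optuna trial stops boosting once the validation metric stalls:

```python
import optuna
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
dtrain, dvalid = xgb.DMatrix(X_tr, label=y_tr), xgb.DMatrix(X_va, label=y_va)

def objective(trial):
    params = {
        "objective": "reg:squarederror",
        "eta": trial.suggest_float("eta", 1e-3, 0.3, log=True),
        "max_depth": trial.suggest_int("max_depth", 2, 10),
    }
    booster = xgb.train(params, dtrain, num_boost_round=1000,
                        evals=[(dvalid, "validation")],
                        early_stopping_rounds=50, verbose_eval=False)
    return booster.best_score  # validation RMSE at the best iteration

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```
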
0 votes · 0 answers · 26 views
Python: how to get 2D points on the Pareto front
Given a set of points calculated after optimization with Optuna, where a minimization-maximization problem was solved, I would like to plot the points lying on the Pareto front. How do I estimate them ...
Yustina Ivanova

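For a multi-objective study, Optuna already exposes the Pareto-optimal trials via study.best_trials; a minimal sketch with a toy two-objective problem:

```python
import optuna

def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 5)
    return x**2 + y, x - y**2  # minimize the first, maximize the second

study = optuna.create_study(directions=["minimize", "maximize"])
study.optimize(objective, n_trials=100)

# Pareto-optimal trials and their 2D objective values
front = [(t.values[0], t.values[1]) for t in study.best_trials]
print(front)
optuna.visualization.plot_pareto_front(study).show()  # requires plotly
```
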
0 votes · 1 answer · 56 views
CatBoost crashes when launching Optuna
I want to tune a CatBoost regressor using Optuna and a local GPU. The dataset is not very large: the training sample contains about 120k records and only 16 features (including categorical ones). I run ...
Anastasia Volkova

0 votes · 0 answers · 19 views
How are trials allocated to the different brackets in Hyperband in Optuna?
In the original Hyperband implementation, defining the minimum and maximum resources automatically determined the number of configurations to examine. In Optuna, that is not the case. The number of ...
Sole Galli · 1,042

0 votes · 0 answers · 64 views
How to clear Nvidia GPU RAM after OOM in an Optuna study using DDP
I have 4 GPUs on my server and I'm using Torch DDP to distribute the training across my GPUs. I want my Optuna trial to gracefully fail and move on to the next trial without crashing the study when ...
redress · 1,429

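A common pattern is to catch the OOM inside the objective, release cached memory, and end the trial without killing the study. A minimal single-process sketch, assuming PyTorch 1.13+ (where torch.cuda.OutOfMemoryError exists); train is a hypothetical training routine, and with DDP each spawned worker also has to exit before its memory is truly returned:

```python
import gc

import optuna
import torch

def objective(trial):
    try:
        return train(trial)  # hypothetical: builds and trains the model
    except torch.cuda.OutOfMemoryError:
        gc.collect()                # drop Python references first
        torch.cuda.empty_cache()    # then release cached CUDA blocks
        raise optuna.TrialPruned()  # this trial ends, the study keeps going
```
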
4 votes · 0 answers · 71 views
Suppress warning messages when LightGBM is used in Optuna?
I want to suppress the warnings; however, I am unable to do so. The issue happens only when I am using a custom objective function instead of a regular one. I have tried multiple things to ...
pppp_prs · 146

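A hedged sketch of the usual three silencing knobs; whether they catch the messages triggered by a custom objective depends on whether LightGBM emits them through its logger or straight to stderr:

```python
import warnings

import optuna

optuna.logging.set_verbosity(optuna.logging.WARNING)     # quiet Optuna's INFO lines
warnings.filterwarnings("ignore", category=UserWarning)  # Python-level warnings

# Passed into lgb.train / LGBMRegressor: silences most LightGBM console output
params = {"verbosity": -1}
```
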
0 votes · 0 answers · 53 views
Optuna XGBoost not using all of Mac's CPU
I am running Optuna with MySQL to attempt to get parallelization and use more of my Mac's CPU. When I would run GridSearchCV, for example, my user CPU usage would go up to 90% and the fans would kick in. ...
Tamzid Razzaque

0 votes · 0 answers · 108 views
How to get a study's complete list of hyperparameters and their importance in Optuna?
I'm using Optuna to find the best values for training my machine learning model. Using the Optuna dashboard I can see a chart showing a list of my hyperparameters and their importance. It's just the list ...
Mehran · 16.8k

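The dashboard chart is backed by a public API, so the full mapping can be pulled programmatically; a minimal sketch (study_name and the storage URL below are assumptions):

```python
import optuna

study = optuna.load_study(study_name="my_study",
                          storage="sqlite:///example.db")

# Returns a dict over all tuned hyperparameters, ordered by importance,
# not just the few the dashboard chart happens to show
for name, score in optuna.importance.get_param_importances(study).items():
    print(f"{name}: {score:.4f}")
```
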
0 votes · 0 answers · 184 views
Parameter tuning with Slurm, Optuna, PyTorch Lightning, and KFold
With the following toy script, I am trying to tune the learning-rate hyperparameter of a perceptron with Optuna and 5-fold cross-validation. My cluster has multiple GPUs on ...
zhihao_li · 183

0 votes · 0 answers · 72 views
TypeError: 'numpy.bool_' object is not iterable when working with SetFit and Optuna
I am trying to train a few-shot text classifier using SetFit and Optuna. When I run my code, I get the error TypeError: 'numpy.bool_' object is not iterable. I don't understand where the error comes ...
kittielover

0 votes · 0 answers · 165 views
Optuna HPO & Lightning Multi-GPU Training using DDP on SLURM - ValueError: World Size does not Match
I'm trying to utilize all the computational resources on a SLURM cluster to speed up my hyperparameter optimization using Optuna and PyTorch Lightning. My code works fine solely with the PyTorch ...
Zenia · 13

0 votes · 1 answer · 815 views
Running multiple Optuna trials on each GPU to fully utilize available VRAM
I have an Optuna script for hyperparameter tuning that runs on four GPUs using the following command: CUDA_VISIBLE_DEVICES=0 python 'optuna script.py' & CUDA_VISIBLE_DEVICES=1 python 'optuna ...
Murtaza · 11

0 votes · 1 answer · 285 views
Error while launching Optuna Dashboard in Python
I am trying to launch the Optuna dashboard (optuna-dashboard sqlite:///db.sqlite3), but I am receiving this error: optuna-dashboard : The term 'optuna-dashboard' is not recognized as the name of a cmdlet, ...
DrGenius · 957

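That PowerShell message usually means the console script is not on PATH. As a hedged alternative, the dashboard can also be started from Python, assuming the optuna-dashboard package is installed in the active environment:

```python
from optuna_dashboard import run_server

# Serves the dashboard for the given storage; open the printed URL in a browser
run_server("sqlite:///db.sqlite3")
```
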
0 votes · 0 answers · 49 views
ValueError: Value of min_samples_split must be a optuna distribution
start = time.time() model = DecisionTreeClassifier() parameters = { 'min_samples_split': range(2, 6), 'min_samples_leaf': range(1, 6), 'max_depth': range(2, 6) } oscv = OptunaSearchCV( model, ...
user519099

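The error is raised because OptunaSearchCV expects Optuna distribution objects rather than Python range objects; a hedged sketch of the fix, assuming Optuna 3.x (range(2, 6) covers 2..5, hence the inclusive bounds below):

```python
from optuna.distributions import IntDistribution
from optuna.integration import OptunaSearchCV
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
param_distributions = {
    "min_samples_split": IntDistribution(2, 5),
    "min_samples_leaf": IntDistribution(1, 5),
    "max_depth": IntDistribution(2, 5),
}
oscv = OptunaSearchCV(DecisionTreeClassifier(), param_distributions, n_trials=20)
oscv.fit(X, y)
print(oscv.best_params_)
```
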
0 votes · 1 answer · 164 views
Problem with MemoryUsageError during an Optuna study for a 3D CNN
I am optimizing some parameters of a 3D CNN for semantic segmentation using Optuna, and I am getting problems with memory usage while running some trials. I have run others and did not get any ...
user25069062

0 votes · 0 answers · 340 views
Cross-validation on XGBoost using a callback of Optuna
I am trying to use Optuna and cross_validate from sklearn, also using the callback function from Optuna. My code is below. It seems that the callback does not work here and I do not know why ... I don'...
aurfa · 21

0 votes · 0 answers · 99 views
Using ClearML to explore learning rates in PyTorch: the tuning (experiments list) task is now DRAFT and data is not showing up on the loss monitoring screen
I am using ClearML to explore learning rates in PyTorch. The tuning task is now DRAFT and data is not showing up on the loss monitoring screen. Step 1 (this has been a success): import torch import torchvision ...
liveman

0 votes · 0 answers · 250 views
Is Optuna's XGBoostPruningCallback working?
I'm having trouble getting the XGBoostPruningCallback to work. Every attempt leads to the error 'callback must be an instance of TrainingCallback.' Someone in a GitHub thread claims this is a ...
Rafael Penido

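For reference, a hedged sketch of the intended usage on synthetic data. This particular error is typically a version mismatch: only reasonably recent Optuna releases make the callback a subclass of XGBoost's TrainingCallback, and the observation key must match the eval-set name plus metric:

```python
import numpy as np
import optuna
import xgboost as xgb
from optuna.integration import XGBoostPruningCallback

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 0] > 0.5).astype(int)
dtrain = xgb.DMatrix(X[:150], label=y[:150])
dvalid = xgb.DMatrix(X[150:], label=y[150:])

def objective(trial):
    params = {"objective": "binary:logistic",
              "eta": trial.suggest_float("eta", 1e-3, 0.3, log=True)}
    results = {}
    xgb.train(params, dtrain, num_boost_round=100,
              evals=[(dvalid, "validation")], evals_result=results,
              callbacks=[XGBoostPruningCallback(trial, "validation-logloss")],
              verbose_eval=False)
    return results["validation"]["logloss"][-1]

study = optuna.create_study(direction="minimize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```
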
0 votes · 1 answer · 90 views
How to solve this InternalError: Graph execution error while optimizing hyperparameters in Optuna?
I have been optimizing the hyperparameters of several TensorFlow neural network models with Optuna in a Jupyter Notebook (Python 3.x) on WSL, with hundreds of trials and no prior problems, until I thought I ...
Kartik · 141

0 votes · 1 answer · 98 views
Optuna: inconsistent parameters and distributions
I am running real-world tests. I want Optuna to propose a new set of parameters for my next test. I am running the code below in Google Colab: import optuna import csv # Load previous trials ...
JBo · 63

0 votes · 0 answers · 54 views
Retrieve optimal hyperparameters from Optuna in Python
I'm running Optuna on a large collection of hyperparameters for optimizing a neural net. After completing a study, is it possible to get an estimate of what the optimal hyperparameter settings would ...
NotProbable

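The best sampled point is always available on the study object; note that Optuna reports the best trial it actually evaluated rather than extrapolating an estimate beyond the tried points. A minimal sketch:

```python
import optuna

study = optuna.create_study(direction="minimize")
study.optimize(lambda t: (t.suggest_float("x", -10, 10) - 2) ** 2, n_trials=50)

print(study.best_params)        # parameter values of the best trial
print(study.best_value)         # objective value at that point
print(study.best_trial.number)  # which trial achieved it
```
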
1 vote · 1 answer · 108 views
Optuna Hyperband algorithm not following the expected model training scheme
I have observed an issue while using the Hyperband algorithm in Optuna. According to the Hyperband algorithm, when min_resources = 5, max_resources = 20, and reduction_factor = 2, the search should ...
Tnb Marketplace

0 votes · 0 answers · 28 views
How does Optuna narrow the search scope step by step using the Bayesian method?
How does Optuna narrow the search scope step by step using the Bayesian method? Can you give a simple example? My first instinct is that it works like binary search: searching mid=(l+r)/2 after searching ...
BlueHeart0621

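For context: Optuna's default TPESampler is not a bisection scheme. After some random startup trials, it splits the observed trials into a good group and a bad group, fits a density to each, and proposes the candidate that maximizes the ratio of good to bad likelihood, so promising regions get sampled more and more densely. A minimal sketch of the knobs involved:

```python
import optuna

sampler = optuna.samplers.TPESampler(
    n_startup_trials=10,  # pure random exploration before TPE takes over
    seed=42,
)
study = optuna.create_study(sampler=sampler, direction="minimize")
study.optimize(lambda t: (t.suggest_float("x", -10, 10) - 2) ** 2, n_trials=50)
print(study.best_params)  # samples concentrate around x = 2 as trials accumulate
```
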
0 votes · 0 answers · 79 views
Optuna parameter optimisation with MPI
I have some machine learning code which uses an SVM (from scikit-learn) with a pre-computed kernel that I want to optimise using Optuna, so, simplified, the code looks a bit like this: def objective(...
Georgia

0 votes · 1 answer · 506 views
Reproducible results from Optuna when n_jobs=-1
I am currently working with Optuna and I have noticed that when I use n_jobs = -1, the TPESampler does not sample the exact same parameters across different studies, even with the seed set inside Optuna....
aurfa · 21

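This is expected: with n_jobs=-1, trials run concurrently and reach the sampler in a nondeterministic order, so a fixed seed alone cannot make runs identical. A sketch of the reproducible configuration, at the cost of serial execution:

```python
import optuna

sampler = optuna.samplers.TPESampler(seed=42)
study = optuna.create_study(sampler=sampler, direction="minimize")
# n_jobs=1 (the default) keeps the order of trials, and hence the
# sampler's view of past results, deterministic across runs
study.optimize(lambda t: t.suggest_float("x", 0, 1) ** 2,
               n_trials=100, n_jobs=1)
```
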
0 votes · 0 answers · 213 views
How to prune random forest regression in Optuna?
I am working on a machine learning model and trying to tune hyperparameters with Optuna. I want to try pruning, but I don't know how to implement this feature. I am using a random forest regressor and ...
david · 55

2 votes · 0 answers · 246 views
Optuna in-memory parallelization
I am performing hyperparameter optimization with Optuna (from within rl-zoo) and have some questions about parallelization. In the docs, it is recommended to use process-based (i.e., distributed) ...
mavex857 · 131

1 vote · 1 answer · 501 views
YOLOv8: optimising for mAP with confidence and IoU in prediction
I'm trying to figure out what the best conf and iou are for the model.pred. from ultralytics import YOLO import pandas as pd import numpy as np df = pd.DataFrame() # Load a model for i in range(1,105):...
HarriS · 846

0 votes · 1 answer · 435 views
Argo workflow's template variable for a step's IP is not resolving
I am building an Argo workflow to execute machine learning hyperparameter optimisation with Optuna, based on this original workflow, which I found while reading this Medium post. The issue is that this ...
Durand · 89

1 vote · 1 answer · 281 views
Optuna pruned trial for a random forest classifier
I am currently working with the Optuna library and I have seen that there is a feature which allows pruning unpromising trials. It seems that this feature can only be used with incremental learning ...
aurfa · 21

0 votes · 0 answers · 283 views
How to implement an Optuna pruner in PyTorch Lightning?
I am trying to carry out hyperparameter optimization of a TFT model from the PyTorch Forecasting library. For this I am using PyTorch Lightning for training and Optuna for hyperparameter optimization. ...
Priyanka Dani

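The integration callback reports the monitored metric to the trial each epoch and raises TrialPruned when the pruner says to stop; a hedged sketch, assuming the optuna-integration package is available and the model logs "val_loss" (build_model and dm are hypothetical placeholders for a LightningModule factory and a LightningDataModule):

```python
import optuna
import pytorch_lightning as pl
from optuna.integration import PyTorchLightningPruningCallback

def objective(trial):
    model = build_model(trial)  # hypothetical: builds a LightningModule
    trainer = pl.Trainer(
        max_epochs=20,
        callbacks=[PyTorchLightningPruningCallback(trial, monitor="val_loss")],
    )
    trainer.fit(model, datamodule=dm)  # dm: hypothetical LightningDataModule
    return trainer.callback_metrics["val_loss"].item()

study = optuna.create_study(direction="minimize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=30)
```
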
0 votes · 0 answers · 79 views
XgbRegressor `base_score` parameter - how to choose a range for tuning?
For multi-class classification, I guess the recommended value is 1/N (where N is the number of classes). For regression, I read somewhere that the mean of the target could be a better ...
soumeng78 · 850

0 votes · 1 answer · 188 views
Optuna parameter tuning for Tweedie - "Input contains infinity or a value too large for dtype('float32')" error
I am trying to tune an XGBRegressor model and I am getting the error below only when I use the parameter-tuning flow: Input contains infinity or a value too large for dtype('float32'). I do not get ...
soumeng78 · 850

0 votes · 1 answer · 458 views
PyTorch model runs fine stand-alone but throws a runtime error when run with Optuna
I am trying to tune the hyperparameters of my PyTorch model using Optuna, but every time I run the optimizer it gives the following error: [W 2024-02-05 17:19:26,007] Trial 2 failed with parameters: {'...
akash bais

0 votes · 0 answers · 69 views
Checkpoints in PyCaret tune_model
Hi, I am using the tune_model method with search_library=optuna. I wanted to ask if there is a way of adding checkpoints to the tune_model method.
Ron Fisher

0 votes · 0 answers · 65 views
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed when using multi-column y input
The input train_data_dict is a nested dictionary with a multi-column y-variable by design. All the columns in y are independent variables and should not be flattened. I'm using Optuna to find the ...
Anon · 1,545

0 votes · 0 answers · 46 views
Issues with DataLoader reinstantiation and resource cleanup in Optuna trials
Description: I am using Optuna for hyperparameter optimization in my project. However, I've encountered a problem where DataLoader instances seem to accumulate over multiple trials. Despite creating a ...
fdsfsd · 1

0 votes · 0 answers · 65 views
Displaying and saving the best trial during a hyperparameter search with Optuna in a parallel GPU environment
Issue: I am conducting hyperparameter optimization using Optuna integrated into the PyTorch ImageNet example (https://github.com/pytorch/examples/blob/main/imagenet/main.py), while parallelizing across ...
fdsfsd · 1

0 votes · 0 answers · 52 views
Stale cached data when optimising using Optuna
I am running a backtest using Optuna (trial once only); however, I get some repetitive results, while if I just run the backtest by itself, the result is fine. Has anyone had a similar issue before ( it'...
user2273204

0 votes · 0 answers · 74 views
Actual labels for the Optuna hyperparameter importance plot
I am using Optuna for HPO in my PyTorch program to train a NN. After training, I generate the contour and parameter importance visualizations; however, when Optuna generates this plot, only ...
user8000136

1 vote · 1 answer · 376 views
How can I retry FAIL trials in Optuna in a second run?
I am doing grid search with Optuna, but FAIL trials are not repeated in a second run. Instead, already COMPLETE trials are uselessly repeated. Here I describe the two problems separately: when a trial ...
Flavio · 131

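One hedged workaround is to re-queue the parameter sets of the failed trials before re-running; the study name, storage URL, and objective below are assumptions standing in for the real ones:

```python
import optuna
from optuna.trial import TrialState

study = optuna.load_study(study_name="my_study",
                          storage="sqlite:///example.db")

# Collect the failed trials and ask the sampler to try their params again
failed = study.get_trials(deepcopy=False, states=(TrialState.FAIL,))
for t in failed:
    study.enqueue_trial(t.params)

study.optimize(objective, n_trials=len(failed))  # objective: your original function
```
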
0 votes · 1 answer · 758 views
CUDA out of memory when using Optuna
Half a month ago, I could use Optuna without a problem to run a 48-hour study with around 150+ trials. Yesterday I tried Optuna again on the same model, same dataset, same batch size, and same device (A100 ...
Tianjian Qin

1 vote · 1 answer · 466 views
Getting an AttributeError when running an Optuna study
I am trying to run optimization using Optuna: study = optuna.create_study(direction='minimize', sampler=optuna.samplers.GridSampler(search_space)) study.optimize(objective, n_trials=20) For which I ...
Karthik S · 11.5k

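For reference, GridSampler takes the search space as a dict of value lists, and an AttributeError at this line is often just an outdated Optuna install that predates the sampler; a minimal self-contained sketch:

```python
import optuna

search_space = {"x": [1, 2, 3], "y": [0.1, 0.2]}

def objective(trial):
    x = trial.suggest_int("x", 1, 3)        # values come from the grid
    y = trial.suggest_float("y", 0.1, 0.2)
    return x + y

study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.GridSampler(search_space),
)
study.optimize(objective, n_trials=6)  # one trial per grid combination
```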
