
All Questions

0 votes · 0 answers · 62 views

Issue with Data Preprocessing and Tensor Concatenation for Whisper Model Training

I am trying to train a Whisper model for Jeju dialect speech recognition. However, I am encountering several errors related to tensor concatenation during the data preprocessing phase. Below is the ...
asked by dw26
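Concatenation errors during preprocessing are almost always a shape mismatch: `torch.cat`/`torch.stack` require every example to share the same length along the non-concatenated dimensions. A framework-free sketch of the padding idea only; in a real Whisper pipeline the processor's feature extractor or a padding data collator would do this, and `pad_batch` is an illustrative name:

```python
def pad_batch(seqs, pad_value=0):
    """Pad variable-length sequences to a common length so they can be
    stacked into one rectangular batch without a size-mismatch error."""
    max_len = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]

# Three sequences of different lengths become one rectangular 3x3 batch.
batch = pad_batch([[1, 2, 3], [4], [5, 6]])
```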
1 vote · 0 answers · 654 views

Where is the Bottleneck for multiple requests using Whisper on Nvidia A100

I want to use Whisper-Large-v3 (Speech-to-Text) for a real-time application. However, I want to process several requests at the same time. My Whisper instance runs on an Nvidia A100 with 80GB VRAM. In ...
asked by leon
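For throughput on a single large GPU, the usual lever is batching concurrent requests into one forward pass instead of serving them serially, since sequential decoding rather than raw FLOPs tends to be the bottleneck. A toy sketch of the grouping step only, with no model code; `max_batch` is an assumed tuning knob:

```python
def batch_requests(requests, max_batch=8):
    """Group pending requests into micro-batches; each sub-list would be
    padded and run through the model in a single forward pass."""
    return [requests[i:i + max_batch] for i in range(0, len(requests), max_batch)]

# Ten queued requests become three GPU calls instead of ten.
batches = batch_requests(list(range(10)), max_batch=4)
```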
2 votes · 1 answer · 4k views

RuntimeError: Library cublas64_12.dll is not found or cannot be loaded. While using WhisperX diarization

I was trying to use whisperx to do speaker diarization. I did it successfully on Google Colab, but I'm encountering this error while trying to transcribe the audio file. Traceback (most recent call last)...
asked by St.Destiny
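Errors of the form "Library cublas64_12.dll is not found" mean the dynamic loader cannot see the CUDA 12 cuBLAS runtime, typically because the CUDA toolkit's bin directory is not on PATH, or because only CUDA 11 libraries are installed. A small stdlib-only diagnostic sketch that assumes nothing about whisperx itself:

```python
import ctypes.util

def check_cuda_libs(names):
    """Return {library_name: resolved_path_or_None} so you can see which
    CUDA runtime libraries the dynamic loader can actually find."""
    return {name: ctypes.util.find_library(name) for name in names}

# Names as the loader searches for them (no "lib" prefix or extension);
# cublas64_12 is the Windows-style name, cublas the portable one.
report = check_cuda_libs(["cublas64_12", "cublas"])
for name, path in report.items():
    print(name, "->", path or "NOT FOUND (add its folder to PATH / LD_LIBRARY_PATH)")
```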
0 votes · 0 answers · 257 views

How to optimise Hyperparameters for Whisper finetuning?

I am currently working on a project where I would like to fine-tune a Whisper model via the HuggingFace Transformers library. So far, the fine-tuning has worked well; however, I have become stuck on ...
asked by nsohko
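If no dedicated tuner is available, a plain grid search over a handful of settings (learning rate, warmup steps) is a reasonable baseline before reaching for Optuna or similar. A self-contained sketch with a stand-in objective; `fake_objective` is hypothetical and would be replaced by a real fine-tuning run that returns validation WER:

```python
from itertools import product

def grid_search(objective, grid):
    """Evaluate every combination in `grid` and return the
    lowest-loss configuration along with its loss."""
    best_cfg, best_loss = None, float("inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        loss = objective(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Hypothetical stand-in for a fine-tuning run; in practice this would
# train briefly with `cfg` and return the validation word error rate.
def fake_objective(cfg):
    return abs(cfg["learning_rate"] - 1e-5) * 1e5 + cfg["warmup_steps"] / 1000

grid = {
    "learning_rate": [1e-4, 1e-5, 1e-6],
    "warmup_steps": [0, 500],
}
best, loss = grid_search(fake_objective, grid)
```

Because each evaluation is a full fine-tuning run, keep the grid small and use a short training budget per trial before re-running the best candidates longer.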
1 vote · 0 answers · 156 views

Convert a whisper-small.pt model to Hugging Face format through convert_openai_to_hf.py

I have a problem: I want to convert a file with the extension .pt to CTranslate2. To do that, you first need to convert it to Hugging Face format and then execute a command line that will convert the file ...
asked by sophie
1 vote · 2 answers · 5k views

RuntimeError: Library libcublas.so.11 is not found or cannot be loaded

I am working on an LLM project on google colab using V100 GPU, High-RAM mode, and these are my dependencies: git+https://github.com/pyannote/pyannote-audio git+https://github.com/huggingface/...
asked by P0sitive
0 votes · 1 answer · 1k views

Load and unload model on celery worker

I currently have a system that initiates tasks on a Whisper AI model using Celery. However, the existing setup loads the model inside each task, which is suboptimal due to the repeated ...
asked by UnPapeur
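The standard fix is to load the model once per worker process and reuse it across tasks, for example with a lazily initialized module-level singleton. A sketch with a placeholder loader; the dict stands in for a real `whisper.load_model(...)` call:

```python
import functools

LOAD_CALLS = 0  # instrumentation so the sketch can show load-once behavior

@functools.lru_cache(maxsize=1)
def get_model():
    """Lazily load the model once per process; placeholder for a real
    whisper.load_model(...) call."""
    global LOAD_CALLS
    LOAD_CALLS += 1
    return {"name": "whisper-base"}

def transcribe_task(audio_path):
    model = get_model()  # cached after the first call in this process
    return f"transcribed {audio_path} with {model['name']}"

# Two task invocations in the same worker reuse the cached model.
transcribe_task("a.wav")
transcribe_task("b.wav")
```

In Celery, the same pattern can be pre-warmed at worker startup with the `worker_process_init` signal, so the first task does not pay the loading cost.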
0 votes · 1 answer · 3k views

Converting a Hugging Face model into a complete .pt file

I have downloaded a Hugging Face model, and it comes with various files, including pytorch_model.bin, config.json, and others. My goal is to integrate this model into my project, and I would like to ...
asked by Dan Mathews Robin
2 votes · 0 answers · 434 views

CUDA/GPU not being utilized

I am trying to use pyannote.audio for diarization and then use Whisper for transcription. I have two machines: one with RTX3050 Laptop GPU, another is my desktop with 2060 Super. When I am running my ...
asked by mirzaahmergull
1 vote · 0 answers · 361 views

OpenAI Whisper terminates immediately after running

I have a fresh Arch Linux box with fresh installs of both PyTorch and Whisper. When I try to use Whisper on an audio file I get an error: $ whisper pytorch_env/pistol.mp3 --model base fish: Job 1, '...
asked by Ivailo Hristov
-1 votes · 1 answer · 160 views

Whisper Zoo model on DJL with GPU: Getting error "c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::operator->() const+0xc"

I'm trying to run the Whisper zoo model from the DJL examples on GPU. On the first run, I got an error that two devices were found, CUDA and CPU. As I understand it, this error occurs due to just the ...
asked by Sreekumar KJ
1 vote · 0 answers · 376 views

I could not install PyTorch on my Raspberry Pi. Is there a PyTorch version which can be installed on a Raspberry Pi 3B+?

I could not install PyTorch on my Raspberry Pi. Is there a PyTorch version which can be installed on a Raspberry Pi 3B+? I tried different versions and none worked for me. I tried the official sites, other ...
asked by Antexo
0 votes · 1 answer · 5k views

Unable to utilize GPU for whisper AI. Only using CPU

I am using Whisper AI from OpenAI to transcribe English and French audio. Git link here. I followed their instructions to install Whisper AI. The instance has a GPU, but torch.cuda.is_available() keeps ...
asked by Sharhad
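A common cause of `torch.cuda.is_available()` returning False on a GPU instance is having the CPU-only PyTorch wheel installed, not a missing GPU. A small stdlib-only check that separates "driver not visible" from "wrong torch build" (it deliberately does not import torch):

```python
import shutil
import subprocess

def gpu_visible():
    """Rough check: is the NVIDIA driver installed and responding?
    torch.cuda.is_available() can still be False when this returns True,
    which points at a CPU-only torch wheel rather than the hardware."""
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False
    try:
        return subprocess.run([smi], capture_output=True).returncode == 0
    except OSError:
        return False

print("NVIDIA driver visible:", gpu_visible())
```

If the driver is visible but CUDA is still unavailable, reinstalling torch from the CUDA-enabled index on pytorch.org usually resolves it.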
1 vote · 1 answer · 2k views

I'm encountering a CUDA out of memory error while trying to fine-tune the Whisper model in Arabic using PyTorch

The error message is as follows: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 23.65 GiB total capacity; 21.91 GiB already allocated; 25.56 MiB free; 22.62 GiB reserved in total by PyTorch) ...
asked by marouane elbouazzaoui
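The usual first response to fine-tuning OOM is to shrink the per-step batch and compensate with gradient accumulation, which keeps the effective batch size (and thus the optimization behavior) roughly unchanged while cutting peak activation memory. The arithmetic, as a sketch; in the HuggingFace Trainer these map to `per_device_train_batch_size` and `gradient_accumulation_steps`:

```python
def effective_batch(per_device_batch, accumulation_steps, n_gpus=1):
    """Effective batch size when trading per-step memory for accumulation."""
    return per_device_batch * accumulation_steps * n_gpus

# 16x1, 4x4 and 1x16 all yield the same effective batch of 16,
# but need progressively less GPU memory per optimizer step.
assert effective_batch(16, 1) == effective_batch(4, 4) == effective_batch(1, 16) == 16
```

Mixed-precision training and gradient checkpointing are the next levers if shrinking the batch alone is not enough.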
5 votes · 0 answers · 1k views

How to load a pytorch model directly to the GPU

I'm trying to load the Whisper large-v2 model onto a GPU, but in order to do that, it seems that PyTorch unpickles the whole model into CPU RAM, using more than 10 GB of memory, and then loads it ...
asked by Miguel Pinheiro
0 votes · 0 answers · 1k views

Parallel inference on a single model in CUDA causes worker processes to terminate

I'm trying to start openai\whisper inference on a single model in CUDA with multiprocessing.Pool. With 6 workers inference works fine, except for some CUDA warnings on exiting worker processes. With 7 and ...
asked by gorb
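Workers dying as the pool grows is commonly a CUDA-context problem: forked children inherit the parent's CUDA state, which CUDA does not support. The usual pattern is the "spawn" start method plus loading one model per process in the pool initializer. A sketch with a placeholder model object and no real CUDA or whisper calls:

```python
from multiprocessing import get_context

_MODEL = None  # one model object per worker process

def init_worker():
    # Hypothetical stand-in for whisper.load_model(...): loading inside
    # the initializer gives each process its own model and CUDA context
    # instead of inheriting one from the parent.
    global _MODEL
    _MODEL = {"name": "whisper-base"}

def infer(item):
    return f"{_MODEL['name']}:{item}"

def run_pool(items, workers=2):
    # "spawn" avoids forking a process that already holds a CUDA context,
    # a common cause of workers dying once the pool grows.
    ctx = get_context("spawn")
    with ctx.Pool(workers, initializer=init_worker) as pool:
        return pool.map(infer, items)

# Usage (keep under an `if __name__ == "__main__":` guard with spawn):
#   results = run_pool(["a.wav", "b.wav"])
```

Note that one model per process multiplies VRAM usage by the worker count, so the pool size is ultimately bounded by GPU memory, not CPU cores.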