All Questions
Tagged with openai-whisper fine-tuning
7 questions
0 votes · 0 answers · 98 views
Whisper fine-tuning on multiple GPUs
I am fine-tuning the Whisper model, but it runs on only one GPU while the other 3 GPUs stay idle.
The problem is that it is not utilizing the other 3 GPUs.
I want to fine-tune the Whisper model ...
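A minimal sketch of one common fix, assuming the training script uses Hugging Face's Seq2SeqTrainer: launch it with torchrun (or accelerate launch) so each GPU gets a DistributedDataParallel replica. The script name, output path, and hyperparameters below are placeholders.

# Launch on all 4 GPUs (shell command; finetune_whisper.py is a placeholder script name):
#   torchrun --nproc_per_node=4 finetune_whisper.py
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned",   # placeholder output path
    per_device_train_batch_size=8,      # per-GPU batch size, not the total
    fp16=True,
    ddp_find_unused_parameters=False,   # often needed for Whisper under DDP
)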
0 votes · 0 answers · 30 views
TFWhisperForConditionalGeneration model.generate() returns repetitions of first word in sequence after finetuning
I fine-tuned TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") on the German version of mozilla-foundation/common_voice_11_0. The training process looks fine (...
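One possible culprit (an assumption, since the excerpt is cut off) is calling generate() without forcing the language/task prompt tokens, which can make the decoder loop on its first prediction. A minimal sketch of generation with forced decoder ids; the checkpoint path and audio are placeholders.

import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("path/to/finetuned")  # placeholder checkpoint

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")

# Force the German transcription prompt so the decoder starts from the right tokens.
forced_ids = processor.get_decoder_prompt_ids(language="german", task="transcribe")
predicted_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))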
1 vote · 0 answers · 121 views
Fine-tuning Whisper for the translation task "speech in custom dialect to translated text in another custom language"
I recently fine-tuned the Whisper-Tiny model on my custom speech dataset for transcription tasks, and it worked well. However, when I tried to fine-tune the model for a translation task using the same ...
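A minimal sketch of the usual setup difference, assuming the same Hugging Face WhisperProcessor workflow used for the transcription fine-tune: switch the task to "translate" so the prompt tokens and labels match. Note that Whisper's built-in translate task targets English, so an arbitrary target language must be learned entirely from your labels. The checkpoint and source language below are placeholders.

from transformers import WhisperProcessor, WhisperForConditionalGeneration

# task="translate" swaps the <|transcribe|> prompt token for <|translate|>;
# "hindi" is a placeholder source language.
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-tiny", language="hindi", task="translate"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Training labels are tokenized with the same task setting:
labels = processor.tokenizer("translated target sentence").input_ids

# At inference, force the same language/task prefix:
forced_ids = processor.get_decoder_prompt_ids(language="hindi", task="translate")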
0 votes · 0 answers · 277 views
How can I use the encoder part of the Whisper model and send the encoder's output to a classification head?
I want to use whisper for speech emotion recognition, and since whisper is an encoder-decoder architecture model, I only want to leverage the encoder part and add a classification head on top of it to ...
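A minimal sketch of one way to do this, mean-pooling the encoder's hidden states into a linear head; the checkpoint, the number of emotion labels, and the pooling choice are assumptions. Recent transformers releases also ship a WhisperForAudioClassification class built on the same idea.

import numpy as np
import torch.nn as nn
from transformers import WhisperModel, WhisperFeatureExtractor

class WhisperEmotionClassifier(nn.Module):
    def __init__(self, checkpoint="openai/whisper-tiny", num_labels=4):
        super().__init__()
        # Keep only the encoder; the decoder is not needed for classification.
        self.encoder = WhisperModel.from_pretrained(checkpoint).encoder
        self.head = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_features):
        # Encoder output: (batch, frames, d_model); pool over the time axis.
        hidden = self.encoder(input_features).last_hidden_state
        return self.head(hidden.mean(dim=1))

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
model = WhisperEmotionClassifier()

# Placeholder input: 1 s of silence at 16 kHz.
features = feature_extractor(np.zeros(16000), sampling_rate=16000, return_tensors="pt").input_features
logits = model(features)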
1 vote · 0 answers · 147 views
Using a fine-tuned model for the pipeline method in Hugging Face's Transformers
I'm trying to replicate the long-form transcription technique in the README for Whisper. The method loads a pre-existing model via the pipeline method.
import torch
from transformers import ...
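A minimal sketch of how a fine-tuned checkpoint can be dropped into that pipeline call, assuming it was saved with save_pretrained(); the local path and audio file below are placeholders. chunk_length_s enables the chunked long-form transcription the README describes.

import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="./whisper-finetuned",   # placeholder: local save_pretrained() dir or a Hub repo id
    chunk_length_s=30,             # chunked long-form transcription
    device=0 if torch.cuda.is_available() else -1,
)
result = pipe("audio.mp3")         # placeholder audio file
print(result["text"])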
0 votes · 1 answer · 340 views
TypeError: Whisper.forward() missing 1 required positional argument: 'tokens' when creating chatbot with fine-tuned model
On Google Colab I am creating a chatbot, using Whisper to transcribe audio and then generate a response based on a fine-tuned model. I used the model with a pure-text chatbot first, and it worked fine, but ...
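The error points at openai-whisper's Whisper.forward(mel, tokens), which suggests the model object is being called directly with only audio or a mel spectrogram. For plain transcription the usual entry point is model.transcribe(), which handles decoding internally. A minimal sketch; the model name and audio file are placeholders.

import whisper

model = whisper.load_model("base")          # placeholder; a path to a fine-tuned .pt checkpoint also works
result = model.transcribe("question.wav")   # placeholder audio file
print(result["text"])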
3 votes · 1 answer · 8k views
How can I fine-tune a model from OpenAI's Whisper ASR on my own training data?
I use OpenAI's Whisper Python lib for speech recognition. I have some training data: either text only, or audio + corresponding transcription. How can I fine-tune a model from OpenAI's Whisper ASR on ...
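The openai-whisper package itself ships no training loop; a common route is to fine-tune the Hugging Face port with Seq2SeqTrainer, which requires paired audio + transcription (text-only data cannot train the acoustic model this way). A minimal sketch with placeholder names; train_dataset and data_collator are assumed to be prepared elsewhere (log-mel input_features plus tokenized label ids).

from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="english", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetuned",  # placeholder output path
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=1000,
    fp16=True,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,   # assumed: Dataset of {"input_features", "labels"}
    data_collator=data_collator,   # assumed: pads input_features and label ids
    tokenizer=processor.feature_extractor,
)
trainer.train()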