Hi,
Can anyone explain why the highlighted part of the code snippet in interact.py (underlined in red) is necessary?
As far as I know, the logits returned by OpenAIGPTLMHeadModel have the shape (batch_size, sequence_length, vocab_size).
Why is only the last position in the output sequence used as the prediction for the next token?
Moreover, why do we have to generate the output text iteratively when the model itself already returns logits for the full sequence rather than for a single token?
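For context, this is roughly the decoding loop I'm asking about. It's a simplified greedy sketch, not the exact interact.py code, and the model/tokenizer names are just what I'm assuming from the transformers library:

```python
import torch
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

# Simplified sketch of iterative (autoregressive) decoding, not the actual interact.py logic.
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
model.eval()

generated = torch.tensor([tokenizer.encode("hello how are")])  # shape: (1, seq_len)

with torch.no_grad():
    for _ in range(20):  # generate up to 20 new tokens
        logits = model(generated)[0]          # shape: (batch_size, seq_len, vocab_size)
        next_token_logits = logits[0, -1, :]  # only the last position is used to predict the next token
        next_token = torch.argmax(next_token_logits).view(1, 1)
        generated = torch.cat([generated, next_token], dim=1)  # append and feed back in

print(tokenizer.decode(generated[0].tolist()))
```

My question is basically why the `logits[0, -1, :]` slice and the loop are needed, given that the model outputs logits for every position in the sequence.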