Result of text generation, containing:
- Generated text
- Number of tokens generated
- Prompt token count
- Time taken in seconds
- Tokens per second
- Finish reason: 'stop', 'length', or 'error'
- Model used
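The fields above can be collected in a simple container. The sketch below is a hypothetical dataclass (the name `GenerationResult` and all field names are assumptions, not from the source); tokens per second is derived from the token count and elapsed time rather than stored separately.

```python
from dataclasses import dataclass
from typing import Literal


@dataclass
class GenerationResult:
    """Hypothetical container for a text-generation result (field names assumed)."""
    text: str            # Generated text
    generation_tokens: int  # Number of tokens generated
    prompt_tokens: int   # Prompt token count
    generation_time: float  # Time taken in seconds
    finish_reason: Literal["stop", "length", "error"]  # Why generation ended
    model: str           # Model used

    @property
    def tokens_per_second(self) -> float:
        # Derived throughput; guard against a zero elapsed time.
        if self.generation_time <= 0:
            return 0.0
        return self.generation_tokens / self.generation_time


# Example: 100 tokens generated in 2 seconds gives 50 tokens/s.
result = GenerationResult(
    text="Hello, world!",
    generation_tokens=100,
    prompt_tokens=12,
    generation_time=2.0,
    finish_reason="stop",
    model="example-model",
)
```

A `finish_reason` of 'length' indicates the generation hit its token limit rather than producing a natural stop, which callers may want to handle differently from a clean 'stop'.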