Development & AI | Alper Akgun

LLM Settings

September, 2023

We can adjust specific LLM configuration parameters to influence different facets of the model's output. Tuning these parameters can produce more imaginative, varied, and engaging results. Temperature, Top P, and Maximum Length are the most significant settings, but there are a few others too.

Temperature

Usually a value from 0.0 to 1.0. Higher temperatures flatten the token probability distribution, making the model more creative; lower values yield more conservative, deterministic outputs, because the model favors the most probable next token.
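As a minimal sketch of the mechanism: temperature divides the logits before the softmax, so low values sharpen the distribution toward the top token and high values flatten it. The logits below are hypothetical, not from any real model.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.

    Lower temperature sharpens the distribution (near-greedy);
    temperature 1.0 leaves the relative probabilities unchanged.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                      # hypothetical next-token logits
cool = softmax_with_temperature(logits, 0.2)  # top token dominates
warm = softmax_with_temperature(logits, 1.0)  # plain softmax
print(cool)
print(warm)
```

Note that a temperature of exactly 0.0 is usually implemented as greedy decoding (always pick the argmax), since dividing by zero is undefined.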

Top P

Top P (nucleus sampling) restricts sampling to the smallest set of tokens whose cumulative probability reaches the threshold P. Keep this low if you want exact and factual responses. Keep this high for more diverse responses.
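A minimal sketch of the filtering step, assuming plain Python and a toy probability list: tokens are sorted by probability, kept until the cumulative mass reaches P, and the survivors are renormalized before sampling.

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p, then renormalize over the kept tokens."""
    indexed = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in indexed:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {token: prob / total for token, prob in kept}

probs = [0.5, 0.3, 0.15, 0.05]        # hypothetical next-token probabilities
narrow = top_p_filter(probs, 0.8)     # only the top tokens survive
wide = top_p_filter(probs, 1.0)       # all tokens survive
print(narrow)
print(wide)
```

With a low P the tail of unlikely tokens is cut off entirely, which is why low Top P behaves conservatively.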

Maximum Length

The maximum number of tokens to generate. Set this to manage response length and avoid long, irrelevant responses.
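Conceptually this is just a cap on the generation loop; a sketch with a stand-in `next_token_fn` (hypothetical, standing in for a real model call) and an end-of-sequence sentinel:

```python
EOS = -1  # hypothetical end-of-sequence token id

def generate(next_token_fn, max_tokens):
    """Generate until the model emits EOS or max_tokens is reached,
    whichever comes first."""
    tokens = []
    while len(tokens) < max_tokens:
        token = next_token_fn(tokens)
        if token == EOS:
            break
        tokens.append(token)
    return tokens

# stand-in "model" that just emits the next integer forever
out = generate(lambda toks: len(toks), max_tokens=5)
print(out)  # [0, 1, 2, 3, 4]
```

Without the cap, a model that never emits EOS would run indefinitely, which is exactly the failure mode this setting guards against.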

Frequency Penalty

Penalizes a token in proportion to how many times it has already appeared, discouraging verbatim repetition and degenerate looping output.

Presence Penalty

Gives a flat, one-time penalty to any token that has already occurred at least once, nudging the model toward new words and topics.
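The two penalties differ only in how they scale: one grows with each repetition, the other is flat. A sketch of how both could be applied to the logits before sampling (the logits and token ids are hypothetical):

```python
from collections import Counter

def apply_penalties(logits, generated, frequency_penalty, presence_penalty):
    """Subtract penalties from the logits of tokens already generated.

    frequency_penalty scales with the repetition count;
    presence_penalty is flat once a token has appeared at all.
    """
    counts = Counter(generated)
    adjusted = list(logits)
    for token, count in counts.items():
        adjusted[token] -= count * frequency_penalty  # grows per repetition
        adjusted[token] -= presence_penalty           # flat, one-time
    return adjusted

logits = [3.0, 2.0, 1.0]   # hypothetical next-token logits
generated = [0, 0, 1]      # token 0 appeared twice, token 1 once
adjusted = apply_penalties(logits, generated,
                           frequency_penalty=0.5, presence_penalty=0.4)
print(adjusted)
```

Token 0 loses 2 × 0.5 + 0.4 = 1.4, token 1 loses 0.5 + 0.4 = 0.9, and the unseen token 2 is untouched, so repetition becomes progressively less likely.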

Stop Sequences

Whenever the model generates one of the given sequences, it stops generating. The stop sequence itself is typically excluded from the returned text.
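A minimal sketch of the check, assuming plain string matching over the generated text so far (the sample stop sequence `"\nQ:"` is a common choice for cutting off a self-continuing Q&A format, used here purely as an illustration):

```python
def check_stop(text, stop_sequences):
    """Return (possibly truncated text, stopped flag).

    If any stop sequence appears, the text is cut just before it.
    """
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            return text[:idx], True
    return text, False

hit = check_stop("Answer: 42\nQ: next question", ["\nQ:"])
miss = check_stop("Answer: 42", ["\nQ:"])
print(hit)   # ('Answer: 42', True)
print(miss)  # ('Answer: 42', False)
```

In a real decoding loop this check runs after every emitted token, so generation halts as soon as a stop sequence is completed.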