LLM sampling settings let us influence different facets of a model's output. Adjusting them can make results more imaginative and varied, or more precise and repeatable. Temperature, Top P, and Max Length matter most, but a few other settings are worth knowing too.
Temperature: usually a value from 0.0 to 1.0. Higher temperatures make the model more creative by flattening the next-token probability distribution; lower values yield more conservative output, because the model concentrates on the most probable next tokens.
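To make the mechanics concrete, here is a minimal sketch of temperature-scaled sampling in Python. The logit values are made up, and real implementations operate on vocabulary-sized tensors, but the scaling step is the same.

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float) -> int:
    """Sample one token id from logits scaled by temperature."""
    scaled = logits / max(temperature, 1e-8)       # avoid division by zero
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([4.0, 2.0, 1.0, 0.5])  # hypothetical 4-token vocabulary
print(sample_with_temperature(logits, temperature=0.2))  # almost always token 0
print(sample_with_temperature(logits, temperature=1.5))  # much more varied
```

As the temperature approaches 0, this degenerates into always picking the single most probable token.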
Top P: sets a probability threshold and samples only from the smallest set of tokens whose cumulative probability reaches that threshold. Keep it low for exact, factual responses; raise it for more diverse output.
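A minimal sketch of nucleus (top-p) sampling, assuming we already have a normalized probability vector; the probabilities below are hypothetical.

```python
import numpy as np

def sample_top_p(probs: np.ndarray, top_p: float) -> int:
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, renormalize, and sample from that nucleus."""
    order = np.argsort(probs)[::-1]                       # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1  # nucleus size
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(np.random.choice(nucleus, p=nucleus_probs))

probs = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
print(sample_top_p(probs, top_p=0.8))  # samples only from tokens 0 and 1
```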
Max Length: the maximum number of tokens to generate. Use it to manage response length and cost, and to avoid long, irrelevant output.
Frequency Penalty: penalizes tokens in proportion to how often they have already appeared, discouraging verbatim repetition and degenerate loops (see the sketch after the next setting).
Presence Penalty: applies a flat, one-time penalty to any token that has appeared at all, regardless of how many times, nudging the model toward new topics.
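Both penalties can be sketched as adjustments to the logits before sampling. The scheme below follows the common formulation (OpenAI documents it this way, for example) where the frequency penalty scales with a token's count and the presence penalty is a flat one-time subtraction; the token ids and penalty values are made up.

```python
from collections import Counter
import numpy as np

def apply_penalties(logits: np.ndarray, generated: list[int],
                    frequency_penalty: float,
                    presence_penalty: float) -> np.ndarray:
    """Lower the logits of tokens already present in the output."""
    adjusted = logits.copy()
    for token_id, count in Counter(generated).items():
        adjusted[token_id] -= frequency_penalty * count  # grows with repetitions
        adjusted[token_id] -= presence_penalty           # flat, once per seen token
    return adjusted

logits = np.zeros(8)
generated = [3, 3, 3, 5]  # token 3 emitted three times, token 5 once
print(apply_penalties(logits, generated,
                      frequency_penalty=0.5, presence_penalty=0.4))
# token 3 is penalized by 0.5 * 3 + 0.4 = 1.9; token 5 by 0.5 * 1 + 0.4 = 0.9
```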
Stop Sequences: strings that halt generation; whenever the model emits one of the given sequences, it stops generating.
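Putting it all together, here is an end-to-end request using the OpenAI Python client as one concrete API that exposes all of these settings; other providers use similar parameter names. The model name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Name three uses for a paperclip."}],
    temperature=0.7,        # moderate creativity
    top_p=0.9,              # sample from the top 90% of probability mass
    max_tokens=150,         # cap the response length
    frequency_penalty=0.3,  # discourage repeated tokens
    presence_penalty=0.2,   # nudge toward new topics
    stop=["\n\n"],          # halt at the first blank line
)
print(response.choices[0].message.content)
```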