LLM Parameters
Master the control panel. These settings define the "personality" and constraints of your AI model.
Model Configuration Mockup
Visualizing commonly used settings in standard AI playgrounds:

- Temperature: 0.7
- Max Tokens: 2048
- Top P: 0.95
These controls (found in tools such as the OpenAI Playground or Vertex AI) shape how the model samples each token during generation.
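Temperature is the most direct of these knobs: the model's raw scores (logits) are divided by the temperature before being converted to probabilities, so low values sharpen the distribution and high values flatten it. A minimal sketch of that calculation (the function name is illustrative, not from any library):

```python
import math

def softmax_with_temperature(logits, temperature=0.7):
    """Turn raw logits into a probability distribution.

    Dividing by the temperature before the softmax sharpens the
    distribution when temperature < 1 (more deterministic) and
    flattens it when temperature > 1 (more random).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.2))  # nearly one-hot: low temp is focused
print(softmax_with_temperature(logits, 2.0))  # much flatter: high temp is creative
```

Note that temperature never changes the *ranking* of tokens, only how concentrated the probability mass is on the top candidates.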
Parameter Reference Guide
| Setting | Description |
|---|---|
| Temperature | Controls the randomness of predictions. Low (0.2) is deterministic/focused. High (0.8+) is creative/random. |
| Top P (Nucleus) | Limits sampling to the smallest set of tokens whose cumulative probability reaches P. Helps remove highly unlikely long-tail tokens. |
| Top K | Strictly limits sampling to the top K most likely tokens. E.g., only consider the top 40 words. |
| Max Tokens | Hard limit on output length. Important for cost control and preventing runaway generation. |
| Stop Sequences | Custom text strings (e.g., 'User:', 'END') that force the model to immediately stop generating. |
| Frequency Penalty | Penalizes tokens that have appeared often. Reduces repetition (e.g., prevents asking 'Why? Why? Why?'). |
| Presence Penalty | Penalizes tokens that have appeared at least once. Encourages introducing new topics. |
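The sampling filters and penalties above can be sketched in a few lines of plain Python. This is a simplified illustration, not any provider's actual implementation; the function names are hypothetical, and the penalty formula follows the logit adjustment OpenAI documents (frequency penalty scales with repetition count, presence penalty is a flat deduction once a token has appeared):

```python
def top_k_filter(probs, k=40):
    """Zero out all but the k most likely tokens, then renormalize."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

def top_p_filter(probs, p=0.95):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cumulative = set(), 0.0
    for i in ranked:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    filtered = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(filtered)
    return [q / total for q in filtered]

def apply_penalties(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """Lower the logits of tokens that already appeared in the output.

    counts maps token index -> number of prior occurrences.
    """
    return [
        logit
        - frequency_penalty * counts.get(i, 0)               # scales with repetition
        - presence_penalty * (1 if counts.get(i, 0) else 0)  # flat, once seen
        for i, logit in enumerate(logits)
    ]

probs = [0.5, 0.3, 0.15, 0.04, 0.01]
print(top_k_filter(probs, k=2))    # only the top two tokens survive
print(top_p_filter(probs, p=0.9))  # 0.5 + 0.3 + 0.15 >= 0.9, so three survive
print(apply_penalties([1.0, 1.0], {0: 3}, frequency_penalty=0.5, presence_penalty=0.5))
```

In practice Top K and Top P are applied to the temperature-scaled distribution before the final random draw, which is why combining aggressive values of both can leave very few candidate tokens.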