LLM Parameters

Master the control panel. These settings define the "personality" and constraints of your AI model.

Model Configuration Mockup
Typical defaults shown in standard AI playgrounds:

- Temperature: 0.7
- Max Tokens: 2048
- Top P: 0.95

These controls (found in tools like the OpenAI Playground or Vertex AI) directly shape how the model samples each successive token during generation.
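As a hedged sketch, the mockup settings above map onto a chat-completion request body. The field names below follow the OpenAI-style REST API; the model name and message are placeholders, and other providers may use different keys:

```python
# Hypothetical request payload using the parameters from the mockup.
# Field names follow the OpenAI chat-completions convention.
payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [{"role": "user", "content": "Explain nucleus sampling."}],
    "temperature": 0.7,       # randomness of predictions
    "max_tokens": 2048,       # hard cap on output length
    "top_p": 0.95,            # nucleus sampling threshold
    "frequency_penalty": 0.2, # discourage repeated tokens
    "presence_penalty": 0.0,  # neutral toward introducing new topics
    "stop": ["User:", "END"], # stop sequences
}
```

The payload would then be sent as the JSON body of the completion request; only the `model` and `messages` fields are strictly required in that API, so you can omit any parameter to accept the provider's default.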

Parameter Reference Guide

| Setting | Description |
| --- | --- |
| Temperature | Controls the randomness of predictions. Low (e.g., 0.2) is deterministic and focused; high (0.8+) is creative and random. |
| Top P (Nucleus) | Limits sampling to the smallest set of tokens whose cumulative probability reaches P. Helps remove highly unlikely long-tail tokens. |
| Top K | Strictly limits sampling to the K most likely tokens (e.g., consider only the top 40 candidates). |
| Max Tokens | Hard limit on output length. Important for cost control and for preventing runaway generations. |
| Stop Sequences | Custom text strings (e.g., 'User:', 'END') that force the model to stop generating immediately. |
| Frequency Penalty | Penalizes tokens in proportion to how often they have already appeared. Reduces repetition (e.g., prevents 'Why? Why? Why?'). |
| Presence Penalty | Penalizes any token that has appeared at least once. Encourages introducing new topics. |
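To make the first three settings concrete, here is a minimal pure-Python sketch of one sampling step: temperature scales the logits, then top-k and top-p filter the candidate pool before drawing a token. Real inference engines operate on tensors and may order these filters differently; the function name and defaults are illustrative, not any particular library's API:

```python
import math
import random

def sample_token(logits, temperature=0.7, top_k=40, top_p=0.95, rng=None):
    """Pick the next token id from raw logits using temperature,
    top-k, and top-p (nucleus) filtering, in that order."""
    rng = rng or random.Random()
    # 1. Temperature: divide logits before softmax. Lower -> sharper,
    #    more deterministic; higher -> flatter, more random.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # 2. Top-k: keep only the K most likely tokens.
    probs.sort(key=lambda pair: pair[1], reverse=True)
    probs = probs[:top_k]
    # 3. Top-p: keep the smallest prefix whose cumulative probability
    #    reaches top_p, trimming the unlikely long tail.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # 4. Renormalize the survivors and sample one of them.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With a very low temperature the distribution collapses onto the most likely token, which is why low values read as "deterministic/focused" in the table above.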
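The last three rows of the table act outside the sampling filters: penalties adjust logits before sampling, and stop sequences cut generation off afterwards. A rough sketch, with hypothetical helper names chosen for illustration:

```python
def apply_penalties(logits, generated_ids,
                    frequency_penalty=0.5, presence_penalty=0.25):
    """Adjust logits for tokens that have already been generated.
    Frequency penalty grows with each repetition; presence penalty is
    a flat hit for any token seen at least once."""
    counts = {}
    for t in generated_ids:
        counts[t] = counts.get(t, 0) + 1
    out = list(logits)
    for t, c in counts.items():
        out[t] -= c * frequency_penalty  # scales with repeat count
        out[t] -= presence_penalty       # flat, nudges toward new topics
    return out

def hit_stop_sequence(text, stops=("User:", "END")):
    """True once the decoded text ends with any stop string, at which
    point the generation loop should halt immediately."""
    return any(text.endswith(s) for s in stops)
```

A generation loop would call `apply_penalties` on the logits each step, sample a token, append the decoded text, and break as soon as `hit_stop_sequence` returns True or the max-token budget is exhausted.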