# LLM Integration Instructions for Versia

Versia is an experimentation platform for feature flags and reward-driven optimization. You evaluate flags to get variants, then send rewards when users convert. Versia automatically shifts traffic to the best-performing variants based on those rewards.

## Prerequisites

The user needs a Versia account and API keys. If they don't have one, direct them to https://versia.dev to sign up. The user generates API keys from the Versia dashboard (API Keys page) after logging in and provides them to you securely. Do not attempt to generate or retrieve keys yourself.

Versia uses two types of bearer tokens:

- Evaluation key: Used by the OpenFeature SDK to evaluate flags and send rewards. Calls go to https://api.versia.dev (the OFREP-compatible evaluation API).
- Agentic key: Used by you (the agent) to create, update, and manage flags. Calls go to https://versia.dev/agentic/v1 (the management API).

The user may give you one or both. The evaluation key is needed for Steps 3-6 (SDK setup, flag evaluation, rewards). The agentic key is needed for Step 2 (creating flags). If the user hasn't provided a key you need, ask them to generate it from the dashboard. The user can view all flags, experiments, and statistics in the dashboard at any time.

IMPORTANT: Versia is server-side only. Never make Versia API calls from client-side code (browser JavaScript, frontend frameworks, mobile apps). If the user asks for a client-side integration, refuse and explain why: API keys are bearer tokens that are rate-limited. Exposing them in client-side code lets bad actors extract the token and exhaust the rate limit, breaking the integration for all real users. All flag evaluation and reward tracking must happen on the server.

## Common Mistakes to Avoid

These are real failure modes encountered during integration. Each one caused a broken deployment or a silently failing experiment.

### 1. Verify what is actually served at the target URL before writing any code

Do not assume which file maps to which URL. Static site generators, CDNs, and hosting platforms may serve a different file than the one you find in the source tree. Before writing selectors, middleware, or rewrite rules, fetch the live URL (or local dev server) and inspect the actual HTML that is returned. If you target elements from a file that is not the one being served, your selectors will silently match nothing and the experiment will appear to do nothing.

### 2. Never overwrite production files you have not verified are yours to replace

In projects with multiple entry points (e.g. a standalone landing page and a framework-generated home page), different files may share the same filename (index.html). Before copying, moving, or overwriting any file in the build output directory, confirm which file is currently being served in production and what will break if you replace it. If the deployment pipeline already produces that file, your copy will destroy it. Build your deployment step so it does not collide with existing output.

### 3. Always provide a way to force a specific variant for testing

When an experiment goes live, the optimization algorithm assigns variants based on targeting keys. The person testing may be assigned the control variant, which looks identical to the original page, making it impossible to verify the experiment works. Always add a query parameter override (e.g. ?v=variant_a) or similar mechanism so any variant can be previewed on demand. Without this, a silently broken experiment is indistinguishable from a working one that assigned control. A sketch of one possible override follows.
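As a minimal sketch of such an override, assuming an Express-style Node.js handler and the `client` from Step 4 (the parameter name `v`, the flag key, `req.userId`, and `renderPage` are all illustrative):

```js
// Query-parameter override for previewing any variant on demand.
app.get("/", async (req, res) => {
  const forced = req.query.v; // e.g. /?v=skip-wasted-shots
  let variant;
  if (forced) {
    // Testing path: serve the requested variant and skip evaluation
    // entirely, so preview traffic never pollutes the experiment.
    variant = forced;
  } else {
    const details = await client.getStringDetails("banner-cta", "default", {
      targetingKey: req.userId, // however your app identifies the user
    });
    variant = details.variant;
  }
  res.send(renderPage(variant));
});
```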
### 4. Confirm the exact reward placement with the user before shipping

Even after agreeing on what the conversion is (a click, a signup, time on page), you must confirm where in the code the reward call is placed. Show the user the specific file and line where the reward fires, explain the trigger condition, and get explicit sign-off. Do not assume the placement is obvious. A reward placed at the wrong point in the code (e.g. on every page load instead of on conversion, or in a path that fires without the user ever seeing the variant) will teach Versia the wrong thing and silently corrupt the experiment.

### 5. Cap reward values to prevent outliers from corrupting learning

If the reward is a continuous value (like time-on-page in seconds), always cap it at a reasonable maximum. A user who opens a tab and walks away for an hour should not send a reward of 3600 that dominates Versia's learning. Set a ceiling that represents the maximum meaningful engagement (e.g. 5 minutes) and clamp the value before sending. A clamping sketch follows after mistake 6.

### 6. Protect the reward endpoint from poisoning

An unauthenticated reward endpoint that accepts arbitrary POST requests can be abused to inject fake rewards and skew experiment results. At minimum, validate that the request originated from your own site (check Origin or Referer headers), or have the middleware embed a short-lived signed token in the tracking script that the reward endpoint verifies before forwarding to Versia. A token sketch follows below.
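For mistake 5, a minimal clamping sketch, assuming time-on-page in seconds and the `client`, `ctx`, and `decisionToken` from the Step 4 examples (the 300-second ceiling is an example value, not a recommendation):

```js
// Clamp a continuous reward before sending so a single outlier cannot
// dominate Versia's learning.
const MAX_REWARD_SECONDS = 300; // the "maximum meaningful engagement"

function clampReward(seconds) {
  if (!Number.isFinite(seconds) || seconds < 0) return 0; // reject garbage input
  return Math.min(seconds, MAX_REWARD_SECONDS);
}

client.track("conversion", ctx, {
  value: clampReward(timeOnPageSeconds),
  decisionToken,
});
```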
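For mistake 6, one possible shape for the short-lived signed token, assuming Node's built-in crypto module; the token format, the 10-minute TTL, and the environment variable name are all illustrative. The middleware embeds `mintRewardToken(...)` in the tracking script; the reward endpoint calls `verifyRewardToken(...)` before forwarding anything to Versia:

```js
import crypto from "node:crypto";

const SECRET = process.env.REWARD_TOKEN_SECRET; // server-side only, never shipped to the client
const TOKEN_TTL_MS = 10 * 60 * 1000; // 10 minutes

function mintRewardToken(userKey) {
  const payload = `${userKey}:${Date.now()}`;
  const sig = crypto.createHmac("sha256", SECRET).update(payload).digest("hex");
  return `${payload}:${sig}`;
}

function verifyRewardToken(token) {
  const parts = (token || "").split(":");
  if (parts.length !== 3) return false;
  const [userKey, issuedAt, sig] = parts;
  const expected = crypto
    .createHmac("sha256", SECRET)
    .update(`${userKey}:${issuedAt}`)
    .digest("hex");
  if (sig.length !== expected.length) return false; // timingSafeEqual requires equal lengths
  const fresh = Date.now() - Number(issuedAt) < TOKEN_TTL_MS;
  return fresh && crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```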
### 7. Use descriptive variant keys and store content in the variant value

Variant keys appear in the Versia dashboard graphs and analytics. Keys like "variant_a" or "variant_b" are meaningless in a chart. Use short descriptive slugs that identify the variant at a glance (e.g. "skip-wasted-shots" instead of "variant_c"). The evaluation code receives the variant key (via the Details method shown in the examples below) and should use it to decide what to render. Store the actual content in the variant value so the user can add, edit, or remove variants from the dashboard without redeploying code. For structured content (multiple fields per variant), use a JSON string as the value and parse it in the evaluation code (a sketch appears in Step 1 below).

### 8. Clarify the scope of each experiment before writing variants

When the user says "experiment with X", do not assume the scope. A CSS experiment could mean tweaking one property (border-radius) or a complete visual rework (layout, typography, spacing, element visibility, ordering). A text experiment could mean swapping one word or rewriting entire paragraphs. Ask explicitly: "Do you want small variations on the current design, or fundamentally different approaches?" and "What specifically should change between variants?" Getting this wrong means building 10 variants that are either too similar to produce meaningful data or too different from what the user intended. The cost of one clarifying question is far less than the cost of rebuilding all variants.

### 9. Do not invent facts in variant copy

Variant text is served to real users. Confirm any factual claim (pricing, metrics, availability) with the user before including it.

### 10. Read the existing copy before writing variants

Variants that clash with the site's tone feel foreign regardless of conversion. Match the voice that is already there.

### 11. Ask what action counts as a reward

The element under test and the action that measures success are often different. Do not assume the reward is the most obvious interaction with the changed element.

### 12. Ask which pages the experiment should run on

An element may appear on many pages. Confirm scope with the user before deciding where to apply the experiment.

### 13. Test every variant for breakage before shipping

CSS overrides and HTML rewriting can break interactive elements in ways that are invisible in the source code. Verify that every variant renders and functions correctly.

### 14. Forward the decision_token on every evaluation, not just on rewards

For reward-driven flags, the decision_token returned in flag metadata has TWO jobs, not one:

1. Echo it back in trackingEventDetails when the reward fires - so the reward attributes to the decision the user actually saw.
2. Echo it back in context.decisionToken on the NEXT evaluation of the same flag for the same user - so the user keeps seeing the same variant on repeat requests.

A stable targeting key alone does not pin a variant. Without job (2), the same user can be served different variants on different page loads, even with the same targeting key, producing inconsistent UX. This is the most common "experiment looks broken" report and it is silent: the page renders, traffic flows, the dashboard shows variants - but each refresh swaps the experience.

For server-side web integrations: store the token in a cookie (a single cookie holding {flagKey: token} for all reward-driven flags works well). On every request, read it, pass it back in context.decisionToken on each flag's evaluation, and write the updated token from the response back to the cookie. For edge workers, cookies or KV both work. For backend services, attach the token to the user's session.

## Step 1: Understand What the User Needs

Before writing any code, have a conversation with the user to understand what they want. Versia supports different types of flags:

- Standard feature flags: Simple on/off toggles or percentage rollouts. No rewards needed. Use these for gradual rollouts, kill switches, or targeting specific user segments.
- Reward-driven flags: Multiple variants that Versia automatically optimizes based on conversion rewards. Use these when the user wants to experiment and find the best variant.

Ask the user which type they need. If they're unsure, help them decide: if they just want to toggle a feature or roll it out to a percentage of users, a standard flag is enough. If they want to test multiple variants and let Versia find the winner, they need a reward-driven flag. For standard flags, you can skip the reward-related steps (Step 6 and the reward window question below). For reward-driven flags, continue with all steps.

Educate them on how Versia works as you go:

Ask about the feature they want to test (e.g. button text, pricing page, onboarding flow). Explain that Versia returns a flag evaluation - a variant name - and it is up to the backend code to actually serve that variant to the user. Versia does not inject content or split traffic on its own. The backend must read the returned variant and decide what to render or which code path to follow. Unlike traditional A/B testing where traffic is split 50/50 and you wait weeks for results, Versia learns from every interaction and automatically adjusts which variant it returns, favoring the ones that perform best.

Ask what their variants are. These are the different versions users will see. For example, three different headlines or two pricing layouts. When you evaluate a flag, Versia returns one of these variant names. Your code must then map that variant name to the actual content or behavior. This is critical: you must understand how the backend organizes and serves its variants. Different backends do this in very different ways - templates, config files, database rows, conditional logic, etc. Ask the user how their variants are structured, or read the codebase to find out. Then write the code that maps Versia's returned variant to the correct content. Getting this mapping wrong means users see the wrong variant or nothing at all.
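As a sketch of that mapping, assuming the mistake-7 convention of JSON-string variant values (the flag key `pricing-hero`, the field names, and `renderHero` are illustrative):

```js
// The variant key drives the decision; the variant value carries the
// content, so the user can edit variants in the dashboard without a redeploy.
const details = await client.getStringDetails("pricing-hero", "{}", ctx);
const variantKey = details.variant; // the stable identifier Versia tracks, e.g. "skip-wasted-shots"

let content;
try {
  content = JSON.parse(details.value); // e.g. {"headline": "...", "cta": "..."}
} catch {
  content = { headline: "Fast feature flags", cta: "Get started" }; // safe fallback
}
res.send(renderHero(variantKey, content));
```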
Ask what counts as success. This is the reward signal - a click, a signup, a purchase. When a user converts, you send a reward back to Versia. Over time, Versia learns which variants drive the most conversions and adjusts future evaluations accordingly. Before asking the user, do your own research first. Read through the codebase to understand the existing user flows, event tracking, analytics, and conversion points. Look at route handlers, form submissions, purchase flows, signup handlers, click tracking, and any existing analytics events. Build a picture of what conversions already exist in the code. Then present your findings to the user: "I found these potential conversion points in your code: [list]. Which of these should count as a reward for this experiment? Or is there a different action you want to track?" Confirm with the user before proceeding.

Ask whether user context should be used for this experiment, but educate the user on the trade-off first. Adding context attributes (like plan type, device, or location) lets Versia learn separately for each segment - a mobile user might get variant A while a desktop user gets variant B. This is powerful but it SPLITS the learning. Each segment learns independently, which means Versia needs more traffic and more time to find winners because it's effectively running separate experiments per segment. If the user has low traffic or wants a quick answer, recommend starting without context so all users contribute to one shared learning pool. Context can always be added later once there's enough data. Only use context when there's a specific hypothesis to test - for example, "does this pricing page convert differently for free vs pro users?" or "does this layout work better on mobile than desktop?" That's when context earns its cost. Help the user make this decision consciously. Do not add context attributes by default.

Ask about the reward window. After a user sees a variant, how long should Versia wait for a conversion before considering it a non-conversion? This depends on the action - a button click might happen within seconds, but a purchase decision could take hours. Help the user pick a reasonable timeout for their specific use case.

Ask the user what they want to use as the targeting key. This is the unique identifier for each user (typically their existing user ID or session ID). It's how Versia tracks which variant a user was shown and whether they converted.

Do not proceed until you have a clear picture of the experiment.
## Step 2: Create the Flag

Use the agentic API to create the flag. The user needs an agentic API key from the Versia dashboard (Settings > API Keys > Generate Agentic Key).

Base URL: https://versia.dev/agentic/v1
Auth: Authorization: Bearer {agentic-api-key}

### Endpoints

- List flags: GET /flags
- Get flag: GET /flags/{name}
- Create flag: POST /flags
- Update flag: PUT /flags/{name}
- Toggle flag: PATCH /flags/{name}/toggle
- Delete flag: DELETE /flags/{name}

### Create/Update request body

```json
{
  "name": "flag-name",
  "flag": {
    "variations": {"control": "A", "test": "B"},
    "defaultRule": { ... see rule types below ... },
    "targeting": [{"name": "rule-name", "query": "plan eq \"pro\"", "variation": "test"}],
    "bucketingKey": "user_id",
    "experimentation": {"start": "2026-04-01T00:00:00Z", "end": "2026-04-15T00:00:00Z"}
  }
}
```

### Rule types (defaultRule)

Pick one per flag:

Static variation (serve one variant to everyone):

```json
{"variation": "control"}
```

Percentage rollout (split traffic by percentage):

```json
{"percentage": {"control": 80, "test": 20}}
```

Percentages must sum to 100.

Reward-driven (Versia optimizes automatically):

```json
{"contextualBandit": {}}
```

The server auto-sets the endpoint and timeout. TrackEvents is enabled automatically. You must also set experimentation start/end dates and send rewards (Step 6).

Progressive rollout (gradually shift traffic over time):

```json
{"progressiveRollout": {
  "initial": {"variation": "old", "percentage": 100, "date": "2026-04-01T00:00:00Z"},
  "end": {"variation": "new", "percentage": 100, "date": "2026-04-15T00:00:00Z"}
}}
```

### Targeting rules

Target specific user segments using the targeting array. Each rule has a query using these operators: eq, ne, lt, gt, le, ge, co (contains), sw (starts with), ew (ends with).

Query syntax: attribute operator "value"
Combine conditions with "and" or "or" (not both in the same rule).

Examples:

- plan eq "pro"
- country eq "US" and device eq "mobile"
- plan eq "free" or plan eq "trial"

### Important constraints

- Flag names: lowercase letters, numbers, hyphens, and underscores only
- All dates must be RFC3339 with timezone (e.g. 2026-04-01T00:00:00Z or 2026-04-01T12:00:00+02:00)
- Unknown JSON fields are rejected - only send documented fields
- Rate limit: 120 requests per minute per user

### Response format

All responses are JSON. Errors return: {"error": "message"}
Create returns 201, update/toggle return 200, delete returns 204 (no body).

The user can review and adjust the flag in the Versia dashboard at any time.
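As an end-to-end sketch of a create call, assuming Node.js 18+ (built-in fetch) and an agentic key in an environment variable; the flag name, variant keys and content, and dates are examples:

```js
// Create a reward-driven flag via the agentic API. Only documented
// fields are sent - unknown JSON fields are rejected.
const res = await fetch("https://versia.dev/agentic/v1/flags", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.VERSIA_AGENTIC_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "banner-cta",
    flag: {
      variations: {
        control: "Start your free trial",
        "skip-wasted-shots": "Skip the wasted shots",
      },
      defaultRule: { contextualBandit: {} }, // reward-driven rule
      bucketingKey: "user_id",
      experimentation: {
        start: "2026-04-01T00:00:00Z", // RFC3339 with timezone, required for bandits
        end: "2026-04-15T00:00:00Z",
      },
    },
  }),
});
if (res.status !== 201) {
  throw new Error(`Flag creation failed: ${await res.text()}`); // {"error": "message"}
}
```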
## Step 3: Detect Language and Install the SDK

Detect the language/framework of the user's project. Install the correct OpenFeature SDK and OFREP provider. Offer to run the install command yourself.

Go:

    go get github.com/open-feature/go-sdk
    go get github.com/open-feature/go-sdk-contrib/providers/ofrep

Node.js:

    npm install @openfeature/server-sdk @openfeature/ofrep-provider

Python:

    pip install openfeature-sdk openfeature-provider-ofrep

Java (Maven):

    dev.openfeature:sdk
    dev.openfeature.contrib.providers:ofrep

Java (Gradle):

    implementation 'dev.openfeature:sdk'
    implementation 'dev.openfeature.contrib.providers:ofrep'

.NET:

    dotnet add package OpenFeature
    dotnet add package OpenFeature.Providers.Ofrep

PHP: There is no OFREP provider or tracking support in the PHP OpenFeature SDK yet. Use raw HTTP requests instead (cURL or any HTTP client). No SDK install needed.

## Step 4: Set Up the Provider

Initialize the OFREP provider with the evaluation API (use the evaluation key, not the agentic key):

- Base URL: https://api.versia.dev
- Auth: Bearer token (the user's evaluation key)

Refer to the OpenFeature SDK documentation for your language:

- Go: https://openfeature.dev/docs/reference/technologies/server/go
- Node.js: https://openfeature.dev/docs/reference/technologies/server/javascript
- Python: https://openfeature.dev/docs/reference/technologies/server/python
- Java: https://openfeature.dev/docs/reference/technologies/server/java
- .NET: https://openfeature.dev/docs/reference/technologies/server/dotnet
- PHP: No OFREP provider available. Use raw HTTP requests (see PHP example below).

### Decision tokens (reward-driven flags only)

For reward-driven flags (contextualBandit rule), every evaluation returns a decision_token inside the flag's metadata. The token has TWO jobs - both are required, and the language examples below show only job (1):

1. Reward attribution: echo the token back in trackingEventDetails when the reward fires. Without this, the reward is silently dropped and no learning happens.
2. Variant stickiness: echo the token back in context.decisionToken on the NEXT evaluation of the same flag for the same user. Without this, the user can be served a different variant on each request, even with a stable targeting key.

The targeting key identifies the user; the decision_token identifies the specific assignment that user has been given. Both must round-trip. Standard feature flags (non-bandit) do not return a token and do not need one for track() calls or repeat evaluations.

### Go example

```go
provider := ofrep.NewProvider("https://api.versia.dev",
    ofrep.WithBearerToken("your-evaluation-key"))
openfeature.SetProvider(provider)
client := openfeature.NewClient("my-app")

evalCtx := openfeature.NewEvaluationContext("user-123", map[string]interface{}{
    "plan":   "pro",
    "device": "mobile",
})
details, _ := client.StringValueDetails(ctx, "banner-cta", "default", evalCtx)
variant := details.Variant

// Capture decision token for reward-driven flags. Stash it wherever the
// reward will fire from (session, DB column, closure).
token, _ := details.FlagMetadata["decision_token"].(string)

// Send reward on conversion
client.Track(ctx, "conversion", evalCtx,
    openfeature.NewTrackingEventDetails(1.0).WithAttribute("decisionToken", token))
```

### Node.js example

```js
OpenFeature.setProvider(new OFREPProvider({
  baseUrl: 'https://api.versia.dev',
  headers: { Authorization: 'Bearer your-evaluation-key' },
}));
const client = OpenFeature.getClient();

const ctx = { targetingKey: 'user-123', plan: 'pro', device: 'mobile' };
const details = await client.getStringDetails('banner-cta', 'default', ctx);
const variant = details.variant;

// Capture decision token for reward-driven flags.
const decisionToken = details.flagMetadata?.decision_token;

// Send reward on conversion
client.track('conversion', ctx, { value: 1.0, decisionToken });
```

### Python example

```python
provider = OFREPProvider(
    "https://api.versia.dev",
    headers_factory=lambda: {"Authorization": "Bearer your-evaluation-key"})
api.set_provider(provider)
client = api.get_client()

ctx = EvaluationContext(targeting_key="user-123",
                        attributes={"plan": "pro", "device": "mobile"})
details = client.get_string_details("banner-cta", "default", ctx)
variant = details.variant

# Capture decision token for reward-driven flags.
decision_token = (details.flag_metadata or {}).get("decision_token")

# Send reward on conversion
client.track("conversion", ctx,
             TrackingEventDetails(value=1.0).add("decisionToken", decision_token))
```
### Java example

```java
var provider = OfrepProvider.constructProvider(OfrepProviderOptions.builder()
    .baseUrl("https://api.versia.dev")
    .headers(Map.of("Authorization", "Bearer your-evaluation-key")).build());
OpenFeatureAPI.getInstance().setProvider(provider);
Client client = OpenFeatureAPI.getInstance().getClient();

EvaluationContext ctx = new ImmutableContext("user-123",
    Map.of("plan", new Value("pro"), "device", new Value("mobile")));
var details = client.getStringDetails("banner-cta", "default", ctx);
String variant = details.getVariant();

// Capture decision token for reward-driven flags.
String decisionToken = details.getFlagMetadata() != null
    ? (String) details.getFlagMetadata().getOrDefault("decision_token", "")
    : "";

// Send reward on conversion
client.track("conversion", ctx,
    new MutableTrackingEventDetails(1.0).add("decisionToken", decisionToken));
```

### .NET example

```csharp
var provider = new OfrepProvider(new OfrepOptions("https://api.versia.dev")
{
    Headers = new Dictionary<string, string>
    {
        ["Authorization"] = "Bearer your-evaluation-key"
    }
});
await Api.Instance.SetProviderAsync(provider);
var client = Api.Instance.GetClient();

var ctx = EvaluationContext.Builder()
    .SetTargetingKey("user-123")
    .Set("plan", "pro").Set("device", "mobile").Build();
var details = await client.GetStringDetailsAsync("banner-cta", "default", ctx);
var variant = details.Variant;

// Capture decision token for reward-driven flags.
var decisionToken = details.FlagMetadata?.GetString("decision_token") ?? "";

// Send reward on conversion
client.Track("conversion", ctx, new TrackingEventDetailsBuilder()
    .SetValue(1.0).Add("decisionToken", decisionToken).Build());
```

### PHP example (raw HTTP, no SDK needed)

```php
// Evaluate a flag
$ch = curl_init('https://api.versia.dev/ofrep/v1/evaluate/flags/banner-cta');
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer your-evaluation-key',
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS => json_encode([
        'context' => [
            'targetingKey' => 'user-123',
            'plan' => 'pro',
            'device' => 'mobile',
        ],
    ]),
]);
$result = json_decode(curl_exec($ch), true);
$variant = $result['variant'] ?? 'default';

// Capture decision token for reward-driven flags.
$decisionToken = $result['metadata']['decision_token'] ?? null;

// Send reward on conversion. Echo the decision token back so the server
// can link the reward to the decision.
$ch = curl_init('https://api.versia.dev/v1/data/collector');
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => [
        'Authorization: Bearer your-evaluation-key',
        'Content-Type: application/json',
    ],
    CURLOPT_POSTFIELDS => json_encode([
        'events' => [[
            'kind' => 'tracking',
            'userKey' => 'user-123',
            'key' => 'conversion',
            'trackingEventDetails' => [
                'flagKey' => 'banner-cta',
                'reward' => 1.0,
                'decisionToken' => $decisionToken,
            ],
        ]],
    ]),
]);
curl_exec($ch);
```

## Step 5: Evaluate Flags

Before writing the evaluation call, think carefully about the code execution flow. Read the relevant request handlers, middleware, and rendering logic. Ask yourself:

- Is this the right point in the request lifecycle to evaluate? The variant must be known before the response is rendered, but not so early that context is unavailable.
- Is this code path actually reached for the users you want to target? Trace the flow from the incoming request to where the variant will be used.
- Will this evaluation run on every request, or is it cached? Unnecessary repeated evaluations waste API calls. Consider evaluating once per session or request and passing the result through (see the sketch after this list).
- Is the user context (targeting key, attributes) available at this point? If not, you may need to evaluate later in the flow where the user is authenticated or identified.
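A minimal evaluate-once-per-request sketch, assuming an Express-style app and the Node.js client from Step 4 (the `req.versia` property, `req.userId`, and `renderPricing` are illustrative; for reward-driven flags, combine this with the sticky-token round-trip shown below):

```js
// Evaluate once in middleware and pass the result through the request,
// instead of re-evaluating in every handler or template.
app.use(async (req, res, next) => {
  const details = await client.getStringDetails("banner-cta", "default", {
    targetingKey: req.userId,
  });
  req.versia = {
    variant: details.variant,
    token: details.flagMetadata?.decision_token,
  };
  next();
});

app.get("/pricing", (req, res) => {
  res.send(renderPricing(req.versia.variant)); // no second evaluation
});
```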
Use the OpenFeature SDK's Details method to evaluate flags. The Details response contains both a variant key (which variation was selected) and a value (the content stored in that variation). Use the variant key to decide what to render - it is the stable identifier that matches what Versia tracks internally. The SDK returns a sensible default when the API is unreachable or the flag doesn't exist, so your application gracefully degrades. Pass an EvaluationContext with the user's targeting key and any context attributes relevant to the experiment.

### Sticky evaluation for reward-driven flags

For reward-driven flags, the same targeting key alone does NOT guarantee the same variant on repeat evaluations. To pin a user to a variant across requests, read the user's stored decision_token (typically from a cookie or session) and include it in the evaluation context as decisionToken. After evaluation, store the new decision_token from the response back to the same cookie/session so the next request can use it.

Pattern (one consolidated cookie holding tokens for all reward-driven flags):

```js
// 1. Read tokens from cookie at the start of the request
const tokensIn = readJSONCookie(request, "vdt") || {};

// 2. Pass the per-flag token in context.decisionToken on each evaluation
const ctx = { targetingKey: userId };
if (tokensIn[flagKey]) ctx.decisionToken = tokensIn[flagKey];
const details = await client.getStringDetails(flagKey, "default", ctx);

// 3. Capture the new token and persist for next request
const tokensOut = { ...tokensIn };
const newToken = details.flagMetadata?.decision_token;
if (newToken) tokensOut[flagKey] = newToken;
writeJSONCookie(response, "vdt", tokensOut, { maxAge: 6 * 24 * 3600 });
```

The same token also goes into trackingEventDetails when the reward fires (Step 6) - one token, two echo points.

If you skip the read-side of this loop (only writing the cookie, never forwarding it on the next eval), the experiment will still run but every page load is an independent evaluation. The user sees inconsistent variants and the experiment appears broken even though traffic and rewards both flow.

## Step 6: Send Rewards (Versia-specific)

This is the only Versia-specific API call and the most important step to get right. A misplaced reward call will teach Versia the wrong thing, so think carefully.

Before writing the reward call, trace the full user journey from seeing the variant to converting. Read the relevant code paths - the handler that serves the variant, the route or event that fires on conversion, and everything in between. Make sure you understand exactly where the conversion happens in the code.

Place the reward call at the point where the conversion is confirmed and irreversible. For example:

- For a purchase: after the payment is confirmed, not when the user clicks "buy"
- For a signup: after the account is created, not when the form is submitted
- For a click: in the click handler, but only if it leads to a meaningful action

Watch out for these common mistakes:

- Sending rewards in code paths that can be triggered without seeing the variant first
- Sending duplicate rewards (e.g. on page refresh or retry; a dedup sketch follows this list)
- Sending rewards in error handlers or fallback paths
- Placing the reward before the action is actually complete
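For the duplicate-rewards bullet in particular, a minimal idempotency sketch, assuming the Node.js client from Step 4. The decision token is a natural dedup key; the in-memory Set is for illustration only - in practice use the session store or a database column:

```js
// Send each reward at most once per decision, so page refreshes and
// retries don't double-count a conversion.
const rewarded = new Set();

function sendRewardOnce(ctx, decisionToken) {
  if (!decisionToken || rewarded.has(decisionToken)) return; // already counted
  rewarded.add(decisionToken);
  client.track("conversion", ctx, { value: 1.0, decisionToken });
}
```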
For reward-driven (bandit) flags, the reward event MUST carry the decisionToken that the evaluation returned. The token is what links the reward back to the specific assignment the user was given. Without it, the event is dropped silently and no learning happens. The token lives in flag metadata on the evaluation response (see Step 4):

- Go/.NET: details.FlagMetadata["decision_token"]
- Node.js: details.flagMetadata.decision_token
- Python: details.flag_metadata["decision_token"]
- Java: details.getFlagMetadata().get("decision_token")
- PHP HTTP: $result['metadata']['decision_token']

Capture it right after evaluation, stash it wherever the reward will fire from (the session, a DB column, a closure, the DOM for browser emits), and attach it to trackingEventDetails when the reward is sent.

```
POST https://api.versia.dev/v1/data/collector
Authorization: Bearer {evaluation-key}
Content-Type: application/json

{
  "events": [{
    "kind": "tracking",
    "userKey": "{user-id}",
    "key": "conversion",
    "trackingEventDetails": {
      "flagKey": "{flag-key}",
      "reward": 1.0,
      "decisionToken": "{decision-token-from-evaluation}"
    }
  }]
}
```

The reward value is a number (typically 1.0 for a conversion, 0.0 for no conversion). Versia uses these signals to learn which variants perform best for each user segment and automatically adjusts traffic distribution.

The decision token is valid for 7 days after evaluation. Rewards that arrive after that window are dropped. Pick reward-placement points that fire within the token's lifetime - short funnels (clicks, form submits) are fine; long ones (annual renewals) are out of scope for this attribution model.

After placing the reward call, explain to the user exactly where you put it and why. Walk them through the flow: "When a user sees variant X here [file:line], and then converts here [file:line], the reward is sent here [file:line]." Get confirmation before finalizing.

## Key Concepts

- Feature flags: Toggle features or choose between variants without deploying code
- Reward-driven flags: Automatically optimize variant distribution based on conversion rewards
- Context: User attributes (plan, device, location) used for targeting and personalization
- Rewards: Signals that tell Versia which variants perform best for each user segment
- Decision token: Opaque string returned in flag metadata for reward-driven flags. Capture at evaluation and echo it back in TWO places: (a) trackingEventDetails when the reward fires, so rewards attribute correctly, and (b) context.decisionToken on the next evaluation of the same flag for the same user, so the user keeps seeing the same variant. Valid for 7 days. Skip (a) and rewards are dropped; skip (b) and the same user may see different variants on different requests.
- Default values: The OpenFeature SDK returns a default variant if evaluation fails, so your app always works even if Versia is unreachable