How It Works¶
This page explains the request flow through Connexions and how different features interact.
Request Flow¶
When a request arrives at Connexions, it passes through a middleware chain:
```
Request → Config Override → Latency/Error → Replay Read → Replay Write → Cache Read → Upstream ────────→ Response
                                                │                           │            │ (if failed)
                                             (if hit)                    (if hit)    Custom MW → Handler → Cache Write
                                                │                           │                                  │
                                             Response                   Response ←─────────────────────────────┘
```
Middleware Chain¶
- Config Override Middleware - Applies per-request config overrides from `X-Cxs-*` headers
- Latency & Error Middleware - Simulates network latency and injects errors
- Replay Read Middleware - Returns a recorded replay if the request matches (short-circuits)
- Replay Write Middleware - Wraps downstream to capture and record responses for replay
- Cache Read Middleware - Returns cached response if available (short-circuits)
- Upstream Middleware - Forwards to real backend; returns response if successful (short-circuits)
- Custom Middleware - Your service-specific middleware (compiled services only)
- Handler - Generates mock response from OpenAPI spec
- Cache Write Middleware - Stores response in cache for future requests
Per-Request Config Overrides¶
Override service configuration for individual requests using HTTP headers. This is useful for testing, debugging, or handling special cases without modifying the config file.
Supported Request Headers¶
| Header | Values | Description |
|---|---|---|
| `X-Cxs-Cache-Requests` | `true` / `false` | Enable/disable request caching |
| `X-Cxs-Latency` | Duration (e.g., `100ms`, `1s`) | Override latency |
| `X-Cxs-Upstream-Url` | URL or empty string | Override upstream URL (empty disables upstream) |
| `X-Cxs-Replay` | `body:f1,f2;query:f3` or `f1,f2` (or empty) | Activate replay; optionally override match fields |
Response Headers¶
Connexions adds headers to responses indicating how they were processed:
| Header | Values | Description |
|---|---|---|
| `X-Cxs-Source` | `generated`, `cache`, `upstream`, `replay` | Where the response came from |
| `X-Cxs-Duration` | Duration (e.g., `5.123ms`) | Total request processing time |
Using Config Overrides in the UI¶
The web UI provides a Config Overrides accordion in the generator view. This allows you to override settings without writing curl commands:
- Navigate to a service and select a resource
- Expand the Config Overrides accordion
- Check the box next to any setting you want to override:
  - Upstream URL: Enter a URL to redirect requests, or leave empty to disable upstream (forces a mock response)
  - Cache Requests: Select `true` to enable caching or `false` to disable
  - Latency: Enter a duration like `100ms` or `2s`
- Click refresh to send the request with your overrides
Only checked options are sent as headers. Unchecked options use the server's default configuration.
curl Examples¶
```shell
# Disable caching for this request
curl -H "X-Cxs-Cache-Requests: false" http://localhost:2200/petstore/pets

# Add 500ms latency
curl -H "X-Cxs-Latency: 500ms" http://localhost:2200/petstore/pets

# Disable upstream proxy (force mock response)
curl -H "X-Cxs-Upstream-Url: " http://localhost:2200/petstore/pets

# Redirect to a different upstream
curl -H "X-Cxs-Upstream-Url: https://api.example.com" http://localhost:2200/petstore/pets

# Combine multiple overrides
curl -H "X-Cxs-Latency: 200ms" -H "X-Cxs-Cache-Requests: true" http://localhost:2200/petstore/pets
```
Case Insensitivity¶
Headers are case-insensitive. These are all equivalent:
- `x-cxs-cache-requests: false`
- `X-Cxs-Cache-Requests: false`
- `X-CXS-CACHE-REQUESTS: false`
Latency Simulation¶
Simulate real-world network conditions to test how your application handles delays.
Fixed Latency¶
```yaml
# config.yml
latency: 100ms
```
Every request will be delayed by 100ms.
Percentile-Based Latency¶
```yaml
# config.yml
latencies:
  p25: 10ms   # 25% of requests: 10ms
  p50: 50ms   # 25% of requests: 50ms
  p90: 100ms  # 40% of requests: 100ms
  p99: 500ms  # 9% of requests: 500ms
  p100: 1s    # 1% of requests: 1s
```
This creates a realistic latency distribution where most requests are fast, but some experience higher latency.
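One way to implement such a distribution is to draw a uniform random number between 0 and 100 and pick the first cumulative percentile bucket it falls under. This is a sketch of the idea, not Connexions' actual implementation:

```python
import random

# Cumulative percentile buckets from the config above: p25 → 10ms, p50 → 50ms, ...
# Delays are expressed in seconds here.
LATENCIES = [(25, 0.010), (50, 0.050), (90, 0.100), (99, 0.500), (100, 1.0)]

def pick_latency(roll=None):
    """Return the delay (in seconds) to apply to one request."""
    if roll is None:
        roll = random.uniform(0, 100)
    for percentile, delay in LATENCIES:
        if roll <= percentile:
            return delay
    return LATENCIES[-1][1]
```

A roll of 30 falls between p25 and p50, so it gets the p50 delay of 50ms.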
Error Injection¶
Test error handling by injecting HTTP errors at configurable rates.
```yaml
# config.yml
errors:
  p5: 500   # 5% return 500 Internal Server Error
  p10: 400  # 5% return 400 Bad Request (p10 - p5)
  p15: 429  # 5% return 429 Too Many Requests
```
Percentiles are cumulative - `p10: 400` means requests between p5 and p10 (5%) return 400.
Flow:
- Request arrives
- Random number generated (0-100)
- If number ≤ 5 → return 500
- If number ≤ 10 → return 400
- If number ≤ 15 → return 429
- Otherwise → proceed to next middleware
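The flow above amounts to a lookup over the cumulative buckets: return the status of the first bucket the roll falls under, or `None` to continue down the chain (an illustrative sketch):

```python
import random

# Cumulative error buckets from the config above: p5 → 500, p10 → 400, p15 → 429
ERRORS = [(5, 500), (10, 400), (15, 429)]

def inject_error(roll=None):
    """Return an HTTP status to inject, or None to proceed to the next middleware."""
    if roll is None:
        roll = random.uniform(0, 100)
    for percentile, status in ERRORS:
        if roll <= percentile:
            return status
    return None
```

A roll of 7 lands between p5 and p10, so the request gets a 400; a roll of 50 passes through untouched.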
Upstream Proxy¶
Forward requests to a real backend service with circuit breaker protection.
```yaml
# config.yml
upstream:
  url: https://api.example.com
  timeout: 5s
```
How It Works¶
- Request arrives at Connexions
- Upstream middleware forwards the request to `https://api.example.com`
- If successful → return the upstream response
- If failed and the status matches `fail-on` → return the upstream error directly
- On any other failure → proceed to the mock handler (fallback)
By default, `400 Bad Request` is returned directly (configurable via `fail-on`).
Circuit Breaker¶
The circuit breaker protects against cascading failures:
- Closed (normal): Requests flow to upstream
- Open (tripped): Requests skip upstream, go directly to mock handler
- Half-Open (recovery): Some requests test if upstream is healthy
The circuit opens when:

- At least 3 requests have been made
- Failure ratio ≥ 60%
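A minimal breaker with those two thresholds could look like the sketch below. The threshold values come from the list above; the half-open recovery timing is deliberately omitted, and this is not Connexions' actual implementation:

```python
class CircuitBreaker:
    """Opens once >= 3 requests have been made and the failure ratio reaches 60%."""

    def __init__(self, min_requests=3, failure_ratio=0.6):
        self.min_requests = min_requests
        self.failure_ratio = failure_ratio
        self.requests = 0
        self.failures = 0

    def allow(self) -> bool:
        """False when the circuit is open: skip upstream, go straight to the mock handler."""
        if self.requests >= self.min_requests:
            if self.failures / self.requests >= self.failure_ratio:
                return False
        return True

    def record(self, success: bool) -> None:
        """Track the outcome of one upstream request."""
        self.requests += 1
        if not success:
            self.failures += 1
```

With 2 failures out of 3 requests (a 66% failure ratio), the circuit opens; with fewer than 3 requests it always stays closed.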
Response Caching¶
Cache GET request responses to improve performance and consistency.
```yaml
# config.yml
cache:
  requests: true
```
How It Works¶
- Cache Read: Before processing, check if response exists in cache
- Cache Write: After generating response, store in cache
Cached responses are keyed by `METHOD:URL` and cleared periodically (configurable via `historyDuration` in the app settings).
Cache Behavior¶
| Request | Cache State | Result |
|---|---|---|
| GET /pets | Empty | Generate response, cache it |
| GET /pets | Has entry | Return cached response |
| POST /pets | Any | Always generate new response |
| GET /pets/1 | Empty | Generate response, cache it |
Request Validation¶
Requests can be validated against the OpenAPI specification at run time.
Validation checks:

- Required parameters
- Parameter types and formats
- Request body schema
- Content-Type headers
Invalid requests return 400 Bad Request with validation details.
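A required-parameter check of the kind described might look like the following sketch (not Connexions' validator; the parameter dictionaries mirror the OpenAPI `required`/`name` fields):

```python
def validate_required_params(spec_params, request_query):
    """Return a list of error messages for missing required query parameters."""
    errors = []
    for param in spec_params:
        if param.get("required") and param["name"] not in request_query:
            errors.append(f'missing required parameter: {param["name"]}')
    return errors
```

A non-empty error list would translate into a `400 Bad Request` carrying the validation details.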
Response Generation¶
When no cached or upstream response is available, Connexions generates a mock response:
- Find matching operation in OpenAPI spec
- Select response (prefer `200`, then `2xx`, then the first defined)
- Generate response body - values are resolved in this order:
  - Replace from request headers
  - Replace from path parameters
  - Replace from context files
  - Use schema `example` values
  - Generate based on schema `format` (email, uuid, date, etc.)
  - Generate based on schema primitive type (string, integer, etc.)
  - Fallback to default values
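The resolution order can be modeled as a chain of value sources tried in turn, taking the first that yields anything. This is a simplified sketch of the priority list above (the `format` step is folded into the type step for brevity):

```python
def resolve_value(name, schema, headers, path_params, contexts):
    """Resolve one response field using the documented priority order."""
    sources = [
        lambda: headers.get(name),        # 1. request headers
        lambda: path_params.get(name),    # 2. path parameters
        lambda: contexts.get(name),       # 3. context files
        lambda: schema.get("example"),    # 4. schema example values
        # 5/6. generate from schema format/primitive type (placeholder values here)
        lambda: {"string": "text", "integer": 0}.get(schema.get("type")),
    ]
    for source in sources:
        value = source()
        if value is not None:
            return value
    return None  # 7. fall back to a default
```

A path parameter beats a schema example, and a schema example beats a type-generated value.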
Static Responses¶
Override generated responses with static files:
```
static/{service}/{method}/{path}/index.json
```

Example: `static/petstore/get/pets/index.json` overrides `GET /petstore/pets`
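Mapping a request onto that lookup path is a simple string template (an illustrative sketch; the helper name is ours, not from Connexions):

```python
def static_path(service: str, method: str, path: str) -> str:
    """Build the static-response file path for a request."""
    return f"static/{service}/{method.lower()}/{path.strip('/')}/index.json"
```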
x-static-response Extension¶
Define static responses directly in your OpenAPI spec:
```yaml
paths:
  /pets:
    get:
      responses:
        '200':
          content:
            application/json:
              x-static-response: |
                [{"id": 1, "name": "Fluffy"}]
```
Replay¶
Record API responses and replay them on subsequent requests that match specific request fields. See Replay for full documentation.
How It Works¶
- Request arrives with an `X-Cxs-Replay` header (or hits an `auto-replay` endpoint)
- Replay Read checks for a stored recording matching the request body fields
- If found → return immediately with `X-Cxs-Source: replay`
- If not found → Replay Write captures the downstream response and stores it for future matches
Replay sits before the cache in the middleware chain, so content-addressed replay lookups take priority over URL-based cache lookups.
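Content-addressed lookup means the replay key is derived from the matched request fields rather than from the URL. A sketch of the idea, assuming a simple hash over the configured body fields (not Connexions' actual key format):

```python
import hashlib
import json

def replay_key(body: dict, match_fields: list) -> str:
    """Derive a stable key from only the configured match fields of the body."""
    selected = {f: body.get(f) for f in sorted(match_fields)}
    payload = json.dumps(selected, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

Two requests that differ only in unmatched fields produce the same key and therefore hit the same recording.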
Combining Features¶
Features can be combined for realistic testing scenarios:
```yaml
# config.yml
latencies:
  p50: 50ms
  p99: 200ms
errors:
  p5: 500
upstream:
  url: https://api.example.com
cache:
  requests: true
replay:
  upstream-only: true
  endpoints:
    /search:
      POST:
        match:
          body:
            - query
```
This configuration:

1. Adds a realistic latency distribution
2. Injects 5% server errors
3. Tries upstream first, falls back to mock
4. Caches successful GET responses
5. Records and replays upstream POST responses matched by request body fields