# API Request Lifecycle

Sequence diagrams showing a single Bedrock LLM call from service code all the way to a parsed result, plus the credential resolution that happens at startup.

---

## Sequence: LangChain path (step-04, step-05)

```mermaid
sequenceDiagram
    participant Service as Service Function<br/>(page_detection_service.py<br/>ir_generation_service.py<br/>react_generation_service.py)
    participant TLC as timed_llm_call()<br/>shared/llm_logging.py
    participant LCModel as ChatBedrockConverse<br/>shared/bedrock_client.py
    participant Botocore as botocore<br/>HTTP client
    participant Bedrock as AWS Bedrock<br/>Converse API
    participant RunLog as RunLog<br/>shared/run_log.py
    participant Extract as extractors.py

    Service->>TLC: enter context manager<br/>{ step, model, page_id }
    activate TLC
    TLC->>TLC: t0 = perf_counter()<br/>print "calling..."<br/>start spinner thread

    Service->>LCModel: model.invoke(messages)<br/>messages: list[HumanMessage]<br/>content: text + image blocks

    activate LCModel
    LCModel->>Botocore: HTTP POST<br/>to bedrock-runtime.us-east-1.amazonaws.com<br/>Content-Type: application/json
    activate Botocore
    Botocore->>Bedrock: HTTPS POST /model/{modelId}/converse<br/>body: { messages, inferenceConfig }
    activate Bedrock

    alt Success (200)
        Bedrock-->>Botocore: 200 OK<br/>{ output.message.content, usage }
        Botocore-->>LCModel: Response object
        LCModel-->>Service: AIMessage<br/>content: str | list[dict]
        deactivate LCModel

        Service->>TLC: stats["response"] = AIMessage
        TLC->>TLC: stop spinner<br/>elapsed_ms = (perf_counter()-t0) * 1000
        TLC->>TLC: extract_token_usage(response)<br/>reads response_metadata["usage"]<br/>or usage_metadata
        TLC->>RunLog: get_active().record_llm_call(<br/>  model, duration_ms,<br/>  input_tokens, output_tokens, label)
        RunLog->>RunLog: append LLMCall to<br/>_stack[-1].llm_calls
        TLC->>TLC: logger.info("[LLM] step=... model=...<br/>in=N out=M total=K tok Xms")
        TLC-->>Service: stats["usage"] + stats["elapsed_ms"]
        deactivate TLC

        Service->>Extract: coerce_message_content_to_text(content)
        Extract-->>Service: text: str

        alt JSON expected (step-04)
            Service->>Extract: extract_json_object(text)
            Extract-->>Service: json_str: str
            Service->>Service: AppPlan.model_validate(json.loads(json_str))<br/>or IRBundle.model_validate(...)
            Service-->>Service: AppPlan | IRBundle
        else Code expected (step-05)
            Service->>Extract: extract_code_block(text)
            Extract-->>Service: tsx_code: str
            Service-->>Service: react_code stored in PageBundle
        end

    else Throttling / Error (4xx/5xx)
        Bedrock-->>Botocore: 429 ThrottlingException<br/>or 5xx ServiceException
        deactivate Bedrock
        Botocore->>Botocore: retry up to BEDROCK_MAX_ATTEMPTS=2
        Botocore-->>LCModel: raise ClientError
        deactivate Botocore
        LCModel-->>Service: raise ClientError

        Service->>TLC: exception propagates
        TLC->>TLC: stop spinner
        TLC->>TLC: logger.error("[LLM] FAILED | model=... attempt=.../... | Xms")
        TLC-->>Service: re-raise ClientError

        Service-->>Service: exception propagates up
        note over Service: RunLog.step() context manager<br/>catches: StepEvent.status = "failed"<br/>StepEvent.error = repr(exc)<br/>re-raises to orchestrator
    end
```
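The `timed_llm_call()` half of this sequence can be sketched as a context manager like the one below. This is an illustrative reconstruction, not the actual `shared/llm_logging.py` code: the spinner thread and the `RunLog.record_llm_call()` hook shown in the diagram are omitted, and `FakeAIMessage` is a stand-in for the `AIMessage` a real `model.invoke()` would return.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_llm_call(step: str, model: str):
    """Yield a mutable stats dict; on exit, record elapsed_ms and token usage."""
    stats = {"step": step, "model": model}
    t0 = time.perf_counter()
    try:
        yield stats
    finally:
        stats["elapsed_ms"] = (time.perf_counter() - t0) * 1000
        # Mirrors extract_token_usage(): read usage_metadata off the response.
        response = stats.get("response")
        usage = getattr(response, "usage_metadata", None) or {}
        stats["usage"] = {
            "input_tokens": usage.get("input_tokens", 0),
            "output_tokens": usage.get("output_tokens", 0),
        }

class FakeAIMessage:
    """Stand-in for langchain's AIMessage with token accounting attached."""
    usage_metadata = {"input_tokens": 120, "output_tokens": 45}

with timed_llm_call(step="step-04", model="claude") as stats:
    # Real code: stats["response"] = model.invoke(messages)
    stats["response"] = FakeAIMessage()

print(stats["usage"])  # {'input_tokens': 120, 'output_tokens': 45}
```

Because the usage extraction happens in `finally`, the elapsed time is recorded even when `model.invoke()` raises, matching the FAILED log line in the error branch above.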

---

## Sequence: Direct boto3 path (step-02, step-03)

```mermaid
sequenceDiagram
    participant PRD as prd_generator.py<br/>or backend_gen service
    participant Client as BedrockLLMClient<br/>shared/bedrock_raw_client.py
    participant Boto as boto3<br/>bedrock-runtime client
    participant Bedrock as AWS Bedrock<br/>InvokeModel API
    participant RunLog as RunLog<br/>shared/run_log.py

    PRD->>Client: llm.generate(prompt, images=None)
    activate Client

    Client->>Client: Build request body:<br/>{ anthropic_version, max_tokens: 32768,<br/>  temperature: 0.0, messages: [{role, content}] }

    Client->>Boto: invoke_model(modelId, body=json.dumps(body))
    activate Boto
    Boto->>Bedrock: POST /model/{modelId}/invoke<br/>body: JSON string
    activate Bedrock

    alt Success
        Bedrock-->>Boto: 200 OK<br/>body: StreamingBody
        deactivate Bedrock
        Boto-->>Client: response dict<br/>response["body"].read() → bytes
        deactivate Boto
        Client->>Client: response_body = json.loads(response["body"].read())<br/>elapsed_ms = (perf_counter() - t0) * 1000

        Client->>RunLog: get_active().record_llm_call(<br/>  model=BACKEND_MODEL_ID,<br/>  duration_ms, input_tokens,<br/>  output_tokens, label="backend_gen")
        Client-->>PRD: response_body["content"][0]["text"]<br/>type: str
        deactivate Client

    else Error
        Bedrock-->>Boto: 4xx/5xx
        Boto-->>Client: raise ClientError
        Client-->>PRD: re-raise ClientError
        note over PRD: propagates to RunLog.step() context<br/>status = "failed"
    end
```
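The request body and response parsing in this path can be sketched as below. This is an illustrative reconstruction rather than the exact `BedrockLLMClient` code; the helper names are made up, but the body shape matches the diagram and `"bedrock-2023-05-31"` is the standard `anthropic_version` value for Claude models on Bedrock.

```python
import json

def build_invoke_body(prompt: str, max_tokens: int = 32768, temperature: float = 0.0) -> str:
    """Serialize an Anthropic-style InvokeModel request body, as shown in the diagram."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    })

def parse_invoke_response(raw: bytes) -> str:
    """Extract the generated text from an InvokeModel response body."""
    response_body = json.loads(raw)
    return response_body["content"][0]["text"]

# The real call site would be roughly:
#   resp = boto3.client("bedrock-runtime").invoke_model(modelId=MODEL_ID,
#                                                       body=build_invoke_body(prompt))
#   text = parse_invoke_response(resp["body"].read())
fake_raw = json.dumps({"content": [{"type": "text", "text": "hello"}]}).encode()
print(parse_invoke_response(fake_raw))  # hello
```

Note that unlike the Converse API used by the LangChain path, `InvokeModel` returns a `StreamingBody` whose bytes must be read and JSON-decoded by the caller.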

---

## Credential Resolution Flow

```mermaid
sequenceDiagram
    participant Main as main.py startup
    participant Dotenv as python-dotenv
    participant Env as os.environ
    participant Builder as build_chat_model()<br/>bedrock_client.py
    participant Boto as boto3.Session

    Main->>Dotenv: load_dotenv(ROOT_DIR/.env, override=False)
    Dotenv->>Env: set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,<br/>AWS_SESSION_TOKEN, BEDROCK_AWS_REGION, etc.<br/>(only if not already set in shell)

    Builder->>Env: _first_non_empty("AWS_ACCESS_KEY_ID", "AWS_ACCESS_KEY")
    Env-->>Builder: access_key: str | None

    Builder->>Env: _first_non_empty("AWS_SECRET_ACCESS_KEY", "AWS_SECRET_KEY")
    Env-->>Builder: secret_key: str | None

    alt explicit keys found
        Builder->>Boto: boto3.Session(aws_access_key_id, aws_secret_access_key,<br/>aws_session_token, region_name)
    else profile found
        Builder->>Boto: boto3.Session(profile_name, region_name)
    else neither
        Builder->>Boto: boto3.Session(region_name)  # uses default chain
    end

    Builder->>Boto: session.get_credentials()
    alt credentials found
        Boto-->>Builder: Credentials object
        Builder->>Builder: create ChatBedrockConverse(model, region, temp,<br/>config=botocore.Config(timeouts, retries))
        Builder-->>Builder: return ChatBedrockConverse
    else no credentials
        Boto-->>Builder: None
        Builder->>Builder: raise RuntimeError("AWS credentials were not found.")
    end
```

---

## Related source files

- [shared/bedrock_client.py](../../shared/bedrock_client.py) — ChatBedrockConverse factory
- [shared/bedrock_raw_client.py](../../shared/bedrock_raw_client.py) — BedrockLLMClient
- [shared/llm_logging.py](../../shared/llm_logging.py) — timed_llm_call
- [shared/extractors.py](../../shared/extractors.py) — JSON/code extraction
- [shared/run_log.py](../../shared/run_log.py) — RunLog.record_llm_call
