Suki Ambient APIs
The Suki Ambient APIs generate clinical notes from real-time conversations between providers and patients. Most operations use standard endpoints and return HTTP status codes. For streaming audio, the APIs use WebSocket endpoints for real-time transmission and event notifications.
What You Can Do
With the Suki Ambient API, you can:
• Send provider and encounter data to initiate a session.
• Retrieve the AI-generated clinical note at the end of the conversation.
• Access the complete conversation transcript for review or storage.
By integrating these capabilities, you can automate and enhance clinical documentation workflows in your healthcare application.
API Versions
The Suki Platform APIs use version v1, which is the stable version. All endpoints use the /api/v1/ prefix in the URL.
v1: The stable API version. Features in v1 are fully supported for the lifetime of the major version. If breaking changes are introduced, a new major version will be created and the existing version will be deprecated after a reasonable period. Non-breaking changes may be added without changing the major version.
Early Access: Some features are available in Early Access. These features are under development and may change. Early Access features use the v1 API version but may receive updates based on feedback.
All features in this documentation are available in v1. For versioning policies, backward compatibility, and migration strategies, see the API Reference Guidelines.
Postman Collection
Download the Postman collection below to test the Suki Ambient APIs.
Postman is an API client that displays requests and responses in structured formats. You can explore and integrate the Suki Ambient API through Postman.
Download Collection For Postman
API Capabilities
Personalization
Personalize the clinical note generation process for each provider.
Learn More →

PBC (Problem-Based Charting)
Ensure the generated clinical note reflects the most up-to-date clinical picture.
Learn More →

New APIs
Create Dictation Session
Initialize a new dictation session for real-time audio transcription.
View Endpoint →

Stream Audio To Dictation Session
Stream audio data to the dictation service via WebSocket connection.
View Endpoint →

End Dictation Session
Complete a dictation session and trigger transcript generation.
View Endpoint →

Before You Begin: Access and Credentials
You need partner credentials to use the Suki Ambient API.
Contact our partnership team to get your credentials. They will guide you through the onboarding process and provide what you need to get started.
Prerequisites
You must have an OAuth-compliant authentication system, JWT tokens with consistent user identifiers, and a publicly accessible JWKS endpoint (or Okta authorization server) for token validation. See Partner Onboarding and Partner Authentication for details.
Environments
Use https://sdp.suki.ai for production. For testing, use https://sdp.suki-stage.com (staging). Your partnership team will confirm which environment and credentials to use.
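A minimal sketch of selecting the documented base URL per environment; the dictionary and function names are illustrative, only the two hosts come from the documentation:

```python
# Base URLs documented for the Suki Ambient APIs.
SUKI_ENVIRONMENTS = {
    "production": "https://sdp.suki.ai",
    "staging": "https://sdp.suki-stage.com",
}

def base_url(environment: str) -> str:
    """Return the base URL for the given environment name."""
    try:
        return SUKI_ENVIRONMENTS[environment]
    except KeyError:
        raise ValueError(f"Unknown environment: {environment!r}")
```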
Ambient APIs Workflow
To integrate with the Suki Ambient APIs, you follow a session-based workflow: authenticate, create an ambient session, stream audio, end the session, and retrieve the generated note and transcript.
Getting Started With Suki Ambient APIs
Authenticate and Get Token
Call the /login endpoint with the following parameters:
1. partner_id: Your unique partner ID, which we provide to you securely offline.
2. partner_token: The user’s OAuth 2.0 ID token from your identity provider.
3. provider_id (Optional): Unique identifier for the provider. Required for Bearer type partners only.
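The request body for the login call can be assembled from the parameters above. The JSON shape and helper names here are assumptions, not the official client; only the parameter names and the sdp_suki_token header come from the documentation:

```python
from typing import Optional

def build_login_request(partner_id: str, partner_token: str,
                        provider_id: Optional[str] = None) -> dict:
    """Assemble the login request body from the documented parameters.
    The exact JSON shape is an assumption; check the Login API reference."""
    body = {"partner_id": partner_id, "partner_token": partner_token}
    if provider_id is not None:  # Required for Bearer type partners only.
        body["provider_id"] = provider_id
    return body

def auth_headers(suki_token: str) -> dict:
    """Header that must accompany all subsequent API calls."""
    return {"sdp_suki_token": suki_token}
```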
Suki validates the partner_token using your publicly exposed JWKS endpoint (or your Okta authorization server URL if you use Okta). On a successful request, the API returns a suki_token that you must include as the sdp_suki_token header for all subsequent API calls.

Handling an unregistered user
If the user is not yet registered in our system, the /login request will fail. In this case, you must first call the /api/v1/auth/register endpoint to create the user, then call /login again. You only need to call the register endpoint once for each new user. Refer to the Register API reference for the full specification.

Create Ambient Session
Note: In older versions of the API, the encounter_id parameter was named session_group_id. The old name will be deprecated in a future release.

Note: The multilingual parameter is deprecated. Multilingual support is now true by default for all Suki for Partner users.

1. encounter_id (Optional): A unique identifier for the patient encounter (also called a visit). This can be any alphanumeric string up to 255 characters long.
2. ambient_session_id (Optional): A UUID v4 to identify the session. If you do not provide one, Suki will generate it and include it in the response.
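The session parameters above can be assembled as follows. This is a sketch of the payload only; the create-session endpoint path is not shown in this section, so no HTTP call is included, and the function name is illustrative:

```python
import uuid
from typing import Optional

def build_session_params(encounter_id: Optional[str] = None,
                         ambient_session_id: Optional[str] = None) -> dict:
    """Both parameters are optional; Suki generates ambient_session_id if omitted."""
    params = {}
    if encounter_id is not None:
        if len(encounter_id) > 255:
            raise ValueError("encounter_id must be at most 255 characters")
        params["encounter_id"] = encounter_id
    # Supply your own UUID v4, or omit it and read the one Suki returns.
    params["ambient_session_id"] = ambient_session_id or str(uuid.uuid4())
    return params
```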
Seed Context (Optional)
• provider_specialty (Optional): The specialty of the provider (e.g., “cardiology”).
• sections (Optional): A list of the clinical sections you want summarized, such as “Assessment and Plan” or “Chief Complaint”. If you do not provide this, a default set of sections will be used.
• patient_info (Optional): An object containing date_of_birth (for accurate age calculation) and sex (MALE, FEMALE, OTHER, or UNKNOWN; defaults to UNKNOWN).
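A seed-context payload using the fields above might look like the sketch below. The field names come from the documentation; the date format and overall JSON shape are assumptions to verify against the API reference:

```python
# All fields are optional; field names follow the documentation above.
seed_context = {
    "provider_specialty": "cardiology",
    "sections": ["Chief Complaint", "Assessment and Plan"],
    "patient_info": {
        "date_of_birth": "1980-04-12",  # used for accurate age calculation
        "sex": "FEMALE",                # MALE, FEMALE, OTHER, or UNKNOWN
    },
}
```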
Stream Audio via WebSocket
Connect to the WebSocket endpoint wss://sdp.suki.ai/ws/stream. For authentication, use the Sec-WebSocket-Protocol header (browser clients) with the format SukiAmbientAuth,<ambient_session_id>,<sdp_suki_token>, or the sdp_suki_token and ambient_session_id headers (non-browser clients). See the Audio Streaming API reference for details.

Audio format: You must stream raw audio data with the following specifications:

• encoding: LINEAR16
• sample_rate: 16 kHz
• channel: Mono
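The browser-client subprotocol value follows the documented format exactly; the helper below is a small sketch of constructing it, with the audio spec captured alongside for reference:

```python
def websocket_subprotocol(ambient_session_id: str, sdp_suki_token: str) -> str:
    """Sec-WebSocket-Protocol value for browser clients, per the documented format."""
    return f"SukiAmbientAuth,{ambient_session_id},{sdp_suki_token}"

# Required raw-audio format for the stream (LINEAR16, 16 kHz, mono).
AUDIO_FORMAT = {"encoding": "LINEAR16", "sample_rate_hz": 16000, "channels": 1}
```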
Each WebSocket message is one of two types: AUDIO (carries the raw audio data in the required format) and EVENT (signifies a specific action on the stream). The supported events are:

• RESUME: Resumes a paused audio stream.
• CANCEL: Cancels the session. No note will be generated.
• ABORT: Aborts the stream due to an interruption. A note will be generated from the audio received so far. The session remains active and can be resumed.
• KEEP_ALIVE: Pings the server to keep the connection alive during periods of inactivity (e.g., when paused).
• EOF: Indicates the end of the file or stream.
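A sketch of serializing an EVENT message. The event names come from the list above, but the JSON envelope here is an assumption; see the Audio Streaming API reference for the actual wire format:

```python
import json

# Event names documented for the EVENT message type.
VALID_EVENTS = {"RESUME", "CANCEL", "ABORT", "KEEP_ALIVE", "EOF"}

def event_message(event: str) -> str:
    """Serialize an EVENT message. The {"type": ..., "event": ...} envelope
    is an assumed wire shape, not taken from the specification."""
    if event not in VALID_EVENTS:
        raise ValueError(f"Unsupported event: {event}")
    return json.dumps({"type": "EVENT", "event": event})
```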
Sending CANCEL terminates the session completely; no note will be generated. Sending ABORT ends the current stream, but the session remains active: Suki will generate a note from the audio received before the interruption, and you can resume streaming to the same ambient_session_id later.

End Session
Retrieve Generated Content
Webhook notification: When processing completes, Suki sends a session_summary_generated event to a webhook endpoint that you provide. Details on configuring your webhook and its authentication will be provided during the onboarding process.

Retrieving content manually: Alternatively, you can use the following endpoints to check the status and retrieve the session's content:

1. GET /api/v1/ambient/session/{ambient_session_id}/status: Check the processing status of a session.
2. GET /api/v1/ambient/session/{ambient_session_id}/content: Retrieve the final, structured clinical note.
3. GET /api/v1/ambient/session/{ambient_session_id}/transcript: Get the full conversation transcript, including timestamps for each part of the dialogue.
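The three retrieval URLs can be built from the documented paths; the helper below is illustrative, but the endpoint paths themselves come straight from the list above:

```python
BASE_URL = "https://sdp.suki.ai"  # use https://sdp.suki-stage.com for staging

def session_urls(ambient_session_id: str, base_url: str = BASE_URL) -> dict:
    """Documented endpoints for checking status and retrieving session content."""
    root = f"{base_url}/api/v1/ambient/session/{ambient_session_id}"
    return {
        "status": f"{root}/status",         # processing status
        "content": f"{root}/content",       # final structured clinical note
        "transcript": f"{root}/transcript", # full conversation transcript
    }
```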
Next Steps
After completing your first session, explore these resources to deepen your integration: