URI: /api/conversation/v1/exchange
CreateExchange
The CreateExchange API submits an input query with the defined parameters and returns scored, ranked retrieval outputs in response.
The endpoint uses the returned conversation_id to support multi-turn conversational exchange. Each call performs one synchronous round trip: the client sends a text or audio request to the service and receives a text and optional audio response.
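The multi-turn pattern above can be sketched in Python. The helper below only builds the JSON request body (field names follow the request sample in this document); the actual POST call, base URL, and auth token are hypothetical and appear only in comments.

```python
# Sketch of a two-turn exchange flow. build_exchange_request is an
# illustrative helper, not part of the API.

def build_exchange_request(raw_text, collection_id, conversation_id=None,
                           max_outputs=3, audio_output_enabled=False):
    """Build a CreateExchange request body. Omitting conversation_id
    starts a new conversation."""
    body = {
        "input": {
            "option": {
                "collection_id": collection_id,
                "max_outputs": max_outputs,
                "audio_output_enabled": audio_output_enabled,
            },
            "raw_text": raw_text,
        }
    }
    if conversation_id is not None:
        body["conversation_id"] = conversation_id
    return body

# First turn: no conversation_id, so the service starts a new conversation.
first = build_exchange_request(
    "What is ML?", "2204dpad-b3f8-46y7-h944-0000a8f4bd9d")

# Hypothetical POST (BASE_URL and token are assumptions):
# resp = requests.post(BASE_URL + "/api/conversation/v1/exchange",
#                      json=first,
#                      headers={"Authorization": f"Bearer {token}"})
# conv_id = resp.json()["data"]["conversation_id"]

# Second turn: echo back the conversation_id returned by the first response.
follow_up = build_exchange_request(
    "Who invented it?", "2204dpad-b3f8-46y7-h944-0000a8f4bd9d",
    conversation_id="2204dpad-ea7f-11a8-b7e3-999251427254")
```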
Input model
| Field Name | Type | Description | Example Value |
| --- | --- | --- | --- |
| `conversation_id` (optional) | string | The conversation_id value returned in the prior CreateExchangeResponse. Omitting conversation_id starts a new conversation on the request. | 2204dpad-ea7f-11a8-b7e3-999251427254 |
| `input` (required) | object | The input message provides input information to the Conversation system and specifies the type of input: text or audio. | |
| `audio_output_enabled` (optional) | boolean | If set to true, the system responds with audio output when available. When audio output cannot be generated, successful exchange responses include only text output. Default is false. | true |
| `knowledge_domain_id` (optional) | string | Version of a collection. If not defined, the active version of the collection is used automatically. Location: the collection overview tab, in the Advanced details, listed as the Active Domain ID. | 5572cd8-65e9-46y7-9733-d2d7d123zz5z |
| `max_outputs` (optional) | int32 | The number of outputs to return (min 3, max 10). Default is 3. | 3 |
| `collection_id` (required) | string | The knowledge collection, or index, that the exchange is performed against. Location: the collection overview tab, in the Advanced details. | 2204dpad-b3f8-46y7-h944-0000a8f4bd9d |
| `max_sentence_outputs` (optional) | int32 | The number of answer snippets returned per chunk. Default is 1; max is 10. | 1 |
| `content_group_ids` (optional) | array | The content groups used to return data. Content group IDs can be retrieved via the contents API. | ['ddd143cc-0582-46bc-875e-91b89ed99111', 'ddd143cc-0582-46bc-875e-91b89ed99112'] |
| `subject_ids` (optional) | array | The subjects used to return data. | |
| `filter.condition` (optional) | string | Filter conditions are written as strings. A single condition compares a field to a value; conditions can be combined with AND and OR operators. | (Star = 'Harrison Ford' AND (Creator = 'George Lucas' OR Director = 'Steven Spielberg')) |
| `raw_text` (required) | string | Input query to be used for retrieval. | What is ML? |
| `raw_audio` (optional) | object | | |
| `audio_encoding` (optional) | string | Audio encoding of the content in the message. Audio must be one-channel (mono). Default: "UNSPECIFIED_AUDIO_ENCODING". Enum: "UNSPECIFIED_AUDIO_ENCODING", "LINEAR16". | |
| `sample_rate_hertz` (optional) | int32 | Sample rate in Hertz of the audio data. The only value currently supported is 16000, which the client must state explicitly. | 16000 |
| `content` (optional) | string &lt;byte&gt; | The bytes of audio data, encoded as specified in audio_encoding. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64. | |
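Because raw_audio.content must be base64 in JSON and the service expects one-channel LINEAR16 audio at 16000 Hz, preparing the field can be sketched as follows (the encode_linear16 helper is illustrative, not part of the API):

```python
import base64
import math
import struct

SAMPLE_RATE = 16000  # the only sample rate currently supported

def encode_linear16(samples):
    """Pack mono float samples in [-1, 1] as little-endian 16-bit PCM,
    then base64-encode the bytes for the JSON `content` field."""
    pcm = struct.pack(
        "<%dh" % len(samples),
        *(int(max(-1.0, min(1.0, s)) * 32767) for s in samples))
    return base64.b64encode(pcm).decode("ascii")

# One second of a 440 Hz test tone as stand-in audio.
tone = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
        for t in range(SAMPLE_RATE)]
raw_audio = {
    "audio_encoding": "LINEAR16",
    "sample_rate_hertz": SAMPLE_RATE,
    "content": encode_linear16(tone),
}
```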
Request sample
{
"conversation_id": "string",
"input": {
"option": {
"audio_output_enabled": true,
"knowledge_domain_id": "string",
"max_outputs": 0,
"collection_id": "string",
"max_sentence_outputs": 0,
"max_concise_outputs": 0,
"content_group_ids": [
"string"
],
"subject_ids": [
"string"
],
"filter": {
"condition": "string"
},
"context": {}
},
"language_id": "string",
"raw_text": "string",
"raw_audio": {
"audio_encoding": "UNSPECIFIED_AUDIO_ENCODING",
"sample_rate_hertz": 0,
"content": "string"
},
"recommended_questions": "UNSPECIFIED"
}
}
Output model
| Field Name | Type | Description | Example Value |
| --- | --- | --- | --- |
| `metadata` | object | | |
| `uuid` | string | UUID returned in the resource response, representing the unique interaction with the API, that is, the response ID. | 05b67752-bfdc-4757-8175-0635d45588c9 |
| `create_time` | string | Time of the initial request. | 2023-05-18T22:44:24.388717Z |
| `update_time` | string | Time of the last update. A value of 0 indicates it has never been updated. | 2023-05-18T22:44:24.388717Z |
| `response_time_millis` | int32 | Time difference in milliseconds between when the request was received and when the response was generated, that is, the latency. | 20 |
| `data` | object | | |
| `exchange_id` | string | UUID generated by the entity creating the exchange response data resource. The exchange_id can be used in the "feedback" API, which reviews the information and provides a thumbs-up or thumbs-down rating. | 9904dpad-ea7f-11a8-b7e3-999251427254 |
| `conversation_id` | string | Conversation ID for subsequent calls; saved by the client and returned on the next request. | 2204dpad-ea7f-11a8-b7e3-999251427254 |
| `normalized_input` | object | | |
| `normalized_input_id` | string | UUID generated by the entity creating the normalized input resource. | 6525294f-c79b-4241-83d9-ce808271a0e7 |
| `raw_text` | string | The original input query (raw_text). | What is ML? |
| `understood_text` | string | The normalized form of the input query (raw_text), which is used to create the output. | What is machine learning? |
| `understood_subject_ids` | array of strings | IDs of subjects found in the input that filter the outputs. | ['9898cde2-8191-ab14-aaaa-0011a8f4bd9d', '9898cde2-8191-ab14-aaaa-0011a8f4bd9c'] |
| `suggested_raw_texts` | array of strings | Spelling corrections, if any, or other suggestions found for the input query (raw_text). | Define machine learning |
| `output` | array of objects | The exchange result. | |
| `output_id` | string | UUID generated by the entity creating the output resource. | 9904dpad-ea7f-11a8-b7e3-999251427254 |
| `text` | string | The answer snippet or reply displayed to the end user. | Class membership probability... |
| `summary_text` | string | The exchange reply intended to be translated to speech and played as audio for the end user. This is generally a shorter-form equivalent of text that works better as a spoken reply. If no shorter-form summarization is available, this is empty and text is synthesized into speech. | ML uses mathematical models of data to help computers learn without direct instruction. |
| `audio` | object | | |
| `audio_output_id` | string | Random UUID generated by the entity creating the audio output resource. | 9904dpad-ea7f-11a8-b7e3-999251427254 |
| `audio_encoding` | string | Audio encoding of the content in the message. Default: "UNSPECIFIED_AUDIO_ENCODING". Enum: "UNSPECIFIED_AUDIO_ENCODING", "LINEAR16", "MP3". | |
| `sample_rate_hertz` | int32 | The sample rate in Hertz of the audio data returned in content. | 16000 |
| `content` | string &lt;byte&gt; | The bytes of audio data, encoded as specified in audio_encoding. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64. | ^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==\|[A-Za-z0-9+/]{3}=)?$ |
| `attachments` | object | Additional information accompanying the conversation response to the request query. | |
| property name | object | Additional property. | |
| `content_type` | string | The content type of the additional information. The format of each type field is application/vnd.pryon.{content_type}. Commonly occurring types are listed below this table. | |
| `content` | string | The content of the additional information. | |
| `subject_ids` | array of strings | The subjects known to be associated with this output. | ['9898cde2-8191-ab14-aaaa-0011a8f4bd9d', '9898cde2-8191-ab14-aaaa-0011a8f4bd9c'] |
| `context` | object | Additional context for this output. | |
| `augmentation` | object | Predefined additional information provided with the source of this output. | |
| `user_id` | string | The identifier of the user that originally made the exchange. | aaaaaaaa-b3f8-4df1-aea4-c88gg39222da |
| `knowledge_domain_id` | string | UUID of the knowledge domain that supplied the exchange response. | 5572cd8-65e9-46y7-9733-d2d7d123zz5z |
| `collection_id` | string | UUID of the knowledge collection that supplied the exchange response. | 2204dpad-b3f8-46y7-h944-0000a8f4bd9d |

Commonly occurring content types:

- `text`: The answer snippet or the text corresponding to detected short spans within the answer_in_context. best_n is the same string as the text field and represents the ranking of the sentence within the AIC/chunk.
- `answer_in_context`: Chunk of text identified as most relevant to the input query. The smaller text answer/reply is extracted from this larger text.
- `answer_type`: Answer type or classification.
- `score`: An approximation of the strength of the returned answer and answer_in_context chunk, or the float score returned from a model.
- `level`: Configurable answer confidence levels to categorize outputs.
- `content_id`: content_id of the knowledge domain content where the answer is located. A client application may use the content API to get more information about the content.
- `content_display_name`: Display name of the source file that includes the best sentence answer.
- `content_source_location`: URL of the source content where the answer and chunk are located.
- `index`: Index into a custom data source.
- `related_questions`: Questions related to the input question. E.g. key: _best_, value: {content_type: application/vnd.pryon.related_questions}.
- `related_questions_score`: Score of a related question. E.g. key: rq_score_best_, value: {content_type: application/vnd.pryon.related_questions_score}.
- `start_page`: Page number where the answer_in_context (chunk) starts in a PDF.
- `start_page_bbox`: Bounding box of the entire page on which the answer is found.
- `end_page`: Page number where the answer_in_context ends in a PDF.
- `start_char_index`: Index of the first occurrence of the best_sentence or answer substring within the surrounding answer_in_context chunk.
- `end_char_index`: Index of the last occurrence of the best_sentence or answer substring within the surrounding answer_in_context chunk.
- `bbox`: Bounding box coordinates for highlighting the answer snippet (text field). The coordinates list four floating-point numbers representing the top-left and bottom-right corners of the box for the answer snippet on the source document image, formatted as 'X1, Y1, X2, Y2, PageNumber, Page Width, Page Height'.
- `flag`: Boolean flag with either true or false values.
- `texttrack_cue`: The relative time offset from the beginning of the video associated with the short answer. E.g. key: "texttrack_cue", value: {content_type: application/vnd.pryon.texttrack_cue, content: HH:MM:SS}.
- `hyperlinks`: A multi-hyperlink the user can follow by tapping. Hyperlinks are keyed by "hyperlink_1", "hyperlink_2", ...
- `final_query`: The version of the input query that matches the answer. Each n-best answer has a potentially different final query.
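The bbox content type packs its coordinates into a single comma-separated string. A minimal parsing sketch, assuming the 'X1, Y1, X2, Y2, PageNumber, Page Width, Page Height' layout described above (parse_bbox is an illustrative helper, not part of the API):

```python
def parse_bbox(value):
    """Parse a bbox attachment string of the form
    'X1, Y1, X2, Y2, PageNumber, Page Width, Page Height'."""
    parts = [p.strip() for p in value.split(",")]
    x1, y1, x2, y2 = (float(p) for p in parts[:4])
    page = int(float(parts[4]))
    width, height = float(parts[5]), float(parts[6])
    return {
        "top_left": (x1, y1),          # upper-left corner of the highlight
        "bottom_right": (x2, y2),      # lower-right corner of the highlight
        "page": page,                  # page the answer snippet appears on
        "page_size": (width, height),  # page dimensions for scaling
    }

# Hypothetical bbox value for a US-letter page rendered at 72 dpi.
box = parse_bbox("72.0, 120.5, 410.2, 160.0, 3, 612, 792")
```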
Response sample
{
"metadata": {
"uuid": "string",
"create_time": "2019-08-24T14:15:22Z",
"update_time": "2019-08-24T14:15:22Z",
"response_time_millis": 0
},
"data": {
"exchange_id": "string",
"conversation_id": "string",
"normalized_input": {
"normalized_input_id": "string",
"raw_text": "string",
"understood_text": "string",
"understood_subject_ids": [
"string"
],
"suggested_raw_texts": [
"string"
]
},
"output": [
{
"output_id": "string",
"text": "string",
"summary_text": "string",
"audio": {
"audio_output_id": "string",
"audio_encoding": "UNSPECIFIED_AUDIO_ENCODING",
"sample_rate_hertz": 0,
"content": "string"
},
"attachments": {
"property1": {
"content_type": "string",
"content": "string"
},
"property2": {
"content_type": "string",
"content": "string"
}
},
"subject_ids": [
"string"
],
"context": {
"augmentation": {}
}
}
]
},
"user_id": "string",
"knowledge_domain_id": "string",
"collection_id": "string"
}
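A response-handling sketch, assuming the response shape shown in the sample above. summarize_response is an illustrative helper that pulls each output's display text and decodes any base64 audio content:

```python
import base64

def summarize_response(resp):
    """Return (conversation_id, [(text, decoded audio bytes or None), ...])
    from a parsed CreateExchange response body."""
    data = resp["data"]
    results = []
    for out in data.get("output", []):
        audio = out.get("audio")
        pcm = None
        if audio and audio.get("content"):
            # JSON carries the audio bytes as base64.
            pcm = base64.b64decode(audio["content"])
        results.append((out["text"], pcm))
    # The conversation_id is echoed back on the next request.
    return data["conversation_id"], results

# Minimal stand-in response using values from this document's tables.
sample = {
    "data": {
        "conversation_id": "2204dpad-ea7f-11a8-b7e3-999251427254",
        "output": [
            {"text": "Class membership probability...",
             "audio": {"audio_encoding": "LINEAR16",
                       "sample_rate_hertz": 16000,
                       "content": base64.b64encode(b"\x00\x01").decode("ascii")}}
        ],
    }
}
conv_id, answers = summarize_response(sample)
```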