
Servers

https://api.memnetai.com/

Default


Memories API

POST
v1/memories

This interface submits large-model dialogue context messages to be stored as long-term memory


📌 Attention

To stay as close as possible to how human long-term memory forms, the system performs a multidimensional, in-depth analysis of the dialogue context when memorizing. This includes extracting basic semantic information, constructing entities and their relationships, and running conflict detection and traceable change analysis against existing historical memory. The system also corrects inconsistencies in historical entity relationships and restructures or updates the relationship graph where necessary. Because this processing involves multiple stages of cognitive analysis and reasoning, the overall memorization process is relatively time-consuming.

Basic Introduction

  1. Memorization requires passing the dialogue context, whose structure extends the OpenAI dialogue context. Each message with the user role may carry a character field, which tells the memory the name of the speaker for that message. This lets the memory learn the user's name and also supports more complex multi-person dialogue scenarios (i.e. several different characters). If the character field is omitted, messages with the user role default to "user"
  2. The memory interface supports internationalization; configure it through the language parameter
  3. During memorization a memory summary is generated, written either from the memory's first-person perspective or from a third-person perspective; this is configured through isThirdPerson. Unless there are special circumstances, the first-person perspective is recommended
  4. Human memory formation is often accompanied by meta-information, such as where a memory occurred or in what scene. Such meta-information often reinforces memory, so the metadata parameter is provided to record any meta-information
  5. Memorization is a very time-consuming operation, so two calling modes are provided: synchronous and asynchronous. Synchronous calls begin memorizing immediately but accept fewer dialogue contexts per request; asynchronous calls queue and wait, but accept a large number of contexts at once
  6. A namespace logically isolates memory within the same project. Within a project, a memory's uniqueness is determined by the combination of "namespace + memory name", so even memories with the same name are treated as independent and mutually non-interfering as long as their namespaces differ. This mechanism suits complex applications where multiple users, agents, or scenarios coexist
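The points above can be sketched as a request builder. This is a minimal illustration, not official client code: the field names come from the request body schema below, while the function name, message contents, and namespace value are hypothetical, and the actual POST call (using the third-party requests package) is left as a comment.

```python
API_URL = "https://api.memnetai.com/v1/memories"

def build_memory_payload(memory_name, messages, namespace="default",
                         language="en", third_person=False, metadata="",
                         async_mode=False):
    """Assemble the request body for POST v1/memories (hypothetical helper)."""
    return {
        "memoryAgentName": memory_name,
        "messages": messages,
        "namespace": namespace,
        "language": language,
        "isThirdPerson": int(third_person),  # 0 = first person (recommended)
        "metadata": metadata,
        "asyncMode": int(async_mode),        # 1 = queue and memorize in bulk
    }

payload = build_memory_payload(
    "assistant-memory",
    [
        # A user message may carry a character field naming the speaker;
        # without it, user messages default to the character "user".
        {"role": "user", "content": "I moved to Berlin last month.",
         "character": "Alice"},
        {"role": "assistant", "content": "How are you finding the city?"},
    ],
    namespace="user-42",  # e.g. a user ID, to isolate this user's memory
)
# Sending the request (sketch, assuming the requests package is installed):
# requests.post(API_URL, json=payload,
#               headers={"Authorization": "Token <api-key>"})
```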

Precautions

  1. The service does not guarantee idempotency; repeated calls with the same dialogue context may produce multiple similar memories
  2. The maximum length of a single content in the context is about 192 words. Content exceeding this limit is intelligently split into multiple contents
  3. A synchronous-mode request takes at most 200 seconds
  4. Use namespaces sensibly for memory isolation. In B2C or multi-agent scenarios, it is recommended to use namespaces to isolate and manage the memory of different users, agents, or sub-scenarios. For example, a user ID or agent ID can serve as the namespace, allowing memory names to be safely reused within the same project while avoiding memory contamination across contexts

Best practices

  1. In an ordinary natural-language conversation scenario, it is recommended to memorize in units of 16 conversation contexts (i.e. trigger the memory function once every 16 chat messages to memorize the latest 16 conversation contexts)
  2. Because memorization itself takes time, it is recommended not to remove the corresponding dialogue context from the conversation immediately after submitting it as memory. Removing already-memorized historical context only after several further rounds of normal conversation yields a more natural and stable interaction
  3. Unless there are special circumstances, using the first person for memorization gives the best results
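The batching rule above can be sketched as a small buffer that triggers one memory submission per 16 contexts. MemoryBatcher is a hypothetical helper, not part of the API; the actual POST call is omitted, and note that memorized history stays in the live conversation rather than being pruned immediately.

```python
BATCH_SIZE = 16  # recommended memorization unit from the best practices

class MemoryBatcher:
    """Hypothetical client-side buffer: submit the latest 16 contexts
    as one memory each time 16 new messages have accumulated."""

    def __init__(self):
        self.history = []           # full live conversation context
        self.unmemorized = 0        # messages not yet submitted as memory
        self.submitted_batches = []

    def add(self, role, content):
        self.history.append({"role": role, "content": content})
        self.unmemorized += 1
        if self.unmemorized >= BATCH_SIZE:
            # In a real client this is where POST v1/memories would be called
            # with the latest 16 contexts; history itself is kept for now.
            self.submitted_batches.append(self.history[-BATCH_SIZE:])
            self.unmemorized = 0

batcher = MemoryBatcher()
for i in range(40):
    batcher.add("user" if i % 2 == 0 else "assistant", f"message {i}")
# 40 messages -> two batches of 16 submitted, 8 messages still pending
```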

Parameters

Header Parameters

Authorization

API key token

Type: string
Example: "Token <api-key>"

Request Body

application/json
JSON
{
  "memoryAgentName": "string",
  "messages": [
    {
      "role": "string",
      "content": "string",
      "character": "string"
    }
  ],
  "namespace": "string",
  "language": "string",
  "isThirdPerson": 0,
  "metadata": "string",
  "asyncMode": 0
}

Responses

application/json
JSON
{
  "code": "string",
  "msg": "string",
  "data": {
    "tip": "string",
    "usage": {
      "consumedPoints": "string",
      "consumedMemoryCount": "string",
      "consumedRecallCount": "string",
      "consumedThinkingCount": "string",
      "consumedDreamCount": "string",
      "consumedCommonMemoryWords": "string",
      "hasRemainingQuota": true
    },
    "memoriesInfo": {
      "newMemoryCount": 0,
      "summarys": [
        "string"
      ],
      "entityRelationship": {
        "nodes": [
          {
            "name": "string"
          }
        ],
        "links": [
          {
            "sourceEntityName": "string",
            "targetEntityName": "string",
            "relations": [
              {
                "relation": "string"
              }
            ]
          }
        ]
      },
      "historyMemorySummaryConflictAnalyses": [
        {
          "summary": "string",
          "statusBeforeChange": "string",
          "statusAfterChange": "string",
          "memoryChangeExplain": "string"
        }
      ],
      "historyEntityRelationCorrections": [
        {
          "sourceEntityName": "string",
          "targetEntityName": "string",
          "originalRelation": "string",
          "correctionReason": "string"
        }
      ]
    },
    "taskId": "string"
  }
}
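Reading the response can be sketched as follows. The field names (code, data, memoriesInfo, summarys) come from the schema above; the sample values and the extract_summaries helper are illustrative only, not actual API output.

```python
# Illustrative response dict shaped like the schema above (values are made up)
resp = {
    "code": "0",
    "msg": "success",
    "data": {
        "usage": {"hasRemainingQuota": True},
        "memoriesInfo": {
            "newMemoryCount": 2,
            "summarys": ["I moved to Berlin.", "I like cycling."],
        },
        "taskId": "",
    },
}

def extract_summaries(resp):
    """Pull the generated memory summaries, if any, out of a response.
    Note the field is spelled 'summarys' in the schema."""
    info = resp.get("data", {}).get("memoriesInfo") or {}
    return info.get("summarys", [])

summaries = extract_summaries(resp)
```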
