Memory system optimization

Improved how connections between memories are established
Re-enabled the memory-forgetting feature
This commit is contained in:
SengokuCola
2025-03-11 01:13:17 +08:00
parent 62523409d1
commit 354d6d0deb
7 changed files with 1422 additions and 104 deletions

View File

@@ -139,7 +139,7 @@
 ## 📌 Notes
-SengokuCola is a complete programming layman coding by leaning on Cursor, so a lot of the code is a mess; please bear with it
+SengokuCola has received a brain upgrade
 > ⚠️ **Warning**: The content this application generates comes from an AI model. Review AI-generated output carefully and do not use it for illegal purposes. AI-generated content does not represent the author's views or position.

docs/Jonathan R.md · new file · +20 lines
View File

@@ -0,0 +1,20 @@
In "Memory in neuroscience: rhetoric versus reality," Jonathan R. Wolpaw argues from neuroscience's sensorimotor hypothesis that the function of the entire nervous system is to link experience to appropriate behavior, rather than simply to store information.
Wolpaw, J. R. (2002). Memory in neuroscience: rhetoric versus reality. Behavioral and Cognitive Neuroscience Reviews, 1(2).
1. **Single-process theory**
   - The single-process theory holds that recognition memory rests mainly on a single factor: familiarity. Familiarity is an automatic, unconscious sense of a stimulus that lets us judge whether it has been encountered before without recalling any specific details.
   - For example, researchers have found that participants can correctly judge that a stimulus appeared before without recalling the specific study context, which is taken as familiarity at work.
2. **Dual-process theory**
   - The dual-process theory holds that recognition memory rests on two processes: recollection and familiarity. Recollection is the conscious retrieval of past experience, which brings back specific details and context; familiarity is the automatic, unconscious sense described above.
   - On this view, recollection and familiarity act together to let us judge whether a stimulus has appeared before. In the "remember/know" paradigm, for example, participants are asked whether their memory of a stimulus is based on recollection or on familiarity. They can reliably distinguish the two processes, which supports the dual-process theory.
1. **Neuron nodes and connections**: Borrowing from neural-network principles, each memory unit is treated as a neuron node. Nodes are linked by connections whose strength represents how closely the memories are related. In a morphological associative memory (MAM), nodes with similar morphological features are connected more strongly. For example, the memory nodes for "apple" and "orange" share morphological-semantic features (shape, both being fruit), so the connection between them is stronger than the one between "apple" and "car".
2. **Memory clustering and hierarchy**: Memories are clustered by morphological similarity into memory clusters: high similarity within a cluster, low similarity across clusters. On top of this, memories form a hierarchy in which higher-level nodes stand for more abstract, general concepts and lower-level nodes for concrete instances. For instance, "fruit" as a high-level node connects to the low-level nodes "apple", "orange", and "banana".
3. **Dynamic network updates**: As new memories arrive, the network adjusts. A new memory node connects to existing nodes according to its morphological features and shifts the strength of the affected connections. If the new memory closely matches an existing cluster, it joins that cluster; if it has distinctive features, it may seed a new cluster. For example, when the system learns a new fruit, "guava", it finds the most similar region of the network (the fruit cluster) based on guava's morphological and semantic features, creates the corresponding connections, and adjusts the surrounding connection strengths to accommodate the new memory (see the sketch after this list).
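These ideas map naturally onto a small graph structure. Below is a minimal illustrative sketch in Python using networkx; the concept names, strength values, the 0.6 threshold, and the `add_memory` helper are assumptions made for the example, not part of this repository:

```python
import networkx as nx

# Illustrative memory graph: nodes are concepts, edge 'strength' encodes relatedness
G = nx.Graph()

# High-level node connected to low-level instances (the "fruit" cluster)
for instance in ["apple", "orange", "banana"]:
    G.add_edge("fruit", instance, strength=5)  # hierarchy link

# Morphologically similar memories get a stronger connection
G.add_edge("apple", "orange", strength=4)  # both round, both fruit
G.add_edge("apple", "car", strength=1)     # weakly related

def add_memory(graph, concept, neighbors_with_similarity, threshold=0.6):
    """Dynamic update: attach a new concept to sufficiently similar nodes."""
    graph.add_node(concept)
    for neighbor, sim in neighbors_with_similarity:
        if sim >= threshold:  # similar enough to join the existing cluster
            graph.add_edge(concept, neighbor, strength=int(sim * 10))

# "guava" joins the fruit cluster via its (assumed) similarity scores
add_memory(G, "guava", [("fruit", 0.8), ("apple", 0.65), ("car", 0.1)])
print(list(G.neighbors("guava")))  # -> ['fruit', 'apple']
```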
- **Similarity association**: The theory holds that when two or more things are morphologically similar, they become associated in memory. Pears and apples, for instance, are similar in shape and in both being fruit, so seeing a pear readily calls up the memory of an apple through morphological association. Similarity association also helps us classify and understand new things: when we meet an unfamiliar fruit, we can match it against existing fruit memories to infer some of its properties.
- **Spatiotemporal association**: Besides similarity, MAM stresses association by co-occurrence in time or space. If two things frequently appear together, they become linked in memory. If, say, you always hear birdsong when you see flowers in the park, the morphological features of the two (the visual form of the flowers, the auditory form of the birdsong) become associated, and later hearing birdsong may call up the park's flowers (a minimal co-occurrence sketch follows).
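A small sketch of time-window co-occurrence, assuming observations arrive as (timestamp, concept) pairs and that a 60-second window counts as "the same time"; both choices are illustrative:

```python
from itertools import combinations

import networkx as nx

# (timestamp, concept) observations; flowers and birdsong co-occur in the park
events = [(0, "flower"), (5, "birdsong"), (100, "flower"), (103, "birdsong")]

G = nx.Graph()
WINDOW = 60  # seconds: events closer than this are treated as co-occurring

for (t1, a), (t2, b) in combinations(events, 2):
    if a != b and abs(t1 - t2) <= WINDOW:
        if G.has_edge(a, b):
            G[a][b]["strength"] += 1  # repeated co-occurrence strengthens the link
        else:
            G.add_edge(a, b, strength=1)

print(G["flower"]["birdsong"]["strength"])  # -> 2
```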

View File

@@ -121,9 +121,9 @@ async def build_memory_task():
 @scheduler.scheduled_job("interval", seconds=global_config.forget_memory_interval, id="forget_memory")
 async def forget_memory_task():
     """Executes memory building every 30 seconds"""
-    # print("\033[1;32m[Memory Forgetting]\033[0m Starting to forget memories...")
-    # await hippocampus.operation_forget_topic(percentage=0.1)
-    # print("\033[1;32m[Memory Forgetting]\033[0m Memory forgetting complete")
+    print("\033[1;32m[Memory Forgetting]\033[0m Starting to forget memories...")
+    await hippocampus.operation_forget_topic(percentage=0.1)
+    print("\033[1;32m[Memory Forgetting]\033[0m Memory forgetting complete")

 @scheduler.scheduled_job("interval", seconds=global_config.build_memory_interval + 10, id="merge_memory")
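For readers unfamiliar with the pattern being re-enabled here, this is APScheduler's interval trigger. A self-contained sketch under assumed values (the real code uses `global_config.forget_memory_interval` and `hippocampus.operation_forget_topic`):

```python
import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler

scheduler = AsyncIOScheduler()
forget_memory_interval = 30  # placeholder for global_config.forget_memory_interval

@scheduler.scheduled_job("interval", seconds=forget_memory_interval, id="forget_memory")
async def forget_memory_task():
    # Stand-in for: await hippocampus.operation_forget_topic(percentage=0.1)
    print("[Memory Forgetting] checking 10% of nodes and edges...")

async def main():
    scheduler.start()          # requires a running event loop
    await asyncio.sleep(65)    # let the job fire twice

asyncio.run(main())
```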

View File

@@ -203,7 +203,7 @@ class EmojiManager:
         try:
             prompt = f'This is the message {global_config.BOT_NICKNAME} is about to send:\n{text}\nTo pair it with a sticker, output what emotion the sticker should express and what feeling it should give. Be neither too brief nor too long, do not output any analysis of the message content, and output only the adjective part of "a ___ kind of feeling".'
-            content, _ = await self.llm_emotion_judge.generate_response_async(prompt)
+            content, _ = await self.llm_emotion_judge.generate_response_async(prompt, temperature=1.5)
             logger.info(f"Output description: {content}")
             return content
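The functional change is the sampling temperature: 1.5 trades run-to-run consistency for more varied emotion descriptions. A hedged sketch of the same knob on a generic OpenAI-style client (the client and model name are placeholders; `generate_response_async` above is this project's own wrapper, whose signature we only know from the diff):

```python
# Hypothetical illustration of the temperature parameter, not this repository's API.
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def describe_feeling(prompt: str, temperature: float) -> str:
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",      # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 1.5 -> noticeably more diverse wording than the default 1.0
    )
    return resp.choices[0].message.content
```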

View File

@@ -79,7 +79,7 @@ class KnowledgeLibrary:
             content = f.read()
             # Split into 600-character segments
-            segments = [content[i:i+600] for i in range(0, len(content), 600)]
+            segments = [content[i:i+600] for i in range(0, len(content), 300)]
             # Process each segment
             for segment in segments:
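Halving the step from 600 to 300 while keeping the 600-character window switches from disjoint chunks to 50%-overlapping ones, so text near a boundary lands in two segments. A toy illustration with a window of 6 and a step of 3:

```python
content = "abcdefghijkl"

disjoint = [content[i:i+6] for i in range(0, len(content), 6)]
overlap  = [content[i:i+6] for i in range(0, len(content), 3)]

print(disjoint)  # ['abcdef', 'ghijkl']
print(overlap)   # ['abcdef', 'defghi', 'ghijkl', 'jkl']
```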

View File

@@ -25,26 +25,46 @@ class Memory_graph:
         self.db = Database.get_instance()

     def connect_dot(self, concept1, concept2):
-        # If the edge already exists, increase its strength
+        # Avoid self-connections
+        if concept1 == concept2:
+            return
+
+        current_time = datetime.datetime.now().timestamp()
+
+        # If the edge already exists, increase its strength
         if self.G.has_edge(concept1, concept2):
             self.G[concept1][concept2]['strength'] = self.G[concept1][concept2].get('strength', 1) + 1
+            # Update the last-modified time
+            self.G[concept1][concept2]['last_modified'] = current_time
         else:
             # If it is a new edge, initialize strength to 1
-            self.G.add_edge(concept1, concept2, strength=1)
+            self.G.add_edge(concept1, concept2,
+                            strength=1,
+                            created_time=current_time,    # record creation time
+                            last_modified=current_time)   # record last-modified time

     def add_dot(self, concept, memory):
+        current_time = datetime.datetime.now().timestamp()
+
         if concept in self.G:
+            # If the node already exists, append the new memory to the existing list
             if 'memory_items' in self.G.nodes[concept]:
                 if not isinstance(self.G.nodes[concept]['memory_items'], list):
+                    # Convert it to a list if it is not one already
                     self.G.nodes[concept]['memory_items'] = [self.G.nodes[concept]['memory_items']]
                 self.G.nodes[concept]['memory_items'].append(memory)
+                # Update the last-modified time
+                self.G.nodes[concept]['last_modified'] = current_time
             else:
                 self.G.nodes[concept]['memory_items'] = [memory]
+                # The node existed without memory_items, so this is its first memory: set created_time
+                if 'created_time' not in self.G.nodes[concept]:
+                    self.G.nodes[concept]['created_time'] = current_time
+                self.G.nodes[concept]['last_modified'] = current_time
         else:
             # If it is a new node, create a fresh memory list
-            self.G.add_node(concept, memory_items=[memory])
+            self.G.add_node(concept,
+                            memory_items=[memory],
+                            created_time=current_time,    # record creation time
+                            last_modified=current_time)   # record last-modified time

     def get_dot(self, concept):
         # Check whether the node exists in the graph
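A usage sketch for the two methods above; it assumes a configured `Database` instance is available, since `Memory_graph.__init__` calls `Database.get_instance()`:

```python
# Assumed usage of the methods above (requires a configured Database instance)
mg = Memory_graph()

mg.add_dot("fruit", "bought apples at the market")  # new node: created_time is set
mg.add_dot("fruit", "oranges were on sale")         # existing node: memory appended
mg.connect_dot("fruit", "market")                   # new edge, strength=1
mg.connect_dot("fruit", "market")                   # existing edge: strength -> 2
mg.connect_dot("fruit", "fruit")                    # self-connection: ignored

print(mg.G.nodes["fruit"]["memory_items"])          # both memories
print(mg.G["fruit"]["market"]["strength"])          # -> 2
```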
@@ -191,15 +211,11 @@ class Hippocampus:
     async def memory_compress(self, messages: list, compress_rate=0.1):
         """Compress message records into memories
-
-        Args:
-            messages: list of message dicts, each with text and time fields
-            compress_rate: compression rate

         Returns:
-            set: a set of (topic, memory) tuples
+            tuple: (set of compressed memories, dict of similar topics)
         """
         if not messages:
-            return set()
+            return set(), {}

         # Merge the message text while preserving time information
         input_text = ""
@@ -246,12 +262,33 @@ class Hippocampus:
         # Wait for all tasks to finish
         compressed_memory = set()
+        similar_topics_dict = {}  # stores the list of similar topics for each topic
         for topic, task in tasks:
             response = await task
             if response:
                 compressed_memory.add((topic, response[0]))
+
+                # For each topic, look for similar topics that already exist
+                existing_topics = list(self.memory_graph.G.nodes())
+                similar_topics = []
+
+                for existing_topic in existing_topics:
+                    topic_words = set(jieba.cut(topic))
+                    existing_words = set(jieba.cut(existing_topic))
+
+                    all_words = topic_words | existing_words
+                    v1 = [1 if word in topic_words else 0 for word in all_words]
+                    v2 = [1 if word in existing_words else 0 for word in all_words]
+
+                    similarity = cosine_similarity(v1, v2)
+
+                    if similarity >= 0.6:
+                        similar_topics.append((existing_topic, similarity))
+
+                similar_topics.sort(key=lambda x: x[1], reverse=True)
+                similar_topics = similar_topics[:5]
+                similar_topics_dict[topic] = similar_topics

-        return compressed_memory
+        return compressed_memory, similar_topics_dict

     def calculate_topic_num(self, text, compress_rate):
         """Calculate the number of topics in a text"""
@@ -265,33 +302,40 @@ class Hippocampus:
         return topic_num

     async def operation_build_memory(self, chat_size=20):
-        # Sampling frequency for recent messages
-        time_frequency = {'near': 2, 'mid': 4, 'far': 2}
-        memory_sample = self.get_memory_sample(chat_size, time_frequency)
+        time_frequency = {'near': 3, 'mid': 8, 'far': 5}
+        memory_samples = self.get_memory_sample(chat_size, time_frequency)

-        for i, input_text in enumerate(memory_sample, 1):
-            # Progress visualization
+        for i, messages in enumerate(memory_samples, 1):
             all_topics = []
-            progress = (i / len(memory_sample)) * 100
+            # Progress visualization
+            progress = (i / len(memory_samples)) * 100
             bar_length = 30
-            filled_length = int(bar_length * i // len(memory_sample))
+            filled_length = int(bar_length * i // len(memory_samples))
             bar = '█' * filled_length + '-' * (bar_length - filled_length)
-            logger.debug(f"Progress: [{bar}] {progress:.1f}% ({i}/{len(memory_sample)})")
+            logger.debug(f"Progress: [{bar}] {progress:.1f}% ({i}/{len(memory_samples)})")

-            # Generate compressed memories, as (topic, memory) tuples
-            compressed_memory = set()
             compress_rate = 0.1
-            compressed_memory = await self.memory_compress(input_text, compress_rate)
-            logger.info(f"Number of compressed memories: {len(compressed_memory)}")
+            compressed_memory, similar_topics_dict = await self.memory_compress(messages, compress_rate)
+            logger.info(f"Number of compressed memories: {len(compressed_memory)}, déjà-vu topics: {len(similar_topics_dict)}")

-            # Add the memories to the graph
             for topic, memory in compressed_memory:
                 logger.info(f"Adding node: {topic}")
                 self.memory_graph.add_dot(topic, memory)
-                all_topics.append(topic)  # collect all topics
+                all_topics.append(topic)
+
+                # Connect to similar existing topics
+                if topic in similar_topics_dict:
+                    similar_topics = similar_topics_dict[topic]
+                    for similar_topic, similarity in similar_topics:
+                        if topic != similar_topic:
+                            strength = int(similarity * 10)
+                            logger.info(f"Connecting similar nodes: {topic} and {similar_topic} (strength: {strength})")
+                            self.memory_graph.G.add_edge(topic, similar_topic, strength=strength)
+
+            # Connect related topics from the same batch
             for i in range(len(all_topics)):
                 for j in range(i + 1, len(all_topics)):
-                    logger.info(f"Connecting nodes: {all_topics[i]} and {all_topics[j]}")
+                    logger.info(f"Connecting same-batch nodes: {all_topics[i]} and {all_topics[j]}")
                     self.memory_graph.connect_dot(all_topics[i], all_topics[j])

         self.sync_memory_to_db()
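The similarity-to-strength mapping in this hunk is a simple linear scale, for example:

```python
similarity = 0.73             # illustrative jieba/cosine similarity score
strength = int(similarity * 10)
print(strength)               # -> 7; pairs below the 0.6 threshold never get an edge
```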
@@ -302,7 +346,7 @@ class Hippocampus:
         db_nodes = list(self.memory_graph.db.db.graph_data.nodes.find())
         memory_nodes = list(self.memory_graph.G.nodes(data=True))

         # Convert the database nodes into a dict for quick lookup
         db_nodes_dict = {node['concept']: node for node in db_nodes}

         # Check and update nodes
@@ -313,13 +357,19 @@ class Hippocampus:
             # Compute the feature hash of the in-memory node
             memory_hash = self.calculate_node_hash(concept, memory_items)

+            # Get time information
+            created_time = data.get('created_time', datetime.datetime.now().timestamp())
+            last_modified = data.get('last_modified', datetime.datetime.now().timestamp())
+
             if concept not in db_nodes_dict:
                 # Node missing from the database; add it
                 node_data = {
                     'concept': concept,
                     'memory_items': memory_items,
-                    'hash': memory_hash
+                    'hash': memory_hash,
+                    'created_time': created_time,
+                    'last_modified': last_modified
                 }
                 self.memory_graph.db.db.graph_data.nodes.insert_one(node_data)
             else:
@@ -327,25 +377,21 @@ class Hippocampus:
                 db_node = db_nodes_dict[concept]
                 db_hash = db_node.get('hash', None)

                 # If the feature hash differs, update the node
                 if db_hash != memory_hash:
                     self.memory_graph.db.db.graph_data.nodes.update_one(
                         {'concept': concept},
                         {'$set': {
                             'memory_items': memory_items,
-                            'hash': memory_hash
+                            'hash': memory_hash,
+                            'created_time': created_time,
+                            'last_modified': last_modified
                         }}
                     )

-        # Check for and delete extra nodes left in the database
-        memory_concepts = set(node[0] for node in memory_nodes)
-        for db_node in db_nodes:
-            if db_node['concept'] not in memory_concepts:
-                self.memory_graph.db.db.graph_data.nodes.delete_one({'concept': db_node['concept']})
-
         # Handle edge information
         db_edges = list(self.memory_graph.db.db.graph_data.edges.find())
-        memory_edges = list(self.memory_graph.G.edges())
+        memory_edges = list(self.memory_graph.G.edges(data=True))

         # Build a dict of edge hashes
         db_edge_dict = {}
@@ -357,10 +403,14 @@ class Hippocampus:
             }

         # Check and update edges
-        for source, target in memory_edges:
+        for source, target, data in memory_edges:
             edge_hash = self.calculate_edge_hash(source, target)
             edge_key = (source, target)
-            strength = self.memory_graph.G[source][target].get('strength', 1)
+            strength = data.get('strength', 1)
+
+            # Get the edge's time information
+            created_time = data.get('created_time', datetime.datetime.now().timestamp())
+            last_modified = data.get('last_modified', datetime.datetime.now().timestamp())

             if edge_key not in db_edge_dict:
                 # Add a new edge
@@ -368,7 +418,9 @@ class Hippocampus:
                 'source': source,
                 'target': target,
                 'strength': strength,
-                'hash': edge_hash
+                'hash': edge_hash,
+                'created_time': created_time,
+                'last_modified': last_modified
             }
             self.memory_graph.db.db.graph_data.edges.insert_one(edge_data)
         else:
@@ -378,20 +430,12 @@ class Hippocampus:
                 {'source': source, 'target': target},
                 {'$set': {
                     'hash': edge_hash,
-                    'strength': strength
+                    'strength': strength,
+                    'created_time': created_time,
+                    'last_modified': last_modified
                 }}
             )

-        # Delete edges that no longer exist in memory
-        memory_edge_set = set(memory_edges)
-        for edge_key in db_edge_dict:
-            if edge_key not in memory_edge_set:
-                source, target = edge_key
-                self.memory_graph.db.db.graph_data.edges.delete_one({
-                    'source': source,
-                    'target': target
-                })
     def sync_memory_from_db(self):
         """Sync data from the database into the in-memory graph structure"""
         # Clear the current graph
@@ -405,61 +449,107 @@ class Hippocampus:
             # Ensure memory_items is a list
             if not isinstance(memory_items, list):
                 memory_items = [memory_items] if memory_items else []

+            # Get time information
+            created_time = node.get('created_time', datetime.datetime.now().timestamp())
+            last_modified = node.get('last_modified', datetime.datetime.now().timestamp())
+
             # Add the node to the graph
-            self.memory_graph.G.add_node(concept, memory_items=memory_items)
+            self.memory_graph.G.add_node(concept,
+                                         memory_items=memory_items,
+                                         created_time=created_time,
+                                         last_modified=last_modified)

         # Load all edges from the database
         edges = self.memory_graph.db.db.graph_data.edges.find()
         for edge in edges:
             source = edge['source']
             target = edge['target']
             strength = edge.get('strength', 1)  # get strength, defaulting to 1

+            # Get time information
+            created_time = edge.get('created_time', datetime.datetime.now().timestamp())
+            last_modified = edge.get('last_modified', datetime.datetime.now().timestamp())
+
             # Only add the edge if both the source and target nodes exist
             if source in self.memory_graph.G and target in self.memory_graph.G:
-                self.memory_graph.G.add_edge(source, target, strength=strength)
+                self.memory_graph.G.add_edge(source, target,
+                                             strength=strength,
+                                             created_time=created_time,
+                                             last_modified=last_modified)
     async def operation_forget_topic(self, percentage=0.1):
-        """Randomly select a proportion of the graph's nodes to check, and decide whether to forget them based on conditions"""
-        # Get all nodes
-        all_nodes = list(self.memory_graph.G.nodes())
-        # Work out how many nodes to check
-        check_count = max(1, int(len(all_nodes) * percentage))
-        # Randomly select nodes
-        nodes_to_check = random.sample(all_nodes, check_count)
-        forgotten_nodes = []
-
-        for node in nodes_to_check:
-            # Get the node's number of connections
-            connections = self.memory_graph.G.degree(node)
-            # Get the node's number of memory items
-            memory_items = self.memory_graph.G.nodes[node].get('memory_items', [])
-            if not isinstance(memory_items, list):
-                memory_items = [memory_items] if memory_items else []
-            content_count = len(memory_items)
-
-            # Check connection strength
-            weak_connections = True
-            if connections > 1:  # only check strength when there is more than one connection
-                for neighbor in self.memory_graph.G.neighbors(node):
-                    strength = self.memory_graph.G[node][neighbor].get('strength', 1)
-                    if strength > 2:
-                        weak_connections = False
-                        break
-
-            # If the forgetting conditions are met
-            if (connections <= 1 and weak_connections) or content_count <= 2:
-                removed_item = self.memory_graph.forget_topic(node)
-                if removed_item:
-                    forgotten_nodes.append((node, removed_item))
-                    logger.debug(f"Forgot memory of node {node}: {removed_item}")
-
-        # Sync to the database
-        if forgotten_nodes:
-            self.sync_memory_to_db()
-            logger.debug(f"Forgetting finished; forgot memories from {len(forgotten_nodes)} nodes")
-        else:
-            logger.debug("No nodes met the forgetting conditions this round")
+        """Randomly select a proportion of nodes and edges to check; forget them based on time conditions"""
+        all_nodes = list(self.memory_graph.G.nodes())
+        all_edges = list(self.memory_graph.G.edges())
+
+        check_nodes_count = max(1, int(len(all_nodes) * percentage))
+        check_edges_count = max(1, int(len(all_edges) * percentage))
+
+        nodes_to_check = random.sample(all_nodes, check_nodes_count)
+        edges_to_check = random.sample(all_edges, check_edges_count)
+
+        edge_changes = {'weakened': 0, 'removed': 0}
+        node_changes = {'reduced': 0, 'removed': 0}
+
+        current_time = datetime.datetime.now().timestamp()
+
+        # Check and forget connections
+        logger.info("Checking connections...")
+        for source, target in edges_to_check:
+            edge_data = self.memory_graph.G[source][target]
+            last_modified = edge_data.get('last_modified')
+            # print(source, target)
+            # print(f"float(last_modified):{float(last_modified)}")
+            # print(f"current_time:{current_time}")
+            # print(f"current_time - last_modified:{current_time - last_modified}")
+            if current_time - last_modified > 3600*24:  # test
+                current_strength = edge_data.get('strength', 1)
+                new_strength = current_strength - 1
+
+                if new_strength <= 0:
+                    self.memory_graph.G.remove_edge(source, target)
+                    edge_changes['removed'] += 1
+                    logger.info(f"\033[1;31m[Connection removed]\033[0m {source} - {target}")
+                else:
+                    edge_data['strength'] = new_strength
+                    edge_data['last_modified'] = current_time
+                    edge_changes['weakened'] += 1
+                    logger.info(f"\033[1;34m[Connection weakened]\033[0m {source} - {target} (strength: {current_strength} -> {new_strength})")
+
+        # Check and forget topics
+        logger.info("Checking nodes...")
+        for node in nodes_to_check:
+            node_data = self.memory_graph.G.nodes[node]
+            last_modified = node_data.get('last_modified', current_time)
+
+            if current_time - last_modified > 3600*24:  # test
+                memory_items = node_data.get('memory_items', [])
+                if not isinstance(memory_items, list):
+                    memory_items = [memory_items] if memory_items else []
+
+                if memory_items:
+                    current_count = len(memory_items)
+                    removed_item = random.choice(memory_items)
+                    memory_items.remove(removed_item)
+
+                    if memory_items:
+                        self.memory_graph.G.nodes[node]['memory_items'] = memory_items
+                        self.memory_graph.G.nodes[node]['last_modified'] = current_time
+                        node_changes['reduced'] += 1
+                        logger.info(f"\033[1;33m[Memory reduced]\033[0m {node} (memory count: {current_count} -> {len(memory_items)})")
+                    else:
+                        self.memory_graph.G.remove_node(node)
+                        node_changes['removed'] += 1
+                        logger.info(f"\033[1;31m[Node removed]\033[0m {node}")
+
+        if any(count > 0 for count in edge_changes.values()) or any(count > 0 for count in node_changes.values()):
+            self.sync_memory_to_db()
+            logger.info("\nForgetting statistics:")
+            logger.info(f"Connection changes: {edge_changes['weakened']} weakened, {edge_changes['removed']} removed")
+            logger.info(f"Node changes: {node_changes['reduced']} with memories reduced, {node_changes['removed']} removed")
+        else:
+            logger.info("\nNo nodes or connections met the forgetting conditions this round")
     async def merge_memory(self, topic):
         """
@@ -486,7 +576,7 @@ class Hippocampus:
         logger.debug(f"Selected memories:\n{merged_text}")

         # Use memory_compress to generate the new compressed memory
-        compressed_memories = await self.memory_compress(selected_memories, 0.1)
+        compressed_memories, _ = await self.memory_compress(selected_memories, 0.1)

         # Remove the selected memories from the original memory list
         for memory in selected_memories:

File diff suppressed because it is too large