diff --git a/.gitignore b/.gitignore
index b9e101e40..34c7b1e28 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,6 +6,10 @@ log/
logs/
/test
/src/test
+nonebot-maibot-adapter/
+*.zip
+run.bat
+run.py
message_queue_content.txt
message_queue_content.bat
message_queue_window.bat
diff --git a/README.md b/README.md
index 572c76ad8..bf9649315 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# 麦麦!MaiMBot-MaiCore (编辑中)
+# 麦麦!MaiCore-MaiMBot (编辑中)
@@ -13,10 +13,12 @@
**🍔MaiCore是一个基于大语言模型的可交互智能体**
- LLM 提供对话能力
+- 动态Prompt构建器
+- 实时的思维系统
- MongoDB 提供数据持久化支持
- 可扩展,可支持多种平台和多种功能
-**最新版本: v0.6.0** ([查看更新日志](changelog.md))
+**最新版本: v0.6.0** ([查看更新日志](changelogs/changelog.md))
> [!WARNING]
> 此版本MaiBot将基于MaiCore运行,不再依赖于nonebot相关组件运行。
> MaiBot将通过nonebot的插件与nonebot建立联系,然后nonebot与QQ建立联系,实现MaiBot与QQ的交互
@@ -58,46 +60,57 @@
### 最新版本部署教程(MaiCore版本)
- [🚀 最新版本部署教程](https://docs.mai-mai.org/manual/deployment/refactor_deploy.html) - 基于MaiCore的新版本部署方式(与旧版本不兼容)
-
## 🎯 功能介绍
### 💬 聊天功能
-
+- 提供思维流(心流)聊天和推理聊天两种对话逻辑
- 支持关键词检索主动发言:对消息的话题topic进行识别,如果检测到麦麦存储过的话题就会主动进行发言
- 支持bot名字呼唤发言:检测到"麦麦"会主动发言,可配置
- 支持多模型,多厂商自定义配置
- 动态的prompt构建器,更拟人
- 支持图片,转发消息,回复消息的识别
-- 支持私聊功能,包括消息处理和回复
+- 支持私聊功能,可使用PFC模式的有目的多轮对话(实验性)
-### 🧠 思维流系统(实验性功能)
-- 思维流能够生成实时想法,增加回复的拟人性
+### 🧠 思维流系统
+- 思维流能够在回复前后进行思考,生成实时想法
+- 思维流自动启停机制,提升资源利用效率
- 思维流与日程系统联动,实现动态日程生成
-### 🧠 记忆系统
+### 🧠 记忆系统 2.0
+- 优化记忆抽取策略和prompt结构
+- 改进海马体记忆提取机制,提升自然度
- 对聊天记录进行概括存储,在需要时调用
-### 😊 表情包功能
+### 😊 表情包系统
- 支持根据发言内容发送对应情绪的表情包
+- 支持识别和处理gif表情包
- 会自动偷群友的表情包
- 表情包审查功能
- 表情包文件完整性自动检查
+- 自动清理缓存图片
-### 📅 日程功能
-- 麦麦会自动生成一天的日程,实现更拟人的回复
-- 支持动态日程生成
-- 优化日程文本解析功能
+### 📅 日程系统
+- 动态更新的日程生成
+- 可自定义想象力程度
+- 与聊天情况交互(思维流模式下)
-### 👥 关系系统
-- 针对每个用户创建"关系",可以对不同用户进行个性化回复
+### 👥 关系系统 2.0
+- 优化关系管理系统,适用于新版本
+- 提供更丰富的关系接口
+- 针对每个用户创建"关系",实现个性化回复
### 📊 统计系统
-- 详细统计系统
-- LLM使用统计
+- 详细的使用数据统计
+- LLM调用统计
+- 在控制台显示统计信息
### 🔧 系统功能
- 支持优雅的shutdown机制
- 自动保存功能,定期保存聊天记录和关系数据
+- 完善的异常处理机制
+- 可自定义时区设置
+- 优化的日志输出格式
+- 配置自动更新功能
## 开发计划TODO:LIST
diff --git a/changelogs/changelog.md b/changelogs/changelog.md
index d9759ea11..6b9898b5c 100644
--- a/changelogs/changelog.md
+++ b/changelogs/changelog.md
@@ -1,26 +1,24 @@
# Changelog
-## [0.6.0] - 2025-3-30
+## [0.6.0] - 2025-4-4
+
+### 摘要
+- MaiBot 0.6.0 重磅升级! 核心重构为独立智能体MaiCore,新增思维流对话系统,支持拟真思考过程。记忆与关系系统2.0让交互更自然,动态日程引擎实现智能调整。优化部署流程,修复30+稳定性问题,隐私政策同步更新,推荐所有用户升级体验全新AI交互!(V3激烈生成)
+
### 🌟 核心功能增强
#### 架构重构
- 将MaiBot重构为MaiCore独立智能体
- 移除NoneBot相关代码,改为插件方式与NoneBot对接
-- 精简代码结构,优化文件夹组织
-- 新增详细统计系统
#### 思维流系统
-- 新增思维流作为实验功能
-- 思维流大核+小核架构
-- 思维流回复意愿模式
-- 优化思维流自动启停机制,提升资源利用效率
+- 提供两种聊天逻辑,思维流(心流)聊天(ThinkFlowChat)和推理聊天(ReasoningChat)
+- 思维流聊天能够在回复前后进行思考
+- 思维流自动启停机制,提升资源利用效率
- 思维流与日程系统联动,实现动态日程生成
-- 优化心流运行逻辑和思考时间计算
-- 添加错误检测机制
-- 修复心流无法观察群消息的问题
#### 回复系统
-- 优化回复逻辑,添加回复前思考机制
-- 移除推理模型在回复中的使用
+- 更改了回复引用的逻辑,从基于时间改为基于新消息
+- 提供私聊的PFC模式,可以进行有目的、自由的多轮对话(实验性)
#### 记忆系统优化
- 优化记忆抽取策略
@@ -28,41 +26,33 @@
- 改进海马体记忆提取机制,提升自然度
#### 关系系统优化
-- 修复relationship_value类型错误
-- 优化关系管理系统
-- 改进关系值计算方式
+- 优化关系管理系统,适用于新版本
+- 改进关系值计算方式,提供更丰富的关系接口
+
+#### 表情包系统
+- 可以识别gif表情包
+- 表情包增加存储上限
+- 自动清理缓存图片
+
+#### 日程系统优化
+- 日程现在动态更新
+- 日程可以自定义想象力程度
+- 日程会与聊天情况交互(思维流模式下)
### 💻 系统架构优化
#### 配置系统改进
-- 优化配置文件整理
-- 新增分割器功能
-- 新增表情惩罚系数自定义
+- 新增更多配置项
- 修复配置文件保存问题
-- 优化配置项管理
-- 新增配置项:
- - `schedule`: 日程表生成功能配置
- - `response_spliter`: 回复分割控制
- - `experimental`: 实验性功能开关
- - `llm_observation`和`llm_sub_heartflow`: 思维流模型配置
- - `llm_heartflow`: 思维流核心模型配置
- - `prompt_schedule_gen`: 日程生成提示词配置
- - `memory_ban_words`: 记忆过滤词配置
- 优化配置结构:
- 调整模型配置组织结构
- 优化配置项默认值
- 调整配置项顺序
- 移除冗余配置
-#### WebUI改进
-- 新增回复意愿模式选择功能
-- 优化WebUI界面
-- 优化WebUI配置保存机制
-
#### 部署支持扩展
- 优化Docker构建流程
- 完善Windows脚本支持
- 优化Linux一键安装脚本
-- 新增macOS教程支持
### 🐛 问题修复
#### 功能稳定性
@@ -75,43 +65,24 @@
- 修复自定义API提供商识别问题
- 修复人格设置保存问题
- 修复EULA和隐私政策编码问题
-- 修复cfg变量引用问题
-
-#### 性能优化
-- 提高topic提取效率
-- 优化logger输出格式
-- 优化cmd清理功能
-- 改进LLM使用统计
-- 优化记忆处理效率
### 📚 文档更新
- 更新README.md内容
-- 添加macOS部署教程
- 优化文档结构
- 更新EULA和隐私政策
- 完善部署文档
### 🔧 其他改进
-- 新增神秘小测验功能
-- 新增人格测评模型
+- 新增详细统计系统
- 优化表情包审查功能
- 改进消息转发处理
- 优化代码风格和格式
- 完善异常处理机制
+- 可以自定义时区
- 优化日志输出格式
- 版本硬编码,新增配置自动更新功能
-- 更新日程生成器功能
- 优化了统计信息,会在控制台显示统计信息
-### 主要改进方向
-1. 完善思维流系统功能
-2. 优化记忆系统效率
-3. 改进关系系统稳定性
-4. 提升配置系统可用性
-5. 加强WebUI功能
-6. 完善部署文档
-
-
## [0.5.15] - 2025-3-17
### 🌟 核心功能增强
diff --git a/changelogs/changelog_dev.md b/changelogs/changelog_dev.md
index ab211c4b9..acfb7e03f 100644
--- a/changelogs/changelog_dev.md
+++ b/changelogs/changelog_dev.md
@@ -1,8 +1,18 @@
这里放置了测试版本的细节更新
+## [test-0.6.0-snapshot-9] - 2025-4-4
+- 可以识别gif表情包
+
+## [test-0.6.0-snapshot-8] - 2025-4-3
+- 修复了表情包的注册,获取和发送逻辑
+- 表情包增加存储上限
+- 更改了回复引用的逻辑,从基于时间改为基于新消息
+- 增加了调试信息
+- 自动清理缓存图片
+- 修复并重启了关系系统
## [test-0.6.0-snapshot-7] - 2025-4-2
- 修改版本号命名:test-前缀为测试版,无前缀为正式版
-- 提供私聊的PFC模式
+- 提供私聊的PFC模式,可以进行有目的、自由的多轮对话
## [0.6.0-mmc-4] - 2025-4-1
- 提供两种聊天逻辑,思维流聊天(ThinkFlowChat)和推理聊天(ReasoningChat)
diff --git a/docker-compose.yml b/docker-compose.yml
index 367d28cdd..8062b358d 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -8,9 +8,10 @@ services:
ports:
- "18002:18002"
volumes:
- - ./docker-config/adapters/plugins:/adapters/src/plugins # 持久化adapters
+ - ./docker-config/adapters/config.py:/adapters/src/plugins/nonebot_plugin_maibot_adapters/config.py # 持久化adapters配置文件
- ./docker-config/adapters/.env:/adapters/.env # 持久化adapters配置文件
- ./data/qq:/app/.config/QQ # 持久化QQ本体并同步qq表情和图片到adapters
+ - ./data/MaiMBot:/adapters/data
restart: always
depends_on:
- mongodb
@@ -61,7 +62,7 @@ services:
volumes:
- ./docker-config/napcat:/app/napcat/config # 持久化napcat配置文件
- ./data/qq:/app/.config/QQ # 持久化QQ本体并同步qq表情和图片到adapters
- - ./data/MaiMBot:/MaiMBot/data # NapCat 和 NoneBot 共享此卷,否则发送图片会有问题
+ - ./data/MaiMBot:/adapters/data # NapCat 和 NoneBot 共享此卷,否则发送图片会有问题
container_name: maim-bot-napcat
restart: always
image: mlikiowa/napcat-docker:latest
diff --git a/requirements.txt b/requirements.txt
index cea511f10..ada41d290 100644
Binary files a/requirements.txt and b/requirements.txt differ
diff --git a/src/heart_flow/sub_heartflow.py b/src/heart_flow/sub_heartflow.py
index 5aa69a6f6..fcbe9332f 100644
--- a/src/heart_flow/sub_heartflow.py
+++ b/src/heart_flow/sub_heartflow.py
@@ -192,7 +192,7 @@ class SubHeartflow:
logger.info(f"麦麦的思考前脑内状态:{self.current_mind}")
async def do_thinking_after_reply(self, reply_content, chat_talking_prompt):
- print("麦麦回复之后脑袋转起来了")
+ # print("麦麦回复之后脑袋转起来了")
current_thinking_info = self.current_mind
mood_info = self.current_state.mood
diff --git a/src/main.py b/src/main.py
index 9ab75f46a..14dc04355 100644
--- a/src/main.py
+++ b/src/main.py
@@ -107,8 +107,8 @@ class MainSystem:
self.forget_memory_task(),
self.print_mood_task(),
self.remove_recalled_message_task(),
- emoji_manager.start_periodic_check(),
- emoji_manager.start_periodic_register(),
+ emoji_manager.start_periodic_check_register(),
+ # emoji_manager.start_periodic_register(),
self.app.run(),
]
await asyncio.gather(*tasks)
diff --git a/src/plugins/chat/bot.py b/src/plugins/chat/bot.py
index 9046198c9..68afd2e76 100644
--- a/src/plugins/chat/bot.py
+++ b/src/plugins/chat/bot.py
@@ -75,25 +75,48 @@ class ChatBot:
- 表情包处理
- 性能计时
"""
-
- message = MessageRecv(message_data)
- groupinfo = message.message_info.group_info
+ try:
+ message = MessageRecv(message_data)
+ groupinfo = message.message_info.group_info
+ logger.debug(f"处理消息:{str(message_data)[:50]}...")
- if global_config.enable_pfc_chatting:
- try:
+ if global_config.enable_pfc_chatting:
+ try:
+ if groupinfo is None and global_config.enable_friend_chat:
+ userinfo = message.message_info.user_info
+ messageinfo = message.message_info
+ # 创建聊天流
+ chat = await chat_manager.get_or_create_stream(
+ platform=messageinfo.platform,
+ user_info=userinfo,
+ group_info=groupinfo,
+ )
+ message.update_chat_stream(chat)
+ await self.only_process_chat.process_message(message)
+ await self._create_PFC_chat(message)
+ else:
+ if groupinfo.group_id in global_config.talk_allowed_groups:
+ logger.debug(f"开始群聊模式{message_data}")
+ if global_config.response_mode == "heart_flow":
+ await self.think_flow_chat.process_message(message_data)
+ elif global_config.response_mode == "reasoning":
+ logger.debug(f"开始推理模式{message_data}")
+ await self.reasoning_chat.process_message(message_data)
+ else:
+ logger.error(f"未知的回复模式,请检查配置文件!!: {global_config.response_mode}")
+ except Exception as e:
+ logger.error(f"处理PFC消息失败: {e}")
+ else:
if groupinfo is None and global_config.enable_friend_chat:
- userinfo = message.message_info.user_info
- messageinfo = message.message_info
- # 创建聊天流
- chat = await chat_manager.get_or_create_stream(
- platform=messageinfo.platform,
- user_info=userinfo,
- group_info=groupinfo,
- )
- message.update_chat_stream(chat)
- await self.only_process_chat.process_message(message)
- await self._create_PFC_chat(message)
- else:
+ # 私聊处理流程
+ # await self._handle_private_chat(message)
+ if global_config.response_mode == "heart_flow":
+ await self.think_flow_chat.process_message(message_data)
+ elif global_config.response_mode == "reasoning":
+ await self.reasoning_chat.process_message(message_data)
+ else:
+ logger.error(f"未知的回复模式,请检查配置文件!!: {global_config.response_mode}")
+ else: # 群聊处理
if groupinfo.group_id in global_config.talk_allowed_groups:
if global_config.response_mode == "heart_flow":
await self.think_flow_chat.process_message(message_data)
@@ -101,26 +124,8 @@ class ChatBot:
await self.reasoning_chat.process_message(message_data)
else:
logger.error(f"未知的回复模式,请检查配置文件!!: {global_config.response_mode}")
- except Exception as e:
- logger.error(f"处理PFC消息失败: {e}")
- else:
- if groupinfo is None and global_config.enable_friend_chat:
- # 私聊处理流程
- # await self._handle_private_chat(message)
- if global_config.response_mode == "heart_flow":
- await self.think_flow_chat.process_message(message_data)
- elif global_config.response_mode == "reasoning":
- await self.reasoning_chat.process_message(message_data)
- else:
- logger.error(f"未知的回复模式,请检查配置文件!!: {global_config.response_mode}")
- else: # 群聊处理
- if groupinfo.group_id in global_config.talk_allowed_groups:
- if global_config.response_mode == "heart_flow":
- await self.think_flow_chat.process_message(message_data)
- elif global_config.response_mode == "reasoning":
- await self.reasoning_chat.process_message(message_data)
- else:
- logger.error(f"未知的回复模式,请检查配置文件!!: {global_config.response_mode}")
+ except Exception as e:
+ logger.error(f"预处理消息失败: {e}")
# 创建全局ChatBot实例
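
The branching above routes each message by three switches: `enable_pfc_chatting`, private vs. group chat, and `response_mode`. Below is a minimal sketch of that decision as a standalone pure function; the helper name and return labels are illustrative only and not part of the project API, and it assumes the mode is one of the two configured values:

```python
from typing import Optional, Set


def select_route(
    group_id: Optional[int],
    enable_pfc_chatting: bool,
    enable_friend_chat: bool,
    talk_allowed_groups: Set[int],
    response_mode: str,  # "heart_flow" or "reasoning"
) -> str:
    """Mirror of the dispatch in ChatBot.message_process, reduced to a pure function."""
    if group_id is None:  # private message
        if not enable_friend_chat:
            return "ignored"
        # With PFC enabled, private chats go through only_process_chat plus a PFC conversation.
        return "pfc" if enable_pfc_chatting else response_mode
    # Group message: only configured groups are answered, using the configured mode.
    return response_mode if group_id in talk_allowed_groups else "ignored"


assert select_route(None, True, True, set(), "heart_flow") == "pfc"
assert select_route(12345, True, True, {12345}, "reasoning") == "reasoning"
assert select_route(67890, False, True, {12345}, "heart_flow") == "ignored"
```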
diff --git a/src/plugins/chat/emoji_manager.py b/src/plugins/chat/emoji_manager.py
index 279dfb464..6121124c5 100644
--- a/src/plugins/chat/emoji_manager.py
+++ b/src/plugins/chat/emoji_manager.py
@@ -39,12 +39,28 @@ class EmojiManager:
model=global_config.llm_emotion_judge, max_tokens=600, temperature=0.8, request_type="emoji"
) # 更高的温度,更少的token(后续可以根据情绪来调整温度)
+ self.emoji_num = 0
+ self.emoji_num_max = global_config.max_emoji_num
+ self.emoji_num_max_reach_deletion = global_config.max_reach_deletion
+
logger.info("启动表情包管理器")
def _ensure_emoji_dir(self):
"""确保表情存储目录存在"""
os.makedirs(self.EMOJI_DIR, exist_ok=True)
+ def _update_emoji_count(self):
+ """更新表情包数量统计
+
+ 检查数据库中的表情包数量并更新到 self.emoji_num
+ """
+ try:
+ self._ensure_db()
+ self.emoji_num = db.emoji.count_documents({})
+ logger.info(f"[统计] 当前表情包数量: {self.emoji_num}")
+ except Exception as e:
+ logger.error(f"[错误] 更新表情包数量失败: {str(e)}")
+
def initialize(self):
"""初始化数据库连接和表情目录"""
if not self._initialized:
@@ -52,6 +68,8 @@ class EmojiManager:
self._ensure_emoji_collection()
self._ensure_emoji_dir()
self._initialized = True
+ # 更新表情包数量
+ self._update_emoji_count()
# 启动时执行一次完整性检查
self.check_emoji_file_integrity()
except Exception:
@@ -339,13 +357,7 @@ class EmojiManager:
except Exception:
logger.exception("[错误] 扫描表情包失败")
-
- async def start_periodic_register(self):
- """定期扫描新表情包"""
- while True:
- logger.info("[扫描] 开始扫描新表情包...")
- await self.scan_new_emojis()
- await asyncio.sleep(global_config.EMOJI_CHECK_INTERVAL * 60)
+
def check_emoji_file_integrity(self):
"""检查表情包文件完整性
@@ -418,12 +430,136 @@ class EmojiManager:
logger.error(f"[错误] 检查表情包完整性失败: {str(e)}")
logger.error(traceback.format_exc())
- async def start_periodic_check(self):
+ def check_emoji_file_full(self):
+ """检查表情包文件是否完整,如果数量超出限制且允许删除,则删除多余的表情包
+
+ 删除规则:
+ 1. 优先删除创建时间更早的表情包
+ 2. 优先删除使用次数少的表情包,但使用次数多的也有小概率被删除
+ """
+ try:
+ self._ensure_db()
+ # 更新表情包数量
+ self._update_emoji_count()
+
+ # 检查是否超出限制
+ if self.emoji_num <= self.emoji_num_max:
+ return
+
+ # 如果超出限制但不允许删除,则只记录警告
+ if not global_config.max_reach_deletion:
+ logger.warning(f"[警告] 表情包数量({self.emoji_num})超出限制({self.emoji_num_max}),但未开启自动删除")
+ return
+
+ # 计算需要删除的数量
+ delete_count = self.emoji_num - self.emoji_num_max
+ logger.info(f"[清理] 需要删除 {delete_count} 个表情包")
+
+ # 获取所有表情包,按时间戳升序(旧的在前)排序
+ all_emojis = list(db.emoji.find().sort([("timestamp", 1)]))
+
+ # 计算权重:使用次数越多,被删除的概率越小
+ weights = []
+ max_usage = max((emoji.get("usage_count", 0) for emoji in all_emojis), default=1)
+ for emoji in all_emojis:
+ usage_count = emoji.get("usage_count", 0)
+ # 使用指数衰减函数计算权重,使用次数越多权重越小
+ weight = 1.0 / (1.0 + usage_count / max(1, max_usage))
+ weights.append(weight)
+
+ # 根据权重随机选择要删除的表情包
+ to_delete = []
+ remaining_indices = list(range(len(all_emojis)))
+
+ while len(to_delete) < delete_count and remaining_indices:
+ # 计算当前剩余表情包的权重
+ current_weights = [weights[i] for i in remaining_indices]
+ # 归一化权重
+ total_weight = sum(current_weights)
+ if total_weight == 0:
+ break
+ normalized_weights = [w/total_weight for w in current_weights]
+
+ # 随机选择一个表情包
+ selected_idx = random.choices(remaining_indices, weights=normalized_weights, k=1)[0]
+ to_delete.append(all_emojis[selected_idx])
+ remaining_indices.remove(selected_idx)
+
+ # 删除选中的表情包
+ deleted_count = 0
+ for emoji in to_delete:
+ try:
+ # 删除文件
+ if "path" in emoji and os.path.exists(emoji["path"]):
+ os.remove(emoji["path"])
+ logger.info(f"[删除] 文件: {emoji['path']} (使用次数: {emoji.get('usage_count', 0)})")
+
+ # 删除数据库记录
+ db.emoji.delete_one({"_id": emoji["_id"]})
+ deleted_count += 1
+
+ # 同时从images集合中删除
+ if "hash" in emoji:
+ db.images.delete_one({"hash": emoji["hash"]})
+
+ except Exception as e:
+ logger.error(f"[错误] 删除表情包失败: {str(e)}")
+ continue
+
+ # 更新表情包数量
+ self._update_emoji_count()
+ logger.success(f"[清理] 已删除 {deleted_count} 个表情包,当前数量: {self.emoji_num}")
+
+ except Exception as e:
+ logger.error(f"[错误] 检查表情包数量失败: {str(e)}")
+
+ async def start_periodic_check_register(self):
+ """定期检查表情包完整性和数量"""
while True:
+ logger.info("[扫描] 开始检查表情包完整性...")
self.check_emoji_file_integrity()
+ logger.info("[扫描] 开始删除所有图片缓存...")
+ await self.delete_all_images()
+ logger.info("[扫描] 开始扫描新表情包...")
+ if self.emoji_num < self.emoji_num_max:
+ await self.scan_new_emojis()
+            if self.emoji_num > self.emoji_num_max:
+ logger.warning(f"[警告] 表情包数量超过最大限制: {self.emoji_num} > {self.emoji_num_max},跳过注册")
+ if not global_config.max_reach_deletion:
+ logger.warning("表情包数量超过最大限制,终止注册")
+ break
+ else:
+ logger.warning("表情包数量超过最大限制,开始删除表情包")
+ self.check_emoji_file_full()
await asyncio.sleep(global_config.EMOJI_CHECK_INTERVAL * 60)
-
+
+ async def delete_all_images(self):
+ """删除 data/image 目录下的所有文件"""
+ try:
+ image_dir = os.path.join("data", "image")
+ if not os.path.exists(image_dir):
+ logger.warning(f"[警告] 目录不存在: {image_dir}")
+ return
+
+ deleted_count = 0
+ failed_count = 0
+
+ # 遍历目录下的所有文件
+ for filename in os.listdir(image_dir):
+ file_path = os.path.join(image_dir, filename)
+ try:
+ if os.path.isfile(file_path):
+ os.remove(file_path)
+ deleted_count += 1
+ logger.debug(f"[删除] 文件: {file_path}")
+ except Exception as e:
+ failed_count += 1
+ logger.error(f"[错误] 删除文件失败 {file_path}: {str(e)}")
+
+ logger.success(f"[清理] 已删除 {deleted_count} 个文件,失败 {failed_count} 个")
+
+ except Exception as e:
+ logger.error(f"[错误] 删除图片目录失败: {str(e)}")
# 创建全局单例
-
emoji_manager = EmojiManager()
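
The new `check_emoji_file_full` above trims the library back to `max_emoji_num` by weighted random sampling: rarely used emojis are the most likely to be removed, while heavily used ones keep a small chance of deletion. A self-contained sketch of just that selection step, using plain dicts instead of MongoDB documents and the same `1 / (1 + usage / max_usage)` weighting:

```python
import random


def pick_emojis_to_delete(emojis: list, delete_count: int) -> list:
    """Weighted sampling without replacement: low usage_count => high deletion weight."""
    max_usage = max((e.get("usage_count", 0) for e in emojis), default=1)
    weights = [1.0 / (1.0 + e.get("usage_count", 0) / max(1, max_usage)) for e in emojis]

    to_delete, remaining = [], list(range(len(emojis)))
    while len(to_delete) < delete_count and remaining:
        current = [weights[i] for i in remaining]
        total = sum(current)
        if total == 0:
            break
        # random.choices draws with probability proportional to the weights
        idx = random.choices(remaining, weights=[w / total for w in current], k=1)[0]
        to_delete.append(emojis[idx])
        remaining.remove(idx)
    return to_delete


# Example: delete one of three emojis; the never-used one is the most likely victim.
sample = [
    {"path": "a.gif", "usage_count": 0},
    {"path": "b.jpg", "usage_count": 12},
    {"path": "c.png", "usage_count": 1},
]
print(pick_emojis_to_delete(sample, 1))
```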
diff --git a/src/plugins/chat/message.py b/src/plugins/chat/message.py
index 8427a02e1..22487831f 100644
--- a/src/plugins/chat/message.py
+++ b/src/plugins/chat/message.py
@@ -31,7 +31,7 @@ class Message(MessageBase):
def __init__(
self,
message_id: str,
- time: int,
+ time: float,
chat_stream: ChatStream,
user_info: UserInfo,
message_segment: Optional[Seg] = None,
diff --git a/src/plugins/chat/message_sender.py b/src/plugins/chat/message_sender.py
index daba61552..5b4adc8d1 100644
--- a/src/plugins/chat/message_sender.py
+++ b/src/plugins/chat/message_sender.py
@@ -9,7 +9,7 @@ from .message import MessageSending, MessageThinking, MessageSet
from ..storage.storage import MessageStorage
from ..config.config import global_config
-from .utils import truncate_message, calculate_typing_time
+from .utils import truncate_message, calculate_typing_time, count_messages_between
from src.common.logger import LogConfig, SENDER_STYLE_CONFIG
@@ -69,9 +69,14 @@ class Message_Sender:
if end_point:
# logger.info(f"发送消息到{end_point}")
# logger.info(message_json)
- await global_api.send_message(end_point, message_json)
+ await global_api.send_message_REST(end_point, message_json)
else:
- raise ValueError(f"未找到平台:{message.message_info.platform} 的url配置,请检查配置文件")
+ try:
+ await global_api.send_message(message)
+ except Exception as e:
+ raise ValueError(
+ f"未找到平台:{message.message_info.platform} 的url配置,请检查配置文件"
+ ) from e
logger.success(f"发送消息“{message_preview}”成功")
except Exception as e:
logger.error(f"发送消息“{message_preview}”失败: {str(e)}")
@@ -85,16 +90,16 @@ class MessageContainer:
self.max_size = max_size
self.messages = []
self.last_send_time = 0
- self.thinking_timeout = 10 # 思考等待超时时间(秒)
+ self.thinking_wait_timeout = 20 # 思考等待超时时间(秒)
def get_timeout_messages(self) -> List[MessageSending]:
- """获取所有超时的Message_Sending对象(思考时间超过30秒),按thinking_start_time排序"""
+ """获取所有超时的Message_Sending对象(思考时间超过20秒),按thinking_start_time排序"""
current_time = time.time()
timeout_messages = []
for msg in self.messages:
if isinstance(msg, MessageSending):
- if current_time - msg.thinking_start_time > self.thinking_timeout:
+ if current_time - msg.thinking_start_time > self.thinking_wait_timeout:
timeout_messages.append(msg)
# 按thinking_start_time排序,时间早的在前面
@@ -172,6 +177,7 @@ class MessageManager:
message_earliest = container.get_earliest_message()
if isinstance(message_earliest, MessageThinking):
+ """取得了思考消息"""
message_earliest.update_thinking_time()
thinking_time = message_earliest.thinking_time
# print(thinking_time)
@@ -187,14 +193,20 @@ class MessageManager:
container.remove_message(message_earliest)
else:
- # print(message_earliest.is_head)
- # print(message_earliest.update_thinking_time())
- # print(message_earliest.is_private_message())
+ """取得了发送消息"""
thinking_time = message_earliest.update_thinking_time()
- print(thinking_time)
+ thinking_start_time = message_earliest.thinking_start_time
+ now_time = time.time()
+ thinking_messages_count, thinking_messages_length = count_messages_between(
+ start_time=thinking_start_time, end_time=now_time, stream_id=message_earliest.chat_stream.stream_id
+ )
+ # print(thinking_time)
+ # print(thinking_messages_count)
+ # print(thinking_messages_length)
+
if (
message_earliest.is_head
- and message_earliest.update_thinking_time() > 18
+ and (thinking_messages_count > 4 or thinking_messages_length > 250)
and not message_earliest.is_private_message() # 避免在私聊时插入reply
):
logger.debug(f"设置回复消息{message_earliest.processed_plain_text}")
@@ -216,12 +228,18 @@ class MessageManager:
continue
try:
- # print(msg.is_head)
- print(msg.update_thinking_time())
- # print(msg.is_private_message())
+ thinking_time = msg.update_thinking_time()
+ thinking_start_time = msg.thinking_start_time
+ now_time = time.time()
+ thinking_messages_count, thinking_messages_length = count_messages_between(
+ start_time=thinking_start_time, end_time=now_time, stream_id=msg.chat_stream.stream_id
+ )
+ # print(thinking_time)
+ # print(thinking_messages_count)
+ # print(thinking_messages_length)
if (
msg.is_head
- and msg.update_thinking_time() > 18
+ and (thinking_messages_count > 4 or thinking_messages_length > 250)
and not msg.is_private_message() # 避免在私聊时插入reply
):
logger.debug(f"设置回复消息{msg.processed_plain_text}")
diff --git a/src/plugins/chat/utils.py b/src/plugins/chat/utils.py
index c575eea88..9646fe73b 100644
--- a/src/plugins/chat/utils.py
+++ b/src/plugins/chat/utils.py
@@ -487,3 +487,108 @@ def is_western_char(char):
def is_western_paragraph(paragraph):
"""检测是否为西文字符段落"""
return all(is_western_char(char) for char in paragraph if char.isalnum())
+
+
+def count_messages_between(start_time: float, end_time: float, stream_id: str) -> tuple[int, int]:
+ """计算两个时间点之间的消息数量和文本总长度
+
+ Args:
+ start_time (float): 起始时间戳
+ end_time (float): 结束时间戳
+ stream_id (str): 聊天流ID
+
+ Returns:
+ tuple[int, int]: (消息数量, 文本总长度)
+ - 消息数量:包含起始时间的消息,不包含结束时间的消息
+ - 文本总长度:所有消息的processed_plain_text长度之和
+ """
+ try:
+ # 获取开始时间之前最新的一条消息
+ start_message = db.messages.find_one(
+ {
+ "chat_id": stream_id,
+ "time": {"$lte": start_time}
+ },
+ sort=[("time", -1), ("_id", -1)] # 按时间倒序,_id倒序(最后插入的在前)
+ )
+
+ # 获取结束时间最近的一条消息
+ # 先找到结束时间点的所有消息
+ end_time_messages = list(db.messages.find(
+ {
+ "chat_id": stream_id,
+ "time": {"$lte": end_time}
+ },
+ sort=[("time", -1)] # 先按时间倒序
+ ).limit(10)) # 限制查询数量,避免性能问题
+
+ if not end_time_messages:
+ logger.warning(f"未找到结束时间 {end_time} 之前的消息")
+ return 0, 0
+
+ # 找到最大时间
+ max_time = end_time_messages[0]["time"]
+ # 在最大时间的消息中找最后插入的(_id最大的)
+ end_message = max(
+ [msg for msg in end_time_messages if msg["time"] == max_time],
+ key=lambda x: x["_id"]
+ )
+
+ if not start_message:
+ logger.warning(f"未找到开始时间 {start_time} 之前的消息")
+ return 0, 0
+
+ # 调试输出
+ # print("\n=== 消息范围信息 ===")
+ # print("Start message:", {
+ # "message_id": start_message.get("message_id"),
+ # "time": start_message.get("time"),
+ # "text": start_message.get("processed_plain_text", ""),
+ # "_id": str(start_message.get("_id"))
+ # })
+ # print("End message:", {
+ # "message_id": end_message.get("message_id"),
+ # "time": end_message.get("time"),
+ # "text": end_message.get("processed_plain_text", ""),
+ # "_id": str(end_message.get("_id"))
+ # })
+ # print("Stream ID:", stream_id)
+
+ # 如果结束消息的时间等于开始时间,返回0
+ if end_message["time"] == start_message["time"]:
+ return 0, 0
+
+ # 获取并打印这个时间范围内的所有消息
+ # print("\n=== 时间范围内的所有消息 ===")
+ all_messages = list(db.messages.find(
+ {
+ "chat_id": stream_id,
+ "time": {
+ "$gte": start_message["time"],
+ "$lte": end_message["time"]
+ }
+ },
+ sort=[("time", 1), ("_id", 1)] # 按时间正序,_id正序
+ ))
+
+ count = 0
+ total_length = 0
+ for msg in all_messages:
+ count += 1
+ text_length = len(msg.get("processed_plain_text", ""))
+ total_length += text_length
+ # print(f"\n消息 {count}:")
+ # print({
+ # "message_id": msg.get("message_id"),
+ # "time": msg.get("time"),
+ # "text": msg.get("processed_plain_text", ""),
+ # "text_length": text_length,
+ # "_id": str(msg.get("_id"))
+ # })
+
+        # 减去起始锚点消息本身,返回值为思考期间新增的消息数(含结束消息)
+        return count - 1, total_length
+
+ except Exception as e:
+ logger.error(f"计算消息数量时出错: {str(e)}")
+ return 0, 0
diff --git a/src/plugins/chat/utils_image.py b/src/plugins/chat/utils_image.py
index f19fedfdd..7c930f6dc 100644
--- a/src/plugins/chat/utils_image.py
+++ b/src/plugins/chat/utils_image.py
@@ -112,8 +112,13 @@ class ImageManager:
return f"[表情包:{cached_description}]"
# 调用AI获取描述
- prompt = "这是一个表情包,使用中文简洁的描述一下表情包的内容和表情包所表达的情感"
- description, _ = await self._llm.generate_response_for_image(prompt, image_base64, image_format)
+ if image_format == "gif" or image_format == "GIF":
+ image_base64 = self.transform_gif(image_base64)
+ prompt = "这是一个动态图表情包,每一张图代表了动态图的某一帧,黑色背景代表透明,使用中文简洁的描述一下表情包的内容和表达的情感,简短一些"
+ description, _ = await self._llm.generate_response_for_image(prompt, image_base64, "jpg")
+ else:
+ prompt = "这是一个表情包,使用中文简洁的描述一下表情包的内容和表情包所表达的情感"
+ description, _ = await self._llm.generate_response_for_image(prompt, image_base64, image_format)
cached_description = self._get_description_from_db(image_hash, "emoji")
if cached_description:
@@ -221,6 +226,72 @@ class ImageManager:
logger.error(f"获取图片描述失败: {str(e)}")
return "[图片]"
+ def transform_gif(self, gif_base64: str) -> str:
+ """将GIF转换为水平拼接的静态图像
+
+ Args:
+ gif_base64: GIF的base64编码字符串
+
+ Returns:
+ str: 拼接后的JPG图像的base64编码字符串
+ """
+ try:
+ # 解码base64
+ gif_data = base64.b64decode(gif_base64)
+ gif = Image.open(io.BytesIO(gif_data))
+
+ # 收集所有帧
+ frames = []
+ try:
+ while True:
+ gif.seek(len(frames))
+ frame = gif.convert('RGB')
+ frames.append(frame.copy())
+ except EOFError:
+ pass
+
+ if not frames:
+ raise ValueError("No frames found in GIF")
+
+ # 计算需要抽取的帧的索引
+ total_frames = len(frames)
+ if total_frames <= 15:
+ selected_frames = frames
+ else:
+ # 均匀抽取10帧
+ indices = [int(i * (total_frames - 1) / 14) for i in range(15)]
+ selected_frames = [frames[i] for i in indices]
+
+ # 获取单帧的尺寸
+ frame_width, frame_height = selected_frames[0].size
+
+ # 计算目标尺寸,保持宽高比
+ target_height = 200 # 固定高度
+ target_width = int((target_height / frame_height) * frame_width)
+
+ # 调整所有帧的大小
+ resized_frames = [frame.resize((target_width, target_height), Image.Resampling.LANCZOS)
+ for frame in selected_frames]
+
+ # 创建拼接图像
+ total_width = target_width * len(resized_frames)
+ combined_image = Image.new('RGB', (total_width, target_height))
+
+ # 水平拼接图像
+ for idx, frame in enumerate(resized_frames):
+ combined_image.paste(frame, (idx * target_width, 0))
+
+ # 转换为base64
+ buffer = io.BytesIO()
+ combined_image.save(buffer, format='JPEG', quality=85)
+ result_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
+
+ return result_base64
+
+ except Exception as e:
+ logger.error(f"GIF转换失败: {str(e)}")
+ return None
+
# 创建全局单例
image_manager = ImageManager()
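
`transform_gif` samples at most 15 evenly spaced frames, resizes them to a fixed 200 px height and pastes them into one horizontal JPEG strip, so a single vision-model call can describe the whole animation. A minimal usage sketch (`test.gif` / `test_strip.jpg` are placeholder paths, and the import assumes the project root is on `sys.path`; note that `transform_gif` returns `None` on failure):

```python
import base64

from src.plugins.chat.utils_image import image_manager

with open("test.gif", "rb") as f:                      # placeholder input path
    gif_b64 = base64.b64encode(f.read()).decode("utf-8")

strip_b64 = image_manager.transform_gif(gif_b64)       # horizontal JPEG strip as base64
if strip_b64 is not None:
    with open("test_strip.jpg", "wb") as out:          # placeholder output path
        out.write(base64.b64decode(strip_b64))
```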
diff --git a/src/plugins/chat_module/think_flow_chat/think_flow_chat.py b/src/plugins/chat_module/think_flow_chat/think_flow_chat.py
index 8f5322e22..0f8d3298b 100644
--- a/src/plugins/chat_module/think_flow_chat/think_flow_chat.py
+++ b/src/plugins/chat_module/think_flow_chat/think_flow_chat.py
@@ -177,15 +177,18 @@ class ThinkFlowChat:
heartflow.create_subheartflow(chat.stream_id)
await message.process()
-
+ logger.debug(f"消息处理成功{message.processed_plain_text}")
+
# 过滤词/正则表达式过滤
if self._check_ban_words(message.processed_plain_text, chat, userinfo) or self._check_ban_regex(
message.raw_message, chat, userinfo
):
return
+ logger.debug(f"过滤词/正则表达式过滤成功{message.processed_plain_text}")
await self.storage.store_message(message, chat)
-
+ logger.debug(f"存储成功{message.processed_plain_text}")
+
# 记忆激活
timer1 = time.time()
interested_rate = await HippocampusManager.get_instance().get_activate_from_text(
diff --git a/src/plugins/config/config.py b/src/plugins/config/config.py
index 6db225a4b..2422b0d1f 100644
--- a/src/plugins/config/config.py
+++ b/src/plugins/config/config.py
@@ -1,6 +1,7 @@
import os
from dataclasses import dataclass, field
from typing import Dict, List, Optional
+from dateutil import tz
import tomli
import tomlkit
@@ -24,8 +25,8 @@ config_config = LogConfig(
logger = get_module_logger("config", config=config_config)
#考虑到,实际上配置文件中的mai_version是不会自动更新的,所以采用硬编码
-mai_version_main = "test-0.6.0"
-mai_version_fix = "snapshot-7"
+mai_version_main = "0.6.0"
+mai_version_fix = ""
mai_version = f"{mai_version_main}-{mai_version_fix}"
def update_config():
@@ -151,6 +152,7 @@ class BotConfig:
PROMPT_SCHEDULE_GEN = "无日程"
SCHEDULE_DOING_UPDATE_INTERVAL: int = 300 # 日程表更新间隔 单位秒
SCHEDULE_TEMPERATURE: float = 0.5 # 日程表温度,建议0.5-1.0
+ TIME_ZONE: str = "Asia/Shanghai" # 时区
# message
MAX_CONTEXT_SIZE: int = 15 # 上下文最大消息数
@@ -182,6 +184,8 @@ class BotConfig:
# MODEL_R1_DISTILL_PROBABILITY: float = 0.1 # R1蒸馏模型概率
# emoji
+ max_emoji_num: int = 200 # 表情包最大数量
+ max_reach_deletion: bool = True # 开启则在达到最大数量时删除表情包,关闭则不会继续收集表情包
EMOJI_CHECK_INTERVAL: int = 120 # 表情包检查间隔(分钟)
EMOJI_REGISTER_INTERVAL: int = 10 # 表情包注册间隔(分钟)
EMOJI_SAVE: bool = True # 偷表情包
@@ -353,6 +357,11 @@ class BotConfig:
)
if config.INNER_VERSION in SpecifierSet(">=1.0.2"):
config.SCHEDULE_TEMPERATURE = schedule_config.get("schedule_temperature", config.SCHEDULE_TEMPERATURE)
+ time_zone = schedule_config.get("time_zone", config.TIME_ZONE)
+ if tz.gettz(time_zone) is None:
+ logger.error(f"无效的时区: {time_zone},使用默认值: {config.TIME_ZONE}")
+ else:
+ config.TIME_ZONE = time_zone
def emoji(parent: dict):
emoji_config = parent["emoji"]
@@ -361,6 +370,9 @@ class BotConfig:
config.EMOJI_CHECK_PROMPT = emoji_config.get("check_prompt", config.EMOJI_CHECK_PROMPT)
config.EMOJI_SAVE = emoji_config.get("auto_save", config.EMOJI_SAVE)
config.EMOJI_CHECK = emoji_config.get("enable_check", config.EMOJI_CHECK)
+ if config.INNER_VERSION in SpecifierSet(">=1.1.1"):
+ config.max_emoji_num = emoji_config.get("max_emoji_num", config.max_emoji_num)
+ config.max_reach_deletion = emoji_config.get("max_reach_deletion", config.max_reach_deletion)
def bot(parent: dict):
# 机器人基础配置
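
The new `time_zone` option is only accepted after dateutil can resolve it (see the schedule hunk above); otherwise the default is kept and an error is logged. The same check as a standalone sketch, with "Asia/Shanghai" assumed as the default:

```python
from dateutil import tz


def resolve_time_zone(name: str, default: str = "Asia/Shanghai") -> str:
    """Return `name` if dateutil can resolve it, otherwise fall back to `default`."""
    return name if tz.gettz(name) is not None else default


print(resolve_time_zone("Europe/Paris"))   # Europe/Paris
print(resolve_time_zone("Mars/Olympus"))   # Asia/Shanghai (unknown name -> default)
```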
diff --git a/src/plugins/memory_system/Hippocampus.py b/src/plugins/memory_system/Hippocampus.py
index 717cebe17..7f781ac31 100644
--- a/src/plugins/memory_system/Hippocampus.py
+++ b/src/plugins/memory_system/Hippocampus.py
@@ -14,7 +14,6 @@ from src.common.logger import get_module_logger, LogConfig, MEMORY_STYLE_CONFIG
from src.plugins.memory_system.sample_distribution import MemoryBuildScheduler # 分布生成器
from .memory_config import MemoryConfig
-
def get_closest_chat_from_db(length: int, timestamp: str):
# print(f"获取最接近指定时间戳的聊天记录,长度: {length}, 时间戳: {timestamp}")
# print(f"当前时间: {timestamp},转换后时间: {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(timestamp))}")
diff --git a/src/plugins/message/api.py b/src/plugins/message/api.py
index 30cc8aeca..a29ce429e 100644
--- a/src/plugins/message/api.py
+++ b/src/plugins/message/api.py
@@ -1,6 +1,7 @@
-from fastapi import FastAPI, HTTPException
-from typing import Dict, Any, Callable, List
+from fastapi import FastAPI, HTTPException, WebSocket, WebSocketDisconnect
+from typing import Dict, Any, Callable, List, Set
from src.common.logger import get_module_logger
+from src.plugins.message.message_base import MessageBase
import aiohttp
import asyncio
import uvicorn
@@ -10,6 +11,212 @@ import traceback
logger = get_module_logger("api")
+class BaseMessageHandler:
+ """消息处理基类"""
+
+ def __init__(self):
+ self.message_handlers: List[Callable] = []
+ self.background_tasks = set()
+
+ def register_message_handler(self, handler: Callable):
+ """注册消息处理函数"""
+ self.message_handlers.append(handler)
+
+ async def process_message(self, message: Dict[str, Any]):
+ """处理单条消息"""
+ tasks = []
+ for handler in self.message_handlers:
+ try:
+ tasks.append(handler(message))
+ except Exception as e:
+ raise RuntimeError(str(e)) from e
+ if tasks:
+ await asyncio.gather(*tasks, return_exceptions=True)
+
+ async def _handle_message(self, message: Dict[str, Any]):
+ """后台处理单个消息"""
+ try:
+ await self.process_message(message)
+ except Exception as e:
+ raise RuntimeError(str(e)) from e
+
+
+class MessageServer(BaseMessageHandler):
+ """WebSocket服务端"""
+
+ _class_handlers: List[Callable] = [] # 类级别的消息处理器
+
+ def __init__(self, host: str = "0.0.0.0", port: int = 18000, enable_token=False):
+ super().__init__()
+ # 将类级别的处理器添加到实例处理器中
+ self.message_handlers.extend(self._class_handlers)
+ self.app = FastAPI()
+ self.host = host
+ self.port = port
+ self.active_websockets: Set[WebSocket] = set()
+ self.platform_websockets: Dict[str, WebSocket] = {} # 平台到websocket的映射
+ self.valid_tokens: Set[str] = set()
+ self.enable_token = enable_token
+ self._setup_routes()
+ self._running = False
+
+ @classmethod
+ def register_class_handler(cls, handler: Callable):
+ """注册类级别的消息处理器"""
+ if handler not in cls._class_handlers:
+ cls._class_handlers.append(handler)
+
+ def register_message_handler(self, handler: Callable):
+ """注册实例级别的消息处理器"""
+ if handler not in self.message_handlers:
+ self.message_handlers.append(handler)
+
+ async def verify_token(self, token: str) -> bool:
+ if not self.enable_token:
+ return True
+ return token in self.valid_tokens
+
+ def add_valid_token(self, token: str):
+ self.valid_tokens.add(token)
+
+ def remove_valid_token(self, token: str):
+ self.valid_tokens.discard(token)
+
+ def _setup_routes(self):
+ @self.app.post("/api/message")
+ async def handle_message(message: Dict[str, Any]):
+ try:
+ # 创建后台任务处理消息
+ asyncio.create_task(self._handle_message(message))
+ return {"status": "success"}
+ except Exception as e:
+ raise HTTPException(status_code=500, detail=str(e)) from e
+
+ @self.app.websocket("/ws")
+ async def websocket_endpoint(websocket: WebSocket):
+ headers = dict(websocket.headers)
+ token = headers.get("authorization")
+ platform = headers.get("platform", "default") # 获取platform标识
+ if self.enable_token:
+ if not token or not await self.verify_token(token):
+ await websocket.close(code=1008, reason="Invalid or missing token")
+ return
+
+ await websocket.accept()
+ self.active_websockets.add(websocket)
+
+ # 添加到platform映射
+ if platform not in self.platform_websockets:
+ self.platform_websockets[platform] = websocket
+
+ try:
+ while True:
+ message = await websocket.receive_json()
+ # print(f"Received message: {message}")
+ asyncio.create_task(self._handle_message(message))
+ except WebSocketDisconnect:
+ self._remove_websocket(websocket, platform)
+ except Exception as e:
+ self._remove_websocket(websocket, platform)
+ raise RuntimeError(str(e)) from e
+ finally:
+ self._remove_websocket(websocket, platform)
+
+ def _remove_websocket(self, websocket: WebSocket, platform: str):
+ """从所有集合中移除websocket"""
+ if websocket in self.active_websockets:
+ self.active_websockets.remove(websocket)
+ if platform in self.platform_websockets:
+ if self.platform_websockets[platform] == websocket:
+ del self.platform_websockets[platform]
+
+ async def broadcast_message(self, message: Dict[str, Any]):
+ disconnected = set()
+ for websocket in self.active_websockets:
+ try:
+ await websocket.send_json(message)
+ except Exception:
+ disconnected.add(websocket)
+ for websocket in disconnected:
+ self.active_websockets.remove(websocket)
+
+ async def broadcast_to_platform(self, platform: str, message: Dict[str, Any]):
+ """向指定平台的所有WebSocket客户端广播消息"""
+ if platform not in self.platform_websockets:
+ raise ValueError(f"平台:{platform} 未连接")
+
+ disconnected = set()
+ try:
+ await self.platform_websockets[platform].send_json(message)
+ except Exception:
+ disconnected.add(self.platform_websockets[platform])
+
+ # 清理断开的连接
+ for websocket in disconnected:
+ self._remove_websocket(websocket, platform)
+
+ async def send_message(self, message: MessageBase):
+ await self.broadcast_to_platform(message.message_info.platform, message.to_dict())
+
+ def run_sync(self):
+ """同步方式运行服务器"""
+ uvicorn.run(self.app, host=self.host, port=self.port)
+
+ async def run(self):
+ """异步方式运行服务器"""
+ config = uvicorn.Config(self.app, host=self.host, port=self.port, loop="asyncio")
+ self.server = uvicorn.Server(config)
+ try:
+ await self.server.serve()
+ except KeyboardInterrupt as e:
+ await self.stop()
+ raise KeyboardInterrupt from e
+
+ async def start_server(self):
+ """启动服务器的异步方法"""
+ if not self._running:
+ self._running = True
+ await self.run()
+
+ async def stop(self):
+ """停止服务器"""
+ # 清理platform映射
+ self.platform_websockets.clear()
+
+ # 取消所有后台任务
+ for task in self.background_tasks:
+ task.cancel()
+ # 等待所有任务完成
+ await asyncio.gather(*self.background_tasks, return_exceptions=True)
+ self.background_tasks.clear()
+
+ # 关闭所有WebSocket连接
+ for websocket in self.active_websockets:
+ await websocket.close()
+ self.active_websockets.clear()
+
+ if hasattr(self, "server"):
+ self._running = False
+ # 正确关闭 uvicorn 服务器
+ self.server.should_exit = True
+ await self.server.shutdown()
+ # 等待服务器完全停止
+ if hasattr(self.server, "started") and self.server.started:
+ await self.server.main_loop()
+ # 清理处理程序
+ self.message_handlers.clear()
+
+ async def send_message_REST(self, url: str, data: Dict[str, Any]) -> Dict[str, Any]:
+ """发送消息到指定端点"""
+ async with aiohttp.ClientSession() as session:
+ try:
+ async with session.post(url, json=data, headers={"Content-Type": "application/json"}) as response:
+ return await response.json()
+ except Exception:
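+                # 发送失败时静默忽略,整个函数会隐式返回 None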
+ # logger.error(f"发送消息失败: {str(e)}")
+ pass
+
+
class BaseMessageAPI:
def __init__(self, host: str = "0.0.0.0", port: int = 18000):
self.app = FastAPI()
@@ -111,4 +318,4 @@ class BaseMessageAPI:
loop.close()
-global_api = BaseMessageAPI(host=os.environ["HOST"], port=int(os.environ["PORT"]))
+global_api = MessageServer(host=os.environ["HOST"], port=int(os.environ["PORT"]))
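
`MessageServer` extends the plain REST `BaseMessageAPI`: adapters now connect to the `/ws` endpoint, identify themselves with a `platform` header (plus an `authorization` token when `enable_token` is set), and receive outbound messages as JSON pushed to the socket registered for their platform, while anything they send is fed to the registered message handlers. A minimal adapter-side client sketch using aiohttp; the URL, platform name and token are placeholders:

```python
import asyncio

import aiohttp


async def run_adapter_client(url: str = "ws://127.0.0.1:18000/ws", platform: str = "nonebot-qq"):
    headers = {"platform": platform}            # identifies this adapter to MessageServer
    # headers["authorization"] = "<token>"      # only required when enable_token=True
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect(url, headers=headers) as ws:
            # Messages sent via MessageServer.send_message() for this platform arrive here;
            # replies can be pushed back with ws.send_json(...) and reach the message handlers.
            async for msg in ws:
                if msg.type == aiohttp.WSMsgType.TEXT:
                    print("received:", msg.json())
                elif msg.type == aiohttp.WSMsgType.ERROR:
                    break


if __name__ == "__main__":
    asyncio.run(run_adapter_client())
```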
diff --git a/src/plugins/message/message_base.py b/src/plugins/message/message_base.py
index ea5c3daef..edaa9a033 100644
--- a/src/plugins/message/message_base.py
+++ b/src/plugins/message/message_base.py
@@ -166,7 +166,7 @@ class BaseMessageInfo:
platform: Optional[str] = None
message_id: Union[str, int, None] = None
- time: Optional[int] = None
+ time: Optional[float] = None
group_info: Optional[GroupInfo] = None
user_info: Optional[UserInfo] = None
format_info: Optional[FormatInfo] = None
diff --git a/src/plugins/models/utils_model.py b/src/plugins/models/utils_model.py
index 260c5f5a6..852bba412 100644
--- a/src/plugins/models/utils_model.py
+++ b/src/plugins/models/utils_model.py
@@ -154,7 +154,7 @@ class LLM_request:
# 合并重试策略
default_retry = {
"max_retries": 3,
- "base_wait": 15,
+ "base_wait": 10,
"retry_codes": [429, 413, 500, 503],
"abort_codes": [400, 401, 402, 403],
}
@@ -179,9 +179,6 @@ class LLM_request:
# logger.debug(f"{logger_msg}发送请求到URL: {api_url}")
# logger.info(f"使用模型: {self.model_name}")
- # 流式输出标志
- if stream_mode:
- payload["stream"] = stream_mode
# 构建请求体
if image_base64:
@@ -189,6 +186,11 @@ class LLM_request:
elif payload is None:
payload = await self._build_payload(prompt)
+ # 流式输出标志
+ # 先构建payload,再添加流式输出标志
+ if stream_mode:
+ payload["stream"] = stream_mode
+
for retry in range(policy["max_retries"]):
try:
# 使用上下文管理器处理会话
@@ -203,21 +205,21 @@ class LLM_request:
# 处理需要重试的状态码
if response.status in policy["retry_codes"]:
wait_time = policy["base_wait"] * (2**retry)
- logger.warning(f"错误码: {response.status}, 等待 {wait_time}秒后重试")
+ logger.warning(f"模型 {self.model_name} 错误码: {response.status}, 等待 {wait_time}秒后重试")
if response.status == 413:
logger.warning("请求体过大,尝试压缩...")
image_base64 = compress_base64_image_by_scale(image_base64)
payload = await self._build_payload(prompt, image_base64, image_format)
elif response.status in [500, 503]:
- logger.error(f"错误码: {response.status} - {error_code_mapping.get(response.status)}")
+ logger.error(f"模型 {self.model_name} 错误码: {response.status} - {error_code_mapping.get(response.status)}")
raise RuntimeError("服务器负载过高,模型恢复失败QAQ")
else:
- logger.warning(f"请求限制(429),等待{wait_time}秒后重试...")
+ logger.warning(f"模型 {self.model_name} 请求限制(429),等待{wait_time}秒后重试...")
await asyncio.sleep(wait_time)
continue
elif response.status in policy["abort_codes"]:
- logger.error(f"错误码: {response.status} - {error_code_mapping.get(response.status)}")
+ logger.error(f"模型 {self.model_name} 错误码: {response.status} - {error_code_mapping.get(response.status)}")
# 尝试获取并记录服务器返回的详细错误信息
try:
error_json = await response.json()
@@ -319,9 +321,9 @@ class LLM_request:
flag_delta_content_finished = True
except Exception as e:
- logger.exception(f"解析流式输出错误: {str(e)}")
+ logger.exception(f"模型 {self.model_name} 解析流式输出错误: {str(e)}")
except GeneratorExit:
- logger.warning("流式输出被中断,正在清理资源...")
+ logger.warning("模型 {self.model_name} 流式输出被中断,正在清理资源...")
# 确保资源被正确清理
await response.release()
# 返回已经累积的内容
@@ -335,7 +337,7 @@ class LLM_request:
else self._default_response_handler(result, user_id, request_type, endpoint)
)
except Exception as e:
- logger.error(f"处理流式输出时发生错误: {str(e)}")
+ logger.error(f"模型 {self.model_name} 处理流式输出时发生错误: {str(e)}")
# 确保在发生错误时也能正确清理资源
try:
await response.release()
@@ -378,21 +380,21 @@ class LLM_request:
except (aiohttp.ClientError, asyncio.TimeoutError) as e:
if retry < policy["max_retries"] - 1:
wait_time = policy["base_wait"] * (2**retry)
- logger.error(f"网络错误,等待{wait_time}秒后重试... 错误: {str(e)}")
+ logger.error(f"模型 {self.model_name} 网络错误,等待{wait_time}秒后重试... 错误: {str(e)}")
await asyncio.sleep(wait_time)
continue
else:
- logger.critical(f"网络错误达到最大重试次数: {str(e)}")
+ logger.critical(f"模型 {self.model_name} 网络错误达到最大重试次数: {str(e)}")
raise RuntimeError(f"网络请求失败: {str(e)}") from e
except Exception as e:
- logger.critical(f"未预期的错误: {str(e)}")
+ logger.critical(f"模型 {self.model_name} 未预期的错误: {str(e)}")
raise RuntimeError(f"请求过程中发生错误: {str(e)}") from e
except aiohttp.ClientResponseError as e:
# 处理aiohttp抛出的响应错误
if retry < policy["max_retries"] - 1:
wait_time = policy["base_wait"] * (2**retry)
- logger.error(f"HTTP响应错误,等待{wait_time}秒后重试... 状态码: {e.status}, 错误: {e.message}")
+ logger.error(f"模型 {self.model_name} HTTP响应错误,等待{wait_time}秒后重试... 状态码: {e.status}, 错误: {e.message}")
try:
if hasattr(e, "response") and e.response and hasattr(e.response, "text"):
error_text = await e.response.text()
@@ -403,27 +405,27 @@ class LLM_request:
if "error" in error_item and isinstance(error_item["error"], dict):
error_obj = error_item["error"]
logger.error(
- f"服务器错误详情: 代码={error_obj.get('code')}, "
+ f"模型 {self.model_name} 服务器错误详情: 代码={error_obj.get('code')}, "
f"状态={error_obj.get('status')}, "
f"消息={error_obj.get('message')}"
)
elif isinstance(error_json, dict) and "error" in error_json:
error_obj = error_json.get("error", {})
logger.error(
- f"服务器错误详情: 代码={error_obj.get('code')}, "
+ f"模型 {self.model_name} 服务器错误详情: 代码={error_obj.get('code')}, "
f"状态={error_obj.get('status')}, "
f"消息={error_obj.get('message')}"
)
else:
- logger.error(f"服务器错误响应: {error_json}")
+ logger.error(f"模型 {self.model_name} 服务器错误响应: {error_json}")
except (json.JSONDecodeError, TypeError) as json_err:
- logger.warning(f"响应不是有效的JSON: {str(json_err)}, 原始内容: {error_text[:200]}")
+ logger.warning(f"模型 {self.model_name} 响应不是有效的JSON: {str(json_err)}, 原始内容: {error_text[:200]}")
except (AttributeError, TypeError, ValueError) as parse_err:
- logger.warning(f"无法解析响应错误内容: {str(parse_err)}")
+ logger.warning(f"模型 {self.model_name} 无法解析响应错误内容: {str(parse_err)}")
await asyncio.sleep(wait_time)
else:
- logger.critical(f"HTTP响应错误达到最大重试次数: 状态码: {e.status}, 错误: {e.message}")
+ logger.critical(f"模型 {self.model_name} HTTP响应错误达到最大重试次数: 状态码: {e.status}, 错误: {e.message}")
# 安全地检查和记录请求详情
if (
image_base64
@@ -440,14 +442,14 @@ class LLM_request:
f"{image_base64[:10]}...{image_base64[-10:]}"
)
logger.critical(f"请求头: {await self._build_headers(no_key=True)} 请求体: {payload}")
- raise RuntimeError(f"API请求失败: 状态码 {e.status}, {e.message}") from e
+ raise RuntimeError(f"模型 {self.model_name} API请求失败: 状态码 {e.status}, {e.message}") from e
except Exception as e:
if retry < policy["max_retries"] - 1:
wait_time = policy["base_wait"] * (2**retry)
- logger.error(f"请求失败,等待{wait_time}秒后重试... 错误: {str(e)}")
+ logger.error(f"模型 {self.model_name} 请求失败,等待{wait_time}秒后重试... 错误: {str(e)}")
await asyncio.sleep(wait_time)
else:
- logger.critical(f"请求失败: {str(e)}")
+ logger.critical(f"模型 {self.model_name} 请求失败: {str(e)}")
# 安全地检查和记录请求详情
if (
image_base64
@@ -464,10 +466,10 @@ class LLM_request:
f"{image_base64[:10]}...{image_base64[-10:]}"
)
logger.critical(f"请求头: {await self._build_headers(no_key=True)} 请求体: {payload}")
- raise RuntimeError(f"API请求失败: {str(e)}") from e
+ raise RuntimeError(f"模型 {self.model_name} API请求失败: {str(e)}") from e
- logger.error("达到最大重试次数,请求仍然失败")
- raise RuntimeError("达到最大重试次数,API请求仍然失败")
+ logger.error(f"模型 {self.model_name} 达到最大重试次数,请求仍然失败")
+ raise RuntimeError(f"模型 {self.model_name} 达到最大重试次数,API请求仍然失败")
async def _transform_parameters(self, params: dict) -> dict:
"""
diff --git a/src/plugins/schedule/schedule_generator.py b/src/plugins/schedule/schedule_generator.py
index ecc032761..edce54b64 100644
--- a/src/plugins/schedule/schedule_generator.py
+++ b/src/plugins/schedule/schedule_generator.py
@@ -3,6 +3,7 @@ import os
import sys
from typing import Dict
import asyncio
+from dateutil import tz
# 添加项目根目录到 Python 路径
root_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../.."))
@@ -13,6 +14,8 @@ from src.common.logger import get_module_logger, SCHEDULE_STYLE_CONFIG, LogConfi
from src.plugins.models.utils_model import LLM_request # noqa: E402
from src.plugins.config.config import global_config # noqa: E402
+TIME_ZONE = tz.gettz(global_config.TIME_ZONE) # 设置时区
+
schedule_config = LogConfig(
# 使用海马体专用样式
@@ -44,7 +47,7 @@ class ScheduleGenerator:
self.personality = ""
self.behavior = ""
- self.start_time = datetime.datetime.now()
+ self.start_time = datetime.datetime.now(TIME_ZONE)
self.schedule_doing_update_interval = 300 # 最好大于60
@@ -74,7 +77,7 @@ class ScheduleGenerator:
while True:
# print(self.get_current_num_task(1, True))
- current_time = datetime.datetime.now()
+ current_time = datetime.datetime.now(TIME_ZONE)
# 检查是否需要重新生成日程(日期变化)
if current_time.date() != self.start_time.date():
@@ -100,7 +103,7 @@ class ScheduleGenerator:
Returns:
tuple: (today_schedule_text, today_schedule) 今天的日程文本和解析后的日程字典
"""
- today = datetime.datetime.now()
+ today = datetime.datetime.now(TIME_ZONE)
yesterday = today - datetime.timedelta(days=1)
# 先检查昨天的日程
@@ -156,7 +159,7 @@ class ScheduleGenerator:
"""打印完整的日程安排"""
if not self.today_schedule_text:
logger.warning("今日日程有误,将在下次运行时重新生成")
- db.schedule.delete_one({"date": datetime.datetime.now().strftime("%Y-%m-%d")})
+ db.schedule.delete_one({"date": datetime.datetime.now(TIME_ZONE).strftime("%Y-%m-%d")})
else:
logger.info("=== 今日日程安排 ===")
logger.info(self.today_schedule_text)
@@ -165,7 +168,7 @@ class ScheduleGenerator:
async def update_today_done_list(self):
# 更新数据库中的 today_done_list
- today_str = datetime.datetime.now().strftime("%Y-%m-%d")
+ today_str = datetime.datetime.now(TIME_ZONE).strftime("%Y-%m-%d")
existing_schedule = db.schedule.find_one({"date": today_str})
if existing_schedule:
@@ -177,7 +180,7 @@ class ScheduleGenerator:
async def move_doing(self, mind_thinking: str = ""):
try:
- current_time = datetime.datetime.now()
+ current_time = datetime.datetime.now(TIME_ZONE)
if mind_thinking:
doing_prompt = self.construct_doing_prompt(current_time, mind_thinking)
else:
@@ -246,7 +249,7 @@ class ScheduleGenerator:
def save_today_schedule_to_db(self):
"""保存日程到数据库,同时初始化 today_done_list"""
- date_str = datetime.datetime.now().strftime("%Y-%m-%d")
+ date_str = datetime.datetime.now(TIME_ZONE).strftime("%Y-%m-%d")
schedule_data = {
"date": date_str,
"schedule": self.today_schedule_text,
diff --git a/src/plugins/utils/statistic.py b/src/plugins/utils/statistic.py
index 529793837..eef10c01d 100644
--- a/src/plugins/utils/statistic.py
+++ b/src/plugins/utils/statistic.py
@@ -139,13 +139,13 @@ class LLMStatistics:
user_info = doc.get("user_info", {})
group_info = chat_info.get("group_info") if chat_info else {}
# print(f"group_info: {group_info}")
- group_name = "unknown"
+ group_name = None
if group_info:
- group_name = group_info["group_name"]
- if user_info and group_name == "unknown":
+ group_name = group_info.get("group_name", f"群{group_info.get('group_id')}")
+ if user_info and not group_name:
group_name = user_info["user_nickname"]
# print(f"group_name: {group_name}")
- stats["messages_by_user"][user_id] += 1
+ stats["messages_by_user"][user_id] += 1
stats["messages_by_chat"][group_name] += 1
return stats
@@ -225,7 +225,7 @@ class LLMStatistics:
output.append(f"{group_name[:32]:<32} {count:>10}")
return "\n".join(output)
-
+
def _format_stats_section_lite(self, stats: Dict[str, Any], title: str) -> str:
"""格式化统计部分的输出"""
output = []
@@ -314,7 +314,7 @@ class LLMStatistics:
def _console_output_loop(self):
"""控制台输出循环,每5分钟输出一次最近1小时的统计"""
while self.running:
- # 等待5分钟
+ # 等待5分钟
for _ in range(300): # 5分钟 = 300秒
if not self.running:
break
@@ -323,16 +323,16 @@ class LLMStatistics:
# 收集最近1小时的统计数据
now = datetime.now()
hour_stats = self._collect_statistics_for_period(now - timedelta(hours=1))
-
+
# 使用logger输出
- stats_output = self._format_stats_section_lite(hour_stats, "最近1小时统计:详细信息见根目录文件:llm_statistics.txt")
+ stats_output = self._format_stats_section_lite(
+ hour_stats, "最近1小时统计:详细信息见根目录文件:llm_statistics.txt"
+ )
logger.info("\n" + stats_output + "\n" + "=" * 50)
-
+
except Exception:
logger.exception("控制台统计数据输出失败")
-
-
def _stats_loop(self):
"""统计循环,每5分钟运行一次"""
while self.running:
diff --git a/template/bot_config_template.toml b/template/bot_config_template.toml
index 2372b10b1..7df6a6e8e 100644
--- a/template/bot_config_template.toml
+++ b/template/bot_config_template.toml
@@ -1,5 +1,5 @@
[inner]
-version = "1.1.0"
+version = "1.1.3"
#以下是给开发人员阅读的,一般用户不需要阅读
@@ -48,6 +48,7 @@ enable_schedule_gen = true # 是否启用日程表(尚未完成)
prompt_schedule_gen = "用几句话描述描述性格特点或行动规律,这个特征会用来生成日程表"
schedule_doing_update_interval = 900 # 日程表更新间隔 单位秒
schedule_temperature = 0.3 # 日程表温度,建议0.3-0.6
+time_zone = "Asia/Shanghai" # 给你的机器人设置时区,可以解决运行电脑时区和国内时区不同的情况,或者模拟国外留学生日程
[platforms] # 必填项目,填写每个平台适配器提供的链接
nonebot-qq="http://127.0.0.1:18002/api/message"
@@ -93,8 +94,9 @@ emoji_response_penalty = 0.1 # 表情包回复惩罚系数,设为0为不回复
[emoji]
-check_interval = 15 # 检查破损表情包的时间间隔(分钟)
-register_interval = 60 # 注册表情包的时间间隔(分钟)
+max_emoji_num = 120 # 表情包最大数量
+max_reach_deletion = true # 开启则在达到最大数量时删除表情包,关闭则达到最大数量时不删除,只是不会继续收集表情包
+check_interval = 30 # 检查表情包(注册,破损,删除)的时间间隔(分钟)
auto_save = true # 是否保存表情包和图片
enable_check = false # 是否启用表情包过滤
check_prompt = "符合公序良俗" # 表情包过滤要求