remove: Remove info_catcher
@@ -2,22 +2,106 @@
 ## Introduction

-The plugin system is currently at v0.1: a trial release with only basic functionality, usable only in focus mode.
+The plugin system is now at v1.0 and supports action extensions in both Focus and Normal chat modes.

-Plugins currently take the form of new **actions** added to the focus model's decision making.
+### 🆕 New in v1.0
+- **Dual activation type system**: intelligent in Focus mode, high-performance in Normal mode
+- **Parallel action support**: actions that execute alongside the reply
+- **Four activation types**: ALWAYS, RANDOM, LLM_JUDGE, KEYWORD
+- **Smart caching**: improves LLM judgement performance
+- **Mode enable control**: precise control over plugin behaviour in each mode

-The original focus planner has two actions, reply and no_reply.
+Plugins extend MaiBot in the form of **actions**. The original focus mode ships with two basic actions, reply and no_reply; through the plugin system you can add more custom actions such as mute_action and pic_action.

-The example plugins in MaiBot's plugin folder add a mute_action and a pic_action; you can use their code as a reference.
+**⚠️ Breaking change**: the old `action_activation_type` attribute has been removed; you must use the new dual activation type system. See the [migration guide](#迁移指南) for details.

-**Future updates** will add support for normal_chat actions, more custom components, tools, and /help-style commands.
+## Action Activation System 🚀
+
+### Dual activation type architecture
+
+MaiBot uses a **dual activation type architecture**, giving Focus mode and Normal mode each their own optimal activation strategy:
+
+**Focus mode**: intelligence first
+- supports complex LLM judgement
+- provides precise context understanding
+- suited to scenarios that need deep analysis
+
+**Normal mode**: performance first
+- uses fast keyword matching
+- uses simple random triggering
+- keeps responses to users fast
+
+### The four activation types
+
+#### 1. ALWAYS - always active
+```python
+focus_activation_type = ActionActivationType.ALWAYS
+normal_activation_type = ActionActivationType.ALWAYS
+```
+**Used for**: basic required actions such as `reply_action` and `no_reply_action`
+
+#### 2. KEYWORD - keyword trigger
+```python
+focus_activation_type = ActionActivationType.KEYWORD
+normal_activation_type = ActionActivationType.KEYWORD
+activation_keywords = ["画", "画图", "生成图片", "draw"]
+keyword_case_sensitive = False
+```
+**Used for**: precise command-style triggers such as image generation or search
+
+#### 3. LLM_JUDGE - intelligent judgement
+```python
+focus_activation_type = ActionActivationType.LLM_JUDGE
+normal_activation_type = ActionActivationType.KEYWORD  # KEYWORD is recommended for Normal mode
+```
+**Used for**: complex decisions that need context understanding, such as sentiment analysis or intent recognition
+
+**Optimisations**:
+- 🚀 Parallel execution: multiple LLM judgements run at the same time
+- 💾 Smart caching: results are reused for identical context (valid for 30 seconds)
+- ⚡ Direct judgement: less complexity, better performance
+
+#### 4. RANDOM - random activation
+```python
+focus_activation_type = ActionActivationType.RANDOM
+normal_activation_type = ActionActivationType.RANDOM
+random_activation_probability = 0.1  # 10% probability
+```
+**Used for**: adding unpredictability and fun, for example random emotes
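The sketch below shows how a planner could combine these four activation types for one incoming message. It is an illustration only, not MaiBot's actual dispatcher: the function names, the cache layout and the stubbed LLM call are assumptions; the attribute names and the 30-second cache window come from the configuration described above.

```python
import random
import time
from enum import Enum, auto


class ActionActivationType(Enum):
    ALWAYS = auto()
    RANDOM = auto()
    KEYWORD = auto()
    LLM_JUDGE = auto()


# Cache of LLM judgements keyed by (action_name, context); entries expire after 30 s,
# mirroring the "results are reused for identical context" behaviour described above.
_judge_cache: dict[tuple[str, str], tuple[float, bool]] = {}
CACHE_TTL = 30.0


async def is_activated(action_cls, context: str, in_focus_mode: bool) -> bool:
    """Decide whether an action should be offered to the planner for this message."""
    activation = (
        action_cls.focus_activation_type if in_focus_mode else action_cls.normal_activation_type
    )

    if activation is ActionActivationType.ALWAYS:
        return True

    if activation is ActionActivationType.RANDOM:
        return random.random() < action_cls.random_activation_probability

    if activation is ActionActivationType.KEYWORD:
        haystack = context if action_cls.keyword_case_sensitive else context.lower()
        keywords = (
            action_cls.activation_keywords
            if action_cls.keyword_case_sensitive
            else [k.lower() for k in action_cls.activation_keywords]
        )
        return any(k in haystack for k in keywords)

    # LLM_JUDGE: ask a model, but reuse a recent answer for the same context.
    key = (action_cls.action_name, context)
    cached = _judge_cache.get(key)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]
    result = await llm_should_activate(action_cls, context)  # hypothetical LLM call
    _judge_cache[key] = (time.time(), result)
    return result


async def llm_should_activate(action_cls, context: str) -> bool:
    """Placeholder for the real LLM judgement; always declines here."""
    return False
```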
+### Parallel Action System 🆕
+
+Actions can execute at the same time as reply generation:
+
+```python
+# Parallel action: runs alongside reply generation
+parallel_action = True   # better user experience; for auxiliary actions
+
+# Serial action: replaces reply generation (the traditional behaviour)
+parallel_action = False  # default; for actions that generate the main content
+```
+
+**Typical uses**:
+- **Parallel actions**: emotional expression, state changes, TTS announcements
+- **Serial actions**: image generation, search queries, content creation
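What "parallel" means in practice can be pictured with a small asyncio sketch (an assumed pipeline, not the real MaiBot scheduler): a parallel action is awaited together with reply generation, while a serial action replaces it.

```python
import asyncio


async def generate_reply(message: str) -> str:
    # Stand-in for the real reply generator.
    await asyncio.sleep(0.1)
    return f"reply to: {message}"


async def run_action(message: str) -> str:
    # Stand-in for an action's process() coroutine.
    await asyncio.sleep(0.1)
    return "action done"


async def handle(message: str, parallel_action: bool) -> str:
    if parallel_action:
        # Parallel: the action and the reply run concurrently; the reply is still sent.
        reply, _ = await asyncio.gather(generate_reply(message), run_action(message))
        return reply
    # Serial: the action replaces reply generation entirely.
    return await run_action(message)


print(asyncio.run(handle("hello", parallel_action=True)))
```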
+### Mode Enable Control
+
+```python
+from src.chat.chat_mode import ChatMode
+
+mode_enable = ChatMode.ALL     # enabled in all modes (default)
+mode_enable = ChatMode.FOCUS   # enabled only in Focus mode
+mode_enable = ChatMode.NORMAL  # enabled only in Normal mode
+```
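A minimal sketch of how such a flag could be used to filter the action set before planning; only `ChatMode` and `mode_enable` come from the snippet above, the surrounding function is an assumption.

```python
from enum import Enum


class ChatMode(Enum):
    ALL = "all"
    FOCUS = "focus"
    NORMAL = "normal"


def filter_actions(actions: list, current_mode: ChatMode) -> list:
    """Keep only enabled actions whose mode_enable allows the current chat mode."""
    return [
        a for a in actions
        if getattr(a, "enable_plugin", True)
        and getattr(a, "mode_enable", ChatMode.ALL) in (ChatMode.ALL, current_mode)
    ]
```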
## Basic Steps

1. Create the plugin file under `src/plugins/你的插件名/actions/`
2. Inherit from the `PluginAction` base class
-3. Implement the `process` method
-4. Import your plugin class in `src/plugins/你的插件名/__init__.py` so the plugin is loaded correctly
+3. Configure the dual activation types and related attributes
+4. Implement the `process` method
+5. Import your plugin class in `src/plugins/你的插件名/__init__.py`

```python
# src/plugins/你的插件名/__init__.py
@@ -28,9 +112,12 @@ __all__ = ["YourAction"]
## Plugin Structure Examples

+### Smart adaptive plugin (recommended)

```python
from src.common.logger_manager import get_logger
-from src.chat.focus_chat.planners.actions.plugin_action import PluginAction, register_action
+from src.chat.focus_chat.planners.actions.plugin_action import PluginAction, register_action, ActionActivationType
+from src.chat.chat_mode import ChatMode
from typing import Tuple

logger = get_logger("your_action_name")
@@ -39,8 +126,21 @@ logger = get_logger("your_action_name")
class YourAction(PluginAction):
    """Description of your action"""

-    action_name = "your_action_name"  # action name, must be unique
+    action_name = "your_action_name"
    action_description = "A detailed description of this action, shown to the user"

+    # 🆕 Dual activation type configuration (smart adaptive pattern)
+    focus_activation_type = ActionActivationType.LLM_JUDGE    # Focus mode uses LLM judgement
+    normal_activation_type = ActionActivationType.KEYWORD     # Normal mode uses keywords
+    activation_keywords = ["关键词1", "关键词2", "keyword"]
+    keyword_case_sensitive = False

+    # 🆕 Mode and parallel control
+    mode_enable = ChatMode.ALL    # supported in all modes
+    parallel_action = False       # adjust as needed
+    enable_plugin = True          # whether the plugin is enabled

+    # Traditional configuration
    action_parameters = {
        "param1": "description of parameter 1 (optional)",
        "param2": "description of parameter 2 (optional)"
@@ -49,9 +149,9 @@ class YourAction(PluginAction):
        "usage scenario 1",
        "usage scenario 2"
    ]
-    default = False  # whether enabled by default
+    default = False

-    associated_types = ["command", "text"]  # message types this plugin can send
+    associated_types = ["text", "command"]

    async def process(self) -> Tuple[bool, str]:
        """Core plugin logic"""
@@ -59,6 +159,105 @@ class YourAction(PluginAction):
        return True, "execution result"
```

+### Keyword-triggered plugin

+```python
+@register_action
+class SearchAction(PluginAction):
+    action_name = "search_action"
+    action_description = "Smart search"
+
+    # Both modes use keyword triggering
+    focus_activation_type = ActionActivationType.KEYWORD
+    normal_activation_type = ActionActivationType.KEYWORD
+    activation_keywords = ["搜索", "查找", "什么是", "search", "find"]
+    keyword_case_sensitive = False
+
+    mode_enable = ChatMode.ALL
+    parallel_action = False
+    enable_plugin = True
+
+    async def process(self) -> Tuple[bool, str]:
+        # search logic
+        return True, "search finished"
+```

+### Parallel auxiliary action

+```python
+@register_action
+class EmotionAction(PluginAction):
+    action_name = "emotion_action"
+    action_description = "Emotional expression action"
+
+    focus_activation_type = ActionActivationType.LLM_JUDGE
+    normal_activation_type = ActionActivationType.RANDOM
+    random_activation_probability = 0.05  # 5% probability
+
+    mode_enable = ChatMode.ALL
+    parallel_action = True  # 🆕 runs in parallel with the reply
+    enable_plugin = True
+
+    async def process(self) -> Tuple[bool, str]:
+        # emotional expression logic
+        return True, ""  # parallel actions usually return no text
+```

+### Focus-only advanced feature

+```python
+@register_action
+class AdvancedAnalysisAction(PluginAction):
+    action_name = "advanced_analysis"
+    action_description = "Advanced analysis"
+
+    focus_activation_type = ActionActivationType.LLM_JUDGE
+    normal_activation_type = ActionActivationType.ALWAYS  # has no effect
+
+    mode_enable = ChatMode.FOCUS  # 🆕 enabled only in Focus mode
+    parallel_action = False
+    enable_plugin = True
+```

+## Recommended Configuration Patterns

+### Pattern 1: smart adaptive (recommended)
+```python
+# LLM judgement in Focus mode, fast trigger in Normal mode
+focus_activation_type = ActionActivationType.LLM_JUDGE
+normal_activation_type = ActionActivationType.KEYWORD
+activation_keywords = ["相关", "关键词"]
+mode_enable = ChatMode.ALL
+parallel_action = False  # adjust to the specific need
+```

+### Pattern 2: keywords in both modes
+```python
+# Both modes use keywords, for consistent behaviour
+focus_activation_type = ActionActivationType.KEYWORD
+normal_activation_type = ActionActivationType.KEYWORD
+activation_keywords = ["画", "图片", "生成"]
+mode_enable = ChatMode.ALL
+```

+### Pattern 3: Focus-only feature
+```python
+# An advanced feature enabled only in Focus mode
+focus_activation_type = ActionActivationType.LLM_JUDGE
+mode_enable = ChatMode.FOCUS
+parallel_action = False
+```

+### Pattern 4: random fun feature
+```python
+# A random feature to add some fun
+focus_activation_type = ActionActivationType.RANDOM
+normal_activation_type = ActionActivationType.RANDOM
+random_activation_probability = 0.08  # 8% probability
+mode_enable = ChatMode.ALL
+parallel_action = True  # usually runs in parallel with the reply
+```

## Available API Methods

Plugins can use the following APIs provided by the `PluginAction` base class:
@@ -79,19 +278,13 @@ await self.send_message(
    display_message=f"我 禁言了 {target} {duration_str}秒",
)
```
-Sends the message directly as raw text.
-`type` specifies the message type.
-`data` is the content to send.

### 2. Sending messages through the expressor

```python
await self.send_message_by_expressor("你好")

await self.send_message_by_expressor(f"禁言{target} {duration}秒,因为{reason}")
```
-Sends the message through the expressor: an LLM rewrites it to match the bot's language style before sending.
-Only text can be sent this way.

### 3. Getting the chat type
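A usage sketch combining the two send APIs inside `process()`; the keyword arguments follow the descriptions above (`type`, `data`, `display_message`), but the exact signatures are not shown in this excerpt.

```python
# Sketch only: fragment of a PluginAction subclass, not the library's reference usage.
async def process(self) -> Tuple[bool, str]:
    # 1. Raw send: type selects the message type, data is the payload.
    await self.send_message(type="text", data="收到", display_message="我 回复了一条消息")
    # 2. Expressor send: an LLM rewrites the text in the bot's own style (text only).
    await self.send_message_by_expressor("打个招呼")
    return True, "done"
```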
@@ -159,16 +352,173 @@ return True, "执行成功的消息"
return False, "reason the action failed"
```

+## Performance Tuning Advice

+### 1. Choosing an activation type
+- **ALWAYS**: only for basic required actions
+- **KEYWORD**: explicit command-style actions, best performance
+- **LLM_JUDGE**: complex judgement, recommended only in Focus mode
+- **RANDOM**: fun features, low trigger probability

+### 2. Dual-mode configuration
+- **Smart adaptive**: LLM_JUDGE in Focus, KEYWORD in Normal (recommended)
+- **Performance first**: KEYWORD or RANDOM in both modes
+- **Feature separation**: advanced features enabled only in Focus mode

+### 3. Using parallel actions
+- **parallel_action = True**: auxiliary actions that do not generate the main content
+- **parallel_action = False**: main content generation, actions that need full attention

+### 4. Optimising LLM judgement
+- write clear activation-condition descriptions
+- avoid overly complex judgement logic
+- rely on the smart cache (automatic)
+- avoid LLM_JUDGE in Normal mode

+### 5. Keyword design
+- include synonyms and English equivalents
+- account for users' different phrasings
+- avoid keywords that are too broad
+- adjust coverage based on real usage

+## Migration Guide ⚠️

+### What changed
+**The old `action_activation_type` attribute has been removed**; you must update to the new dual activation type system.

+### Quick migration steps

+#### Step 1: update the basic attributes
+```python
+# Old configuration (deprecated) ❌
+class OldAction(BaseAction):
+    action_activation_type = ActionActivationType.LLM_JUDGE
+
+# New configuration (required) ✅
+class NewAction(BaseAction):
+    focus_activation_type = ActionActivationType.LLM_JUDGE
+    normal_activation_type = ActionActivationType.KEYWORD
+    activation_keywords = ["相关", "关键词"]
+    mode_enable = ChatMode.ALL
+    parallel_action = False
+    enable_plugin = True
+```

+#### Step 2: pick the strategy matching the old type
+```python
+# Was ALWAYS
+focus_activation_type = ActionActivationType.ALWAYS
+normal_activation_type = ActionActivationType.ALWAYS
+
+# Was LLM_JUDGE
+focus_activation_type = ActionActivationType.LLM_JUDGE
+normal_activation_type = ActionActivationType.KEYWORD  # add keywords
+activation_keywords = ["需要", "添加", "关键词"]
+
+# Was KEYWORD
+focus_activation_type = ActionActivationType.KEYWORD
+normal_activation_type = ActionActivationType.KEYWORD
+# keep the existing activation_keywords
+
+# Was RANDOM
+focus_activation_type = ActionActivationType.RANDOM
+normal_activation_type = ActionActivationType.RANDOM
+# keep the existing random_activation_probability
+```

+#### Step 3: configure the new features
+```python
+# Mode control
+mode_enable = ChatMode.ALL  # or ChatMode.FOCUS / ChatMode.NORMAL
+
+# Parallel control
+parallel_action = False  # True or False depending on the action
+
+# Plugin switch
+enable_plugin = True  # whether this plugin is enabled
+```

## Best Practices

-1. Use `action_parameters` to clearly define the parameters your action needs
-2. Use `action_require` to describe when your action should be used
-3. Use `action_description` to accurately describe what your action does
-4. Use `logger` to record important information, to make debugging easier
-5. Avoid touching the underlying system; prefer the APIs provided by `PluginAction`
+### 1. Code organisation
+- use a clear `action_description` of the feature
+- use `action_parameters` to define the required parameters
+- use `action_require` to describe the usage scenarios
+- use `logger` to record important information, to make debugging easier

+### 2. Performance
+- prefer KEYWORD triggers; they perform best
+- avoid LLM_JUDGE in Normal mode
+- set a sensible random probability (0.05-0.3)
+- rely on the smart cache (automatic)

+### 3. User experience
+- parallel actions improve responsiveness
+- make keywords cover users' common phrasings
+- handle errors and give friendly feedback
+- avoid touching the underlying system

+### 4. Compatibility
+- support both Chinese and English keywords
+- consider users' needs in the different chat modes
+- provide sensible defaults
+- stay backward compatible with existing user habits

## Registration and Loading

-Plugins are loaded automatically at system startup, as long as they are placed in the correct directory and decorated with `@register_action`.
+Plugins are loaded automatically at system startup, as long as they:
+1. are placed in the correct directory structure
+2. are decorated with `@register_action`
+3. are imported correctly in `__init__.py`

If `default = True` is set, the plugin is automatically added to the default action set and enabled; otherwise it is only loaded, not enabled.
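The decorator-plus-registry pattern this describes can be pictured as follows; this is an illustration of the general pattern, not the actual `register_action` implementation.

```python
# Illustrative only: a decorator-based action registry in the spirit of @register_action.
_ACTION_REGISTRY: dict[str, type] = {}
DEFAULT_ACTION_SET: set[str] = set()


def register_action(cls):
    """Register an action class under its action_name; add it to the default set if default=True."""
    _ACTION_REGISTRY[cls.action_name] = cls
    if getattr(cls, "default", False):
        DEFAULT_ACTION_SET.add(cls.action_name)
    return cls


@register_action
class DemoAction:
    action_name = "demo_action"
    default = True
```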
+## Debugging and Testing

+### Performance monitoring
+The system automatically records the following performance metrics:
+```python
+logger.debug(f"激活判定:{before_count} -> {after_count} actions")
+logger.debug(f"并行LLM判定完成,耗时: {duration:.2f}s")
+logger.debug(f"使用缓存结果 {action_name}: {'激活' if result else '未激活'}")
+```
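A duration like the one in the second log line above would typically be measured around the gathered judgement tasks; a hedged sketch, not the actual MaiBot code:

```python
import asyncio
import time


async def judge(name: str) -> bool:
    await asyncio.sleep(0.05)  # stand-in for one LLM judgement
    return True


async def run_parallel_judgements(names: list[str]) -> list[bool]:
    start = time.monotonic()
    results = await asyncio.gather(*(judge(n) for n in names))
    duration = time.monotonic() - start
    print(f"并行LLM判定完成,耗时: {duration:.2f}s")
    return results


asyncio.run(run_parallel_judgements(["emotion_action", "search_action"]))
```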
+### Test verification
+Use the test script to verify your configuration:
+```bash
+python test_action_activation.py
+```
+
+The script shows:
+- the dual activation type configuration of every registered action
+- simulated activation results in the different modes
+- the working state of the parallel action system
+- whether your configuration is correct

+## System Strengths

+### 1. Performance
+- **Parallel judgement**: multiple LLM judgements run at the same time
+- **Smart caching**: avoids repeated computation
+- **Dual-mode optimisation**: intelligent in Focus, fast in Normal
+- **Expected speed-up**: 3-5x

+### 2. Intelligence
+- **Context aware**: activates based on the chat content
+- **Dynamic configuration**: keywords are collected from action configuration
+- **Conflict avoidance**: prevents duplicate activation
+- **Mode adaptive**: picks the best strategy for the current chat mode

+### 3. Extensibility
+- **Pluggable**: new activation types are easy to add
+- **Configuration driven**: behaviour is controlled through configuration
+- **Modular**: each component can be tested independently
+- **Dual-mode support**: adapts flexibly to different usage scenarios

+### 4. User experience
+- **Responsiveness**: noticeably faster bot reactions
+- **Smart decisions**: understands user intent precisely
+- **Smooth interaction**: parallel actions reduce waiting time
+- **Adaptable**: different modes serve different needs

+This upgraded plugin system gives MaiBot powerful and flexible extensibility while keeping performance high and the user experience intelligent.
@@ -12,7 +12,6 @@ from src.chat.utils.timer_calculator import Timer # <--- Import Timer
from src.chat.emoji_system.emoji_manager import emoji_manager
from src.chat.focus_chat.heartFC_sender import HeartFCSender
from src.chat.utils.utils import process_llm_response
-from src.chat.utils.info_catcher import info_catcher_manager
from src.chat.heart_flow.utils_chat import get_chat_type_and_target_info
from src.chat.message_receive.chat_stream import ChatStream
from src.chat.focus_chat.hfc_utils import parse_thinking_id_to_timestamp
@@ -186,9 +185,6 @@ class DefaultExpressor:
        # current_temp = float(global_config.model.normal["temp"]) * arousal_multiplier
        # self.express_model.params["temperature"] = current_temp  # adjust temperature dynamically

-        # 2. Get the info catcher
-        info_catcher = info_catcher_manager.get_info_catcher(thinking_id)
-
        # --- Determine sender_name for private chat ---
        sender_name_for_prompt = "某人"  # Default for group or if info unavailable
        if not self.is_group_chat and self.chat_target_info:
@@ -227,14 +223,10 @@ class DefaultExpressor:
            # logger.info(f"{self.log_prefix}[Replier-{thinking_id}]\nPrompt:\n{prompt}\n")
            content, (reasoning_content, model_name) = await self.express_model.generate_response_async(prompt)

-            # logger.info(f"{self.log_prefix}\nPrompt:\n{prompt}\n---------------------------\n")

            logger.info(f"想要表达:{in_mind_reply}||理由:{reason}")
            logger.info(f"最终回复: {content}\n")

-            info_catcher.catch_after_llm_generated(
-                prompt=prompt, response=content, reasoning_content=reasoning_content, model_name=model_name
-            )
-
        except Exception as llm_e:
            # Trimmed error message
@@ -110,7 +110,9 @@ class HeartFCSender:
            message.set_reply()
            logger.debug(f"[{chat_id}] 应用 set_reply 逻辑: {message.processed_plain_text[:20]}...")

+        # print(f"message.display_message: {message.display_message}")
        await message.process()
+        # print(f"message.display_message: {message.display_message}")

        if typing:
            if has_thinking:
@@ -12,7 +12,6 @@ from src.chat.utils.timer_calculator import Timer # <--- Import Timer
from src.chat.emoji_system.emoji_manager import emoji_manager
from src.chat.focus_chat.heartFC_sender import HeartFCSender
from src.chat.utils.utils import process_llm_response
-from src.chat.utils.info_catcher import info_catcher_manager
from src.chat.heart_flow.utils_chat import get_chat_type_and_target_info
from src.chat.message_receive.chat_stream import ChatStream
from src.chat.focus_chat.hfc_utils import parse_thinking_id_to_timestamp
@@ -238,8 +237,6 @@ class DefaultReplyer:
        # current_temp = float(global_config.model.normal["temp"]) * arousal_multiplier
        # self.express_model.params["temperature"] = current_temp  # adjust temperature dynamically

-        # 2. Get the info catcher
-        info_catcher = info_catcher_manager.get_info_catcher(thinking_id)
-
        reply_to = action_data.get("reply_to", "none")

@@ -286,10 +283,6 @@ class DefaultReplyer:
            # logger.info(f"prompt: {prompt}")
            logger.info(f"最终回复: {content}")

-            info_catcher.catch_after_llm_generated(
-                prompt=prompt, response=content, reasoning_content=reasoning_content, model_name=model_name
-            )
-
        except Exception as llm_e:
            # Trimmed error message
            logger.error(f"{self.log_prefix}LLM 生成失败: {llm_e}")
@@ -24,6 +24,7 @@ class MessageStorage:
        else:
            filtered_processed_plain_text = ""

+
        if isinstance(message, MessageSending):
            display_message = message.display_message
            if display_message:
@@ -8,7 +8,6 @@ from src.common.logger_manager import get_logger
from src.chat.heart_flow.utils_chat import get_chat_type_and_target_info
from src.manager.mood_manager import mood_manager
from src.chat.message_receive.chat_stream import ChatStream, chat_manager
-from src.chat.utils.info_catcher import info_catcher_manager
from src.chat.utils.timer_calculator import Timer
from src.chat.utils.prompt_builder import global_prompt_manager
from .normal_chat_generator import NormalChatGenerator
@@ -277,9 +276,6 @@ class NormalChat:

        logger.debug(f"[{self.stream_name}] 创建捕捉器,thinking_id:{thinking_id}")

-        info_catcher = info_catcher_manager.get_info_catcher(thinking_id)
-        info_catcher.catch_decide_to_response(message)
-
        # If the planner is enabled, adjust the available actions up front (avoids repeated calls in the parallel task)
        available_actions = None
        if self.enable_planner:
@@ -373,8 +369,6 @@ class NormalChat:
        if isinstance(response_set, Exception):
            logger.error(f"[{self.stream_name}] 回复生成异常: {response_set}")
            response_set = None
-        elif response_set:
-            info_catcher.catch_after_generate_response(timing_results["并行生成回复和规划"])

        # Handle the planning result (optional; does not affect the reply)
        if isinstance(plan_result, Exception):
@@ -414,7 +408,6 @@ class NormalChat:

        # Check whether first_bot_msg is None (e.g. the thinking message was already removed)
        if first_bot_msg:
-            info_catcher.catch_after_response(timing_results["消息发送"], response_set, first_bot_msg)

            # Record the reply in the recent-replies list
            reply_info = {
@@ -447,8 +440,6 @@ class NormalChat:
            # await self._check_switch_to_focus()
            pass

-        info_catcher.done_catch()
-
        with Timer("处理表情包", timing_results):
            await self._handle_emoji(message, response_set[0])
@@ -133,6 +133,7 @@ class NormalChatExpressor:
            thinking_start_time=time.time(),
            reply_to=mark_head,
            is_emoji=is_emoji,
+            display_message=display_message,
        )
        logger.debug(f"{self.log_prefix} 添加{response_type}类型消息: {content}")

@@ -167,6 +168,7 @@ class NormalChatExpressor:
        thinking_start_time: float,
        reply_to: bool = False,
        is_emoji: bool = False,
+        display_message: str = "",
    ) -> MessageSending:
        """Build the outgoing message

@@ -197,6 +199,7 @@ class NormalChatExpressor:
            reply=anchor_message if reply_to else None,
            thinking_start_time=thinking_start_time,
            is_emoji=is_emoji,
+            display_message=display_message,
        )

        return message_sending
@@ -6,7 +6,6 @@ from src.chat.message_receive.message import MessageThinking
from src.chat.normal_chat.normal_prompt import prompt_builder
from src.chat.utils.timer_calculator import Timer
from src.common.logger_manager import get_logger
-from src.chat.utils.info_catcher import info_catcher_manager
from src.person_info.person_info import person_info_manager
from src.chat.utils.utils import process_llm_response

@@ -69,7 +68,6 @@ class NormalChatGenerator:
        enable_planner: bool = False,
        available_actions=None,
    ):
-        info_catcher = info_catcher_manager.get_info_catcher(thinking_id)

        person_id = person_info_manager.get_person_id(
            message.chat_stream.user_info.platform, message.chat_stream.user_info.user_id
@@ -105,9 +103,6 @@ class NormalChatGenerator:

            logger.info(f"对 {message.processed_plain_text} 的回复:{content}")

-            info_catcher.catch_after_llm_generated(
-                prompt=prompt, response=content, reasoning_content=reasoning_content, model_name=self.current_model_name
-            )
-
        except Exception:
            logger.exception("生成回复时出错")
@@ -150,7 +150,7 @@ class NormalChatPlanner:
        try:
            content, (reasoning_content, model_name) = await self.planner_llm.generate_response_async(prompt)

-            logger.info(f"{self.log_prefix}规划器原始提示词: {prompt}")
+            # logger.info(f"{self.log_prefix}规划器原始提示词: {prompt}")
            logger.info(f"{self.log_prefix}规划器原始响应: {content}")
            logger.info(f"{self.log_prefix}规划器推理: {reasoning_content}")
            logger.info(f"{self.log_prefix}规划器模型: {model_name}")
@@ -1,223 +0,0 @@
-from src.config.config import global_config
-from src.chat.message_receive.message import MessageRecv, MessageSending, Message
-from src.common.database.database_model import Messages, ThinkingLog
-import time
-import traceback
-from typing import List
-import json
-
-
-class InfoCatcher:
-    def __init__(self):
-        self.chat_history = []  # chat history; three times the context length in use
-        self.chat_history_in_thinking = []  # chat that happened while thinking
-        self.chat_history_after_response = []  # chat after the response; one context length
-
-        self.chat_id = ""
-        self.trigger_response_text = ""
-        self.response_text = ""
-
-        self.trigger_response_time = 0
-        self.trigger_response_message = None
-
-        self.response_time = 0
-        self.response_messages = []
-
-        # Data for heartflow mode, stored in a dict
-        self.heartflow_data = {
-            "heart_flow_prompt": "",
-            "sub_heartflow_before": "",
-            "sub_heartflow_now": "",
-            "sub_heartflow_after": "",
-            "sub_heartflow_model": "",
-            "prompt": "",
-            "response": "",
-            "model": "",
-        }
-
-        # Data for reasoning mode, stored in a dict
-        self.reasoning_data = {"thinking_log": "", "prompt": "", "response": "", "model": ""}
-
-        # Timings
-        self.timing_results = {
-            "interested_rate_time": 0,
-            "sub_heartflow_observe_time": 0,
-            "sub_heartflow_step_time": 0,
-            "make_response_time": 0,
-        }
-
-    def catch_decide_to_response(self, message: MessageRecv):
-        # Collect information at the moment the bot decides to respond
-        self.trigger_response_message = message
-        self.trigger_response_text = message.detailed_plain_text
-
-        self.trigger_response_time = time.time()
-
-        self.chat_id = message.chat_stream.stream_id
-
-        self.chat_history = self.get_message_from_db_before_msg(message)
-
-    def catch_after_observe(self, obs_duration: float):  # more information could go here
-        self.timing_results["sub_heartflow_observe_time"] = obs_duration
-
-    def catch_afer_shf_step(self, step_duration: float, past_mind: str, current_mind: str):
-        self.timing_results["sub_heartflow_step_time"] = step_duration
-        if len(past_mind) > 1:
-            self.heartflow_data["sub_heartflow_before"] = past_mind[-1]
-            self.heartflow_data["sub_heartflow_now"] = current_mind
-        else:
-            self.heartflow_data["sub_heartflow_before"] = past_mind[-1]
-            self.heartflow_data["sub_heartflow_now"] = current_mind
-
-    def catch_after_llm_generated(self, prompt: str, response: str, reasoning_content: str = "", model_name: str = ""):
-        self.reasoning_data["thinking_log"] = reasoning_content
-        self.reasoning_data["prompt"] = prompt
-        self.reasoning_data["response"] = response
-        self.reasoning_data["model"] = model_name
-
-        self.response_text = response
-
-    def catch_after_generate_response(self, response_duration: float):
-        self.timing_results["make_response_time"] = response_duration
-
-    def catch_after_response(
-        self, response_duration: float, response_message: List[str], first_bot_msg: MessageSending
-    ):
-        self.timing_results["make_response_time"] = response_duration
-        self.response_time = time.time()
-        self.response_messages = []
-        for msg in response_message:
-            self.response_messages.append(msg)
-
-        self.chat_history_in_thinking = self.get_message_from_db_between_msgs(
-            self.trigger_response_message, first_bot_msg
-        )
-
-    @staticmethod
-    def get_message_from_db_between_msgs(message_start: Message, message_end: Message):
-        try:
-            time_start = message_start.message_info.time
-            time_end = message_end.message_info.time
-            chat_id = message_start.chat_stream.stream_id
-
-            # print(f"查询参数: time_start={time_start}, time_end={time_end}, chat_id={chat_id}")
-
-            messages_between_query = (
-                Messages.select()
-                .where((Messages.chat_id == chat_id) & (Messages.time > time_start) & (Messages.time < time_end))
-                .order_by(Messages.time.desc())
-            )
-
-            result = list(messages_between_query)
-            # print(f"查询结果数量: {len(result)}")
-            # if result:
-            # print(f"第一条消息时间: {result[0].time}")
-            # print(f"最后一条消息时间: {result[-1].time}")
-            return result
-        except Exception as e:
-            print(f"获取消息时出错: {str(e)}")
-            print(traceback.format_exc())
-            return []
-
-    def get_message_from_db_before_msg(self, message: MessageRecv):
-        message_id_val = message.message_info.message_id
-        chat_id_val = message.chat_stream.stream_id
-
-        messages_before_query = (
-            Messages.select()
-            .where((Messages.chat_id == chat_id_val) & (Messages.message_id < message_id_val))
-            .order_by(Messages.time.desc())
-            .limit(global_config.focus_chat.observation_context_size * 3)
-        )
-
-        return list(messages_before_query)
-
-    def message_list_to_dict(self, message_list):
-        result = []
-        for msg_item in message_list:
-            processed_msg_item = msg_item
-            if not isinstance(msg_item, dict):
-                processed_msg_item = self.message_to_dict(msg_item)
-
-            if not processed_msg_item:
-                continue
-
-            lite_message = {
-                "time": processed_msg_item.get("time"),
-                "user_nickname": processed_msg_item.get("user_nickname"),
-                "processed_plain_text": processed_msg_item.get("processed_plain_text"),
-            }
-            result.append(lite_message)
-        return result
-
-    @staticmethod
-    def message_to_dict(msg_obj):
-        if not msg_obj:
-            return None
-        if isinstance(msg_obj, dict):
-            return msg_obj
-
-        if isinstance(msg_obj, Messages):
-            return {
-                "time": msg_obj.time,
-                "user_id": msg_obj.user_id,
-                "user_nickname": msg_obj.user_nickname,
-                "processed_plain_text": msg_obj.processed_plain_text,
-            }
-
-        if hasattr(msg_obj, "message_info") and hasattr(msg_obj.message_info, "user_info"):
-            return {
-                "time": msg_obj.message_info.time,
-                "user_id": msg_obj.message_info.user_info.user_id,
-                "user_nickname": msg_obj.message_info.user_info.user_nickname,
-                "processed_plain_text": msg_obj.processed_plain_text,
-            }
-
-        print(f"Warning: message_to_dict received an unhandled type: {type(msg_obj)}")
-        return {}
-
-    def done_catch(self):
-        """Store the collected information in the thinking_log table"""
-        try:
-            trigger_info_dict = self.message_to_dict(self.trigger_response_message)
-            response_info_dict = {
-                "time": self.response_time,
-                "message": self.response_messages,
-            }
-            chat_history_list = self.message_list_to_dict(self.chat_history)
-            chat_history_in_thinking_list = self.message_list_to_dict(self.chat_history_in_thinking)
-            chat_history_after_response_list = self.message_list_to_dict(self.chat_history_after_response)
-
-            log_entry = ThinkingLog(
-                chat_id=self.chat_id,
-                trigger_text=self.trigger_response_text,
-                response_text=self.response_text,
-                trigger_info_json=json.dumps(trigger_info_dict) if trigger_info_dict else None,
-                response_info_json=json.dumps(response_info_dict),
-                timing_results_json=json.dumps(self.timing_results),
-                chat_history_json=json.dumps(chat_history_list),
-                chat_history_in_thinking_json=json.dumps(chat_history_in_thinking_list),
-                chat_history_after_response_json=json.dumps(chat_history_after_response_list),
-                heartflow_data_json=json.dumps(self.heartflow_data),
-                reasoning_data_json=json.dumps(self.reasoning_data),
-            )
-            log_entry.save()
-
-            return True
-        except Exception as e:
-            print(f"存储思考日志时出错: {str(e)} 喵~")
-            print(traceback.format_exc())
-            return False
-
-
-class InfoCatcherManager:
-    def __init__(self):
-        self.info_catchers = {}
-
-    def get_info_catcher(self, thinking_id: str) -> InfoCatcher:
-        if thinking_id not in self.info_catchers:
-            self.info_catchers[thinking_id] = InfoCatcher()
-        return self.info_catchers[thinking_id]
-
-
-info_catcher_manager = InfoCatcherManager()
@@ -71,6 +71,22 @@ def update_config():
        # If it is an empty array, make sure it stays an empty array
        if not value:
            target[key] = tomlkit.array()
+        else:
+            # Special handling for regex arrays and structures that contain regexes
+            if key == "ban_msgs_regex":
+                # Use the original value directly, no extra processing
+                target[key] = value
+            elif key == "regex_rules":
+                # regex_rules needs special handling of its regex fields
+                target[key] = value
+            else:
+                # Check whether the value contains regex-related dict items
+                contains_regex = False
+                if value and isinstance(value[0], dict) and "regex" in value[0]:
+                    contains_regex = True
+
+                if contains_regex:
+                    target[key] = value
                else:
                    target[key] = tomlkit.array(value)
        else:
@@ -22,6 +22,7 @@ class MuteAction(PluginAction):
        "当有人刷屏时使用",
        "当有人发了擦边,或者色情内容时使用",
        "当有人要求禁言自己时使用",
+        "如果某人已经被禁言了,就不要再次禁言了,除非你想追加时间!!"
    ]
    enable_plugin = True  # enable the plugin
    associated_types = ["command", "text"]
@@ -66,7 +67,7 @@ class MuteAction(PluginAction):
    mode_enable = ChatMode.ALL

    # Parallel execution setting - the mute action can run in parallel with the reply without overwriting its content
-    parallel_action = True
+    parallel_action = False

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
@@ -1,5 +1,5 @@
[inner]
-version = "2.15.1"
+version = "2.16.0"

#---- The following is for developers; if you only deployed MaiBot, you do not need to read it ----
# If you modify the config file, change the value of version after your changes