Commit by SengokuCola, 2025-04-08 23:12:00 +08:00
45 changed files with 1177 additions and 1224 deletions

CLAUDE.md (new file, 20 lines)

@@ -0,0 +1,20 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Commands
- **Run Bot**: `python bot.py`
- **Lint**: `ruff check --fix .` or `ruff format .`
- **Run Tests**: `python -m unittest discover -v`
- **Run Single Test**: `python -m unittest src/plugins/message/test.py`
## Code Style
- **Formatting**: Line length 120 chars, use double quotes for strings
- **Imports**: Group standard library, external packages, then internal imports
- **Naming**: snake_case for functions/variables, PascalCase for classes
- **Error Handling**: Use try/except blocks with specific exceptions
- **Types**: Use type hints where possible
- **Docstrings**: Document classes and complex functions
- **Linting**: Follow ruff rules (E, F, B) with ignores E711, E501
When making changes, run `ruff check --fix .` to ensure code follows style guidelines. The codebase uses Ruff for linting and formatting.

README.md (153 lines changed)

@@ -1,24 +1,66 @@
# 麦麦 MaiCore-MaiMBot (work in progress)
## Before deploying the new 0.6.0 release, first read https://docs.mai-mai.org/manual/usage/mmc_q_a
<br />
<div align="center">
![Python Version](https://img.shields.io/badge/Python-3.9+-blue)
![License](https://img.shields.io/github/license/SengokuCola/MaiMBot)
![License](https://img.shields.io/github/license/SengokuCola/MaiMBot?label=协议)
![Status](https://img.shields.io/badge/状态-开发中-yellow)
![Contributors](https://img.shields.io/github/contributors/MaiM-with-u/MaiBot.svg?style=flat&label=贡献者)
![forks](https://img.shields.io/github/forks/MaiM-with-u/MaiBot.svg?style=flat&label=分支数)
![stars](https://img.shields.io/github/stars/MaiM-with-u/MaiBot?style=flat&label=星标数)
![issues](https://img.shields.io/github/issues/MaiM-with-u/MaiBot)
</div>
<p align="center">
<a href="https://github.com/MaiM-with-u/MaiBot/">
<img src="depends-data/maimai.png" alt="Logo" width="200">
</a>
<br />
<a href="https://space.bilibili.com/1344099355">
Artist: 略nd
</a>
<h3 align="center">MaiBot(麦麦)</h3>
<p align="center">
A cyber netizen focused on <strong> group chat </strong>
<br />
<a href="https://docs.mai-mai.org"><strong>Explore the project documentation »</strong></a>
<br />
<br />
<!-- <a href="https://github.com/shaojintian/Best_README_template">View Demo</a>
· -->
<a href="https://github.com/MaiM-with-u/MaiBot/issues">Report a Bug</a>
·
<a href="https://github.com/MaiM-with-u/MaiBot/issues">Request a Feature</a>
</p>
</p>
## Before deploying the new 0.6.0 release, first read https://docs.mai-mai.org/manual/usage/mmc_q_a
## 📝 Project Overview
**🍔 MaiCore is an interactive agent built on large language models**
- LLM-powered conversation
- Dynamic prompt builder
- Real-time thought system
- MongoDB for data persistence
- Extensible: supports multiple platforms and features
- 💭 **Intelligent dialogue system**: LLM-based natural-language interaction
- 🤔 **Real-time thought system**: simulates a human thinking process
- 💝 **Emotional expression system**: rich stickers and mood expression
- 🧠 **Persistent memory system**: long-term memory storage backed by MongoDB
- 🔄 **Dynamic personality system**: adaptive character traits
<div align="center">
<a href="https://www.bilibili.com/video/BV1amAneGE3P" target="_blank">
<img src="depends-data/video.png" width="200" alt="麦麦演示视频">
<br>
👆 Click to watch the 麦麦 demo video 👆
</a>
</div>
### 📢 Release Info
**Latest version: v0.6.0** ([view changelog](changelogs/changelog.md))
> [!WARNING]
@@ -28,19 +70,12 @@
> Starting with this version, MaiBot runs on MaiCore and no longer depends on nonebot components.
> MaiBot connects to nonebot through a nonebot plugin; nonebot then connects to QQ, which is how MaiBot interacts with QQ.
**Branch overview:**
- main: stable release
- dev: development build (if you don't know what this means, don't download it)
- classical: pre-0.6.0 versions
**Branch guide:**
- `main`: stable release versions
- `dev`: development/testing versions (if you don't know what this means, don't download it)
- `classical`: pre-0.6.0 versions
<div align="center">
<a href="https://www.bilibili.com/video/BV1amAneGE3P" target="_blank">
<img src="docs/pic/video.png" width="300" alt="麦麦演示视频">
<br>
👆 Click to watch the 麦麦 demo video 👆
</a>
</div>
> [!WARNING]
> - The project is under active development; code may change at any time
@@ -49,6 +84,12 @@
> - Ongoing iteration means there may be known or unknown bugs
> - Active development may consume a relatively large number of tokens
### ⚠️ Important Notes
- Before upgrading to v0.6.0, be sure to read the [upgrade guide](https://docs.mai-mai.org/manual/usage/mmc_q_a)
- This release is rebuilt on MaiCore and interacts with the QQ platform through a nonebot plugin
- The project is under active development; features and APIs may change at any time
### 💬 Chat Groups (development and suggestion discussions; replies not guaranteed, docs and code come first)
- [Group 5](https://qm.qq.com/q/JxvHZnxyec) 1022489779
- [Group 1](https://qm.qq.com/q/VQ3XZrWgMs) 766798517 [full]
@@ -72,55 +113,35 @@
## 🎯 Features
### 💬 Chat
- Two dialogue modes: thought-flow (heartflow) chat and reasoning chat
- Keyword-triggered proactive replies: identifies the topic of each message and speaks up when it matches a topic 麦麦 has stored
- Name-triggered replies: speaks up when "麦麦" is detected (configurable)
- Multi-model, multi-vendor custom configuration
- Dynamic prompt builder for more human-like responses
- Recognizes images, forwarded messages, and quoted replies
- Private chat with goal-directed multi-turn conversation via PFC mode (experimental)
| Module | Key Features | Highlight |
|------|---------|------|
| 💬 Chat system | • Thought-flow/reasoning chat<br>• Keyword-triggered replies<br>• Multi-model support<br>• Dynamic prompt building<br>• Private chat (PFC) | Human-like interaction |
| 🧠 Thought-flow system | • Real-time thought generation<br>• Automatic start/stop<br>• Schedule-system integration | Intelligent decisions |
| 🧠 Memory system 2.0 | • Improved memory extraction<br>• Hippocampus-style memory mechanism<br>• Chat-log summarization | Persistent memory |
| 😊 Sticker system | • Mood-matched sending<br>• GIF support<br>• Automatic collection and review | Rich expression |
| 📅 Schedule system | • Dynamic schedule generation<br>• Configurable imagination<br>• Thought-flow integration | Smart planning |
| 👥 Relationship system 2.0 | • Improved relationship management<br>• Rich interface support<br>• Personalized interaction | Deep social ties |
| 📊 Statistics system | • Usage statistics<br>• LLM call logging<br>• Live console display | Data visibility |
| 🔧 System features | • Graceful shutdown<br>• Automatic data saving<br>• Robust exception handling | Stable and reliable |
### 🧠 Thought-flow System
- Thinks before and after each reply, generating real-time thoughts
- Starts and stops automatically to improve resource utilization
- Works with the schedule system to generate schedules dynamically
## 📐 Project Architecture
### 🧠 Memory System 2.0
- Improved memory-extraction strategy and prompt structure
- Refined hippocampus-style memory retrieval for more natural recall
- Summarizes chat logs for storage and recalls them when needed
```mermaid
graph TD
A[MaiCore] --> B[Dialogue system]
A --> C[Thought-flow system]
A --> D[Memory system]
A --> E[Emotion system]
B --> F[Multi-model support]
B --> G[Dynamic prompts]
C --> H[Real-time thinking]
C --> I[Schedule integration]
D --> J[Memory storage]
D --> K[Memory retrieval]
E --> L[Sticker management]
E --> M[Emotion recognition]
```
### 😊 Sticker System
- Sends stickers matching the mood of the message
- Recognizes and handles GIF stickers
- Automatically collects stickers from group members
- Sticker content review
- Automatic sticker-file integrity checks
- Automatic cleanup of cached images
### 📅 Schedule System
- Dynamically updated schedule generation
- Configurable level of imagination
- Interacts with ongoing chats (in thought-flow mode)
### 👥 Relationship System 2.0
- Relationship management reworked for the new version
- Richer relationship interfaces
- Builds a per-user "relationship" for personalized replies
### 📊 Statistics System
- Detailed usage statistics
- LLM call statistics
- Displays statistics in the console
### 🔧 System Features
- Graceful shutdown support
- Auto-save: periodically saves chat logs and relationship data
- Robust exception handling
- Configurable time zone
- Improved log formatting
- Automatic config updates
## Development Plan (TODO list)

depends-data/maimai.png (new binary file, 455 KiB)

depends-data/video.png (new binary file, 62 KiB)


@@ -42,7 +42,6 @@ class Heartflow:
self._subheartflows = {}
self.active_subheartflows_nums = 0
async def _cleanup_inactive_subheartflows(self):
"""定期清理不活跃的子心流"""
while True:
@@ -98,11 +97,8 @@ class Heartflow:
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
personality_info = prompt_personality
current_thinking_info = self.current_mind
mood_info = self.current_state.mood
related_memory_info = "memory"
@@ -160,8 +156,6 @@ class Heartflow:
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
personality_info = prompt_personality
mood_info = self.current_state.mood


@@ -7,6 +7,7 @@ from src.common.database import db
from src.individuality.individuality import Individuality
import random
# 所有观察的基类
class Observation:
def __init__(self, observe_type, observe_id):
@@ -131,12 +132,8 @@ class ChattingObservation(Observation):
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
personality_info = prompt_personality
prompt = ""
prompt += f"{personality_info},请注意识别你自己的聊天发言"
prompt += f"你的名字叫:{self.name},你的昵称是:{self.nick_name}\n"
@@ -149,7 +146,6 @@ class ChattingObservation(Observation):
print(f"prompt{prompt}")
print(f"self.observe_info{self.observe_info}")
def translate_message_list_to_str(self):
self.talking_message_str = ""
for message in self.talking_message:


@@ -53,7 +53,6 @@ class SubHeartflow:
if not self.current_mind:
self.current_mind = "你什么也没想"
self.is_active = False
self.observations: list[Observation] = []
@@ -86,7 +85,9 @@ class SubHeartflow:
async def subheartflow_start_working(self):
while True:
current_time = time.time()
if current_time - self.last_reply_time > global_config.sub_heart_flow_freeze_time: # 120秒无回复/不在场,冻结
if (
current_time - self.last_reply_time > global_config.sub_heart_flow_freeze_time
): # 120秒无回复/不在场,冻结
self.is_active = False
await asyncio.sleep(global_config.sub_heart_flow_update_interval) # 每60秒检查一次
else:
@@ -100,7 +101,9 @@ class SubHeartflow:
await asyncio.sleep(global_config.sub_heart_flow_update_interval)
# 检查是否超过10分钟没有激活
if current_time - self.last_active_time > global_config.sub_heart_flow_stop_time: # 5分钟无回复/不在场,销毁
if (
current_time - self.last_active_time > global_config.sub_heart_flow_stop_time
): # 5分钟无回复/不在场,销毁
logger.info(f"子心流 {self.subheartflow_id} 已经5分钟没有激活正在销毁...")
break # 退出循环,销毁自己
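The loop above freezes an idle sub-heartflow and destroys one that stays inactive too long. The timing decision can be sketched synchronously as below; the threshold values are assumptions mirroring the `sub_heart_flow_freeze_time` / `sub_heart_flow_stop_time` config entries, and the function name is hypothetical:

```python
import time

# Hypothetical thresholds (seconds) standing in for the config values
FREEZE_AFTER_S = 120
DESTROY_AFTER_S = 600

def classify_subheartflow(now: float, last_reply: float, last_active: float) -> str:
    """Decide what the lifecycle loop should do on this tick."""
    if now - last_active > DESTROY_AFTER_S:
        return "destroy"   # break out of the loop; the sub-heartflow tears itself down
    if now - last_reply > FREEZE_AFTER_S:
        return "freeze"    # is_active = False; keep sleeping and re-checking
    return "active"        # keep generating thoughts

now = time.time()
print(classify_subheartflow(now, last_reply=now - 10, last_active=now - 10))    # active
print(classify_subheartflow(now, last_reply=now - 300, last_active=now - 300))  # freeze
print(classify_subheartflow(now, last_reply=now - 900, last_active=now - 900))  # destroy
```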
@@ -176,9 +179,6 @@ class SubHeartflow:
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
# 调取记忆
related_memory = await HippocampusManager.get_instance().get_memory_from_text(
text=chat_observe_info, max_memory_num=2, max_memory_length=2, max_depth=3, fast_retrieval=False
@@ -244,8 +244,6 @@ class SubHeartflow:
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
current_thinking_info = self.current_mind
mood_info = self.current_state.mood
@@ -293,8 +291,6 @@ class SubHeartflow:
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
# print("麦麦闹情绪了1")
current_thinking_info = self.current_mind
mood_info = self.current_state.mood
@@ -321,7 +317,6 @@ class SubHeartflow:
self.past_mind.append(self.current_mind)
self.current_mind = reponse
async def get_prompt_info(self, message: str, threshold: float):
start_time = time.time()
related_info = ""
@@ -442,7 +437,9 @@ class SubHeartflow:
# 6. 限制总数量最多10条
filtered_results = filtered_results[:10]
logger.info(f"结果处理完成,耗时: {time.time() - process_start_time:.3f}秒,过滤后剩余{len(filtered_results)}条结果")
logger.info(
f"结果处理完成,耗时: {time.time() - process_start_time:.3f}秒,过滤后剩余{len(filtered_results)}条结果"
)
# 7. 格式化输出
if filtered_results:
@@ -470,7 +467,9 @@ class SubHeartflow:
logger.info(f"知识库检索总耗时: {time.time() - start_time:.3f}")
return related_info, grouped_results
def get_info_from_db(self, query_embedding: list, limit: int = 1, threshold: float = 0.5, return_raw: bool = False) -> Union[str, list]:
def get_info_from_db(
self, query_embedding: list, limit: int = 1, threshold: float = 0.5, return_raw: bool = False
) -> Union[str, list]:
if not query_embedding:
return "" if not return_raw else []
# 使用余弦相似度计算
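`get_info_from_db` ranks stored embeddings by cosine similarity against the query embedding and keeps hits above a threshold. A self-contained sketch of that ranking step; the in-memory store and field names here are hypothetical stand-ins for the database documents:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity of two equal-length vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def query_store(store: list, query_embedding: list, limit: int = 1, threshold: float = 0.5) -> list:
    """Rank documents by cosine similarity, keep the top `limit` above `threshold`."""
    scored = [(cosine_similarity(doc["embedding"], query_embedding), doc["content"]) for doc in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [content for score, content in scored[:limit] if score >= threshold]

store = [
    {"content": "about cats", "embedding": [1.0, 0.0]},
    {"content": "about dogs", "embedding": [0.0, 1.0]},
]
print(query_store(store, [0.9, 0.1], limit=2))  # ['about cats']
```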


@@ -2,9 +2,11 @@ from dataclasses import dataclass
from typing import List
import random
@dataclass
class Identity:
"""身份特征类"""
identity_detail: List[str] # 身份细节描述
height: int # 身高(厘米)
weight: int # 体重(千克)
@@ -19,8 +21,15 @@ class Identity:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self, identity_detail: List[str] = None, height: int = 0, weight: int = 0,
age: int = 0, gender: str = "", appearance: str = ""):
def __init__(
self,
identity_detail: List[str] = None,
height: int = 0,
weight: int = 0,
age: int = 0,
gender: str = "",
appearance: str = "",
):
"""初始化身份特征
Args:
@@ -41,7 +50,7 @@ class Identity:
self.appearance = appearance
@classmethod
def get_instance(cls) -> 'Identity':
def get_instance(cls) -> "Identity":
"""获取Identity单例实例
Returns:
@@ -52,8 +61,9 @@ class Identity:
return cls._instance
@classmethod
def initialize(cls, identity_detail: List[str], height: int, weight: int,
age: int, gender: str, appearance: str) -> 'Identity':
def initialize(
cls, identity_detail: List[str], height: int, weight: int, age: int, gender: str, appearance: str
) -> "Identity":
"""初始化身份特征
Args:
@@ -105,11 +115,11 @@ class Identity:
"weight": self.weight,
"age": self.age,
"gender": self.gender,
"appearance": self.appearance
"appearance": self.appearance,
}
@classmethod
def from_dict(cls, data: dict) -> 'Identity':
def from_dict(cls, data: dict) -> "Identity":
"""从字典创建身份特征实例"""
instance = cls.get_instance()
for key, value in data.items():
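`Identity`, `Personality`, and `Individuality` all share the same singleton shape: `__new__` caches the instance and `get_instance` creates it lazily. A stripped-down sketch of that pattern in isolation:

```python
class Singleton:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Cache the first instance; later constructions return the same object
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    @classmethod
    def get_instance(cls) -> "Singleton":
        """Lazily create and return the single shared instance."""
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = Singleton.get_instance()
b = Singleton()
print(a is b)  # True
```

One caveat of this pattern (also present in the real classes): `__init__` still runs on every construction, so re-constructing with different arguments silently mutates the shared instance.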


@@ -2,8 +2,10 @@ from typing import Optional
from .personality import Personality
from .identity import Identity
class Individuality:
"""个体特征管理类"""
_instance = None
def __new__(cls, *args, **kwargs):
@@ -16,7 +18,7 @@ class Individuality:
self.identity: Optional[Identity] = None
@classmethod
def get_instance(cls) -> 'Individuality':
def get_instance(cls) -> "Individuality":
"""获取Individuality单例实例
Returns:
@@ -26,9 +28,18 @@ class Individuality:
cls._instance = cls()
return cls._instance
def initialize(self, bot_nickname: str, personality_core: str, personality_sides: list,
identity_detail: list, height: int, weight: int, age: int,
gender: str, appearance: str) -> None:
def initialize(
self,
bot_nickname: str,
personality_core: str,
personality_sides: list,
identity_detail: list,
height: int,
weight: int,
age: int,
gender: str,
appearance: str,
) -> None:
"""初始化个体特征
Args:
@@ -44,30 +55,23 @@ class Individuality:
"""
# 初始化人格
self.personality = Personality.initialize(
bot_nickname=bot_nickname,
personality_core=personality_core,
personality_sides=personality_sides
bot_nickname=bot_nickname, personality_core=personality_core, personality_sides=personality_sides
)
# 初始化身份
self.identity = Identity.initialize(
identity_detail=identity_detail,
height=height,
weight=weight,
age=age,
gender=gender,
appearance=appearance
identity_detail=identity_detail, height=height, weight=weight, age=age, gender=gender, appearance=appearance
)
def to_dict(self) -> dict:
"""将个体特征转换为字典格式"""
return {
"personality": self.personality.to_dict() if self.personality else None,
"identity": self.identity.to_dict() if self.identity else None
"identity": self.identity.to_dict() if self.identity else None,
}
@classmethod
def from_dict(cls, data: dict) -> 'Individuality':
def from_dict(cls, data: dict) -> "Individuality":
"""从字典创建个体特征实例"""
instance = cls.get_instance()
if data.get("personality"):
@@ -101,5 +105,3 @@ class Individuality:
return self.personality.agreeableness
elif factor == "neuroticism":
return self.personality.neuroticism


@@ -32,11 +32,10 @@ else:
def adapt_scene(scene: str) -> str:
personality_core = config['personality']['personality_core']
personality_sides = config['personality']['personality_sides']
personality_core = config["personality"]["personality_core"]
personality_sides = config["personality"]["personality_sides"]
personality_side = random.choice(personality_sides)
identity_details = config['identity']['identity_detail']
identity_details = config["identity"]["identity_detail"]
identity_detail = random.choice(identity_details)
"""
@@ -51,10 +50,10 @@ def adapt_scene(scene: str) -> str:
try:
prompt = f"""
这是一个参与人格测评的角色形象:
- 昵称: {config['bot']['nickname']}
- 性别: {config['identity']['gender']}
- 年龄: {config['identity']['age']}
- 外貌: {config['identity']['appearance']}
- 昵称: {config["bot"]["nickname"]}
- 性别: {config["identity"]["gender"]}
- 年龄: {config["identity"]["age"]}
- 外貌: {config["identity"]["appearance"]}
- 性格核心: {personality_core}
- 性格侧面: {personality_side}
- 身份细节: {identity_detail}
@@ -62,11 +61,11 @@ def adapt_scene(scene: str) -> str:
请根据上述形象,改编以下场景,在测评中,用户将根据该场景给出上述角色形象的反应:
{scene}
保持场景的本质不变,但最好贴近生活且具体,并且让它更适合这个角色。
改编后的场景应该自然、连贯,并考虑角色的年龄、身份和性格特点。只返回改编后的场景描述,不要包含其他说明。注意{config['bot']['nickname']}是面对这个场景的人,而不是场景的其他人。场景中不会有其描述,
改编后的场景应该自然、连贯,并考虑角色的年龄、身份和性格特点。只返回改编后的场景描述,不要包含其他说明。注意{config["bot"]["nickname"]}是面对这个场景的人,而不是场景的其他人。场景中不会有其描述,
现在,请你给出改编后的场景描述
"""
llm = LLM_request_off(model_name=config['model']['llm_normal']['name'])
llm = LLM_request_off(model_name=config["model"]["llm_normal"]["name"])
adapted_scene, _ = llm.generate_response(prompt)
# 检查返回的场景是否为空或错误信息
@@ -187,7 +186,12 @@ class PersonalityEvaluator_direct:
input()
total_scenarios = len(self.scenarios)
progress_bar = tqdm(total=total_scenarios, desc="场景进度", ncols=100, bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}]')
progress_bar = tqdm(
total=total_scenarios,
desc="场景进度",
ncols=100,
bar_format="{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}]",
)
for _i, scenario_data in enumerate(self.scenarios, 1):
# print(f"\n{'-' * 20} 场景 {i}/{total_scenarios} - {scenario_data['场景编号']} {'-' * 20}")
@@ -251,16 +255,16 @@ class PersonalityEvaluator_direct:
"dimension_counts": self.dimension_counts,
"scenarios": self.scenarios,
"bot_info": {
"nickname": config['bot']['nickname'],
"gender": config['identity']['gender'],
"age": config['identity']['age'],
"height": config['identity']['height'],
"weight": config['identity']['weight'],
"appearance": config['identity']['appearance'],
"personality_core": config['personality']['personality_core'],
"personality_sides": config['personality']['personality_sides'],
"identity_detail": config['identity']['identity_detail']
}
"nickname": config["bot"]["nickname"],
"gender": config["identity"]["gender"],
"age": config["identity"]["age"],
"height": config["identity"]["height"],
"weight": config["identity"]["weight"],
"appearance": config["identity"]["appearance"],
"personality_core": config["personality"]["personality_core"],
"personality_sides": config["personality"]["personality_sides"],
"identity_detail": config["identity"]["identity_detail"],
},
}
@@ -275,7 +279,7 @@ def main():
"extraversion": round(result["final_scores"]["外向性"] / 6, 1),
"agreeableness": round(result["final_scores"]["宜人性"] / 6, 1),
"neuroticism": round(result["final_scores"]["神经质"] / 6, 1),
"bot_nickname": config['bot']['nickname']
"bot_nickname": config["bot"]["nickname"],
}
# 确保目录存在
@@ -283,10 +287,10 @@ def main():
os.makedirs(save_dir, exist_ok=True)
# 创建文件名,替换可能的非法字符
bot_name = config['bot']['nickname']
bot_name = config["bot"]["nickname"]
# 替换Windows文件名中不允许的字符
for char in ['\\', '/', ':', '*', '?', '"', '<', '>', '|']:
bot_name = bot_name.replace(char, '_')
for char in ["\\", "/", ":", "*", "?", '"', "<", ">", "|"]:
bot_name = bot_name.replace(char, "_")
file_name = f"{bot_name}_personality.per"
save_path = os.path.join(save_dir, file_name)
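The loop above swaps out characters Windows forbids in filenames. As a standalone helper (the function name is illustrative):

```python
def safe_filename(name: str) -> str:
    """Replace characters not allowed in Windows filenames with underscores."""
    for ch in ["\\", "/", ":", "*", "?", '"', "<", ">", "|"]:
        name = name.replace(ch, "_")
    return name

print(safe_filename("a:b?c"))  # a_b_c
```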


@@ -4,9 +4,11 @@ import json
from pathlib import Path
import random
@dataclass
class Personality:
"""人格特质类"""
openness: float # 开放性
conscientiousness: float # 尽责性
extraversion: float # 外向性
@@ -30,7 +32,7 @@ class Personality:
self.personality_sides = personality_sides
@classmethod
def get_instance(cls) -> 'Personality':
def get_instance(cls) -> "Personality":
"""获取Personality单例实例
Returns:
@@ -47,13 +49,13 @@ class Personality:
# 如果文件存在,读取文件
if personality_file.exists():
with open(personality_file, 'r', encoding='utf-8') as f:
with open(personality_file, "r", encoding="utf-8") as f:
personality_data = json.load(f)
self.openness = personality_data.get('openness', 0.5)
self.conscientiousness = personality_data.get('conscientiousness', 0.5)
self.extraversion = personality_data.get('extraversion', 0.5)
self.agreeableness = personality_data.get('agreeableness', 0.5)
self.neuroticism = personality_data.get('neuroticism', 0.5)
self.openness = personality_data.get("openness", 0.5)
self.conscientiousness = personality_data.get("conscientiousness", 0.5)
self.extraversion = personality_data.get("extraversion", 0.5)
self.agreeableness = personality_data.get("agreeableness", 0.5)
self.neuroticism = personality_data.get("neuroticism", 0.5)
else:
# 如果文件不存在根据personality_core和personality_core来设置大五人格特质
if "活泼" in self.personality_core or "开朗" in self.personality_sides:
@@ -79,7 +81,7 @@ class Personality:
self.openness = 0.5
@classmethod
def initialize(cls, bot_nickname: str, personality_core: str, personality_sides: List[str]) -> 'Personality':
def initialize(cls, bot_nickname: str, personality_core: str, personality_sides: List[str]) -> "Personality":
"""初始化人格特质
Args:
@@ -107,11 +109,11 @@ class Personality:
"neuroticism": self.neuroticism,
"bot_nickname": self.bot_nickname,
"personality_core": self.personality_core,
"personality_sides": self.personality_sides
"personality_sides": self.personality_sides,
}
@classmethod
def from_dict(cls, data: Dict) -> 'Personality':
def from_dict(cls, data: Dict) -> "Personality":
"""从字典创建人格特质实例"""
instance = cls.get_instance()
for key, value in data.items():


@@ -2,6 +2,7 @@ import json
from typing import Dict
import os
def load_scenes() -> Dict:
"""
从JSON文件加载场景数据
@@ -10,13 +11,15 @@ def load_scenes() -> Dict:
Dict: 包含所有场景的字典
"""
current_dir = os.path.dirname(os.path.abspath(__file__))
json_path = os.path.join(current_dir, 'template_scene.json')
json_path = os.path.join(current_dir, "template_scene.json")
with open(json_path, 'r', encoding='utf-8') as f:
with open(json_path, "r", encoding="utf-8") as f:
return json.load(f)
PERSONALITY_SCENES = load_scenes()
def get_scene_by_factor(factor: str) -> Dict:
"""
根据人格因子获取对应的情景测试


@@ -100,7 +100,7 @@ class MainSystem:
weight=global_config.weight,
age=global_config.age,
gender=global_config.gender,
appearance=global_config.appearance
appearance=global_config.appearance,
)
logger.success("个体特征初始化成功")
@@ -136,7 +136,6 @@ class MainSystem:
logger.info("正在进行记忆构建")
await HippocampusManager.get_instance().build_memory()
async def forget_memory_task(self):
"""记忆遗忘任务"""
while True:
@@ -145,7 +144,6 @@ class MainSystem:
await HippocampusManager.get_instance().forget_memory(percentage=global_config.memory_forget_percentage)
print("\033[1;32m[记忆遗忘]\033[0m 记忆遗忘完成")
async def print_mood_task(self):
"""打印情绪状态"""
while True:


@@ -9,11 +9,12 @@ from .message_storage import MessageStorage, MongoDBMessageStorage
logger = get_module_logger("chat_observer")
class ChatObserver:
"""聊天状态观察器"""
# 类级别的实例管理
_instances: Dict[str, 'ChatObserver'] = {}
_instances: Dict[str, "ChatObserver"] = {}
@classmethod
def get_instance(cls, stream_id: str, message_storage: Optional[MessageStorage] = None) -> 'ChatObserver':
@@ -186,7 +187,7 @@ class ChatObserver:
start_time: Optional[float] = None,
end_time: Optional[float] = None,
limit: Optional[int] = None,
user_id: Optional[str] = None
user_id: Optional[str] = None,
) -> List[Dict[str, Any]]:
"""获取消息历史
@@ -209,8 +210,7 @@ class ChatObserver:
if user_id is not None:
filtered_messages = [
m for m in filtered_messages
if UserInfo.from_dict(m.get("user_info", {})).user_id == user_id
m for m in filtered_messages if UserInfo.from_dict(m.get("user_info", {})).user_id == user_id
]
if limit is not None:


@@ -32,10 +32,7 @@ class GoalAnalyzer:
def __init__(self, stream_id: str):
self.llm = LLM_request(
model=global_config.llm_normal,
temperature=0.7,
max_tokens=1000,
request_type="conversation_goal"
model=global_config.llm_normal, temperature=0.7, max_tokens=1000, request_type="conversation_goal"
)
self.personality_info = Individuality.get_instance().get_prompt(type="personality", x_person=2, level=2)
@@ -110,9 +107,7 @@ class GoalAnalyzer:
# 使用简化函数提取JSON内容
success, result = get_items_from_json(
content,
"goal", "reasoning",
required_types={"goal": str, "reasoning": str}
content, "goal", "reasoning", required_types={"goal": str, "reasoning": str}
)
if not success:
@@ -265,6 +260,7 @@ class GoalAnalyzer:
class Waiter:
"""快 速 等 待"""
def __init__(self, stream_id: str):
self.chat_observer = ChatObserver.get_instance(stream_id)
self.personality_info = Individuality.get_instance().get_prompt(type="personality", x_person=2, level=2)
@@ -357,4 +353,3 @@ class DirectMessageSender:
except Exception as e:
self.logger.error(f"直接发送消息失败: {str(e)}")
raise


@@ -7,15 +7,13 @@ from ..chat.message import Message
logger = get_module_logger("knowledge_fetcher")
class KnowledgeFetcher:
"""知识调取器"""
def __init__(self):
self.llm = LLM_request(
model=global_config.llm_normal,
temperature=0.7,
max_tokens=1000,
request_type="knowledge_fetch"
model=global_config.llm_normal, temperature=0.7, max_tokens=1000, request_type="knowledge_fetch"
)
async def fetch(self, query: str, chat_history: List[Message]) -> Tuple[str, str]:
@@ -40,7 +38,7 @@ class KnowledgeFetcher:
max_memory_num=3,
max_memory_length=2,
max_depth=3,
fast_retrieval=False
fast_retrieval=False,
)
if related_memory:


@@ -5,11 +5,12 @@ from src.common.logger import get_module_logger
logger = get_module_logger("pfc_utils")
def get_items_from_json(
content: str,
*items: str,
default_values: Optional[Dict[str, Any]] = None,
required_types: Optional[Dict[str, type]] = None
required_types: Optional[Dict[str, type]] = None,
) -> Tuple[bool, Dict[str, Any]]:
"""从文本中提取JSON内容并获取指定字段
@@ -34,7 +35,7 @@ def get_items_from_json(
json_data = json.loads(content)
except json.JSONDecodeError:
# 如果直接解析失败尝试查找和提取JSON部分
json_pattern = r'\{[^{}]*\}'
json_pattern = r"\{[^{}]*\}"
json_match = re.search(json_pattern, content)
if json_match:
try:
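`get_items_from_json` first tries `json.loads` on the whole reply and only then falls back to regex-extracting a flat `{...}` fragment. A compact sketch of that fallback with a simplified signature:

```python
import json
import re

def extract_json(content: str) -> dict:
    """Parse a JSON object from an LLM reply, tolerating surrounding prose."""
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        match = re.search(r"\{[^{}]*\}", content)  # first flat {...} fragment
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
    return {}

reply = 'Sure! Here is the result: {"goal": "greet", "reasoning": "new user"}'
print(extract_json(reply)["goal"])  # greet
```

Note the `[^{}]*` pattern deliberately matches only non-nested objects; replies with nested JSON would need a more careful extractor.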


@@ -9,26 +9,19 @@ from ..message.message_base import UserInfo
logger = get_module_logger("reply_checker")
class ReplyChecker:
"""回复检查器"""
def __init__(self, stream_id: str):
self.llm = LLM_request(
model=global_config.llm_normal,
temperature=0.7,
max_tokens=1000,
request_type="reply_check"
model=global_config.llm_normal, temperature=0.7, max_tokens=1000, request_type="reply_check"
)
self.name = global_config.BOT_NICKNAME
self.chat_observer = ChatObserver.get_instance(stream_id)
self.max_retries = 2 # 最大重试次数
async def check(
self,
reply: str,
goal: str,
retry_count: int = 0
) -> Tuple[bool, str, bool]:
async def check(self, reply: str, goal: str, retry_count: int = 0) -> Tuple[bool, str, bool]:
"""检查生成的回复是否合适
Args:
@@ -92,7 +85,8 @@ class ReplyChecker:
except json.JSONDecodeError:
# 如果直接解析失败尝试查找和提取JSON部分
import re
json_pattern = r'\{[^{}]*\}'
json_pattern = r"\{[^{}]*\}"
json_match = re.search(json_pattern, content)
if json_match:
try:


@@ -12,5 +12,5 @@ __all__ = [
"chat_manager",
"message_manager",
"MessageStorage",
"auto_speak_manager"
"auto_speak_manager",
]


@@ -377,7 +377,6 @@ class EmojiManager:
except Exception:
logger.exception("[错误] 扫描表情包失败")
def check_emoji_file_integrity(self):
"""检查表情包文件完整性
如果文件已被删除,则从数据库中移除对应记录
@@ -542,7 +541,7 @@ class EmojiManager:
logger.info("[扫描] 开始扫描新表情包...")
if self.emoji_num < self.emoji_num_max:
await self.scan_new_emojis()
if (self.emoji_num > self.emoji_num_max):
if self.emoji_num > self.emoji_num_max:
logger.warning(f"[警告] 表情包数量超过最大限制: {self.emoji_num} > {self.emoji_num_max},跳过注册")
if not global_config.max_reach_deletion:
logger.warning("表情包数量超过最大限制,终止注册")
@@ -580,5 +579,6 @@ class EmojiManager:
except Exception as e:
logger.error(f"[错误] 删除图片目录失败: {str(e)}")
# 创建全局单例
emoji_manager = EmojiManager()


@@ -13,6 +13,7 @@ from ..config.config import global_config
logger = get_module_logger("message_buffer")
@dataclass
class CacheMessages:
message: MessageRecv
@@ -37,13 +38,14 @@ class MessageBuffer:
async def start_caching_messages(self, message: MessageRecv):
"""添加消息,启动缓冲"""
if not global_config.message_buffer:
person_id = person_info_manager.get_person_id(message.message_info.user_info.platform,
message.message_info.user_info.user_id)
person_id = person_info_manager.get_person_id(
message.message_info.user_info.platform, message.message_info.user_info.user_id
)
asyncio.create_task(self.save_message_interval(person_id, message.message_info))
return
person_id_ = self.get_person_id_(message.message_info.platform,
message.message_info.user_info.user_id,
message.message_info.group_info)
person_id_ = self.get_person_id_(
message.message_info.platform, message.message_info.user_info.user_id, message.message_info.group_info
)
async with self.lock:
if person_id_ not in self.buffer_pool:
@@ -66,7 +68,7 @@ class MessageBuffer:
recent_F_count += 1
# 判断条件最近T之后有超过3-5条F
if (recent_F_count >= random.randint(3, 5)):
if recent_F_count >= random.randint(3, 5):
new_msg = CacheMessages(message=message, result="T")
new_msg.cache_determination.set()
self.buffer_pool[person_id_][message.message_info.message_id] = new_msg
@@ -77,12 +79,11 @@ class MessageBuffer:
self.buffer_pool[person_id_][message.message_info.message_id] = CacheMessages(message=message)
# 启动3秒缓冲计时器
person_id = person_info_manager.get_person_id(message.message_info.user_info.platform,
message.message_info.user_info.user_id)
person_id = person_info_manager.get_person_id(
message.message_info.user_info.platform, message.message_info.user_info.user_id
)
asyncio.create_task(self.save_message_interval(person_id, message.message_info))
asyncio.create_task(self._debounce_processor(person_id_,
message.message_info.message_id,
person_id))
asyncio.create_task(self._debounce_processor(person_id_, message.message_info.message_id, person_id))
async def _debounce_processor(self, person_id_: str, message_id: str, person_id: str):
"""等待3秒无新消息"""
@@ -94,8 +95,7 @@ class MessageBuffer:
await asyncio.sleep(interval_time)
async with self.lock:
if (person_id_ not in self.buffer_pool or
message_id not in self.buffer_pool[person_id_]):
if person_id_ not in self.buffer_pool or message_id not in self.buffer_pool[person_id_]:
logger.debug(f"消息已被清理msgid: {message_id}")
return
@@ -104,15 +104,13 @@ class MessageBuffer:
cache_msg.result = "T"
cache_msg.cache_determination.set()
async def query_buffer_result(self, message: MessageRecv) -> bool:
"""查询缓冲结果,并清理"""
if not global_config.message_buffer:
return True
person_id_ = self.get_person_id_(message.message_info.platform,
message.message_info.user_info.user_id,
message.message_info.group_info)
person_id_ = self.get_person_id_(
message.message_info.platform, message.message_info.user_info.user_id, message.message_info.group_info
)
async with self.lock:
user_msgs = self.buffer_pool.get(person_id_, {})
@@ -144,8 +142,7 @@ class MessageBuffer:
keep_msgs[msg_id] = msg
elif msg.result == "F":
# 收集F消息的文本内容
if (hasattr(msg.message, 'processed_plain_text')
and msg.message.processed_plain_text):
if hasattr(msg.message, "processed_plain_text") and msg.message.processed_plain_text:
if msg.message.message_segment.type == "text":
combined_text.append(msg.message.processed_plain_text)
elif msg.message.message_segment.type != "text":
@@ -182,7 +179,7 @@ class MessageBuffer:
"platform": message.platform,
"user_id": message.user_info.user_id,
"nickname": message.user_info.user_nickname,
"konw_time" : int(time.time())
"konw_time": int(time.time()),
}
await person_info_manager.update_one_field(person_id, "msg_interval_list", message_interval_list, data)
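The buffer's core is a debounce: each new message restarts the quiet-period wait, and the batch is released only once the sender goes silent. A minimal asyncio sketch of that pattern; the class, interval, and method names are illustrative (the real buffer waits 3 seconds and tracks per-person pools):

```python
import asyncio

class Debouncer:
    """Release buffered messages only after `quiet_s` seconds with no new input."""

    def __init__(self, quiet_s: float = 0.05):  # ~3.0 s in the real buffer
        self.quiet_s = quiet_s
        self.buffer: list[str] = []

    async def add(self, text: str):
        self.buffer.append(text)

    async def wait_quiet(self) -> list[str]:
        while True:
            seen = len(self.buffer)
            await asyncio.sleep(self.quiet_s)
            if len(self.buffer) == seen:  # no new message arrived during the wait
                released, self.buffer = self.buffer, []
                return released

async def demo() -> list[str]:
    d = Debouncer()
    await d.add("hello")
    await d.add("world")
    return await d.wait_quiet()

print(asyncio.run(demo()))  # ['hello', 'world']
```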


@@ -68,7 +68,8 @@ class Message_Sender:
typing_time = calculate_typing_time(
input_string=message.processed_plain_text,
thinking_start_time=message.thinking_start_time,
is_emoji=message.is_emoji)
is_emoji=message.is_emoji,
)
logger.debug(f"{message.processed_plain_text},{typing_time},计算输入时间结束")
await asyncio.sleep(typing_time)
logger.debug(f"{message.processed_plain_text},{typing_time},等待输入时间结束")


@@ -56,14 +56,13 @@ def is_mentioned_bot_in_message(message: MessageRecv) -> bool:
logger.info("被@回复概率设置为100%")
else:
if not is_mentioned:
# 判断是否被回复
if re.match(f"回复[\s\S]*?\({global_config.BOT_QQ}\)的消息,说:", message.processed_plain_text):
is_mentioned = True
# 判断内容中是否被提及
message_content = re.sub(r'\@[\s\S]*?(\d+)','', message.processed_plain_text)
message_content = re.sub(r'回复[\s\S]*?\((\d+)\)的消息,说: ','', message_content)
message_content = re.sub(r"\@[\s\S]*?(\d+)", "", message.processed_plain_text)
message_content = re.sub(r"回复[\s\S]*?\((\d+)\)的消息,说: ", "", message_content)
for keyword in keywords:
if keyword in message_content:
is_mentioned = True
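The mention check strips @-segments and quoted-reply prefixes before keyword matching, so the bot's name inside a quoted reply does not count as a fresh mention. A self-contained sketch of that filtering (the keyword list and message format are illustrative):

```python
import re

def is_mentioned(text: str, keywords=("麦麦",)) -> bool:
    """Check for a direct mention after removing @-segments and reply prefixes."""
    cleaned = re.sub(r"\@[\s\S]*?(\d+)", "", text)
    cleaned = re.sub(r"回复[\s\S]*?\((\d+)\)的消息,说: ", "", cleaned)
    return any(kw in cleaned for kw in keywords)

print(is_mentioned("麦麦今天怎么样"))  # True
print(is_mentioned("回复麦麦(12345)的消息,说: 好的"))  # False
```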
@@ -359,7 +358,13 @@ def process_llm_response(text: str) -> List[str]:
return sentences
def calculate_typing_time(input_string: str, thinking_start_time: float, chinese_time: float = 0.2, english_time: float = 0.1, is_emoji: bool = False) -> float:
def calculate_typing_time(
input_string: str,
thinking_start_time: float,
chinese_time: float = 0.2,
english_time: float = 0.1,
is_emoji: bool = False,
) -> float:
"""
计算输入字符串所需的时间,中文和英文字符有不同的输入时间
input_string (str): 输入的字符串
@@ -394,7 +399,6 @@ def calculate_typing_time(input_string: str, thinking_start_time: float, chinese
else: # 其他字符(如英文)
total_time += english_time
if is_emoji:
total_time = 1
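`calculate_typing_time` charges a higher per-character cost for CJK characters than for others, and collapses to a flat value for emoji. A simplified sketch of the per-character accounting (the real function also factors in the thinking start time, which is omitted here):

```python
def typing_time(text: str, chinese_time: float = 0.2, english_time: float = 0.1,
                is_emoji: bool = False) -> float:
    """Estimate typing duration: CJK characters cost more than other characters."""
    if is_emoji:
        return 1.0  # sending a sticker takes a flat beat
    total = 0.0
    for ch in text:
        if "\u4e00" <= ch <= "\u9fff":  # CJK Unified Ideographs block
            total += chinese_time
        else:
            total += english_time
    return total

print(round(typing_time("你好ab"), 2))  # 0.6  (2 * 0.2 + 2 * 0.1)
```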
@@ -535,22 +539,18 @@ def count_messages_between(start_time: float, end_time: float, stream_id: str) -
try:
# 获取开始时间之前最新的一条消息
start_message = db.messages.find_one(
{
"chat_id": stream_id,
"time": {"$lte": start_time}
},
sort=[("time", -1), ("_id", -1)] # 按时间倒序_id倒序最后插入的在前
{"chat_id": stream_id, "time": {"$lte": start_time}},
sort=[("time", -1), ("_id", -1)], # 按时间倒序_id倒序最后插入的在前
)
# 获取结束时间最近的一条消息
# 先找到结束时间点的所有消息
end_time_messages = list(db.messages.find(
{
"chat_id": stream_id,
"time": {"$lte": end_time}
},
sort=[("time", -1)] # 先按时间倒序
).limit(10)) # 限制查询数量,避免性能问题
end_time_messages = list(
db.messages.find(
{"chat_id": stream_id, "time": {"$lte": end_time}},
sort=[("time", -1)], # 先按时间倒序
).limit(10)
) # 限制查询数量,避免性能问题
if not end_time_messages:
logger.warning(f"未找到结束时间 {end_time} 之前的消息")
@@ -559,10 +559,7 @@ def count_messages_between(start_time: float, end_time: float, stream_id: str) -
# 找到最大时间
max_time = end_time_messages[0]["time"]
# 在最大时间的消息中找最后插入的_id最大的
end_message = max([msg for msg in end_time_messages if msg["time"] == max_time], key=lambda x: x["_id"])
if not start_message:
logger.warning(f"未找到开始时间 {start_time} 之前的消息")
@@ -590,16 +587,12 @@ def count_messages_between(start_time: float, end_time: float, stream_id: str) -
# 获取并打印这个时间范围内的所有消息
# print("\n=== 时间范围内的所有消息 ===")
all_messages = list(
db.messages.find(
{"chat_id": stream_id, "time": {"$gte": start_message["time"], "$lte": end_message["time"]}},
sort=[("time", 1), ("_id", 1)], # 按时间正序_id正序
)
)
count = 0
total_length = 0
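The boundary-message selection above — newest message at or before the cut-off, ties on `time` broken by the largest `_id` (last inserted) — can be mirrored in pure Python; the in-memory list of dicts stands in for the MongoDB collection:

```python
def pick_end_message(messages, end_time):
    # Newest-first candidates at or before the cut-off, capped like .limit(10).
    candidates = sorted(
        (m for m in messages if m["time"] <= end_time),
        key=lambda m: (m["time"], m["_id"]),
        reverse=True,
    )[:10]
    if not candidates:
        return None  # mirrors the "未找到结束时间之前的消息" warning path
    max_time = candidates[0]["time"]
    # Among messages sharing the max time, the last-inserted (_id) wins.
    return max((m for m in candidates if m["time"] == max_time),
               key=lambda m: m["_id"])
```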


@@ -245,7 +245,7 @@ class ImageManager:
try:
while True:
gif.seek(len(frames))
frame = gif.convert("RGB")
frames.append(frame.copy())
except EOFError:
pass
@@ -270,12 +270,13 @@ class ImageManager:
target_width = int((target_height / frame_height) * frame_width)
# 调整所有帧的大小
resized_frames = [
frame.resize((target_width, target_height), Image.Resampling.LANCZOS) for frame in selected_frames
]
# 创建拼接图像
total_width = target_width * len(resized_frames)
combined_image = Image.new("RGB", (total_width, target_height))
# 水平拼接图像
for idx, frame in enumerate(resized_frames):
@@ -283,8 +284,8 @@ class ImageManager:
# 转换为base64
buffer = io.BytesIO()
combined_image.save(buffer, format="JPEG", quality=85)
result_base64 = base64.b64encode(buffer.getvalue()).decode("utf-8")
return result_base64


@@ -7,6 +7,7 @@ from datetime import datetime
logger = get_module_logger("pfc_message_processor")
class MessageProcessor:
"""消息处理器,负责处理接收到的消息并存储"""
@@ -60,7 +61,4 @@ class MessageProcessor:
mes_name = chat.group_info.group_name if chat.group_info else "私聊"
# 将时间戳转换为datetime对象
current_time = datetime.fromtimestamp(message.message_info.time).strftime("%H:%M:%S")
logger.info(f"[{current_time}][{mes_name}]{chat.user_info.user_nickname}: {message.processed_plain_text}")


@@ -27,6 +27,7 @@ chat_config = LogConfig(
logger = get_module_logger("reasoning_chat", config=chat_config)
class ReasoningChat:
def __init__(self):
self.storage = MessageStorage()


@@ -52,7 +52,6 @@ class ResponseGenerator:
f"{self.current_model_type}思考:{message.processed_plain_text[:30] + '...' if len(message.processed_plain_text) > 30 else message.processed_plain_text}"
) # noqa: E501
model_response = await self._generate_response_with_model(message, current_model)
# print(f"raw_content: {model_response}")


@@ -24,7 +24,6 @@ class PromptBuilder:
async def _build_prompt(
self, chat_stream, message_txt: str, sender_name: str = "某人", stream_id: Optional[int] = None
) -> tuple[str, str]:
# 开始构建prompt
prompt_personality = ""
# person
@@ -41,12 +40,10 @@ class PromptBuilder:
random.shuffle(identity_detail)
prompt_personality += f",{identity_detail[0]}"
# 关系
who_chat_in_group = [
(chat_stream.user_info.platform, chat_stream.user_info.user_id, chat_stream.user_info.user_nickname)
]
who_chat_in_group += get_recent_group_speaker(
stream_id,
(chat_stream.user_info.platform, chat_stream.user_info.user_id),
@@ -84,7 +81,7 @@ class PromptBuilder:
# print(f"相关记忆:{related_memory_info}")
# 日程构建
schedule_prompt = f"""你现在正在做的事情是:{bot_schedule.get_current_num_task(num=1, time_info=False)}"""
# 获取聊天上下文
chat_in_group = True
@@ -281,7 +278,9 @@ class PromptBuilder:
# 6. 限制总数量最多10条
filtered_results = filtered_results[:10]
logger.info(
f"结果处理完成,耗时: {time.time() - process_start_time:.3f}秒,过滤后剩余{len(filtered_results)}条结果"
)
# 7. 格式化输出
if filtered_results:
@@ -309,7 +308,9 @@ class PromptBuilder:
logger.info(f"知识库检索总耗时: {time.time() - start_time:.3f}")
return related_info
def get_info_from_db(
self, query_embedding: list, limit: int = 1, threshold: float = 0.5, return_raw: bool = False
) -> Union[str, list]:
if not query_embedding:
return "" if not return_raw else []
# 使用余弦相似度计算


@@ -28,6 +28,7 @@ chat_config = LogConfig(
logger = get_module_logger("think_flow_chat", config=chat_config)
class ThinkFlowChat:
def __init__(self):
self.storage = MessageStorage()
@@ -214,7 +215,6 @@ class ThinkFlowChat:
# 处理提及
is_mentioned, reply_probability = is_mentioned_bot_in_message(message)
# 计算回复意愿
current_willing_old = willing_manager.get_willing(chat_stream=chat)
# current_willing_new = (heartflow.get_subheartflow(chat.stream_id).current_state.willing - 5) / 4
@@ -222,7 +222,6 @@ class ThinkFlowChat:
# 有点bug
current_willing = current_willing_old
willing_manager.set_willing(chat.stream_id, current_willing)
# 意愿激活
@@ -280,7 +279,9 @@ class ThinkFlowChat:
# 思考前脑内状态
try:
timer1 = time.time()
await heartflow.get_subheartflow(chat.stream_id).do_thinking_before_reply(
message.processed_plain_text
)
timer2 = time.time()
timing_results["思考前脑内状态"] = timer2 - timer1
except Exception as e:


@@ -35,7 +35,6 @@ class ResponseGenerator:
async def generate_response(self, message: MessageThinking) -> Optional[Union[str, List[str]]]:
"""根据当前模型类型选择对应的生成函数"""
logger.info(
f"思考:{message.processed_plain_text[:30] + '...' if len(message.processed_plain_text) > 30 else message.processed_plain_text}"
)
@@ -178,4 +177,3 @@ class ResponseGenerator:
# print(f"得到了处理后的llm返回{processed_response}")
return processed_response


@@ -21,16 +21,15 @@ class PromptBuilder:
async def _build_prompt(
self, chat_stream, message_txt: str, sender_name: str = "某人", stream_id: Optional[int] = None
) -> tuple[str, str]:
current_mind_info = heartflow.get_subheartflow(stream_id).current_mind
individuality = Individuality.get_instance()
prompt_personality = individuality.get_prompt(type="personality", x_person=2, level=1)
prompt_identity = individuality.get_prompt(type="identity", x_person=2, level=1)
# 关系
who_chat_in_group = [
(chat_stream.user_info.platform, chat_stream.user_info.user_id, chat_stream.user_info.user_nickname)
]
who_chat_in_group += get_recent_group_speaker(
stream_id,
(chat_stream.user_info.platform, chat_stream.user_info.user_id),


@@ -3,6 +3,7 @@ import tomlkit
from pathlib import Path
from datetime import datetime
def update_config():
print("开始更新配置文件...")
# 获取根目录路径


@@ -28,6 +28,7 @@ logger = get_module_logger("config", config=config_config)
is_test = True
mai_version_main = "0.6.2"
mai_version_fix = "snapshot-1"
if mai_version_fix:
if is_test:
mai_version = f"test-{mai_version_main}-{mai_version_fix}"
@@ -39,6 +40,7 @@ else:
else:
mai_version = mai_version_main
def update_config():
# 获取根目录路径
root_dir = Path(__file__).parent.parent.parent.parent
@@ -127,6 +129,7 @@ def update_config():
f.write(tomlkit.dumps(new_config))
logger.info("配置文件更新完成")
logger = get_module_logger("config")
@@ -149,16 +152,20 @@ class BotConfig:
# personality
personality_core = "用一句话或几句话描述人格的核心特点" # 建议20字以内谁再写3000字小作文敲谁脑袋
personality_sides: List[str] = field(
default_factory=lambda: [
"用一句话或几句话描述人格的一些侧面",
"用一句话或几句话描述人格的一些侧面",
"用一句话或几句话描述人格的一些侧面",
]
)
# identity
identity_detail: List[str] = field(
default_factory=lambda: [
"身份特点",
"身份特点",
]
)
height: int = 170 # 身高 单位厘米
weight: int = 50 # 体重 单位千克
age: int = 20 # 年龄 单位岁
@@ -354,7 +361,6 @@ class BotConfig:
"""从TOML配置文件加载配置"""
config = cls()
def personality(parent: dict):
personality_config = parent["personality"]
if config.INNER_VERSION in SpecifierSet(">=1.2.4"):
@@ -421,10 +427,18 @@ class BotConfig:
def heartflow(parent: dict):
heartflow_config = parent["heartflow"]
config.sub_heart_flow_update_interval = heartflow_config.get(
"sub_heart_flow_update_interval", config.sub_heart_flow_update_interval
)
config.sub_heart_flow_freeze_time = heartflow_config.get(
"sub_heart_flow_freeze_time", config.sub_heart_flow_freeze_time
)
config.sub_heart_flow_stop_time = heartflow_config.get(
"sub_heart_flow_stop_time", config.sub_heart_flow_stop_time
)
config.heart_flow_update_interval = heartflow_config.get(
"heart_flow_update_interval", config.heart_flow_update_interval
)
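All four assignments above share one pattern: `section.get(key, current_value)` overrides a setting only when the TOML key is present, and otherwise keeps the dataclass default. A generic sketch of that pattern, with hypothetical field names and default values:

```python
def apply_overrides(section: dict, current: dict) -> dict:
    # Keep the existing value unless the config section supplies one.
    return {key: section.get(key, value) for key, value in current.items()}

defaults = {"sub_heart_flow_update_interval": 60, "heart_flow_update_interval": 300}
merged = apply_overrides({"sub_heart_flow_update_interval": 30}, defaults)
# merged keeps heart_flow_update_interval at its default
```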
def willing(parent: dict):
willing_config = parent["willing"]


@@ -14,6 +14,7 @@ from src.common.logger import get_module_logger, LogConfig, MEMORY_STYLE_CONFIG
from src.plugins.memory_system.sample_distribution import MemoryBuildScheduler # 分布生成器
from .memory_config import MemoryConfig
def get_closest_chat_from_db(length: int, timestamp: str):
# print(f"获取最接近指定时间戳的聊天记录,长度: {length}, 时间戳: {timestamp}")
# print(f"当前时间: {timestamp},转换后时间: {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(timestamp))}")


@@ -179,7 +179,6 @@ class LLM_request:
# logger.debug(f"{logger_msg}发送请求到URL: {api_url}")
# logger.info(f"使用模型: {self.model_name}")
# 构建请求体
if image_base64:
payload = await self._build_payload(prompt, image_base64, image_format)
@@ -205,13 +204,17 @@ class LLM_request:
# 处理需要重试的状态码
if response.status in policy["retry_codes"]:
wait_time = policy["base_wait"] * (2**retry)
logger.warning(
f"模型 {self.model_name} 错误码: {response.status}, 等待 {wait_time}秒后重试"
)
if response.status == 413:
logger.warning("请求体过大,尝试压缩...")
image_base64 = compress_base64_image_by_scale(image_base64)
payload = await self._build_payload(prompt, image_base64, image_format)
elif response.status in [500, 503]:
logger.error(
f"模型 {self.model_name} 错误码: {response.status} - {error_code_mapping.get(response.status)}"
)
raise RuntimeError("服务器负载过高模型恢复失败QAQ")
else:
logger.warning(f"模型 {self.model_name} 请求限制(429),等待{wait_time}秒后重试...")
@@ -219,7 +222,9 @@ class LLM_request:
await asyncio.sleep(wait_time)
continue
elif response.status in policy["abort_codes"]:
logger.error(
f"模型 {self.model_name} 错误码: {response.status} - {error_code_mapping.get(response.status)}"
)
# 尝试获取并记录服务器返回的详细错误信息
try:
error_json = await response.json()
@@ -257,7 +262,9 @@ class LLM_request:
):
old_model_name = self.model_name
self.model_name = self.model_name[4:] # 移除"Pro/"前缀
logger.warning(
f"检测到403错误模型从 {old_model_name} 降级为 {self.model_name}"
)
# 对全局配置进行更新
if global_config.llm_normal.get("name") == old_model_name:
@@ -266,7 +273,9 @@ class LLM_request:
if global_config.llm_reasoning.get("name") == old_model_name:
global_config.llm_reasoning["name"] = self.model_name
logger.warning(
f"将全局配置中的 llm_reasoning 模型临时降级至{self.model_name}"
)
# 更新payload中的模型名
if payload and "model" in payload:
@@ -328,7 +337,14 @@ class LLM_request:
await response.release()
# 返回已经累积的内容
result = {
"choices": [
{
"message": {
"content": accumulated_content,
"reasoning_content": reasoning_content,
}
}
],
"usage": usage,
}
return (
@@ -345,7 +361,14 @@ class LLM_request:
logger.error(f"清理资源时发生错误: {cleanup_error}")
# 返回已经累积的内容
result = {
"choices": [
{
"message": {
"content": accumulated_content,
"reasoning_content": reasoning_content,
}
}
],
"usage": usage,
}
return (
@@ -360,7 +383,9 @@ class LLM_request:
content = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
# 构造一个伪result以便调用自定义响应处理器或默认处理器
result = {
"choices": [
{"message": {"content": content, "reasoning_content": reasoning_content}}
],
"usage": usage,
}
return (
@@ -394,7 +419,9 @@ class LLM_request:
# 处理aiohttp抛出的响应错误
if retry < policy["max_retries"] - 1:
wait_time = policy["base_wait"] * (2**retry)
logger.error(
f"模型 {self.model_name} HTTP响应错误等待{wait_time}秒后重试... 状态码: {e.status}, 错误: {e.message}"
)
try:
if hasattr(e, "response") and e.response and hasattr(e.response, "text"):
error_text = await e.response.text()
@@ -419,13 +446,17 @@ class LLM_request:
else:
logger.error(f"模型 {self.model_name} 服务器错误响应: {error_json}")
except (json.JSONDecodeError, TypeError) as json_err:
logger.warning(
f"模型 {self.model_name} 响应不是有效的JSON: {str(json_err)}, 原始内容: {error_text[:200]}"
)
except (AttributeError, TypeError, ValueError) as parse_err:
logger.warning(f"模型 {self.model_name} 无法解析响应错误内容: {str(parse_err)}")
await asyncio.sleep(wait_time)
else:
logger.critical(
f"模型 {self.model_name} HTTP响应错误达到最大重试次数: 状态码: {e.status}, 错误: {e.message}"
)
# 安全地检查和记录请求详情
if (
image_base64


@@ -282,5 +282,6 @@ class MoodManager:
self._update_mood_text()
logger.info(
f"[情绪变化] {emotion}(强度:{intensity:.2f}) | 愉悦度:{old_valence:.2f}->{self.current_mood.valence:.2f}, 唤醒度:{old_arousal:.2f}->{self.current_mood.arousal:.2f} | 心情:{old_mood}->{self.current_mood.text}"
)


@@ -8,7 +8,8 @@ import asyncio
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from pathlib import Path
import pandas as pd
@@ -41,9 +42,10 @@ person_info_default = {
# "gender" : Unkown,
"konw_time": 0,
"msg_interval": 3000,
"msg_interval_list": [],
} # 个人信息的各项与默认值在此定义,以下处理会自动创建/补全每一项
class PersonInfoManager:
def __init__(self):
if "person_info" not in db.list_collection_names():
@@ -81,10 +83,7 @@ class PersonInfoManager:
document = db.person_info.find_one({"person_id": person_id})
if document:
db.person_info.update_one({"person_id": person_id}, {"$set": {field_name: value}})
else:
Data[field_name] = value
logger.debug(f"更新时{person_id}不存在,已新建")
@@ -112,10 +111,7 @@ class PersonInfoManager:
logger.debug(f"get_value获取失败字段'{field_name}'未定义")
return None
document = db.person_info.find_one({"person_id": person_id}, {field_name: 1})
if document and field_name in document:
return document[field_name]
@@ -139,16 +135,12 @@ class PersonInfoManager:
# 构建查询投影(所有字段都有效才会执行到这里)
projection = {field: 1 for field in field_names}
document = db.person_info.find_one({"person_id": person_id}, projection)
result = {}
for field in field_names:
result[field] = copy.deepcopy(
document.get(field, person_info_default[field]) if document else person_info_default[field]
)
return result
@@ -162,13 +154,12 @@ class PersonInfoManager:
# 遍历集合中的所有文档
for document in db.person_info.find({}):
# 找出文档中未定义的字段
undefined_fields = set(document.keys()) - defined_fields - {"_id"}
if undefined_fields:
# 构建更新操作,使用$unset删除未定义字段
update_result = db.person_info.update_one(
{"_id": document["_id"]}, {"$unset": {field: 1 for field in undefined_fields}}
)
if update_result.modified_count > 0:
@@ -208,10 +199,7 @@ class PersonInfoManager:
try:
result = {}
for doc in db.person_info.find({field_name: {"$exists": True}}, {"person_id": 1, field_name: 1, "_id": 0}):
try:
value = doc[field_name]
if way(value):
@@ -229,7 +217,7 @@ class PersonInfoManager:
async def personal_habit_deduction(self):
"""启动个人信息推断,每天根据一定条件推断一次"""
try:
while 1:
await asyncio.sleep(60)
current_time = datetime.datetime.now()
logger.info(f"个人信息推断启动: {current_time.strftime('%Y-%m-%d %H:%M:%S')}")
@@ -237,8 +225,7 @@ class PersonInfoManager:
# "msg_interval"推断
msg_interval_map = False
msg_interval_lists = await self.get_specific_value_list(
"msg_interval_list", lambda x: isinstance(x, list) and len(x) >= 100
)
for person_id, msg_interval_list_ in msg_interval_lists.items():
try:
@@ -258,14 +245,14 @@ class PersonInfoManager:
log_dir.mkdir(parents=True, exist_ok=True)
plt.figure(figsize=(10, 6))
time_series = pd.Series(time_interval)
plt.hist(time_series, bins=50, density=True, alpha=0.4, color="pink", label="Histogram")
time_series.plot(kind="kde", color="mediumpurple", linewidth=1, label="Density")
plt.grid(True, alpha=0.2)
plt.xlim(0, 8000)
plt.title(f"Message Interval Distribution (User: {person_id[:8]}...)")
plt.xlabel("Interval (ms)")
plt.ylabel("Density")
plt.legend(framealpha=0.9, facecolor="white")
img_path = log_dir / f"interval_distribution_{person_id[:8]}.png"
plt.savefig(img_path)
plt.close()


@@ -12,6 +12,7 @@ relationship_config = LogConfig(
)
logger = get_module_logger("rel_manager", config=relationship_config)
class RelationshipManager:
def __init__(self):
self.positive_feedback_value = 0 # 正反馈系统
@@ -22,6 +23,7 @@ class RelationshipManager:
def mood_manager(self):
if self._mood_manager is None:
from ..moods.moods import MoodManager # 延迟导入
self._mood_manager = MoodManager.get_instance()
return self._mood_manager
@@ -58,8 +60,9 @@ class RelationshipManager:
def mood_feedback(self, value):
"""情绪反馈"""
mood_manager = self.mood_manager
mood_gain = (mood_manager.get_current_mood().valence) ** 2 * math.copysign(
1, value * mood_manager.get_current_mood().valence
)
value += value * mood_gain
logger.info(f"当前relationship增益系数{mood_gain:.3f}")
return value
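The gain formula above reads: the gain magnitude is the square of the current mood valence, and its sign is positive exactly when the relationship change and the mood point the same way. A self-contained arithmetic sketch:

```python
import math

def mood_feedback(value: float, valence: float) -> float:
    # |gain| = valence ** 2; sign(gain) = sign(value * valence)
    mood_gain = valence ** 2 * math.copysign(1, value * valence)
    return value + value * mood_gain

# A positive relationship change under a positive mood is amplified;
# the same change under a negative mood is dampened.
```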
@@ -67,8 +70,7 @@ class RelationshipManager:
def feedback_to_mood(self, mood_value):
"""对情绪的反馈"""
coefficient = self.gain_coefficient[abs(self.positive_feedback_value)]
if mood_value > 0 and self.positive_feedback_value > 0 or mood_value < 0 and self.positive_feedback_value < 0:
return mood_value * coefficient
else:
return mood_value / coefficient
@@ -106,7 +108,7 @@ class RelationshipManager:
"platform": chat_stream.user_info.platform,
"user_id": chat_stream.user_info.user_id,
"nickname": chat_stream.user_info.user_nickname,
"konw_time": int(time.time()),
}
old_value = await person_info_manager.get_value(person_id, "relationship_value")
old_value = self.ensure_float(old_value, person_id)
@@ -200,4 +202,5 @@ class RelationshipManager:
logger.warning(f"[关系管理] {person_id}值转换失败(原始值:{value}已重置为0")
return 0.0
relationship_manager = RelationshipManager()


@@ -31,10 +31,16 @@ class ScheduleGenerator:
def __init__(self):
# 使用离线LLM模型
self.llm_scheduler_all = LLM_request(
model=global_config.llm_reasoning,
temperature=global_config.SCHEDULE_TEMPERATURE,
max_tokens=7000,
request_type="schedule",
)
self.llm_scheduler_doing = LLM_request(
model=global_config.llm_normal,
temperature=global_config.SCHEDULE_TEMPERATURE,
max_tokens=2048,
request_type="schedule",
)
self.today_schedule_text = ""


@@ -2,7 +2,7 @@ import threading
import time
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Any, Dict, List
from src.common.logger import get_module_logger
from ...common.database import db
@@ -22,6 +22,7 @@ class LLMStatistics:
self.stats_thread = None
self.console_thread = None
self._init_database()
self.name_dict: Dict[str, List] = {}
def _init_database(self):
"""初始化数据库集合"""
@@ -137,16 +138,24 @@ class LLMStatistics:
# user_id = str(doc.get("user_info", {}).get("user_id", "unknown"))
chat_info = doc.get("chat_info", {})
user_info = doc.get("user_info", {})
message_time = doc.get("time", 0)
group_info = chat_info.get("group_info") if chat_info else {}
# print(f"group_info: {group_info}")
group_name = None
if group_info:
group_id = f"g{group_info.get('group_id')}"
group_name = group_info.get("group_name", f"{group_info.get('group_id')}")
if user_info and not group_name:
group_id = f"u{user_info['user_id']}"
group_name = user_info["user_nickname"]
if self.name_dict.get(group_id):
if message_time > self.name_dict.get(group_id)[1]:
self.name_dict[group_id] = [group_name, message_time]
else:
self.name_dict[group_id] = [group_name, message_time]
# print(f"group_name: {group_name}")
stats["messages_by_user"][user_id] += 1
stats["messages_by_chat"][group_id] += 1
return stats
@@ -187,7 +196,7 @@ class LLMStatistics:
tokens = stats["tokens_by_model"][model_name]
cost = stats["costs_by_model"][model_name]
output.append(
data_fmt.format(model_name[:30] + ".." if len(model_name) > 32 else model_name, count, tokens, cost)
)
output.append("")
@@ -221,8 +230,8 @@ class LLMStatistics:
# 添加聊天统计
output.append("群组统计:")
output.append(("群组名称 消息数量"))
for group_id, count in sorted(stats["messages_by_chat"].items()):
output.append(f"{self.name_dict[group_id][0][:32]:<32} {count:>10}")
return "\n".join(output)
@@ -250,7 +259,7 @@ class LLMStatistics:
tokens = stats["tokens_by_model"][model_name]
cost = stats["costs_by_model"][model_name]
output.append(
data_fmt.format(model_name[:30] + ".." if len(model_name) > 32 else model_name, count, tokens, cost)
)
output.append("")
@@ -284,8 +293,8 @@ class LLMStatistics:
# 添加聊天统计
output.append("群组统计:")
output.append(("群组名称 消息数量"))
for group_id, count in sorted(stats["messages_by_chat"].items()):
output.append(f"{self.name_dict[group_id][0][:32]:<32} {count:>10}")
return "\n".join(output)


@@ -32,7 +32,7 @@ SECTION_TRANSLATIONS = {
"response_spliter": "回复分割器",
"remote": "远程设置",
"experimental": "实验功能",
"model": "模型设置",
}
# 配置项的中文描述
@@ -41,16 +41,13 @@ CONFIG_DESCRIPTIONS = {
"bot.qq": "机器人的QQ号码",
"bot.nickname": "机器人的昵称",
"bot.alias_names": "机器人的别名列表",
# 群组设置
"groups.talk_allowed": "允许机器人回复消息的群号列表",
"groups.talk_frequency_down": "降低回复频率的群号列表",
"groups.ban_user_id": "禁止回复和读取消息的QQ号列表",
# 人格设置
"personality.personality_core": "人格核心描述建议20字以内",
"personality.personality_sides": "人格特点列表",
# 身份设置
"identity.identity_detail": "身份细节描述列表",
"identity.height": "身高(厘米)",
@@ -58,28 +55,23 @@ CONFIG_DESCRIPTIONS = {
"identity.age": "年龄",
"identity.gender": "性别",
"identity.appearance": "外貌特征",
# 日程设置
"schedule.enable_schedule_gen": "是否启用日程表生成",
"schedule.prompt_schedule_gen": "日程表生成提示词",
"schedule.schedule_doing_update_interval": "日程表更新间隔(秒)",
"schedule.schedule_temperature": "日程表温度建议0.3-0.6",
"schedule.time_zone": "时区设置",
# 平台设置
"platforms.nonebot-qq": "QQ平台适配器链接",
# 回复设置
"response.response_mode": "回复策略heart_flow心流reasoning推理",
"response.model_r1_probability": "主要回复模型使用概率",
"response.model_v3_probability": "次要回复模型使用概率",
# 心流设置
"heartflow.sub_heart_flow_update_interval": "子心流更新频率(秒)",
"heartflow.sub_heart_flow_freeze_time": "子心流冻结时间(秒)",
"heartflow.sub_heart_flow_stop_time": "子心流停止时间(秒)",
"heartflow.heart_flow_update_interval": "心流更新频率(秒)",
# 消息设置
"message.max_context_size": "获取的上下文数量",
"message.emoji_chance": "使用表情包的概率",
@@ -88,14 +80,12 @@ CONFIG_DESCRIPTIONS = {
"message.message_buffer": "是否启用消息缓冲器",
"message.ban_words": "禁用词列表",
"message.ban_msgs_regex": "禁用消息正则表达式列表",
# 意愿设置
"willing.willing_mode": "回复意愿模式",
"willing.response_willing_amplifier": "回复意愿放大系数",
"willing.response_interested_rate_amplifier": "回复兴趣度放大系数",
"willing.down_frequency_rate": "降低回复频率的群组回复意愿降低系数",
"willing.emoji_response_penalty": "表情包回复惩罚系数",
# 表情设置
"emoji.max_emoji_num": "表情包最大数量",
"emoji.max_reach_deletion": "达到最大数量时是否删除表情包",
@@ -103,7 +93,6 @@ CONFIG_DESCRIPTIONS = {
"emoji.auto_save": "是否保存表情包和图片",
"emoji.enable_check": "是否启用表情包过滤",
"emoji.check_prompt": "表情包过滤要求",
# 记忆设置
"memory.build_memory_interval": "记忆构建间隔(秒)",
"memory.build_memory_distribution": "记忆构建分布参数",
@@ -114,104 +103,90 @@ CONFIG_DESCRIPTIONS = {
"memory.memory_forget_time": "记忆遗忘时间(小时)",
"memory.memory_forget_percentage": "记忆遗忘比例",
"memory.memory_ban_words": "记忆禁用词列表",
# 情绪设置
"mood.mood_update_interval": "情绪更新间隔(秒)",
"mood.mood_decay_rate": "情绪衰减率",
"mood.mood_intensity_factor": "情绪强度因子",
# 关键词反应
"keywords_reaction.enable": "是否启用关键词反应功能",
# 中文错别字
"chinese_typo.enable": "是否启用中文错别字生成器",
"chinese_typo.error_rate": "单字替换概率",
"chinese_typo.min_freq": "最小字频阈值",
"chinese_typo.tone_error_rate": "声调错误概率",
"chinese_typo.word_replace_rate": "整词替换概率",
# 回复分割器
"response_spliter.enable_response_spliter": "是否启用回复分割器",
"response_spliter.response_max_length": "回复允许的最大长度",
"response_spliter.response_max_sentence_num": "回复允许的最大句子数",
# 远程设置
"remote.enable": "是否启用远程统计",
# 实验功能
"experimental.enable_friend_chat": "是否启用好友聊天",
"experimental.pfc_chatting": "是否启用PFC聊天",
# 模型设置
"model.llm_reasoning.name": "推理模型名称",
"model.llm_reasoning.provider": "推理模型提供商",
"model.llm_reasoning.pri_in": "推理模型输入价格",
"model.llm_reasoning.pri_out": "推理模型输出价格",
"model.llm_normal.name": "回复模型名称",
"model.llm_normal.provider": "回复模型提供商",
"model.llm_normal.pri_in": "回复模型输入价格",
"model.llm_normal.pri_out": "回复模型输出价格",
"model.llm_emotion_judge.name": "表情判断模型名称",
"model.llm_emotion_judge.provider": "表情判断模型提供商",
"model.llm_emotion_judge.pri_in": "表情判断模型输入价格",
"model.llm_emotion_judge.pri_out": "表情判断模型输出价格",
"model.llm_topic_judge.name": "主题判断模型名称",
"model.llm_topic_judge.provider": "主题判断模型提供商",
"model.llm_topic_judge.pri_in": "主题判断模型输入价格",
"model.llm_topic_judge.pri_out": "主题判断模型输出价格",
"model.llm_summary_by_topic.name": "概括模型名称",
"model.llm_summary_by_topic.provider": "概括模型提供商",
"model.llm_summary_by_topic.pri_in": "概括模型输入价格",
"model.llm_summary_by_topic.pri_out": "概括模型输出价格",
"model.moderation.name": "内容审核模型名称",
"model.moderation.provider": "内容审核模型提供商",
"model.moderation.pri_in": "内容审核模型输入价格",
"model.moderation.pri_out": "内容审核模型输出价格",
"model.vlm.name": "图像识别模型名称",
"model.vlm.provider": "图像识别模型提供商",
"model.vlm.pri_in": "图像识别模型输入价格",
"model.vlm.pri_out": "图像识别模型输出价格",
"model.embedding.name": "嵌入模型名称",
"model.embedding.provider": "嵌入模型提供商",
"model.embedding.pri_in": "嵌入模型输入价格",
"model.embedding.pri_out": "嵌入模型输出价格",
"model.llm_observation.name": "观察模型名称",
"model.llm_observation.provider": "观察模型提供商",
"model.llm_observation.pri_in": "观察模型输入价格",
"model.llm_observation.pri_out": "观察模型输出价格",
"model.llm_sub_heartflow.name": "子心流模型名称",
"model.llm_sub_heartflow.provider": "子心流模型提供商",
"model.llm_sub_heartflow.pri_in": "子心流模型输入价格",
"model.llm_sub_heartflow.pri_out": "子心流模型输出价格",
"model.llm_heartflow.name": "心流模型名称",
"model.llm_heartflow.provider": "心流模型提供商",
"model.llm_heartflow.pri_in": "心流模型输入价格",
"model.llm_heartflow.pri_out": "心流模型输出价格",
}
# 获取翻译
def get_translation(key):
return SECTION_TRANSLATIONS.get(key, key)
# 获取配置项描述
def get_description(key):
return CONFIG_DESCRIPTIONS.get(key, "")
# 获取根目录路径
def get_root_dir():
try:
# 获取当前脚本所在目录
if getattr(sys, "frozen", False):
# 如果是打包后的应用
current_dir = os.path.dirname(sys.executable)
else:
@@ -235,9 +210,11 @@ def get_root_dir():
# 返回当前目录作为备选
return os.getcwd()
# 配置文件路径
CONFIG_PATH = os.path.join(get_root_dir(), "config", "bot_config.toml")
# 保存配置
def save_config(config_data):
try:
@@ -266,6 +243,7 @@ def save_config(config_data):
print(f"保存配置失败: {e}")
return False
# 加载配置
def load_config():
try:
@@ -279,6 +257,7 @@ def load_config():
print(f"加载配置失败: {e}")
return {}
# 多行文本输入框
class ScrollableTextFrame(ctk.CTkFrame):
def __init__(self, master, initial_text="", height=100, width=400, **kwargs):
@@ -305,6 +284,7 @@ class ScrollableTextFrame(ctk.CTkFrame):
self.text_box.insert("1.0", text)
self.update_var()
# 配置UI
class ConfigUI(ctk.CTk):
def __init__(self):
@@ -430,7 +410,7 @@ class ConfigUI(ctk.CTk):
width=30,
command=self.show_search_dialog,
fg_color="transparent",
hover_color=("gray80", "gray30"),
)
search_btn.pack(side="right", padx=5, pady=5)
@@ -457,7 +437,7 @@ class ConfigUI(ctk.CTk):
text_color=("gray10", "gray90"),
anchor="w",
height=35,
command=lambda s=section: self.show_category(s),
)
btn.pack(fill="x", padx=5, pady=2)
self.category_buttons[section] = btn
@@ -484,18 +464,12 @@ class ConfigUI(ctk.CTk):
category_name = f"{category} ({get_translation(category)})"
# 添加标题
ctk.CTkLabel(self.content_frame, text=f"{category_name} 配置", font=("Arial", 16, "bold")).pack(
anchor="w", padx=10, pady=(5, 15)
)
# 添加配置项
self.add_config_section(self.content_frame, category, self.config_data[category])
def add_config_section(self, parent, section_path, section_data, indent=0):
# 递归添加配置项
@@ -514,12 +488,7 @@ class ConfigUI(ctk.CTk):
header_frame = ctk.CTkFrame(group_frame, fg_color=("gray85", "gray25"))
header_frame.pack(fill="x", padx=0, pady=0)
-label = ctk.CTkLabel(
-header_frame,
-text=f"{key}",
-font=("Arial", 13, "bold"),
-anchor="w"
-)
+label = ctk.CTkLabel(header_frame, text=f"{key}", font=("Arial", 13, "bold"), anchor="w")
label.pack(anchor="w", padx=10, pady=5)
# If there is a description, add a tooltip icon
@@ -536,12 +505,7 @@ class ConfigUI(ctk.CTk):
tipwindow.wm_geometry(f"+{x}+{y}")
tipwindow.lift()
-label = ctk.CTkLabel(
-tipwindow,
-text=text,
-justify="left",
-wraplength=300
-)
+label = ctk.CTkLabel(tipwindow, text=text, justify="left", wraplength=300)
label.pack(padx=5, pady=5)
# Auto-close
@@ -553,11 +517,7 @@ class ConfigUI(ctk.CTk):
# Add a tooltip icon after the title
tip_label = ctk.CTkLabel(
-header_frame,
-text="",
-font=("Arial", 12),
-text_color="light blue",
-width=20
+header_frame, text="", font=("Arial", 12), text_color="light blue", width=20
)
tip_label.pack(side="right", padx=5)
@@ -584,21 +544,11 @@ class ConfigUI(ctk.CTk):
if description:
label_text = f"{key}: ({description})"
-label = ctk.CTkLabel(
-label_frame,
-text=label_text,
-font=("Arial", 12),
-anchor="w"
-)
+label = ctk.CTkLabel(label_frame, text=label_text, font=("Arial", 12), anchor="w")
label.pack(anchor="w", padx=5 + indent * 10, pady=0)
# Add a hint label
-info_label = ctk.CTkLabel(
-label_frame,
-text="(列表格式: JSON)",
-font=("Arial", 9),
-text_color="gray50"
-)
+info_label = ctk.CTkLabel(label_frame, text="(列表格式: JSON)", font=("Arial", 9), text_color="gray50")
info_label.pack(anchor="w", padx=5 + indent * 10, pady=(0, 5))
# Determine the text box height from the number of list items
@@ -608,12 +558,7 @@ class ConfigUI(ctk.CTk):
json_str = json.dumps(value, ensure_ascii=False, indent=2)
# Use a multi-line text box
-text_frame = ScrollableTextFrame(
-frame,
-initial_text=json_str,
-height=list_height,
-width=550
-)
+text_frame = ScrollableTextFrame(frame, initial_text=json_str, height=list_height, width=550)
text_frame.pack(fill="x", padx=10 + indent * 10, pady=5)
self.config_vars[full_path] = (text_frame.text_var, "list")
@@ -635,10 +580,7 @@ class ConfigUI(ctk.CTk):
checkbox_text = f"{key} ({description})"
checkbox = ctk.CTkCheckBox(
-frame,
-text=checkbox_text,
-variable=var,
-command=lambda path=full_path: self.on_field_change(path)
+frame, text=checkbox_text, variable=var, command=lambda path=full_path: self.on_field_change(path)
)
checkbox.pack(anchor="w", padx=10 + indent * 10, pady=5)
@@ -652,12 +594,7 @@ class ConfigUI(ctk.CTk):
if description:
label_text = f"{key}: ({description})"
-label = ctk.CTkLabel(
-frame,
-text=label_text,
-font=("Arial", 12),
-anchor="w"
-)
+label = ctk.CTkLabel(frame, text=label_text, font=("Arial", 12), anchor="w")
label.pack(anchor="w", padx=10 + indent * 10, pady=(5, 0))
var = StringVar(value=str(value))
@@ -682,12 +619,7 @@ class ConfigUI(ctk.CTk):
if description:
label_text = f"{key}: ({description})"
-label = ctk.CTkLabel(
-frame,
-text=label_text,
-font=("Arial", 12),
-anchor="w"
-)
+label = ctk.CTkLabel(frame, text=label_text, font=("Arial", 12), anchor="w")
label.pack(anchor="w", padx=10 + indent * 10, pady=(5, 0))
var = StringVar(value=str(value))
@@ -696,16 +628,11 @@ class ConfigUI(ctk.CTk):
# Check the text length to decide the input widget's type and size
text_len = len(str(value))
-if text_len > 80 or '\n' in str(value):
+if text_len > 80 or "\n" in str(value):
# Use a multi-line text box for long or multi-line text
-text_height = max(80, min(str(value).count('\n') * 20 + 40, 150))
+text_height = max(80, min(str(value).count("\n") * 20 + 40, 150))
-text_frame = ScrollableTextFrame(
-frame,
-initial_text=str(value),
-height=text_height,
-width=550
-)
+text_frame = ScrollableTextFrame(frame, initial_text=str(value), height=text_height, width=550)
text_frame.pack(fill="x", padx=10 + indent * 10, pady=5)
self.config_vars[full_path] = (text_frame.text_var, "string")
@@ -751,7 +678,6 @@ class ConfigUI(ctk.CTk):
target[parts[-1]] = var.get()
updated = True
elif var_type == "number":
-# Get the original numeric type (int or float)
num_type = args[0] if args else int
new_value = num_type(var.get())
@@ -760,7 +686,6 @@ class ConfigUI(ctk.CTk):
updated = True
elif var_type == "list":
-# Parse the JSON string into a list
new_value = json.loads(var.get())
if json.dumps(target[parts[-1]], sort_keys=True) != json.dumps(new_value, sort_keys=True):
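The list-field logic above round-trips each list through JSON: `json.loads` parses the text box content back into a Python list, and canonical `json.dumps(..., sort_keys=True)` strings are compared to detect real changes. A hedged sketch of that change-detection idea (the helper name is illustrative, not part of the commit):

```python
import json


def parse_list_field(raw: str, current: list) -> tuple[list, bool]:
    """Parse text-box JSON back into a list and report whether it changed."""
    new_value = json.loads(raw)  # raises json.JSONDecodeError on bad input
    # sort_keys=True gives a canonical form, so a mere reordering of dict
    # keys does not register as a spurious change.
    changed = json.dumps(current, sort_keys=True) != json.dumps(new_value, sort_keys=True)
    return new_value, changed


value, changed = parse_list_field('[{"b": 2, "a": 1}]', [{"a": 1, "b": 2}])
print(changed)  # → False: same content, only key order differs
```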
@@ -841,11 +766,7 @@ class ConfigUI(ctk.CTk):
current_config = json.dumps(temp_config, sort_keys=True)
if current_config != self.original_config:
-result = messagebox.askyesnocancel(
-"未保存的更改",
-"有未保存的更改,是否保存?",
-icon="warning"
-)
+result = messagebox.askyesnocancel("未保存的更改", "有未保存的更改,是否保存?", icon="warning")
if result is None:  # Cancelled
return False
@@ -868,29 +789,17 @@ class ConfigUI(ctk.CTk):
about_window.geometry(f"+{x}+{y}")
# Content
-ctk.CTkLabel(
-about_window,
-text="麦麦配置修改器",
-font=("Arial", 16, "bold")
-).pack(pady=(20, 10))
+ctk.CTkLabel(about_window, text="麦麦配置修改器", font=("Arial", 16, "bold")).pack(pady=(20, 10))
-ctk.CTkLabel(
-about_window,
-text="用于修改MaiBot-Core的配置文件\n配置文件路径: config/bot_config.toml"
-).pack(pady=5)
+ctk.CTkLabel(about_window, text="用于修改MaiBot-Core的配置文件\n配置文件路径: config/bot_config.toml").pack(
+pady=5
+)
-ctk.CTkLabel(
-about_window,
-text="注意: 修改配置前请备份原始配置文件",
-text_color=("red", "light coral")
-).pack(pady=5)
+ctk.CTkLabel(about_window, text="注意: 修改配置前请备份原始配置文件", text_color=("red", "light coral")).pack(
+pady=5
+)
-ctk.CTkButton(
-about_window,
-text="确定",
-command=about_window.destroy,
-width=100
-).pack(pady=15)
+ctk.CTkButton(about_window, text="确定", command=about_window.destroy, width=100).pack(pady=15)
def on_closing(self):
"""Check for unsaved changes before closing the window"""
@@ -961,11 +870,9 @@ class ConfigUI(ctk.CTk):
backup_window.geometry(f"+{x}+{y}")
# Create the description label
-ctk.CTkLabel(
-backup_window,
-text="备份文件列表 (双击可恢复)",
-font=("Arial", 14, "bold")
-).pack(pady=(10, 5), padx=10, anchor="w")
+ctk.CTkLabel(backup_window, text="备份文件列表 (双击可恢复)", font=("Arial", 14, "bold")).pack(
+pady=(10, 5), padx=10, anchor="w"
+)
# Create the list frame
backup_frame = ctk.CTkScrollableFrame(backup_window, width=580, height=300)
@@ -981,27 +888,17 @@ class ConfigUI(ctk.CTk):
item_frame.pack(fill="x", padx=5, pady=5)
# Show backup file info
-ctk.CTkLabel(
-item_frame,
-text=f"{time_str}",
-font=("Arial", 12, "bold"),
-width=200
-).pack(side="left", padx=10, pady=10)
+ctk.CTkLabel(item_frame, text=f"{time_str}", font=("Arial", 12, "bold"), width=200).pack(
+side="left", padx=10, pady=10
+)
# File name
-name_label = ctk.CTkLabel(
-item_frame,
-text=filename,
-font=("Arial", 11)
-)
+name_label = ctk.CTkLabel(item_frame, text=filename, font=("Arial", 11))
name_label.pack(side="left", fill="x", expand=True, padx=5, pady=10)
# Restore button
restore_btn = ctk.CTkButton(
-item_frame,
-text="恢复",
-width=80,
-command=lambda path=filepath: self.restore_backup(path)
+item_frame, text="恢复", width=80, command=lambda path=filepath: self.restore_backup(path)
)
restore_btn.pack(side="right", padx=10, pady=10)
@@ -1010,12 +907,7 @@ class ConfigUI(ctk.CTk):
widget.bind("<Double-1>", lambda e, path=filepath: self.restore_backup(path))
# Close button
-ctk.CTkButton(
-backup_window,
-text="关闭",
-command=backup_window.destroy,
-width=100
-).pack(pady=10)
+ctk.CTkButton(backup_window, text="关闭", command=backup_window.destroy, width=100).pack(pady=10)
def restore_backup(self, backup_path):
"""Restore the configuration from a backup file"""
@@ -1027,7 +919,7 @@ class ConfigUI(ctk.CTk):
confirm = messagebox.askyesno(
"确认",
f"确定要从以下备份文件恢复配置吗?\n{os.path.basename(backup_path)}\n\n这将覆盖当前的配置!",
-icon="warning"
+icon="warning",
)
if not confirm:
@@ -1069,7 +961,9 @@ class ConfigUI(ctk.CTk):
search_frame.pack(fill="x", padx=10, pady=10)
search_var = StringVar()
-search_entry = ctk.CTkEntry(search_frame, placeholder_text="输入关键词搜索...", width=380, textvariable=search_var)
+search_entry = ctk.CTkEntry(
+search_frame, placeholder_text="输入关键词搜索...", width=380, textvariable=search_var
+)
search_entry.pack(side="left", padx=5, pady=5, fill="x", expand=True)
# Results list frame
@@ -1150,7 +1044,7 @@ class ConfigUI(ctk.CTk):
text=f"{full_path}{desc_text}",
font=("Arial", 11, "bold"),
anchor="w",
-wraplength=450
+wraplength=450,
)
path_label.pack(anchor="w", padx=10, pady=(5, 0), fill="x")
@@ -1160,11 +1054,7 @@ class ConfigUI(ctk.CTk):
value_str = value_str[:50] + "..."
value_label = ctk.CTkLabel(
-item_frame,
-text=f"值: {value_str}",
-font=("Arial", 10),
-anchor="w",
-wraplength=450
+item_frame, text=f"值: {value_str}", font=("Arial", 10), anchor="w", wraplength=450
)
value_label.pack(anchor="w", padx=10, pady=(0, 5), fill="x")
@@ -1174,7 +1064,7 @@ class ConfigUI(ctk.CTk):
text="转到",
width=60,
height=25,
-command=lambda s=section: self.goto_config_item(s, search_window)
+command=lambda s=section: self.goto_config_item(s, search_window),
)
goto_btn.pack(side="right", padx=10, pady=5)
@@ -1227,37 +1117,22 @@ class ConfigUI(ctk.CTk):
menu_window.geometry(f"+{x}+{y}")
# Create buttons
-ctk.CTkLabel(
-menu_window,
-text="配置导入导出",
-font=("Arial", 16, "bold")
-).pack(pady=(20, 10))
+ctk.CTkLabel(menu_window, text="配置导入导出", font=("Arial", 16, "bold")).pack(pady=(20, 10))
# Export button
export_btn = ctk.CTkButton(
-menu_window,
-text="导出配置到文件",
-command=lambda: self.export_config(menu_window),
-width=200
+menu_window, text="导出配置到文件", command=lambda: self.export_config(menu_window), width=200
)
export_btn.pack(pady=10)
# Import button
import_btn = ctk.CTkButton(
-menu_window,
-text="从文件导入配置",
-command=lambda: self.import_config(menu_window),
-width=200
+menu_window, text="从文件导入配置", command=lambda: self.import_config(menu_window), width=200
)
import_btn.pack(pady=10)
# Cancel button
-cancel_btn = ctk.CTkButton(
-menu_window,
-text="取消",
-command=menu_window.destroy,
-width=100
-)
+cancel_btn = ctk.CTkButton(menu_window, text="取消", command=menu_window.destroy, width=100)
cancel_btn.pack(pady=10)
def export_config(self, parent_window=None):
@@ -1277,7 +1152,7 @@ class ConfigUI(ctk.CTk):
title="导出配置",
filetypes=[("TOML 文件", "*.toml"), ("所有文件", "*.*")],
defaultextension=".toml",
-initialfile=default_filename
+initialfile=default_filename,
)
if not file_path:
@@ -1306,8 +1181,7 @@ class ConfigUI(ctk.CTk):
# Choose the file to import
file_path = filedialog.askopenfilename(
-title="导入配置",
-filetypes=[("TOML 文件", "*.toml"), ("所有文件", "*.*")]
+title="导入配置", filetypes=[("TOML 文件", "*.toml"), ("所有文件", "*.*")]
)
if not file_path:
@@ -1327,9 +1201,7 @@ class ConfigUI(ctk.CTk):
# Confirm the import
confirm = messagebox.askyesno(
-"确认导入",
-f"确定要导入此配置文件吗?\n{file_path}\n\n这将替换当前的配置!",
-icon="warning"
+"确认导入", f"确定要导入此配置文件吗?\n{file_path}\n\n这将替换当前的配置!", icon="warning"
)
if not confirm:
@@ -1354,6 +1226,7 @@ class ConfigUI(ctk.CTk):
messagebox.showerror("导入失败", f"导入配置失败: {e}")
return False
# Main entry point
def main():
try:
@@ -1365,6 +1238,7 @@ def main():
import tkinter as tk
from tkinter import messagebox
root = tk.Tk()
root.withdraw()
messagebox.showerror("程序错误", f"程序运行时发生错误:\n{e}")
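One idiom that recurs throughout the diff is binding loop variables as lambda defaults (`command=lambda s=section: ...`, `command=lambda path=filepath: ...`). Python closures bind late, so without the default every button callback would see the loop's final value; the default argument snapshots the value at definition time. A minimal sketch of the difference:

```python
# Late binding: every lambda closes over the same variable `i`,
# so all of them see its final value after the loop ends.
late = [lambda: i for i in range(3)]

# Default-argument binding (the style used in the diff): the current
# value of `i` is captured when each lambda is defined.
bound = [lambda i=i: i for i in range(3)]

print([f() for f in late])   # → [2, 2, 2]
print([f() for f in bound])  # → [0, 1, 2]
```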