Compare commits

..

3 Commits

SHA1 Message Date
8a0c1d6876 status 2025-07-29 09:53:02 +08:00
f0bf3b6184 Fix bug at video transition point 2025-07-29 02:50:08 +08:00
a96fc86d42 Handle ending detection 2025-07-29 01:36:40 +08:00
50 changed files with 1073 additions and 2790 deletions

View File

@ -1,44 +0,0 @@
name: Gitea Actions Demo
run-name: ${{ gitea.actor }} is testing out Gitea Actions 🚀
on:
  push:
    branches:
      - 'old_man'
env:
  BUILD: staging
jobs:
  Explore-Gitea-Actions:
    runs-on: stream9
    steps:
      - run: echo "🎉 The job was automatically triggered by a ${{ gitea.event_name }} event."
      - run: echo "🐧 This job is now running on a ${{ runner.os }} server hosted by Gitea!"
      - run: echo "🔎 The name of your branch is ${{ gitea.ref }} and your repository is ${{ gitea.repository }}."
      - name: Check out repository code
        uses: https://gitea.yantootech.com/neil/checkout@v4
      - run: echo "💡 The ${{ gitea.repository }} repository has been cloned to the runner."
      - run: echo "🖥️ The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          whoami
          uname -a
          pwd
          ls ${{ gitea.workspace }}
      - name: Build and push
        uses: https://gitea.yantootech.com/neil/build-push-action@v6
        with:
          push: false
          tags: emotion-digital-video:${{ gitea.run_id }}
      - name: Run docker
        run: |
          pwd
          if [ "$(docker ps -q -f name=^emotion-digital-video$)" ]; then
            docker stop emotion-digital-video
          fi
          docker run -d --rm --name emotion-digital-video \
            -v /usr/share/fonts/opentype/noto:/usr/share/fonts \
            -p 6900:3000 \
            emotion-digital-video:${{ gitea.run_id }}
      - run: echo "🍏 This job's status is ${{ job.status }}."

View File

@ -1,24 +0,0 @@
# Use the official Node.js runtime as the base image
FROM node:18-alpine
# Set the working directory
WORKDIR /app
# Copy package.json and yarn.lock
COPY package.json yarn.lock* ./
# Install project dependencies
RUN yarn install
# Copy project files
COPY . .
# Set environment variables
ENV HOST=0.0.0.0
ENV PORT=3000
# Expose the port
EXPOSE 3000
# Start the project
CMD ["yarn", "dev"]

View File

@ -1,129 +0,0 @@
# Default Video Playback Fix
## Problem
During performance optimization, the default videos `d-3s.mp4` and `s-1.mp4` stopped playing correctly.
## Root Causes
1. **Overly aggressive cache policy**: reducing the cache size from 3 to 2 caused important videos to be evicted too early
2. **Missing default-video start call**: `startCall()` never called `startDefaultVideoStream()`
3. **Insufficient protection of important videos**: the cleanup policy did not distinguish important videos from ordinary ones
## Fixes
### 1. Improve the cache policy
```javascript
// Before the fix
if (this.videoStreams.size >= 2) { // cache too small
  const firstKey = this.videoStreams.keys().next().value;
  // Evicted the first cached video directly, which could be an important one
}
// After the fix
if (this.videoStreams.size >= 4) { // larger cache
  const importantVideos = [this.defaultVideo, 's-1.mp4', 'd-3s.mp4'];
  const videoToRemove = cachedVideos.find(video => !importantVideos.includes(video));
  // Only evict non-important videos
}
```
### 2. Start the default video in `startCall()`
```javascript
async startCall() {
  // ... other code ...
  // Start the default video stream
  await this.startDefaultVideoStream();
  // Notify the server that the call has started
  this.socket.emit('call-started');
}
```
### 3. Improve the preload strategy
```javascript
async preloadCommonVideos() {
  const videosToPreload = new Set([]);
  // Add important videos (default and frequently used)
  videosToPreload.add(this.defaultVideo); // default video
  videosToPreload.add('s-1.mp4'); // frequently used video
  videosToPreload.add('d-3s.mp4'); // alternate default video
  // Add every video referenced in the video mapping
  Object.values(this.videoMapping).forEach(video => {
    videosToPreload.add(video);
  });
}
```
### 4. Tune the performance-monitor cleanup
```javascript
// If too many streams are cached, clean some up (but protect important videos)
if (this.videoStreams.size > 5) {
  const importantVideos = [this.defaultVideo, 's-1.mp4', 'd-3s.mp4'];
  // Only remove non-important videos
  const videosToRemove = cachedVideos.filter(video => !importantVideos.includes(video));
  videosToRemove.slice(0, 2).forEach(key => {
    this.cleanupVideoResources(key);
  });
}
```
### 5. Adjust the performance-test thresholds
```javascript
// Check the number of video streams
if (testResults.metrics.videoStreamsCount > 5) { // raised from 3 to 5
  // Report the issue
}
// Check the number of animation frames
if (testResults.metrics.animationFramesCount > 3) { // raised from 2 to 3
  // Report the issue
}
```
## Important Videos
The following videos are marked as important and are never cleaned up automatically:
- `d-3s.mp4` - the default video
- `s-1.mp4` - a frequently used video
- the current default video (`this.defaultVideo`)
## Test Support
A test feature was added to verify default-video playback:
1. **Test button**: a "Test Default Video" button
2. **Test method**: `testDefaultVideoPlayback()`
3. **Test flow** (a minimal sketch follows this list):
- Check whether the default video file exists
- Create the default video stream
- Attach it to the video element and start playback
- Stop the test automatically after 5 seconds
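A minimal sketch of that test flow, assuming the frontend app object exposes `defaultVideo` and a `createVideoStream()` helper returning a `MediaStream`; those helper names are illustrative, not the project's actual API. Only the `/videos/` static route and the `recordedVideo` element come from the repository.

```javascript
// Hypothetical sketch of the test flow described above.
async function testDefaultVideoPlayback(app) {
  const videoFile = app.defaultVideo || 'd-3s.mp4';

  // 1. Check that the default video file exists (served from /videos/)
  const head = await fetch(`/videos/${videoFile}`, { method: 'HEAD' });
  if (!head.ok) {
    console.warn(`Default video ${videoFile} not found`);
    return;
  }

  // 2. Create the default video stream (assumed helper)
  const stream = await app.createVideoStream(videoFile);

  // 3. Attach it to the video element and start playback
  const videoEl = document.getElementById('recordedVideo');
  videoEl.srcObject = stream;
  await videoEl.play();

  // 4. Stop the test automatically after 5 seconds
  setTimeout(() => {
    stream.getTracks().forEach(track => track.stop());
    console.log('Default video playback test finished');
  }, 5000);
}
```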
## Verification Steps
1. Start the application
2. Click "Start Audio Call"
3. Confirm that the default video starts playing
4. Click the "Test Default Video" button to verify the feature
5. Check the performance monitoring panel to confirm the number of video streams
## Expected Results
After the fix, the default video should:
1. **Play normally**: it starts automatically when a call begins
2. **Never be evicted**: important videos are not removed by the automatic cleanup
3. **Switch quickly**: preloading keeps video switching responsive
4. **Run stably**: the performance monitor no longer flags important videos as problems
## Monitoring Metrics
- **Video stream count**: normal range is 1-5
- **Important-video protection**: ensure `d-3s.mp4` and `s-1.mp4` are never cleaned up
- **Default video state**: the default video should be shown when a call starts

View File

@ -1,22 +0,0 @@
version: '3.8'
services:
  webrtc-app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - ./videos:/app/videos
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - webrtc-network
networks:
  webrtc-network:
    driver: bridge

View File

@ -1,3 +0,0 @@
{
"currentSceneIndex": 0
}

server.js (331 lines changed)
View File

@ -8,17 +8,7 @@ const { MessageHistory } = require('./src/message_history.js');
const app = express();
const server = http.createServer(app);
const io = socketIo(server, {
pingTimeout: 60000, // 60秒超时 (减少到1分钟)
pingInterval: 10000, // 10秒心跳间隔
upgradeTimeout: 30000, // 30秒升级超时
allowEIO3: true, // 允许Engine.IO v3客户端
transports: ['websocket', 'polling'], // 支持多种传输方式
cors: {
origin: "*",
methods: ["GET", "POST"]
}
});
const io = socketIo(server);
// 创建消息历史管理器
const messageHistory = new MessageHistory();
@ -95,120 +85,18 @@ app.delete('/api/messages/clear', async (req, res) => {
// 存储连接的客户端和他们的视频流状态
const connectedClients = new Map();
// 场景轮询系统
// 场景轮询系统 - 添加持久化
// Delete this line: const fs = require('fs'); // duplicate declaration, must be removed
const sceneStateFile = path.join(__dirname, 'scene_state.json');
// 从文件加载场景状态
function loadSceneState() {
try {
if (fs.existsSync(sceneStateFile)) {
const data = fs.readFileSync(sceneStateFile, 'utf8');
const state = JSON.parse(data);
currentSceneIndex = state.currentSceneIndex || 0;
console.log(`从文件加载场景状态: ${currentSceneIndex} (${scenes[currentSceneIndex].name})`);
} else {
console.log('场景状态文件不存在,使用默认值: 0');
}
} catch (error) {
console.error('加载场景状态失败:', error);
currentSceneIndex = 0;
}
}
// 保存场景状态到文件
function saveSceneState() {
try {
const state = { currentSceneIndex };
fs.writeFileSync(sceneStateFile, JSON.stringify(state, null, 2));
console.log(`场景状态已保存: ${currentSceneIndex}`);
} catch (error) {
console.error('保存场景状态失败:', error);
}
}
let currentSceneIndex = 0;
const scenes = [
{
name: '聊天',
defaultVideo: 'xnh-bd-2.mp4',
interactionVideo: 'xnh-sh.mp4',
tag: 'chat',
apiKey: 'bot-20250916100919-w8vxr', // 起床场景的API key
openingLines: [
"我来啦!今天您过得怎么样呀?有没有什么好玩的事儿跟我说说呀?",
"天冷了,您可得多穿点啊!"
]
}
];
// 获取当前场景
function getCurrentScene() {
return scenes[currentSceneIndex];
}
// 切换到下一个场景 - 改进版
function switchToNextScene() {
const previousIndex = currentSceneIndex;
const previousScene = scenes[previousIndex].name;
currentSceneIndex = (currentSceneIndex + 1) % scenes.length;
const newScene = getCurrentScene();
console.log(`场景切换: ${previousScene}(${previousIndex}) → ${newScene.name}(${currentSceneIndex})`);
// 保存状态到文件
saveSceneState();
return newScene;
}
// 在服务器启动时加载场景状态
async function initializeServer() {
try {
// 加载场景状态
loadSceneState();
await messageHistory.initialize();
console.log('消息历史初始化完成');
console.log(`当前场景: ${getCurrentScene().name} (索引: ${currentSceneIndex})`);
} catch (error) {
console.error('初始化服务器失败:', error);
}
}
// 视频映射配置 - 动态更新
function getVideoMapping() {
const currentScene = getCurrentScene();
return {
'defaultVideo': currentScene.defaultVideo,
'interactionVideo': currentScene.interactionVideo,
'tag': currentScene.tag
};
}
// 默认视频流配置 - 动态获取
function getDefaultVideo() {
return getCurrentScene().defaultVideo;
}
let currentScene = getCurrentScene();
// 视频映射配置
const videoMapping = {
// 'say-6s-m-e': '1-m.mp4',
'default': currentScene.defaultVideo,
'8-4-sh': currentScene.interactionVideo,
'tag': currentScene.tag
'default': 'd-3s.mp4',
// 'say-5s-amplitude': '2.mp4',
// 'say-5s-m-e': '4.mp4',
// 'say-5s-m-sw': 'd-0.mp4',
// 'say-3s-m-sw': '6.mp4',
// 'say-5s-m-sw': '5.mp4',
'say-3s-m-sw': 's-1.mp4',
};
// 默认视频流配置
const DEFAULT_VIDEO = currentScene.defaultVideo;
const DEFAULT_VIDEO = 'd-3s.mp4';
const INTERACTION_TIMEOUT = 10000; // 10秒后回到默认视频
// 获取视频列表
@ -225,88 +113,26 @@ app.get('/api/videos', (req, res) => {
});
});
// 获取当前场景信息的API接口
app.get('/api/current-scene', (req, res) => {
const scene = getCurrentScene();
res.json({
name: scene.name,
tag: scene.tag,
apiKey: scene.apiKey,
defaultVideo: scene.defaultVideo,
interactionVideo: scene.interactionVideo
});
});
// 获取视频映射
app.get('/api/video-mapping', (req, res) => {
const currentMapping = getVideoMapping();
const dynamicMapping = {
'default': currentMapping.defaultVideo,
'8-4-sh': currentMapping.interactionVideo,
'tag': currentMapping.tag
};
res.json({ mapping: dynamicMapping });
res.json({ mapping: videoMapping });
});
// 获取默认视频
app.get('/api/default-video', (req, res) => {
res.json({
defaultVideo: getDefaultVideo(),
defaultVideo: DEFAULT_VIDEO,
autoLoop: true
});
});
// 在现有的API接口后添加
app.get('/api/current-scene/opening-line', (req, res) => {
try {
const currentScene = getCurrentScene();
if (currentScene && currentScene.openingLines && currentScene.openingLines.length > 0) {
// 随机选择一个开场白
const randomIndex = Math.floor(Math.random() * currentScene.openingLines.length);
const selectedOpeningLine = currentScene.openingLines[randomIndex];
res.json({
success: true,
openingLine: selectedOpeningLine,
sceneName: currentScene.name,
sceneTag: currentScene.tag
});
} else {
res.json({
success: false,
message: '当前场景没有配置开场白'
});
}
} catch (error) {
console.error('获取开场白失败:', error);
res.status(500).json({
success: false,
message: '获取开场白失败',
error: error.message
});
}
});
// Socket.IO 连接处理
io.on('connection', (socket) => {
// 检查是否超过最大用户数
if (connectedClients.size >= MAX_USERS) {
console.log('拒绝连接,已达到最大用户数:', socket.id);
socket.emit('connection-rejected', {
reason: '系统当前只支持一位用户同时使用,请稍后再试'
});
socket.disconnect(true);
return;
}
console.log('用户连接:', socket.id);
activeUser = socket.id;
connectedClients.set(socket.id, {
socket: socket,
currentVideo: getDefaultVideo(),
isInInteraction: false,
hasTriggeredSceneSwitch: false // 添加这个标志
currentVideo: DEFAULT_VIDEO,
isInInteraction: false
});
// 处理WebRTC信令 - 用于传输视频流
@ -355,21 +181,21 @@ io.on('connection', (socket) => {
});
// 如果是交互类型,设置定时器回到默认视频
// if (type === 'text' || type === 'voice') {
// setTimeout(() => {
// console.log(`交互超时,用户 ${socket.id} 回到默认视频`);
// if (client) {
// client.currentVideo = getDefaultVideo();
// client.isInInteraction = false;
// }
// // 广播回到默认视频的指令
// io.emit('video-stream-switched', {
// videoFile: getDefaultVideo(),
// type: 'default',
// from: socket.id
// });
// }, INTERACTION_TIMEOUT);
// }
if (type === 'text' || type === 'voice') {
setTimeout(() => {
console.log(`交互超时,用户 ${socket.id} 回到默认视频`);
if (client) {
client.currentVideo = DEFAULT_VIDEO;
client.isInInteraction = false;
}
// 广播回到默认视频的指令
io.emit('video-stream-switched', {
videoFile: DEFAULT_VIDEO,
type: 'default',
from: socket.id
});
}, INTERACTION_TIMEOUT);
}
});
// 处理通话开始
@ -377,7 +203,7 @@ io.on('connection', (socket) => {
console.log('通话开始,用户:', socket.id);
const client = connectedClients.get(socket.id);
if (client) {
client.currentVideo = getDefaultVideo();
client.currentVideo = DEFAULT_VIDEO;
client.isInInteraction = false;
}
io.emit('call-started', { from: socket.id });
@ -436,89 +262,25 @@ io.on('connection', (socket) => {
console.log('用户请求回到默认视频:', socket.id);
const client = connectedClients.get(socket.id);
if (client) {
client.currentVideo = getDefaultVideo();
client.currentVideo = DEFAULT_VIDEO;
client.isInInteraction = false;
}
socket.emit('switch-video-stream', {
videoFile: getDefaultVideo(),
videoFile: DEFAULT_VIDEO,
type: 'default'
});
});
// 处理用户关闭连接事件
socket.on('user-disconnect', () => {
console.log('=== 场景切换开始 ===');
console.log('用户主动关闭连接:', socket.id);
console.log('切换前场景:', getCurrentScene().name, '(索引:', currentSceneIndex, ')');
// 切换到下一个场景
const newScene = switchToNextScene();
console.log('切换后场景:', newScene.name, '(索引:', currentSceneIndex, ')');
// 检查是否已经处理过场景切换
const client = connectedClients.get(socket.id);
if (client && client.hasTriggeredSceneSwitch) {
console.log('场景切换已处理,跳过重复触发');
return;
}
// 标记已处理场景切换
if (client) {
client.hasTriggeredSceneSwitch = true;
}
// 更新videoMapping
const newMapping = getVideoMapping();
videoMapping['default'] = newMapping.defaultVideo;
videoMapping['8-4-sh'] = newMapping.interactionVideo;
videoMapping['tag'] = newMapping.tag;
// 广播场景切换事件给所有客户端
io.emit('scene-switched', {
scene: newScene,
mapping: {
defaultVideo: newMapping.defaultVideo,
interactionVideo: newMapping.interactionVideo,
tag: newMapping.tag,
'default': newMapping.defaultVideo,
'8-4-sh': newMapping.interactionVideo
},
from: socket.id
});
});
// 断开连接
socket.on('disconnect', async () => {
socket.on('disconnect', () => {
console.log('用户断开连接:', socket.id);
const client = connectedClients.get(socket.id);
if (client) {
// 广播用户离开事件
socket.broadcast.emit('user-disconnected', {
id: socket.id,
username: client.username
});
}
connectedClients.delete(socket.id);
// 清空聊天记录
try {
await messageHistory.clearHistory();
console.log('断开连接后已清空 chat_history.json');
} catch (err) {
console.error('清空聊天记录失败:', err);
}
// 清除活跃用户
if (activeUser === socket.id) {
activeUser = null;
console.log('活跃用户已清除,系统现在可供新用户使用');
}
});
});
// 启动服务器
const PORT = process.env.PORT || 3000;
server.listen(PORT, '0.0.0.0', async () => {
server.listen(PORT, async () => {
console.log(`服务器运行在端口 ${PORT}`);
await initializeServer();
});
@ -526,38 +288,3 @@ server.listen(PORT, '0.0.0.0', async () => {
// 导出消息历史管理器供其他模块使用
module.exports = { messageHistory };
console.log(`访问 http://localhost:${PORT} 开始使用`);
// 在现有的代码的基础上添加以下内容
// 在 connectedClients 定义后立即添加约第97行
let activeUser = null; // 当前活跃用户
const MAX_USERS = 1; // 最大用户数
// 在静态文件中间件之前添加主页路由检查约第37行之后
app.use(cors());
app.use(express.json());
// 主页路由 - 必须在静态文件服务之前
app.get('/', (req, res) => {
if (connectedClients.size >= MAX_USERS) {
// 如果已有用户在线,重定向到等待页面
res.sendFile(path.join(__dirname, 'src', 'waiting.html'));
} else {
// 否则正常访问主页
res.sendFile(path.join(__dirname, 'src', 'index.html'));
}
});
// 检查可用性API
app.get('/api/check-availability', (req, res) => {
const available = connectedClients.size < MAX_USERS;
res.json({
available,
currentUsers: connectedClients.size,
maxUsers: MAX_USERS
});
});
// 静态文件服务 - 必须在主页路由之后
app.use(express.static('src'));
app.use('/videos', express.static('videos'));

Binary file not shown (before: 3.1 MiB)

Binary file not shown (before: 2.7 MiB)

View File

@ -3,7 +3,6 @@
class AudioProcessor {
constructor(options = {}) {
this.audioContext = null;
this.stream = null; // 添加这一行
this.isRecording = false;
this.audioChunks = [];
@ -312,14 +311,9 @@ class AudioProcessor {
}
// 开始录音
async startRecording(existingStream = null) {
async startRecording() {
try {
// 如果有外部提供的音频流,使用它;否则获取新的
if (existingStream) {
this.stream = existingStream;
console.log('使用外部提供的音频流');
} else {
this.stream = await navigator.mediaDevices.getUserMedia({
const stream = await navigator.mediaDevices.getUserMedia({
audio: {
sampleRate: 16000,
channelCount: 1,
@ -327,14 +321,12 @@ class AudioProcessor {
noiseSuppression: true
}
});
console.log('获取新的音频流');
}
this.audioContext = new (window.AudioContext || window.webkitAudioContext)({
sampleRate: 16000
});
const source = this.audioContext.createMediaStreamSource(this.stream);
const source = this.audioContext.createMediaStreamSource(stream);
const processor = this.audioContext.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (event) => {
@ -351,10 +343,6 @@ class AudioProcessor {
source.connect(processor);
processor.connect(this.audioContext.destination);
// 保存处理器引用以便后续清理
this.processor = processor;
this.source = source;
this.isRecording = true;
this.onStatusUpdate('等待语音输入...', 'ready');
@ -374,34 +362,8 @@ class AudioProcessor {
// 停止录音
stopRecording() {
console.log('开始停止录音...');
// 断开音频节点连接
if (this.source) {
this.source.disconnect();
this.source = null;
}
if (this.processor) {
this.processor.disconnect();
this.processor = null;
}
// 停止所有音频轨道
if (this.stream) {
this.stream.getTracks().forEach(track => {
track.stop();
console.log(`停止音频轨道: ${track.label}`);
});
this.stream = null;
}
if (this.audioContext) {
this.audioContext.close().then(() => {
console.log('AudioContext已关闭');
}).catch(err => {
console.error('关闭AudioContext时出错:', err);
});
this.audioContext.close();
this.audioContext = null;
}
@ -415,20 +377,12 @@ class AudioProcessor {
this.handleSpeechEnd();
}
// 重置所有状态
this.isRecording = false;
this.isSpeaking = false;
this.audioBuffer = [];
this.audioChunks = [];
this.consecutiveFramesCount = 0;
this.frameBuffer = [];
// 重置校准状态,确保下次启动时重新校准
this.noiseCalibrationSamples = [];
this.isCalibrated = false;
this.onStatusUpdate('录音已完全停止', 'stopped');
console.log('录音已完全停止,所有资源已释放');
this.onStatusUpdate('录音已停止', 'stopped');
console.log('录音已停止');
}
// 获取录音状态

Binary file not shown (before: 387 KiB)

View File

@ -2,7 +2,7 @@
import { requestLLMStream } from './llm_stream.js';
import { requestMinimaxi } from './minimaxi_stream.js';
import { getLLMConfig, getLLMConfigByScene, getMinimaxiConfig, getAudioConfig, validateConfig } from './config.js';
import { getLLMConfig, getMinimaxiConfig, getAudioConfig, validateConfig } from './config.js';
// 防止重复播放的标志
let isPlaying = false;
@ -26,13 +26,12 @@ async function initializeHistoryMessage(recentCount = 5) {
const data = await response.json();
historyMessage = data.messages || [];
isInitialized = true;
console.log("历史消息初始化完成:", historyMessage.length, "条消息", historyMessage);
console.log("历史消息初始化完成:", historyMessage.length, "条消息");
return historyMessage;
} catch (error) {
console.error('获取历史消息失败,使用默认格式:', error);
historyMessage = [
// { role: 'system', content: 'You are a helpful assistant.' }
{ role: 'system', content: 'You are a helpful assistant.' }
];
isInitialized = true;
return historyMessage;
@ -43,7 +42,7 @@ async function initializeHistoryMessage(recentCount = 5) {
function getCurrentHistoryMessage() {
if (!isInitialized) {
console.warn('历史消息未初始化,返回默认消息');
return [];
return [{ role: 'system', content: 'You are a helpful assistant.' }];
}
return [...historyMessage]; // 返回副本,避免外部修改
}
@ -73,26 +72,19 @@ function updateHistoryMessage(userInput, assistantResponse) {
// 保存消息到服务端
async function saveMessage(userInput, assistantResponse) {
try {
// 验证参数是否有效
if (!userInput || !userInput.trim() || !assistantResponse || !assistantResponse.trim()) {
console.warn('跳过保存消息:用户输入或助手回复为空');
return;
}
const response = await fetch('/api/messages/save', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
userInput: userInput.trim(),
assistantResponse: assistantResponse.trim()
userInput,
assistantResponse
})
});
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
throw new Error(`保存消息失败: ${response.status} ${errorData.error || response.statusText}`);
throw new Error('保存消息失败');
}
console.log('消息已保存到服务端');
@ -104,7 +96,7 @@ async function saveMessage(userInput, assistantResponse) {
async function chatWithAudioStream(userInput) {
// 确保历史消息已初始化
if (!isInitialized) {
await initializeHistoryMessage(100);
await initializeHistoryMessage();
}
// 验证配置
@ -114,19 +106,16 @@ async function chatWithAudioStream(userInput) {
console.log('用户输入:', userInput);
// 获取当前场景对应的配置
const llmConfig = await getLLMConfigByScene();
// 获取配置
const llmConfig = getLLMConfig();
const minimaxiConfig = getMinimaxiConfig();
const audioConfig = getAudioConfig();
console.log(`当前场景: ${llmConfig.sceneName} (${llmConfig.sceneTag})`);
console.log(`使用API Key: ${llmConfig.model}...`);
// 清空音频队列
audioQueue = [];
// 定义段落处理函数
const handleSegment = async (segment, textPlay) => {
const handleSegment = async (segment) => {
console.log('\n=== 处理文本段落 ===');
console.log('段落内容:', segment);
@ -145,7 +134,6 @@ async function chatWithAudioStream(userInput) {
audio_setting: audioConfig.audioSetting,
},
stream: true,
textPlay: textPlay
});
// 将音频添加到播放队列
@ -197,7 +185,7 @@ async function chatWithAudioStream(userInput) {
}
// 导出初始化函数,供外部调用
export { chatWithAudioStream, initializeHistoryMessage, getCurrentHistoryMessage, saveMessage, updateHistoryMessage, prependIntroRole };
export { chatWithAudioStream, initializeHistoryMessage, getCurrentHistoryMessage };
// 处理音频播放队列
async function processAudioQueue() {
@ -323,18 +311,3 @@ async function playAudioStreamNode(audioHex) {
// export { chatWithAudioStream, playAudioStream, playAudioStreamNode, initializeHistoryMessage, getCurrentHistoryMessage };
// 在历史消息顶部插入“我是你的roleName / 好的roleName。”开场提示
function prependIntroRole(roleName) {
if (!roleName) return;
const introUser = { role: 'user', content: `我是你的${roleName}` };
const introAssistant = { role: 'assistant', content: `好的,${roleName}` };
const hasIntro = historyMessage.slice(0, 2).some(m =>
m.content === introUser.content || m.content === introAssistant.content
);
if (!hasIntro) {
// 先插助手,再插用户,确保用户消息在最顶部
historyMessage.unshift(introAssistant);
historyMessage.unshift(introUser);
}
}

src/config.example.js (new file, 94 lines)
View File

@ -0,0 +1,94 @@
// 示例配置文件 - 请复制此文件为 config.js 并填入实际的API密钥
export const config = {
// LLM API配置
llm: {
apiKey: 'your_ark_api_key_here', // 请替换为实际的ARK API密钥
model: 'bot-20250720193048-84fkp',
},
// Minimaxi API配置
minimaxi: {
apiKey: 'your_minimaxi_api_key_here', // 请替换为实际的Minimaxi API密钥
groupId: 'your_minimaxi_group_id_here', // 请替换为实际的Minimaxi Group ID
},
// 音频配置
audio: {
model: 'speech-02-hd',
voiceSetting: {
voice_id: 'yantu-qinggang',
speed: 1,
vol: 1,
pitch: 0,
emotion: 'happy',
},
audioSetting: {
sample_rate: 32000,
bitrate: 128000,
format: 'mp3',
},
},
// 系统配置
system: {
language_boost: 'auto',
output_format: 'hex',
stream: true,
},
};
// 验证配置是否完整
export function validateConfig() {
const requiredFields = [
'llm.apiKey',
'llm.model',
'minimaxi.apiKey',
'minimaxi.groupId'
];
const missingFields = [];
for (const field of requiredFields) {
const keys = field.split('.');
let value = config;
for (const key of keys) {
value = value[key];
if (!value) break;
}
if (!value || value === 'your_ark_api_key_here' || value === 'your_minimaxi_api_key_here' || value === 'your_minimaxi_group_id_here') {
missingFields.push(field);
}
}
if (missingFields.length > 0) {
console.warn('配置不完整,请检查以下字段:', missingFields);
return false;
}
return true;
}
// 获取配置的便捷方法
export function getLLMConfig() {
return {
apiKey: config.llm.apiKey,
model: config.llm.model,
};
}
export function getMinimaxiConfig() {
return {
apiKey: config.minimaxi.apiKey,
groupId: config.minimaxi.groupId,
};
}
export function getAudioConfig() {
return {
model: config.audio.model,
voiceSetting: config.audio.voiceSetting,
audioSetting: config.audio.audioSetting,
...config.system,
};
}

View File

@ -3,7 +3,7 @@ export const config = {
// LLM API配置
llm: {
apiKey: 'd012651b-a65b-4b13-8ff3-cc4ff3a29783', // 请替换为实际的API密钥
model: 'bot-20250916100919-w8vxr',
model: 'bot-20250720193048-84fkp',
},
// Minimaxi API配置
@ -16,7 +16,7 @@ export const config = {
audio: {
model: 'speech-02-hd',
voiceSetting: {
voice_id: 'yantu-old-man-demo-xnh3',
voice_id: 'yantu-qinggang-2',
speed: 1,
vol: 1,
pitch: 0,
@ -70,30 +70,11 @@ export function validateConfig() {
}
// 获取配置的便捷方法
export function getLLMConfig(sceneApiKey = null) {
return {
apiKey: config.llm.apiKey, // 如果提供了场景API key则使用它
model: sceneApiKey || config.llm.model,
};
}
// 新增根据场景获取LLM配置
export async function getLLMConfigByScene() {
try {
const response = await fetch('/api/current-scene');
const sceneData = await response.json();
export function getLLMConfig() {
return {
apiKey: config.llm.apiKey,
model: sceneData.apiKey,
sceneTag: sceneData.tag,
sceneName: sceneData.name
model: config.llm.model,
};
} catch (error) {
console.warn('获取场景配置失败,使用默认配置:', error);
return getLLMConfig(); // 回退到默认配置
}
}
export function getMinimaxiConfig() {

View File

@ -1,117 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Yantootech</title>
<script src="https://cdn.tailwindcss.com"></script>
<link href="https://unpkg.com/aos@2.3.1/dist/aos.css" rel="stylesheet">
<script src="https://unpkg.com/aos@2.3.1/dist/aos.js"></script>
<script src="https://cdn.jsdelivr.net/npm/feather-icons/dist/feather.min.js"></script>
<script src="https://unpkg.com/feather-icons"></script>
<!-- <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r134/three.min.js"></script> -->
<!-- <script src="https://cdn.jsdelivr.net/npm/vanta@latest/dist/vanta.globe.min.js"></script> -->
<style>
body {
overflow-x: hidden;
}
.avatar-hover {
transition: all 0.3s ease;
}
.avatar-hover:hover {
transform: translateY(-5px);
box-shadow: 0 20px 25px -5px rgba(132, 204, 22, 0.3), 0 10px 10px -5px rgba(132, 204, 22, 0.1);
}
#vanta-bg {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
z-index: -1;
pointer-events: none;
}
</style>
</head>
<body class="min-h-screen bg-gray-900 text-white">
<div id="vanta-bg"></div>
<main class="container mx-auto px-4 py-12 md:py-24 flex flex-col items-center justify-center min-h-screen">
<div class="text-center mb-16" data-aos="fade-down">
<h1 class="text-4xl md:text-6xl font-bold mb-4 bg-clip-text text-transparent bg-gradient-to-r from-green-400 to-purple-500">
选择你的场景
</h1>
<!-- <p class="text-lg md:text-xl text-gray-300 max-w-2xl mx-auto">
Explore the decentralized future with our curated web3 experiences
</p> -->
</div>
<div class="relative w-full max-w-4xl mx-auto">
<div class="absolute inset-0 flex items-center justify-center">
<div class="w-full h-full max-w-md mx-auto opacity-20">
<svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg">
<path fill="#4F46E5" d="M45.8,-51.1C58.5,-40.1,67.7,-24.9,70.6,-7.7C73.5,9.5,70.1,28.7,57.8,42.1C45.5,55.5,24.3,63.1,3.8,59.3C-16.7,55.5,-33.4,40.3,-46.1,26.9C-58.8,13.5,-67.5,1.9,-66.5,-10.1C-65.5,-22.1,-54.8,-34.5,-41.5,-45.3C-28.2,-56.1,-12.4,-65.4,3.2,-68.6C18.8,-71.8,37.6,-68.9,45.8,-51.1Z" transform="translate(100 100)" />
</svg>
</div>
</div>
<div class="flex flex-col md:flex-row items-center justify-center gap-8 md:gap-16 w-full" data-aos="fade-up">
<a href="/old.html" class="group block cursor-pointer focus:outline-none">
<div class="avatar-hover bg-gray-800 bg-opacity-60 backdrop-blur-md rounded-full p-4 border-2 border-green-500 transition-all duration-300 group-hover:border-purple-500 md:group-hover:border-purple-500 group-active:border-purple-500">
<div class="w-32 h-32 md:w-40 md:h-40 rounded-full overflow-hidden border-4 border-green-400 group-hover:border-purple-400 md:group-hover:border-purple-400 group-active:border-purple-400 transition-all duration-300">
<img src="tx.png" alt="Dashboard" class="w-full h-full object-cover">
</div>
<h3 class="mt-4 text-xl font-semibold text-center group-hover:text-purple-400 md:group-hover:text-purple-400 group-active:text-purple-400 transition-colors duration-300">老人陪伴</h3>
</div>
</a>
<a href="https://medicine.yantootech.com" class="group block cursor-pointer focus:outline-none">
<div class="avatar-hover bg-gray-800 bg-opacity-60 backdrop-blur-md rounded-full p-4 border-2 border-purple-500 transition-all duration-300 group-hover:border-green-500 md:group-hover:border-green-500 group-active:border-green-500">
<div class="w-32 h-32 md:w-40 md:h-40 rounded-full overflow-hidden border-4 border-purple-400 group-hover:border-green-400 md:group-hover:border-green-400 group-active:border-green-400 transition-all duration-300">
<img src="nv.png" alt="Explore" class="w-full h-full object-cover">
</div>
<h3 class="mt-4 text-xl font-semibold text-center group-hover:text-green-400 md:group-hover:text-green-400 group-active:text-green-400 transition-colors duration-300">医疗咨询</h3>
</div>
</a>
</div>
</div>
<div class="mt-16 text-center" data-aos="fade-up" data-aos-delay="200">
<p class="text-gray-400 text-sm md:text-base max-w-md mx-auto">
A Little Young Boy And Cartoon Doctor
</p>
<!-- <button class="mt-4 px-6 py-3 bg-gradient-to-r from-green-500 to purple-600 rounded-full font-medium hover:from-green-600 hover:to-purple-700 transition-all duration-300 flex items-center mx-auto">
<i data-feather="lock" class="mr-2 w-5 h-5"></i> Connect Wallet
</button> -->
</div>
</main>
<!-- <script>
window.addEventListener('DOMContentLoaded', function () {
if (window.VANTA && document.querySelector('#vanta-bg')) {
VANTA.GLOBE({
el: "#vanta-bg",
mouseControls: true,
touchControls: true,
gyroControls: false,
minHeight: 200.00,
minWidth: 200.00,
scale: 1.00,
scaleMobile: 1.00,
color: 0x3f83f8,
backgroundColor: 0x111827,
size: 0.8
});
}
});
</script> -->
<script>
AOS.init({
duration: 800,
easing: 'ease-in-out',
once: true
});
feather.replace();
</script>
</body>
</html>

src/debug_audio.js (new file, 26 lines)
View File

@ -0,0 +1,26 @@
// 调试音频数据
function debugAudioData(audioHex) {
console.log('=== 音频数据调试 ===');
console.log('音频数据长度:', audioHex.length);
console.log('音频数据前100个字符:', audioHex.substring(0, 100));
console.log('音频数据后100个字符:', audioHex.substring(audioHex.length - 100));
// 检查是否有重复模式
const halfLength = Math.floor(audioHex.length / 2);
const firstHalf = audioHex.substring(0, halfLength);
const secondHalf = audioHex.substring(halfLength);
if (firstHalf === secondHalf) {
console.log('⚠️ 警告:音频数据可能是重复的!');
} else {
console.log('✅ 音频数据没有重复');
}
}
// 如果在浏览器环境中运行
if (typeof window !== 'undefined') {
window.debugAudioData = debugAudioData;
console.log('音频调试函数已挂载到 window.debugAudioData');
}
export { debugAudioData };

Binary file not shown (before: 81 KiB)

View File

@ -2,527 +2,71 @@
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<title>Soulmate In Parallels - 壹和零人工智能</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>WebRTC 音频通话</title>
<link rel="stylesheet" href="styles.css">
<link rel="icon" type="image/png" sizes="48x48" href="favicon.png" />
<style>
/* 全屏视频样式 */
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html, body {
height: 100%;
overflow: hidden;
background: linear-gradient(135deg, #87CEEB 0%, #B0E0E6 100%); /* 浅蓝色渐变背景 */
}
.container {
width: 100vw;
height: 100vh;
margin: 0;
padding: 0;
display: flex;
flex-direction: column;
position: relative;
}
.main-content {
flex: 1;
background: transparent;
border-radius: 0;
padding: 0;
box-shadow: none;
width: 100%;
height: 100%;
display: flex;
flex-direction: column;
}
.recorded-video-section {
flex: 1;
display: flex;
align-items: center;
justify-content: center;
width: 100%;
height: 100%;
position: relative;
/* 确保视频区域固定高度并居中 */
min-height: 100vh;
max-height: 100vh;
}
/* 视频容器样式 - 支持双缓冲固定9:16比例 */
.video-container {
position: relative;
width: 56.25vh; /* 9:16比例与视频宽度保持一致 */
height: 100vh;
overflow: hidden;
display: flex;
align-items: center;
justify-content: center;
margin: 0 auto; /* 水平居中 */
}
#recordedVideo, #recordedVideoBuffer {
position: absolute;
width: 56.25vh; /* 9:16比例高度为100vh时宽度为100vh * 9/16 = 56.25vh */
height: 100vh;
object-fit: cover;
border-radius: 0;
box-shadow: none;
transition: opacity 0.5s ease-in-out;
/* 确保视频始终居中 */
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
}
/* 主视频默认显示 */
#recordedVideo {
opacity: 1;
z-index: 2;
}
/* 缓冲视频默认隐藏 */
#recordedVideoBuffer {
opacity: 0;
z-index: 1;
}
/* 切换状态 */
#recordedVideo.switching {
opacity: 0;
}
#recordedVideoBuffer.switching {
opacity: 1;
}
/* 加载状态 */
.video-loading {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
z-index: 10;
color: white;
font-size: 18px;
opacity: 0;
transition: opacity 0.3s ease;
}
.video-loading.show {
opacity: 1;
}
/* 等待连接提示样式 */
.connection-waiting {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
z-index: 20;
color: white;
font-size: 18px;
text-align: center;
background: rgba(0, 0, 0, 0.7);
padding: 30px;
border-radius: 15px;
backdrop-filter: blur(10px);
transition: opacity 0.3s ease;
}
.connection-waiting.show {
opacity: 1;
}
/* 加载动画 */
.loading-spinner {
width: 40px;
height: 40px;
border: 3px solid rgba(255, 255, 255, 0.3);
border-top: 3px solid white;
border-radius: 50%;
animation: spin 1s linear infinite;
margin: 0 auto 10px;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
/* 响应式设计 - 确保在不同屏幕尺寸下视频容器保持9:16比例 */
@media (max-width: 768px) {
.video-container {
height: 100vh;
width: 56.25vh; /* 9:16比例与视频宽度保持一致 */
}
#recordedVideo, #recordedVideoBuffer {
width: 56.25vh; /* 9:16比例 */
height: 100vh;
object-fit: cover;
}
}
@media (min-width: 769px) {
.video-container {
height: 100vh;
width: 56.25vh; /* 9:16比例与视频宽度保持一致 */
}
#recordedVideo, #recordedVideoBuffer {
width: 56.25vh; /* 9:16比例 */
height: 100vh;
object-fit: cover;
}
}
/* 横屏模式优化 */
@media (orientation: landscape) and (max-height: 500px) {
.video-container {
height: 100vh;
width: 56.25vh; /* 9:16比例与视频宽度保持一致 */
}
.controls {
bottom: 20px;
}
}
/* 竖屏模式优化 */
@media (orientation: portrait) {
.video-container {
height: 100vh;
width: 56.25vh; /* 9:16比例与视频宽度保持一致 */
}
}
.controls {
position: absolute;
bottom: 50px;
left: 50%;
transform: translateX(-50%);
z-index: 10;
display: flex !important;
flex-direction: row !important;
justify-content: center;
align-items: center;
gap: 20px;
}
/* 确保移动端也保持同一行 */
@media (max-width: 768px) {
.controls {
flex-direction: row !important;
gap: 15px;
}
}
#startButton {
width: 60px;
height: 60px;
border-radius: 50%;
background: rgba(34, 197, 94, 0.9);
backdrop-filter: blur(10px);
border: none;
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
transition: all 0.3s ease;
box-shadow: 0 4px 15px rgba(34, 197, 94, 0.3);
min-width: auto;
padding: 15px 30px;
font-size: 1.1rem;
border-radius: 25px;
min-width: 200px;
}
#startButton:hover:not(:disabled) {
background: rgba(22, 163, 74, 0.95);
transform: scale(1.1);
box-shadow: 0 6px 20px rgba(34, 197, 94, 0.5);
}
#startButton.connecting {
background: rgba(255, 193, 7, 0.9);
cursor: not-allowed;
}
#startButton.connecting:hover {
background: rgba(255, 193, 7, 0.9);
transform: none;
}
#startButton.calling {
background: rgba(255, 193, 7, 0.9);
animation: pulse 2s infinite;
}
#startButton.calling:hover {
background: rgba(255, 193, 7, 0.95);
transform: scale(1.05);
}
@keyframes pulse {
0% {
box-shadow: 0 4px 15px rgba(255, 193, 7, 0.3);
}
50% {
box-shadow: 0 6px 25px rgba(255, 193, 7, 0.6);
}
100% {
box-shadow: 0 4px 15px rgba(255, 193, 7, 0.3);
}
}
.audio-status {
position: absolute;
top: 20px;
left: 50%;
transform: translateX(-50%);
background: rgba(0, 0, 0, 0.7);
color: white;
padding: 8px 16px;
border-radius: 20px;
font-size: 14px;
z-index: 1000;
transition: all 0.3s ease;
}
.audio-status.connecting {
background: rgba(255, 193, 7, 0.9);
color: #000;
}
.audio-status.connected {
background: rgba(40, 167, 69, 0.9);
color: white;
}
.audio-status.error {
background: rgba(220, 53, 69, 0.9);
color: white;
}
#startButton svg {
width: 24px;
height: 24px;
fill: white;
}
#startButton:disabled {
opacity: 0.5;
cursor: not-allowed;
}
#stopButton {
width: 60px;
height: 60px;
border-radius: 50%;
background: rgba(220, 53, 69, 0.9);
backdrop-filter: blur(10px);
border: none;
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
transition: all 0.3s ease;
box-shadow: 0 4px 15px rgba(220, 53, 69, 0.3);
padding: 0; /* 确保没有内边距影响居中 */
}
#stopButton:hover:not(:disabled) {
background: rgba(200, 35, 51, 0.95);
transform: scale(1.1);
}
#stopButton svg {
width: 24px;
height: 24px;
display: block; /* 确保SVG作为块级元素 */
margin: auto; /* 额外的居中保证 */
}
#stopButton:disabled {
opacity: 0.5;
cursor: not-allowed;
}
/* 头像样式 - 确保显示 */
.avatar-container {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
z-index: 15; /* 提高z-index确保在视频上方 */
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
transition: opacity 0.3s ease;
opacity: 1; /* 确保默认显示 */
}
.avatar-container.hidden {
opacity: 0;
pointer-events: none;
}
.avatar {
width: 120px;
height: 120px;
border-radius: 50%;
border: 4px solid rgba(255, 255, 255, 0.8);
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.2);
/* background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); */
background: #000000;
display: flex;
align-items: center;
justify-content: center;
color: white;
font-size: 48px;
font-weight: bold;
overflow: hidden; /* 确保图片不会溢出 */
}
.avatar img {
width: 100%;
height: 100%;
border-radius: 50%;
object-fit: cover;
display: block; /* 确保图片显示 */
}
/* 确保视频默认隐藏 */
#recordedVideo, #recordedVideoBuffer {
position: absolute;
width: 56.25vh;
height: 100vh;
object-fit: cover;
border-radius: 0;
box-shadow: none;
/* transition: opacity 0.5s ease-in-out; */
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
opacity: 1; /* 默认隐藏视频 */
z-index: 1; /* 确保在头像下方 */
}
/* 通话时隐藏头像,显示视频 */
.video-container.calling .avatar-container {
opacity: 0;
pointer-events: none;
}
.video-container.calling #recordedVideo {
opacity: 1;
z-index: 10;
}
</style>
</head>
<body>
<div class="container">
<!-- 隐藏的header -->
<header style="display: none;">
<header>
<h1>WebRTC 音频通话</h1>
<p>实时播放录制视频,支持文本和语音输入</p>
</header>
<div class="main-content">
<!-- 音频状态显示 - 完全隐藏 -->
<div class="audio-status" style="display: none;">
<!-- 音频状态显示 -->
<div class="audio-status">
<div class="status-indicator">
<span id="audioStatus" style="display: none;">未连接</span>
<span id="audioStatus">未连接</span>
</div>
</div>
<!-- 录制视频播放区域 - 全屏显示 -->
<!-- 录制视频播放区域 -->
<div class="recorded-video-section">
<div class="video-container" id="videoContainer">
<!-- 头像容器 -->
<div class="avatar-container" id="avatarContainer">
<div class="avatar" id="avatar">
<!-- 使用相对路径引用图片 -->
<img src="./tx.png" alt="头像" onerror="this.style.display='none'; this.parentElement.innerHTML='壹和零';">
</div>
<!-- <div class="avatar-name">AI助手</div> -->
</div>
<!-- 主视频元素 -->
<h3>录制视频播放</h3>
<video id="recordedVideo" autoplay muted>
<source src="" type="video/mp4">
您的浏览器不支持视频播放
</video>
<!-- 缓冲视频元素 -->
<video id="recordedVideoBuffer" autoplay muted>
<source src="" type="video/mp4">
您的浏览器不支持视频播放
</video>
<!-- 加载指示器 -->
<div class="video-loading" id="videoLoading">
<div class="loading-spinner"></div>
<!-- <div>正在切换视频...</div> -->
</div>
<!-- 等待连接提示 -->
<div class="connection-waiting" id="connectionWaiting" style="display: none;">
<div class="loading-spinner"></div>
<div style="color: white; font-size: 18px; margin-top: 10px;">等待连接通话中...</div>
</div>
</div>
<div class="video-info" style="display: none;">
<div class="video-info">
<span id="currentVideoName">未选择视频</span>
</div>
</div>
<!-- 控制按钮 - 悬浮在视频上方 -->
<!-- 控制按钮 -->
<div class="controls">
<button id="startButton" class="btn btn-primary" title="开始通话">
<!-- 默认通话图标 -->
<svg id="callIcon" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6.62 10.79c1.44 2.83 3.76 5.14 6.59 6.59l2.2-2.2c.27-.27.67-.36 1.02-.24 1.12.37 2.33.57 3.57.57.55 0 1 .45 1 1V20c0 .55-.45 1-1 1-9.39 0-17-7.61-17-17 0-.55.45-1 1-1h3.5c.55 0 1 .45 1 1 0 1.25.2 2.45.57 3.57.11.35.03.74-.25 1.02l-2.2 2.2z" fill="white"/>
</svg>
<!-- 通话中文字显示(初始隐藏) -->
<span id="callingText" style="display: none; color: white; font-size: 14px;">正在通话中</span>
</button>
<button id="stopButton" class="btn btn-danger" disabled title="结束通话" style="display: none;">
<svg viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M19.23 15.26l-2.54-.29c-.61-.07-1.21.14-1.64.57l-1.84 1.84c-2.83-1.44-5.15-3.75-6.59-6.59l1.85-1.85c.43-.43.64-1.03.57-1.64l-.29-2.52c-.12-1.01-.97-1.77-1.99-1.77H5.03c-1.13 0-2.07.94-2 2.07.53 8.54 7.36 15.36 15.89 15.89 1.13.07 2.07-.87 2.07-2v-1.73c.01-1.01-.75-1.86-1.76-1.98z" fill="white"/>
<line x1="18" y1="6" x2="6" y2="18" stroke="white" stroke-width="2"/>
</svg>
</button>
<button id="startButton" class="btn btn-primary">开始音频通话</button>
<button id="stopButton" class="btn btn-danger" disabled>停止通话</button>
<!-- <button id="muteButton" class="btn btn-secondary">静音</button>
<button id="defaultVideoButton" class="btn btn-info">回到默认视频</button>
<button id="testVideoButton" class="btn btn-warning">测试视频文件</button> -->
</div>
<!-- 隐藏的输入区域 -->
<div class="input-section" style="display: none;">
<!-- 输入区域 -->
<div class="input-section">
<div class="text-input-group">
<input type="text" id="textInput" placeholder="输入文本内容..." />
<button id="sendTextButton" class="btn btn-primary">发送文本</button>
</div>
<div class="voice-input-group">
<button id="startVoiceButton" class="btn btn-success">开始语音输入</button>
<button id="stopVoiceButton" class="btn btn-warning" disabled>停止语音输入</button>
<span id="voiceStatus">点击开始语音输入</span>
</div>
</div>
<!-- 隐藏的视频选择 -->
<div class="video-selection" style="display: none;">
<!-- 视频选择 -->
<!-- <div class="video-selection">
<h3>选择要播放的视频</h3>
<div id="videoList" class="video-list">
<!-- 视频列表将在这里动态生成 -->
</div>
</div>
视频列表将在这里动态生成 -->
<!-- </div>
</div> -->
<!-- 隐藏的状态显示 -->
<div class="status-section" style="display: none;">
<div id="connectionStatus" class="status" style="display: none;">未连接</div>
<!-- 状态显示 -->
<div class="status-section">
<div id="connectionStatus" class="status">未连接</div>
<div id="messageLog" class="message-log"></div>
</div>
</div>

File diff suppressed because it is too large.

View File

@ -1,35 +1,5 @@
// 以流式方式请求LLM大模型接口并打印流式返回内容
// 过滤旁白内容的函数
function filterNarration(text) {
if (!text) return text;
// 匹配各种括号内的旁白内容
// 包括:()、【】、[]、{}、〈〉、《》等
const narrationPatterns = [
/([^)]*)/g, // 中文圆括号
/\([^)]*\)/g, // 英文圆括号
/【[^】]*】/g, // 中文方括号
/\[[^\]]*\]/g, // 英文方括号
/\{[^}]*\}/g, // 花括号
/〈[^〉]*〉/g, // 中文尖括号
/《[^》]*》/g, // 中文书名号
/<[^>]*>/g // 英文尖括号
];
let filteredText = text;
// 逐个应用过滤规则
narrationPatterns.forEach(pattern => {
filteredText = filteredText.replace(pattern, '');
});
// 清理多余的空格和换行
filteredText = filteredText.replace(/\s+/g, ' ').trim();
return filteredText;
}
async function requestLLMStream({ apiKey, model, messages, onSegment }) {
const response = await fetch('https://ark.cn-beijing.volces.com/api/v3/bots/chat/completions', {
method: 'POST',
@ -59,7 +29,7 @@ async function requestLLMStream({ apiKey, model, messages, onSegment }) {
let pendingText = ''; // 待处理的文本片段
// 分段分隔符
const segmentDelimiters = /[,。:;!?,.:;!?]|\.{3,}|……|…/;
const segmentDelimiters = /[,。:;!?,.:;!?]/;
while (!done) {
const { value, done: doneReading } = await reader.read();
@ -81,17 +51,9 @@ async function requestLLMStream({ apiKey, model, messages, onSegment }) {
if (jsonStr === '[DONE]') {
console.log('LLM SSE流结束');
// 处理最后的待处理文本无论长度是否大于5个字
// 处理最后的待处理文本
if (pendingText.trim() && onSegment) {
console.log('处理最后的待处理文本:', pendingText.trim());
// 过滤旁白内容
const filteredText = filterNarration(pendingText.trim());
if (filteredText.trim()) {
console.log('过滤旁白后的最后文本:', filteredText);
await onSegment(filteredText, true);
} else {
console.log('最后的文本被完全过滤,跳过');
}
await onSegment(pendingText.trim());
}
continue;
}
@ -102,50 +64,27 @@ async function requestLLMStream({ apiKey, model, messages, onSegment }) {
const deltaContent = obj.choices[0].delta.content;
content += deltaContent;
pendingText += deltaContent;
console.log('【未过滤】LLM内容片段:', pendingText);
console.log('LLM内容片段:', deltaContent);
// 先过滤旁白,再检查分段分隔符
const filteredPendingText = filterNarration(pendingText);
// 检查过滤后的文本是否包含分段分隔符
if (segmentDelimiters.test(filteredPendingText)) {
// 按分隔符分割已过滤的文本
const segments = filteredPendingText.split(segmentDelimiters);
// 重新组合处理:只处理足够长的完整段落
let accumulatedText = '';
let hasProcessed = false;
// 检查是否包含分段分隔符
if (segmentDelimiters.test(pendingText)) {
// 按分隔符分割文本
const segments = pendingText.split(segmentDelimiters);
// 处理完整的段落(除了最后一个,因为可能不完整)
for (let i = 0; i < segments.length - 1; i++) {
const segment = segments[i].trim();
if (segment) {
accumulatedText += segment;
// 找到分隔符
const delimiterMatch = filteredPendingText.match(segmentDelimiters);
if (delimiterMatch) {
accumulatedText += delimiterMatch[0];
}
// 如果累积文本长度大于5个字处理它
if (accumulatedText.length > 8 && onSegment) {
console.log('【已过滤】检测到完整段落:', accumulatedText);
// 文本已经过滤过旁白,直接使用
if (accumulatedText.trim()) {
console.log('处理过滤后的文本:', accumulatedText);
await onSegment(accumulatedText, false);
}
hasProcessed = true;
accumulatedText = ''; // 重置
}
if (segment && onSegment) {
// 找到对应的分隔符
const delimiterMatch = pendingText.match(segmentDelimiters);
const segmentWithDelimiter = segment + (delimiterMatch ? delimiterMatch[0] : '');
console.log('检测到完整段落:', segmentWithDelimiter);
await onSegment(segmentWithDelimiter);
}
}
// 更新pendingText - 使用原始文本但需要相应调整
if (hasProcessed) {
// 计算已处理的原始文本长度更新pendingText
const processedLength = pendingText.length - (segments[segments.length - 1] || '').length;
pendingText = pendingText.substring(processedLength);
}
// 保留最后一个不完整的段落
pendingText = segments[segments.length - 1] || '';
}
}
} catch (e) {

View File

@ -56,12 +56,12 @@ class MessageHistory {
const messages = [];
// 添加系统消息
// if (includeSystem) {
// messages.push({
// role: 'system',
// content: 'You are a helpful assistant.'
// });
// }
if (includeSystem) {
messages.push({
role: 'system',
content: 'You are a helpful assistant.'
});
}
// 获取最近的对话历史
const recentMessages = this.messages.slice(-recentCount * 2); // 用户+助手成对出现

View File

@ -1,11 +1,11 @@
// 以流式或非流式方式请求 minimaxi 大模型接口,并打印/返回内容
// import { text } from "express";
window.isPlaying = false;
// 在文件顶部添加音频播放相关的变量和函数
let audioContext = null;
let audioQueue = []; // 音频队列
let isPlaying = false;
// let isPlaying = false;
let isProcessingQueue = false; // 队列处理状态
let nextStartTime = 0; // 添加这行来声明 nextStartTime 变量
@ -52,48 +52,45 @@ async function addAudioToQueue(audioHex) {
console.error('音频解码失败:', error);
}
}
let isFirstChunk = true;
// 队列处理器 - 独立运行,按顺序播放音频
async function processAudioQueue() {
if (isProcessingQueue) return;
isProcessingQueue = true;
while (audioQueue.length > 0 && !isPlaying) {
console.log('开始处理音频队列');
if (!isPlaying && audioQueue.length > 0) {
let isFirstChunk = true;
while (audioQueue.length > 0 || window.isPlaying) {
// 如果当前没有音频在播放,且队列中有音频
if (!window.isPlaying && audioQueue.length > 0) {
const audioItem = audioQueue.shift();
const sayName = '8-4-sh';
const targetVideo = window.webrtcApp.interactionVideo;
if (sayName != window.webrtcApp.currentVideoTag && window.webrtcApp && window.webrtcApp.switchVideoStream) {
const sayName = 'say-3s-m-sw'
const targetVideo = 's-1.mp4'
// 如果是第一个音频片段,触发视频切换
if (sayName != window.webrtcApp.currentVideoTag && window.webrtcApp && window.webrtcApp.handleTextInput) {
try {
// 检查WebSocket连接状态仅影响服务端广播不阻断本地播放
if (window.webrtcApp.checkConnectionStatus && !window.webrtcApp.checkConnectionStatus()) {
console.log('WebSocket连接异常继续本地播放并切换视频');
} else {
console.log('--------------触发视频切换:', sayName);
window.webrtcApp.switchVideoStream(targetVideo, 'audio', '8-4-sh');
}
await window.webrtcApp.switchVideoWithReplaceTrack(targetVideo, 'audio', 'say-3s-m-sw');
isFirstChunk = false;
window.webrtcApp.currentVideoTag = sayName;
} catch (error) {
console.error('视频切换失败:', error);
}
}
await playAudioData(audioItem.audioData);
} else {
// 等待一小段时间再检查
await new Promise(resolve => setTimeout(resolve, 50));
}
}
isProcessingQueue = false;
const text = 'default';
console.log("音频结束------------------------", window.webrtcApp.currentVideoTag, isPlaying);
if (window.webrtcApp.currentVideoTag != text && !isPlaying) {
isFirstChunk = true;
window.webrtcApp.currentVideoTag = text;
window.webrtcApp.switchVideoStream(window.webrtcApp.defaultVideo, 'audio', text);
const text = 'default'
await window.webrtcApp.socket.emit('voice-input', { text });
if (window.webrtcApp.currentVideoTag != text) {
window.webrtcApp.currentVideoTag = text
await window.webrtcApp.switchVideoWithReplaceTrack(window.webrtcApp.defaultVideo, 'audio', text);
}
console.log('音频队列处理完成');
}
@ -107,29 +104,29 @@ function playAudioData(audioData) {
source.buffer = audioData;
source.connect(ctx.destination);
isPlaying = true;
window.isPlaying = true;
source.onended = () => {
console.log('音频片段播放完成');
isPlaying = false;
window.isPlaying = false;
resolve();
};
// 超时保护
// setTimeout(() => {
// if (isPlaying) {
// console.log('音频播放超时,强制结束');
// isPlaying = false;
// resolve();
// }
// }, (audioData.duration + 0.5) * 1000);
setTimeout(() => {
if (window.isPlaying) {
console.log('音频播放超时,强制结束');
window.isPlaying = false;
resolve();
}
}, (audioData.duration + 0.5) * 1000);
source.start(0);
console.log(`开始播放音频片段,时长: ${audioData.duration}`);
} catch (error) {
console.error('播放音频失败:', error);
isPlaying = false;
window.isPlaying = false;
resolve();
}
});
@ -155,10 +152,10 @@ function getQueueStatus() {
// 移除waitForCurrentAudioToFinish函数不再需要
async function requestMinimaxi({ apiKey, groupId, body, stream = true , textPlay = false}) {
async function requestMinimaxi({ apiKey, groupId, body, stream = true }) {
const url = `https://api.minimaxi.com/v1/t2a_v2`;
const reqBody = { ...body, stream };
isPlaying = textPlay
// 添加这两行变量定义
let isFirstChunk = true;
// const currentText = body.text;
@ -225,8 +222,8 @@ async function requestMinimaxi({ apiKey, groupId, body, stream = true , textPlay
// 流式解析每个chunk实时播放音频
if (obj.data && obj.data.audio && obj.data.status === 1) {
console.log('收到音频数据片段!', obj.data.audio.length);
// audioHex += obj.data.audio;
audioHex = obj.data.audio;
audioHex += obj.data.audio;
// const sayName = 'say-5s-m-sw'
// // 如果是第一个音频片段,触发视频切换
// if (isFirstChunk && sayName != window.webrtcApp.currentVideoName && window.webrtcApp && window.webrtcApp.handleTextInput) {
@ -247,7 +244,7 @@ async function requestMinimaxi({ apiKey, groupId, body, stream = true , textPlay
// const text = 'default'
// await window.webrtcApp.socket.emit('text-input', { text });
// await window.webrtcApp.handleTextInput(text);
lastFullResult = null;
lastFullResult = obj;
console.log('收到最终状态');
}
} catch (e) {
@ -264,7 +261,7 @@ async function requestMinimaxi({ apiKey, groupId, body, stream = true , textPlay
const obj = JSON.parse(line);
if (obj.data && obj.data.audio) {
console.log('收到无data:音频数据!', obj.data.audio.length);
audioHex = obj.data.audio;
audioHex += obj.data.audio;
// 立即播放这个音频片段
await playAudioChunk(obj.data.audio);
@ -424,4 +421,4 @@ function generateUUID() {
});
}
export { requestMinimaxi, requestVolcanTTS, addAudioToQueue };
export { requestMinimaxi, requestVolcanTTS };

src/new_app.js (new file, 346 lines)
View File

@ -0,0 +1,346 @@
let ASRTEXT = ''
class HttpASRRecognizer {
constructor() {
this.mediaRecorder = null;
this.audioContext = null;
this.isRecording = false;
this.audioChunks = [];
// VAD相关属性
this.isSpeaking = false;
this.silenceThreshold = 0.01;
this.silenceTimeout = 1000;
this.minSpeechDuration = 300;
this.silenceTimer = null;
this.speechStartTime = null;
this.audioBuffer = [];
// API配置
this.apiConfig = {
url: 'https://openspeech.bytedance.com/api/v3/auc/bigmodel/recognize/flash',
headers: {
'X-Api-App-Key': '1988591469',
'X-Api-Access-Key': 'mdEyhgZ59on1-NK3GXWAp3L4iLldSG0r',
'X-Api-Resource-Id': 'volc.bigasr.auc_turbo',
'X-Api-Request-Id': this.generateUUID(),
'X-Api-Sequence': '-1',
'Content-Type': 'application/json'
}
};
this.recordBtn = document.getElementById('startVoiceButton');
this.statusDiv = document.getElementById('status');
this.resultsDiv = document.getElementById('results');
this.initEventListeners();
}
initEventListeners() {
this.recordBtn.addEventListener('click', () => {
if (this.isRecording) {
this.stopRecording();
} else {
this.startRecording();
}
});
}
// 生成UUID
generateUUID() {
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
const r = Math.random() * 16 | 0;
const v = c == 'x' ? r : (r & 0x3 | 0x8);
return v.toString(16);
});
}
// 计算音频能量(音量)
calculateAudioLevel(audioData) {
let sum = 0;
for (let i = 0; i < audioData.length; i++) {
sum += audioData[i] * audioData[i];
}
return Math.sqrt(sum / audioData.length);
}
// 语音活动检测
detectVoiceActivity(audioData) {
const audioLevel = this.calculateAudioLevel(audioData);
const currentTime = Date.now();
if (audioLevel > this.silenceThreshold) {
if (!this.isSpeaking) {
this.isSpeaking = true;
this.speechStartTime = currentTime;
this.audioBuffer = [];
this.updateStatus('检测到语音,开始录音...', 'speaking');
console.log('开始说话');
}
if (this.silenceTimer) {
clearTimeout(this.silenceTimer);
this.silenceTimer = null;
}
return true;
} else {
if (this.isSpeaking && !this.silenceTimer) {
this.silenceTimer = setTimeout(() => {
this.onSpeechEnd();
}, this.silenceTimeout);
}
return this.isSpeaking;
}
}
// 语音结束处理
async onSpeechEnd() {
if (this.isSpeaking) {
const speechDuration = Date.now() - this.speechStartTime;
if (speechDuration >= this.minSpeechDuration) {
console.log(`语音结束,时长: ${speechDuration}ms`);
await this.processAudioBuffer();
// this.updateStatus('语音识别中...', 'processing');
console.log('语音识别中')
} else {
console.log('说话时长太短,忽略');
// this.updateStatus('等待语音输入...', 'ready');
console.log('等待语音输入...')
}
this.isSpeaking = false;
this.speechStartTime = null;
this.audioBuffer = [];
}
if (this.silenceTimer) {
clearTimeout(this.silenceTimer);
this.silenceTimer = null;
}
}
// 处理音频缓冲区并发送到API
async processAudioBuffer() {
if (this.audioBuffer.length === 0) {
return;
}
try {
// 合并所有音频数据
const totalLength = this.audioBuffer.reduce((sum, buffer) => sum + buffer.length, 0);
const combinedBuffer = new Float32Array(totalLength);
let offset = 0;
for (const buffer of this.audioBuffer) {
combinedBuffer.set(buffer, offset);
offset += buffer.length;
}
// 转换为WAV格式并编码为base64
const wavBuffer = this.encodeWAV(combinedBuffer, 16000);
const base64Audio = this.arrayBufferToBase64(wavBuffer);
// 调用ASR API
await this.callASRAPI(base64Audio);
} catch (error) {
console.error('处理音频数据失败:', error);
this.updateStatus('识别失败', 'error');
}
}
// 调用ASR API
async callASRAPI(base64AudioData) {
try {
const requestBody = {
user: {
uid: "1988591469"
},
audio: {
data: base64AudioData
},
request: {
model_name: "bigmodel"
}
};
const response = await fetch(this.apiConfig.url, {
method: 'POST',
headers: this.apiConfig.headers,
body: JSON.stringify(requestBody)
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const result = await response.json();
this.handleASRResponse(result);
} catch (error) {
console.error('ASR API调用失败:', error);
this.updateStatus('API调用失败', 'error');
}
}
// 处理ASR响应
handleASRResponse(response) {
console.log('ASR响应:', response);
if (response && response.data && response.data.result) {
ASRTEXT = response.data.result;
// this.displayResult(text);
// this.updateStatus('识别完成', 'completed');
console.log('识别完成')
} else {
console.log('未识别到文字');
// this.updateStatus('未识别到文字', 'ready');
}
}
// 显示识别结果
displayResult(text) {
const resultElement = document.createElement('div');
resultElement.className = 'result-item';
resultElement.innerHTML = `
<span class="timestamp">${new Date().toLocaleTimeString()}</span>
<span class="text">${text}</span>
`;
this.resultsDiv.appendChild(resultElement);
this.resultsDiv.scrollTop = this.resultsDiv.scrollHeight;
}
// 更新状态显示
updateStatus(message, status) {
this.statusDiv.textContent = message;
this.statusDiv.className = `status ${status}`;
}
// 编码WAV格式
encodeWAV(samples, sampleRate) {
const length = samples.length;
const buffer = new ArrayBuffer(44 + length * 2);
const view = new DataView(buffer);
// WAV文件头
const writeString = (offset, string) => {
for (let i = 0; i < string.length; i++) {
view.setUint8(offset + i, string.charCodeAt(i));
}
};
writeString(0, 'RIFF');
view.setUint32(4, 36 + length * 2, true);
writeString(8, 'WAVE');
writeString(12, 'fmt ');
view.setUint32(16, 16, true);
view.setUint16(20, 1, true);
view.setUint16(22, 1, true);
view.setUint32(24, sampleRate, true);
view.setUint32(28, sampleRate * 2, true);
view.setUint16(32, 2, true);
view.setUint16(34, 16, true);
writeString(36, 'data');
view.setUint32(40, length * 2, true);
// 写入音频数据
let offset = 44;
for (let i = 0; i < length; i++) {
const sample = Math.max(-1, Math.min(1, samples[i]));
view.setInt16(offset, sample * 0x7FFF, true);
offset += 2;
}
return buffer;
}
// ArrayBuffer转Base64
arrayBufferToBase64(buffer) {
let binary = '';
const bytes = new Uint8Array(buffer);
for (let i = 0; i < bytes.byteLength; i++) {
binary += String.fromCharCode(bytes[i]);
}
return btoa(binary);
}
async startRecording() {
try {
const stream = await navigator.mediaDevices.getUserMedia({
audio: {
sampleRate: 16000,
channelCount: 1,
echoCancellation: true,
noiseSuppression: true
}
});
this.audioContext = new (window.AudioContext || window.webkitAudioContext)({
sampleRate: 16000
});
const source = this.audioContext.createMediaStreamSource(stream);
const processor = this.audioContext.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (event) => {
const inputBuffer = event.inputBuffer;
const inputData = inputBuffer.getChannelData(0);
// 语音活动检测
if (this.detectVoiceActivity(inputData)) {
// 如果检测到语音活动,缓存音频数据
this.audioBuffer.push(new Float32Array(inputData));
}
};
source.connect(processor);
processor.connect(this.audioContext.destination);
this.isRecording = true;
this.recordBtn.textContent = '停止录音';
this.recordBtn.className = 'btn recording';
// this.updateStatus('等待语音输入...', 'ready');
} catch (error) {
console.error('启动录音失败:', error);
// this.updateStatus('录音启动失败', 'error');
}
}
stopRecording() {
if (this.audioContext) {
this.audioContext.close();
this.audioContext = null;
}
if (this.silenceTimer) {
clearTimeout(this.silenceTimer);
this.silenceTimer = null;
}
// If speech was in progress, process the trailing audio
if (this.isSpeaking) {
this.onSpeechEnd();
}
this.isRecording = false;
this.isSpeaking = false;
this.audioBuffer = [];
this.recordBtn.textContent = '开始录音';
this.recordBtn.className = 'btn';
console.log('录音已停止');
// this.updateStatus('录音已停止', 'stopped');
}
}
// Initialize the application
document.addEventListener('DOMContentLoaded', () => {
const asrRecognizer = new HttpASRRecognizer();
console.log('HTTP ASR识别器已初始化');
});
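The recorder above calls this.detectVoiceActivity() and this.onSpeechEnd(), which are defined earlier in this file and are not shown in this diff. For orientation only, a simple energy-threshold detector along those lines could look like the sketch below; the 0.01 threshold is a placeholder, not the project's actual value.

```javascript
// Minimal energy-threshold VAD sketch (placeholder threshold; the real
// implementation in this file may use a different strategy entirely).
function detectVoiceActivity(samples, threshold = 0.01) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  const rms = Math.sqrt(sum / samples.length); // root-mean-square frame energy
  return rms > threshold;
}

// A silent 4096-sample frame should not register as speech.
console.log(detectVoiceActivity(new Float32Array(4096))); // false
```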

Binary file not shown.


View File

@ -1,191 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Yantootech</title>
<script src="https://cdn.tailwindcss.com"></script>
<script src="https://unpkg.com/feather-icons"></script>
<script src="https://cdn.jsdelivr.net/npm/vanta@latest/dist/vanta.net.min.js"></script>
<style>
@keyframes float {
0%, 100% { transform: translateY(0); }
50% { transform: translateY(-10px); }
}
.card-hover:hover {
transform: translateY(-5px);
/* Purple glow from the dash dark theme */
box-shadow: 0 15px 30px rgba(79, 70, 229, 0.25);
}
.selected-card {
animation: pulse 2s infinite;
/* Selected state uses a purple outline glow */
box-shadow: 0 0 0 4px rgba(79, 70, 229, 0.4);
}
@keyframes pulse {
0% { box-shadow: 0 0 0 0 rgba(79, 70, 229, 0.4); }
70% { box-shadow: 0 0 0 15px rgba(79, 70, 229, 0); }
100% { box-shadow: 0 0 0 0 rgba(79, 70, 229, 0); }
}
.button-glow {
transition: all 0.3s ease;
}
.button-glow:hover {
/* Button hover glow switched to purple */
box-shadow: 0 0 20px rgba(79, 70, 229, 0.6);
}
</style>
</head>
<body class="min-h-screen bg-gray-900 text-white font-sans">
<div id="vanta-bg" class="fixed inset-0 z-0"></div>
<div class="relative z-10 container mx-auto px-4 py-12">
<div class="text-center mb-16">
<!-- Title uses the green-to-purple gradient, consistent with dash.html -->
<h1 class="text-4xl md:text-5xl font-bold mb-4 bg-clip-text text-transparent bg-gradient-to-r from-green-400 to-purple-500">选择你的身份</h1>
<p class="text-lg text-gray-300 max-w-2xl mx-auto">在数字世界找到属于你的温暖连接</p>
</div>
<!-- Role selection cards -->
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-4 gap-8 max-w-4xl mx-auto">
<!-- Uncle Card -->
<div class="card bg-gray-800 bg-opacity-60 backdrop-blur-md rounded-xl p-6 text-center cursor-pointer transition-all duration-300 border-2 border-gray-700 card-hover" onclick="selectRole('uncle')">
<div class="rounded-full w-24 h-24 mx-auto mb-4 overflow-hidden border-4 border-white/80 shadow-lg">
<img src="uncle.jpg" alt="叔叔" class="w-full h-full object-cover">
</div>
<h3 class="text-2xl font-semibold text-white mb-2">叔叔</h3>
<div class="selected-indicator hidden mt-4 text-green-500">
<i data-feather="check-circle" class="w-8 h-8 mx-auto"></i>
</div>
</div>
<!-- Aunt Card -->
<div class="card bg-gray-800 bg-opacity-60 backdrop-blur-md rounded-xl p-6 text-center cursor-pointer transition-all duration-300 border-2 border-gray-700 card-hover" onclick="selectRole('aunt')">
<div class="rounded-full w-24 h-24 mx-auto mb-4 overflow-hidden border-4 border-white/80 shadow-lg">
<img src="aunt.jpg" alt="阿姨" class="w-full h-full object-cover">
</div>
<h3 class="text-2xl font-semibold text-white mb-2">阿姨</h3>
<div class="selected-indicator hidden mt-4 text-green-500">
<i data-feather="check-circle" class="w-8 h-8 mx-auto"></i>
</div>
</div>
<!-- Grandpa Card -->
<div class="card bg-gray-800 bg-opacity-60 backdrop-blur-md rounded-xl p-6 text-center cursor-pointer transition-all duration-300 border-2 border-gray-700 card-hover" onclick="selectRole('grandpa')">
<div class="rounded-full w-24 h-24 mx-auto mb-4 overflow-hidden border-4 border-white/80 shadow-lg">
<img src="Grandpa.png" alt="爷爷" class="w-full h-full object-cover">
</div>
<h3 class="text-2xl font-semibold text-white mb-2">爷爷</h3>
<div class="selected-indicator hidden mt-4 text-green-500">
<i data-feather="check-circle" class="w-8 h-8 mx-auto"></i>
</div>
</div>
<!-- Grandma Card -->
<div class="card bg-gray-800 bg-opacity-60 backdrop-blur-md rounded-xl p-6 text-center cursor-pointer transition-all duration-300 border-2 border-gray-700 card-hover" onclick="selectRole('grandma')">
<div class="rounded-full w-24 h-24 mx-auto mb-4 overflow-hidden border-4 border-white/80 shadow-lg">
<img src="Grandma.png" alt="奶奶" class="w-full h-full object-cover">
</div>
<h3 class="text-2xl font-semibold text-white mb-2">奶奶</h3>
<div class="selected-indicator hidden mt-4 text-green-500">
<i data-feather="check-circle" class="w-8 h-8 mx-auto"></i>
</div>
</div>
</div>
<div class="text-center mt-16">
<!-- The confirm button starts disabled in dark gray and switches to the green-purple gradient once enabled -->
<button id="confirmBtn" class="bg-gray-700 bg-opacity-60 text-gray-400 px-8 py-4 rounded-full text-xl font-medium transition-all duration-300 cursor-not-allowed">
确认选择
</button>
</div>
</div>
<script>
if (window.THREE && window.VANTA && document.querySelector('#vanta-bg')) {
VANTA.NET({
el: "#vanta-bg",
THREE: window.THREE,
mouseControls: true,
touchControls: true,
gyroControls: false,
minHeight: 200.00,
minWidth: 200.00,
scale: 1.00,
scaleMobile: 1.00,
/* Dark dash-style background with cool-toned lines */
color: 0x3f83f8, // blue lines (close to the dash style)
backgroundColor: 0x111827, // dark gray background (dash's bg-gray-900)
points: 8.00,
maxDistance: 20.00,
spacing: 15.00
});
} else {
console.error('VANTA 初始化失败:THREE 或 VANTA 未加载');
}
// Initialize feather icons
feather.replace();
let selectedRole = null;
const roleNames = {
'uncle': '叔叔',
'aunt': '阿姨',
'grandpa': '爷爷',
'grandma': '奶奶'
};
function selectRole(role) {
// Remove selection from all cards
document.querySelectorAll('.card').forEach(card => {
card.classList.remove('selected-card', 'border-green-500');
card.querySelector('.selected-indicator').classList.add('hidden');
card.style.transform = '';
});
// Add selection to the clicked card (green border for the selected state)
const selectedCard = event.currentTarget; // relies on the implicit global event object
selectedCard.classList.add('selected-card', 'border-green-500');
selectedCard.querySelector('.selected-indicator').classList.remove('hidden');
selectedCard.style.transform = 'scale(1.05)';
selectedRole = role;
// Enable the confirm button (switch to the green-purple gradient)
const confirmBtn = document.getElementById('confirmBtn');
confirmBtn.classList.remove('bg-gray-700', 'bg-opacity-60', 'text-gray-400', 'cursor-not-allowed');
confirmBtn.classList.add('bg-gradient-to-r', 'from-green-500', 'to-purple-600', 'text-white', 'button-glow', 'cursor-pointer');
confirmBtn.textContent = `小乐与 ${roleNames[role]} 开始对话`;
// Add animation to the button
confirmBtn.style.animation = 'none';
setTimeout(() => {
confirmBtn.style.animation = 'pulse 1.5s infinite';
}, 10);
}
// Add hover effect to cards (purple border on hover)
document.querySelectorAll('.card').forEach(card => {
card.addEventListener('mouseenter', () => {
if (!card.classList.contains('selected-card')) {
card.classList.add('border-purple-500');
}
});
card.addEventListener('mouseleave', () => {
if (!card.classList.contains('selected-card')) {
card.classList.remove('border-purple-500');
}
});
});
// Confirm button action
document.getElementById('confirmBtn').addEventListener('click', function() {
if (selectedRole) {
const roleName = roleNames[selectedRole];
window.location.href = `index.html?roleName=${encodeURIComponent(roleName)}`;
}
});
</script>
</body>
</html>
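The confirm handler above forwards the chosen role to index.html through the roleName query parameter. index.html is not part of this diff, but the receiving side would typically read the parameter along these lines (sketch only; the fallback value is an assumption):

```javascript
// Hypothetical receiving side in index.html (not shown in this diff).
const params = new URLSearchParams(window.location.search);
const roleName = params.get('roleName') || '叔叔'; // e.g. "叔叔", "阿姨", "爷爷", "奶奶"
console.log('Selected role:', roleName);
```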

View File

@ -101,14 +101,6 @@ header p {
.recorded-video-section {
margin-bottom: 30px;
text-align: center;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
/* Keep the video area at a fixed height and centered */
min-height: 100vh;
max-height: 100vh;
width: 100%;
}
.recorded-video-section h3 {
@ -117,22 +109,14 @@ header p {
}
#recordedVideo {
max-width: 100%;
max-height: 100%;
width: 100%;
height: 100%;
border-radius: 0;
box-shadow: none;
object-fit: cover; /* cover the whole container */
background: transparent; /* transparent background */
max-width: 400px; /* cap the maximum width */
aspect-ratio: 9/16; /* fixed 9:16 aspect ratio */
border-radius: 10px;
box-shadow: 0 5px 15px rgba(0,0,0,0.2);
object-fit: cover; /* make the video fill the container */
background: #000; /* video background color */
transition: opacity 0.15s; /* fade via an opacity transition */
margin: 0 auto; /* center horizontally */
display: block; /* render as a block element */
/* keep the video centered at all times */
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
}
/* Styles while the video is loading */
@ -439,50 +423,6 @@ header p {
.video-list {
grid-template-columns: 1fr;
}
/* Mobile video-container optimizations */
.video-container {
height: 100vh;
width: 100vw;
}
#recordedVideo {
width: 100%;
height: 100%;
object-fit: cover;
}
}
/* Desktop video-container optimizations */
@media (min-width: 769px) {
.video-container {
height: 100vh;
width: 100vw;
}
#recordedVideo {
width: 100%;
height: 100%;
object-fit: cover;
}
}
/* Landscape-orientation optimizations */
@media (orientation: landscape) and (max-height: 500px) {
.video-container {
height: 100vh;
}
.controls {
bottom: 20px;
}
}
/* Portrait-orientation optimizations */
@media (orientation: portrait) {
.video-container {
height: 100vh;
}
}
/* Animation effects */
@ -508,22 +448,42 @@ header p {
}
#recordedVideo {
transition: opacity 0.1s ease-in-out; /* shorten the transition */
transition: opacity 0.2s ease-in-out;
background-color: #1a1a1a; /* dark gray background to avoid pure black */
}
#recordedVideo.loading {
opacity: 0.9; /* raise the loading opacity to reduce the black-frame effect */
opacity: 0.8; /* slightly lower the opacity while loading, but never hide it completely */
}
#recordedVideo.playing {
opacity: 1;
}
/* Refine the loading indicator */
/* Add a loading indicator */
.video-container {
position: relative;
}
.video-container::before {
content: '';
position: absolute;
top: 50%;
left: 50%;
width: 40px;
height: 40px;
margin: -20px 0 0 -20px;
border: 3px solid #333;
border-top: 3px solid #fff;
border-radius: 50%;
animation: spin 1s linear infinite;
opacity: 0;
z-index: 10;
transition: opacity 0.3s;
}
.video-container.loading::before {
opacity: 0.8; /* lower the loading indicator opacity */
border-top-color: #667eea; /* use the theme color */
opacity: 1;
}
@keyframes spin {

Binary file not shown.


Binary file not shown.


44
src/video_audio_sync.js Normal file
View File

@ -0,0 +1,44 @@
import { requestMinimaxi } from './minimaxi_stream.js';
import { getMinimaxiConfig } from './config.js';
export async function playVideoWithAudio(videoPath, text) {
// 1. Set up the video element
const video = document.createElement('video');
video.src = videoPath;
document.body.appendChild(video);
// 2. Start the speech-synthesis stream
const minimaxiConfig = getMinimaxiConfig();
const audioStream = await requestMinimaxi({
apiKey: minimaxiConfig.apiKey,
groupId: minimaxiConfig.groupId,
body: {
model: 'speech-02-hd',
text,
output_format: 'hex', // hex output is required for streaming
voice_setting: {
voice_id: 'yantu-qinggang',
speed: 1
}
},
stream: true
});
// 3. Convert the hex-encoded audio into a playable buffer
const audioCtx = new AudioContext();
const audioBuffer = await audioCtx.decodeAudioData(
hexToArrayBuffer(audioStream.data.audio)
);
// 4. Play video and audio together
const source = audioCtx.createBufferSource();
source.buffer = audioBuffer;
source.connect(audioCtx.destination);
video.play();
source.start(0);
}
function hexToArrayBuffer(hex) {
// Convert a hex string (two characters per byte) into an ArrayBuffer
const bytes = new Uint8Array(hex.length / 2);
for (let i = 0; i < bytes.length; i++) bytes[i] = parseInt(hex.substr(i * 2, 2), 16);
return bytes.buffer;
}
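One possible way to wire playVideoWithAudio() into a page is sketched below; the video path points at one of the clips added in this commit, while the button id and the spoken text are placeholders.

```javascript
// Hypothetical usage of the new helper; #playBtn and the text are placeholders.
import { playVideoWithAudio } from './video_audio_sync.js';

document.querySelector('#playBtn')?.addEventListener('click', () => {
  playVideoWithAudio('videos/0.mp4', '你好,很高兴见到你')
    .catch(err => console.error('Playback failed:', err));
});
```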

BIN
videos/0.mp4 Normal file

Binary file not shown.

BIN
videos/1-m.mp4 Normal file

Binary file not shown.

BIN
videos/2.mp4 Normal file

Binary file not shown.

BIN
videos/4-m.mp4 Normal file

Binary file not shown.

BIN
videos/5.mp4 Normal file

Binary file not shown.

BIN
videos/6.mp4 Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

BIN
videos/d-3s.mp4 Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

BIN
videos/s-1.mp4 Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.