[WebRTC_DEMO] After a successful deployment, clicking "start call" on the page fails #103

@xiaojiwang

Description

Third-party GPU server environment (container-based):
Ubuntu 22.04
CUDA 13.0
Python 3.12

The deployment via oneclick.sh succeeded, and ports 8088 and 7880 were opened in the firewall. The browser can reach the demo page, but after clicking "start call" it waits for a while and then reports an error:

[screenshot: error shown after clicking "start call"]

The startup log follows:

```
MiniCPM-o WebRTC Demo One-Click Start (without Docker)

[INFO] Running preflight checks...
[OK] livekit-server livekit-server version 1.9.11
[OK] Python: Python 3.12.11 (/usr/local/miniconda3/envs/py312/bin/python)
[OK] Node.js: v20.5.0
[OK] pnpm: 10.30.2
[OK] ffmpeg: ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
[OK] llama-server: /home/soft/llama.cpp-omni/build/bin/llama-server
[OK] Model directory: /home/soft/models/openbmb/MiniCPM-o-4_5-gguf
[OK] Preflight checks passed

[INFO] ========== [1/4] Starting LiveKit Server ==========
[INFO] Detected local IP: 10.60.41.116
[OK] Updated livekit.yaml node_ip and domain to 10.60.41.116
[OK] LiveKit is running (port 7880, waited 1s)

[INFO] ========== [2/4] Starting Backend (FastAPI) ==========
[OK] Backend Python dependencies are ready
[OK] Backend is running (port 8021, waited 5s)
[OK] Backend health check passed: {"status":"healthy","service":"minicpmo-backend"}

[INFO] ========== [3/4] Starting C++ Inference Service (model loading takes 2-3 min) ==========
[INFO] Mode: duplex | Vision backend: default | Port: 9060
[OK] C++ Inference (health check port) is running (port 9061, waited 1s)
[INFO] Waiting for C++ Inference Service health check to pass (max 300s)...
[INFO] Still waiting for C++ Inference Service to start... (30s/300s)
[OK] C++ Inference Service health check passed (waited 55s)
[OK] C++ inference service registered with Backend (status: available)

[INFO] ========== [4/4] Starting Frontend (Vue + Vite) ==========
[OK] Frontend dependencies are ready (node_modules exists)
[OK] HTTPS certificate is ready
[INFO] Frontend will reflect CPP_MODE=duplex (unavailable mode tab shown as disabled)
[INFO] Frontend mode: production build (prod)
[OK] Frontend already built (dist/ exists, set FORCE_BUILD=1 to force rebuild)
[OK] Frontend is running (port 8088, waited 1s)

==============================================
[OK] All services started successfully!

[INFO] Service status:

● livekit running PID=211 port=7880
● backend running PID=239 port=8021
● cpp_server running PID=321 port=9060
● frontend running PID=455 port=8088

[INFO] Log files:
LiveKit: /home/soft/.logs/livekit.log
Backend: /home/soft/.logs/backend.log
C++ Inference: /home/soft/.logs/cpp_server.log
Frontend: /home/soft/.logs/frontend.log

[INFO] Access URLs: (frontend mode: prod)
Frontend: https://10.60.41.116:8088 (accept self-signed certificate on first visit)
Backend API: http://10.60.41.116:8021
LiveKit: ws://10.60.41.116:7880
Inference: http://10.60.41.116:9060
Inference Health: http://10.60.41.116:9061/health
```
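For what it's worth, since only 8088 and 7880 were opened in the firewall, a quick TCP probe of the ports listed in the log can show which service the browser fails to reach (a sketch, assuming the LAN IP from the log; note this only checks TCP reachability, and WebRTC media additionally uses LiveKit's UDP ports, which a plain TCP probe cannot verify):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports taken from the startup log above; HOST is the server's detected LAN IP
HOST = "10.60.41.116"
for name, port in [("frontend", 8088), ("livekit", 7880),
                   ("backend", 8021), ("inference", 9060)]:
    state = "open" if port_open(HOST, port) else "CLOSED"
    print(f"{name:9s} {port}: {state}")
```

Run this from the same machine as the browser; any port reported CLOSED that the frontend needs to reach would explain the "start call" timeout.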
