
ProWaveDAQ Real-time Data Visualization System

Multi-language Versions
Chinese Version (README.md) | English Version (README_EN.md)

System Overview

The ProWaveDAQ Real-time Data Visualization System is a Python-based vibration data acquisition and visualization platform. It collects vibration data from PW-RVT-2-4 (Modbus RTU) devices, displays continuous curves of all data points in real time in a web browser, and automatically handles CSV storage and SQL database uploads.

This system provides a complete web interface that allows users to operate through a browser without accessing the terminal:

  • Modify configuration files (ProWaveDAQ.ini, csv.ini, sql.ini)
  • Input data labels
  • Configure SQL server upload (optional)
  • Press "Start Reading" to initiate collection and real-time display
  • System automatically splits and stores data files (based on seconds in csv.ini)
  • Press "Stop" to safely terminate and automatically upload remaining data

Features

Core Features

  • Real-time Data Acquisition: Read vibration data from ProWaveDAQ devices via Modbus RTU protocol
  • Real-time Data Visualization: Display multi-channel continuous curve graphs in browser using Chart.js
  • Automatic CSV Storage: Automatically split and store data files based on configuration
  • SQL Database Upload: Optional SQL server upload functionality, supporting MySQL/MariaDB
  • Web Interface Control: Complete browser-based operation interface, no terminal required
  • Configuration File Management: Edit configuration files through web interface using fixed input fields (prevents accidental parameter deletion)

Technical Features

  • Downsampling Queue Architecture: Uses web_data_queue to store downsampled data (50:1 downsampling), significantly reducing frontend data transmission and plotting load (see the sketch after this list)
  • Incremental Update Mechanism: Frontend uses incremental updates, processing only new data, maintaining fixed window size (500 points)
  • Manufacturer Manual Compliance: ProWaveDAQ module strictly follows manufacturer manual Page 5 specifications, using FC04 to read complete packets
  • Multi-threaded Architecture: 5 independent threads (Flask, DAQ Reading, Collection, CSV Writer, SQL Writer), ensuring components don't interfere with each other
  • Thread-safe Communication: Uses queue.Queue for inter-thread communication, ensuring data consistency
  • Real-time Data Visualization: Uses Chart.js for real-time charts (updated every 200 ms), with animations disabled for performance
  • Automatic File Splitting: Automatically splits CSV files based on configuration, ensuring correct sample boundaries
  • SQL Batch Upload: Uses temporary file mechanism to batch upload data to SQL server, improving performance
  • Data Protection Mechanism: Retry mechanism, failure retention, ensuring no data loss
  • Unified Logging System: Uses loguru to provide unified log format and level management, supporting TRACE, DEBUG, INFO, WARNING, ERROR levels
  • Channel Misalignment Protection: Ensures correct data order, avoiding channel misalignment
  • Precise Timestamp Calculation: Automatically calculates time for each sample based on sample rate
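
The downsampling step can be pictured with a short sketch. This is a hypothetical illustration, not the project's actual code: it forwards one complete X/Y/Z sample out of every 50 into a bounded web_data_queue, dropping the oldest entry when the queue is full.

import queue

DOWNSAMPLE_RATIO = 50          # keep 1 of every 50 samples, per the list above
web_data_queue = queue.Queue(maxsize=10_000)

_sample_counter = 0

def push_downsampled(samples):
    """Feed interleaved [X, Y, Z, ...] values; forward every 50th sample set."""
    global _sample_counter
    # walk complete X/Y/Z triples only, so channels never misalign
    for i in range(0, len(samples) - 2, 3):
        if _sample_counter % DOWNSAMPLE_RATIO == 0:
            try:
                web_data_queue.put_nowait(samples[i:i + 3])
            except queue.Full:
                web_data_queue.get_nowait()   # drop the oldest entry (sketch only;
                web_data_queue.put_nowait(samples[i:i + 3])  # not race-safe)
        _sample_counter += 1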

System Requirements

Hardware Requirements

  • ProWaveDAQ device (connected via Modbus RTU)
  • Serial port (USB-to-serial or direct serial port)
  • A system that supports Python 3.10+ (DietPi or another Debian-based system is recommended)
  • (Optional) SQL server (MySQL/MariaDB) for data upload

Software Requirements

  • Python 3.10 or higher
  • Supported operating systems:
    • DietPi (recommended)
    • Debian-based Linux distributions
    • Ubuntu
    • Raspberry Pi OS

Python Package Dependencies

See the requirements.txt file; the main dependencies include:

  • pymodbus==3.11.3 - Modbus communication
  • pyserial==3.5 - Serial port communication
  • Flask==3.1.2 - Web server
  • pymysql==1.0.2 - SQL database connection (MySQL/MariaDB)
  • loguru==0.7.3 - Unified logging system

Installation Instructions

1. Clone or Download Project

cd /path/to/ProWaveDAQ_Python_Visualization_Unit

Quick Installation Command

./deploy.sh

Note: The deploy.sh script requires sudo privileges in the following cases:

  • When Python 3, pip3, or venv module is not installed (requires system package installation)
  • When user needs to be added to dialout group to access serial port

If the Python environment is already installed and the user is already in the dialout group, sudo is not required.

If sudo is needed, execute:

sudo ./deploy.sh

2. Install Python Dependencies

pip install -r requirements.txt

Or using pip3:

pip3 install -r requirements.txt

3. Set Permissions

Ensure Python scripts have execution permissions:

chmod +x src/main.py
chmod +x src/prowavedaq.py
chmod +x src/csv_writer.py
chmod +x src/sql_uploader.py

4. Set Serial Port Permissions (Linux)

If using a USB-to-serial device, you may need to add your user to the dialout group:

sudo usermod -a -G dialout $USER

Then re-login or execute:

newgrp dialout

5. Verify Configuration Files

Check configuration files in API/ directory:

  • API/ProWaveDAQ.ini - ProWaveDAQ device settings
  • API/csv.ini - CSV file splitting interval settings
  • API/sql.ini - SQL server connection settings and upload interval settings

6. Set Up SQL Database (Optional)

If SQL upload is enabled, create data table in MariaDB/MySQL:

CREATE TABLE IF NOT EXISTS vibration_data (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    timestamp DATETIME NOT NULL,
    label VARCHAR(255) NOT NULL,
    channel_1 DOUBLE NOT NULL,
    channel_2 DOUBLE NOT NULL,
    channel_3 DOUBLE NOT NULL,
    INDEX idx_timestamp (timestamp),
    INDEX idx_label (label)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Note: If the table doesn't exist, the program will automatically create it on first connection.
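
For reference, below is a minimal sketch of such an auto-create step using pymysql, with connection values mirroring the sql.ini example later in this document; the actual logic lives in src/sql_uploader.py and may differ.

import pymysql

DDL = """
CREATE TABLE IF NOT EXISTS vibration_data (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    timestamp DATETIME NOT NULL,
    label VARCHAR(255) NOT NULL,
    channel_1 DOUBLE NOT NULL,
    channel_2 DOUBLE NOT NULL,
    channel_3 DOUBLE NOT NULL,
    INDEX idx_timestamp (timestamp),
    INDEX idx_label (label)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
"""

# Connection values here mirror the sql.ini example in this README
conn = pymysql.connect(host="192.168.9.13", port=3306, user="raspberrypi",
                       password="Raspberry@Pi", database="daq-data",
                       charset="utf8mb4")
with conn.cursor() as cur:
    cur.execute(DDL)        # no-op if the table already exists
conn.commit()
conn.close()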

7. Set Up OpenVPN Connection (If connecting to IMSL Lab MariaDB server)

Background: Due to Feng Chia University's firewall policy, connecting to IMSL Lab's MariaDB server requires a VPN connection established through OpenVPN.

Install OpenVPN (if not already installed):

sudo apt-get update
sudo apt-get install openvpn

Configure OpenVPN Connection:

  1. Initial Connection Setup:

    ./connection.sh --setup

    Script will prompt for:

    • Private Key Password
    • OVPN Username
    • OVPN Password
    • OVPN Server Address (optional)
  2. Verify OVPN Configuration File Exists:

    • Ensure API/imCloud.ovpn file exists
    • If not, the script will create a basic configuration file, which must be edited manually to add the complete settings
  3. Establish OpenVPN Connection:

    ./connection.sh

    Or with sudo privileges:

    sudo ./connection.sh
  4. Check Connection Status:

    # Check if OpenVPN process is running
    pgrep -x openvpn
    
    # View connection status
    ip addr show tun0
  5. Disconnect:

    ./connection.sh --disconnect

    Or:

    sudo killall openvpn

Important Notes:

  • Connection information is stored in API/connection_config.txt (permissions 600, owner read/write only)
  • Do not commit API/connection_config.txt or API/imCloud.ovpn to version control
  • It is recommended to add these files to .gitignore
  • Once the VPN connection is established, SQL upload can reach IMSL Lab's MariaDB server as usual

Usage Instructions

Starting the System

Method 1: Using Startup Script (Recommended)

Using default port 8080:

./run.sh

Specify custom port:

./run.sh 3000    # Use port 3000
./run.sh 9000    # Use port 9000

Using log recording:

./run_with_logs.sh          # Use default port 8080, save logs
./run_with_logs.sh 3000     # Use port 3000, save logs

Method 2: Direct Python Execution

Using default port 8080:

cd src
python3 main.py

Specify custom port:

cd src
python3 main.py --port 3000    # Use port 3000
python3 main.py -p 9000        # Use port 9000 (short form)

View all available options:

python3 src/main.py --help

After successful startup, you will see a message similar to:

============================================================
ProWaveDAQ Real-time Data Visualization System
============================================================
Web interface will be available at http://0.0.0.0:8080/
Press Ctrl+C to stop the server
============================================================

Using Web Interface

  1. Open Browser

    • On local machine: Open http://localhost:<port>/ (default 8080)
    • On remote machine: Open http://<deviceIP>:<port>/ (default 8080)

    Examples:

    • Using default port: http://localhost:8080/
    • Using custom port 3000: http://localhost:3000/
  2. Input Data Label

    • Enter label name for this measurement in "Data Label (Label)" field
    • Examples: test_001, vibration_20240101, etc.
  3. Configure SQL Upload (Optional)

    • Check "Enable SQL Server Upload"
    • Choose "Use INI File Settings" or "Manual Input Settings"
    • If using INI settings, system automatically reads settings from sql.ini
    • If manual input, can override INI settings
  4. Start Data Collection

    • Click "Start Reading" button
    • System will automatically:
      • Connect to ProWaveDAQ device
      • Start reading data
      • Display real-time data curves
      • Automatically save CSV files
      • (If enabled) Automatically upload data to SQL server
  5. View Real-time Data

    • Real-time curve graph automatically updates (every 200ms)
    • Can view data from all three channels simultaneously
    • Displays last 10 seconds of data (approximately 78,120 data points per channel)
    • Data point count displayed in real-time
  6. Stop Data Collection

    • Click "Stop Reading" button
    • System safely stops collection and closes connection
    • Automatically uploads remaining data to SQL server (if enabled)
  7. Manage Configuration Files

    • Click "Configuration Management" link
    • Edit configuration files using fixed input fields (prevents accidental parameter deletion)
    • Can edit ProWaveDAQ.ini, csv.ini, and sql.ini
    • After modification, click "Save Configuration"
  8. Browse and Download Files

    • Click "File Browser" link
    • Can browse all folders and files in output/ProWaveDAQ/ directory
    • Click folder name or "Enter" button to enter folder
    • Click "Download" button to download CSV files
    • Use breadcrumb navigation to return to parent directory

Configuration File Description

ProWaveDAQ.ini

[ProWaveDAQ]
serialPort = /dev/ttyUSB0    # Serial port path
baudRate = 3000000           # Baud rate
sampleRate = 7812            # Sample rate (Hz)
slaveID = 1                  # Modbus slave ID
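
These values can be read with Python's standard configparser. The following is a minimal sketch, not the project's actual code; note that the inline # comments in these INI files require inline_comment_prefixes.

import configparser

# inline "#" comments in the INI files require inline_comment_prefixes
cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
cfg.read("API/ProWaveDAQ.ini")

serial_port = cfg.get("ProWaveDAQ", "serialPort")     # e.g. /dev/ttyUSB0
baud_rate   = cfg.getint("ProWaveDAQ", "baudRate")    # e.g. 3000000
sample_rate = cfg.getint("ProWaveDAQ", "sampleRate")  # e.g. 7812
slave_id    = cfg.getint("ProWaveDAQ", "slaveID")     # e.g. 1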

csv.ini

[CSVServer]
enabled = false

[DumpUnit]
second = 60                  # Data time length per CSV file (seconds)

File Splitting Logic:

  • System calculates data points per CSV file based on sampleRate × channels × second
  • When accumulated data points reach target value, automatically creates new file
  • Example: Sample rate 7812 Hz, 3 channels, 60 seconds → approximately 1,406,160 data points per file

sql.ini

[SQLServer]
enabled = false             # Whether to enable SQL upload (true/false)
host = 192.168.9.13         # SQL server location
port = 3306                 # Port
user = raspberrypi          # Username
password = Raspberry@Pi     # Password
database = daq-data         # Database name

[DumpUnit]
second = 5                  # SQL upload interval (seconds); works like the CSV second setting

SQL Configuration Notes:

  • enabled: Controls whether SQL upload is enabled
  • If enabled = true, frontend automatically checks "Enable SQL Server Upload"
  • Frontend can choose to use INI settings or manual input (overrides INI settings)
  • Important: To connect to IMSL Lab's MariaDB server, an OpenVPN connection must be established first via connection.sh because of Feng Chia University's firewall policy (see the "Set Up OpenVPN Connection" section)

SQL Upload Logic:

  • SQL upload uses a temporary file mechanism: data is first written to a temporary CSV file
  • The system creates a temporary file directory (.sql_temp); files are named {timestamp}_sql_temp.csv
  • Every `second` seconds (the [DumpUnit] setting), the temporary file is checked and uploaded
  • After a successful upload, the temporary file is deleted and a new one is created immediately (prevents data overflow)
  • On stop, all remaining temporary files are checked and uploaded
  • Example: Sample rate 7812 Hz, 3 channels, second = 5 → a temporary file is uploaded every 5 seconds

Output Files

CSV Files

CSV files are stored in output/ProWaveDAQ/ directory, file naming format:

YYYYMMDDHHMMSS_<Label>_001.csv
YYYYMMDDHHMMSS_<Label>_002.csv
...

Each CSV file contains:

  • Timestamp - Timestamp (precisely calculated based on sample rate)
  • Channel_1(X) - Channel 1 data (corresponds to X axis)
  • Channel_2(Y) - Channel 2 data (corresponds to Y axis)
  • Channel_3(Z) - Channel 3 data (corresponds to Z axis)

Data Format Description:

  • Data format: [length, X, Y, Z, X, Y, Z, ...]
  • When reading from device, position 0 is length, followed by X, Y, Z cycle
  • Timestamp automatically calculated based on sample rate, ensuring correct time interval for each sample
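
A minimal sketch of this parsing and timestamping step (hypothetical and simplified relative to what src/prowavedaq.py and src/csv_writer.py actually do):

from datetime import datetime, timedelta

SAMPLE_RATE = 7812  # Hz, from ProWaveDAQ.ini

def packet_to_rows(packet, start_time: datetime, samples_so_far: int):
    """Turn one [length, X, Y, Z, X, Y, Z, ...] packet into timestamped rows."""
    length = packet[0]
    values = packet[1:1 + length]
    step = timedelta(seconds=1.0 / SAMPLE_RATE)
    usable = len(values) - len(values) % 3   # keep complete X/Y/Z triples only
    rows = []
    for i in range(0, usable, 3):
        ts = start_time + (samples_so_far + i // 3) * step
        rows.append((ts, values[i], values[i + 1], values[i + 2]))
    return rows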

SQL Database

If SQL upload is enabled, data is stored in vibration_data table, containing:

  • id - Auto-increment primary key
  • timestamp - Data timestamp
  • label - Data label
  • channel_1 - Channel 1 data
  • channel_2 - Channel 2 data
  • channel_3 - Channel 3 data

File Structure

ProWaveDAQ_Python_Visualization_Unit/
│
├── API/
│   ├── ProWaveDAQ.ini      # ProWaveDAQ device configuration file
│   ├── csv.ini              # CSV file splitting interval configuration file
│   ├── sql.ini              # SQL server connection and upload interval configuration file
│   ├── imCloud.ovpn         # OpenVPN connection configuration file (for connecting to IMSL Lab)
│   └── connection_config.txt # OpenVPN connection information configuration file (auto-generated, do not commit to version control)
│
├── connection.sh            # OpenVPN connection script (for connecting to IMSL Lab MariaDB server)
│
├── output/
│   └── ProWaveDAQ/         # CSV output directory
│       └── YYYYMMDDHHMMSS_<Label>/
│           ├── YYYYMMDDHHMMSS_<Label>_*.csv
│           └── .sql_temp/  # SQL temporary file directory (if SQL enabled)
│               └── YYYYMMDDHHMMSS_sql_temp.csv
│
├── src/
│   ├── prowavedaq.py       # ProWaveDAQ core module (Modbus communication)
│   ├── csv_writer.py       # CSV writer module
│   ├── sql_uploader.py     # SQL uploader module
│   ├── main.py             # Main control program (Web interface, includes loguru logging configuration)
│   ├── requirements.txt    # Python dependency package list
│   └── templates/          # HTML template directory
│       ├── index.html      # Main page template
│       ├── config.html     # Configuration management page template
│       └── files.html      # File browser page template
│
├── README.md               # This document
├── README_EN.md            # This document (English version)
├── deploy.sh               # Deployment script
└── run.sh                  # Startup script (enter virtual environment and start program)

API Route Description

Route        Method  Description
/            GET     Main page: configuration form, Label input, SQL settings, start/stop buttons, and line chart
/data        GET     Returns the latest data as JSON to the frontend
/status      GET     Check data collection status
/sql_config  GET     Get the SQL configuration (from sql.ini)
/config      GET     Configuration editing page (fixed input fields for ProWaveDAQ.ini, csv.ini, sql.ini)
/config      POST    Save modified configuration files
/start       POST    Start DAQ, CSVWriter, SQLUploader, and real-time display
/stop        POST    Stop all threads, shut down safely, and upload remaining data
/files_page  GET     File browser page
/files       GET     List files and folders in the output directory (query parameter: path)
/download    GET     Download a file (query parameter: path)

API Response Format

/data (GET)

{
  "success": true,
  "data": [0.123, 0.456, 0.789, ...],
  "counter": 12345
}

/sql_config (GET)

{
  "success": true,
  "sql_config": {
    "enabled": false,
    "host": "localhost",
    "port": "3306",
    "user": "root",
    "password": "",
    "database": "prowavedaq"
  }
}

/start (POST)

Request:

{
  "label": "test_001",
  "sql_enabled": true,
  "sql_host": "192.168.9.13",
  "sql_port": "3306",
  "sql_user": "raspberrypi",
  "sql_password": "Raspberry@Pi",
  "sql_database": "daq-data"
}

Response:

{
  "success": true,
  "message": "Data collection started (Sample rate: 7812 Hz, File split interval: 600 seconds, SQL upload interval: 600 seconds)"
}

Note:

  • If "Use INI File Settings" is chosen, only sql_enabled: true needs to be sent
  • If "Manual Input Settings" is chosen, all SQL configuration parameters must be sent
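
As an illustration, the start/stop cycle can be driven from Python with the requests library. This is a sketch assuming the server runs on the default port 8080:

import requests

BASE = "http://localhost:8080"

# "Use INI File Settings": only the label and the sql_enabled flag are sent
r = requests.post(f"{BASE}/start", json={"label": "test_001", "sql_enabled": True})
print(r.json())                                   # {"success": true, "message": ...}

print(requests.get(f"{BASE}/status").json())      # {"is_collecting": true, ...}
print(requests.get(f"{BASE}/data").json()["counter"])

print(requests.post(f"{BASE}/stop").json())       # {"success": true, ...}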

/stop (POST)

Response:

{
  "success": true,
  "message": "Data collection stopped"
}

/status (GET)

Response:

{
  "success": true,
  "is_collecting": true,
  "counter": 12345
}

/files (GET)

Query Parameters:

  • path (optional): Subdirectory path to browse

Response:

{
  "success": true,
  "items": [
    {
      "name": "20240101120000_test_001",
      "type": "directory",
      "path": "20240101120000_test_001"
    },
    {
      "name": "data.csv",
      "type": "file",
      "path": "data.csv",
      "size": 1024
    }
  ],
  "current_path": ""
}

/download (GET)

Query Parameters:

  • path (required): File path to download

Response: Direct file download

Data Flow and Operation Mechanism

Overall Data Flow

ProWaveDAQ Device (Modbus RTU)
    ↓
    Data format: [length, X, Y, Z, X, Y, Z, ...]
    ↓
ProWaveDAQ._read_loop() [Background Thread]
    ├─→ Read data length (position 0)
    ├─→ Read complete data (including length)
    ├─→ Handle cases where length is not multiple of 3 (remaining_data mechanism)
    ├─→ Ensure only complete samples processed (X, Y, Z combination)
    └─→ Data conversion (16-bit integer → float)
    ↓ (Put into queue)
data_queue (queue.Queue, max 1000 entries)
    ↓
collection_loop() [Background Thread]
    ├─→ update_realtime_data()
    │       ↓
    │   realtime_data_buffer (np.ndarray, fixed 234,360 points)
    │   realtime_time_buffer (np.ndarray, fixed 10 time points)
    │       ↓
    │   Flask /data API (HTTP GET, every 200ms)
    │       ↓
    │   Frontend Chart.js (templates/index.html)
    │
    ├─→ CSV Writer
    │   ├─→ Calculate timestamp based on sample rate
    │   ├─→ Ensure timestamp continuity when splitting files
    │   └─→ CSV files (split storage, ensuring sample boundaries)
    │
    └─→ SQL Uploader (if enabled)
            ├─→ Write to temporary CSV file
            │   └─→ .sql_temp/{timestamp}_sql_temp.csv
            │
            └─→ Scheduled upload thread (every sql_upload_interval seconds)
                ├─→ Read temporary file
                ├─→ Batch upload to SQL
                ├─→ Delete temporary file
                └─→ Create new temporary file
                    ↓
                MariaDB/MySQL Database

Real-time Data Return Mechanism

Mechanism: HTTP Polling + Downsampling Queue

  • Request Frequency: Every 200 milliseconds (0.2 seconds)
  • Data Transmission: JSON format (downsampled incremental data)
  • Downsampling Architecture: Downsampling Queue
    • Downsampling ratio: 50 (take 1 point every 50 points)
    • Original sample rate: 7812 Hz → downsampled to approximately 156 Hz
    • Frontend data transmission reduced by approximately 98%
    • Frontend plotting burden significantly reduced
  • Queue Architecture:
    • web_data_queue: Stores downsampled data for frontend use (max 10,000 entries)
    • csv_data_queue: Stores raw data for CSV writing (max 1,000 entries)
    • sql_data_queue: Stores raw data for SQL upload (max 1,000 entries)
  • Display Limit: Frontend displays at most 500 points (roughly 3 seconds of downsampled data at ~156 Hz)
  • Incremental Update: Frontend only processes new data, uses push() and splice() to maintain fixed window size

Frontend Processing:

// Poll /data every 200 ms (a simplified sketch of templates/index.html;
// `chart` is the Chart.js instance created at page load)
setInterval(updateChart, 200);

const MAX_POINTS = 500;  // fixed display window of downsampled points

function updateChart() {
    fetch('/data')
        .then(response => response.json())
        .then(result => {
            if (!result.success) return;
            // Incoming values are interleaved by channel: [X, Y, Z, X, Y, Z, ...]
            for (let i = 0; i + 2 < result.data.length; i += 3) {
                chart.data.datasets[0].data.push(result.data[i]);      // X
                chart.data.datasets[1].data.push(result.data[i + 1]);  // Y
                chart.data.datasets[2].data.push(result.data[i + 2]);  // Z
            }
            // Keep a fixed window: drop the oldest points beyond the limit
            chart.data.datasets.forEach(ds => {
                if (ds.data.length > MAX_POINTS)
                    ds.data.splice(0, ds.data.length - MAX_POINTS);
            });
            chart.update('none');  // no animation, for performance
        });
}

CSV File Splitting Mechanism

Trigger Method: Based on data volume

  1. Calculate Target Size:

    target_size = second × sampleRate × channels
    
  2. Accumulate Counter:

    current_data_size += len(data)
  3. File Splitting Logic:

    • If current_data_size < target_size: Directly write to current file
    • If current_data_size >= target_size: Batch process, create new file
    • Important: Ensure split position is at sample boundary (multiple of 3), avoid channel misalignment
  4. Timestamp Calculation:

    • Time interval per sample = 1 / sample_rate seconds
    • Timestamp = global start time + (sample count × sample interval)
    • Ensure timestamp continuity when splitting files
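
A condensed sketch of this logic (hypothetical; the real implementation lives in src/main.py and src/csv_writer.py):

SAMPLE_RATE = 7812
CHANNELS = 3
SECONDS_PER_FILE = 60
TARGET_SIZE = SECONDS_PER_FILE * SAMPLE_RATE * CHANNELS   # 1,406,160 values

def split_count(pending: int, current_size: int) -> int:
    """How many values go into the current file, aligned to a full sample."""
    room = TARGET_SIZE - current_size
    room -= room % CHANNELS            # never split inside an X/Y/Z triple
    take = min(pending, room)
    return take - take % CHANNELS

def sample_timestamp(global_start: float, sample_index: int) -> float:
    """Timestamps stay continuous across files: derived from the global start."""
    return global_start + sample_index / SAMPLE_RATE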

SQL Upload Mechanism

Trigger Method: Based on time interval (scheduled upload)

  1. Temporary File Mechanism:

    • When data collection starts, create .sql_temp temporary directory under output directory
    • Create first temporary file (filename format: {timestamp}_sql_temp.csv)
    • All SQL data directly written to current temporary file
  2. Scheduled Upload Thread:

    • Start independent background thread (sql_upload_timer_loop)
    • Check every sql_upload_interval seconds
    • When time reached:
      • Upload current temporary file to SQL server
      • After successful upload, delete temporary file
      • Immediately create new temporary file (avoid data overflow)
  3. Data Protection Mechanism:

    • Retry Mechanism: Maximum 3 retries on upload failure, incremental delay (0.1s, 0.2s, 0.3s)
    • Failure Retention: Temporary file retained on upload failure, wait for next retry
    • Success Confirmation: Only delete temporary file after successful upload
    • Auto Reconnect: Automatically reconnect on connection interruption
  4. Batch Insert:

    • Read all data from temporary CSV file
    • Use executemany() for batch insert, improving performance
    • Automatically create corresponding SQL table (table name corresponds to CSV filename)
  5. Stop Processing:

    • Upload current temporary file on stop
    • Check and upload all remaining temporary files (ensure no data loss)
    • Delete all temporary files after successful upload
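
The following sketch outlines the scheduled upload thread under the assumptions above; it is a simplified, hypothetical version of sql_upload_timer_loop in src/sql_uploader.py and assumes header-less temp CSV rows of (timestamp, label, channel_1, channel_2, channel_3).

import csv
import os
import time
import pymysql

def sql_upload_timer_loop(temp_dir, interval, conn_args, stop_event, new_temp_file):
    """stop_event: threading.Event; conn_args: pymysql connection kwargs."""
    while not stop_event.wait(interval):          # wake every `interval` seconds
        for name in sorted(os.listdir(temp_dir)):
            path = os.path.join(temp_dir, name)
            for attempt in range(3):              # up to 3 retries per file
                try:
                    with open(path, newline="") as f:
                        rows = [r for r in csv.reader(f) if r]
                    conn = pymysql.connect(**conn_args)
                    with conn.cursor() as cur:
                        cur.executemany(
                            "INSERT INTO vibration_data "
                            "(timestamp, label, channel_1, channel_2, channel_3) "
                            "VALUES (%s, %s, %s, %s, %s)", rows)
                    conn.commit()
                    conn.close()
                    os.remove(path)               # delete only after success
                    break
                except Exception:
                    time.sleep(0.1 * (attempt + 1))   # 0.1s, 0.2s, 0.3s backoff
        new_temp_file()                           # fresh temp file for new data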

Temporary File Structure:

output/ProWaveDAQ/{timestamp}_{label}/
├── {timestamp}_{label}_001.csv
├── {timestamp}_{label}_002.csv
└── .sql_temp/                    # Temporary file directory
    ├── 20250106120000_sql_temp.csv
    ├── 20250106120600_sql_temp.csv
    └── ...

Advantages:

  • Lower memory usage: data is written directly to file instead of being held in a memory buffer
  • Data persistence: temporary files survive even if the program terminates abnormally
  • Scheduled upload: avoids frequent SQL connections, improving performance
  • Automatic cleanup: temporary files are deleted automatically after a successful upload

Troubleshooting

Common Issues

1. Cannot Connect to Device

Symptoms: Unable to read data after startup

Solutions:

  • Check if serial port path is correct (/dev/ttyUSB0 or others)
  • Confirm device is properly connected
  • Check if user has serial port access permissions
  • Try using ls -l /dev/ttyUSB* to confirm device exists

2. Web Interface Cannot Open

Symptoms: Unable to open webpage in browser

Solutions:

  • Confirm firewall allows the port number used (default 8080)
  • Check if other programs are using the port
  • If using custom port, confirm browser URL uses correct port number
  • Confirm Python program is running
  • Check system logs for error messages

3. Incorrect Data Display

Symptoms: Chart displays abnormally or data points incorrect

Solutions:

  • Check if sample rate in configuration file is correct
  • Confirm channel count setting (default 3)
  • Check browser console for JavaScript errors

4. CSV Files Not Generated

Symptoms: Data collection normal but no CSV files

Solutions:

  • Check if output/ProWaveDAQ/ directory has write permissions
  • Confirm Label has been correctly entered
  • Check if disk space is sufficient

5. SQL Upload Failure

Symptoms: SQL upload function not working properly

Solutions:

  • Check if connection settings in sql.ini are correct
  • If connecting to IMSL Lab MariaDB server:
    • Confirm OpenVPN connection established (execute ./connection.sh)
    • Check OpenVPN connection status: pgrep -x openvpn
    • Confirm VPN connection normal: ip addr show tun0
    • If connection fails, check:
      • Whether API/imCloud.ovpn file exists and is correct
      • Whether connection information in API/connection_config.txt is correct
      • Execute ./connection.sh --setup to reconfigure connection information
  • General Connection Issues:
    • Confirm SQL server is reachable (test: mysql -h <host> -P <port> -u <user> -p)
    • Check if database exists
    • Confirm user has sufficient permissions (CREATE TABLE, INSERT)
    • View terminal error messages
    • Check network connection (especially when crossing network segments)

6. Data Collection Stopped

Symptoms: Data collection stops mid-way

Solutions:

  • Check if Modbus connection is interrupted
  • View terminal error messages
  • Confirm device is operating normally
  • Check if SQL connection is normal (if SQL upload enabled)

7. High Memory Usage

Symptoms: System memory usage too high

Solutions:

  • Check if sql_upload_interval setting is too large
  • System automatically limits buffer size (maximum 10,000,000 data points)
  • If memory issues persist, can reduce sql_upload_interval value

8. Channel Order Misalignment

Symptoms: Channel order incorrect in CSV files or charts

Solutions:

  • System automatically handles this issue (using remaining_data mechanism)
  • If problem persists, check:
    • Whether data format is correct: [length, X, Y, Z, X, Y, Z, ...]
    • Check logs for "Remaining data points" warnings
    • See the 通道錯誤可能性分析.md file (channel misalignment analysis, in Chinese) for details

Logging System

The system uses loguru as the unified logging system, providing the following log levels:

  • [TRACE] - Most detailed trace messages (function entry/exit, data processing flow)
  • [DEBUG] - Debug messages (configuration reading, connection status, performance metrics)
  • [INFO] - General information messages (system startup/shutdown, important status changes)
  • [WARNING] - Warning messages
  • [ERROR] - Error messages

All log messages automatically include timestamp and module label, format:

[YYYY-MM-DD HH:MM:SS] [LEVEL] [Module Name] Message content

Log Labels:

  • [main] - Main program related logs
  • [prowavedaq] - DAQ device communication logs
  • [csv_writer] - CSV writer logs
  • [sql_uploader] - SQL uploader logs

Log Output:

  • Console: TRACE, DEBUG, INFO, and WARNING messages go to stdout; ERROR messages go to stderr
  • File: All levels (TRACE and above) logged to logs/loguru.log, automatic rotation (10 MB) and retention (7 days)

Note: Flask HTTP request logs are hidden by default, only application log messages are displayed.
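
A minimal loguru configuration that reproduces the behavior described above might look like this (a sketch; the actual setup lives in src/main.py):

import sys
from loguru import logger

FMT = "[{time:YYYY-MM-DD HH:mm:ss}] [{level}] [{extra[module]}] {message}"

logger.remove()  # drop loguru's default handler
logger.add(sys.stdout, level="TRACE", format=FMT,
           filter=lambda record: record["level"].name != "ERROR")  # non-errors
logger.add(sys.stderr, level="ERROR", format=FMT)                  # errors only
logger.add("logs/loguru.log", level="TRACE", format=FMT,
           rotation="10 MB", retention="7 days")                   # rotating file

# every logger must bind `module`, or the {extra[module]} format field fails
log = logger.bind(module="main")   # yields the [main] label in messages
log.info("System startup")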

Debug Mode

To view detailed debug information, you can:

  • View terminal log output
  • View logs/loguru.log file for complete log records
  • Use grep "[module_name]" logs/loguru.log to filter logs for specific modules

Technical Architecture

Thread Design

Thread              Function                                    Notes
Main Thread         Control flow; waits for user interrupt     Synchronous main control core
Flask Thread        Provides the HTTP interface and API        daemon=True
Collection Thread   Data collection loop (feeds CSV and SQL)   Started on /start
DAQ Reading Thread  Reads data from the Modbus device          Started by start_reading(); runs _read_loop()

Code Architecture

prowavedaq.py Module Structure

Public API (for external use):

  • scan_devices() - Scan available Modbus devices
  • init_devices(filename) - Initialize device from INI file and establish connection
  • start_reading() - Start data reading (background thread)
  • stop_reading() - Stop data reading and close connection
  • get_data() - Non-blocking get latest batch of data
  • get_data_blocking(timeout) - Blocking get latest batch of data
  • get_counter() - Get read batch count
  • get_sample_rate() - Get sample rate
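
A typical consumer of this public API would look roughly like the following hypothetical sketch; it assumes these are module-level functions in src/prowavedaq.py and that batches arrive as interleaved [X, Y, Z, ...] floats:

import time
import prowavedaq   # src/prowavedaq.py must be on the import path

prowavedaq.init_devices("API/ProWaveDAQ.ini")   # connect per the INI settings
prowavedaq.start_reading()                      # spawns the _read_loop() thread
try:
    end = time.time() + 5                       # read for ~5 seconds
    while time.time() < end:
        batch = prowavedaq.get_data_blocking(timeout=1.0)
        if batch:
            print(f"batch {prowavedaq.get_counter()}: {len(batch)} values")
finally:
    prowavedaq.stop_reading()                   # stop the thread, close the port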

Internal Methods (for module internal use):

  • _connect() - Establish Modbus RTU connection
  • _disconnect() - Close Modbus connection
  • _ensure_connected() - Ensure connection exists (auto reconnect)
  • _read_chip_id() - Read chip ID (during initialization)
  • _set_sample_rate() - Set sample rate (during initialization)
  • _read_registers_with_header() - Read registers (including Header)
  • _read_normal_data() - Normal Mode read (Address 0x02)
  • _read_bulk_data() - Bulk Mode read (Address 0x15)
  • _get_buffer_status() - Read buffer status
  • _convert_raw_to_float_samples() - Convert raw values to float (ensures X/Y/Z channels stay aligned)
  • _read_loop() - Main read loop (background thread)

Read Modes:

  • Normal Mode: Used when buffer data volume ≤ 123, read from Address 0x02
  • Bulk Mode: Used when buffer data volume > 123, read from Address 0x15, maximum 9 samples
  • Automatically switch modes based on buffer status, optimize read efficiency
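
In pseudocode form, the mode switch reduces to a threshold check. This sketch uses the internal method names listed above; the threshold and addresses follow the values quoted here:

BULK_THRESHOLD = 123   # samples waiting in the device FIFO

def _read_batch(self):
    """Pick Normal or Bulk Mode based on the FIFO fill level."""
    buffered = self._get_buffer_status()    # FIFO sample count from the device
    if buffered > BULK_THRESHOLD:
        return self._read_bulk_data()       # Bulk Mode: Address 0x15, up to 9 samples
    return self._read_normal_data()         # Normal Mode: Address 0x02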

Design Principles:

  • Each read processes only complete X/Y/Z three-axis groups, avoiding channel misalignment
  • The FIFO buffer size (register 0x02) is read together with the data to ensure consistency
  • An auto-reconnect mechanism keeps the connection stable
  • Modular design, for easier future extension and maintenance

Technical Limitations

  • Does not use asyncio or WebSocket
  • Does not use file-based data exchange
  • All data transfer completed in memory
  • Uses Python variables or global state to save data
  • SQL upload uses a direct pymysql database connection; WebSocket is not supported

Memory Management

  1. Real-time Data Buffer:

    • Maximum 234,360 data points retained (approximately 1.87 MB, 10 seconds of data)
    • Calculation: 7812 Hz × 3 channels × 10 seconds = 234,360 data points
    • Only updated when active frontend connection exists
    • Frontend display limit: Last 10 seconds of data (78,120 data points per channel)
  2. SQL Data Buffer:

    • Upper limit: min(sql_target_size × 2, 10,000,000) data points
    • Force upload when exceeding limit
  3. DAQ Data Queue:

    • Maximum 1000 entries (approximately 123 points per entry, approximately 1 MB)
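
The fixed-size real-time buffer can be sketched with NumPy as follows (hypothetical; the sizes follow the figures above):

import numpy as np

BUFFER_SIZE = 7812 * 3 * 10   # 234,360 values = 10 s of 3-channel data (~1.87 MB)

realtime_data_buffer = np.zeros(BUFFER_SIZE, dtype=np.float64)

def append_realtime(new_values: np.ndarray) -> None:
    """Shift the window left and append; the oldest samples fall off the front."""
    n = len(new_values)
    if n == 0 or n > BUFFER_SIZE:   # sketch assumes batches smaller than the buffer
        return
    realtime_data_buffer[:-n] = realtime_data_buffer[n:]
    realtime_data_buffer[-n:] = new_values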

Development Notes

Extending Functionality

To extend system functionality, you can:

  1. Modify Frontend Interface: Edit src/templates/index.html and src/templates/config.html templates
  2. Adjust Chart Settings: Modify Chart.js configuration options in src/templates/index.html
  3. Add API Routes: Add route handler functions in src/main.py
  4. Customize CSV Format: Modify write logic in src/csv_writer.py
  5. Customize SQL Format: Modify table structure and insert logic in src/sql_uploader.py

Code Structure

  • src/prowavedaq.py: Responsible for Modbus RTU communication and data reading
    • Handles data format: [length, X, Y, Z, X, Y, Z, ...]
    • Automatically handles cases where length is not multiple of 3 (using remaining_data mechanism)
    • Multi-thread safe protection (using lock mechanism)
    • Data integrity checks
  • src/csv_writer.py: Responsible for CSV file creation and writing
    • Automatically calculates timestamp based on sample rate
    • Ensures timestamp continuity when splitting files
    • Channel labels: Channel_1(X), Channel_2(Y), Channel_3(Z)
  • src/sql_uploader.py: Responsible for SQL database connection and data upload
    • Supports MySQL/MariaDB
    • Retry mechanism and data protection
  • src/main.py: Integrates all functionality, provides Web interface (using Flask + templates)
    • Includes loguru logging system configuration
    • Unified log format and level management
    • File splitting logic ensures sample boundaries (multiple of 3)
    • SQL upload ensures sample boundaries
    • Smart buffer management
  • src/templates/index.html: Main page HTML template (includes Chart.js chart, SQL settings)
  • src/templates/config.html: Configuration management page template (fixed input fields)
  • src/templates/files.html: File browser page template

Key Design Decisions

  1. Data Protection:

    • Retain data in buffer when SQL upload fails
    • Only remove from buffer after successful upload
    • Maximum 3 retries, incremental delay
  2. Memory Protection:

    • SQL buffer has upper limit, prevent memory overflow
    • Force upload when exceeding limit
  3. Configuration File Management:

    • Use fixed input fields, prevent user from accidentally deleting parameters
    • SQL settings independent as sql.ini file
  4. Smart Buffer:

    • Only update real-time data buffer when active frontend connection exists
    • Saves CPU and memory resources

License Information

This project is for internal use; please follow the relevant usage regulations.

Contact Information

For questions or suggestions, please contact the project maintainer.


Last Updated: January 6, 2026
Author: Albert Wang
Current Version: 9.0.0

For detailed version update records, please refer to CHANGELOG_EN.md or CHANGELOG.md


Multi-language Versions
Chinese Version (README.md) | English Version (README_EN.md)