Multi-language Version / 多語言版本
中文版本 (README.md) | English Version (README_EN.md)
The ProWaveDAQ Real-time Data Visualization System is a Python-based vibration data acquisition and visualization platform. It collects vibration data from PW-RVT-2-4 (Modbus RTU) devices, displays continuous curves of all data points in real time in a web browser, and automatically handles CSV storage and SQL database uploads.
This system provides a complete web interface that allows users to operate through a browser without accessing the terminal:
- Modify configuration files (`ProWaveDAQ.ini`, `csv.ini`, `sql.ini`)
- Input data labels
- Configure SQL server upload (optional)
- Press "Start Reading" to initiate collection and real-time display
- System automatically splits and stores data files (based on the seconds setting in `csv.ini`)
- Press "Stop" to safely terminate and automatically upload remaining data
- Real-time Data Acquisition: Read vibration data from ProWaveDAQ devices via Modbus RTU protocol
- Real-time Data Visualization: Display multi-channel continuous curve graphs in browser using Chart.js
- Automatic CSV Storage: Automatically split and store data files based on configuration
- SQL Database Upload: Optional SQL server upload functionality, supporting MySQL/MariaDB
- Web Interface Control: Complete browser-based operation interface, no terminal required
- Configuration File Management: Edit configuration files through web interface using fixed input fields (prevents accidental parameter deletion)
- Downsampling Queue Architecture: Uses `web_data_queue` to store downsampled data (50:1 downsampling), significantly reducing frontend data transmission and plotting load
- Incremental Update Mechanism: Frontend uses incremental updates, processing only new data and maintaining a fixed window size (500 points)
- Manufacturer Manual Compliance: ProWaveDAQ module strictly follows manufacturer manual Page 5 specifications, using FC04 to read complete packets
- Multi-threaded Architecture: 5 independent threads (Flask, DAQ Reading, Collection, CSV Writer, SQL Writer), ensuring components don't interfere with each other
- Thread-safe Communication: Uses `queue.Queue` for inter-thread communication, ensuring data consistency
- Real-time Data Visualization: Uses Chart.js for real-time charts (updates every 100ms), animations disabled for performance
- Automatic File Splitting: Automatically splits CSV files based on configuration, ensuring correct sample boundaries
- SQL Batch Upload: Uses temporary file mechanism to batch upload data to SQL server, improving performance
- Data Protection Mechanism: Retry mechanism, failure retention, ensuring no data loss
- Unified Logging System: Uses loguru to provide unified log format and level management, supporting TRACE, DEBUG, INFO, WARNING, ERROR levels
- Channel Misalignment Protection: Ensures correct data order, avoiding channel misalignment
- Precise Timestamp Calculation: Automatically calculates time for each sample based on sample rate
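The thread-safe queue communication mentioned above can be sketched with Python's standard `queue.Queue`. This is a minimal illustration of the producer/consumer pattern the system describes; the names (`daq_queue`, `producer`, `consumer`) are illustrative, not the project's actual identifiers.

```python
import queue
import threading

daq_queue = queue.Queue(maxsize=1000)  # bounded, like the DAQ data queue

def producer(batches):
    """Simulate the DAQ reading thread pushing sample batches."""
    for batch in batches:
        daq_queue.put(batch)   # blocks if the queue is full
    daq_queue.put(None)        # sentinel: no more data

def consumer(collected):
    """Simulate the collection thread draining the queue."""
    while True:
        batch = daq_queue.get()
        if batch is None:
            break
        collected.extend(batch)

collected = []
t1 = threading.Thread(target=producer, args=([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],))
t2 = threading.Thread(target=consumer, args=(collected,))
t1.start(); t2.start()
t1.join(); t2.join()
print(collected)  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Because `queue.Queue` handles locking internally, the producer and consumer never need to share a lock of their own, which is what keeps the threads from interfering with each other.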
- ProWaveDAQ device (connected via Modbus RTU)
- Serial port (USB-to-serial or direct serial port)
- System supporting Python 3.10+ (recommended DietPi or other Debian-based systems)
- (Optional) SQL server (MySQL/MariaDB) for data upload
- Python 3.10 or higher
- Supported operating systems:
- DietPi (recommended)
- Debian-based Linux distributions
- Ubuntu
- Raspberry Pi OS
Please refer to the requirements.txt file; the main dependencies include:
- `pymodbus==3.11.3` - Modbus communication
- `pyserial==3.5` - Serial port communication
- `Flask==3.1.2` - Web server
- `pymysql==1.0.2` - SQL database connection (MySQL/MariaDB)
- `loguru==0.7.3` - Unified logging system
```bash
cd /path/to/ProWaveDAQ_Python_Visualization_Unit
./deploy.sh
```

Note: The deploy.sh script requires sudo privileges in the following cases:
- When Python 3, pip3, or venv module is not installed (requires system package installation)
- When the user needs to be added to the `dialout` group to access the serial port
If the system already has a Python environment installed and the user is already in the dialout group, sudo is not required.

If sudo is needed, execute:

```bash
sudo ./deploy.sh
```

Install the Python dependencies:

```bash
pip install -r requirements.txt
```

Or using pip3:

```bash
pip3 install -r requirements.txt
```

Ensure Python scripts have execution permissions:
```bash
chmod +x src/main.py
chmod +x src/prowavedaq.py
chmod +x src/csv_writer.py
chmod +x src/sql_uploader.py
```

If using USB-to-serial devices, you may need to add the user to the dialout group:

```bash
sudo usermod -a -G dialout $USER
```

Then re-login or execute:

```bash
newgrp dialout
```

Check the configuration files in the API/ directory:
- `API/ProWaveDAQ.ini` - ProWaveDAQ device settings
- `API/csv.ini` - CSV file splitting interval settings
- `API/sql.ini` - SQL server connection settings and upload interval settings
If SQL upload is enabled, create data table in MariaDB/MySQL:
```sql
CREATE TABLE IF NOT EXISTS vibration_data (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    timestamp DATETIME NOT NULL,
    label VARCHAR(255) NOT NULL,
    channel_1 DOUBLE NOT NULL,
    channel_2 DOUBLE NOT NULL,
    channel_3 DOUBLE NOT NULL,
    INDEX idx_timestamp (timestamp),
    INDEX idx_label (label)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```

Note: If the table doesn't exist, the program will automatically create it on first connection.
Background: Due to Feng Chia University firewall policy, connecting to IMSL Lab's MariaDB server requires establishing VPN connection through OpenVPN.
Install OpenVPN (if not already installed):

```bash
sudo apt-get update
sudo apt-get install openvpn
```

Configure the OpenVPN Connection:
1. Initial Connection Setup:

   ```bash
   ./connection.sh --setup
   ```

   The script will prompt for:
   - Private Key Password
   - OVPN Username
   - OVPN Password
   - OVPN Server Address (optional)

2. Verify the OVPN Configuration File Exists:
   - Ensure the `API/imCloud.ovpn` file exists
   - If not, the script will create a basic configuration file, which requires manual editing to add the complete settings

3. Establish the OpenVPN Connection:

   ```bash
   ./connection.sh
   ```

   Or with sudo privileges:

   ```bash
   sudo ./connection.sh
   ```

4. Check Connection Status:

   ```bash
   # Check if the OpenVPN process is running
   pgrep -x openvpn
   # View connection status
   ip addr show tun0
   ```

5. Disconnect:

   ```bash
   ./connection.sh --disconnect
   ```

   Or:

   ```bash
   sudo killall openvpn
   ```
Important Notes:
- Connection information is stored in `API/connection_config.txt` (permissions 600, owner read/write only)
- Do not commit `API/connection_config.txt` and `API/imCloud.ovpn` to version control
- Recommend adding these files to `.gitignore`
- After a successful connection, SQL upload can connect normally to IMSL Lab's MariaDB server
Using the default port 8080:

```bash
./run.sh
```

Specify a custom port:

```bash
./run.sh 3000    # Use port 3000
./run.sh 9000    # Use port 9000
```

With log recording:

```bash
./run_with_logs.sh         # Use default port 8080, save logs
./run_with_logs.sh 3000    # Use port 3000, save logs
```

Manual start, using the default port 8080:

```bash
cd src
python3 main.py
```

Specify a custom port:

```bash
cd src
python3 main.py --port 3000    # Use port 3000
python3 main.py -p 9000        # Use port 9000 (short form)
```

View all available options:

```bash
python3 src/main.py --help
```

After successful startup, you will see a message similar to:
```text
============================================================
ProWaveDAQ Real-time Data Visualization System
============================================================
Web interface will be available at http://0.0.0.0:8080/
Press Ctrl+C to stop the server
============================================================
```
1. Open Browser
   - On the local machine: open `http://localhost:<port>/` (default 8080)
   - On a remote machine: open `http://<deviceIP>:<port>/` (default 8080)
   - Examples:
     - Using the default port: `http://localhost:8080/`
     - Using custom port 3000: `http://localhost:3000/`

2. Input Data Label
   - Enter a label name for this measurement in the "Data Label (Label)" field
   - Examples: `test_001`, `vibration_20240101`, etc.

3. Configure SQL Upload (Optional)
   - Check "Enable SQL Server Upload"
   - Choose "Use INI File Settings" or "Manual Input Settings"
   - If using INI settings, the system automatically reads settings from `sql.ini`
   - If using manual input, you can override the INI settings

4. Start Data Collection
   - Click the "Start Reading" button
   - The system will automatically:
     - Connect to the ProWaveDAQ device
     - Start reading data
     - Display real-time data curves
     - Automatically save CSV files
     - (If enabled) Automatically upload data to the SQL server

5. View Real-time Data
   - The real-time curve graph updates automatically (every 200ms)
   - Data from all three channels can be viewed simultaneously
   - Displays the last 10 seconds of data (approximately 78,120 data points per channel)
   - The data point count is displayed in real time

6. Stop Data Collection
   - Click the "Stop Reading" button
   - The system safely stops collection and closes the connection
   - Remaining data is automatically uploaded to the SQL server (if enabled)

7. Manage Configuration Files
   - Click the "Configuration Management" link
   - Edit configuration files using fixed input fields (prevents accidental parameter deletion)
   - You can edit `ProWaveDAQ.ini`, `csv.ini`, and `sql.ini`
   - After modification, click "Save Configuration"

8. Browse and Download Files
   - Click the "File Browser" link
   - You can browse all folders and files in the `output/ProWaveDAQ/` directory
   - Click a folder name or the "Enter" button to enter a folder
   - Click the "Download" button to download CSV files
   - Use breadcrumb navigation to return to the parent directory
```ini
[ProWaveDAQ]
serialPort = /dev/ttyUSB0   # Serial port path
baudRate = 3000000          # Baud rate
sampleRate = 7812           # Sample rate (Hz)
slaveID = 1                 # Modbus slave ID
```

```ini
[CSVServer]
enabled = false

[DumpUnit]
second = 60                 # Data time length per CSV file (seconds)
```

File Splitting Logic:
- The system calculates the number of data points per CSV file as `sampleRate × channels × second`
- When the accumulated data points reach the target value, a new file is created automatically
- Example: sample rate 7812 Hz, 3 channels, 60 seconds → 1,406,160 data points per file
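The splitting arithmetic above can be checked with a one-line helper. This is a sketch of the rule as described, not the project's actual function; the name `csv_target_points` is illustrative.

```python
def csv_target_points(sample_rate_hz: int, channels: int, seconds: int) -> int:
    """Data points per CSV file, per the sampleRate × channels × second rule."""
    return sample_rate_hz * channels * seconds

# 7812 Hz × 3 channels × 60 s
print(csv_target_points(7812, 3, 60))  # 1406160
```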
```ini
[SQLServer]
enabled = false            # Whether to enable SQL upload (true/false)
host = 192.168.9.13        # SQL server location
port = 3306                # Port
user = raspberrypi         # Username
password = Raspberry@Pi    # Password
database = daq-data        # Database name

[DumpUnit]
second = 5                 # SQL upload interval (seconds), works the same as the CSV second
```

SQL Configuration Notes:
- `enabled`: Controls whether SQL upload is enabled
- If `enabled = true`, the frontend automatically checks "Enable SQL Server Upload"
- The frontend can choose to use the INI settings or manual input (which overrides the INI settings)
- Important: If connecting to IMSL Lab's MariaDB server, the Feng Chia University firewall policy requires establishing an OpenVPN connection via `connection.sh` first (see the "Set Up OpenVPN Connection" section)
SQL Upload Logic:
- SQL upload uses a temporary file mechanism; data is first written to a temporary CSV file
- The system creates a temporary file directory (`.sql_temp`); the file naming format is `{timestamp}_sql_temp.csv`
- Every `second` seconds, the temporary file is checked and uploaded
- After a successful upload, the temporary file is deleted and a new one is created immediately (prevents data overflow)
- On stop, all remaining temporary files are checked and uploaded
- Example: sample rate 7812 Hz, 3 channels, 5 seconds → upload a temporary file every 5 seconds
CSV files are stored in the output/ProWaveDAQ/ directory, with the file naming format:

```text
YYYYMMDDHHMMSS_<Label>_001.csv
YYYYMMDDHHMMSS_<Label>_002.csv
...
```

Each CSV file contains:
- `Timestamp` - Timestamp (precisely calculated based on the sample rate)
- `Channel_1(X)` - Channel 1 data (corresponds to the X axis)
- `Channel_2(Y)` - Channel 2 data (corresponds to the Y axis)
- `Channel_3(Z)` - Channel 3 data (corresponds to the Z axis)
Data Format Description:
- Data format: `[length, X, Y, Z, X, Y, Z, ...]`
- When reading from the device, position 0 is the length, followed by a repeating X, Y, Z cycle
- Timestamps are automatically calculated based on the sample rate, ensuring a correct time interval for each sample
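A minimal sketch of how such a packet could be split into timestamped XYZ samples, including the carry-over of leftover points when the length is not a multiple of 3 (the `remaining_data` idea). The function and its signature are illustrative assumptions, not the project's actual API.

```python
from datetime import datetime, timedelta

def parse_packet(packet, start_time, sample_rate, sample_index=0):
    """Split a [length, X, Y, Z, ...] packet into complete XYZ samples with
    timestamps; incomplete trailing points are returned for the next packet."""
    length = packet[0]
    data = packet[1:1 + length]
    n_complete = len(data) // 3          # only full XYZ groups are emitted
    dt = 1.0 / sample_rate               # seconds per sample
    samples = []
    for i in range(n_complete):
        x, y, z = data[3 * i:3 * i + 3]
        ts = start_time + timedelta(seconds=(sample_index + i) * dt)
        samples.append((ts, x, y, z))
    remaining = data[3 * n_complete:]    # carry over the incomplete sample
    return samples, remaining

t0 = datetime(2024, 1, 1, 12, 0, 0)
# length 7 → two complete XYZ samples plus one leftover point
samples, rest = parse_packet([7, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], t0, 7812)
print(len(samples), rest)  # 2 [0.7]
```

Prepending `rest` to the next packet's data is what prevents a stray point from shifting all subsequent values into the wrong channel.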
If SQL upload is enabled, data is stored in the vibration_data table, containing:
- `id` - Auto-increment primary key
- `timestamp` - Data timestamp
- `label` - Data label
- `channel_1` - Channel 1 data
- `channel_2` - Channel 2 data
- `channel_3` - Channel 3 data
ProWaveDAQ_Python_Visualization_Unit/
│
├── API/
│ ├── ProWaveDAQ.ini # ProWaveDAQ device configuration file
│ ├── csv.ini # CSV file splitting interval configuration file
│ ├── sql.ini # SQL server connection and upload interval configuration file
│ ├── imCloud.ovpn # OpenVPN connection configuration file (for connecting to IMSL Lab)
│ └── connection_config.txt # OpenVPN connection information configuration file (auto-generated, do not commit to version control)
│
├── connection.sh # OpenVPN connection script (for connecting to IMSL Lab MariaDB server)
│
├── output/
│ └── ProWaveDAQ/ # CSV output directory
│ └── YYYYMMDDHHMMSS_<Label>/
│ ├── YYYYMMDDHHMMSS_<Label>_*.csv
│ └── .sql_temp/ # SQL temporary file directory (if SQL enabled)
│ └── YYYYMMDDHHMMSS_sql_temp.csv
│
├── src/
│ ├── prowavedaq.py # ProWaveDAQ core module (Modbus communication)
│ ├── csv_writer.py # CSV writer module
│ ├── sql_uploader.py # SQL uploader module
│ ├── main.py # Main control program (Web interface, includes loguru logging configuration)
│ ├── requirements.txt # Python dependency package list
│ └── templates/ # HTML template directory
│ ├── index.html # Main page template
│ ├── config.html # Configuration management page template
│ └── files.html # File browser page template
│
├── README.md # This document
├── README_EN.md # This document (English version)
├── deploy.sh # Deployment script
└── run.sh # Startup script (enter virtual environment and start program)
| Route | Method | Description |
|---|---|---|
| `/` | GET | Main page: displays the configuration form, Label input, SQL settings, start/stop buttons, and line chart |
| `/data` | GET | Returns the current latest data as JSON to the frontend |
| `/status` | GET | Check the data collection status |
| `/sql_config` | GET | Get the SQL configuration (from sql.ini) |
| `/config` | GET | Display the configuration file editing page (fixed input fields; includes ProWaveDAQ.ini, csv.ini, sql.ini) |
| `/config` | POST | Save modified configuration files |
| `/start` | POST | Start the DAQ, CSVWriter, SQLUploader, and real-time display |
| `/stop` | POST | Stop all threads, close safely, and upload remaining data |
| `/files_page` | GET | File browser page |
| `/files` | GET | List files and folders in the output directory (query parameter: path) |
| `/download` | GET | Download a file (query parameter: path) |
```json
{
    "success": true,
    "data": [0.123, 0.456, 0.789, ...],
    "counter": 12345
}
```

```json
{
    "success": true,
    "sql_config": {
        "enabled": false,
        "host": "localhost",
        "port": "3306",
        "user": "root",
        "password": "",
        "database": "prowavedaq"
    }
}
```

Request:
```json
{
    "label": "test_001",
    "sql_enabled": true,
    "sql_host": "192.168.9.13",
    "sql_port": "3306",
    "sql_user": "raspberrypi",
    "sql_password": "Raspberry@Pi",
    "sql_database": "daq-data"
}
```

Response:

```json
{
    "success": true,
    "message": "Data collection started (Sample rate: 7812 Hz, File split interval: 600 seconds, SQL upload interval: 600 seconds)"
}
```

Note:
- If "Use INI File Settings" is chosen, only `sql_enabled: true` needs to be sent
- If "Manual Input Settings" is chosen, all SQL configuration parameters need to be sent
Response:

```json
{
    "success": true,
    "message": "Data collection stopped"
}
```

Response:

```json
{
    "success": true,
    "is_collecting": true,
    "counter": 12345
}
```

Query Parameters:
- `path` (optional): Subdirectory path to browse

Response:

```json
{
    "success": true,
    "items": [
        {
            "name": "20240101120000_test_001",
            "type": "directory",
            "path": "20240101120000_test_001"
        },
        {
            "name": "data.csv",
            "type": "file",
            "path": "data.csv",
            "size": 1024
        }
    ],
    "current_path": ""
}
```

Query Parameters:

- `path` (required): File path to download

Response: Direct file download
ProWaveDAQ Device (Modbus RTU)
↓
Data format: [length, X, Y, Z, X, Y, Z, ...]
↓
ProWaveDAQ._read_loop() [Background Thread]
├─→ Read data length (position 0)
├─→ Read complete data (including length)
├─→ Handle cases where length is not multiple of 3 (remaining_data mechanism)
├─→ Ensure only complete samples processed (X, Y, Z combination)
└─→ Data conversion (16-bit integer → float)
↓ (Put into queue)
data_queue (queue.Queue, max 1000 entries)
↓
collection_loop() [Background Thread]
├─→ update_realtime_data()
│ ↓
│ realtime_data_buffer (np.ndarray, fixed 234,360 points)
│ realtime_time_buffer (np.ndarray, fixed 10 time points)
│ ↓
│ Flask /data API (HTTP GET, every 200ms)
│ ↓
│ Frontend Chart.js (templates/index.html)
│
├─→ CSV Writer
│ ├─→ Calculate timestamp based on sample rate
│ ├─→ Ensure timestamp continuity when splitting files
│ └─→ CSV files (split storage, ensuring sample boundaries)
│
└─→ SQL Uploader (if enabled)
├─→ Write to temporary CSV file
│ └─→ .sql_temp/{timestamp}_sql_temp.csv
│
└─→ Scheduled upload thread (every sql_upload_interval seconds)
├─→ Read temporary file
├─→ Batch upload to SQL
├─→ Delete temporary file
└─→ Create new temporary file
↓
MariaDB/MySQL Database
Mechanism: HTTP Polling + Downsampling Queue
- Request Frequency: Every 100 milliseconds (0.1 seconds)
- Data Transmission: JSON format (downsampled incremental data)
- Downsampling Architecture: Downsampling Queue
- Downsampling ratio: 50 (take 1 point every 50 points)
- Original sample rate: 7812 Hz → downsampled to approximately 156 Hz
- Frontend data transmission reduced by approximately 98%
- Frontend plotting burden significantly reduced
- Queue Architecture:
  - `web_data_queue`: Stores downsampled data for the frontend (max 10,000 entries)
  - `csv_data_queue`: Stores raw data for CSV writing (max 1,000 entries)
  - `sql_data_queue`: Stores raw data for SQL upload (max 1,000 entries)
- Display Limit: Frontend displays a maximum of 500 points (approximately 5-10 seconds of downsampled data)
- Incremental Update: The frontend only processes new data, using `push()` and `splice()` to maintain a fixed window size
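The downsampling and fixed-window behaviour above can be sketched in Python with a slice and a bounded `deque`. The names (`DOWNSAMPLE`, `WINDOW`, `push_batch`) are illustrative, not the project's identifiers, and the deque stands in for the frontend's `push()`/`splice()` pair.

```python
from collections import deque

DOWNSAMPLE = 50   # 50:1 downsampling ratio
WINDOW = 500      # fixed display window, in points

window = deque(maxlen=WINDOW)  # oldest points drop off automatically

def push_batch(batch):
    """Keep every 50th point of a raw batch and append it to the window."""
    for point in batch[::DOWNSAMPLE]:
        window.append(point)

push_batch(list(range(7812)))    # one second of raw samples → 157 points kept
push_batch(list(range(30000)))   # a large batch: the window stays capped
print(len(window))               # 500
```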
Frontend Processing:

```javascript
// Execute every 200 ms
setInterval(updateChart, 200);

function updateChart() {
    fetch('/data')
        .then(response => response.json())
        .then(data => {
            // Group data by channel (every 3 values form one XYZ sample)
            // Update the Chart.js chart
            // Limit display to the last 10 seconds of data (78,120 points/channel)
        });
}
```

Trigger Method: Based on data volume
1. Calculate the Target Size:
   `target_size = second × sampleRate × channels`

2. Accumulate a Counter:
   `current_data_size += len(data)`

3. File Splitting Logic:
   - If `current_data_size < target_size`: write directly to the current file
   - If `current_data_size >= target_size`: batch process and create a new file
   - Important: Ensure the split position falls on a sample boundary (a multiple of 3) to avoid channel misalignment

4. Timestamp Calculation:
   - Time interval per sample = 1 / sample_rate seconds
   - Timestamp = global start time + (sample count × sample interval)
   - Timestamp continuity is ensured when splitting files
Trigger Method: Based on a time interval (scheduled upload)

1. Temporary File Mechanism:
   - When data collection starts, a `.sql_temp` temporary directory is created under the output directory
   - The first temporary file is created (filename format: `{timestamp}_sql_temp.csv`)
   - All SQL data is written directly to the current temporary file

2. Scheduled Upload Thread:
   - An independent background thread is started (`sql_upload_timer_loop`)
   - It checks every `sql_upload_interval` seconds
   - When the time is reached:
     - Upload the current temporary file to the SQL server
     - After a successful upload, delete the temporary file
     - Immediately create a new temporary file (avoids data overflow)

3. Data Protection Mechanism:
   - Retry Mechanism: Maximum 3 retries on upload failure, with incremental delay (0.1s, 0.2s, 0.3s)
   - Failure Retention: The temporary file is retained on upload failure, awaiting the next retry
   - Success Confirmation: The temporary file is deleted only after a successful upload
   - Auto Reconnect: Automatically reconnects if the connection is interrupted

4. Batch Insert:
   - Read all data from the temporary CSV file
   - Use `executemany()` for batch insert, improving performance
   - Automatically create the corresponding SQL table (the table name corresponds to the CSV filename)

5. Stop Processing:
   - Upload the current temporary file on stop
   - Check and upload all remaining temporary files (ensures no data loss)
   - Delete all temporary files after successful upload
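The retry policy described above (up to 3 attempts, incremental delay, retain on failure) can be sketched as follows. The uploader callback is a stand-in for the real batch insert (e.g. a `cursor.executemany()` call); `upload_with_retry` and `flaky` are illustrative names, not the project's actual functions.

```python
import time

def upload_with_retry(upload_rows, rows, max_retries=3):
    """Try an upload up to 3 times with incremental delay (0.1s, 0.2s, 0.3s).
    Returns True on success (caller deletes the temp file) or False on total
    failure (caller retains the temp file for the next cycle)."""
    for attempt in range(1, max_retries + 1):
        try:
            upload_rows(rows)        # e.g. cursor.executemany(insert_sql, rows)
            return True
        except Exception:
            if attempt == max_retries:
                return False         # keep the temporary file
            time.sleep(0.1 * attempt)

# Simulated uploader that fails twice, then succeeds.
calls = {"n": 0}
def flaky(rows):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated connection drop")

ok = upload_with_retry(flaky, [(1.0, 2.0, 3.0)])
print(ok)  # True (succeeds on the 3rd attempt)
```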
Temporary File Structure:
output/ProWaveDAQ/{timestamp}_{label}/
├── {timestamp}_{label}_001.csv
├── {timestamp}_{label}_002.csv
└── .sql_temp/ # Temporary file directory
├── 20250106120000_sql_temp.csv
├── 20250106120600_sql_temp.csv
└── ...
Advantages:
- Reduce memory usage: Data directly written to file, doesn't occupy memory buffer
- Data persistence: Even if program abnormally terminates, temporary files remain
- Scheduled upload: Avoid frequent SQL connections, improve performance
- Automatic cleanup: Automatically delete temporary files after successful upload
Symptoms: Unable to read data after startup
Solutions:
- Check if the serial port path is correct (`/dev/ttyUSB0` or another port)
- Confirm the device is properly connected
- Check if the user has serial port access permissions
- Use `ls -l /dev/ttyUSB*` to confirm the device exists
Symptoms: Unable to open webpage in browser
Solutions:
- Confirm firewall allows the port number used (default 8080)
- Check if other programs are using the port
- If using custom port, confirm browser URL uses correct port number
- Confirm Python program is running
- Check system logs for error messages
Symptoms: Chart displays abnormally or data points incorrect
Solutions:
- Check if sample rate in configuration file is correct
- Confirm channel count setting (default 3)
- Check browser console for JavaScript errors
Symptoms: Data collection normal but no CSV files
Solutions:
- Check if the `output/ProWaveDAQ/` directory has write permissions
- Confirm the Label has been entered correctly
- Check if disk space is sufficient
Symptoms: SQL upload function not working properly
Solutions:
- Check if the connection settings in `sql.ini` are correct
- If connecting to the IMSL Lab MariaDB server:
  - Confirm the OpenVPN connection is established (execute `./connection.sh`)
  - Check the OpenVPN connection status: `pgrep -x openvpn`
  - Confirm the VPN connection is working: `ip addr show tun0`
  - If the connection fails, check:
    - Whether the `API/imCloud.ovpn` file exists and is correct
    - Whether the connection information in `API/connection_config.txt` is correct
    - Execute `./connection.sh --setup` to reconfigure the connection information
- General Connection Issues:
  - Confirm the SQL server is reachable (test: `mysql -h <host> -P <port> -u <user> -p`)
  - Check if the database exists
  - Confirm the user has sufficient permissions (CREATE TABLE, INSERT)
  - View terminal error messages
  - Check the network connection (especially when crossing network segments)
Symptoms: Data collection stops mid-way
Solutions:
- Check if Modbus connection is interrupted
- View terminal error messages
- Confirm device is operating normally
- Check if SQL connection is normal (if SQL upload enabled)
Symptoms: System memory usage too high
Solutions:
- Check if the `sql_upload_interval` setting is too large
- The system automatically limits the buffer size (maximum 10,000,000 data points)
- If memory issues persist, reduce the `sql_upload_interval` value
Symptoms: Channel order incorrect in CSV files or charts
Solutions:
- The system handles this issue automatically (using the `remaining_data` mechanism)
- If the problem persists, check:
  - Whether the data format is correct: `[length, X, Y, Z, X, Y, Z, ...]`
  - Check the logs for "Remaining data points" warnings
  - See the 通道錯誤可能性分析.md file (channel misalignment analysis) for detailed information
The system uses loguru as the unified logging system, providing the following log levels:
- `[TRACE]` - Most detailed trace messages (function entry/exit, data processing flow)
- `[DEBUG]` - Debug messages (configuration reading, connection status, performance metrics)
- `[INFO]` - General information messages (system startup/shutdown, important status changes)
- `[WARNING]` - Warning messages
- `[ERROR]` - Error messages
All log messages automatically include timestamp and module label, format:
[YYYY-MM-DD HH:MM:SS] [LEVEL] [Module Name] Message content
Log Labels:
- `[main]` - Main program related logs
- `[prowavedaq]` - DAQ device communication logs
- `[csv_writer]` - CSV writer logs
- `[sql_uploader]` - SQL uploader logs
Log Output:
- Console: INFO, DEBUG, WARNING, and TRACE are output to stdout; ERROR is output to stderr
- File: All levels (TRACE and above) are logged to `logs/loguru.log`, with automatic rotation (10 MB) and retention (7 days)
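A minimal loguru configuration matching the output behaviour described above might look like the following. The exact format string, sink paths, and filter are assumptions; the project's actual setup in `src/main.py` may differ.

```python
import sys
from loguru import logger

logger.remove()  # drop loguru's default stderr sink

# Assumed format: [timestamp] [LEVEL] [module label] message
fmt = "[{time:YYYY-MM-DD HH:mm:ss}] [{level}] [{extra[module]}] {message}"

# Console: non-error levels to stdout, ERROR to stderr
logger.add(sys.stdout, format=fmt, level="TRACE",
           filter=lambda record: record["level"].name != "ERROR")
logger.add(sys.stderr, format=fmt, level="ERROR")

# File: everything from TRACE up, rotated at 10 MB, retained for 7 days
logger.add("logs/loguru.log", format=fmt, level="TRACE",
           rotation="10 MB", retention="7 days")

# Each module binds its own label, e.g. [main] or [csv_writer]
log = logger.bind(module="main")
log.info("System startup")
```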
Note: Flask HTTP request logs are hidden by default, only application log messages are displayed.
To view detailed debug information, you can:
- View the terminal log output
- View the `logs/loguru.log` file for complete log records
- Use `grep "[module_name]" logs/loguru.log` to filter logs for a specific module
| Thread | Function | Notes |
|---|---|---|
| Main Thread | Control flow, wait for user interrupt | Synchronous main control core |
| Flask Thread | Provide HTTP interface and API | daemon=True |
| Collection Thread | Data collection loop (processes CSV and SQL) | Started on /start |
| DAQ Reading Thread | Read data from Modbus device | Started on start_reading(), executes _read_loop() |
Public API (for external use):
- `scan_devices()` - Scan available Modbus devices
- `init_devices(filename)` - Initialize the device from an INI file and establish a connection
- `start_reading()` - Start data reading (background thread)
- `stop_reading()` - Stop data reading and close the connection
- `get_data()` - Non-blocking get of the latest batch of data
- `get_data_blocking(timeout)` - Blocking get of the latest batch of data
- `get_counter()` - Get the read batch count
- `get_sample_rate()` - Get the sample rate
Internal Methods (for module internal use):
- `_connect()` - Establish the Modbus RTU connection
- `_disconnect()` - Close the Modbus connection
- `_ensure_connected()` - Ensure the connection exists (auto reconnect)
- `_read_chip_id()` - Read the chip ID (during initialization)
- `_set_sample_rate()` - Set the sample rate (during initialization)
- `_read_registers_with_header()` - Read registers (including the Header)
- `_read_normal_data()` - Normal Mode read (Address 0x02)
- `_read_bulk_data()` - Bulk Mode read (Address 0x15)
- `_get_buffer_status()` - Read the buffer status
- `_convert_raw_to_float_samples()` - Convert to float (ensures no XYZ misalignment)
- `_read_loop()` - Main read loop (background thread)
Read Modes:
- Normal Mode: Used when buffer data volume ≤ 123, read from Address 0x02
- Bulk Mode: Used when buffer data volume > 123, read from Address 0x15, maximum 9 samples
- Automatically switch modes based on buffer status, optimize read efficiency
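The automatic mode switch can be sketched as a simple threshold check. The 123-sample limit and the register addresses come from the description above; the function itself (`choose_read_mode`) is an illustrative assumption, not the module's actual code.

```python
NORMAL_MODE_LIMIT = 123  # Normal Mode when the buffer holds ≤ 123 samples

def choose_read_mode(buffered_samples: int):
    """Pick the read mode and register address for the current buffer level."""
    if buffered_samples <= NORMAL_MODE_LIMIT:
        return ("normal", 0x02)   # Normal Mode: read from Address 0x02
    return ("bulk", 0x15)         # Bulk Mode: Address 0x15, up to 9 samples/read

print(choose_read_mode(100))  # ('normal', 2)
print(choose_read_mode(500))  # ('bulk', 21)
```

Checking the buffer level before each read lets the driver drain a backlog quickly in Bulk Mode while keeping latency low in Normal Mode.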
Design Principles:
- Each read processes only complete XYZ three-axis groups, avoiding channel misalignment
- The FIFO buffer size (0x02) is read together with the data, ensuring consistency
- An auto-reconnect mechanism ensures connection stability
- Modular design for easier future expansion and maintenance
- Does not use `asyncio` or WebSocket
- Does not use file-based data exchange
- All data transfer is completed in memory
- Uses Python variables or global state to save data
- SQL upload uses an HTTP connection and does not support WebSocket
1. Real-time Data Buffer:
   - Maximum 234,360 data points retained (approximately 1.87 MB, 10 seconds of data)
   - Calculation: 7812 Hz × 3 channels × 10 seconds = 234,360 data points
   - Only updated when an active frontend connection exists
   - Frontend display limit: last 10 seconds of data (78,120 data points per channel)

2. SQL Data Buffer:
   - Upper limit: `min(sql_target_size × 2, 10,000,000)` data points
   - Upload is forced when the limit is exceeded

3. DAQ Data Queue:
   - Maximum 1000 entries (approximately 123 points per entry, approximately 1 MB)
To extend system functionality, you can:
- Modify the Frontend Interface: edit the `src/templates/index.html` and `src/templates/config.html` templates
- Adjust Chart Settings: modify the Chart.js configuration options in `src/templates/index.html`
- Add API Routes: add route handler functions in `src/main.py`
- Customize the CSV Format: modify the write logic in `src/csv_writer.py`
- Customize the SQL Format: modify the table structure and insert logic in `src/sql_uploader.py`
- `src/prowavedaq.py`: Responsible for Modbus RTU communication and data reading
  - Handles the data format: `[length, X, Y, Z, X, Y, Z, ...]`
  - Automatically handles cases where the length is not a multiple of 3 (using the `remaining_data` mechanism)
  - Multi-thread safety protection (using a lock mechanism)
  - Data integrity checks
- `src/csv_writer.py`: Responsible for CSV file creation and writing
  - Automatically calculates timestamps based on the sample rate
  - Ensures timestamp continuity when splitting files
  - Channel labels: Channel_1(X), Channel_2(Y), Channel_3(Z)
- `src/sql_uploader.py`: Responsible for the SQL database connection and data upload
  - Supports MySQL/MariaDB
  - Retry mechanism and data protection
- `src/main.py`: Integrates all functionality and provides the Web interface (using Flask + templates)
  - Includes the loguru logging system configuration
  - Unified log format and level management
  - File splitting logic ensures sample boundaries (multiples of 3)
  - SQL upload ensures sample boundaries
  - Smart buffer management
- `src/templates/index.html`: Main page HTML template (includes the Chart.js chart and SQL settings)
- `src/templates/config.html`: Configuration management page template (fixed input fields)
- `src/templates/files.html`: File browser page template
1. Data Protection:
   - Data is retained in the buffer when an SQL upload fails
   - Data is removed from the buffer only after a successful upload
   - Maximum 3 retries, with incremental delay

2. Memory Protection:
   - The SQL buffer has an upper limit, preventing memory overflow
   - Upload is forced when the limit is exceeded

3. Configuration File Management:
   - Fixed input fields prevent users from accidentally deleting parameters
   - SQL settings live in a separate `sql.ini` file

4. Smart Buffer:
   - The real-time data buffer is only updated when an active frontend connection exists
   - Saves CPU and memory resources
This project is for internal use, please follow relevant usage regulations.
For questions or suggestions, please contact the project maintainer.
Last Updated: January 6, 2026
Author: Albert Wang
Current Version: 9.0.0
For detailed version update records, please refer to CHANGELOG_EN.md or CHANGELOG.md