Multi-language Version / 多語言版本
中文版本 (程式運作說明.md) | English Version (Technical_Manual_EN.md)
- Project Overview
- System Architecture
- Core Module Detailed Description
- Data Flow
- Web Interface and API
- Thread Architecture
- Configuration File Description
- File Structure
- Code Detailed Analysis
- Operation Flow
The ProWaveDAQ Real-time Data Visualization System is a Python-based vibration data acquisition and visualization platform with the following main functions:
- Acquire Vibration Data from ProWaveDAQ Device: Read three-channel vibration data from hardware device via Modbus RTU protocol
- Real-time Data Visualization: Display continuous vibration curve graphs in real-time in browser
- Automatic CSV Storage: Automatically split and store data files based on configured time intervals
- Web Interface Control: Provide complete browser-based operation interface, no terminal operation required
- Backend: Python 3.10+
- Web Framework: Flask 3.1.2+
- Communication Protocol: Modbus RTU (via pymodbus 3.11.3+)
- Serial Port Communication: pyserial 3.5+
- Frontend Visualization: Chart.js 3.9.1
- Data Storage: CSV format
- Logging System: loguru 0.7.3+
┌─────────────────────────────────────────────────────────────┐
│ Web Browser (Frontend) │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ index.html: Real-time charts, control buttons, status│ │
│ │ config.html: Configuration file editing interface │ │
│ └───────────────────────────────────────────────────────┘ │
└───────────────────────┬─────────────────────────────────────┘
│ HTTP/JSON
▼
┌─────────────────────────────────────────────────────────────┐
│ Flask Web Server (main.py) │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Flask Thread: Handle HTTP requests │ │
│ │ - /: Main page │ │
│ │ - /data: Return real-time data │ │
│ │ - /start: Start data collection │ │
│ │ - /stop: Stop data collection │ │
│ │ - /config: Configuration file management │ │
│ └───────────────────────────────────────────────────────┘ │
└───────────────────────┬─────────────────────────────────────┘
│
┌───────────────┼───────────────┐
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Collection │ │ Real-time │ │ CSV Writer │
│ Thread │ │ Data Buffer │ │ Thread │
│ (Data │ │ (Memory │ │ (File Write) │
│ Collection │ │ Variables) │ │ │
│ Loop) │ │ │ │ │
└───────┬───────┘ └───────────────┘ └───────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ ProWaveDAQ Class (prowavedaq.py) │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Reading Thread: Modbus RTU Read Loop │ │
│ │ - Read device data │ │
│ │ - Data conversion (16-bit → float) │ │
│ │ - Put into data queue (queue.Queue) │ │
│ └───────────────────────────────────────────────────────┘ │
└───────────────────────┬─────────────────────────────────────┘
│ Modbus RTU
▼
┌─────────────────────────────────────────────────────────────┐
│ ProWaveDAQ Hardware Device │
│ - Serial Port: /dev/ttyUSB0 │
│ - Baud Rate: 3000000 │
│ - Sample Rate: 7812 Hz │
│ - Slave ID: 1 │
└─────────────────────────────────────────────────────────────┘
- main.py (Main Control Program)
  - Integrates all modules
  - Provides the Flask web service
  - Manages threads and global state
- prowavedaq.py (Hardware Communication Module)
  - Handles Modbus RTU communication
  - Reads device data
  - Data conversion and queue management
- csv_writer.py (Data Storage Module)
  - CSV file creation and writing
  - Automatic file-splitting logic
- templates/ (Frontend Interface)
  - HTML templates and JavaScript
  - Chart.js chart display
class ProWaveDAQ:
- client: ModbusSerialClient # Modbus connection object
- serial_port: str # Serial port path
- baud_rate: int # Baud rate
- sample_rate: int # Sample rate (Hz)
- slave_id: int # Modbus slave ID
- reading: bool # Reading status flag
- reading_thread: Thread # Reading thread
- data_queue: queue.Queue # Data queue (max 1000 entries)
- counter: int # Read counter

Function: Initialize device from INI configuration file and establish Modbus connection
Operation Flow:
1. Read the `ProWaveDAQ.ini` configuration file
   - `serialPort`: Serial port path (default `/dev/ttyUSB0`)
   - `baudRate`: Baud rate (default `3000000`)
   - `sampleRate`: Sample rate (default `7812` Hz)
   - `slaveID`: Slave ID (default `1`)
2. Establish the Modbus RTU connection
   - Use `ModbusSerialClient` to create the serial port connection
   - Set parameters: `parity='N'`, `stopbits=1`, `bytesize=8`, `framer="rtu"`
   - Connection timeout set to 1 second
3. Read the chip ID (verify the connection)
   - Read 3 input registers at address `0x80`
   - Display the chip ID for verification
4. Set the sample rate
   - Write to address `0x01` to set the device sample rate
Error Handling:
- Output error message and return on connection failure
- Use default values on INI file parsing error
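Putting the flow above together, a minimal initialization sketch might look as follows. The standalone helper `init_from_ini` is illustrative (the real code keeps this state on the `ProWaveDAQ` class), and the keyword for the Modbus unit id has changed across pymodbus 3.x releases, so treat that argument as an assumption:

```python
import configparser

from pymodbus.client import ModbusSerialClient


def init_from_ini(ini_path="API/ProWaveDAQ.ini"):
    """Sketch of the initialization flow: read the INI, connect, verify."""
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)

    port = cfg.get("ProWaveDAQ", "serialPort", fallback="/dev/ttyUSB0")
    baud = cfg.getint("ProWaveDAQ", "baudRate", fallback=3000000)
    slave_id = cfg.getint("ProWaveDAQ", "slaveID", fallback=1)

    client = ModbusSerialClient(
        port=port, baudrate=baud,
        parity="N", stopbits=1, bytesize=8, timeout=1,
    )
    if not client.connect():
        print(f"Connection failed on {port}")
        return None

    # Verify the link by reading the 3 chip-ID input registers at 0x80.
    # NOTE: the unit-id keyword differs across pymodbus 3.x releases
    # ("slave=" in earlier 3.x, "device_id=" in recent ones).
    result = client.read_input_registers(address=0x80, count=3, slave=slave_id)
    if not result.isError():
        print("Chip ID:", result.registers)
    return client
```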
Function: Start background thread to begin reading data
Operation Flow:
1. Check if already reading (avoid duplicate startup)
2. Set `reading = True`
3. Create and start a background thread executing `_read_loop()`
4. The thread is set to `daemon=True` (it terminates automatically when the main program ends)
Function: Main data reading loop (executes in an independent thread)
Operation Flow (follows manufacturer manual Page 5 specifications):
1. Read FIFO Buffer Size
   - Read 1 input register from address `0x02` to get the FIFO buffer size `buffer_size`
   - If the buffer is empty (`buffer_size <= 0`), wait 2 ms then continue
2. Calculate Read Length
   - Limit the maximum single read: `read_count = min(buffer_size, MAX_READ_WORDS)` (123 words)
   - Ensure complete X, Y, Z groups: `read_count = (read_count // CHANNELS) * CHANNELS`
   - If `read_count == 0`, skip this read
3. Execute Read (FC04, start address `0x02`)
   - Read `read_count + 1` words from `0x02` (including the header)
   - Packet structure: `[Header(1 word), Data(N words)]`
   - Use the `_read_data_packet()` method to read the complete packet
4. Parse Packet
   - `raw_packet[0]`: header (FIFO buffer size at read time, i.e. the remaining size)
   - `raw_packet[1:]`: actual vibration data (data from `0x03` onwards)
5. Data Conversion
   - Convert 16-bit unsigned integers to signed integers
   - Conversion formula: `signed = v if v < 32768 else v - 65536`
   - Normalize by dividing by 8192.0: `out.append(signed / 8192.0)`
6. Put into Queue
   - Use `queue.put_nowait(data)` to enqueue the data
   - If the queue is full (5000 entries), remove the oldest data (FIFO)
7. Error Handling
   - Output an error message and continue on read failure
   - A connection error causes the read to fail but does not auto-reconnect (re-initialization is required)

Design Rationale:
- Follows the manufacturer manual Page 5 specifications, ensuring communication correctness
- Reads the complete packet in one transaction, reducing the read count and improving performance
- Ensures complete X, Y, Z reads (multiples of 3), avoiding channel misalignment
- Uses queue buffering to avoid data loss

Data Format:
- Input: 16-bit unsigned integer array (read from Modbus registers)
- Output: float array (normalized, range approximately -4.0 to 4.0)
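Condensed into code, one iteration of the loop above might look like this. This is a sketch, not the actual `_read_loop()`; error handling is omitted, and the same pymodbus unit-id caveat as in the earlier sketch applies:

```python
import queue
import time

MAX_READ_WORDS = 123   # maximum words per Modbus read (from the flow above)
CHANNELS = 3


def read_once(client, data_queue, slave_id=1):
    """One iteration of the read loop (sketch; error handling omitted)."""
    # 1. FIFO buffer size from register 0x02
    head = client.read_input_registers(address=0x02, count=1, slave=slave_id)
    buffer_size = head.registers[0]
    if buffer_size <= 0:
        time.sleep(0.002)            # buffer empty: wait 2 ms
        return

    # 2. Clamp and round down to a whole number of X/Y/Z triplets
    read_count = min(buffer_size, MAX_READ_WORDS)
    read_count = (read_count // CHANNELS) * CHANNELS
    if read_count == 0:
        return

    # 3. Read header + data in one transaction (FC04, start 0x02)
    packet = client.read_input_registers(address=0x02, count=read_count + 1,
                                         slave=slave_id)
    raw = packet.registers[1:]       # packet.registers[0] is the header

    # 4. Convert 16-bit unsigned -> signed -> normalized float
    out = [(v if v < 32768 else v - 65536) / 8192.0 for v in raw]

    # 5. Enqueue, dropping the oldest block if the queue is full
    try:
        data_queue.put_nowait(out)
    except queue.Full:
        data_queue.get_nowait()
        data_queue.put_nowait(out)
```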
Function: Non-blocking data retrieval (get from the queue)
Operation Method:
- Use `queue.get_nowait()` to get data without blocking
- If the queue is empty, return an empty array `[]`
- Avoids blocking the main thread
Function: Stop reading and clean up resources
Operation Flow:
1. Set `reading = False` (stop the reading loop)
2. Wait for the reading thread to end (`join()`)
3. Clear the data queue
4. Close the Modbus connection

Function: Re-establish the Modbus connection
Operation Flow:
1. Close the old connection
2. Recreate the `ModbusSerialClient`
3. Attempt the connection
4. Set the slave ID
5. Return the connection success/failure status
class CSVWriter:
- channels: int # Number of channels (fixed at 3)
- output_dir: str # Output directory path
- label: str # Data label
- file_counter: int # File counter
- current_file: file # Currently open file object
- writer: csv.writer # CSV writer object

Function: Initialize CSV writer
Operation Flow:
1. Store the parameters (channel count, output directory, label)
2. Initialize the file counter to 1
3. Create the output directory (if it doesn't exist)
4. Create the first CSV file
Function: Create new CSV file
Operation Flow:
1. Generate the filename: `{timestamp}_{label}_{file_counter:03d}.csv`
   - Timestamp format: `YYYYMMDDHHMMSS`
   - File counter: 3 digits, starting from 001
2. Open the file (UTF-8 encoding)
3. Create the CSV writer
4. Write the header row: `['Timestamp', 'Channel_1', 'Channel_2', 'Channel_3']`
5. Immediately flush to disk (`flush()`)

File Naming Examples:
- `20250106120000_test_001.csv`
- `20250106120000_test_002.csv`
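The naming scheme maps directly onto `datetime.strftime` and an f-string; `make_csv_path` below is an illustrative helper, not the module's actual API:

```python
import os
from datetime import datetime


def make_csv_path(output_dir, label, file_counter):
    """Build a CSV path following the naming scheme described above."""
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")   # YYYYMMDDHHMMSS
    filename = f"{timestamp}_{label}_{file_counter:03d}.csv"
    return os.path.join(output_dir, filename)


# e.g. make_csv_path("output/ProWaveDAQ/20250106120000_test", "test", 1)
# -> "output/ProWaveDAQ/20250106120000_test/20250106120000_test_001.csv"
```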
Function: Write data block to CSV file
Operation Flow:
1. Check whether the data is empty
2. Get the current timestamp (ISO format)
3. Write by channel groups:
   - Data format: `[ch1_val, ch2_val, ch3_val, ch1_val, ch2_val, ch3_val, ...]`
   - Every 3 data points form one group (corresponding to the 3 channels)
   - Write format: `[timestamp, channel_1_value, channel_2_value, channel_3_value]`
4. If the data length is not a multiple of 3, pad the missing channels with 0.0
5. Immediately flush to disk (`flush()`)
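A minimal sketch of this write path (the real logic lives in `CSVWriter.add_data_block`; `write_block` here is a hypothetical free function):

```python
import csv
from datetime import datetime


def write_block(writer, f, data, channels=3):
    """Write one interleaved block as CSV rows, padding a short final group."""
    if not data:
        return
    timestamp = datetime.now().isoformat()          # one ISO timestamp per block
    for i in range(0, len(data), channels):
        group = list(data[i:i + channels])
        group += [0.0] * (channels - len(group))    # pad missing channels with 0.0
        writer.writerow([timestamp] + group)
    f.flush()                                       # flush to disk immediately


with open("demo.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Timestamp", "Channel_1", "Channel_2", "Channel_3"])
    write_block(writer, f, [0.123, 0.456, 0.789, 0.234, 0.567, 0.890])
```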
Data Write Example:
Timestamp,Channel_1,Channel_2,Channel_3
2025-01-06T12:00:00.123456,0.123,0.456,0.789
2025-01-06T12:00:00.123489,0.234,0.567,0.890

Function: Create new file (file splitting)
Operation Flow:
1. Close the current file
2. Increment the file counter
3. Call `_create_new_file()` to create the new file

Function: Close CSV file
Operation Flow:
1. Close the file object
2. Clear the file and writer references
app: Flask # Flask application instance
web_data_queue: queue.Queue # Web display dedicated queue (downsampled data)
WEB_DOWNSAMPLE_RATIO: int = 50 # Downsampling ratio (take 1 point every 50 points)
csv_data_queue: queue.Queue # CSV data queue (raw data)
sql_data_queue: queue.Queue # SQL data queue (raw data)
data_lock: threading.Lock # Data access lock
is_collecting: bool # Data collection status flag
collection_thread: Thread # Data collection thread
daq_instance: ProWaveDAQ # DAQ instance
csv_writer_instance: CSVWriter # CSV writer instance
data_counter: int # Data point counter
target_size: int # Target data points per CSV file
current_data_size: int # Current file written data points

Function: Update real-time data (downsampling for Web display)
Core Architecture:
- Downsampling queue: `web_data_queue` (max 10,000 entries)
- Downsampling ratio: 50 (keep 1 point out of every 50)
- Original sample rate: 7812 Hz → downsampled to approximately 156 Hz
- Frontend data transmission reduced by approximately 98%

Operation Flow:
1. Prevent Queue Overflow
   - If `web_data_queue` is full, discard 10 old entries
   - Protects memory, avoiding overflow when the frontend freezes
2. Downsampling Processing
   - Data format: `[X1, Y1, Z1, X2, Y2, Z2, ...]` (interleaved)
   - Calculate the step: `step = channels * WEB_DOWNSAMPLE_RATIO = 3 * 50 = 150`
   - Use step slicing: `for i in range(0, len(data), step)`
   - Ensure each extraction is a complete `[X, Y, Z]` group
   - Extract the data segments into the downsampled array
3. Put into Web Queue
   - If the downsampled data is not empty, put it into `web_data_queue`
   - Use `data_lock` to ensure thread safety
4. Update Counter
   - Update `data_counter` (total data points, for status display)

Design Rationale:
- Downsampling: significantly reduces frontend data transmission and plotting burden
- Queue mechanism: a queue instead of a large buffer gives more stable memory usage
- Raw data retention: CSV and SQL still use raw data (no downsampling), ensuring data integrity
- Automatic cleanup: old data is discarded automatically when the queue is full, preventing memory overflow
Function: Frontend polling API, returns downsampled incremental data
Return Format:
{
"success": True,
"data": [0.123, 0.456, 0.789, ...], # Downsampled incremental data (new data since last request)
"counter": 234360, # Total data points (raw data)
"sample_rate": 7812, # Sample rate
"is_collecting": True, # Collection status
"start_time": "2025-12-22T12:00:00" # Start time (if started)
}

Operation Flow:
1. Acquire the data lock (`data_lock`)
2. Get all accumulated data from `web_data_queue` (non-blocking)
3. Merge all the data into one array
4. Return a JSON response including:
   - `data`: downsampled incremental data (the frontend pushes it straight into the chart)
   - `counter`: total data points (for status display)
   - `is_collecting`: collection status (the frontend uses this flag to decide whether to stop)
   - `start_time`: start time (if collection has started)

Design Rationale:
- Incremental update: only new data is returned, reducing network transmission
- Downsampled data: the frontend receives downsampled data, greatly reducing plotting burden
- Status synchronization: returning `is_collecting` lets the frontend sync automatically
- Thread safety: a lock ensures data consistency
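A condensed sketch of this endpoint. Module-level names mirror the globals listed earlier; the `sample_rate`/`start_time` bookkeeping is omitted for brevity:

```python
import queue
import threading

from flask import Flask, jsonify

app = Flask(__name__)
web_data_queue = queue.Queue(maxsize=10000)
data_lock = threading.Lock()
data_counter = 0
is_collecting = False


@app.route("/data")
def data():
    merged = []
    with data_lock:
        # Drain everything accumulated since the last poll (non-blocking)
        while True:
            try:
                merged.extend(web_data_queue.get_nowait())
            except queue.Empty:
                break
    return jsonify(success=True, data=merged,
                   counter=data_counter, is_collecting=is_collecting)
```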
Function: Display main page (includes configuration form, Label input, start/stop buttons, real-time chart)
Response: Renders templates/index.html template
Function: Return current latest data to frontend (JSON format), and track active connection status
Response Format:
{
"success": true,
"data": [0.123, 0.456, 0.789, ...],
"counter": 12345
}

Operation Flow:
1. Update Request Time
   - Update `last_data_request_time` to the current time
   - Indicates an active frontend connection (for smart buffer updates)
2. Get a copy of the real-time data
3. Get the data point counter
4. Return the JSON response

Design Rationale:
- Tracks frontend connection status to optimize resource usage
- When no frontend is connected, the buffer is not updated, saving CPU and memory
Function: Check data collection status (for frontend status restoration)
Response Format:
{
"success": true,
"is_collecting": true,
"counter": 12345
}

Operation Flow:
1. Get the `is_collecting` status
2. Get the data point counter
3. Return the JSON response
Use Cases:
- Frontend page load checks backend status
- If backend is collecting data, frontend automatically restores status and starts updating chart
GET Request:
- Read `API/ProWaveDAQ.ini`, `API/csv.ini` and `API/sql.ini`
- Render `templates/config.html`, displaying the configuration file contents for editing

POST Request:
- Receive the form data (contents of the three configuration files)
- Write to `API/ProWaveDAQ.ini`, `API/csv.ini` and `API/sql.ini`
- Return a success/failure JSON response

Error Handling:
- Use default content on file read failure
- Return an error message on write failure
Function: Display file browser page
Response: Renders templates/files.html template
Function: List files and folders in output/ProWaveDAQ/ directory
Query Parameters:
- `path` (optional): Subdirectory path to browse
Response Format:
{
"success": true,
"items": [
{
"name": "20240101120000_test_001",
"type": "directory",
"path": "20240101120000_test_001"
},
{
"name": "data.csv",
"type": "file",
"path": "data.csv",
"size": 1024
}
],
"current_path": ""
}

Operation Flow:
1. Get the `path` parameter (if provided)
2. Security check: ensure the path is within the `output/ProWaveDAQ/` directory
3. List the directory contents
4. Distinguish folders and files
5. Return the JSON response

Security Mechanism:
- Path normalization check, preventing directory traversal attacks
- Only allows access to files under the `output/ProWaveDAQ/` directory
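One common way to implement the normalization check (the actual helper in `main.py` may differ; `resolve_safe_path` is illustrative):

```python
import os

BASE_DIR = os.path.realpath("output/ProWaveDAQ")


def resolve_safe_path(user_path):
    """Resolve a user-supplied path and reject directory traversal."""
    candidate = os.path.realpath(os.path.join(BASE_DIR, user_path))
    # realpath() collapses "..", so a traversal attempt escapes BASE_DIR here
    if os.path.commonpath([candidate, BASE_DIR]) != BASE_DIR:
        return None
    return candidate


# resolve_safe_path("run1/data.csv") -> ".../output/ProWaveDAQ/run1/data.csv"
# resolve_safe_path("../../etc/passwd") -> None
```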
Function: Download specified CSV file
Query Parameters:
- `path` (required): File path to download

Operation Flow:
1. Get the `path` parameter
2. Security check: ensure the path is within the `output/ProWaveDAQ/` directory
3. Verify the target exists and is a file (not a directory)
4. Use `send_from_directory()` to send the file

Security Mechanism:
- Path normalization check, preventing directory traversal attacks
- Only allows download of files under the `output/ProWaveDAQ/` directory
Function: Start data collection, CSV writing and real-time display
Request Format:
{
"label": "test_001"
}

Operation Flow:
1. Check Status
   - If already collecting, return an error
2. Validate Label
   - If the Label is empty, return an error
3. Reset State
   - Clear the real-time data buffer
   - Reset the data point counter
   - Reset the current data size
   - Reset request time tracking (`last_data_request_time = 0`)
4. Load Configuration Files
   - Read `API/csv.ini` to get `DumpUnit.second` (CSV file-splitting time interval, default 5 seconds)
   - Read `API/sql.ini` to get `DumpUnit.second` (SQL upload interval, default 5 seconds)
5. Initialize DAQ
   - Create a `ProWaveDAQ` instance
   - Initialize the device from `API/ProWaveDAQ.ini`
   - Get the sample rate (default 7812 Hz)
   - Channel count is fixed at 3
6. Calculate Target Size
   - `target_size = second × sample_rate × channels`
   - Example: 5 seconds × 7812 Hz × 3 channels = 117,180 data points
7. Create Output Directory
   - Path: `output/ProWaveDAQ/{timestamp}_{label}/`
   - Timestamp format: `YYYYMMDDHHMMSS`
8. Initialize CSV Writer
   - Create a `CSVWriter` instance
9. Start Data Collection Thread
   - Set `is_collecting = True`
   - Create and start the `collection_loop` thread (`daemon=True`)
10. Start DAQ Reading
    - Call `daq_instance.start_reading()`
11. Return Success Response
    - Includes the sample rate and file-splitting interval information

Response Format:
{
  "success": true,
  "message": "Data collection started (Sample rate: 7812 Hz, File split interval: 5 seconds)"
}

Function: Stop all threads and safely close
Operation Flow:
1. Check whether collecting (if not, return an error)
2. Set `is_collecting = False` (stop the collection loop)
3. Stop DAQ reading (`daq_instance.stop_reading()`)
4. Close the CSV writer (`csv_writer_instance.close()`)
5. Return a success response
Function: Executes in independent thread, continuously processes data and distributes to real-time display and CSV storage
Operation Flow:
1. Main Loop (`while is_collecting`):
   a. Get Data from DAQ
      - Call `daq_instance.get_data()` (non-blocking)
      - If there is no data, an empty array is returned
   b. Continuously Process All Data in the Queue (`while data and len(data) > 0:`)
      i. Update Real-time Display
         - Call `update_realtime_data(data)`; the data appears in the frontend chart
      ii. Write to CSV (File-Splitting Logic)
         - Accumulate the data size: `current_data_size += len(data)`
         - If `current_data_size < target_size`: the data has not reached the splitting threshold, so write directly to the current file with `csv_writer_instance.add_data_block(data)`
         - If `current_data_size >= target_size`: file splitting is required
           - Calculate the remaining space: `empty_space = target_size - (current_data_size - len(data))`
           - Batch processing (`while current_data_size >= target_size`):
             - Extract one complete batch: `batch = data[:empty_space]`
             - Write it to the current file
             - Update the filename (create a new file): `csv_writer_instance.update_filename()`
             - Reduce the accumulated size: `current_data_size -= target_size`
           - Process the remaining data:
             - If data remains (`pending = len(data) - empty_space`), write it to the new file with `csv_writer_instance.add_data_block(remaining_data)` and set `current_data_size = pending`
             - Otherwise set `current_data_size = 0`
      iii. Continue Getting the Next Data from the Queue
         - `data = daq_instance.get_data()`
   c. Brief Rest
      - `time.sleep(0.01)` (10 ms), avoiding CPU overload
2. Error Handling:
   - Catch all exceptions and output error messages
   - Wait 0.1 seconds, then continue on error
File Splitting Logic Example:
Assume target_size = 117180 (5 seconds × 7812 Hz × 3 channels)
- Case 1: `current_data_size = 100000`, new data `len(data) = 10000`
  - Total: 110000 < 117180
  - Write directly, `current_data_size = 110000`
- Case 2: `current_data_size = 110000`, new data `len(data) = 20000`
  - Total: 130000 ≥ 117180
  - First batch: `empty_space = 117180 - 110000 = 7180`
  - Write 7180 data points, update the file, `current_data_size = 0`
  - Remaining: `20000 - 7180 = 12820`
  - Write the remaining data to the new file, `current_data_size = 12820`
Function: Execute Flask server in independent thread
Settings:
- Host: `0.0.0.0` (listen on all network interfaces)
- Port: `8080`
- Debug mode: `False`
- Reloader: `False` (avoids conflicts with threads)
Function: Program entry point
Operation Flow:
1. Output the startup message
2. Start the Flask server in a background thread (`daemon=True`)
3. The main thread enters an infinite loop (`while True: time.sleep(1)`)
4. Wait for the user to press Ctrl+C to interrupt
5. Clean up resources:
   - If collecting, stop the collection
   - Stop the DAQ
   - Close the CSV writer
   - Output the shutdown message
ProWaveDAQ Device
│
│ Modbus RTU Communication
│ (Serial Port: /dev/ttyUSB0, Baud Rate: 3000000)
▼
prowavedaq.py::ProWaveDAQ
│
│ _read_loop() Thread
│ - Read data length at address 0x02
│ - Read data registers based on length
│ - 16-bit integer → float conversion (÷8192.0)
│
▼
queue.Queue (max 1000 entries)
│
│ get_data() non-blocking get
▼
main.py::collection_loop() Thread
│
├─→ update_realtime_data()
│ │
│ ▼
│ realtime_data (List[float], max 10000 points)
│ │
│ ▼
│ Flask /data API
│ │
│ ▼
│ Frontend Chart.js (updates every 200ms)
│
├─→ csv_writer.add_data_block()
│ │
│ ▼
│ File Splitting Logic Judgment
│ │
│ ├─→ current_data_size < target_size
│ │ └─→ Directly write to current file
│ │
│ └─→ current_data_size >= target_size
│ ├─→ Write complete batch
│ ├─→ update_filename() (create new file)
│ └─→ Process remaining data
│ │
│ ▼
│ CSV Files
│ output/ProWaveDAQ/{timestamp}_{label}/{timestamp}_{label}_{001-999}.csv
│
└─→ SQL Uploader (if enabled)
│
▼
Write to Temporary CSV File
│
▼
.sql_temp/{timestamp}_sql_temp.csv
│
▼
Scheduled Upload Thread (every sql_upload_interval seconds)
│
├─→ Read temporary file
├─→ Batch upload (executemany)
├─→ Retry mechanism (max 3 times)
├─→ Failure retention (temporary file not deleted)
├─→ Delete temporary file after success
└─→ Create new temporary file
│
▼
MariaDB/MySQL Database
vibration_data table (dynamically created, table name corresponds to CSV filename)
1. Hardware → ProWaveDAQ
   - Input: 16-bit unsigned integer (0-65535)
   - Conversion: `value = value if value < 32768 else value - 65536`
   - Normalization: `float_value = value / 8192.0`
   - Output: float array
2. ProWaveDAQ → Real-time Display
   - Format: `[ch1, ch2, ch3, ch1, ch2, ch3, ...]`
   - The frontend separates channels in groups of 3
3. ProWaveDAQ → CSV
   - Format: `[ch1, ch2, ch3, ch1, ch2, ch3, ...]`
   - The CSV writer writes one row per group of 3
   - CSV format: `Timestamp,Channel_1,Channel_2,Channel_3`
Data Per Second:
- Sample rate: 7812 Hz
- Channel count: 3
- Data Points Per Second: 7812 × 3 = 23,436 data points
Data Per CSV File (default 5 seconds):
- Data Points: 7812 × 3 × 5 = 117,180 data points
- File Size (estimate): Approximately 3-5 MB (depends on timestamp length)
Memory Usage:
- Real-time data buffer: Fixed 234,360 data points (approximately 1.87 MB, using NumPy Array)
- Time buffer: Fixed 10 time points (approximately 80 bytes)
- DAQ data queue: Maximum 5000 entries (approximately 123 points per entry, approximately 5 MB)
- CSV data queue: Maximum 1000 entries (approximately 123 points per entry, approximately 1 MB)
- SQL data queue: Maximum 1000 entries (approximately 123 points per entry, approximately 1 MB)
Functions:
- Display real-time data curve graph
- Provide Label input field
- Start/stop buttons
- Status display (data point count, collection status)
JavaScript Functions:
1. Chart.js Initialization
   - Create 3 datasets (channels 1, 2, 3)
   - Set as a line chart with no animation (`duration: 0`)
   - Y-axis does not start from zero (`beginAtZero: false`)
   - X-axis labels are not displayed
2. updateChart() - Update Chart (Version 7.0.0 Incremental Update)
   - Called every 100 ms (`setInterval(updateChart, 100)`)
   - Gets downsampled incremental data from the `/data` API
   - Checks the backend `is_collecting` status and automatically syncs the frontend UI
   - Groups data by channel (every 3 values form one group)
   - Uses `push()` to add new data on the right side of the chart
   - Uses `splice()` to remove old data on the left side, maintaining a fixed window size (500 points)
   - Animation disabled: `animation: false` (reduces flicker and improves real-time performance)
   - Interaction hints disabled: `interaction.mode: 'none'` (reduces CPU consumption)
   - No data points displayed: `pointRadius: 0` (improves plotting performance)
3. startCollection() - Start Collection
   - Validates that a Label has been entered
   - Sends a POST request to `/start`
   - Starts the chart update timer
   - Updates the UI status (disables the start button, enables the stop button)
4. stopCollection() - Stop Collection
   - Sends a POST request to `/stop`
   - Stops the chart update timer
   - Updates the UI status
Functions:
- Display content of configuration files (text areas)
- Provide save button
JavaScript Functions:
- `saveConfig()` - Sends a POST request to save the configuration files
Functions:
- Display the file and folder list in the `output/ProWaveDAQ/` directory
- Support folder navigation
- Support file download
JavaScript Functions:
1. loadFiles(path) - Load File List
   - Calls the `/files` API to get the directory contents
   - Displays the file and folder list
2. displayFiles(items, path) - Display File List
   - Distinguishes folders (📁) and files (📄)
   - Displays the file size (auto-formatted)
   - Provides an "Enter" button for folders
   - Provides a "Download" button for files
3. updateBreadcrumb(path) - Update Breadcrumb Navigation
   - Displays the current path
   - Supports clicking to return to a parent directory
4. navigateTo(path) - Navigate to Specified Path
   - Loads the contents of the specified directory
5. downloadFile(path) - Download File
   - Opens the `/download` API to download the file
File Size Formatting:
- Automatically convert bytes to B, KB, MB, GB
New Functions:
- checkAndRestoreStatus() - Check and Restore Status
  - Calls the `/status` API on page load
  - If the backend is collecting data, automatically restores the frontend state:
    - Enable the stop button
    - Disable the start button and the Label input
    - Start updating the chart
    - Update the status display

Design Rationale:
- Solves the issue where entering the config page during a reading session and then returning to the main page left the UI stuck
- Ensures the frontend status stays synchronized with the backend status
| Route | Method | Function | Request Format | Response Format |
|---|---|---|---|---|
| `/` | GET | Main page | - | HTML |
| `/data` | GET | Get real-time data | - | JSON |
| `/status` | GET | Check data collection status | - | JSON |
| `/config` | GET | Display configuration file editing page | - | HTML |
| `/config` | POST | Save configuration files | FormData | JSON |
| `/start` | POST | Start data collection | JSON | JSON |
| `/stop` | POST | Stop data collection | - | JSON |
| `/files_page` | GET | File browser page | - | HTML |
| `/files` | GET | List files and folders | `?path=<path>` | JSON |
| `/download` | GET | Download file | `?path=<path>` | File download |
| Thread | Function | Type | Status Management |
|---|---|---|---|
| Main Thread | Control flow, wait for interrupt | Main Thread | - |
| Flask Thread | Handle HTTP requests | daemon=True | Automatically terminates when main program ends |
| DAQ Reading Thread (ProWaveDAQ) | Modbus data read loop | daemon=True | reading flag |
| Collection Thread (main.py) | Data collection and distribution loop | daemon=True | is_collecting flag |
| CSV Writer Thread (main.py) | CSV file write loop | daemon=True | is_collecting flag + queue status |
| SQL Writer Thread (main.py) | SQL temporary file write loop | daemon=True | is_collecting flag + queue status |
1. Data Lock (`data_lock`)
   - Protects `realtime_data` and `data_counter`
   - Uses `threading.Lock()` to ensure thread safety
2. Queue Synchronization (`data_queue`, `csv_data_queue`, `sql_data_queue`)
   - Uses `queue.Queue` (thread-safe)
   - DAQ data queue: maximum capacity 5000 entries
   - CSV data queue: maximum capacity 1000 entries
   - SQL data queue: maximum capacity 1000 entries
   - Avoids memory overload
3. Request Time Tracking Lock (`data_request_lock`)
   - Protects the `last_data_request_time` variable
   - Used to track active frontend connections
4. Status Flags
   - `is_collecting`: controls the collection loop
   - `reading`: controls the read loop
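The two primitives combine like this in practice (a sketch of the producer side; the names mirror the globals above, and `on_new_block` is a hypothetical helper):

```python
import queue
import threading

data_lock = threading.Lock()
data_counter = 0
csv_data_queue = queue.Queue(maxsize=1000)


def on_new_block(block):
    """Producer side: queue hand-off plus a lock-guarded counter update."""
    global data_counter
    csv_data_queue.put(block)   # queue.Queue is internally thread-safe
    with data_lock:             # a plain int counter still needs an explicit lock
        data_counter += len(block)
```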
Startup Phase:
1. main() starts
2. Flask Thread starts (background)
3. Main thread enters wait loop
Data Collection Phase (/start):
1. Initialize DAQ, CSV Writer and SQL Uploader
2. Collection Thread starts
3. CSV Writer Thread starts (if CSV enabled)
4. SQL Writer Thread starts (if SQL enabled)
5. DAQ Reading Thread starts
6. Multiple threads run in parallel
Stop Phase (/stop or Ctrl+C):
1. Set is_collecting = False
2. Collection Thread ends
3. Wait for CSV and SQL queues to finish processing
4. CSV Writer Thread ends
5. SQL Writer Thread ends
6. Set reading = False
7. DAQ Reading Thread ends
8. Close CSV Writer and SQL Uploader
9. Close Modbus connection
Format: INI file
Section: [ProWaveDAQ]
Parameters:
| Parameter | Description | Default Value | Example |
|---|---|---|---|
| `serialPort` | Serial port path | `/dev/ttyUSB0` | `/dev/ttyUSB0` |
| `baudRate` | Baud rate (bps) | `3000000` | `3000000` |
| `sampleRate` | Sample rate (Hz) | `7812` | `7812` |
| `slaveID` | Modbus slave ID | `1` | `1` |
Example:
[ProWaveDAQ]
serialPort = /dev/ttyUSB0
baudRate = 3000000
sampleRate = 7812
slaveID = 1

Notes:
- Serial port path needs adjustment based on actual device
- Baud rate and sample rate must match hardware device
- Slave ID must match device settings
Format: INI file
Section: [DumpUnit]
Parameters:
| Parameter | Description | Default Value | Example |
|---|---|---|---|
| `second` | Data time length per CSV file (seconds) | `60` | `1800` |
Example:
[CSVServer]
enabled = false
[DumpUnit]
second = 1800

File Splitting Logic:
- Data points per CSV file = `second × sampleRate × channels`
- Example: 1800 seconds × 7812 Hz × 3 channels = 42,184,800 data points
- The system automatically calculates this and creates a new file when the target size is reached
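The calculation reads directly from the INI with `configparser`; a short sketch, assuming the sample rate and channel count that would come from `ProWaveDAQ.ini` in the real flow:

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read("API/csv.ini")

seconds = cfg.getint("DumpUnit", "second", fallback=60)
sample_rate, channels = 7812, 3        # taken from ProWaveDAQ.ini in the real flow
target_size = seconds * sample_rate * channels
print(target_size)                     # 1800 s -> 42184800 data points
```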
Format: INI file
Sections: [SQLServer] and [DumpUnit]
Parameters:
| Section | Parameter | Description | Default Value | Example |
|---|---|---|---|---|
| `[SQLServer]` | `enabled` | Whether to enable SQL upload | `false` | `true` |
| `[SQLServer]` | `host` | SQL server location | `localhost` | `192.168.9.13` |
| `[SQLServer]` | `port` | Port | `3306` | `3306` |
| `[SQLServer]` | `user` | Username | `root` | `raspberrypi` |
| `[SQLServer]` | `password` | Password | `""` | `Raspberry@Pi` |
| `[SQLServer]` | `database` | Database name | `prowavedaq` | `daq-data` |
| `[DumpUnit]` | `second` | SQL upload interval (seconds) | `5` | `600` |
Example:
[SQLServer]
enabled = false
host = 192.168.9.13
port = 3306
user = raspberrypi
password = Raspberry@Pi
database = daq-data
[DumpUnit]
second = 600

SQL Upload Logic:
- SQL upload uses a temporary file mechanism; data is first written to a temporary CSV file
- Every `second` seconds, the temporary file is checked and uploaded
- Example: `second = 600` → upload the temporary file every 600 seconds
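A sketch of the upload step under these settings. The driver choice (PyMySQL here) and the table/column names are assumptions; `sql_uploader.py` may use a different client:

```python
import csv
import time

import pymysql  # driver choice is an assumption; sql_uploader.py may differ


def upload_temp_csv(path, cfg, table, retries=3):
    """Batch-upload one temporary CSV via executemany, with retries (sketch)."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [tuple(r) for r in csv.reader(f)][1:]   # skip the header row
    sql = (f"INSERT INTO `{table}` "                   # table/column names hypothetical
           "(timestamp, channel_1, channel_2, channel_3) VALUES (%s, %s, %s, %s)")
    for attempt in range(1, retries + 1):
        try:
            conn = pymysql.connect(host=cfg["host"], port=int(cfg["port"]),
                                   user=cfg["user"], password=cfg["password"],
                                   database=cfg["database"])
            try:
                with conn.cursor() as cur:
                    cur.executemany(sql, rows)
                conn.commit()
                return True                # caller deletes the temp file on success
            finally:
                conn.close()
        except pymysql.MySQLError as exc:
            print(f"Upload attempt {attempt} failed: {exc}")
            time.sleep(1)
    return False                           # failure: the temp file is retained
```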
ProWaveDAQ_Python_Visualization_Unit/
│
├── API/ # Configuration file directory
│ ├── ProWaveDAQ.ini # ProWaveDAQ device configuration file
│ ├── csv.ini # CSV file splitting interval configuration file
│ └── sql.ini # SQL server connection and upload interval configuration file
│
├── templates/ # HTML template directory
│ ├── index.html # Main page (real-time charts, control buttons)
│ ├── config.html # Configuration file editing page
│ └── files.html # File browser page
│
├── output/ # Output directory (auto-created)
│ └── ProWaveDAQ/ # CSV file output directory
│ └── {timestamp}_{label}/ # Folder for each collection
│ ├── {timestamp}_{label}_001.csv
│ ├── {timestamp}_{label}_002.csv
│ ├── ...
│ └── .sql_temp/ # SQL temporary file directory (if SQL enabled)
│ └── {timestamp}_sql_temp.csv
│
├── src/
│ ├── prowavedaq.py # ProWaveDAQ core module (Modbus communication)
│ ├── csv_writer.py # CSV writer module
│ ├── sql_uploader.py # SQL uploader module
│ ├── main.py # Main control program (Flask Web server, includes loguru logging configuration)
│ ├── requirements.txt # Python dependency package list
│ └── templates/ # HTML template directory
│ ├── index.html # Main page template
│ ├── config.html # Configuration management page template
│ └── files.html # File browser page template
├── deploy.sh # Automatic deployment script
├── run.sh # Startup script
├── README.md # Usage documentation
├── README_EN.md # Usage documentation (English version)
├── CHANGELOG.md # Changelog
├── CHANGELOG_EN.md # Changelog (English version)
├── 程式運作說明.md # This document (detailed program operation manual)
└── Technical_Manual_EN.md # This document (English version)
| File | Description | Lines | Main Functions |
|---|---|---|---|
| `src/main.py` | Main control program | 1323 | Flask Web server, thread management, data collection loop, file browser API, loguru logging configuration |
| `src/prowavedaq.py` | Hardware communication module | 305 | Modbus RTU communication, data reading, data conversion |
| `src/csv_writer.py` | CSV writer | 182 | CSV file creation, data writing, file-splitting logic |
| `src/sql_uploader.py` | SQL uploader | 468 | SQL database connection, data upload, temporary file management |
| `src/templates/index.html` | Main page | - | Real-time charts, control interface, JavaScript logic, status restoration |
| `src/templates/config.html` | Configuration file editing page | - | Configuration file editing interface |
| `src/templates/files.html` | File browser page | - | File list, folder navigation, file download |
| `src/requirements.txt` | Dependencies | 18 | Python package version list (includes loguru) |
| `deploy.sh` | Deployment script | - | Automatic dependency installation, permission setup |
| `run.sh` | Startup script | - | Start virtual environment and execute main program |
```python
# Read mode judgment
if self.buffer_count <= self.BULK_TRIGGER_SIZE:
    # Normal Mode: read from address 0x02
    collected_data, remaining = self._read_normal_data(samples_to_read)
else:
    # Bulk Mode: read from address 0x15 (maximum 9 samples)
    collected_data, remaining = self._read_bulk_data(samples_to_read)


# Read method (including the header)
def _read_registers_with_header(self, address, count, mode_name):
    read_count = count + 1  # header + data
    result = self.client.read_input_registers(address=address, count=read_count)
    raw_data = result.registers
    payload_data = raw_data[1:]      # actual data (excluding the header)
    remaining_samples = raw_data[0]  # remaining sample count (from the header)
    return payload_data, remaining_samples
```

Design Rationale:
- Automatically switches between Normal Mode and Bulk Mode based on buffer status, optimizing read efficiency
- The FIFO buffer size (0x02) is read together with the data, ensuring data consistency
- Normal Mode: suitable for smaller data volumes (≤ 123 samples)
- Bulk Mode: suitable for larger data volumes (> 123 samples), using a dedicated bulk address
```python
# 16-bit signed integer conversion
value = vib_data[i] if vib_data[i] < 32768 else vib_data[i] - 65536
processed_data.append(value / 8192.0)
```

Conversion Explanation:
- 16-bit unsigned integer range: 0-65535
- Signed integer range: -32768 to 32767
- Conversion rule: values ≥ 32768 are treated as negative (subtract 65536)
- Normalization: divide by 8192.0 (device-specific conversion coefficient)
```python
# File splitting processing
if current_data_size < target_size:
    # Write directly
    csv_writer_instance.add_data_block(data)
else:
    # File splitting needed
    empty_space = target_size - (current_data_size - data_actual_size)
    while current_data_size >= target_size:
        batch = data[:empty_space]
        csv_writer_instance.add_data_block(batch)
        csv_writer_instance.update_filename()
        current_data_size -= target_size
    # Process the remaining data
    if pending:
        remaining_data = data[empty_space:]
        csv_writer_instance.add_data_block(remaining_data)
        current_data_size = pending
```

Design Rationale:
- Ensures each CSV file contains a precise amount of data
- Handles data boundaries that cross files
- Avoids data loss or duplication
```python
# Downsampling processing
channels = 3
step = channels * WEB_DOWNSAMPLE_RATIO  # 3 * 50 = 150
downsampled_chunk = []
for i in range(0, len(data), step):
    if i + channels <= len(data):
        downsampled_chunk.extend(data[i : i + channels])

# Put into the Web queue
if downsampled_chunk:
    with data_lock:
        web_data_queue.put(downsampled_chunk)
```

Design Rationale:
- Downsampling: significantly reduces frontend data transmission and plotting burden
- Queue mechanism: a queue instead of a large buffer gives more stable memory usage
- Raw data retention: CSV and SQL still use raw data (no downsampling), ensuring data integrity
```python
# Queue full handling
try:
    self.data_queue.put_nowait(processed_data)
except queue.Full:
    # Remove the oldest data
    try:
        self.data_queue.get_nowait()
        self.data_queue.put_nowait(processed_data)
    except queue.Empty:
        pass
```

Design Rationale:
- Avoids queue blocking
- When data arrives too fast, the oldest data is discarded (FIFO)
- Ensures the latest data is prioritized
1. Execute main.py
│
├─→ Create Flask application
│
├─→ Initialize global state variables
│ - realtime_data = []
│ - is_collecting = False
│ - data_counter = 0
│
├─→ Start Flask Thread (background)
│ - Listen on 0.0.0.0:8080
│ - Handle HTTP requests
│
└─→ Main thread enters wait loop
- while True: time.sleep(1)
- Wait for Ctrl+C interrupt
1. Frontend sends POST /start
│
├─→ Validate Label
│
├─→ Reset State
│ - Clear realtime_data
│ - Reset counter
│
├─→ Load Configuration Files
│ - Read csv.ini (CSV file splitting interval)
│ - Read sql.ini (SQL upload interval)
│ - Read ProWaveDAQ.ini (device settings)
│
├─→ Initialize DAQ
│ - Create ProWaveDAQ instance
│ - Establish Modbus connection
│ - Set sample rate
│
├─→ Calculate Target Size
│ - target_size = second × sample_rate × channels
│
├─→ Create Output Directory
│ - output/ProWaveDAQ/{timestamp}_{label}/
│
├─→ Initialize CSV Writer (if enabled)
│ - Create first CSV file
│
├─→ Initialize SQL Uploader (if enabled)
│ - Create temporary file directory
│ - Create first temporary file
│
├─→ Start Collection Thread
│ - is_collecting = True
│ - collection_loop() starts executing
│
├─→ Start CSV Writer Thread (if CSV enabled)
│ - csv_writer_loop() starts executing
│
├─→ Start SQL Writer Thread (if SQL enabled)
│ - sql_writer_loop() starts executing
│
├─→ Start DAQ Reading Thread
│ - daq_instance.start_reading()
│ - _read_loop() starts executing
│
└─→ Return success response
- Frontend receives response, starts updating chart
DAQ Reading Thread (_read_loop):
├─→ Read data length (address 0x02)
├─→ Read data registers based on length
├─→ Convert data format (16-bit → float)
├─→ Put into data queue
└─→ Repeat execution
Collection Thread (collection_loop):
├─→ Get data from DAQ queue
├─→ Update real-time data buffer (for frontend display)
├─→ Put data into CSV queue (if CSV enabled)
├─→ Put data into SQL queue (if SQL enabled)
└─→ Repeat execution
CSV Writer Thread (csv_writer_loop):
├─→ Get data from CSV queue
├─→ Process CSV writing (includes file splitting logic)
└─→ Repeat execution
SQL Writer Thread (sql_writer_loop):
├─→ Get data from SQL queue
├─→ Write to SQL temporary file
├─→ Scheduled upload (if target size reached)
└─→ Repeat execution
Flask Thread:
├─→ Handle HTTP requests
├─→ /data: Return real-time data
├─→ Other routes: Handle corresponding functions
└─→ Continuous operation
Frontend JavaScript:
├─→ Call /data API every 200ms
├─→ Update Chart.js chart
└─→ Display status information
1. Frontend sends POST /stop
│
├─→ Set is_collecting = False
│ - Collection Thread ends loop
│
├─→ Stop DAQ Reading
│ - daq_instance.stop_reading()
│ - Set reading = False
│ - Reading Thread ends loop
│ - Close Modbus connection
│
├─→ Wait for All Queues to Finish Processing
│ - csv_data_queue.join() (if CSV enabled)
│ - sql_data_queue.join() (if SQL enabled)
│
├─→ Close CSV Writer (if enabled)
│ - csv_writer_instance.close()
│ - Close current file
│
├─→ Close SQL Uploader (if enabled)
│ - sql_uploader_instance.close()
│ - Upload remaining temporary files
│
└─→ Return success response
- Frontend stops chart update
- Update UI status
1. User presses Ctrl+C
│
├─→ Catch KeyboardInterrupt
│
├─→ Check if Collecting
│ │
│ ├─→ If yes: Execute stop flow
│ │ - is_collecting = False
│ │ - Stop DAQ
│ │ - Wait for all queues to finish processing
│ │ - Close CSV Writer (if enabled)
│ │ - Close SQL Uploader (if enabled)
│ │
│ └─→ If no: Directly close
│
└─→ Output shutdown message
- Flask Thread automatically terminates (daemon=True)
- Program ends
1. Modbus Connection Interruption
   - When an interruption is detected, reconnection is attempted automatically
   - Maximum 5 attempts
   - Reading stops after 5 consecutive failures
2. Read Errors
   - The error counter increments on read failure
   - Reading stops after 5 consecutive errors
   - An error message is output to the terminal
3. Queue Full
   - The oldest data is removed automatically
   - Ensures the latest data is prioritized
4. CSV Write Errors
   - Exceptions are caught and error messages output
   - Does not affect ongoing data collection
5. API Request Failure
   - An error message is displayed
   - Chart updates are not interrupted (the frontend keeps retrying)
6. Data Format Errors
   - Check whether data exists
   - Validate the data length
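The reconnection policy in item 1 can be sketched as a small retry helper (illustrative only; the real logic lives inside the reading loop):

```python
import time


def try_reconnect(daq, max_attempts=5):
    """Retry policy for a dropped Modbus link, as described above (sketch)."""
    for attempt in range(1, max_attempts + 1):
        if daq.reconnect():               # reconnect() is described earlier
            print(f"Reconnected on attempt {attempt}")
            return True
        print(f"Reconnect attempt {attempt}/{max_attempts} failed")
        time.sleep(1)
    return False                          # 5 consecutive failures: stop reading
```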
- Real-time Data Buffer: maximum 10,000 points (approximately 80 KB)
- DAQ Data Queue: maximum 1000 entries (approximately 1 MB)
- Frontend Chart Data: maximum 5000 points (approximately 40 KB)

- Data Reading: non-blocking, avoids CPU overload
- Data Processing: brief rest (10 ms), avoids busy waiting
- Chart Update: 200 ms interval, balancing real-time performance and efficiency
- CSV Writing: `flush()` immediately after each write, ensuring no data loss
- File Splitting: prevents any single file from growing too large
The system is currently fixed at 3 channels. If modification is needed:
- prowavedaq.py: modify the data processing logic
- csv_writer.py: modify `__init__` and the header row
- main.py: change `channels = 3` to a variable
- index.html: modify the Chart.js dataset count

To change the sample rate, simply modify the `sampleRate` parameter in `API/ProWaveDAQ.ini`.
To change the CSV file-splitting interval, modify the `second` parameter in the `[DumpUnit]` section of `API/csv.ini`.
To change the SQL upload interval, modify the `second` parameter in the `[DumpUnit]` section of `API/sql.ini`.

- Add API Routes: add `@app.route()` handlers in `main.py`
- Add Frontend Pages: add HTML in `templates/`
- Add Data Processing: add logic in `collection_loop()`
This system is a complete real-time data acquisition and visualization platform with the following main characteristics:
- Modular Design: each module has clear responsibilities and is easy to maintain
- Thread Safety: locks and queues ensure data consistency
- Memory Management: buffer sizes are limited to avoid memory overload
- Error Handling: a complete error handling mechanism improves stability
- Web Interface: a user-friendly interface, no terminal operation required
- Automatic File Splitting: files are split and stored automatically based on time intervals, making data management convenient
The system has been optimized for high-frequency data acquisition and can stably process data streams of 23,436 data points per second and display them in real-time in the browser.
Last Updated: January 6, 2026
Document Version: 9.0.0
Author: Albert Wang