
ProWaveDAQ Python Real-time Data Visualization System - Detailed Program Operation Manual

Multi-language Versions
Chinese Version (程式運作說明.md) | English Version (Technical_Manual_EN.md)

Table of Contents

  1. Project Overview
  2. System Architecture
  3. Core Module Detailed Description
  4. Data Flow
  5. Web Interface and API
  6. Thread Architecture
  7. Configuration File Description
  8. File Structure
  9. Code Detailed Analysis
  10. Operation Flow

Project Overview

System Purpose

The ProWaveDAQ Real-time Data Visualization System is a Python-based vibration data acquisition and visualization platform with the following main functions:

  1. Acquire Vibration Data from ProWaveDAQ Device: Read three-channel vibration data from hardware device via Modbus RTU protocol
  2. Real-time Data Visualization: Render continuous vibration waveforms in the browser in real time
  3. Automatic CSV Storage: Automatically split and store data files based on configured time intervals
  4. Web Interface Control: Provide complete browser-based operation interface, no terminal operation required

Technology Stack

  • Backend: Python 3.10+
  • Web Framework: Flask 3.1.2+
  • Communication Protocol: Modbus RTU (via pymodbus 3.11.3+)
  • Serial Port Communication: pyserial 3.5+
  • Frontend Visualization: Chart.js 3.9.1
  • Data Storage: CSV format
  • Logging System: loguru 0.7.3+

System Architecture

Overall Architecture Diagram

┌─────────────────────────────────────────────────────────────┐
│                  Web Browser (Frontend)                      │
│  ┌───────────────────────────────────────────────────────┐  │
│  │  index.html: Real-time charts, control buttons, status│  │
│  │  config.html: Configuration file editing interface   │  │
│  └───────────────────────────────────────────────────────┘  │
└───────────────────────┬─────────────────────────────────────┘
                        │ HTTP/JSON
                        ▼
┌─────────────────────────────────────────────────────────────┐
│                 Flask Web Server (main.py)                   │
│  ┌───────────────────────────────────────────────────────┐  │
│  │  Flask Thread: Handle HTTP requests                   │  │
│  │  - /: Main page                                        │  │
│  │  - /data: Return real-time data                        │  │
│  │  - /start: Start data collection                       │  │
│  │  - /stop: Stop data collection                         │  │
│  │  - /config: Configuration file management              │  │
│  └───────────────────────────────────────────────────────┘  │
└───────────────────────┬─────────────────────────────────────┘
                        │
        ┌───────────────┼───────────────┐
        │               │               │
        ▼               ▼               ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│  Collection   │ │  Real-time    │ │  CSV Writer   │
│  Thread       │ │  Data Buffer  │ │  Thread       │
│  (Data        │ │  (Memory      │ │  (File Write) │
│  Collection   │ │  Variables)   │ │               │
│  Loop)        │ │               │ │               │
└───────┬───────┘ └───────────────┘ └───────────────┘
        │
        ▼
┌─────────────────────────────────────────────────────────────┐
│              ProWaveDAQ Class (prowavedaq.py)                │
│  ┌───────────────────────────────────────────────────────┐  │
│  │  Reading Thread: Modbus RTU Read Loop                 │  │
│  │  - Read device data                                   │  │
│  │  - Data conversion (16-bit → float)                   │  │
│  │  - Put into data queue (queue.Queue)                  │  │
│  └───────────────────────────────────────────────────────┘  │
└───────────────────────┬─────────────────────────────────────┘
                        │ Modbus RTU
                        ▼
┌─────────────────────────────────────────────────────────────┐
│              ProWaveDAQ Hardware Device                      │
│  - Serial Port: /dev/ttyUSB0                                 │
│  - Baud Rate: 3000000                                        │
│  - Sample Rate: 7812 Hz                                      │
│  - Slave ID: 1                                               │
└─────────────────────────────────────────────────────────────┘

Module Relationships

  1. main.py (Main Control Program)

    • Integrates all modules
    • Provides Flask Web service
    • Manages threads and global state
  2. prowavedaq.py (Hardware Communication Module)

    • Handles Modbus RTU communication
    • Reads device data
    • Data conversion and queue management
  3. csv_writer.py (Data Storage Module)

    • CSV file creation and writing
    • Automatic file splitting logic
  4. templates/ (Frontend Interface)

    • HTML templates and JavaScript
    • Chart.js chart display

Core Module Detailed Description

1. prowavedaq.py - ProWaveDAQ Class

Class Structure

class ProWaveDAQ:
    - client: ModbusSerialClient        # Modbus connection object
    - serial_port: str                  # Serial port path
    - baud_rate: int                    # Baud rate
    - sample_rate: int                  # Sample rate (Hz)
    - slave_id: int                     # Modbus slave ID
    - reading: bool                      # Reading status flag
    - reading_thread: Thread            # Reading thread
    - data_queue: queue.Queue           # Data queue (max 5000 entries)
    - counter: int                       # Read counter

Main Methods

init_devices(filename: str)

Function: Initialize device from INI configuration file and establish Modbus connection

Operation Flow:

  1. Read ProWaveDAQ.ini configuration file

    • serialPort: Serial port path (default /dev/ttyUSB0)
    • baudRate: Baud rate (default 3000000)
    • sampleRate: Sample rate (default 7812 Hz)
    • slaveID: Slave ID (default 1)
  2. Establish Modbus RTU connection

    • Use ModbusSerialClient to create serial port connection
    • Set parameters: parity='N', stopbits=1, bytesize=8, framer="rtu"
    • Connection timeout set to 1 second
  3. Read chip ID (verify connection)

    • Read 3 input registers at address 0x80
    • Display chip ID for verification
  4. Set sample rate

    • Write to address 0x01, set device sample rate

Error Handling:

  • Output error message and return on connection failure
  • Use default values on INI file parsing error
start_reading()

Function: Start background thread to begin reading data

Operation Flow:

  1. Check if already reading (avoid duplicate startup)
  2. Set reading = True
  3. Create and start background thread executing _read_loop()
  4. Thread set to daemon=True (automatically terminates when main program ends)
_read_loop() (Version 8.0.0 - Follows Manual Page 5 Specifications)

Function: Main data reading loop (executes in independent thread)

Operation Flow (Follows manufacturer manual Page 5 specifications):

  1. Read FIFO Buffer Size

    • Read 1 input register from address 0x02, get FIFO buffer size buffer_size
    • If buffer is empty (buffer_size <= 0), wait 2ms then continue
  2. Calculate Read Length

    • Limit single read maximum: read_count = min(buffer_size, MAX_READ_WORDS) (123 Words)
    • Ensure complete X,Y,Z read: read_count = (read_count // CHANNELS) * CHANNELS
    • If read_count == 0, skip this read
  3. Execute Read (FC04, Start Address 0x02)

    • Read read_count + 1 Words from 0x02 (including Header)
    • Packet structure: [Header(1 word), Data(N words)]
    • Use _read_data_packet() method to read complete packet
  4. Parse Packet

    • raw_packet[0]: Header (FIFO buffer size at read time, remaining size)
    • raw_packet[1:]: Actual vibration data (data from 0x03 onwards)
  5. Data Conversion

    • Convert 16-bit unsigned integer to signed integer
    • Conversion formula: signed = v if v < 32768 else v - 65536
    • Normalize by dividing by 8192.0: out.append(signed / 8192.0)
  6. Put into Queue

    • Use queue.put_nowait(data) to put into queue
    • If queue is full (5000 entries), remove oldest data (FIFO)
  7. Error Handling

    • Output error message and continue on read failure
    • Connection error causes read failure but won't auto reconnect (requires re-initialization)

Design Rationale:

  • Follows manufacturer manual Page 5 specifications, ensures communication correctness
  • Read complete packet in one read, reduces read count, improves performance
  • Ensures complete X,Y,Z read (multiple of 3), avoid channel misalignment
  • Uses queue buffering, avoid data loss

Data Format:

  • Input: 16-bit unsigned integer array (read from Modbus registers)
  • Output: Float array (normalized, range approximately -4.0 to 4.0)
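The conversion in step 5 above can be reproduced as a standalone helper (the function name `convert_registers` is illustrative; in `prowavedaq.py` this logic is inline in `_read_loop()`):

```python
def convert_registers(raw_words):
    """Convert 16-bit unsigned Modbus register values to normalized floats.

    Mirrors the documented conversion: two's-complement sign recovery
    followed by division by 8192.0, yielding values in roughly [-4.0, 4.0).
    """
    out = []
    for v in raw_words:
        signed = v if v < 32768 else v - 65536   # reinterpret as signed 16-bit
        out.append(signed / 8192.0)              # normalize
    return out
```

For example, register value 32768 (the most negative signed 16-bit value) maps to -4.0, and 65535 maps to -1/8192.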
get_data() -> List[float]

Function: Non-blocking get data (get from queue)

Operation Method:

  • Use queue.get_nowait() to non-blocking get data
  • If queue is empty, return empty array []
  • Avoid blocking main thread
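The non-blocking get, together with the drop-oldest behavior described for the reading loop, follows a standard `queue.Queue` pattern. A minimal sketch (`put_with_overflow` and the module-level queue are illustrative, not the actual ProWaveDAQ internals):

```python
import queue

data_queue = queue.Queue(maxsize=5000)  # capacity per the reading-loop description

def put_with_overflow(q, item):
    """Put a data block; if the queue is full, discard the oldest block first."""
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()   # drop oldest (FIFO) to make room
        except queue.Empty:
            pass
        q.put_nowait(item)

def get_data(q):
    """Non-blocking get: return one data block, or [] when the queue is empty."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return []
```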
stop_reading()

Function: Stop reading and cleanup resources

Operation Flow:

  1. Set reading = False (stop reading loop)
  2. Wait for reading thread to end (join())
  3. Clear data queue
  4. Close Modbus connection
_reconnect() -> bool

Function: Re-establish Modbus connection

Operation Flow:

  1. Close old connection
  2. Recreate ModbusSerialClient
  3. Attempt connection
  4. Set slave ID
  5. Return connection success/failure status

2. csv_writer.py - CSVWriter Class

Class Structure

class CSVWriter:
    - channels: int                    # Number of channels (fixed at 3)
    - output_dir: str                   # Output directory path
    - label: str                        # Data label
    - file_counter: int                 # File counter
    - current_file: file                # Currently open file object
    - writer: csv.writer                # CSV writer object

Main Methods

__init__(channels, output_dir, label)

Function: Initialize CSV writer

Operation Flow:

  1. Store parameters (channel count, output directory, label)
  2. Initialize file counter to 1
  3. Create output directory (if doesn't exist)
  4. Create first CSV file
_create_new_file()

Function: Create new CSV file

Operation Flow:

  1. Generate filename: {timestamp}_{label}_{file_counter:03d}.csv
    • Timestamp format: YYYYMMDDHHMMSS
    • File counter: 3 digits, starting from 001
  2. Open file (UTF-8 encoding)
  3. Create CSV writer
  4. Write header row: ['Timestamp', 'Channel_1', 'Channel_2', 'Channel_3']
  5. Immediately write to disk (flush())

File Naming Examples:

  • 20250106120000_test_001.csv
  • 20250106120000_test_002.csv
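The naming scheme can be reproduced with a small helper (a hypothetical function name; the real logic lives inside `_create_new_file()`):

```python
from datetime import datetime

def make_csv_filename(label, file_counter, now=None):
    """Build the split-file name: {timestamp}_{label}_{counter:03d}.csv."""
    ts = (now or datetime.now()).strftime("%Y%m%d%H%M%S")  # YYYYMMDDHHMMSS
    return f"{ts}_{label}_{file_counter:03d}.csv"
```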
add_data_block(data: List[float])

Function: Write data block to CSV file

Operation Flow:

  1. Check if data is empty
  2. Get current timestamp (ISO format)
  3. Write by channel groups:
    • Data format: [ch1_val, ch2_val, ch3_val, ch1_val, ch2_val, ch3_val, ...]
    • Every 3 data points as a group (corresponding to 3 channels)
    • Write format: [timestamp, channel_1_value, channel_2_value, channel_3_value]
  4. If data is not multiple of 3, pad insufficient channels with 0.0
  5. Immediately write to disk (flush())

Data Write Example:

Timestamp,Channel_1,Channel_2,Channel_3
2025-01-06T12:00:00.123456,0.123,0.456,0.789
2025-01-06T12:00:00.123489,0.234,0.567,0.890
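The grouping and zero-padding rules can be sketched as a pure function (`rows_from_block` is illustrative; the real method writes rows directly via `csv.writer`, and a single block timestamp is assumed here for simplicity):

```python
from datetime import datetime

CHANNELS = 3

def rows_from_block(data, now=None):
    """Group interleaved [ch1, ch2, ch3, ...] samples into CSV rows,
    padding a trailing incomplete group with 0.0."""
    if not data:
        return []
    ts = (now or datetime.now()).isoformat()
    rows = []
    for i in range(0, len(data), CHANNELS):
        group = list(data[i:i + CHANNELS])
        group += [0.0] * (CHANNELS - len(group))  # pad short final group
        rows.append([ts] + group)
    return rows
```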
update_filename()

Function: Create new file (file splitting)

Operation Flow:

  1. Close current file
  2. Increment file counter
  3. Call _create_new_file() to create new file
close()

Function: Close CSV file

Operation Flow:

  1. Close file object
  2. Clear file and writer references

3. main.py - Flask Web Server and Main Control Program

Global State Variables (Version 7.0.0)

app: Flask                              # Flask application instance
web_data_queue: queue.Queue             # Web display dedicated queue (downsampled data)
WEB_DOWNSAMPLE_RATIO: int = 50          # Downsampling ratio (take 1 point every 50 points)
csv_data_queue: queue.Queue             # CSV data queue (raw data)
sql_data_queue: queue.Queue             # SQL data queue (raw data)
data_lock: threading.Lock              # Data access lock
is_collecting: bool                     # Data collection status flag
collection_thread: Thread               # Data collection thread
daq_instance: ProWaveDAQ                # DAQ instance
csv_writer_instance: CSVWriter          # CSV writer instance
data_counter: int                       # Data point counter
target_size: int                        # Target data points per CSV file
current_data_size: int                  # Current file written data points

Core Functions

update_realtime_data(data: List[float]) (Version 7.0.0)

Function: Update real-time data (downsampling for Web display)

Core Architecture:

  • Downsampling queue: web_data_queue (max 10,000 entries)
  • Downsampling ratio: 50 (take 1 point every 50 points)
  • Original sample rate: 7812 Hz → downsampled to approximately 156 Hz
  • Frontend data transmission reduced by approximately 98%

Operation Flow:

  1. Prevent Queue Overflow

    • If web_data_queue is full, discard 10 old entries
    • Protect memory, avoid memory overflow when frontend freezes
  2. Downsampling Processing

    • Data format: [X1, Y1, Z1, X2, Y2, Z2, ...] (interleaved format)
    • Calculate step: step = channels * WEB_DOWNSAMPLE_RATIO = 3 * 50 = 150
    • Use step slicing: for i in range(0, len(data), step)
    • Ensure each extraction is complete [X, Y, Z] group
    • Extract data segments and put into downsampled array
  3. Put into Web Queue

    • If downsampled data is not empty, put into web_data_queue
    • Use data_lock to ensure thread safety
  4. Update Counter

    • Update data_counter (total data points, for status display)

Design Rationale:

  • Downsampling: Significantly reduces frontend data transmission and plotting burden
  • Queue mechanism: Use queue instead of large buffer, more stable memory usage
  • Raw data retention: CSV and SQL still use raw data (no downsampling), ensure data integrity
  • Automatic cleanup: Automatically discard old data when queue full, avoid memory overflow
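The downsampling step can be sketched as follows (an illustrative standalone function; the real logic is inline in `update_realtime_data()`):

```python
CHANNELS = 3
WEB_DOWNSAMPLE_RATIO = 50

def downsample(data, ratio=WEB_DOWNSAMPLE_RATIO, channels=CHANNELS):
    """Keep one complete [X, Y, Z] group out of every `ratio` groups.

    `data` is interleaved: [X1, Y1, Z1, X2, Y2, Z2, ...].
    """
    step = channels * ratio            # 3 * 50 = 150 values per kept group
    out = []
    for i in range(0, len(data), step):
        out.extend(data[i:i + channels])   # one full X, Y, Z triple
    return out
```

Stepping by `channels * ratio` (rather than by `ratio` alone) is what guarantees each kept sample is a complete, channel-aligned triple.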
get_data() -> Dict

Function: Frontend polling API, returns downsampled incremental data

Return Format:

{
    "success": True,
    "data": [0.123, 0.456, 0.789, ...],  # Downsampled incremental data (new data since last request)
    "counter": 234360,  # Total data points (raw data)
    "sample_rate": 7812,  # Sample rate
    "is_collecting": True,  # Collection status
    "start_time": "2025-12-22T12:00:00"  # Start time (if started)
}

Operation Flow:

  1. Acquire data lock (data_lock)
  2. Get all accumulated data from web_data_queue (non-blocking)
  3. Merge all data into one array
  4. Return JSON response, including:
    • data: Downsampled incremental data (frontend directly pushes into chart)
    • counter: Total data points (for status display)
    • is_collecting: Collection status (frontend uses this flag to determine if should stop)
    • start_time: Start time (if collection started)

Design Rationale:

  • Incremental update: Only return new data, reduce network transmission
  • Downsampled data: Frontend receives downsampled data, significantly reduces plotting burden
  • Status synchronization: Returns is_collecting status, frontend can automatically sync
  • Thread safety: Use lock to ensure data consistency
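The queue-draining step behind `/data` might look like this sketch (module-level names mirror the globals described above; the actual route also attaches `counter`, `is_collecting`, and `start_time`):

```python
import queue
import threading

web_data_queue = queue.Queue(maxsize=10000)
data_lock = threading.Lock()

def drain_web_queue():
    """Merge all accumulated downsampled blocks into one flat array
    (the incremental payload returned by /data)."""
    merged = []
    with data_lock:
        while True:
            try:
                merged.extend(web_data_queue.get_nowait())
            except queue.Empty:
                break
    return merged
```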

Flask Routes

@app.route('/') - Main Page

Function: Display main page (includes configuration form, Label input, start/stop buttons, real-time chart)

Response: Renders templates/index.html template

@app.route('/data') - Get Real-time Data

Function: Return current latest data to frontend (JSON format), and track active connection status

Response Format:

{
  "success": true,
  "data": [0.123, 0.456, 0.789, ...],
  "counter": 12345
}

Operation Flow:

  1. Update Request Time

    • Update last_data_request_time to current time
    • Indicates active frontend connection (for smart buffer updates)
  2. Get real-time data copy

  3. Get data point counter

  4. Return JSON response

Design Rationale:

  • Track frontend connection status, optimize resource usage
  • Don't update buffer when no connection, save CPU and memory
@app.route('/status') - Check Data Collection Status

Function: Check data collection status (for frontend status restoration)

Response Format:

{
  "success": true,
  "is_collecting": true,
  "counter": 12345
}

Operation Flow:

  1. Get is_collecting status
  2. Get data point counter
  3. Return JSON response

Use Cases:

  • Frontend page load checks backend status
  • If backend is collecting data, frontend automatically restores status and starts updating chart
@app.route('/config', methods=['GET', 'POST']) - Configuration File Management

GET Request:

  • Read API/ProWaveDAQ.ini, API/csv.ini and API/sql.ini
  • Render templates/config.html, display configuration file content for editing

POST Request:

  • Receive form data (content of three configuration files)
  • Write to API/ProWaveDAQ.ini, API/csv.ini and API/sql.ini
  • Return success/failure JSON response

Error Handling:

  • Use default content on file read failure
  • Return error message on write failure
@app.route('/files_page') - File Browser Page

Function: Display file browser page

Response: Renders templates/files.html template

@app.route('/files') - List Files and Folders

Function: List files and folders in output/ProWaveDAQ/ directory

Query Parameters:

  • path (optional): Subdirectory path to browse

Response Format:

{
  "success": true,
  "items": [
    {
      "name": "20240101120000_test_001",
      "type": "directory",
      "path": "20240101120000_test_001"
    },
    {
      "name": "data.csv",
      "type": "file",
      "path": "data.csv",
      "size": 1024
    }
  ],
  "current_path": ""
}

Operation Flow:

  1. Get path parameter (if provided)
  2. Security check: Ensure path is within output/ProWaveDAQ/ directory
  3. List directory contents
  4. Distinguish folders and files
  5. Return JSON response

Security Mechanism:

  • Path normalization check, prevent directory traversal attacks
  • Only allow access to files under output/ProWaveDAQ/ directory
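A path-normalization check of the kind described might be implemented as follows (a sketch, not the actual route code; `safe_join` is a hypothetical helper):

```python
import os

BASE_DIR = os.path.realpath("output/ProWaveDAQ")

def safe_join(base, user_path):
    """Resolve `user_path` under `base`; return None if it escapes `base`
    (directory traversal attempt), else the resolved absolute path."""
    candidate = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, candidate]) != base:
        return None
    return candidate
```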
@app.route('/download') - Download File

Function: Download specified CSV file

Query Parameters:

  • path (required): File path to download

Operation Flow:

  1. Get path parameter
  2. Security check: Ensure path is within output/ProWaveDAQ/ directory
  3. Verify file exists and is a file (not directory)
  4. Use send_from_directory() to send file

Security Mechanism:

  • Path normalization check, prevent directory traversal attacks
  • Only allow download of files under output/ProWaveDAQ/ directory
@app.route('/start', methods=['POST']) - Start Data Collection

Function: Start data collection, CSV writing and real-time display

Request Format:

{
  "label": "test_001"
}

Operation Flow:

  1. Check Status

    • If already collecting, return error
  2. Validate Label

    • If Label is empty, return error
  3. Reset State

    • Clear real-time data buffer
    • Reset data point counter
    • Reset current data size
    • Reset request time tracking (last_data_request_time = 0)
  4. Load Configuration Files

    • Read API/csv.ini, get DumpUnit.second (CSV file splitting time interval, default 5 seconds)
    • Read API/sql.ini, get DumpUnit.second (SQL upload interval, default 5 seconds)
  5. Initialize DAQ

    • Create ProWaveDAQ instance
    • Initialize device from API/ProWaveDAQ.ini
    • Get sample rate (default 7812 Hz)
    • Channel count fixed at 3
  6. Calculate Target Size

    • target_size = second × sample_rate × channels
    • Example: 5 seconds × 7812 Hz × 3 channels = 117,180 data points
  7. Create Output Directory

    • Path: output/ProWaveDAQ/{timestamp}_{label}/
    • Timestamp format: YYYYMMDDHHMMSS
  8. Initialize CSV Writer

    • Create CSVWriter instance
  9. Start Data Collection Thread

    • Set is_collecting = True
    • Create and start collection_loop thread (daemon=True)
  10. Start DAQ Reading

    • Call daq_instance.start_reading()
  11. Return Success Response

    • Include sample rate and file splitting interval information

Response Format:

{
  "success": true,
  "message": "Data collection started (Sample rate: 7812 Hz, File split interval: 5 seconds)"
}
@app.route('/stop', methods=['POST']) - Stop Data Collection

Function: Stop all threads and safely close

Operation Flow:

  1. Check if collecting (if not collecting, return error)
  2. Set is_collecting = False (stop collection loop)
  3. Stop DAQ reading (daq_instance.stop_reading())
  4. Close CSV Writer (csv_writer_instance.close())
  5. Return success response
collection_loop() - Data Collection Main Loop

Function: Executes in independent thread, continuously processes data and distributes to real-time display and CSV storage

Operation Flow:

  1. Main Loop (while is_collecting):

    a. Get Data from DAQ

    • Call daq_instance.get_data() (non-blocking)
    • If no data, return empty array

    b. Continuously Process All Data in Queue

    • while data and len(data) > 0:

    i. Update Real-time Display - Call update_realtime_data(data) - Data appears in frontend chart

    ii. Write to CSV (File Splitting Logic)

       - **Accumulate Data Size**: `current_data_size += len(data)`
       
       - **If `current_data_size < target_size`**:
         - Data hasn't reached file splitting threshold, directly write to current file
         - `csv_writer_instance.add_data_block(data)`
       
       - **If `current_data_size >= target_size`**:
         - Need file splitting processing
         - Calculate remaining space: `empty_space = target_size - (current_data_size - len(data))`
         - **Batch Processing** (`while current_data_size >= target_size`):
           - Extract one complete batch: `batch = data[:empty_space]`
           - Write to current file
           - Update filename (create new file): `csv_writer_instance.update_filename()`
           - Reduce accumulated size: `current_data_size -= target_size`
         - **Process Remaining Data**:
           - If still have remaining data (`pending = len(data) - empty_space`):
             - Write to new file: `csv_writer_instance.add_data_block(remaining_data)`
             - Update accumulated size: `current_data_size = pending`
           - Otherwise: `current_data_size = 0`
    

    iii. Continue Getting Next Data from Queue - data = daq_instance.get_data()

    c. Brief Rest

    • time.sleep(0.01) (10ms), avoid CPU overload
  2. Error Handling:

    • Catch all exceptions and output error messages
    • Wait 0.1 seconds then continue on error

File Splitting Logic Example:

Assume target_size = 117180 (5 seconds × 7812 Hz × 3 channels)

  • Case 1: current_data_size = 100000, new data len(data) = 10000

    • Total: 110000 < 117180
    • Directly write, current_data_size = 110000
  • Case 2: current_data_size = 110000, new data len(data) = 20000

    • Total: 130000 ≥ 117180
    • First batch: empty_space = 117180 - 110000 = 7180
    • Write 7180 data points, update file, current_data_size = 0
    • Remaining: 20000 - 7180 = 12820
    • Write remaining data to new file, current_data_size = 12820
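The splitting logic above can be condensed into a testable sketch (`SplitState` and the writer interface are illustrative; `collection_loop()` keeps this state in module globals and uses the real `CSVWriter`):

```python
class SplitState:
    """Minimal sketch of the collection-loop file-splitting bookkeeping."""

    def __init__(self, target_size, writer):
        self.target_size = target_size
        self.current_data_size = 0
        self.writer = writer  # must provide add_data_block() / update_filename()

    def feed(self, data):
        self.current_data_size += len(data)
        if self.current_data_size < self.target_size:
            self.writer.add_data_block(data)       # below the split threshold
            return
        # Fill the remaining space in the current file, then roll over.
        empty_space = self.target_size - (self.current_data_size - len(data))
        offset = 0
        while self.current_data_size >= self.target_size:
            self.writer.add_data_block(data[offset:offset + empty_space])
            self.writer.update_filename()          # start a new CSV file
            offset += empty_space
            self.current_data_size -= self.target_size
            empty_space = self.target_size         # later batches fill whole files
        pending = len(data) - offset
        if pending > 0:
            self.writer.add_data_block(data[offset:])
            self.current_data_size = pending
        else:
            self.current_data_size = 0
```

With `target_size = 10`, feeding 8 points and then 5 points reproduces Case 2 in miniature: the first file is topped up to exactly 10 points and the remaining 3 land in a fresh file.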
run_flask_server() - Flask Server Execution Function

Function: Execute Flask server in independent thread

Settings:

  • Host: 0.0.0.0 (listen on all network interfaces)
  • Port: 8080
  • Debug mode: False
  • Reloader: False (avoid conflicts with threads)
main() - Main Function

Function: Program entry point

Operation Flow:

  1. Output startup message
  2. Start Flask server in background thread (daemon=True)
  3. Main thread enters infinite loop (while True: time.sleep(1))
  4. Wait for user to press Ctrl+C to interrupt
  5. Cleanup Resources:
    • If collecting, stop collection
    • Stop DAQ
    • Close CSV Writer
  6. Output shutdown message

Data Flow

Complete Data Flow Diagram

ProWaveDAQ Device
    │
    │ Modbus RTU Communication
    │ (Serial Port: /dev/ttyUSB0, Baud Rate: 3000000)
    ▼
prowavedaq.py::ProWaveDAQ
    │
    │ _read_loop() Thread
    │ - Read data length at address 0x02
    │ - Read data registers based on length
    │ - 16-bit integer → float conversion (÷8192.0)
    │
    ▼
queue.Queue (max 5000 entries)
    │
    │ get_data() non-blocking get
    ▼
main.py::collection_loop() Thread
    │
    ├─→ update_realtime_data()
    │   │
    │   ▼
    │   web_data_queue (downsampled, max 10,000 entries)
    │   │
    │   ▼
    │   Flask /data API
    │   │
    │   ▼
    │   Frontend Chart.js (polls /data every 100 ms)
    │
    ├─→ csv_writer.add_data_block()
    │       │
    │       ▼
    │   File Splitting Logic Judgment
    │       │
    │       ├─→ current_data_size < target_size
    │       │   └─→ Directly write to current file
    │       │
    │       └─→ current_data_size >= target_size
    │           ├─→ Write complete batch
    │           ├─→ update_filename() (create new file)
    │           └─→ Process remaining data
    │               │
    │               ▼
    │           CSV Files
    │           output/ProWaveDAQ/{timestamp}_{label}/{timestamp}_{label}_{001-999}.csv
    │
    └─→ SQL Uploader (if enabled)
            │
            ▼
        Write to Temporary CSV File
            │
            ▼
        .sql_temp/{timestamp}_sql_temp.csv
            │
            ▼
        Scheduled Upload Thread (every sql_upload_interval seconds)
            │
            ├─→ Read temporary file
            ├─→ Batch upload (executemany)
            ├─→ Retry mechanism (max 3 times)
            ├─→ Failure retention (temporary file not deleted)
            ├─→ Delete temporary file after success
            └─→ Create new temporary file
                │
                ▼
            MariaDB/MySQL Database
            vibration_data table (dynamically created, table name corresponds to CSV filename)

Data Format Conversion

  1. Hardware → ProWaveDAQ

    • Input: 16-bit unsigned integer (0-65535)
    • Conversion: value = value if value < 32768 else value - 65536
    • Normalization: float_value = value / 8192.0
    • Output: Float array
  2. ProWaveDAQ → Real-time Display

    • Format: [ch1, ch2, ch3, ch1, ch2, ch3, ...]
    • Frontend separates channels by every 3 as a group
  3. ProWaveDAQ → CSV

    • Format: [ch1, ch2, ch3, ch1, ch2, ch3, ...]
    • CSV writer writes one row per 3 as a group
    • CSV format: Timestamp,Channel_1,Channel_2,Channel_3

Data Volume Calculation

Data Per Second:

  • Sample rate: 7812 Hz
  • Channel count: 3
  • Data Points Per Second: 7812 × 3 = 23,436 data points

Data Per CSV File (default 5 seconds):

  • Data Points: 7812 × 3 × 5 = 117,180 data points
  • File Size (estimate): Approximately 3-5 MB (depends on timestamp length)

Memory Usage:

  • Real-time data buffer: Fixed 234,360 data points (approximately 1.87 MB, using NumPy Array)
  • Time buffer: Fixed 10 time points (approximately 80 bytes)
  • DAQ data queue: Maximum 5000 entries (approximately 123 points per entry, approximately 5 MB)
  • CSV data queue: Maximum 1000 entries (approximately 123 points per entry, approximately 1 MB)
  • SQL data queue: Maximum 1000 entries (approximately 123 points per entry, approximately 1 MB)
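The throughput figures above follow directly from the configuration; a quick arithmetic check:

```python
SAMPLE_RATE = 7812      # Hz, from ProWaveDAQ.ini
CHANNELS = 3
SPLIT_SECONDS = 5       # default CSV split interval

points_per_second = SAMPLE_RATE * CHANNELS           # 23,436 data points/s
points_per_file = points_per_second * SPLIT_SECONDS  # 117,180 data points/file
rows_per_file = points_per_file // CHANNELS          # one CSV row per X,Y,Z group
```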

Web Interface and API

Frontend Pages

index.html - Main Page

Functions:

  • Display real-time data curve graph
  • Provide Label input field
  • Start/stop buttons
  • Status display (data point count, collection status)

JavaScript Functions:

  1. Chart.js Initialization

    • Create 3 datasets (channels 1, 2, 3)
    • Set as line chart, no animation (duration: 0)
    • Y-axis doesn't start from zero (beginAtZero: false)
    • Don't display X-axis labels
  2. updateChart() - Update Chart (Version 7.0.0 Incremental Update)

    • Called every 100ms (setInterval(updateChart, 100))
    • Get downsampled incremental data from /data API
    • Check backend is_collecting status, automatically sync frontend UI
    • Group data by channel (every 3 as a group)
    • Use push() to add new data to chart right side
    • Use splice() to remove left side old data, maintain fixed window size (500 points)
    • Disabled animation: animation: false (reduces flicker and improves real-time performance)
    • Disabled interaction hints: interaction.mode: 'none' (reduces CPU consumption)
    • No data points displayed: pointRadius: 0 (improves plotting performance)
  3. startCollection() - Start Collection

    • Validate if Label is entered
    • Send POST request to /start
    • Start chart update timer
    • Update UI status (disable start button, enable stop button)
  4. stopCollection() - Stop Collection

    • Send POST request to /stop
    • Stop chart update timer
    • Update UI status

config.html - Configuration File Editing Page

Functions:

  • Display content of configuration files (text areas)
  • Provide save button

JavaScript Functions:

  • saveConfig() - Send POST request to save configuration files

files.html - File Browser Page

Functions:

  • Display file and folder list in output/ProWaveDAQ/ directory
  • Support folder navigation
  • Support file download

JavaScript Functions:

  1. loadFiles(path) - Load File List

    • Call /files API to get directory contents
    • Display file and folder list
  2. displayFiles(items, path) - Display File List

    • Distinguish folders (📁) and files (📄)
    • Display file size (auto-formatted)
    • Provide "Enter" button for folders
    • Provide "Download" button for files
  3. updateBreadcrumb(path) - Update Breadcrumb Navigation

    • Display current path
    • Support clicking to return to parent directory
  4. navigateTo(path) - Navigate to Specified Path

    • Load contents of specified directory
  5. downloadFile(path) - Download File

    • Open /download API to download file

File Size Formatting:

  • Automatically convert bytes to B, KB, MB, GB

index.html - Main Page (Update)

New Functions:

  1. checkAndRestoreStatus() - Check and Restore Status
    • Call /status API on page load
    • If backend is collecting data, automatically restore frontend status:
      • Enable stop button
      • Disable start button and Label input
      • Start updating chart
      • Update status display

Design Rationale:

  • Fixes the issue where opening the config page during collection and returning to the main page left the UI unresponsive
  • Ensures frontend status synchronized with backend status

API Endpoints

| Route | Method | Function | Request Format | Response Format |
|---|---|---|---|---|
| / | GET | Main page | - | HTML |
| /data | GET | Get real-time data | - | JSON |
| /status | GET | Check data collection status | - | JSON |
| /config | GET | Display configuration file editing page | - | HTML |
| /config | POST | Save configuration files | FormData | JSON |
| /start | POST | Start data collection | JSON | JSON |
| /stop | POST | Stop data collection | - | JSON |
| /files_page | GET | File browser page | - | HTML |
| /files | GET | List files and folders | ?path=<path> | JSON |
| /download | GET | Download file | ?path=<path> | File download |

Thread Architecture

Thread List

| Thread | Function | Type | Status Management |
|---|---|---|---|
| Main Thread | Control flow, wait for interrupt | Main thread | - |
| Flask Thread | Handle HTTP requests | daemon=True | Terminates automatically when the main program ends |
| DAQ Reading Thread (ProWaveDAQ) | Modbus data read loop | daemon=True | reading flag |
| Collection Thread (main.py) | Data collection and distribution loop | daemon=True | is_collecting flag |
| CSV Writer Thread (main.py) | CSV file write loop | daemon=True | is_collecting flag + queue status |
| SQL Writer Thread (main.py) | SQL temporary file write loop | daemon=True | is_collecting flag + queue status |

Thread Synchronization

  1. Data Lock (data_lock)

    • Protects realtime_data and data_counter
    • Uses threading.Lock() to ensure thread safety
  2. Queue Synchronization (data_queue, csv_data_queue, sql_data_queue)

    • Uses queue.Queue (thread-safe)
    • DAQ data queue: Maximum capacity 5000 entries
    • CSV data queue: Maximum capacity 1000 entries
    • SQL data queue: Maximum capacity 1000 entries
    • Avoid memory overload
  3. Request Time Tracking Lock (data_request_lock)

    • Protects last_data_request_time variable
    • Used to track active frontend connections
  4. Status Flags

    • is_collecting: Controls collection loop
    • reading: Controls read loop
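The synchronization pattern above can be sketched minimally as follows. The names `data_lock`, `realtime_data`, `data_counter`, and `data_queue` mirror the manual; the queue capacity of 5000 matches the documented DAQ queue, and the producer/consumer functions are simplified stand-ins for the real threads:

```python
import queue
import threading

# Shared state protected by a lock (mirrors realtime_data / data_lock)
data_lock = threading.Lock()
realtime_data = []
data_counter = 0

# Bounded, thread-safe queue (mirrors the DAQ data queue, capacity 5000)
data_queue = queue.Queue(maxsize=5000)

def producer(samples):
    """DAQ side: enqueue a block, then update shared state under the lock."""
    global data_counter
    data_queue.put(samples)
    with data_lock:
        realtime_data.extend(samples)
        data_counter += len(samples)

def consumer():
    """Collection side: drain one block from the queue."""
    return data_queue.get()

producer([0.1, 0.2, 0.3])
block = consumer()
```

The lock serializes access to the plain Python list and counter, while the `queue.Queue` needs no extra locking because it is thread-safe by itself.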

Thread Lifecycle

Startup Phase:
1. main() starts
2. Flask Thread starts (background)
3. Main thread enters wait loop

Data Collection Phase (/start):
1. Initialize DAQ, CSV Writer and SQL Uploader
2. Collection Thread starts
3. CSV Writer Thread starts (if CSV enabled)
4. SQL Writer Thread starts (if SQL enabled)
5. DAQ Reading Thread starts
6. Multiple threads run in parallel

Stop Phase (/stop or Ctrl+C):
1. Set is_collecting = False
2. Collection Thread ends
3. Wait for CSV and SQL queues to finish processing
4. CSV Writer Thread ends
5. SQL Writer Thread ends
6. Set reading = False
7. DAQ Reading Thread ends
8. Close CSV Writer and SQL Uploader
9. Close Modbus connection

Configuration File Description

API/ProWaveDAQ.ini

Format: INI file

Section: [ProWaveDAQ]

Parameters:

| Parameter | Description | Default Value | Example |
| --- | --- | --- | --- |
| serialPort | Serial port path | /dev/ttyUSB0 | /dev/ttyUSB0 |
| baudRate | Baud rate (bps) | 3000000 | 3000000 |
| sampleRate | Sample rate (Hz) | 7812 | 7812 |
| slaveID | Modbus slave ID | 1 | 1 |

Example:

[ProWaveDAQ]
serialPort = /dev/ttyUSB0
baudRate = 3000000
sampleRate = 7812
slaveID = 1

Notes:

  • Serial port path needs adjustment based on actual device
  • Baud rate and sample rate must match hardware device
  • Slave ID must match device settings
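A minimal sketch of how these settings can be read with Python's standard `configparser` module; the section and key names follow the table above, and the `fallback` values are the documented defaults (this is an illustration, not necessarily the exact loading code in main.py):

```python
import configparser

# Parse the [ProWaveDAQ] section; fall back to the documented defaults
cfg = configparser.ConfigParser()
cfg.read("API/ProWaveDAQ.ini")

serial_port = cfg.get("ProWaveDAQ", "serialPort", fallback="/dev/ttyUSB0")
baud_rate = cfg.getint("ProWaveDAQ", "baudRate", fallback=3000000)
sample_rate = cfg.getint("ProWaveDAQ", "sampleRate", fallback=7812)
slave_id = cfg.getint("ProWaveDAQ", "slaveID", fallback=1)
```

If the file or section is missing, the fallbacks keep the program runnable with the default configuration.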

API/csv.ini

Format: INI file

Sections: [CSVServer] and [DumpUnit]

Parameters:

| Parameter | Description | Default Value | Example |
| --- | --- | --- | --- |
| second | Data time length per CSV file (seconds) | 60 | 1800 |

Example:

[CSVServer]
enabled = false

[DumpUnit]
second = 1800

File Splitting Logic:

  • Data points per CSV file = second × sampleRate × channels
  • Example: 1800 seconds × 7812 Hz × 3 channels = 42,184,800 data points
  • The system tracks this count automatically and creates a new file once the target size is reached
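The splitting arithmetic from the example works out as:

```python
# Points per CSV file = seconds per file × sample rate × channel count
second = 1800        # from csv.ini [DumpUnit]
sample_rate = 7812   # from ProWaveDAQ.ini
channels = 3

target_size = second * sample_rate * channels
print(target_size)   # 42184800 data points per file
```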

API/sql.ini

Format: INI file

Sections: [SQLServer] and [DumpUnit]

Parameters:

| Section | Parameter | Description | Default Value | Example |
| --- | --- | --- | --- | --- |
| [SQLServer] | enabled | Whether to enable SQL upload | false | true |
| [SQLServer] | host | SQL server location | localhost | 192.168.9.13 |
| [SQLServer] | port | Port | 3306 | 3306 |
| [SQLServer] | user | Username | root | raspberrypi |
| [SQLServer] | password | Password | "" | Raspberry@Pi |
| [SQLServer] | database | Database name | prowavedaq | daq-data |
| [DumpUnit] | second | SQL upload interval (seconds) | 5 | 600 |

Example:

[SQLServer]
enabled = false
host = 192.168.9.13
port = 3306
user = raspberrypi
password = Raspberry@Pi
database = daq-data

[DumpUnit]
second = 600

SQL Upload Logic:

  • SQL upload uses a temporary file mechanism: data is first written to a temporary CSV file
  • Every second seconds, the temporary file is checked and uploaded
  • Example: second = 600 → the temporary file is uploaded every 600 seconds
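The interval check can be sketched as below. The `UploadTimer` class and its method names are illustrative only, not the actual API of sql_uploader.py; a short interval is used so the demo runs quickly:

```python
import time

class UploadTimer:
    """Illustrative: report that an upload is due every `interval` seconds."""
    def __init__(self, interval_seconds):
        self.interval = interval_seconds
        self.last_upload = time.monotonic()

    def due(self):
        now = time.monotonic()
        if now - self.last_upload >= self.interval:
            self.last_upload = now   # reset for the next interval
            return True
        return False

timer = UploadTimer(0.05)    # 600 s in production; tiny here for the demo
assert not timer.due()       # interval has not elapsed yet
time.sleep(0.06)
fired = timer.due()          # now the interval has elapsed
```

Using `time.monotonic()` rather than wall-clock time keeps the interval stable even if the system clock is adjusted.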

File Structure

Project Directory Structure

ProWaveDAQ_Python_Visualization_Unit/
│
├── API/                              # Configuration file directory
│   ├── ProWaveDAQ.ini                # ProWaveDAQ device configuration file
│   ├── csv.ini                       # CSV file splitting interval configuration file
│   └── sql.ini                       # SQL server connection and upload interval configuration file
│
├── templates/                         # HTML template directory
│   ├── index.html                    # Main page (real-time charts, control buttons)
│   ├── config.html                   # Configuration file editing page
│   └── files.html                    # File browser page
│
├── output/                            # Output directory (auto-created)
│   └── ProWaveDAQ/                   # CSV file output directory
│       └── {timestamp}_{label}/      # Folder for each collection
│           ├── {timestamp}_{label}_001.csv
│           ├── {timestamp}_{label}_002.csv
│           ├── ...
│           └── .sql_temp/            # SQL temporary file directory (if SQL enabled)
│               └── {timestamp}_sql_temp.csv
│
├── src/
│   ├── prowavedaq.py                 # ProWaveDAQ core module (Modbus communication)
│   ├── csv_writer.py                 # CSV writer module
│   ├── sql_uploader.py               # SQL uploader module
│   ├── main.py                       # Main control program (Flask Web server, includes loguru logging configuration)
│   ├── requirements.txt              # Python dependency package list
│   └── templates/                    # HTML template directory
│       ├── index.html                # Main page template
│       ├── config.html               # Configuration management page template
│       └── files.html                # File browser page template
├── deploy.sh                         # Automatic deployment script
├── run.sh                            # Startup script
├── README.md                         # Usage documentation
├── README_EN.md                      # Usage documentation (English version)
├── CHANGELOG.md                      # Changelog
├── CHANGELOG_EN.md                   # Changelog (English version)
├── 程式運作說明.md                   # This document (detailed program operation manual)
└── Technical_Manual_EN.md           # This document (English version)

File Description

| File | Description | Lines | Main Functions |
| --- | --- | --- | --- |
| src/main.py | Main control program | 1323 | Flask Web server, thread management, data collection loop, file browser API, loguru logging configuration |
| src/prowavedaq.py | Hardware communication module | 305 | Modbus RTU communication, data reading, data conversion |
| src/csv_writer.py | CSV writer | 182 | CSV file creation, data writing, file splitting logic |
| src/sql_uploader.py | SQL uploader | 468 | SQL database connection, data upload, temporary file management |
| src/templates/index.html | Main page | - | Real-time charts, control interface, JavaScript logic, status restoration |
| src/templates/config.html | Configuration file editing page | - | Configuration file editing interface |
| src/templates/files.html | File browser page | - | File list, folder navigation, file download |
| src/requirements.txt | Dependencies | 18 | Python package version list (includes loguru) |
| deploy.sh | Deployment script | - | Automatic dependency installation, permission setup |
| run.sh | Startup script | - | Start virtual environment and execute main program |

Code Detailed Analysis

Key Code Snippets

1. Data Read Logic (prowavedaq.py)

# Read mode judgment
if self.buffer_count <= self.BULK_TRIGGER_SIZE:
    # Normal Mode: Read from Address 0x02
    collected_data, remaining = self._read_normal_data(samples_to_read)
else:
    # Bulk Mode: Read from Address 0x15 (maximum 9 samples)
    collected_data, remaining = self._read_bulk_data(samples_to_read)

# Read method (including Header)
def _read_registers_with_header(self, address, count, mode_name):
    read_count = count + 1  # Header + data
    result = self.client.read_input_registers(address=address, count=read_count)
    raw_data = result.registers
    payload_data = raw_data[1:]  # Actual data (excluding Header)
    remaining_samples = raw_data[0]  # Remaining sample count (from Header)
    return payload_data, remaining_samples

Design Rationale:

  • Automatically switches between Normal Mode and Bulk Mode based on buffer status to optimize read efficiency
  • The remaining FIFO sample count (the header word read at address 0x02) arrives together with the data, ensuring data consistency
  • Normal Mode: suited to smaller data volumes (≤ 123 samples)
  • Bulk Mode: suited to larger data volumes (> 123 samples); uses the dedicated Bulk address

2. Data Conversion Logic (prowavedaq.py)

# 16-bit signed integer conversion
value = vib_data[i] if vib_data[i] < 32768 else vib_data[i] - 65536
processed_data.append(value / 8192.0)

Conversion Explanation:

  • 16-bit unsigned integer range: 0-65535
  • Signed integer range: -32768 to 32767
  • Conversion rule: Values ≥32768 treated as negative (subtract 65536)
  • Normalization: Divide by 8192.0 (device-specific conversion coefficient)
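A worked example of the conversion rule, wrapped in a small helper for clarity (the helper name is illustrative; the arithmetic is exactly the snippet above):

```python
def convert_sample(raw):
    """Convert a 16-bit register value to a signed, scaled float."""
    value = raw if raw < 32768 else raw - 65536   # two's-complement decode
    return value / 8192.0                          # device scale factor

print(convert_sample(8192))    # 1.0
print(convert_sample(57344))   # -1.0  (57344 - 65536 = -8192)
print(convert_sample(0))       # 0.0
```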

3. File Splitting Logic (main.py)

# File splitting processing
if current_data_size < target_size:
    # The block fits in the current file: write it directly
    csv_writer_instance.add_data_block(data)
else:
    # Fill the remaining space of the current file, then roll over
    empty_space = target_size - (current_data_size - len(data))
    csv_writer_instance.add_data_block(data[:empty_space])
    csv_writer_instance.update_filename()           # start a new CSV file
    remaining = data[empty_space:]
    while len(remaining) >= target_size:            # write out whole files, if any
        csv_writer_instance.add_data_block(remaining[:target_size])
        csv_writer_instance.update_filename()
        remaining = remaining[target_size:]
    if remaining:                                   # partial tail goes into the new file
        csv_writer_instance.add_data_block(remaining)
    current_data_size = len(remaining)

Design Rationale:

  • Ensure each CSV file contains precise data volume
  • Handle data boundaries across files
  • Avoid data loss or duplication

4. Downsampling Processing and Queue Management (main.py - Version 7.0.0)

# Downsampling processing
channels = 3
step = channels * WEB_DOWNSAMPLE_RATIO  # 3 * 50 = 150
downsampled_chunk = []

for i in range(0, len(data), step):
    if i + channels <= len(data):
        downsampled_chunk.extend(data[i : i + channels])

# Put into Web queue
if downsampled_chunk:
    with data_lock:
        web_data_queue.put(downsampled_chunk)

Design Rationale:

  • Downsampling: Significantly reduces frontend data transmission and plotting burden
  • Queue mechanism: Use queue instead of large buffer, more stable memory usage
  • Raw data retention: CSV and SQL still use raw data (no downsampling), ensure data integrity
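The effect of the ratio can be verified on synthetic interleaved data. The loop is the same as in the snippet above; a ratio of 2 is used here instead of the production value of 50 so the result is easy to inspect:

```python
channels = 3
ratio = 2                # WEB_DOWNSAMPLE_RATIO is 50 in production
step = channels * ratio  # keep 1 frame out of every `ratio` frames

# 4 interleaved frames: (x0, y0, z0), (x1, y1, z1), ...
data = [0.0, 0.1, 0.2,  1.0, 1.1, 1.2,  2.0, 2.1, 2.2,  3.0, 3.1, 3.2]

downsampled = []
for i in range(0, len(data), step):
    if i + channels <= len(data):
        # Keep all three channel values of the selected frame together
        downsampled.extend(data[i : i + channels])

print(downsampled)  # frames 0 and 2 kept: [0.0, 0.1, 0.2, 2.0, 2.1, 2.2]
```

Note that whole frames are kept or dropped, so the three channels always stay aligned.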

5. Queue Management (prowavedaq.py)

# Queue full handling
try:
    self.data_queue.put_nowait(processed_data)
except queue.Full:
    # Remove oldest data
    try:
        self.data_queue.get_nowait()
        self.data_queue.put_nowait(processed_data)
    except queue.Empty:
        pass

Design Rationale:

  • Avoids queue blocking
  • When data arrives faster than it is consumed, the oldest block is discarded (FIFO)
  • Ensures the latest data is prioritized
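The drop-oldest behaviour can be observed with a tiny queue (the helper function repackages the snippet above; inspecting `q.queue` directly is for demonstration only):

```python
import queue

def put_drop_oldest(q, item):
    """Enqueue `item`; if the queue is full, discard the oldest entry first."""
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()        # drop the oldest block
            q.put_nowait(item)    # then insert the newest
        except queue.Empty:
            pass

q = queue.Queue(maxsize=2)
for block in ["old", "middle", "new"]:
    put_drop_oldest(q, block)

print(list(q.queue))  # ['middle', 'new'] — 'old' was discarded
```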

Operation Flow

System Startup Flow

1. Execute main.py
   │
   ├─→ Create Flask application
   │
   ├─→ Initialize global state variables
   │   - realtime_data = []
   │   - is_collecting = False
   │   - data_counter = 0
   │
   ├─→ Start Flask Thread (background)
   │   - Listen on 0.0.0.0:8080
   │   - Handle HTTP requests
   │
   └─→ Main thread enters wait loop
       - while True: time.sleep(1)
       - Wait for Ctrl+C interrupt

Data Collection Startup Flow (User Clicks "Start Reading")

1. Frontend sends POST /start
   │
   ├─→ Validate Label
   │
   ├─→ Reset State
   │   - Clear realtime_data
   │   - Reset counter
   │
   ├─→ Load Configuration Files
   │   - Read csv.ini (CSV file splitting interval)
   │   - Read sql.ini (SQL upload interval)
   │   - Read ProWaveDAQ.ini (device settings)
   │
   ├─→ Initialize DAQ
   │   - Create ProWaveDAQ instance
   │   - Establish Modbus connection
   │   - Set sample rate
   │
   ├─→ Calculate Target Size
   │   - target_size = second × sample_rate × channels
   │
   ├─→ Create Output Directory
   │   - output/ProWaveDAQ/{timestamp}_{label}/
   │
   ├─→ Initialize CSV Writer (if enabled)
   │   - Create first CSV file
   │
   ├─→ Initialize SQL Uploader (if enabled)
   │   - Create temporary file directory
   │   - Create first temporary file
   │
   ├─→ Start Collection Thread
   │   - is_collecting = True
   │   - collection_loop() starts executing
   │
   ├─→ Start CSV Writer Thread (if CSV enabled)
   │   - csv_writer_loop() starts executing
   │
   ├─→ Start SQL Writer Thread (if SQL enabled)
   │   - sql_writer_loop() starts executing
   │
   ├─→ Start DAQ Reading Thread
   │   - daq_instance.start_reading()
   │   - _read_loop() starts executing
   │
   └─→ Return success response
       - Frontend receives response, starts updating chart

Data Collection Operation Flow (Continuous Execution)

DAQ Reading Thread (_read_loop):
├─→ Read data length (address 0x02)
├─→ Read data registers based on length
├─→ Convert data format (16-bit → float)
├─→ Put into data queue
└─→ Repeat execution

Collection Thread (collection_loop):
├─→ Get data from DAQ queue
├─→ Update real-time data buffer (for frontend display)
├─→ Put data into CSV queue (if CSV enabled)
├─→ Put data into SQL queue (if SQL enabled)
└─→ Repeat execution

CSV Writer Thread (csv_writer_loop):
├─→ Get data from CSV queue
├─→ Process CSV writing (includes file splitting logic)
└─→ Repeat execution

SQL Writer Thread (sql_writer_loop):
├─→ Get data from SQL queue
├─→ Write to SQL temporary file
├─→ Scheduled upload (if target size reached)
└─→ Repeat execution

Flask Thread:
├─→ Handle HTTP requests
├─→ /data: Return real-time data
├─→ Other routes: Handle corresponding functions
└─→ Continuous operation

Frontend JavaScript:
├─→ Call /data API every 200ms
├─→ Update Chart.js chart
└─→ Display status information

Data Collection Stop Flow (User Clicks "Stop Reading")

1. Frontend sends POST /stop
   │
   ├─→ Set is_collecting = False
   │   - Collection Thread ends loop
   │
   ├─→ Stop DAQ Reading
   │   - daq_instance.stop_reading()
   │   - Set reading = False
   │   - Reading Thread ends loop
   │   - Close Modbus connection
   │
   ├─→ Wait for All Queues to Finish Processing
   │   - csv_data_queue.join() (if CSV enabled)
   │   - sql_data_queue.join() (if SQL enabled)
   │
   ├─→ Close CSV Writer (if enabled)
   │   - csv_writer_instance.close()
   │   - Close current file
   │
   ├─→ Close SQL Uploader (if enabled)
   │   - sql_uploader_instance.close()
   │   - Upload remaining temporary files
   │
   └─→ Return success response
       - Frontend stops chart update
       - Update UI status

System Shutdown Flow (Ctrl+C)

1. User presses Ctrl+C
   │
   ├─→ Catch KeyboardInterrupt
   │
   ├─→ Check if Collecting
   │   │
   │   ├─→ If yes: Execute stop flow
   │   │   - is_collecting = False
   │   │   - Stop DAQ
   │   │   - Wait for all queues to finish processing
   │   │   - Close CSV Writer (if enabled)
   │   │   - Close SQL Uploader (if enabled)
   │   │
   │   └─→ If no: Directly close
   │
   └─→ Output shutdown message
       - Flask Thread automatically terminates (daemon=True)
       - Program ends

Error Handling and Exceptions

Connection Error Handling

  1. Modbus Connection Interruption

    • When connection interruption detected, automatically attempt reconnection
    • Maximum 5 attempts
    • Stop reading after 5 consecutive failures
  2. Read Errors

    • Error counter increments on read failure
    • Stop reading after 5 consecutive errors
    • Output error message to terminal
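The retry policy can be sketched as follows; `connect_fn` stands in for the actual Modbus connect call, the 5-attempt limit matches the manual, and `flaky_connect` is a made-up demo stub:

```python
import time

MAX_RETRIES = 5

def connect_with_retry(connect_fn, delay=0.0):
    """Try connect_fn up to MAX_RETRIES times; return True on success."""
    for attempt in range(1, MAX_RETRIES + 1):
        if connect_fn():
            return True
        time.sleep(delay)   # back off before the next attempt
    return False            # give up: the caller should stop reading

# Demo: a link that only comes up on the third attempt
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    return attempts["n"] >= 3

ok = connect_with_retry(flaky_connect)
```

Returning `False` after the final attempt lets the read loop shut down cleanly instead of retrying forever.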

Data Processing Errors

  1. Queue Full

    • Automatically remove oldest data
    • Ensure latest data prioritized
  2. CSV Write Errors

    • Exceptions are caught and error messages are output
    • Ongoing data collection is not affected

Frontend Error Handling

  1. API Request Failure

    • An error message is displayed
    • Chart updates are not interrupted (requests keep retrying)
  2. Data Format Errors

    • Check if data exists
    • Validate data length

Performance Considerations

Memory Usage

  • Real-time Data Buffer: Maximum 10000 points (approximately 80 KB)
  • DAQ Data Queue: Maximum 1000 entries (approximately 1 MB)
  • Frontend Chart Data: Maximum 5000 points (approximately 40 KB)

CPU Usage

  • Data Reading: Non-blocking, avoiding CPU overload
  • Data Processing: Brief sleep (10 ms) between iterations to avoid busy waiting
  • Chart Update: 200 ms interval, balancing real-time responsiveness and efficiency

Disk I/O

  • CSV Writing: Immediately flush() after each write, ensure no data loss
  • File Splitting: Avoid single file too large
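The write-then-flush pattern looks like this; the three-column layout follows the CSV format implied by the channel count, while the file path and header names are illustrative:

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo_prowavedaq.csv")

with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ch1", "ch2", "ch3"])   # header row (illustrative names)
    writer.writerow([0.01, -0.02, 0.03])
    f.flush()                # push buffered rows to the OS immediately
    os.fsync(f.fileno())     # optionally force them onto the disk as well

with open(path) as f:
    rows = f.read().splitlines()
```

Flushing after every write trades a little throughput for the guarantee that at most one in-flight block is lost on a crash.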

Extension and Customization

Modify Channel Count

The system is currently fixed at 3 channels. To change this:

  1. prowavedaq.py: Modify data processing logic
  2. csv_writer.py: Modify __init__ and header row
  3. main.py: Change channels = 3 to variable
  4. index.html: Modify Chart.js dataset count

Modify Sample Rate

Simply modify the sampleRate parameter in API/ProWaveDAQ.ini.

Modify File Splitting Interval

Simply modify the second parameter in the [DumpUnit] section of API/csv.ini.

Modify SQL Upload Interval

Simply modify the second parameter in the [DumpUnit] section of API/sql.ini.

Add New Features

  1. Add API Routes: Add @app.route() in main.py
  2. Add Frontend Pages: Add HTML in templates/
  3. Add Data Processing: Add logic in collection_loop()
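For step 1, a new endpoint can be added as in the sketch below; the /version route and its payload are hypothetical examples, not part of the existing API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/version")   # hypothetical example route
def version():
    return jsonify({"app": "ProWaveDAQ", "version": "9.0.0"})

# Exercise the route with Flask's built-in test client
client = app.test_client()
resp = client.get("/version")
```

In main.py the decorator would attach to the existing `app` object instead of creating a new one.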

Summary

This system is a complete real-time data acquisition and visualization platform with the following main characteristics:

  1. Modular Design: Each module has clear responsibilities, easy to maintain
  2. Thread Safety: Uses locks and queues to ensure data consistency
  3. Memory Management: Limits buffer size, avoid memory overload
  4. Error Handling: Complete error handling mechanism, improves stability
  5. Web Interface: Provides user-friendly interface, no terminal operation required
  6. Automatic File Splitting: Automatically splits and stores files based on time intervals, convenient for data management

The system has been optimized for high-frequency data acquisition and can stably process data streams of 23,436 data points per second and display them in real-time in the browser.


Last Updated: January 6, 2026
Document Version: 9.0.0
Author: Albert Wang


Multi-language Version / 多語言版本
中文版本 (程式運作說明.md) | English Version (Technical_Manual_EN.md)