33 changes: 33 additions & 0 deletions docs/api-reference.md
@@ -263,6 +263,39 @@ Get the VNC console URL for the instance.
}
```

### GET /instances/:id/stats
Get real-time resource usage stats for an instance (CPU, memory, network I/O, disk I/O).

**Response:**
```json
{
"cpu_percentage": 25.5,
"memory_usage_bytes": 524288000,
"memory_limit_bytes": 1073741824,
"memory_percentage": 48.8,
"network_rx_bytes": 1024000,
"network_tx_bytes": 512000,
"disk_read_bytes": 10240000,
"disk_write_bytes": 5120000,
"cpu_time_nanoseconds": 5000000000
}
```

**Fields:**
- `cpu_percentage` (float): CPU usage as a percentage of available CPU
- `memory_usage_bytes` (float): Current memory usage in bytes
- `memory_limit_bytes` (float): Memory limit in bytes
- `memory_percentage` (float): Memory usage as percentage of limit
- `network_rx_bytes` (uint64): Total network bytes received
- `network_tx_bytes` (uint64): Total network bytes transmitted
- `disk_read_bytes` (uint64): Total disk bytes read
- `disk_write_bytes` (uint64): Total disk bytes written
- `cpu_time_nanoseconds` (uint64, optional): Cumulative CPU time in nanoseconds (Libvirt backend)

**Error Responses:**
- `404` — Instance not found
- `503` — Backend stats unavailable
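For consumers of this endpoint, decoding the response body shown above is straightforward. The sketch below is illustrative only — the `InstanceStats` struct here is a client-side mirror of the documented fields, not the server's own type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// InstanceStats mirrors the documented response fields of
// GET /instances/:id/stats on the client side.
type InstanceStats struct {
	CPUPercentage      float64 `json:"cpu_percentage"`
	MemoryUsageBytes   float64 `json:"memory_usage_bytes"`
	MemoryLimitBytes   float64 `json:"memory_limit_bytes"`
	MemoryPercentage   float64 `json:"memory_percentage"`
	NetworkRxBytes     uint64  `json:"network_rx_bytes"`
	NetworkTxBytes     uint64  `json:"network_tx_bytes"`
	DiskReadBytes      uint64  `json:"disk_read_bytes"`
	DiskWriteBytes     uint64  `json:"disk_write_bytes"`
	CPUTimeNanoseconds uint64  `json:"cpu_time_nanoseconds,omitempty"`
}

// decodeStats unmarshals a stats response body into InstanceStats.
func decodeStats(body []byte) (InstanceStats, error) {
	var s InstanceStats
	err := json.Unmarshal(body, &s)
	return s, err
}

func main() {
	body := []byte(`{"cpu_percentage":25.5,"memory_usage_bytes":524288000,` +
		`"memory_limit_bytes":1073741824,"memory_percentage":48.8,` +
		`"network_rx_bytes":1024000,"network_tx_bytes":512000,` +
		`"disk_read_bytes":10240000,"disk_write_bytes":5120000,` +
		`"cpu_time_nanoseconds":5000000000}`)
	s, err := decodeStats(body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("cpu=%.1f%% mem=%.1f%% rx=%d\n",
		s.CPUPercentage, s.MemoryPercentage, s.NetworkRxBytes)
}
```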
Comment on lines +295 to +297
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Documented error code does not match actual handler behavior

Line 297 documents 503, but this endpoint currently returns 500 for internal/backend failures via httputil.Error default mapping. Please align docs with implementation (or implement explicit 503 mapping if that is the intended contract).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/api-reference.md` around lines 295 - 297, The docs list a `503` response
but the handler currently returns `500` via the httputil.Error default mapping;
update either the docs or the code so they match. If you intend to keep `503`,
change the error mapping in the handler (where httputil.Error is
produced/returned) to emit an HTTP 503 for backend/internal stats failures
(adjust the error creation or mapping logic in the endpoint handler function
that calls into httputil.Error); otherwise, update the docs in
docs/api-reference.md to document `500` instead of `503` so the documentation
matches the current implementation.


### POST /instances/:id/pause
Pause a running instance (freezes CPU, retains memory/network).

16 changes: 16 additions & 0 deletions docs/swagger/docs.go
@@ -9056,6 +9056,16 @@ const docTemplate = `{
"cpu_percentage": {
"type": "number"
},
"cpu_time_nanoseconds": {
"description": "only populated by Libvirt backend; Docker uses delta-based percentage instead",
"type": "integer"
},
"disk_read_bytes": {
"type": "integer"
},
"disk_write_bytes": {
"type": "integer"
},
"memory_limit_bytes": {
"type": "number"
},
@@ -9064,6 +9074,12 @@
},
"memory_usage_bytes": {
"type": "number"
},
"network_rx_bytes": {
"type": "integer"
},
"network_tx_bytes": {
"type": "integer"
}
}
},
16 changes: 16 additions & 0 deletions docs/swagger/swagger.json
@@ -9048,6 +9048,16 @@
"cpu_percentage": {
"type": "number"
},
"cpu_time_nanoseconds": {
"description": "only populated by Libvirt backend; Docker uses delta-based percentage instead",
"type": "integer"
},
"disk_read_bytes": {
"type": "integer"
},
"disk_write_bytes": {
"type": "integer"
},
"memory_limit_bytes": {
"type": "number"
},
@@ -9056,6 +9066,12 @@
},
"memory_usage_bytes": {
"type": "number"
},
"network_rx_bytes": {
"type": "integer"
},
"network_tx_bytes": {
"type": "integer"
}
}
},
12 changes: 12 additions & 0 deletions docs/swagger/swagger.yaml
@@ -663,12 +663,24 @@ definitions:
properties:
cpu_percentage:
type: number
cpu_time_nanoseconds:
description: only populated by Libvirt backend; Docker uses delta-based percentage
instead
type: integer
disk_read_bytes:
type: integer
disk_write_bytes:
type: integer
memory_limit_bytes:
type: number
memory_percentage:
type: number
memory_usage_bytes:
type: number
network_rx_bytes:
type: integer
network_tx_bytes:
type: integer
Comment on lines +666 to +683

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In OpenAPI 2.0 (Swagger), how do code generators treat `type: integer` when `format` is omitted, and what is the recommended format for uint64-like counters?

💡 Result:

1. In OpenAPI/Swagger 2.0, when a schema or parameter uses `type: integer` but omits `format`, code generators commonly treat it as the default 32-bit signed integer (int32). For example, OpenAPI Generator's Go generator maps `integer` without a format to an int32-like type rather than a generic integer.
2. For uint64-like counters, the recommended approach is the `uint64` format (not plain `integer`/`int64`). The OpenAPI format registry defines `uint64` as an unsigned 64-bit integer and recommends representing values outside the 53-bit safe range as JSON strings.


Specify uint64 format for high-range counters in schema

Lines 666-683 define byte/time counters as bare type: integer. Code generators interpret this as 32-bit signed integers by default, which is insufficient for backend uint64 fields and risks overflow/truncation in SDK consumers.

Use format: uint64 to explicitly match backend semantics. For JSON serialization of values exceeding 53-bit safe range, consider representing large values as strings when consumed by JavaScript clients.
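The 53-bit concern is easy to reproduce in Go itself: decoding a JSON number into `interface{}` goes through `float64`, the same representation JavaScript clients use, so counters above 2^53 silently lose precision. The demonstration below is not from the PR; it only illustrates the reviewer's point:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// roundTrip encodes a uint64 as a JSON number and decodes it back through
// interface{} (i.e. float64), mimicking a naive JavaScript-style client.
func roundTrip(n uint64) uint64 {
	raw, _ := json.Marshal(n)
	var v interface{}
	_ = json.Unmarshal(raw, &v)
	return uint64(v.(float64))
}

func main() {
	small := uint64(1 << 53)   // 2^53: still exactly representable in float64
	big := uint64(1<<53) + 1   // first value float64 cannot represent

	fmt.Println(roundTrip(small) == small) // true
	fmt.Println(roundTrip(big) == big)     // false: precision lost in transit

	// String encoding preserves every digit for large counters.
	fmt.Println(strconv.FormatUint(big, 10))
}
```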

🛠️ Suggested schema patch
       cpu_time_nanoseconds:
         description: only populated by Libvirt backend; Docker uses delta-based percentage
           instead
         type: integer
+        format: uint64
       disk_read_bytes:
         type: integer
+        format: uint64
       disk_write_bytes:
         type: integer
+        format: uint64
@@
       network_rx_bytes:
         type: integer
+        format: uint64
       network_tx_bytes:
         type: integer
+        format: uint64
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/swagger/swagger.yaml` around lines 666 - 683, The integer counters in
the schema (cpu_time_nanoseconds, disk_read_bytes, disk_write_bytes,
memory_limit_bytes, memory_usage_bytes, network_rx_bytes, network_tx_bytes)
should be annotated with format: uint64 so code generators treat them as 64-bit
unsigned counters; update those property definitions in swagger.yaml to add
format: uint64 and leave memory_percentage as number (float). Also add a note in
the schema docs or examples recommending string encoding for values >53-bit when
consumed by JavaScript clients to avoid precision loss.

type: object
domain.InstanceStatus:
enum:
28 changes: 24 additions & 4 deletions internal/core/domain/instance.go
@@ -91,10 +91,15 @@ type Instance struct {
// InstanceStats contains real-time resource usage metrics.
// Values are instantaneous snapshots from the compute backend.
type InstanceStats struct {
CPUPercentage float64 `json:"cpu_percentage"`
MemoryUsageBytes float64 `json:"memory_usage_bytes"`
MemoryLimitBytes float64 `json:"memory_limit_bytes"`
MemoryPercentage float64 `json:"memory_percentage"`
CPUPercentage float64 `json:"cpu_percentage"`
MemoryUsageBytes float64 `json:"memory_usage_bytes"`
MemoryLimitBytes float64 `json:"memory_limit_bytes"`
MemoryPercentage float64 `json:"memory_percentage"`
NetworkRxBytes uint64 `json:"network_rx_bytes"`
NetworkTxBytes uint64 `json:"network_tx_bytes"`
DiskReadBytes uint64 `json:"disk_read_bytes"`
DiskWriteBytes uint64 `json:"disk_write_bytes"`
CPUTimeNanoseconds uint64 `json:"cpu_time_nanoseconds,omitempty"` // only populated by Libvirt backend; Docker uses delta-based percentage instead
}

// RawDockerStats mirrors Docker's stats payload for CPU/memory calculations.
@@ -104,6 +109,7 @@ type RawDockerStats struct {
TotalUsage uint64 `json:"total_usage"`
} `json:"cpu_usage"`
SystemCPUUsage uint64 `json:"system_cpu_usage"`
CPUTime uint64 `json:"cpu_time"` // libvirt: cumulative CPU time in nanoseconds
} `json:"cpu_stats"`
PreCPUStats struct {
CPUUsage struct {
@@ -115,4 +121,18 @@
Usage uint64 `json:"usage"`
Limit uint64 `json:"limit"`
} `json:"memory_stats"`
NetworkStats map[string]struct {
RxBytes uint64 `json:"rx_bytes"`
TxBytes uint64 `json:"tx_bytes"`
} `json:"network_stats"`
BlkioStats struct {
IoServiceBytes []BlkioStatEntry `json:"io_service_bytes_recursive"` // Docker's stats API reports blkio under this key
} `json:"blkio_stats"`
}

// BlkioStatEntry represents a single block I/O stat entry.
type BlkioStatEntry struct {
Op string `json:"op"`
Device string `json:"device"`
Value uint64 `json:"value"`
}
31 changes: 27 additions & 4 deletions internal/core/services/instance.go
@@ -1290,11 +1290,34 @@ func (s *InstanceService) calculateInstanceStats(stats *domain.RawDockerStats) *
memPercent = (memUsage / memLimit) * 100.0
}

// Sum network rx/tx across all interfaces
var rxBytes, txBytes uint64
for _, net := range stats.NetworkStats {
rxBytes += net.RxBytes
txBytes += net.TxBytes
}

// Sum block read/write bytes
var readBytes, writeBytes uint64
for _, entry := range stats.BlkioStats.IoServiceBytes {
switch entry.Op {
case "read", "Read":
readBytes += entry.Value
case "write", "Write":
writeBytes += entry.Value
}
}

return &domain.InstanceStats{
CPUPercentage: cpuPercent,
MemoryUsageBytes: memUsage,
MemoryLimitBytes: memLimit,
MemoryPercentage: memPercent,
CPUPercentage: cpuPercent,
MemoryUsageBytes: memUsage,
MemoryLimitBytes: memLimit,
MemoryPercentage: memPercent,
NetworkRxBytes: rxBytes,
NetworkTxBytes: txBytes,
DiskReadBytes: readBytes,
DiskWriteBytes: writeBytes,
CPUTimeNanoseconds: stats.CPUStats.CPUTime,
}
}
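Extracted as a standalone sketch, the block-I/O summing above matches both op spellings because cgroup v1 reports capitalized ops (`Read`/`Write`) while cgroup v2 reports lowercase. The `BlkioStatEntry` shape is reproduced from the diff; everything else here is illustrative:

```go
package main

import "fmt"

// BlkioStatEntry mirrors the domain type added in this PR.
type BlkioStatEntry struct {
	Op     string
	Device string
	Value  uint64
}

// sumBlkio totals read and write bytes across all block-I/O entries,
// accepting both cgroup v1 ("Read"/"Write") and cgroup v2 ("read"/"write")
// op names; other ops such as "Total" or "Sync" are ignored.
func sumBlkio(entries []BlkioStatEntry) (read, write uint64) {
	for _, e := range entries {
		switch e.Op {
		case "read", "Read":
			read += e.Value
		case "write", "Write":
			write += e.Value
		}
	}
	return
}

func main() {
	entries := []BlkioStatEntry{
		{Op: "Read", Value: 1000},
		{Op: "read", Value: 5000},
		{Op: "Write", Value: 2000},
		{Op: "Total", Value: 9999}, // ignored: not a read or write op
	}
	r, w := sumBlkio(entries)
	fmt.Println(r, w) // 6000 2000
}
```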

90 changes: 80 additions & 10 deletions internal/core/services/instance_internal_test.go
@@ -98,20 +98,90 @@ func TestInstanceServiceInternalUpdateVolumesAfterLaunch(t *testing.T) {

func TestInstanceService_CalculateInstanceStats(t *testing.T) {
svc := &InstanceService{}
stats := &domain.RawDockerStats{}

stats.CPUStats.CPUUsage.TotalUsage = 1000
stats.CPUStats.SystemCPUUsage = 10000
t.Run("Basic CPU and Memory", func(t *testing.T) {
stats := &domain.RawDockerStats{}
stats.CPUStats.CPUUsage.TotalUsage = 1000
stats.CPUStats.SystemCPUUsage = 10000
stats.PreCPUStats.CPUUsage.TotalUsage = 500
stats.PreCPUStats.SystemCPUUsage = 5000
stats.MemoryStats.Usage = 1024
stats.MemoryStats.Limit = 2048

res := svc.calculateInstanceStats(stats)
assert.InDelta(t, 10.0, res.CPUPercentage, 0.01) // (1000-500)/(10000-5000) * 100 = 10%
assert.InDelta(t, 50.0, res.MemoryPercentage, 0.01)
assert.Equal(t, uint64(0), res.NetworkRxBytes)
assert.Equal(t, uint64(0), res.NetworkTxBytes)
assert.Equal(t, uint64(0), res.DiskReadBytes)
assert.Equal(t, uint64(0), res.DiskWriteBytes)
})

t.Run("Network I/O multiple interfaces", func(t *testing.T) {
stats := &domain.RawDockerStats{}
stats.NetworkStats = map[string]struct {
RxBytes uint64 `json:"rx_bytes"`
TxBytes uint64 `json:"tx_bytes"`
}{
"eth0": {RxBytes: 1000, TxBytes: 500},
"eth1": {RxBytes: 2000, TxBytes: 1500},
}

res := svc.calculateInstanceStats(stats)
assert.Equal(t, uint64(3000), res.NetworkRxBytes) // 1000 + 2000
assert.Equal(t, uint64(2000), res.NetworkTxBytes) // 500 + 1500
})

stats.PreCPUStats.CPUUsage.TotalUsage = 500
stats.PreCPUStats.SystemCPUUsage = 5000
t.Run("Block I/O read and write", func(t *testing.T) {
stats := &domain.RawDockerStats{}
stats.BlkioStats.IoServiceBytes = []domain.BlkioStatEntry{
{Op: "read", Value: 5000},
{Op: "write", Value: 3000},
{Op: "Read", Value: 1000}, // uppercase variant
{Op: "Write", Value: 2000}, // uppercase variant
}

res := svc.calculateInstanceStats(stats)
assert.Equal(t, uint64(6000), res.DiskReadBytes) // 5000 + 1000
assert.Equal(t, uint64(5000), res.DiskWriteBytes) // 3000 + 2000
})

stats.MemoryStats.Usage = 1024
stats.MemoryStats.Limit = 2048
t.Run("CPU time nanoseconds", func(t *testing.T) {
stats := &domain.RawDockerStats{}
stats.CPUStats.CPUTime = 5000000000 // 5 seconds of CPU time, in nanoseconds

res := svc.calculateInstanceStats(stats)
assert.InDelta(t, 10.0, res.CPUPercentage, 0.01) // (1000-500)/(10000-5000) * 100 = 10%
assert.InDelta(t, 50.0, res.MemoryPercentage, 0.01)
res := svc.calculateInstanceStats(stats)
assert.Equal(t, uint64(5000000000), res.CPUTimeNanoseconds)
})

t.Run("Combined all fields", func(t *testing.T) {
stats := &domain.RawDockerStats{}
stats.CPUStats.CPUUsage.TotalUsage = 800
stats.CPUStats.SystemCPUUsage = 8000
stats.PreCPUStats.CPUUsage.TotalUsage = 400
stats.PreCPUStats.SystemCPUUsage = 4000
stats.MemoryStats.Usage = 512
stats.MemoryStats.Limit = 1024
stats.CPUStats.CPUTime = 3000000000
stats.NetworkStats = map[string]struct {
RxBytes uint64 `json:"rx_bytes"`
TxBytes uint64 `json:"tx_bytes"`
}{
"eth0": {RxBytes: 500, TxBytes: 250},
}
stats.BlkioStats.IoServiceBytes = []domain.BlkioStatEntry{
{Op: "read", Value: 2048},
}

res := svc.calculateInstanceStats(stats)
assert.InDelta(t, 10.0, res.CPUPercentage, 0.01)
assert.InDelta(t, 50.0, res.MemoryPercentage, 0.01)
assert.Equal(t, uint64(500), res.NetworkRxBytes)
assert.Equal(t, uint64(250), res.NetworkTxBytes)
assert.Equal(t, uint64(2048), res.DiskReadBytes)
assert.Equal(t, uint64(0), res.DiskWriteBytes)
assert.Equal(t, uint64(3000000000), res.CPUTimeNanoseconds)
})
}
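The expected percentages in these tests follow the usual delta formula, sketched below in simplified form. It deliberately omits the per-CPU multiplier that Docker's own CLI applies, matching what the assertions above expect:

```go
package main

import "fmt"

// cpuPercent computes the delta-based CPU percentage the tests assert:
// the container's CPU-time delta divided by the system CPU-time delta,
// scaled to a percentage. Returns 0 when the deltas are unusable.
func cpuPercent(total, preTotal, sys, preSys uint64) float64 {
	if total < preTotal || sys <= preSys {
		return 0 // counters went backwards or no system time elapsed
	}
	cpuDelta := float64(total - preTotal)
	sysDelta := float64(sys - preSys)
	return cpuDelta / sysDelta * 100.0
}

func main() {
	// (1000-500)/(10000-5000) * 100 = 10%
	fmt.Println(cpuPercent(1000, 500, 10000, 5000)) // 10
}
```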

func TestInstanceService_FormatContainerName(t *testing.T) {
9 changes: 9 additions & 0 deletions internal/core/services/instance_unit_test.go
@@ -1578,6 +1578,15 @@ func testInstanceServiceUnitRepoErrors(t *testing.T) {
require.Error(t, err)
})

t.Run("GetInstanceStats_ComputeError", func(t *testing.T) {
repo.On("GetByName", mock.Anything, "test-inst").Return(inst, nil).Once()
compute.On("GetInstanceStats", mock.Anything, "cid-1").Return(nil, fmt.Errorf("stats unavailable")).Once()

_, err := svc.GetInstanceStats(ctx, "test-inst")
require.Error(t, err)
assert.Contains(t, err.Error(), "stats unavailable")
})

t.Run("Exec_NotFound", func(t *testing.T) {
repo.On("GetByName", mock.Anything, mock.Anything).Return(nil, svcerrors.New(svcerrors.NotFound, "not found")).Once()
repo.On("GetByID", mock.Anything, mock.Anything).Return(nil, svcerrors.New(svcerrors.NotFound, "not found")).Once()
30 changes: 29 additions & 1 deletion internal/handlers/instance_handler_test.go
@@ -460,7 +460,17 @@ func TestInstanceHandlerGetStats(t *testing.T) {
r.GET(instancesPath+"/:id/stats", handler.GetStats)

id := uuid.New().String()
stats := &domain.InstanceStats{CPUPercentage: 10.5, MemoryUsageBytes: 128}
stats := &domain.InstanceStats{
CPUPercentage: 10.5,
MemoryUsageBytes: 128,
MemoryLimitBytes: 256,
MemoryPercentage: 50.0,
NetworkRxBytes: 1024,
NetworkTxBytes: 512,
DiskReadBytes: 4096,
DiskWriteBytes: 2048,
CPUTimeNanoseconds: 3000000000,
}
mockSvc.On("GetInstanceStats", mock.Anything, id).Return(stats, nil)

req := httptest.NewRequest(http.MethodGet, instancesPath+"/"+id+"/stats", nil)
@@ -469,6 +479,24 @@
r.ServeHTTP(w, req)

assert.Equal(t, http.StatusOK, w.Code)

// Verify all new stats fields are present in the JSON response
// httputil.Success wraps data in {"data": {...}}
var wrapper struct {
Data map[string]interface{} `json:"data"`
}
err := json.Unmarshal(w.Body.Bytes(), &wrapper)
require.NoError(t, err)

assert.InDelta(t, 10.5, wrapper.Data["cpu_percentage"], 0.01)
assert.InDelta(t, 128, wrapper.Data["memory_usage_bytes"], 0.01)
assert.InDelta(t, 256, wrapper.Data["memory_limit_bytes"], 0.01)
assert.InDelta(t, 50.0, wrapper.Data["memory_percentage"], 0.01)
assert.Equal(t, uint64(1024), uint64(wrapper.Data["network_rx_bytes"].(float64)))
assert.Equal(t, uint64(512), uint64(wrapper.Data["network_tx_bytes"].(float64)))
assert.Equal(t, uint64(4096), uint64(wrapper.Data["disk_read_bytes"].(float64)))
assert.Equal(t, uint64(2048), uint64(wrapper.Data["disk_write_bytes"].(float64)))
assert.Equal(t, uint64(3000000000), uint64(wrapper.Data["cpu_time_nanoseconds"].(float64)))
}

func TestInstanceHandlerLaunchWithVolumesAndVPC(t *testing.T) {