feat(execution): Predictive Pipeline — intelligent outcome prediction #589
nikolasdehor wants to merge 1 commit into SynkraAI:main
Conversation
… Predictive pipeline that estimates task outcomes before execution using historical patterns. Implements weighted k-NN over feature vectors, EWMA for duration estimation, anomaly detection, risk assessment, and an agent/strategy recommendation engine. 89 unit tests covering all scenarios.
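The EWMA duration estimate mentioned in the description can be sketched in a few lines. The alpha value and seeding behavior here are assumptions for illustration, not the PR's exact implementation:

```javascript
// Exponentially weighted moving average over observed task durations.
// alpha = 0.3 is an assumed smoothing factor; higher alpha reacts faster
// to recent samples, lower alpha is smoother.
function ewmaUpdate(prev, sample, alpha = 0.3) {
  // First observation seeds the average directly.
  if (prev == null) return sample;
  return alpha * sample + (1 - alpha) * prev;
}

let estimate = null;
for (const durationMs of [1200, 900, 1500, 1100]) {
  estimate = ewmaUpdate(estimate, durationMs);
}
```

Recent runs dominate the estimate while older history decays geometrically, which is why EWMA is a common choice for duration prediction with drifting workloads.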
@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Introduces a new PredictivePipeline class that implements a k-NN-based predictive system for task execution with multi-stage orchestration, persistent data storage, risk assessment, and event-based observability. Includes a backward-compatible wrapper module and a comprehensive test suite covering all public APIs and edge cases.
Sequence Diagram

```mermaid
sequenceDiagram
  participant Client
  participant Pipeline as PredictivePipeline
  participant Preprocess
  participant Match
  participant Predict
  participant Score
  participant Recommend
  participant Storage as Persistence Layer
  participant Emitter as EventEmitter
  Client->>Pipeline: predict(taskType, complexity, ...)
  Pipeline->>Preprocess: extractFeatures()
  Preprocess-->>Pipeline: normalized features
  Pipeline->>Storage: loadPersistedOutcomes()
  Storage-->>Pipeline: historical outcomes
  Pipeline->>Match: findKNearestNeighbors()
  Match->>Match: compute similarity scores
  Match-->>Pipeline: k similar tasks
  Pipeline->>Predict: computeWeightedPrediction()
  Predict->>Predict: aggregate success probability<br/>duration EWMA, resources
  Predict-->>Pipeline: prediction data
  Pipeline->>Score: validateConfidence()<br/>detectAnomalies()
  Score->>Score: assess risk factors
  Score-->>Pipeline: confidence + risk level
  Pipeline->>Recommend: selectBestAgent()<br/>selectBestStrategy()
  Recommend-->>Pipeline: recommendations
  Pipeline->>Emitter: emit('prediction', result)
  Emitter-->>Client: observable event
  Client->>Pipeline: recordOutcome(result)
  Pipeline->>Storage: persist outcome + stats
  Storage-->>Pipeline: write complete
  Pipeline->>Emitter: emit('outcome-recorded')
```
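The Match and Predict stages in the diagram can be approximated by a similarity-weighted k-NN. The distance metric and inverse-distance weighting below are assumptions; the PR's feature extraction and weighting scheme may differ:

```javascript
// Euclidean distance between two numeric feature vectors of equal length.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

// Estimate success probability from the k most similar past outcomes,
// weighting closer neighbors more heavily.
function predictSuccess(history, features, k = 3) {
  const neighbors = history
    .map((h) => ({ ...h, dist: euclidean(h.features, features) }))
    .sort((x, y) => x.dist - y.dist)
    .slice(0, k);
  let wSum = 0;
  let pSum = 0;
  for (const n of neighbors) {
    const w = 1 / (n.dist + 1e-6); // epsilon avoids division by zero
    wSum += w;
    pSum += w * (n.success ? 1 : 0);
  }
  // With no usable neighbors, fall back to an uninformative prior.
  return wSum > 0 ? pSum / wSum : 0.5;
}
```

A task whose nearest historical neighbors mostly succeeded gets a success probability near 1, which then feeds the confidence and risk scoring stages.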
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 6
🧹 Nitpick comments (1)
tests/core/execution/predictive-pipeline.test.js (1)
12-17: Please cover the .aios-core compatibility path with one smoke test.

Every new test loads the canonical .aiox-core module directly, so .aios-core/core/execution/predictive-pipeline.js can regress without any red signal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/predictive-pipeline.test.js` around lines 12 - 17, Add a smoke test to cover the .aios-core compatibility path by requiring the same exported symbols from the alternate module path and exercising a minimal API call: import PredictivePipeline, PipelineStage, RiskLevel, DEFAULTS from '../../../.aios-core/core/execution/predictive-pipeline' (mirroring the existing import), instantiate a PredictivePipeline (or call a small method) and assert that the key exports exist and behave as expected (e.g., typeof PredictivePipeline === 'function', PipelineStage/RiskLevel enums present, DEFAULTS has expected keys) to ensure the compatibility entrypoint doesn't regress.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e628c9ab-e22c-4de9-a73c-a89bd2d0f9f3
📒 Files selected for processing (4)
- .aios-core/core/execution/predictive-pipeline.js
- .aiox-core/core/execution/predictive-pipeline.js
- .aiox-core/install-manifest.yaml
- tests/core/execution/predictive-pipeline.test.js
```js
// Retrocompatible wrapper — canonical source in .aiox-core/
module.exports = require('../../../.aiox-core/core/execution/predictive-pipeline');
```
Use the repo's absolute import path in the compat shim.
This wrapper is a long-lived compatibility entry point, so the ../../../.aiox-core/... hop is brittle and breaks the project import rule.
As per coding guidelines, "**/*.{js,jsx,ts,tsx}: Use absolute imports instead of relative imports in all code."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aios-core/core/execution/predictive-pipeline.js around lines 1 - 2, Replace
the brittle relative require in the compat shim with the repo's absolute import
path: update the line in predictive-pipeline.js that currently does
module.exports =
require('../../../.aiox-core/core/execution/predictive-pipeline'); to use the
absolute import (for example module.exports =
require('.aiox-core/core/execution/predictive-pipeline') or the project's
configured package alias), leaving module.exports intact so the file remains a
retrocompat wrapper.
```js
constructor(projectRoot, options = {}) {
  super();

  this.projectRoot = projectRoot ?? process.cwd();
  this.kNeighbors = options.kNeighbors ?? DEFAULTS.kNeighbors;
  this.minSamplesForPrediction = options.minSamplesForPrediction ?? DEFAULTS.minSamplesForPrediction;
  this.anomalyThreshold = options.anomalyThreshold ?? DEFAULTS.anomalyThreshold;
  this.ewmaAlpha = options.ewmaAlpha ?? DEFAULTS.ewmaAlpha;
  this.highRiskThreshold = options.highRiskThreshold ?? DEFAULTS.highRiskThreshold;
  this.maxOutcomes = options.maxOutcomes ?? DEFAULTS.maxOutcomes;
  this.confidenceSampleCap = options.confidenceSampleCap ?? DEFAULTS.confidenceSampleCap;
```
🛠️ Refactor suggestion | 🟠 Major
Validate constructor options before storing them.
This public entry point accepts any numeric values today. Values like kNeighbors < 0, confidenceSampleCap <= 0, or weights outside [0, 1] silently degrade the model instead of failing fast.
As per coding guidelines, "Check for proper input validation on public API methods."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/predictive-pipeline.js around lines 71 - 81, The
constructor currently assigns numeric options directly (kNeighbors,
minSamplesForPrediction, anomalyThreshold, ewmaAlpha, highRiskThreshold,
maxOutcomes, confidenceSampleCap) without validation; add input validation in
the constructor to verify types and ranges (e.g., kNeighbors and
minSamplesForPrediction and maxOutcomes and confidenceSampleCap are positive
integers >=1, anomalyThreshold/ewmaAlpha/highRiskThreshold are numbers in [0,1])
and throw a clear Error if any check fails so invalid values fail-fast; retain
use of DEFAULTS when an option is undefined but reject out-of-range or
non-numeric values before storing to the instance fields.
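A minimal sketch of the fail-fast validation this comment asks for. The option names mirror the diff above; the DEFAULTS values and exact ranges here are assumptions:

```javascript
// Validate numeric pipeline options, keeping DEFAULTS for undefined
// values but rejecting out-of-range or non-numeric ones.
function validateOptions(options = {}, DEFAULTS = {
  kNeighbors: 5, minSamplesForPrediction: 3, anomalyThreshold: 0.8,
  ewmaAlpha: 0.3, highRiskThreshold: 0.7, maxOutcomes: 500,
  confidenceSampleCap: 50,
}) {
  const posInt = ['kNeighbors', 'minSamplesForPrediction', 'maxOutcomes', 'confidenceSampleCap'];
  const unitRange = ['anomalyThreshold', 'ewmaAlpha', 'highRiskThreshold'];
  const out = {};
  for (const key of posInt) {
    const v = options[key] ?? DEFAULTS[key];
    if (!Number.isInteger(v) || v < 1) {
      throw new Error(`${key} must be an integer >= 1, got ${v}`);
    }
    out[key] = v;
  }
  for (const key of unitRange) {
    const v = options[key] ?? DEFAULTS[key];
    if (typeof v !== 'number' || Number.isNaN(v) || v < 0 || v > 1) {
      throw new Error(`${key} must be a number in [0, 1], got ${v}`);
    }
    out[key] = v;
  }
  return out;
}
```

The constructor would then assign from the returned object, so every instance field is guaranteed to be in range.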
```js
_loadSync() {
  try {
    if (fs.existsSync(this._outcomesPath)) {
      const raw = fs.readFileSync(this._outcomesPath, 'utf8');
      const parsed = JSON.parse(raw);
      this._outcomes = Array.isArray(parsed) ? parsed : [];
    }
  } catch {
    this._outcomes = [];
  }

  try {
    if (fs.existsSync(this._modelPath)) {
      const raw = fs.readFileSync(this._modelPath, 'utf8');
      const parsed = JSON.parse(raw);
      if (parsed && typeof parsed === 'object') {
        this._model = { ...this._emptyModel(), ...parsed };
      }
    }
  } catch {
    this._model = this._emptyModel();
  }
```
Differentiate parse recovery from real I/O failures.
These bare catch blocks reset to empty state for every error. Malformed JSON may be recoverable, but permission/read errors should surface with path context; otherwise the next write can overwrite valid persisted history.
As per coding guidelines, "Verify error handling is comprehensive with proper try/catch and error context."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/predictive-pipeline.js around lines 129 - 150, The
_loadSync method currently swallows all errors when reading/parsing
this._outcomesPath and this._modelPath; change the catch logic in both
read/parse blocks (inside _loadSync) to distinguish JSON parse errors from real
I/O errors: if the thrown error is a SyntaxError (or JSON parse failure) treat
it as recoverable and reset this._outcomes or this._model (via
this._emptyModel()), but for other errors (ENOENT aside if you want missing-file
treated as empty) rethrow or throw a new Error that includes the path
(this._outcomesPath or this._modelPath) and the original error info so
permission/read errors surface with context rather than being silently ignored.
```js
_enqueueWrite(writeFn) {
  this._writeChain = this._writeChain.then(() => writeFn()).catch((err) => {
    this._emitSafeError({ type: 'persistence', error: err });
  });
  return this._writeChain;
```
Propagate persistence failures back to callers.
Line 178 converts a failed write into a resolved promise. recordOutcome(), retrain(), and prune() can therefore report success even when nothing reached disk.
🛠️ Suggested fix

```diff
 _enqueueWrite(writeFn) {
-  this._writeChain = this._writeChain.then(() => writeFn()).catch((err) => {
-    this._emitSafeError({ type: 'persistence', error: err });
-  });
-  return this._writeChain;
+  const writePromise = this._writeChain.then(() => writeFn());
+  this._writeChain = writePromise.catch((err) => {
+    this._emitSafeError({ type: 'persistence', error: err });
+  });
+  return writePromise;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/predictive-pipeline.js around lines 177 - 181, The
_enqueueWrite method swallows write errors by catching them and not propagating
them, causing callers like recordOutcome(), retrain(), and prune() to observe
success even when persistence failed; change the catch handler on
this._writeChain so after calling this._emitSafeError({ type: 'persistence',
error: err }) it rethrows the error (or returns a rejected promise) so the
returned promise remains rejected and callers receive the failure; update
_enqueueWrite (and any similar write-queue logic using _writeChain) to propagate
the error instead of resolving it.
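The suggested fix above can be reproduced in isolation. This standalone sketch (the class name is hypothetical) serializes writes on a chain while still handing the caller a promise that rejects on failure:

```javascript
// A serial write queue: writes run one after another, but each caller
// receives a promise that reflects its own write's success or failure.
class WriteQueue {
  constructor() {
    this._chain = Promise.resolve();
  }

  enqueue(writeFn) {
    const writePromise = this._chain.then(() => writeFn());
    // Swallow the error only on the internal chain so later writes
    // still run; the caller's promise stays rejected.
    this._chain = writePromise.catch(() => {});
    return writePromise;
  }
}
```

With this shape, recordOutcome() and friends can await the returned promise and observe a real rejection when persistence fails, instead of a silent success.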
```js
// Auto-prune if exceeding max
if (this._outcomes.length > this.maxOutcomes) {
  const excess = this._outcomes.length - this.maxOutcomes;
  this._outcomes.splice(0, excess);
}

await this._persistOutcomes();
await this._persistModel();
```
Keep _model consistent when auto-pruning.
This branch drops records from _outcomes after their stats were already accumulated. After the first overflow, getModelAccuracy(), assessRisk(), recommendations, and persisted model.json are all overstated until a later retrain() or explicit prune().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/predictive-pipeline.js around lines 305 - 312,
When auto-pruning _outcomes to enforce maxOutcomes, also update the in-memory
_model so its aggregated stats remain consistent (currently splicing _outcomes
then persisting causes getModelAccuracy(), assessRisk(), recommendations and
model.json to remain overstated). Fix by applying the same pruning logic to
_model before calling _persistModel(): either call the existing prune() routine
(or a new helper like _updateModelOnRemove/ _recalculateModel) after computing
excess and before persisting, or iterate the removed outcome entries and
decrement/remove their contributions from _model so that _model, _outcomes,
_persistOutcomes, and _persistModel remain in sync (affecting methods
getModelAccuracy, assessRisk, retrain, and prune).
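One way to satisfy this is to subtract each pruned outcome's contribution from the aggregate before persisting. The model fields used here (total, successes) are assumed for illustration, not the PR's actual model shape:

```javascript
// Drop the oldest outcomes beyond maxOutcomes, decrementing their
// contributions from the aggregated model so stats stay consistent.
function pruneOutcomes(outcomes, model, maxOutcomes) {
  const excess = outcomes.length - maxOutcomes;
  if (excess <= 0) return;
  const removed = outcomes.splice(0, excess);
  for (const o of removed) {
    model.total -= 1;
    if (o.success) model.successes -= 1;
  }
}
```

After this, accuracy computed as model.successes / model.total reflects only the retained window, matching what _persistOutcomes() writes to disk.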
```js
_stagePredict(neighbors, features) {
  return this._runStage(PipelineStage.PREDICT, () => {
    if (neighbors.length === 0) {
      return this._defaultPrediction(features);
    }
```
minSamplesForPrediction never gates predictions.
The option is documented and exposed, but this only falls back when there are zero neighbors. With one or two matches, callers still receive a full prediction even though the configured minimum has not been met.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/predictive-pipeline.js around lines 509 - 513,
_stagePredict currently only falls back to _defaultPrediction when
neighbors.length === 0, so the configured minSamplesForPrediction option never
gates predictions; update the logic in _stagePredict to check the configured
threshold (this.options.minSamplesForPrediction or similar) and call
this._defaultPrediction(features) whenever neighbors.length is less than that
threshold (including the zero case), ensuring the option is honored before
running the normal prediction path in _runStage(PipelineStage.PREDICT, ...).
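A minimal sketch of the gating the comment asks for, with hypothetical function parameters standing in for the class internals:

```javascript
// Fall back to the default prediction whenever the neighbor count is
// below the configured minimum, not just when it is zero.
function stagePredict(neighbors, minSamplesForPrediction, predictFn, defaultFn) {
  if (neighbors.length < Math.max(1, minSamplesForPrediction)) {
    return defaultFn(); // not enough evidence for a data-driven prediction
  }
  return predictFn(neighbors);
}
```

The Math.max(1, ...) guard keeps the zero-neighbor case covered even if the option is misconfigured to 0.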
@Pedrovaleriolopez, requesting review of this PR. Implementation of the Predictive Pipeline (predicting outcomes before execution) with persistent decision-memory. The original feature was submitted in PR #575 (Mar 10) and reopened here to incorporate feedback. The competing PR #579 (@rafaelscosta, also Mar 10 but later) implements the same feature. Both share the same architecture (KNN, EWMA, anomaly detection). Please evaluate the chronology and consider the prior work. I can rebase to resolve the conflicts if this PR is preferred.