files sorting #239
Conversation
CodeAnt AI is reviewing your PR.
🌀 Tests overview by Testomatio
Found 234 mocha tests in 26 files
✔️ Added 2 tests
+ analyzer: env variable params: should sort files alphabetically
+ analyzer: env variable params: should maintain consistent file order across multiple runs
📑 List all tests
📝 tests/updateIds_codeceptjs_test.js
📝 tests/updateIds_gauge_test.js
📝 tests/updateIds_markdown_test.js
📝 tests/updateIds_nightwatch_test.js
📝 tests/updateIds_playwright_test.js
Nitpicks 🔍
analyzer1.analyze('./example/codeceptjs/*.js');
const files1 = analyzer1.getStats().files;

const analyzer2 = new Analyzer('mocha', path.join(__dirname, '..'));
Suggestion: The test that claims to verify consistent file order across multiple runs actually creates two analyzers with different frameworks (one using CodeceptJS and one using Mocha) and compares their file lists, so it is asserting cross-framework equality instead of checking that repeated runs with the same configuration yield the same ordered list; this can fail even when each framework is internally consistent and does not accurately test the intended behavior. [logic error]
Severity Level: Major ⚠️
- ⚠️ Unit test asserts cross-framework equality erroneously.
- ⚠️ CI can fail spuriously for legitimate framework differences.
- ⚠️ Test does not verify deterministic ordering across repeats.

Suggested change:
- const analyzer2 = new Analyzer('mocha', path.join(__dirname, '..'));
+ const analyzer2 = new Analyzer('codeceptJS', path.join(__dirname, '..'));
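A minimal sketch of the corrected test, assuming chai's `expect` and an `Analyzer` export whose require path may differ in the actual repo; both analyzers use the same framework, so the assertion now checks determinism across repeated runs rather than cross-framework equality.

```js
const { expect } = require('chai');
const path = require('path');
const Analyzer = require('../src/analyzer'); // assumed require path

it('should maintain consistent file order across multiple runs', () => {
  const analyzer1 = new Analyzer('codeceptJS', path.join(__dirname, '..'));
  analyzer1.analyze('./example/codeceptjs/*.js');
  const files1 = analyzer1.getStats().files;

  // Same framework and glob as the first run, so any difference in the
  // resulting file lists indicates non-deterministic ordering.
  const analyzer2 = new Analyzer('codeceptJS', path.join(__dirname, '..'));
  analyzer2.analyze('./example/codeceptjs/*.js');
  const files2 = analyzer2.getStats().files;

  expect(files1).to.deep.equal(files2);
});
```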
Steps of Reproduction ✅
1. Run the test suite including tests/analyzer_test.js (the file contains the test at
tests/analyzer_test.js:210-220).
Expected: mocha runs the test block labeled "should maintain consistent file order
across multiple runs".
2. The test constructs analyzer1 as new Analyzer('codeceptJS', ...) and calls
analyzer1.analyze('./example/codeceptjs/*.js') (tests/analyzer_test.js:211-213) then reads
files1 from analyzer1.getStats().files (tests/analyzer_test.js:213).
This exercises the Analyzer implementation for the CodeceptJS framework on
example/codeceptjs files.
3. The test then constructs analyzer2 with new Analyzer('mocha', ...) and calls
analyzer2.analyze('./example/codeceptjs/*.js') (tests/analyzer_test.js:215-217) then reads
files2 from analyzer2.getStats().files (tests/analyzer_test.js:217).
This exercises the Analyzer implementation for the Mocha framework on the same file
glob.
4. The test asserts expect(files1).to.deep.equal(files2) (tests/analyzer_test.js:219). If
the Analyzer implementations for 'codeceptJS' and 'mocha' produce different file lists or
ordering (intentional differences in path handling, prefixes, or framework parsing), the
assertion will fail even when each Analyzer implementation is deterministic across
repeated runs with the same framework.
Therefore the test does not reproduce the intended bug (non-deterministic ordering
across repeated runs), but instead checks cross-framework equality which can
legitimately differ.
5. The corrected reproduction to validate determinism is: change analyzer2 to use the same
framework as analyzer1 (both 'codeceptJS') and re-run the test; a failure then
demonstrates non-deterministic ordering across repeated runs of the same Analyzer
configuration (tests/analyzer_test.js:211-219).Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** tests/analyzer_test.js
**Line:** 215:215
**Comment:**
Logic Error: The test that claims to verify consistent file order across multiple runs actually creates two analyzers with different frameworks (one using CodeceptJS and one using Mocha) and compares their file lists, so it is asserting cross-framework equality instead of checking that repeated runs with the same configuration yield the same ordered list; this can fail even when each framework is internally consistent and does not accurately test the intended behavior.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

tests: {
  'Create a new todo item': '@T1111111',
  'Todos containing weird characters': '@T22222111111',
  'DataTable example#Todos containing weird characters': '@T22222111111',
Suggestion: The ID value for the "DataTable example#Todos containing weird characters" test does not match what the assertions expect ('@T22222111'), so updateIds will insert a different ID ('@T22222111111') and the tests checking for '@T22222111' in the datatable file will fail. [logic error]
Severity Level: Critical 🚨
- ❌ Unit test failure in tests/update_fs_test.js.
- ❌ CI run blocked by failing test suite.
- ⚠️ ID insertion behavior for datatable fixture.
- ⚠️ Testomat.io import verification affected.

Suggested change:
- 'DataTable example#Todos containing weird characters': '@T22222111111',
+ 'DataTable example#Todos containing weird characters': '@T22222111',
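A minimal sketch of the aligned fixture mapping and assertion, assuming chai and the paths referenced in the review (tests/update_fs_test.js and the update_examples folder); the updateIds call and surrounding test setup are elided.

```js
const { expect } = require('chai');
const fs = require('fs');
const path = require('path');

const idMap = {
  tests: {
    'Create a new todo item': '@T1111111',
    'Todos containing weird characters': '@T22222111111',
    // was '@T22222111111'; aligned with the '@T22222111' that the assertions expect
    'DataTable example#Todos containing weird characters': '@T22222111',
  },
};

// ... updateIds(analyzer.rawTests, idMap, ...) runs here, writing IDs into the
// fixtures copied to update_examples/ ...

const file2 = fs.readFileSync(path.join('update_examples', 'datatable_test.js')).toString();
expect(file2).to.include('@T22222111');
```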
Steps of Reproduction ✅
1. Run the test file `tests/update_fs_test.js` (entry point shown at file top). The test
'should add suite and test ids' begins at `tests/update_fs_test.js:55-74` and calls
`updateIds` at `tests/update_fs_test.js:60` with `analyzer.rawTests` and `idMap`.
2. The `idMap` used by that call contains the mapping line at `tests/update_fs_test.js:34`
where the datatable test name is mapped to '@T22222111111' (current code). This is read by
`updateIds` when it inserts IDs into copied fixture files in the `update_examples` folder
(created by `createTestFiles` at `tests/update_fs_test.js:41-44`).
3. After `updateIds` runs, the test reads the file `update_examples/datatable_test.js` at
`tests/update_fs_test.js:64-66` into `file2` and asserts
`expect(file2).to.include('@T22222111')` at `tests/update_fs_test.js:72`. Because the
inserted ID is '@T22222111111' (the idMap value), the assertion for '@T22222111' fails.
4. Concrete reproduction: run `npm test` or `mocha tests/update_fs_test.js` (or the
project's test runner). Observe a failing assertion showing that `file2` does not include
'@T22222111' while the code inserted '@T22222111111' (traceable to
`tests/update_fs_test.js:34` and `tests/update_fs_test.js:60`).
5. Why this is a real bug (not stylistic): the test expectations explicitly look for
'@T22222111' (lines `tests/update_fs_test.js:71-72`) but the idMap supplies a different
string at `tests/update_fs_test.js:34`. Fixing the idMap to '@T22222111' aligns inserted
output with test assertions, resolving the failure.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** tests/update_fs_test.js
**Line:** 34:34
**Comment:**
Logic Error: The ID value for the "DataTable example#Todos containing weird characters" test does not match what the assertions expect (`'@T22222111'`), so `updateIds` will insert a different ID (`'@T22222111111'`) and the tests checking for `'@T22222111'` in the datatable file will fail.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

CodeAnt AI finished reviewing your PR.
User description
Sort files before analyzing. This fixes an issue with reversed folder order when importing into Testomat.io.
Issue: #237
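A minimal sketch of the described change, assuming file discovery goes through the glob package; the function and variable names here are illustrative, not the actual implementation.

```js
const glob = require('glob');

// glob results are not guaranteed to come back in a stable order across
// platforms, so sort them explicitly before handing them to the analyzer.
function listTestFiles(pattern, cwd) {
  return glob.sync(pattern, { cwd }).sort();
}

// e.g. listTestFiles('./example/codeceptjs/*.js', process.cwd())
```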
CodeAnt-AI Description
Sort discovered test files alphabetically to ensure consistent import order
What Changed
Impact
✅ Consistent import order for test files
✅ Fewer reversed-folder import issues when importing into Testomat.io
✅ Stable test file ordering across frameworks and runs

💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.