As an academic stakeholder (e.g., professor, program coordinator, or institutional assessor), I want an automated system that processes Course Outcomes (COs), maps them to Program Outcomes (POs), and aggregates POs into Institutional Outcomes (IOs) using existing codebase calculations, so that I can assess student performance across courses, programs, and institutional goals for accreditation (e.g., ABET) and curriculum decisions. The system should accept a JSON configuration file, produce Excel files for results, use CrewAI to analyze student capabilities based on CO/PO/IO outcomes, and provide a Panel-based GUI for interactive visualization and analysis.
Acceptance Criteria
File Input
- The system must accept a JSON configuration file (acat_config.json) specifying courses, sections, outcome files, assignment mappings, grade files, CO-to-PO mappings, and PO-to-IO mappings.
- Input files (outcomes, assignments, grades) must be in Excel format, validated for correct structure (e.g., expected columns).
- The system must display an error message if file formats or content are invalid.
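As a sketch, acat_config.json might look like the following; all field names, the file-naming pattern, and the weighting scheme are illustrative assumptions, not a confirmed schema:

```json
{
  "output_dir": "results/",
  "courses": [
    {
      "name": "COMP-101",
      "sections": ["Semester_01"],
      "outcome_file": "COMP-101_outcomes.xlsx",
      "assignment_file": "COMP-101_assignments.xlsx",
      "grade_file": "COMP-101_grades.xlsx"
    }
  ],
  "co_to_po": { "PO1": { "CO1": 0.6, "CO2": 0.4 } },
  "po_to_io": { "IO1": { "PO1": 1.0 } }
}
```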
Data Processing
- The system must use the existing codebase (e.g., read_outcomes, read_assignments, read_grades) to:
  - Compute CO performance (e.g., Likert averages) based on individual gradebook items.
  - Map COs to POs using JSON mappings and compute PO attainment (e.g., weighted averages of CO scores).
  - Aggregate POs into IOs using JSON mappings (e.g., weighted averages of PO scores).
- Calculations must align with the ACAT methodology, ensuring individual gradebook items (not final course grades) are used to avoid issues with single indicators measuring multiple outcomes.
- The system must handle multiple courses and sections concurrently.
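The CO-to-PO weighted average (and the identical PO-to-IO step) can be sketched as follows; the mapping structure and function name are illustrative assumptions, not the existing codebase's API:

```python
import pandas as pd

# Hypothetical CO-to-PO mapping: each PO lists the COs that feed it,
# with weights. In practice this would come from acat_config.json.
CO_TO_PO = {
    "PO1": {"CO1": 0.6, "CO2": 0.4},
    "PO2": {"CO2": 0.5, "CO3": 0.5},
}

def compute_po_scores(co_scores: pd.DataFrame, mapping: dict) -> pd.DataFrame:
    """Per-student PO attainment as the weighted average of CO scores."""
    po = {}
    for po_name, weights in mapping.items():
        total = sum(weights.values())
        po[po_name] = sum(co_scores[co] * w for co, w in weights.items()) / total
    return pd.DataFrame(po)

co_scores = pd.DataFrame(
    {"CO1": [4.0, 3.0], "CO2": [3.0, 5.0], "CO3": [2.0, 4.0]},
    index=["student_a", "student_b"],
)
po_scores = compute_po_scores(co_scores, CO_TO_PO)
```

Aggregating POs into IOs would reuse the same function with the PO-to-IO mapping, which is why keeping the computation in the existing codebase (rather than CrewAI) is straightforward.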
CrewAI Student Assessment
- The system must use CrewAI with four agents to analyze student outcomes:
  - Course Outcome Assessment Agent: analyzes CO data to identify student strengths/weaknesses at the course level.
  - Program Outcome Assessment Agent: analyzes PO data to assess program-level capabilities.
  - Institutional Outcome Assessment Agent: analyzes IO data to evaluate institutional goal attainment.
  - Student Learning Overall Assessment Agent: combines CO/PO/IO data to model each student's overall capabilities and provide qualitative insights (e.g., strengths, areas for improvement).
- CrewAI must take CO/PO/IO outcomes (from Excel files) as input and produce textual summaries of student capabilities, not perform numerical computations.
- CrewAI is limited to qualitative analysis; all computations are handled by the existing codebase to ensure mathematical accuracy.
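Because the agents only consume precomputed scores and return text, the boundary can be sketched as a plain context-building step; the function, field layout, and score format below are illustrative assumptions, not the actual CrewAI task wiring:

```python
def build_student_context(student: str, co: dict, po: dict, io: dict) -> str:
    """Flatten one student's precomputed CO/PO/IO scores into the prompt
    text an assessment agent would receive. No numbers are computed here;
    that stays in the existing codebase."""
    lines = [f"Student: {student}"]
    for label, scores in (("CO", co), ("PO", po), ("IO", io)):
        for name, value in sorted(scores.items()):
            lines.append(f"{name} ({label}): {value:.2f} / 5 (Likert)")
    return "\n".join(lines)

ctx = build_student_context(
    "student_a", co={"CO1": 4.0}, po={"PO1": 3.6}, io={"IO1": 3.7}
)
```

Each of the four agents would receive a context like this (scoped to its CO, PO, or IO slice) and return a textual summary for the assessment workbook.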
Output
- The system must produce results as Excel files (e.g., COMP-101_Semester_01_outcomes.xlsx) for each course and section, containing:
  - Student-level CO, PO, and IO scores.
  - Class-level averages for COs, POs, and IOs (e.g., Likert averages).
- Excel files must be saved to a configurable output directory specified in the JSON config.
- CrewAI student assessment summaries must be saved as separate Excel files (e.g., COMP-101_Semester_01_student_assessment.xlsx) with textual insights.
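A minimal sketch of the workbook layout, assuming one sheet per outcome level with a class-average row appended; the sheet names and per-sheet column split are assumptions, not a confirmed format:

```python
import pandas as pd

# Illustrative per-student outcome scores for one course/section.
scores = pd.DataFrame(
    {"CO1": [4.0, 3.0], "PO1": [3.6, 3.8], "IO1": [3.7, 3.9]},
    index=["student_a", "student_b"],
)

# Append the class-level average row, then write one sheet per outcome level.
summary = pd.concat([scores, scores.mean().to_frame("class_average").T])
out_path = "COMP-101_Semester_01_outcomes.xlsx"  # name pattern from the spec
with pd.ExcelWriter(out_path) as writer:
    summary[["CO1"]].to_excel(writer, sheet_name="CO")
    summary[["PO1"]].to_excel(writer, sheet_name="PO")
    summary[["IO1"]].to_excel(writer, sheet_name="IO")
```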
Panel GUI
- The system must include a Panel-based GUI to:
  - Allow file uploads for the JSON config and Excel files.
  - Display interactive tables of CO, PO, IO, and CrewAI student assessment summaries.
  - Provide filters (e.g., dropdowns for course, section, PO, or IO).
  - Show visualizations (e.g., bar charts for PO/IO averages, pie charts for attainment distribution) using Plotly or Panel-compatible libraries.
  - Offer downloadable Excel files of displayed results.
- The GUI must be intuitive, with clear labels, error messages, and loading indicators, consistent with the Adaptive Learning codebase.
Testing
The pytest suite must cover:
- File input parsing and validation
- CO, PO, and IO calculations against expected results
- CrewAI output validation
- GUI functionality (filters, visualizations, downloads)
- Edge case coverage (e.g., invalid files, missing mappings)
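An edge-case test for a config with missing mappings could look like this; load_config here is a minimal stand-in and the required-section keys are assumptions, not the real implementation:

```python
import json
import os
import tempfile

import pytest

REQUIRED_SECTIONS = ("courses", "co_to_po", "po_to_io")  # illustrative keys

def load_config(path):
    """Stand-in for the real load_config: reject configs missing a
    required mapping section instead of failing later mid-pipeline."""
    with open(path) as fh:
        cfg = json.load(fh)
    for key in REQUIRED_SECTIONS:
        if key not in cfg:
            raise ValueError(f"missing required config section: {key}")
    return cfg

def test_missing_mapping_rejected():
    with tempfile.TemporaryDirectory() as tmp:
        bad = os.path.join(tmp, "acat_config.json")
        with open(bad, "w") as fh:
            json.dump({"courses": []}, fh)  # no CO-to-PO / PO-to-IO mappings
        with pytest.raises(ValueError, match="co_to_po"):
            load_config(bad)
```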
Conditions of Satisfaction
| Condition | Test | Satisfaction |
| --- | --- | --- |
| File Input Handling | Upload sample acat_config.json for COMP-101 | Files validated, and CO/PO/IO data loaded |
| Data Processing | Process COMP-101 data | Metrics match expected outputs |
| CrewAI Assessment | Feed outcomes to CrewAI | Relevant summaries saved as Excel |
| Output | Verify Excel files | Accurate, formatted, and saved correctly |
| Panel GUI | Interact with GUI | Filters, visualizations, and downloads function correctly |
| Testing | Run pytest | No failures and edge cases handled |
| Accuracy | Compare with Moodle assessments | Outputs are consistent |
| User Feedback | Collect stakeholder feedback | Users find GUI intuitive |
Definition of Done
Functional Requirements
- System processes JSON and Excel inputs.
- Computes CO/PO/IO metrics using the existing codebase.
- CrewAI generates student summaries based on outcomes.
- Panel GUI allows file uploads, filters, visualizations, and downloads.
Non-Functional Requirements
- Robust system validated with pytest.
- GUI follows the Adaptive Learning design.
Testing & Validation
- Pytest suite passes.
- Outputs match the reference implementation.
- User testing confirms usability.
Security
- File uploads and data processing are secure and validated.
User Experience
- GUI provides clear feedback, error handling, and intuitive interactions.
Documentation
- Setup, file formats, CrewAI, GUI usage, and tests are documented.
Deployment
- System can run locally or be deployed without issues.
Tasks
Task 1: Update File Input and Validation (8 hours)
- Parse CO-to-PO and PO-to-IO mappings (load_config) – 3h
- Validate Excel formats (outcomes, assignments, grades) – 3h
- Error handling for invalid inputs – 2h
Task 2: Compute Program Outcomes (10 hours)
- Read CO Excel files – 2h
- Implement JSON CO-to-PO mapping – 4h
- Compute PO scores, save as Excel – 4h
Task 3: Compute Institutional Outcomes (10 hours)
- Read PO Excel files – 2h
- Implement PO-to-IO mapping – 4h
- Compute IO scores, save as Excel – 4h
Task 4: Build CrewAI Student Assessment Module (12 hours)
- Course Outcome Agent – 3h
- Program Outcome Agent – 3h
- Institutional Outcome Agent – 3h
- Student Learning Overall Agent – 3h
Task 5: Develop Panel GUI (20 hours)
- File upload widgets – 4h
- Tabbed interface & filters – 4h
- Tables for metrics & summaries – 4h
- Visualizations using Plotly – 5h
- Excel download buttons – 3h
Task 6: Test and Validate with Pytest (12 hours)
- Test input validation and calculations – 4h
- Test CrewAI outputs – 3h
- Test GUI functionality – 3h
- Bug fixing – 2h
Task 7: Document and Deploy (6 hours)
- Documentation (setup, files, GUI, CrewAI, pytest) – 4h
- Deployment testing – 2h