[WIP] Fix benchmark end= log format and update tools docs #304
indigo1973 wants to merge 1 commit into hw-native-sys:main from
Conversation
- Align DEV_ALWAYS("end=") format with orch_start= convention:
"BENCHMARK: thread=%d end=%llu" -> "Thread=%d end=%llu"
Fixes benchmark_rounds.sh parse_timing(), which greps for
"Thread=[0-9]+ (orch_start|end)=" — the old lowercase "thread"
with "BENCHMARK:" prefix never matched.
- Add paged_attention_unroll to benchmark_rounds.sh example list
in tools/README.md
- Fix perf_to_mermaid.py --direction help text: TD is the default,
not LR
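The parsing mismatch described above can be illustrated with the exact pattern that benchmark_rounds.sh parse_timing() greps for (a minimal Python sketch; the two log-line formats are taken verbatim from the commit message):

```python
import re

# The grep -E pattern used by parse_timing() in benchmark_rounds.sh.
PATTERN = re.compile(r"Thread=[0-9]+ (orch_start|end)=")

old_line = "BENCHMARK: thread=3 end=123456789"  # pre-fix format
new_line = "Thread=3 end=123456789"             # post-fix format

# grep is case-sensitive by default, so lowercase "thread" with the
# "BENCHMARK:" prefix never matched; the aligned format does.
print(bool(PATTERN.search(old_line)))  # False
print(bool(PATTERN.search(new_line)))  # True
```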
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on improving the accuracy and consistency of logging, documentation, and command-line tool help messages. It addresses a parsing issue in benchmark scripts by standardizing a log format, updates the list of available benchmark examples in the documentation, and corrects misleading information in a Python script's help text, enhancing overall system clarity and usability.
Code Review
This pull request delivers three targeted improvements. First, it aligns a benchmark logging statement in aicpu_executor.cpp with the existing format, which fixes a parsing issue in benchmark_rounds.sh. Second, it updates tools/README.md to include the paged_attention_unroll example in the list of benchmarks. Third, it corrects the help text for the --direction argument in perf_to_mermaid.py to accurately reflect its default value. The changes are straightforward and address the described issues.
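For the third point, a hypothetical argparse sketch of what the corrected --direction option would look like after the fix (the actual perf_to_mermaid.py source is not shown in this PR, so names other than --direction and the TD/LR values are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(prog="perf_to_mermaid.py")
# Help text now agrees with the real default: TD (top-down), not LR.
parser.add_argument(
    "--direction",
    choices=["TD", "LR"],
    default="TD",
    help="Mermaid graph direction (default: TD)",
)

args = parser.parse_args([])
print(args.direction)  # TD
```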
 #if PTO2_PROFILING
-    // Benchmark: record orchestrator end timestamp before waiting for schedulers
-    DEV_ALWAYS("BENCHMARK: thread=%d end=%llu", thread_idx, (unsigned long long)get_sys_cnt_aicpu());
+    // Record orchestrator end timestamp before waiting for schedulers
You should check orchestrator/scheduler start/end together.
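A sketch of what checking start and end together could look like on the parsing side: pairing each thread's orch_start with its end before reporting a duration (hypothetical helper; only the log field names follow the format discussed in this PR, the sample timestamps are invented):

```python
import re

LOG = """\
Thread=0 orch_start=1000
Thread=1 orch_start=1005
Thread=0 end=1500
Thread=1 end=1605
"""

timings = {}  # thread id -> {"orch_start": ..., "end": ...}
for m in re.finditer(r"Thread=([0-9]+) (orch_start|end)=([0-9]+)", LOG):
    tid, field, value = int(m.group(1)), m.group(2), int(m.group(3))
    timings.setdefault(tid, {})[field] = value

# Report a duration only when both timestamps are present for a thread.
for tid, t in sorted(timings.items()):
    if "orch_start" in t and "end" in t:
        print(f"thread {tid}: {t['end'] - t['orch_start']} cycles")
```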