[WIP]Fix benchmark end= log format and update tools docs#304

Open
indigo1973 wants to merge 1 commit into hw-native-sys:main from indigo1973:tools_0317

Conversation

@indigo1973
Contributor

  • Align DEV_ALWAYS("end=") format with orch_start= convention: "BENCHMARK: thread=%d end=%llu" -> "Thread=%d end=%llu" Fixes benchmark_rounds.sh parse_timing(), which greps for "Thread=[0-9]+ (orch_start|end)=" — the old lowercase "thread" with "BENCHMARK:" prefix never matched.

  • Add paged_attention_unroll to benchmark_rounds.sh example list in tools/README.md

  • Fix perf_to_mermaid.py --direction help text: TD is the default, not LR

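As a quick illustration of the parsing fix (a minimal sketch with made-up log lines; parse_timing() itself uses grep -E, but the same extended-regex pattern quoted above behaves identically under Python's re module):

```python
import re

# Extended-regex pattern quoted from the PR description; parse_timing()
# in benchmark_rounds.sh greps log output for this.
pattern = re.compile(r"Thread=[0-9]+ (orch_start|end)=")

old_line = "BENCHMARK: thread=0 end=234567"  # old format: lowercase "thread" plus prefix
new_line = "Thread=0 end=234567"             # new format after this PR

print(bool(pattern.search(old_line)))  # False — the old format never matched
print(bool(pattern.search(new_line)))  # True
```

This shows why the old `BENCHMARK: thread=...` lines were silently dropped by parse_timing(): the pattern requires a capitalized `Thread=` token.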
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the accuracy and consistency of logging, documentation, and command-line tool help messages. It addresses a parsing issue in benchmark scripts by standardizing a log format, updates the list of available benchmark examples in the documentation, and corrects misleading information in a Python script's help text, enhancing overall system clarity and usability.

Highlights

  • Benchmark Log Format Alignment: The DEV_ALWAYS benchmark log format in aicpu_executor.cpp was updated to align with the orch_start= convention, changing from "BENCHMARK: thread=%d end=%llu" to "Thread=%d end=%llu". This change specifically fixes parsing issues in benchmark_rounds.sh.
  • Documentation Update: The tools/README.md file was updated to include paged_attention_unroll in the list of example benchmarks.
  • Help Text Correction: The help text for the --direction argument in tools/perf_to_mermaid.py was corrected to accurately state that TD (Top-Down) is the default direction for flowcharts, not LR (Left-Right).
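For reference, the corrected argument could be sketched as follows (a hypothetical reconstruction, not the actual source of perf_to_mermaid.py; the real choices and wording may differ). The point of the fix is that the help text must agree with the actual default value:

```python
import argparse

parser = argparse.ArgumentParser(description="Convert perf data to a Mermaid flowchart")
# TD (top-down) is the actual default, so the help text should say TD, not LR.
parser.add_argument(
    "--direction",
    choices=["TD", "LR"],
    default="TD",
    help="Mermaid flowchart direction (default: TD, top-down)",
)

args = parser.parse_args([])
print(args.direction)  # TD
```

With no flag given, argparse fills in the `default`, so a help string claiming LR is the default would mislead users even though the script's behavior is unchanged.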


Changelog
  • src/a2a3/runtime/tensormap_and_ringbuffer/aicpu/aicpu_executor.cpp
    • Modified the DEV_ALWAYS macro usage to change the benchmark end log format.
  • tools/README.md
    • Added paged_attention_unroll to the list of pre-configured examples.
  • tools/perf_to_mermaid.py
    • Corrected the help text for the --direction argument to reflect the correct default value.


@gemini-code-assist (bot) left a comment


Code Review

This pull request delivers three targeted improvements. First, it aligns a benchmark logging statement in aicpu_executor.cpp with the existing format, which fixes a parsing issue in benchmark_rounds.sh. Second, it updates tools/README.md to include the paged_attention_unroll example in the list of benchmarks. Third, it corrects the help text for the --direction argument in perf_to_mermaid.py to accurately reflect its default value. The changes are straightforward and address the described issues.

```diff
 #if PTO2_PROFILING
-// Benchmark: record orchestrator end timestamp before waiting for schedulers
-DEV_ALWAYS("BENCHMARK: thread=%d end=%llu", thread_idx, (unsigned long long)get_sys_cnt_aicpu());
 // Record orchestrator end timestamp before waiting for schedulers
+DEV_ALWAYS("Thread=%d end=%llu", thread_idx, (unsigned long long)get_sys_cnt_aicpu());
```
Collaborator


You should check orchestrator/scheduler start/end together.

@indigo1973 changed the title from "Fix benchmark end= log format and update tools docs" to "[WIP]Fix benchmark end= log format and update tools docs" on Mar 17, 2026