feat: add CompilerConfig dataclass for structured compiler options #3
AoyuQC wants to merge 1 commit into aws-neuron:main from
Conversation
nkipy/src/nkipy/core/compile.py
# Internal/advanced options
if self.tensorizer_opt_level:
    args.append(f"--internal-tensorizer-opt-level={self.tensorizer_opt_level}")
Done, already removed the options not documented in the public API; they can still be passed via extra_args or through pure string mode.
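A minimal sketch of the two escape hatches mentioned here, using names taken from this thread (extra_args, additional_compiler_args); the exact signatures are assumptions, not a confirmed API:

from nkipy.runtime import CompilerConfig

# Structured path: an undocumented flag goes through the (assumed) extra_args field.
cfg = CompilerConfig(extra_args=["--internal-tensorizer-opt-level=2"])

# Pure string mode: the legacy additional_compiler_args path still takes raw flags.
legacy_args = "--lnc 1 --internal-tensorizer-opt-level=2"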
Replace string-based compiler arguments with a type-safe CompilerConfig
dataclass that provides discoverability and easy customization.
Changes:
- Add CompilerConfig dataclass with fields for common neuronx-cc options
(lnc, model_type, auto_cast, enable_mixed_precision_accumulation, etc.)
- Add factory methods for_nkipy() and for_nki() with appropriate defaults
- Add get_default_compiler_args() helper to inspect default settings
- Add compiler_config parameter to @baremetal_jit, baremetal_run_traced_kernel,
and DeviceKernel.compile_and_load()
- Export CompilerConfig and get_default_compiler_args from nkipy.runtime
- Add tutorial section demonstrating CompilerConfig usage
Backward compatible: legacy additional_compiler_args parameter still works.
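Assembled from the names in this commit message, a hedged sketch of the intended usage (import paths and exact signatures are assumptions, not verified against the branch):

from nkipy.runtime import CompilerConfig, get_default_compiler_args
# baremetal_jit's import path is not stated in this PR; assumed here for illustration.
from nkipy.runtime import baremetal_jit

# Inspect the defaults before overriding anything.
print(get_default_compiler_args())

# Start from the nkipy defaults and adjust individual fields.
cfg = CompilerConfig.for_nkipy()
cfg.lnc = 2
cfg.model_type = "transformer"
cfg.enable_mixed_precision_accumulation = True

# Pass the structured config instead of a raw argument string.
@baremetal_jit(compiler_config=cfg)
def my_kernel(x):
    ...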
07f7331 to a8eb03c
vgene left a comment
My understanding is that the most critical ask here is not to have --lnc=1 as a hardcoded flag. Let's use this as an opportunity to remove all hardcoded flags. I find the new dataclass unnecessary, since expert users can pass command-line compiler flags directly. Keeping the dataclass adds maintenance complexity.
I prefer not to change this simple tutorial. As the first tutorial, we want to limit the information exposed to the user.
# Core options
lnc: int = 1  # Logical NeuronCore config (1 or 2)
model_type: Optional[str] = None  # "generic", "transformer", "unet-inference"
Do we want to keep this compiler-specific knowledge as a class in NKIPy? People who want to change this level of detail are likely to know what they are doing, and they can use "--model-type transformer" directly. Supporting all of these options in NKIPy means we have to keep them in sync with the compiler.
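For illustration, the raw-flag alternative suggested here, as a sketch: the kernel object and the additional_compiler_args parameter name are borrowed from elsewhere in this PR, not from this hunk.

# `kernel` stands in for a hypothetical DeviceKernel instance.
kernel.compile_and_load(additional_compiler_args="--model-type transformer --lnc 2")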
)

@classmethod
def for_nki(
The NKI-related compilation flags should be removed now. We don't support directly compiling NKI kernels.
# Compiler arguments
DEFAULT_ADDITIONAL_COMPILER_ARGS = "--lnc 1"
NKIPY_KERNEL_ADDITIONAL_COMPILER_ARGS = "--internal-tensorizer-opt-level=2"
NKI_KERNEL_ADDITIONAL_COMPILER_ARGS = "--internal-tensorizer-opt-level=nki"
All internal-tensorizer-opt-level flags should be dropped from the code. Since we no longer support compiling NKI kernels directly (they are supported through kernel invocation), all NKI kernel flags should be dropped too.
Issue #, if available:
N/A
Description of changes:
Replace string-based compiler arguments with a type-safe CompilerConfig dataclass that provides discoverability and easy customization.
Changes:
Usage:
Backward compatible: legacy additional_compiler_args parameter still works.
Tests run:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.