Founder @Tuklus-Labs | AI inference & memory systems | Former 25S SATCOM | Building on AMD ROCm
- Tuklus Labs
- Washington State
- https://linkedin.com/in/garyjduncan
Popular repositories
- kernel-anvil (Public): Profile-guided GPU kernel optimizer for AMD/RDNA3. Auto-tunes llama.cpp MMVQ kernels per model shape. 2x decode speedup on 7900 XTX.
- llama-cpp-turboquant (Public, forked from TheTom/llama-cpp-turboquant): LLM inference in C/C++. C++.
- hamm-r (Public): Headless Agent Mobile Management Relay -- run Claude Code from your phone. Kotlin.

