# Prerequisites

- [X] I am running the latest code. Development is very rapid, so there are no tagged versions as of now.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md).
- [X] I [searched using keywords relevant to my issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/filtering-and-searching-issues-and-pull-requests) to make sure that I am creating a new issue that is not already open (or closed).
- [X] I reviewed the [Discussions](https://github.com/ggerganov/llama.cpp/discussions), and have a new bug or useful enhancement to share.

# Your exact command line to replicate the issue

```
./falcon_main ....
```

# Environment and Context

* Physical (or virtual) hardware you are using, e.g. for Linux: Intel CPU
* Operating System, e.g. for Linux: CentOS

# Steps to Reproduce

1. Run `./falcon_main ...`
2. Observe in the log: `falcon_model_load_internal: using CUDA for GPU acceleration`
3. Desired output: `falcon_model_load_internal: using CUDA 11.8 for GPU acceleration`