Running LLMs locally on Intel iMac with AMD eGPU
My last comment was closed out; however, the problem has been resolved. I found a Medium article titled "Building llama.cpp for macOS on Intel Silicon" by Wesley Costa. The only catch was that I had to add the `-S .` flag for the CMake configure step to work. Other than that, the GPU works great, and I can get some more life out of this Intel iMac.
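For anyone hitting the same issue, a minimal sketch of the build sequence is below. Only the `-S .` flag comes from my experience above; the repository URL and the `-B build` / `--build` invocations are the standard llama.cpp CMake workflow, and any backend-specific flags from the article are omitted here.

```shell
# Fetch llama.cpp (standard upstream repository)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Configure: -S . explicitly points CMake at the source directory,
# which was the missing piece for me; -B build keeps the build out-of-tree.
cmake -S . -B build

# Compile in Release mode
cmake --build build --config Release
```

The `-S`/`-B` pair makes the source and build directories explicit, so the configure step works the same regardless of which directory you invoke CMake from.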