Running LLMs locally on Intel iMac with AMD eGPU

My last thread was closed out; however, the problem has since been resolved. I found an article on Medium titled "Building llama.cpp for macOS on Intel Silicon" by Wesley Costa. The only catch is that I had to add the "-S ." flag for the cmake build to work. Other than that, the GPU works great, and I can get some more life out of this Intel iMac.
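For anyone else trying this, here is a rough sketch of the build commands I mean. The exact backend flag depends on the article's setup (it targets AMD GPUs via Vulkan/MoltenVK, so I'm assuming llama.cpp's `GGML_VULKAN` option here); the key part is the `-S .` flag, which explicitly points cmake at the source directory:

```shell
# Sketch only -- assumes the Vulkan backend for an AMD eGPU on macOS.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# "-S ." names the source dir explicitly; "-B build" is the build dir.
# Without -S, some cmake versions fail to locate the top-level CMakeLists.txt.
cmake -S . -B build -DGGML_VULKAN=ON

# Compile in Release mode using all available cores.
cmake --build build --config Release -j
```

After building, the binaries land under `build/bin/`.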

Posted on Jun 5, 2025 5:16 PM
