Thurs Feb 22, 2024 9:26am PST
Is AMD a bad choice for fine-tuning large-scale models locally?
When I was doing research for my company, I had compute allocated to me through Google Colab, and I ran a lot of experiments on TPUs. Recently, though, I have some personal fine-tuning I want to do, so I'm trying to train locally on my own machine. The problem is that my GPU is a Radeon. I've tried ROCm, but I don't like it as much as I expected to (I could be doing something wrong). My fellow researchers and colleagues jokingly tell me to sell the Radeon and buy an RTX when I get the chance. Projects like ZLUDA have appeared recently, but still... I wonder whether AMD will ever release something directly optimized for AI. I've been reading the HIP and official ROCm documentation, but it doesn't really set me up for success.
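For what it's worth, a common first sanity check is whether your PyTorch install is actually a ROCm (HIP) build at all, since the ROCm wheels reuse the `torch.cuda` API. Here's a small diagnostic sketch; `rocm_torch_status` is a hypothetical helper name, and the exact failure messages are my own, but the attribute checks (`torch.version.hip`, `torch.cuda.is_available()`) are the standard ones:

```python
import importlib.util

def rocm_torch_status() -> str:
    """Report whether a ROCm-enabled PyTorch build can see a GPU.

    Note: on ROCm builds, PyTorch exposes the GPU through the
    torch.cuda namespace, and torch.version.hip is set instead
    of torch.version.cuda.
    """
    # Check for torch without hard-failing if it isn't installed.
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if getattr(torch.version, "hip", None) is None:
        return "torch installed, but it is not a ROCm (HIP) build"
    if not torch.cuda.is_available():
        return "ROCm build installed, but no usable GPU detected"
    return f"ROCm GPU detected: {torch.cuda.get_device_name(0)}"

if __name__ == "__main__":
    print(rocm_torch_status())
```

If this reports a non-HIP build, the fix is usually just reinstalling PyTorch from the ROCm wheel index rather than the default (CUDA/CPU) one; if the build is right but no GPU is detected, the card may not be an officially supported ROCm target, which is a separate rabbit hole.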