Torsten Slok, chief economist at Apollo Global Management, recently argued that the stock market currently overvalues a handful of tech giants – including Nvidia and Microsoft –...
For sure, I expect the most likely outcome is that LLMs will be something you run locally going forward, unless you have very specific needs for a very large model. On the one hand, the technology itself is constantly getting better and more efficient; on the other, hardware keeps improving and getting faster. You can already run a full-blown DeepSeek on a Mac Studio for around $8,000. That's a lot of money, but it's squarely in the consumer realm. In a few years the cost will likely drop enough that any laptop will be able to run these kinds of models.
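To make "run locally" concrete, here is a minimal sketch of what that workflow looks like today, assuming the `ollama` Python client and a DeepSeek-R1 model already pulled to local disk; the specific model tag and client library are illustrative choices, not an endorsement of one particular stack.

```python
# Minimal sketch of local LLM inference, assuming the `ollama` Python client
# (pip install ollama) and a DeepSeek-R1 model pulled via `ollama pull`.
# The model tag below is illustrative; smaller distills fit modest hardware.
import ollama

# The model runs entirely on local hardware: no API key, no network
# round-trip to a hosted provider, and no per-token cost.
response = ollama.chat(
    model="deepseek-r1:671b",  # the full model; 32b/70b distills need far less RAM
    messages=[{"role": "user", "content": "Summarize the case for local LLMs."}],
)
print(response["message"]["content"])
```

The point of the sketch is that the interface is already indistinguishable from a hosted API call; the only variable left is whether your machine has the memory for the weights, which is exactly the constraint hardware trends are eroding.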