By Ljubisa Bajic

Many believe AI is the real deal. In narrow domains, it already surpasses human performance. Used well, it is an unprecedented amplifier of human ingenuity and productivity. Its widespread adoption is hindered by two key barriers: high latency and astronomical cost. Interactions with language models lag far...
I guess the only problem is that if the model gets superseded by a better one, the chip becomes useless.
It depends on how the chip is designed: with something like an FPGA or memristors, you could reconfigure the chip itself to support different network topologies and weights. But even a chip that isn't reconfigurable is still pretty useful. If you can make a chip that runs a full DeepSeek model, it can do a ton of very useful tasks right now, even without any future upgrades. So it's not as if outdated chips become useless: if you set one up for whatever task you need and it does the job, you just keep using it for that.
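Roughly, the distinction looks like this. A toy Python sketch, where the class names, methods, and the model label are made up for illustration and don't correspond to any real accelerator driver API:

    from dataclasses import dataclass
    from typing import Optional

    # Toy illustration only -- every name here is hypothetical, not a real driver API.

    @dataclass
    class ReconfigurableAccelerator:
        """FPGA / memristor-style part: topology and weights can be rewritten in the field."""
        topology: Optional[dict] = None
        weights: Optional[bytes] = None

        def reconfigure(self, topology: dict, weights: bytes) -> None:
            # Re-flash the fabric for a newer model; the same silicon keeps up with new releases.
            self.topology, self.weights = topology, weights

        def run(self, tokens: list) -> list:
            assert self.weights is not None, "no model loaded"
            return tokens  # placeholder for actual inference

    class HardwiredAccelerator:
        """Fixed-function part: one model etched into the silicon at tape-out."""
        MODEL = "deepseek-v3"  # cannot be changed after manufacture

        def run(self, tokens: list) -> list:
            # Only ever runs MODEL, but does so cheaply and at low latency, so it
            # stays useful for any task that model already handles well.
            return tokens  # placeholder for actual inference

The point is just that the second kind never gets a reconfigure() call: whatever it was taped out with is what it runs forever, which is fine as long as that model keeps doing the job you bought it for.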
This is just the reification of the technology, same as with CPU architectures: you'll have chips designed around a specific instruction set, and then later you'll be sold one with the new instruction set.