well yeah, usual ML: you feed it datasets labeled a, b, c, d and then ask it to determine the type of new info. That's what ML is very good at (and has been for a decade). (It's also good at sorting tomatoes or whatever the fuck.)
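For reference, that "label it a/b/c/d, then classify new stuff" loop really is just a few lines these days. A rough sketch using scikit-learn and its bundled digits dataset as a stand-in for those labels (my choice of library and data, not anything from the thread):

```python
# Rough sketch of the "feed it labeled data, ask it to classify new info" workflow.
# Uses scikit-learn's built-in digits dataset as a stand-in for "a, b, c, d" labels.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)            # learn from the labeled examples
print(clf.score(X_test, y_test))     # classify new info, report accuracy
```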
I’m glad the writer made the comparison to blockchain, another technology with a very small number of actual use cases that an endless number of firms still poured money into just to prove to shareholders they were “staying current” or whatever. I felt insane throughout that entire period, not helped by the fact that I worked for a time at a relatively early blockchain/wallet startup.
Can we have another joke term like “dark blockchain” that sounds vaguely AI-hype-related?
Aren’t pigeons better than most ML at catching certain kinds of cancer in imaging?
mayhaps (do you have a linky? sounds hilarious). But you can’t force-feed them thousands of images, then show them a new one and have them highlight areas of concern or output a probability or something. In any case, cancer on images is the more grifty one, because there you have millions of datapoints instead of, like, 50 haphazard biopsies and PET scans.
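For what it’s worth, the “train on lots of labeled cases, then get a probability for a new one” part is also pretty mundane now. A rough sketch using scikit-learn’s bundled breast-cancer dataset (tabular features derived from imaging, not raw scans) purely as a stand-in, not a claim about clinical use:

```python
# Sketch: fit a classifier on labeled cases, then output a probability for a new case.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Per-class probabilities for the first held-out case, in the dataset's label order.
print(model.predict_proba(X_test[:1]))
```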
Here’s the link the article I read was referencing: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651348/
I didn’t really go through it to see if they compare it with ML performance, but it’s still a fun anecdote.
Plus, you don’t have to feed an ML model or clean its pen. Or train new ones when the current generation dies. I don’t really know if there is (or should be) a practical application of these findings at scale, but it’s pretty neat!