- cross-posted to:
- technology@lemmy.world
- aboringdystopia@lemmy.world
A week and a half ago, Goldman Sachs put out a 31-page report (titled “Gen AI: Too Much Spend, Too Little Benefit?”) that includes some of the most damning literature on generative AI I’ve ever seen.
The report includes an interview with economist Daron Acemoglu of MIT (page 4), an Institute Professor who published a paper back in May called “The Simple Macroeconomics of AI” that argued that “the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters expect.” A month has only made Acemoglu more pessimistic, declaring that “truly transformative changes won’t happen quickly and few – if any – will likely occur within the next 10 years,” and that generative AI’s ability to affect global productivity is low because “many of the tasks that humans currently perform…are multi-faceted and require real-world interaction, which AI won’t be able to materially improve anytime soon.”
What makes this interview – and really, this paper – so remarkable is how thoroughly and aggressively it attacks every bit of marketing collateral the AI movement has. Acemoglu specifically questions the belief that AI models will simply get more powerful as we throw more data and GPU capacity at them, and asks a pointed question: what does it mean to “double AI’s capabilities”? How does that actually make something like, say, a customer service rep better?
While Acemoglu has some positive things to say — for example, that AI models could be trained to help scientists conceive of and test new materials (which happened last year) — his general verdict is quite harsh: that using generative AI and “too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides.” In essence, replacing humans with AI might break everything if you’re one of those bosses who doesn’t actually know what the fuck they’re talking about.
Every commentator (both pro-AI and AI-sceptic) seems to be unaware of the science in protein design and docking, where ML is actually doing fantastic, never-before-done things and can conceivably make drug design much faster (and once the design is done, you don’t need to reinvent it: the drug/compound can then be made for pennies). When it comes to drug revenues, however, the cost of design pales in comparison to clinical trials ($10–100 million versus $1–3 billion).
Jim Covello, Goldman Sachs’ Head of Global Equity Research, believes that the combined expenditure of all parts of the generative AI boom — data centers, utilities and applications — will cost a trillion dollars in the next several years alone, and asks one very simple question: “what trillion dollar problem will AI solve?” He notes that “replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions [he’s] witnessed in the last thirty years.”
In plain English: generative AI isn’t making any money for anybody because it doesn’t actually make companies that use it any extra money. Efficiency is useful, but it is not company-defining. He also adds that hyperscalers like Google and Microsoft will “also garner incremental revenue” from AI — not the huge returns they’re perhaps counting on, given their vast AI-related expenditure over the past two years.
I quite like Ed’s writing for a cathartic rant against the stupidity of AI.
Has anyone got any reading recommendations on the LLM insanity from a Marxist perspective, though? Assuming AI can replace labour in some industries, it immediately comes up against the labour theory of value (LTV), with the value of the output dropping to almost zero. Companies therefore have to maintain monopolistic false scarcity, which of course tech companies are already trying to do, but it seems to have wider implications for the economy: technofeudalism, I guess.
An increasing reserve army of labor means cheaper labor, increased levels of exploitation, and reduced aggregate demand. It’s not feudalism if instead of serfs you have 10% of the working-age population as surplus; it’s more like “do UBI or get fucked by violence”. Feasibly it cancels out via reduced population growth in developed economies, but the two are temporally mismatched: the demographic decline sets in over the next 20 years, while AI is now (if it works, which is a big if). I rather think it will proletarianize some white-collar workers into shittier jobs.
That certainly seems like the idea most techbros have: turn white-collar workers into quality checkers rather than producers. The thing is, I’m not sure the output of a lot of white-collar jobs can be treated the same as manufacturing output.
If you replace half your labour in manufacturing a TV, the value of a TV drops, and with competition between firms, prices and profits tend to drop too (a quick worked version of this is sketched below). But you can slow down this competitive erosion of profit with anti-competitive behaviour: patents, cartels, etc.
If you replace your graphic designers with AI, the value of graphic design drops, but what you were really trying to “produce” with your fancy branding and packaging was a sense of perceived quality (value). Now that this is lower, consumers adjust their perceptions quickly, and you have to demonstrate your product quality by spending money on things AI can’t replicate yet, e.g. in-person experiences, or even just video promotion (in the short term at least).
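Here’s a minimal worked version of that TV example in standard Marxist value notation; the specific numbers (100/60/40 and the halving of living labour) are illustrative assumptions, not figures from the thread.

```latex
% Per-unit commodity value in Marxist notation: W = c + v + s,
% where c = constant capital (materials, machine depreciation),
% v = variable capital (wages), and s = surplus value.
% Illustrative starting point for one TV:
W  = c + v + s = 100 + 60 + 40 = 200
% Halving the living labour per unit halves the new value added (v + s):
W' = c + \tfrac{v + s}{2} = 100 + 30 + 20 = 150
% The innovating firm briefly pockets the gap of 50 as surplus profit,
% but once competitors adopt the same technique, competition pushes the
% market price toward the lower value W', unless patents, cartels, etc.
% slow that convergence, as the comment above notes.
```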
I mean, if we take a rather broad Marxist view of advertising (a redistribution of the surplus value of labor to unproductive industries whose job is basically market-share fencing), cheaper adverts shift more of that share to capital-intensive tech giants instead of labor-intensive marketers. So Google/Facebook will either get more money as monopolists, or the cost of an advertising campaign drops, very marginally increasing the producers’ share or the consumers’ (via a decrease in price). As marketing budgets are noticeable but not that noticeable (unless it’s Coca-Cola or pharma), the overall effect, I suspect, would be very small, both on prices and on ad companies’ revenues.
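To put a rough number on “very small”: a back-of-the-envelope sketch, with purely hypothetical figures (the 7%, 30% and 50% below are assumptions for illustration, not data from anywhere).

```latex
% Suppose marketing is m = 7\% of a producer's revenue, the creative and
% campaign-production work that generative AI could replace is p = 30\%
% of that marketing budget, and AI cuts its cost by r = 50\%.
% The saving as a share of the producer's revenue is then
\Delta = m \cdot p \cdot r = 0.07 \times 0.30 \times 0.50 \approx 0.01
% i.e. about 1\% of revenue, split between fatter producer margins, a
% slightly lower price, or extra platform take, depending on who holds
% the pricing power; consistent with the "very small" claim above.
```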