

? eff has international reach and the article applies everywhere. it was clearly intentionally written to apply anywhere. your comment makes no sense.
I exist or something probably
? eff has international reach and the article applies everywhere. it was clearly intentionally written to apply anywhere. your comment makes no sense.
do you think the us is the only state that works with corps to surveil?
jailbreaks actually are relevant with the use of llm for anything with i/o, such as “automated administrative assistants”. hide jailbreaks in a webpage and you have a lot of vectors for malware or social engineering, broadly hacking. as well as things like extracting controlled information.
you cannot unstir an egg, the guardrails and biases can be finetuned to not be as visible, but the training is ultimately irreversible.
what money saved on wages?? it’s competing with a dollar a day laborers. $10 per 1 million tokens, for the “bad” (they all suck) models (something that cant even do this job!). if you can pretend the hallucinations dont matter, you are getting a phone call for (4 letters per token, 6 minute avg support call, 135 wpm talking rate let’s say 120 to be nice -> 720 tokens per call) = $0.0072 per call. the average call center employee handles around 40 calls a day, so hey, the bad cant-actually-do-it chatgpt 4 is 70 cents per day cheaper than your typical call center indian!
Except. that is the massively subsidized money hemorrhaging rate. We know that oai should be charging probably an oom or two more. and the newer models are vastly more expensive, o1 takes around 100x the compute, and still couldnt be a call center employee. so that price is actually at least $30 per day. Cheaper than a us employee, but still cant actually do the job anyway.
except current robot systems and people are likely cheaper, especially when you consider companies are liable for what llm say. which leaves, essentially, scams and other slop, as the last remaining use cases. multi trillion dollar business without a use case.
the tech is barely good enough that it is vaguely maybe feasibly cheaper to waste someone’s time using a robot rather than a human- oh wait we do that already with other tech.
“in 20 years imagine how good it’ll be!” alas, no, it scales logarithmically at best and all discussion is poisoned by “what it might be!” in the future, rather than what it is.
regardless of where you want to define the starting point of the boom, it’s been clear for months up to years depending on who you ask that they are plateuing. and harshly. stop listening to hypesters and people with a financial interest in llm being magic.
Oh! Hahahaha. No.
the vc techfeudalist wet dreams of llm replacing humans are dead, they just want to milk the illusion as long as they can.
the unit is just a report of orientation, not magnitude. if you have a digital counter you are limited by the precision of the digital counter, not the units chosen. an analog measurement however is limited instead by other uncertanties. precision has, genuinely, no direct relationship to units. precision is a statistical concept, not a dimensional one.
all this started in 2023? alas no time marches on, llm have been a thing for decades and the main boom happened more in 2021. progress is not fast, no, these are companies throwing as much compute at their problems as they can. deepseek’s caused a 2t drop by being marginal progress in a field (llms specifically) out of ideas.
Well both of those things have been true months if not years, so if those are the conditions for a pop then they are met.
an arms race for what? more efficient slop? most of their value comes from the expected exclusivity - that say openai is the only one who can run something like o1. deepseek has made that collapse. i doubt they will stop doing stuff, but i dont think you understand the nature of the situation here.
also lol, “performs well in synthetic tests it was optimized to score well in” yes that literally describes every llm. Make no mistake: none of this has a real use case. not deepseek’s model, not openai’s, not apples, etc. this is all nonsense, literally. the stock market lost 2 trillion dollars overnight because something that doesnt have a use case was one upped by something else that also doesnt have a use case. it’s very funny.
significance refers to a measurement certainty about a number itself, especially its precision! and is unrelated to the magnitude/scale. the number and dimension “2.5634 mm” has more significant digits than the number “5,000 mm”, though the most significant digit is 2 and 5 respectively, and least significant 4 and 5 respectively. this is true if i rewrite it as 0.0025634 m and 5 m. it does work for doing what you say in this case because a date is equivalent to a single number, but is not correct in other situations. that’s why i said it does work here.
largest to smallest increment is completely adequate, and describes the actual goal here well. most things are ambiguous if you try hard enough.
largest to smallest is correct. 1 mile is larger than 20 meters. if i had specified numerical value or somesuch, maybe you’d be correct. though significance works as well.
You are looking not for precision but for largest to smallest, descending order. this is distinct from precision, a measure of how finely measured something is. 2025.07397 is actually more precise than 2025/01/27, but is measured by the largest increment.
tech has been subsidizing ai costs by magnitudes for years trying to make fetch happen, slop is slop. it’s overvalued like crazy and the first hint of market competition has drained trillions from the stocks because it’s an overvalued bubble. if china can do that by releasing competition then ok. maybe we should all be putting these trillions in things actually useful to humans.
it is certainly spurred by the developing situation in the us and has examples from it, but otherwise nothing applie only to the us. and the us finalizing it going to shit effects you anywhere in the world.