UFO50 and Diceomancer, both are fantastic! Also some lethal company, plateup and remnant 2 with my friends. Want to try out the new factorio dlc but not sure if I want to sink the time it demands.
Your “new alien intelligence” couldn’t even count how many Rs are in strawberry, shut the fuck up.
The funny thing is that he’s correct when he says that we are not sufficiently organized to deal with climate change. He probably wouldn’t like the solution though.
Honestly, this is expected of tech bros, just look at crypto. Shame on every computer scientist that gave legitimacy to these dipshits for a paycheck, especially the big names of the deep learning old guard huffing that heavy copium.
We consistently find across all our experiments that, across concepts, the frequency of a concept in the pretraining dataset is a strong predictor of the model’s performance on test examples containing that concept. Notably, model performance scales linearly as the concept frequency in pretraining data grows exponentially
This reminds me of an older paper on how LLMs can’t even do basic math when the examples fall outside the training distribution (note that this was GPT-J, and as far as I’m aware no such analysis is possible with GPT-4, I wonder why), so this phenomenon is not exclusive to multimodal stuff. It’s one thing to pre-train a large-capacity model on a general task that might benefit downstream tasks, but wanting these models to be general purpose is really, really silly.
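To make the quoted scaling concrete, here’s a toy log-linear fit (pure numpy; every number below is made up just for illustration, not taken from the paper):

```python
# Toy illustration of the log-linear trend the quote describes:
# accuracy improves roughly linearly while concept frequency grows exponentially.
# The frequencies and accuracies below are made-up numbers, not from the paper.
import numpy as np

concept_frequency = np.array([1e2, 1e3, 1e4, 1e5, 1e6])      # occurrences in pretraining data
accuracy          = np.array([0.22, 0.31, 0.39, 0.52, 0.60])  # hypothetical test accuracy

# Fit accuracy ≈ a * log10(frequency) + b
a, b = np.polyfit(np.log10(concept_frequency), accuracy, deg=1)
print(f"accuracy ≈ {a:.3f} * log10(freq) + {b:.3f}")

# Under a fit like this, each fixed bump in accuracy costs another ~10x data.
needed_freq = 10 ** ((0.90 - b) / a)
print(f"frequency needed for 90% accuracy under this toy fit: ~{needed_freq:.1e}")
```

That’s the whole problem in miniature: linear gains demand exponentially more examples of a concept, which is why “just scrape more data” eventually stops being a plan.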
I’m of the opinion that we’re approaching a crisis in AI: we’ve hit a barrier on what current approaches are capable of achieving, and no amount of data, labelers, or tinkering with architectural minutiae or (god forbid) “prompt engineering” can fix that. My hope is that with the bubble bursting the field will have to reckon with the need for algorithmic and architectural innovation, more robust standards for what constitutes a proper benchmark and for reproducibility at the very least, and maybe, just maybe, extend its collective knowledge of other fields of study past 1960s neuroscience and explore the ethical and societal implications of its work more deeply than the oftentimes tiny obligatory ethics section of a paper. That is definitely an overgeneralization, so sorry to any researchers out here <3, I’m just disillusioned with the general state of the field.
You’re correct about the C-suites though, all they needed to see was one of those stupid graphs that showed the line going up, with model capacity on the x axis and performance on the y axis, and their greed did the rest.
There is a disconnect between what computer scientists understand as AI and what the general public understands as AI. This was previously not a problem, nerds give confusing names to stuff all the time, but it became a problem after this latest hype cycle, where incurious laypeople are in charge of the messaging (or, in a less charitable interpretation, benefit from fear of the singularity™). It doesn’t help that scientific communication is dogshit.
got the Samsung buds pro 2 at half price recently and I kind of like them, but they were a bit underwhelming even at that price. I’ve never spent a lot on audio in general, so they were actually a big improvement, but there was no “wow” factor or anything. Plus having to install bloatware that asks for all permissions under the sun sucks (why the fuck would a settings menu want to know my location???).
I do think you underestimate how nice the noise cancellation can be though. I moved to a big city and my hick ass cannot deal with all the fucking noise. Plus I’m clumsy and end up getting wires caught on everything, which means wired stuff also becomes e-waste fairly quickly.
the ice levels in Spelunky HD are my least favorite, but this track almost makes up for all the stupid UFOs crashing into me from off-screen
I think going for the Rebuild treatment was a really cool idea and they mostly executed it pretty well. One thing I didn’t get is why they put that blur effect on everything.
doing that as well, but asking here can’t hurt right?
The only guy I have on my friends list that plays this stuff is definitely not afraid to let others know; he does extensive reviews of every datable girl in a given game. But they’re also entertaining to read/make fun of sometimes.
And even then, OP still has a point.
Yeah, kinda. But the framing is all fucked. Someone that can’t improve themselves because of depression doesn’t need “tough love” or to hear that they are uninteresting and on their own, they need to see the innate value in themselves. Everyone IS interesting, they just have to nurture that and demonstrate it to others.
There is no deeper understanding about the issues they are marching for. It is all just slogans.
I don’t understand how people who don’t know shit don’t just shut the fuck up until they learn more.
I’m sick of simulation theory as well and want something cooler to take its place. Maybe Gnosticism?
They do once their depression gets better though? Anhedonia, loss of interest/libido/attention/whatever the fuck else are symptoms of depression. I’m all for self-improvement, my own mental health improved greatly as a result of trying to improve myself, to the point I consider myself no longer depressed. But we’re social creatures and no one builds self-confidence and mental resilience in a vacuum. It’s often up to the depressed person to put themselves out in situations where this can happen, but sometimes it does not work out for whatever reason and the whole thing is a long process. In this situation self-compassion is a lot better than telling yourself you’re a sack of shit.
Also, isn’t the interesting life thing all backwards? If you like a person you get curious and find them interesting. If I like a guy I’ll find what they are into cool, be it singing, playing chess or knowing a lot about bugs.
No one is owed that kind of attention, but most people are worthy of compassion.
I don’t know shit but from casually checking the news mega that seems to be the general sentiment. What I don’t get is wouldn’t this ground invasion be a completely unforced error on Israel’s part? Why are they even considering it?
Implementing fascism as a mechanic only for it to be unsustainable gameplay-wise is a good bit tbh
Could be because cats can be really distrustful of strangers. My cat is really sweet but she’ll hiss at almost every guest. Could be an upbringing thing though, my neighbor has 4 cats and 3 of them were street cats that she took in; when I occasionally take care of them, the only one that gives me any trouble is the one that’s been there since she was a kitten.
So this guy has the time to complain about vuvuzela no food but not to provide some context for the indictment, instead linking to a 39-page PDF? And the rest of the article is just factoids about gold?? Western journalists are a fucking disgrace.
Also, England withheld (read: stole) like $1 billion in gold from Venezuela, but I guess that tidbit didn’t make the cut
1 - Some of the appeal of the books is present in the series, especially in the early seasons, so there’s that. More cynically, it’s probably the first instance of medieval fantasy prestige TV (I know there was other stuff like Pillars of the Earth and whatnot, but those didn’t have the HBO brand), so there was some novelty to it. Borrowing from Lindsay Ellis, it’s “hot fantasy that FUCKS”. It’s juvenile as all hell and eyeroll-inducing, but it was key to marketing it to general audiences.
2 - A decent 7 while it was running, because of the whole social aspect and without the hindsight of how shit it was going to get; a light 3 now, and a 0 if you’re in any way averse to gratuitous violence/sex.
3 - I think the appeal of the books that transfers to the series is a) some of the characters and b) the politicking.
The problem with the characters is that they get flanderized to all hell later on, or sometimes it becomes clear the showrunners didn’t really understand them, or at least didn’t know what to do with them. But still, some of them are compelling, even if they are fucking assholes.
I think the core appeal of GoT though is seeing an inflection point in the history of this fictional world. Not because there’s wars going on, but because characters like Jon and Daenerys (not coincidentally fan favorites) are struggling to surpass the “might makes right” world they live in, and sometimes succeeding. Also because of the growing presence of supernatural shit. It gives this feeling of a new world peeking in, with all the promise and terror that comes with that kind of change. The show fumbles both of these aspects REALLY hard later on though. That’s because it bought into the “dark fantasy” meme, played down the supernatural aspects (nerd shit), and made a U-turn back to the status quo (the last few episodes reek of liberalism).
4 - Typical of the early seasons in some ways, in the sense that it borrows heavily from the books, but the focus shifts to war stuff so there’s some distinction there.
5 - It’s been a while since I’ve seen the series, but seasons 1-4 are pretty decent, 5 is okayish, 6 was pretty bad and 7-8 are atrocious.
6 - Dunno, not a native speaker. I was fine with subtitles off most of the time though.
7 - Haven’t watched The Sopranos yet, but GoT has probably one of the worst endings I’ve ever seen.
Mengele is the right comparison because this is not just some oafs botching a surgery, this is sadist shit.
Also the poor monkey finding some comfort by holding hands with her “roommate” through the cage makes me want to cry
This is fucked, you don’t use a black-box approach in anything high risk without human supervision. Whisper could probably be used to help accelerate transcriptions done by an expert, maybe as some sort of “first pass” that needs to be validated, but even then it might not actually speed things up and might impact quality (see coding with Copilot). Maybe also use the timestamp information to filter out the most egregious hallucinations, or a bespoke fine-tuning setup (assuming it was fine-tuned in the first place)? Just spitballing here, I should probably read the paper to see what the common error cases are.
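For the timestamp filtering idea, something like this rough sketch is what I mean, using the open-source openai-whisper package (the file name is a placeholder and the numeric thresholds are untuned guesses on my part):

```python
# Sketch of a "first pass for a human reviewer": transcribe with Whisper, then flag
# segments whose timing or confidence stats look like typical hallucination patterns.
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview.wav")  # placeholder audio file

flagged = []
for seg in result["segments"]:
    duration = seg["end"] - seg["start"]
    words_per_sec = len(seg["text"].split()) / max(duration, 1e-6)
    suspicious = (
        duration < 0.2                     # text attached to a near-zero-length span
        or words_per_sec > 6.0             # implausibly fast "speech"
        or seg["no_speech_prob"] > 0.6     # the model itself suspects silence/noise
        or seg["avg_logprob"] < -1.0       # low-confidence decode
        or seg["compression_ratio"] > 2.4  # repetitive, loop-like output
    )
    if suspicious:
        flagged.append(seg)

print(f"{len(flagged)} of {len(result['segments'])} segments flagged for human review")
for seg in flagged:
    print(f"[{seg['start']:7.2f}-{seg['end']:7.2f}] {seg['text'].strip()}")
```

Even then this only catches the loud failure modes; the scary hallucinations are the fluent-sounding ones, which is exactly why the human in the loop isn’t optional.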
It’s funny, because this is the OpenAI model I had the least cynicism towards, did they bazinga it up when I wasn’t looking?