Are you seriously equating security software running on business systems with state violence / surveillance on people? Those two things are not even remotely comparable, starting with business systems not being people that have rights
Does anyone actually think that day 5 of a work week has a zero or even negative productivity value?
The rationale is that productivity increases a lot on the remaining four days if employees can actually relax and get private shit done over a 3 day weekend. I do see that this is probably gonna work differently for things like factory line workers, but for office jobs I can totally see this working
Check for yourself: https://gitlab.futo.org/keyboard/latinime/-/blob/master/LICENSE.md
You can release modified versions as long as they’re non-commercial and follow a couple of additional rules.
It looks like they opened the door normally, until the motorcycle got caught on it and forced it all the way open
Gotcha, thanks for explaining!
Wait how is what you’re proposing different from ICE hybrids?
the religion that gets you the most stuff in the afterlife.
I think it would be rather the opposite, should be the one that promises the worst fate in the afterlife to non-believers
Iirc, Mass Effect lets you buy anything you missed in a store later, at least
Except if you continue reading beyond your quote, it goes on to explain why that actually doesn’t help.
Companies and their legal departments do care though, and that’s where the big money lies for Microsoft when it comes to Windows
Training and fine tuning happens offline for LLMs, it’s not like they continuously learn by interacting with users. Sure, the company behind it might record conversations and use them to further tune the model, but it’s not like these models inherently need that
And it makes sense to me that a business would leverage that data in ways to benefit themselves.
Big fat nope on that one. This is exactly what the GDPR is about. I’m giving you my data for a specific purpose, and unless I tell you otherwise, you have no fucking business using that data for anything else. Gonna be interesting to see how this one plays out in the EU.
Happened with Lone Echo for me. It’s a VR game where you’re in a space station, and you move around in zero g by just grabbing your surroundings and pulling yourself along or pushing yourself off of them. I started reflexively attempting to do that in real life for a bit after longer sessions
Isn’t that the difference between a slur and an insult, that a slur is offensive in itself against a certain group of people*, while an insult depends on context?
*Unless used by people from that group itself
HTTP is not Google-controlled, you don’t need to replace that in order to build something new without Google
I agree with your first point, but the latter two:
- GPS data that could be stored and extracted from the dealership and sold or given to the government, insurance companies, and law enforcement.
- GPS data that could be sent in real time if the car has a cellular connection or hijacks the cellular connection in your phone when you connect it to the car.
Why do you think this is more likely to happen with this new regulation, when most modern cars already have a functioning GPS module for navigation and cellular connection for software updates?
Yeah, it certainly still feels icky, especially since a lot of those materials in all likelihood will still have ended up in the model without the original photo subjects knowing about it or consenting. But that’s at least much better than having a model straight up trained on CSAM, and at least hypothetically, there is a way to make this process entirely “clean”.
There are legit, non-CSAM types of images that would still make these changes apparent, though. Not every picture of a naked child is CSAM. Family photos from the beach, photos in biology textbooks, even comic-style illustrated children’s books will allow inferences about what real humans look like. So no, I don’t think that an image generation model has to be trained on any CSAM in order to be able to produce convincing CSAM.
That’s already happening. Slightly different example, but Home Assistant has an integration that gives an LLM of your choice control over your home automation devices. Just talking to your home in natural language without having to memorize very specific phrases is honestly pretty powerful, as long as it works correctly. You can say stuff like “hey it’s a bit dark in the office”, and it just knows to either switch on the office lights, or make them brighter if they’re already on
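The pattern behind that kind of integration is basically tool calling: the model gets the current device states plus the user's utterance, and replies with a structured action instead of free text. Here's a rough, hypothetical sketch of that flow (all names made up, and the LLM call is stubbed with a trivial rule so it runs standalone):

```python
# Hypothetical sketch of LLM-driven home automation. A real setup would
# send the utterance + device states to an LLM; here decide_action() is
# stubbed with a hardcoded rule so the example is self-contained.

def decide_action(utterance: str, devices: dict) -> dict:
    """Stand-in for the LLM call: returns a structured action."""
    office = devices["light.office"]
    if "dark" in utterance and "office" in utterance:
        if not office["on"]:
            return {"device": "light.office", "action": "turn_on"}
        # Light is already on, so raise brightness instead (capped at 255).
        return {"device": "light.office", "action": "set_brightness",
                "brightness": min(255, office["brightness"] + 80)}
    return {"action": "none"}

def apply_action(action: dict, devices: dict) -> None:
    """Execute the structured action against the device registry."""
    if action["action"] == "turn_on":
        devices[action["device"]]["on"] = True
    elif action["action"] == "set_brightness":
        devices[action["device"]]["brightness"] = action["brightness"]

devices = {"light.office": {"on": True, "brightness": 120}}
action = decide_action("hey it's a bit dark in the office", devices)
apply_action(action, devices)
print(devices["light.office"])  # brightness bumped from 120 to 200
```

The nice part of this design is that the fuzzy interpretation ("it's a bit dark" → brighten vs. turn on) lives entirely in the model, while the actions it can emit are a fixed, validated set, so a confused model can't do anything outside that list.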