I am several hundred opossums in a trench coat

  • 5 Posts
  • 106 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Not every change is going to completely overhaul the app. More than likely, a change fixes some obscure bug that wasn’t caught in testing and only affects a small percentage of devices. Just because you don’t encounter it with your workflow and device doesn’t mean it isn’t a critical bug preventing someone else from using the app. It could also be a new feature targeting a different use case from yours. It could even be as simple as bringing the app into compliance with new platform requirements or government regulations, which can happen a couple of times a year; for example, Google Play regularly raises the minimum target SDK level, forcing Android apps to adopt newer privacy protections.
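
    That last point is easy to underestimate. As a minimal sketch (the module name and version numbers are illustrative, not from any real app), the target SDK bump is a one-line change in a module’s Gradle config, yet it opts the whole app into new platform behaviour that may need fixes elsewhere:

    ```kotlin
    // app/build.gradle.kts (hypothetical module; versions are illustrative)
    android {
        namespace = "com.example.app"
        compileSdk = 34

        defaultConfig {
            minSdk = 24
            // Raising targetSdk opts the app into newer platform behaviour
            // (e.g. stricter notification and background-activity rules),
            // which is why "nothing visibly changed" updates still ship.
            targetSdk = 34
        }
    }
    ```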








  • After a certain point, learning to code (in the context of application development) becomes less about the individual lines of code and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that they sometimes misunderstand you or, if you’re writing UI code, make very strange decisions (since they have no spatial/visual reasoning).

    Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns that keep it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code to jump straight to architecture and patterns). The sketch after this comment shows the kind of decision I mean.
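
    For instance (a minimal sketch with invented names, not from any real project), putting an interface between the UI and the storage layer is exactly the sort of structural call an LLM won’t make for you: no individual line is hard to write, but knowing to introduce the seam, and where, is the actual job.

    ```kotlin
    // Hypothetical example: the interface is the design decision, not the code.

    // The seam: callers depend on this, never on a concrete database or HTTP client.
    interface NoteRepository {
        fun save(note: String)
        fun all(): List<String>
    }

    // One easily swappable implementation; a SQLite- or REST-backed one could
    // replace it later without touching any caller.
    class InMemoryNoteRepository : NoteRepository {
        private val notes = mutableListOf<String>()
        override fun save(note: String) { notes += note }
        override fun all(): List<String> = notes.toList()
    }

    // UI/business code is written against the abstraction, so it stays testable
    // and survives a storage rewrite, which is what "maintainable" means here.
    class NoteScreen(private val repo: NoteRepository) {
        fun addNote(text: String) = repo.save(text.trim())
        fun render(): String = repo.all().joinToString("\n")
    }

    fun main() {
        val screen = NoteScreen(InMemoryNoteRepository())
        screen.addNote("  buy milk ")
        println(screen.render()) // prints "buy milk"
    }
    ```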








  • How much computing power do you think it takes to approximately recognise one predefined word or phrase? Alexa devices do that locally, on device, and only then stream whatever audio follows to more powerful computers in AWS (the cloud). To get ahead of whatever conspiratorial crap you’re about to say next: Alexa devices are not powerful enough to transcribe arbitrary speech. The control flow is roughly the sketch after this comment.

    Again, to repeat: people smarter than you and me have analysed the network traffic from Alexa devices and independently verified that they do not stream audio (or transcripts) unless the on-device audio processing hears something close enough to the wake word. That processing is fairly primitive because it’s cheap, not for conspiracy reasons. I have also observed this, albeit with less rigorous methodology. You can check this yourself; capture the traffic and verify whether the conspiracy holds up.
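
    As a minimal sketch of that architecture (every name here is invented for illustration; this is not Amazon’s code), note that the expensive network path only ever opens after the cheap local detector fires:

    ```kotlin
    // Hypothetical stand-ins for the real components.
    interface Microphone { fun nextFrame(): ShortArray }                  // raw audio frames
    interface CloudSession { fun stream(frame: ShortArray); fun close() }

    // Cheap, local, and deliberately dumb: it only has to match ONE phrase,
    // which is why it fits on a low-power device. Real devices use a tiny
    // keyword-spotting model; a trivial energy check stands in for it here.
    fun looksLikeWakeWord(frame: ShortArray): Boolean {
        if (frame.isEmpty()) return false
        val energy = frame.sumOf { (it * it).toLong() } / frame.size
        return energy > 500_000
    }

    fun listenLoop(mic: Microphone, openSession: () -> CloudSession) {
        while (true) {
            val frame = mic.nextFrame()
            if (!looksLikeWakeWord(frame)) continue  // nothing leaves the device

            // Only now does any audio go to the cloud, and only the follow-up.
            val session = openSession()
            repeat(50) { session.stream(mic.nextFrame()) }
            session.close()
        }
    }
    ```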