cultural reviewer and dabbler in stylistic premonitions

  • 290 Posts
  • 761 Comments
Joined 3 years ago
Cake day: January 17th, 2022










  • Arthur Besse@lemmy.ml (OP) to AI@lemmy.ml · AI Needs Your Help!
    18 hours ago

    Something that people need to understand is that AI companies (let’s talk about them instead of “AIs” that have no agency) are on a race to use less energy and less water per request for a very simple and selfish reason: it costs money.

    I agree here; left to their own devices, money is generally what matters to for-profit companies. Which is why they mostly continue to build datacenters (including those that are primarily for “AI”) where they do: almost entirely in places where they are competing with others for scarce water, because the alternatives are even more expensive.

    Underwater experiments in 2020

    That’s a neat idea, and maybe it will become widespread one day.

    However that particular experimental project from Microsoft was conceived of in 2013, deployed in 2018, and concluded in 2020. Microsoft is not currently operating or planning to operate any more underwater datacenters: https://www.datacenterdynamics.com/en/news/microsoft-confirms-project-natick-underwater-data-center-is-no-more/

    Among the things they’re doing instead (specifically for AI) is restarting a decommissioned nuclear plant at Three Mile Island in Pennsylvania, a state with (like most states) a long history of conflict related to water scarcity and privatization.

    Real world deployment in 2021

    This appears to be the only currently operating underwater datacenter project (though the most recent news about it I can find is from 2023), and it is in a certain country where it is somewhat easier for long-term environmental concerns to supersede capitalism’s profit motive. It would be great if they could make it an economically viable model that becomes commonplace, but until they do… datacenters today are still extremely thirsty.




  • Arthur Besse@lemmy.ml (OP) to AI@lemmy.ml · AI Needs Your Help!
    1 day ago

    Did you even read the second part of my comment before getting mad?

    yeah, i did. you wrote:

    So it should be easy enough to build them in locations that have easy access to cheap energy and large amounts of water

    if you think it should be easy enough, what is your explanation for why datacenters are continuing to be built in locations where they’re competing with agriculture, other industries, and/or residential demand for scarce water resources (as you can read about in the links in my previous comment)?









  • i don’t usually cross-post my comments but I think this one from a cross-post of this meme in programmerhumor is worth sharing here:

    The statement in this meme is false. There are many programming languages which can be written by humans but which are intended primarily to be generated by other programs (such as compilers for higher-level languages).

    The distinction can sometimes be missed even by people who are successfully writing code in these languages; this comment from Jeffrey Friedl (author of the book Mastering Regular Expressions) stuck with me:

    I’ve written full-fledged applications in PostScript – it can be done – but it’s important to remember that PostScript has been designed for machine-generated scripts. A human does not normally code in PostScript directly, but rather, they write a program in another language that produces PostScript to do what they want. (I realized this after having written said applications :-)) —Jeffrey

    (there is a lot of fascinating history in that thread on his blog…)
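
    To make the quote concrete, here is a minimal sketch of the workflow it describes (a hypothetical illustration of the idea, not something from that thread): a human writes and maintains a short Python program, and the PostScript it emits is machine-generated output that nobody edits by hand.

    ```python
    # Hypothetical illustration: the Python below is what a human writes and
    # edits; the PostScript it produces is machine-generated and disposable.

    def circles_ps(n, radius=20, spacing=50):
        """Return a PostScript program that draws a row of n circles."""
        lines = ["%!PS-Adobe-3.0"]
        for i in range(n):
            x = 72 + i * spacing  # 72 points = 1 inch left margin
            lines.append(f"newpath {x} 400 {radius} 0 360 arc stroke")
        lines.append("showpage")
        return "\n".join(lines)

    with open("circles.ps", "w") as f:
        f.write(circles_ps(5))
    ```

    The resulting circles.ps is perfectly valid PostScript, but the artifact a human would ever want to maintain is the generator, not its output.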











  • They have to know who the message needs to go to, granted. But they don’t have to know who the message comes from, hence why the sealed sender technique works. The recipient verifies the message via the keys that are exchanged if they have been communicating with that correspondent before or else it is a new message request.

    So I don’t see how they can build social graphs if they don’t know who the sender of all messages is; they can only plot recipients, which is not enough.

    1. You need to identify yourself to receive your messages, you send and receive messages from the same IP address, and there are typically not many, if any, other Signal users sharing the same IP address. So the cryptography of “sealed sender” is just for show: the metadata privacy remains dependent on them keeping their promise not to correlate your receiving identity with the identities of the people you’re sending to (see the first sketch after point 2). If you assume they’ll keep that promise, the sealed sender cryptography provides no additional benefit; if they break it, the cryptography doesn’t stop them. They outsource the keeping of their promises to Amazon, btw (a major intelligence contractor).

    2. Just in case sealed sender was actually making it inconvenient for the server to know who is talking to whom… Signal silently falls back to “unsealed sender” messages if the server returns a 401 when the client tries to send “sealed sender” messages, which the server actually does sometimes (see the second sketch below). As the current lead dev of Signal-for-Android explains: “Sealed sender is not a guarantee, but rather a best-effort sort of thing”, so “I don’t think notifying the user of a unsealed send fallback is necessary”.
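
    To illustrate point 1: here is a hypothetical sketch (mine, not anything Signal runs; the logs and names below are invented) of how a server operator, or the cloud provider hosting it, could attribute “anonymous” sealed-sender submissions using only ordinary connection metadata.

    ```python
    # Hypothetical sketch: linking sealed-sender submissions to senders using
    # only connection metadata. Not Signal's code; the logs below are invented
    # to show the kind of correlation the sealed-sender design does not prevent.

    from collections import defaultdict

    # (timestamp, source_ip, account_id) seen when accounts authenticate to
    # fetch their own messages.
    auth_log = [
        (1000.0, "203.0.113.7", "alice"),
        (1001.5, "198.51.100.9", "bob"),
    ]

    # (timestamp, source_ip, recipient_id) seen for sealed-sender submissions;
    # the sender field is cryptographically absent, but the source IP is not.
    sealed_sends = [
        (1000.8, "203.0.113.7", "bob"),
        (1002.1, "198.51.100.9", "alice"),
    ]

    # Which accounts have recently been seen on each IP address?
    accounts_on_ip = defaultdict(set)
    for ts, ip, account in auth_log:
        accounts_on_ip[ip].add(account)

    # Attribute each "anonymous" send to whoever owns its source IP.
    edges = [
        (sender, recipient, ts)
        for ts, ip, recipient in sealed_sends
        for sender in accounts_on_ip[ip]
    ]

    print(edges)  # [('alice', 'bob', 1000.8), ('bob', 'alice', 1002.1)]
    ```

    The point is not that this exact code exists anywhere; it’s that nothing in the protocol prevents it, so the protection is a policy promise rather than a cryptographic guarantee.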
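
    And to illustrate the shape of the fallback described in point 2 (again a hypothetical sketch of the behavior as described; the class and function names are invented, not Signal’s actual APIs):

    ```python
    # Hypothetical sketch of a silent unsealed-sender fallback. The names are
    # made up; the point is that a client which quietly retries with an
    # authenticated send reveals the sender to the server whenever the server
    # answers 401, and the user never learns that it happened.

    class StubServer:
        """Toy stand-in for a message server that rejects sealed sends."""

        def submit_sealed(self, recipient, ciphertext):
            return 401  # pretend the server refuses the anonymous submission

        def submit_authenticated(self, recipient, ciphertext, credentials):
            print(f"server now knows: {credentials} -> {recipient}")
            return 200

    def send_message(server, recipient, ciphertext, sender_credentials):
        status = server.submit_sealed(recipient, ciphertext)
        if status == 401:
            # Silent fallback: resend authenticated ("unsealed"), without
            # notifying the user that the metadata protection was dropped.
            status = server.submit_authenticated(
                recipient, ciphertext, sender_credentials
            )
        return status

    send_message(StubServer(), "bob", b"...", "alice")
    ```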

    Given the above, don’t you think the fact that they’ve actually gone to the trouble of building sealed sender at all, which causes many people to espouse the belief you just did (that their cryptographic design renders them incapable of learning the social graph, not to mention learning which edges in the graph are most active, and when), puts them rather squarely in “doth protest too much” territory? 🤔