Research shows 25% of web pages posted between 2013 and 2023 have vanished. A few organisations are racing to save the echoes of the web, but new risks threaten their very existence.

It’s possible, thanks to surviving fragments of papyrus, mosaics and wax tablets, to learn what Pompeiians ate for breakfast 2,000 years ago. Understand enough Medieval Latin, and you can learn how many livestock were reared at farms in Northumberland in 11th Century England – thanks to the Domesday Book, the oldest document held in the UK National Archives. Through letters and novels, the social lives of the Victorians – who they loved and hated – come into view.

But historians of the future may struggle to understand fully how we lived our lives in the early 21st Century. That’s because of a potentially history-deleting combination: so much of daily life now happens digitally, and there is a paucity of official efforts to archive the world’s information as it is produced.

However, an informal group of organisations are pushing back against the forces of digital entropy – many of them operated by volunteers with little institutional support. None is more synonymous with the fight to save the web than the Internet Archive, an American non-profit based in San Francisco, started in 1996 as a passion project by internet pioneer Brewster Kahle. The organisation has embarked on what may be the most ambitious digital archiving project of all time, gathering 866 billion web pages, 44 million books, 10.6 million videos of films and television programmes and more. Housed in a handful of data centres scattered across the world, the collections of the Internet Archive and a few similar groups are the only things standing in the way of digital oblivion.

  • mspencer712@programming.dev · 3 months ago (edited)

    Last time I went snooping:

    15 installs of phpBB, which would require work to put back online as their communities are of course gone: remove spam, undo defacement, etc.

    7 installs of Dormando’s Oekaki BBS Clone

    5 installs of WonderCatStudio BBS

    4 installs of OekakiPotato / RanmaGuy etc.

    and several users who just used PHP to ‘include’ headers and table-of-contents page parts.

    (Yes I was quite the weeb. Still am, but I was one too. :-) )

    • Onno (VK6FLAB) · 3 months ago

      If this were my problem to solve, I would host it internally, as-is, on a virtual machine of your choice, then create a static HTML mirror from the public information and put that up on AWS S3 as a static website.
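
      For illustration, a minimal sketch of that workflow in Python, assuming the old forums are already running read-only on an internal VM, that wget and boto3 are available, and that an S3 bucket with static-website hosting enabled already exists; the hostname, bucket name and paths below are placeholders, not details from the thread.

```python
#!/usr/bin/env python3
"""Sketch: snapshot a locally hosted legacy site to static HTML, then push it to S3."""
import mimetypes
import subprocess
from pathlib import Path

import boto3  # pip install boto3

SITE_URL = "http://old-forums.internal/"  # placeholder: the read-only VM hosting the old installs
MIRROR_DIR = Path("mirror")               # where wget writes the static copy
BUCKET = "my-forum-archive"               # placeholder: existing bucket with website hosting enabled

# 1. Crawl the running site into a self-contained static copy.
#    --convert-links rewrites URLs so pages work without the original server,
#    --adjust-extension adds .html to dynamic pages,
#    --no-host-directories keeps the hostname out of the output paths.
subprocess.run(
    [
        "wget", "--mirror", "--page-requisites", "--convert-links",
        "--adjust-extension", "--no-parent", "--no-host-directories",
        "--directory-prefix", str(MIRROR_DIR), SITE_URL,
    ],
    check=True,
)

# 2. Upload every file with a Content-Type header so browsers render pages
#    instead of downloading them.
s3 = boto3.client("s3")
for path in MIRROR_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(MIRROR_DIR).as_posix()
        content_type = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})
        print("uploaded", key)
```

      Once the sync finishes, the bucket’s website endpoint serves a read-only copy: no PHP, no database, nothing left to moderate.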

      • mspencer712@programming.dev · 3 months ago (edited)

        That does make a lot of sense.

        I think I’m feeling embarrassed about not being a perfect ops person, while I was going to school for computer science. Like, part of me wants to create this unrealistic private cloud thing, like I’m going to pretend “I’m still around, where have you been? See your old password still works, and look at all the awesome stuff I can do now!”. I already have my 20+ year old passwd file imported into OpenLDAP / slapd and email is using that already.

        It’s not realistic. I feel fondness for the internet of 20-25 years ago, but it’s not coming back. If people can log in with 20 year old passwords and upload web content, we both know what’s really going to happen.

        I just feel like such a failure for letting it rot away. Really, any place that accepts submissions requires a live audience and staff to keep it moderated, and accepting new submissions is the only reason to even run original code. What you’re describing is probably the only sane way to do this.

        Edit: although I do still feel that the world needs that sort of private cloud in a box. Sure Facebook has taken all the wind out of the sails of many private web hosting efforts - the “family nerd” no longer gets love and gratitude for offering to host forums and chat, they get “that’s stupid, I’ll just use Facebook” - but we still need the capability.

        And an open security architecture to clone would help cover the daylight between “here’s a web app in a Docker container” and an actual secure hosted instance of it. It would involve more inconvenience than strictly necessary for the substantial security benefits it would offer. (A better-designed, more customized solution would help with that, but one step at a time.) But that would give the average homelab user protection against future attacks that today would feel like wild “whoa, who are you protecting against, the NSA?” paranoia.