  • Never heard of him before and dgaf about whoever Linus Sebastian is. All this stuff I’ve been seeing about what an asshole “Linus” is had me thinking it must be some kerfuffle about Linus Torvalds, but the bits and pieces I read made no sense. Even less now that I’ve figured out it’s just some random asshole named Linus. How did I end up here? Take me back to my room, please.




  • wunderwaffen

    too good a word not to research… comes from WWII, naturally…

    The Panjandrum (British) - two wheels connected by a sturdy, drum-like axle, with rockets on the wheels to propel it forward. Packed with explosives, it was supposed to charge toward the enemy defenses, smashing into them and exploding, creating a breach large enough for a tank to pass through. But when it was tested on an otherwise peaceful English beach, things didn’t go quite as planned. The 70 slow-burning cordite rockets attached to the two 10-foot steel wheels sparked into action, and for about 20 seconds it was quite impressive. Until the rockets started to dislodge and fly off in all directions, sending a dog chasing after one of them and generals running for cover. The rest was sheer chaos, as the Panjandrum charged around the beach, completely out of control. Unsurprisingly, the Panjandrum never saw battle.

    [Photo: the Panjandrum]

    The Goliath Tracked Mine (German) The tracked vehicle could carry 60kg of explosives and was steered remotely using a joystick control box attached to the rear of the Goliath by 650m of triple-strand cable. Two of the strands accelerated and manoeuvred the Goliath, while the third was used to trigger the detonation.

    Each Goliath had to be disposable, as each was built specifically to be blown up along with an enemy target. The first models were powered by an electric motor, but these proved difficult to repair on the battlefield, and at 3,000 Reichsmarks were not exactly cost effective. As a result, later models (the SdKfz 303) used a simpler, more reliable gasoline engine.

    Being sent back to the drawing board is a disgrace usually reserved for weapons that never saw battlefield action. Goliaths did see combat and were deployed on all German fronts beginning in the spring of 1942. Their role in the action was usually nugatory, however, having been rendered immobile by uncompromising terrain or deactivated by cunning enemy soldiers who had cut their command cables.

    [Photo: soldiers standing with several small Goliath remotely controlled (by wire) explosive devices]

    The bat bomb (American) Shortly after the attack on Pearl Harbor, a Pennsylvania dentist named Lytle S. Adams contacted the White House with a plan of retaliation: bat bombs.

    The plan involved dropping a bomb containing more than 1000 compartments, each containing a hibernating bat attached to a timed incendiary device. A bomber would then drop the principal bomb over Japan at dawn and the bats would be released mid-flight, dispersing into the roofs and attics of buildings over a 20- to 40-mile radius. The timed incendiary devices would then ignite, setting fire to Japanese cities.

    Despite the somewhat outlandish proposal, the National Research Defense Committee took the idea seriously. Thousands of Mexican free-tailed bats were captured (they were, for some reason, considered the best option) and tiny napalm incendiary devices were built for them to carry. A complicated release system was developed and tests were carried out. The tests, however, revealed an array of technical problems, especially when some bats escaped prematurely and blew up a hangar and a general’s car.

    In December 1943, the Marine Corps took over the project, running 30 demonstrations at a total cost of $2 million. Eventually, however, the program was canceled, probably because the U.S. had shifted its focus onto the development of the atomic bomb.

    [Photo: a bat attached to a small explosive device]

    The Gustav rail gun (German) The railway-mounted weapon was the largest gun ever built. Fully assembled, it weighed in at 1,344 tons, and was four stories tall, 20 feet wide, and 140 feet long. It required a 500-man crew to operate, and had to be fully disassembled to be moved, as the railroad tracks could not bear its weight in transit. It required 54 hours to assemble and prepare for firing.

    The bore diameter was just under 3 feet, and each shot required a 3,000-pound charge of smokeless powder. The gun fired two different projectiles. The first was a 10,584-pound high-explosive shell that could produce a crater 30 feet in diameter. The other was a 16,540-pound concrete-piercing shell, capable of punching through 264 feet of concrete. Both projectiles could be fired, with reasonably accurate aim, from more than 20 miles away.

    The Gustav Gun was used in Sevastopol in the Soviet Union during Operation Barbarossa and destroyed various targets, including a munitions facility in the bay. It was also briefly used during the Warsaw Uprising in Poland. The Gustav Gun was captured by the Allies before the end of World War II and dismantled for scrap. The second massive rail gun, the Dora, was disabled to keep it from falling into Soviet hands near the end of the War.





  • With respect to pricing, I’ve been using SES for maybe 10 years, possibly more - this month is the first time I think I’ve ever been charged. The free tier used to include a very large number - I think it was 30,000 or more emails a day - that I never exceeded. Now it’s 0.10 USD per thousand messages, which is a pretty big change from free, even though the overall costs are small (a 10,000-message send is about a dollar) - and it’s still a bargain. As with everything in “the cloud” though, the big players will squeeze the competition out then increase prices. I fully expect SES prices to keep increasing now they’ve figured out they can extract a few extra dollars from users, and how relatively cheap SES is compared to the other overpriced crap. It won’t surprise me if they jack this up significantly in the coming years.

    Referencing sending quotas - Amazon is very lenient - I was talking about the big providers like gmail. It might be different now that my accounts have a long reputation as trustworthy senders, but when I first started using SES way back when, gmail and yahoo would start rejecting mail if more than about 200 messages were submitted in a single batch, so I had to check the recipient domains and limit the numbers for each hourly iteration to stop them rejecting mail. I keep the email batches pretty small since I’m only sending out about 5-10K at a time, and I stagger the send over several hours (roughly like the sketch below).
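    That per-domain cap is easy enough to script. A rough bash sketch of the idea, run once per hourly iteration - recipients.txt, send-batch.sh and the 200 figure are all placeholders from my own setup, not anything SES mandates:

    ```
    #!/usr/bin/env bash
    # Cap how many recipients per rate-limited domain go out in this run.
    MAX_PER_DOMAIN=200   # ballpark; tune per provider

    for domain in gmail.com yahoo.com; do
        # Take this hour's slice for the domain...
        grep -i "@${domain}\$" recipients.txt | head -n "$MAX_PER_DOMAIN" > batch.txt
        ./send-batch.sh batch.txt   # stand-in for whatever submits to SES
        # ...and drop the sent addresses from the queue file.
        grep -vxFf batch.txt recipients.txt > recipients.tmp && mv recipients.tmp recipients.txt
    done

    # Addresses at non-rate-limited domains can go in one shot.
    grep -ivE "@(gmail|yahoo)\.com\$" recipients.txt > batch.txt
    ./send-batch.sh batch.txt
    ```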

    It’s a bit of a minefield but overall I’m pretty happy with SES, mainly because the mail gets delivered. You don’t need to originate sending from an EC2 host (the pricing is the same, even though they make a distinction in the price list):

    Outbound email from EC2:     $0.10 per 1,000 emails, plus $0.12 for each GB of attachments you send*

    Outbound email from non-EC2: $0.10 per 1,000 emails, plus $0.12 for each GB of attachments you send

    *You might incur additional data transfer charges for using EC2 (it seems very likely they will increase the non-EC2 price to drive you to a place where they are getting your compute and storage $ as well).

    https://aws.amazon.com/ses/pricing/


  • SES is indeed the best option if you want reliable delivery for a reasonable cost. The pricing changed just last month, so it’s no longer effectively free for small users, but it’s relatively cheap (for now). I looked at the prices you quoted for other services and they seem ridiculously high, but it’s fair to say that sending legitimate (non-spam) bulk email is not so easy if you do everything yourself - getting your mail accepted is very challenging.

    For example, even using SES, if you attempt to originate too many emails to one provider in a single call, they may start rejecting everything - I had to put counters into the code to limit how many gmail addresses would be sent with each iteration. SES also rate limits, so you need to manage that somehow.

    It sounds like you’re planning to send a LOT of email. You’ll also need to be mindful of the bounce rate and complaints (spam / abuse reports from recipients), because SES will shut you down if they go over a certain threshold, which you can see in the dashboard. It sounds like you’ve already figured a lot of this stuff out though - it’s not rocket science, but it can be frustrating to work with bulk email delivery for a number of reasons.
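    If you’re scripting any of this, the quota and the reputation numbers behind the dashboard are easy to pull. A quick sketch with the AWS CLI (classic SES v1 commands; region and profile flags omitted):

    ```
    # Current sending limits: Max24HourSend, MaxSendRate (msgs/sec),
    # and SentLast24Hours.
    aws ses get-send-quota

    # Recent delivery attempts, bounces, complaints and rejects -
    # the raw numbers behind the reputation dashboard.
    aws ses get-send-statistics
    ```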


  • I don’t blame you re the third party - I wouldn’t either. I generally download a transaction file periodically and import it locally using the app. I think you’re going to find it difficult to find an API that will allow little people access, even though they are obviously happy to offer that to the big companies. Some of the brokerages have checking accounts, and it might be possible to pull the transaction data via the broker’s API (maybe), but whichever way you look at it, I suspect the most pragmatic solution is going to be a download/import of some kind.


  • What are you looking to do? I don’t know of any consumer bank APIs, but most equity and exchange brokerages will let you check account balances and make trades with an API key and credentials - probably not initiate payments or transfers, though; there are too many security risks in allowing that via a consumer-level API. There are also tools like Mint that store your credentials and can presumably access your data because they have corporate-level agreements with the financial institutions. I haven’t used that and wouldn’t normally recommend a corporate-based solution like that personally, but it might work for your needs.


  • Yes, you understand the suggested approach. I don’t know about the mariadb tool - if it looks good, by all means use it - but I would offer that the fastest, simplest way to restore a reasonably small database is with a sql dump (see the sketch below). Any additional complexity just seems like it’s adding potential failure points. You don’t want to be messing around with borg or any other tools to replay transactions when all you want to do is get your database rebuilt. Also, if you have an encrypted local copy of the dump, then restoring from borg is the last resort, because most of the time you’ll just need the latest backup. I would bring the data local and back it up there if feasible. Then you only need a remote connection to grab the encrypted file, and you’ll always have a recent local copy if your server goes kaput. Borg will back it up incrementally.
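    That restore path is basically a one-liner, which is the whole appeal. A sketch, assuming a gpg-encrypted dump like the one described in my other comment (database and file names are placeholders):

    ```
    # Decrypt the dump and stream it straight into the server -
    # no intermediate plaintext file on disk.
    gpg --decrypt mydb-20240101.sql.gpg | mysql mydb
    ```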


  • for the database, consider a script that does a “mysqldump” of the entire database, scheduled to run on the system daily/weekly. Also consider using gpg to encrypt the plain text file and delete the original in the same script, so you don’t leave a copy of the data unencrypted anywhere outside the database (roughly like the sketch below). You can then either copy the encrypted file to a local folder that you’re backing up, or if you’ve set this up to back up directly on the remote that’s fine too - bringing it local gives you a staged copy outside the archive and not on the original host, in case you need an immediately available backup of your database.
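    A minimal sketch of that dump-and-encrypt step - the database name, paths and gpg recipient are placeholders, and piping straight into gpg avoids even the temporary plaintext file (assumes credentials in ~/.my.cnf so the password isn’t on the command line):

    ```
    #!/usr/bin/env bash
    set -euo pipefail

    DUMP="/backup/staging/mydb-$(date +%Y%m%d).sql.gpg"

    # Dump straight into gpg so plaintext never touches disk.
    mysqldump --single-transaction mydb \
        | gpg --encrypt --recipient backup@example.org --output "$DUMP"

    # Stage a copy off the database host so a restore doesn't depend on borg.
    scp "$DUMP" backuphost:/backup/staging/
    ```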

    With respect to the 3 separate repos, I would say keep them separate unless you have a large amount of duplicated data - borg does not deduplicate across different repos as far as I’m aware. The downside of using a single repo is that the repo is locked during backups, and if you’re running different scripts from each host, the lock files borg creates can become stale if a script doesn’t complete. One day (probably the day you’re trying to restore) you’ll find that borg hasn’t been backing your stuff up because a lock file is holding the backup archive open due to a failed backup that terminated during an untimely reboot months ago (see the check below). I don’t recall now why this occurs and doesn’t self-correct, but I do remember concluding that if deduplication isn’t a major factor, it’s easier and safer to keep the borg repos separate by host. Deduplication is the only reason to combine them as far as I can tell.
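    If you do share a repo, it’s worth detecting that stale-lock condition up front rather than finding out at restore time. A rough sketch - the repo path is a placeholder, and borg break-lock is only safe when you’re certain nothing else is using the repo:

    ```
    #!/usr/bin/env bash
    REPO=backuphost:/backup/borg-repo   # placeholder repo path

    # A cheap read-only check; a stale lock makes this fail or hang.
    if ! borg list --last 1 "$REPO"; then
        echo "borg repo appears locked or unreachable" >&2
        # Only if you KNOW no other borg process is running:
        # borg break-lock "$REPO"
        exit 1
    fi
    ```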

    When it comes to backup scripts, try to keep everything foolproof and use checks where you can to make sure the script is seeing the expected data, completes successfully, and so on (a couple of examples below). Setting up automatic backups isn’t a trivial task, although maybe tools like rclone and borgmatic simplify it - I haven’t used those, just the borg command line and scp/gpg in shell scripts. Have fun!
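    To illustrate the kind of checks I mean - thresholds, paths and the mail alert are all placeholders to adjust for your own data:

    ```
    #!/usr/bin/env bash
    set -euo pipefail

    DUMP="/backup/staging/mydb-$(date +%Y%m%d).sql.gpg"
    MIN_BYTES=100000   # placeholder: a dump smaller than this is suspect

    # Check 1: the dump exists and isn't suspiciously small.
    if [ ! -s "$DUMP" ] || [ "$(stat -c%s "$DUMP")" -lt "$MIN_BYTES" ]; then
        echo "sanity check failed for $DUMP" | mail -s "BACKUP FAILED" admin@example.org
        exit 1
    fi

    # Check 2: borg itself reports success.
    if ! borg create backuphost:/backup/borg-repo::mydb-{now} "$DUMP"; then
        echo "borg create failed" | mail -s "BACKUP FAILED" admin@example.org
        exit 1
    fi
    ```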