• 7 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • I find that they’re great for headings, titles, dates, etc. - a little emphasis in my notes. That said, my Pilot Metropolitan’s stub nib has always been really scratchy and hard-starts a lot. It’s been one of my most disappointing pens.

    One of my favorite stub setups is a Jinhao 80 (a Lamy 2000 clone, usually sub-$10). I swapped the stock Jinhao nib for a Lamy 1.1 stub, and it writes like a dream!


  • Nice - I had misread this as Diamine Earl Grey at first, and was very confused (“I’ve used this before and don’t remember any orange tones!”). But it does look beautiful!

    Saddle Brown also looks very nice and versatile. Do you think that you need a medium nib to get the full spectrum of shading? I’ve sometimes been disappointed with browns that are too light with an F nib (bought a sample of Robert Oster Caffe Crema, but it really was too light for my daily use unless in an M, B, or stub).














  • This sounds a lot like me! Though I’m closer to 8 or 9 inked at a time. Lots of notes in meetings; I have a few stub nibs inked up that I use for headers, then rotate (less systematically) through the other pens throughout the day. The changing colors for each meeting help to provide a good visual separation between meetings in my notes. Plus, it’s a nice little change of pace to “reset” between meetings by choosing a new pen.






  • I agree that these changes have all been incredibly stupid and devalue one of the few remaining producers of quality TV (HBO), but I think that this is missing the point. The key is this:

    Notably, the loss in subscribers didn’t seem to affect streaming revenue. It grew to $2.73 billion this quarter, marking a 13 percent increase.

    In other words, fill the service with cheap, easy-to-produce reality crap and hike prices over time. Revenue goes up and costs go way down. People drift away, but you keep growing the bottom line, at least for now. The shareholders rejoice and the consumers lose.





  • Yeah - though I had thought that one should still be higher than the other, even if the numbers are small. In the actual equation, this would be multiplied by a scaling factor of 10000 (see the code discussion in the other comments). Even then, in this case, the rank would still be very close to zero.

    What I had missed is that, in the actual code, the equation is wrapped in floor() and returns an integer. So both are treated as rank = 0 and presumably sorted arbitrarily.

    The question is why rank-0 posts are showing up at all. If you do the math (see my other comment), it should take quite a while for any post with an appreciable score to decay to a rank of zero. Yet we see these sorts of old posts appearing relatively high in the hot feed.

    One possible answer was suggested in another comment – it may have to do with how often scores are recalculated for older posts. If some have not decayed to zero by the time the score recalculation stops, they might persist with a non-zero score until the instance is restarted. I’m still not sure that’s the right answer, though, because I’d guess that instances like lemmy.world (which I’m using) have been restarted recently, given the various hacking attempts.


  • Can someone who knows PL/pgSQL help parse this line:

    return floor(10000*log(greatest(1,score+3)) / power(((EXTRACT(EPOCH FROM (timezone('utc',now()) - published))/3600) + 2), 1.8))::integer;
    

    It seems to me that the issue might be that the function returns an integer. If the scaling factor isn’t large enough, floor() will return zero for tons of posts (any post where the expression inside floor() evaluates to less than one), and all of those posts end up with equivalent ranks. This could explain why we start seeing randomly sorted old posts after a certain score threshold. Maybe it would be better not to round here, or to dramatically increase the scaling factor?

    I’m not sure what the units of the post age would be in here, though. Probably hours based on the division by 3600? And is log() the natural log or base 10 by default?
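    For what it’s worth, Postgres’s log() is base 10 (ln() is the natural log), and the division by 3600 converts the age from seconds to hours. Here’s the formula sketched in Python so it’s easier to play with (the function name is just mine, not from the Lemmy code):

```python
import math

def hot_rank(score: int, age_hours: float) -> int:
    """Sketch of the SQL rank formula; Postgres log() is base 10."""
    raw = 10000 * math.log10(max(1, score + 3)) / (age_hours + 2) ** 1.8
    # floor() plus the ::integer cast collapses anything below 1 to rank 0
    return math.floor(raw)

print(hot_rank(25, 0))    # a fresh +25 post has a healthy rank
print(hot_rank(25, 203))  # the same post ~8.5 days later: rank 0
```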

    In any case, something still must be going wrong. If I’m doing the math correctly, a post with a score of +25 should take approximately 203 hours (assuming log base 10) before it reaches a raw rank score of < 1 and gets floored to zero, joining all of the really old posts. So we should be seeing all posts from the last 8.5 days that had +25 scores before we see any of these really old posts… But that isn’t what’s happening.
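    To double-check that figure: setting the raw rank equal to 1 and solving for age (again assuming base-10 log) gives age = (10000 · log10(score + 3))^(1/1.8) − 2 hours. A quick sketch (helper name is my own):

```python
import math

def hours_until_rank_zero(score: int) -> float:
    # Solve 10000*log10(score+3) / (h+2)**1.8 == 1 for h
    return (10000 * math.log10(score + 3)) ** (1 / 1.8) - 2

print(hours_until_rank_zero(25))  # ≈ 203 hours, i.e. about 8.5 days
```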