

Huh? I’m streaming from my Jellyfin just fine when I’m on the go, with no tailscale or other VPN set up
Okay, different example. If a country dropped a couple of wounded soldiers without weapons over another country’s territory, would you call that an invasion?
If someone threw the dead body of a robber into a store, would you also call that store being robbed?
Why not simply say donation
It’s about setting expectations. The wording is chosen because they believe that paying open source developers for their work should be the norm, not the exception. Calling it a donation would not do that justice. Their wording is saying “Here’s the software, we’ll trust you to pay us for it if it brings you value and you can afford it”. It’s an explicit expectation to pay, unless you have good reasons not to, which is also fine but should be the exception. Whereas a donation is very much optional and not the default expectation by nature.
In the end it’s just a semantic difference, it’s just all about making expectations clear even if there is no enforcement around them.
I agree that this way of displaying the data is appropriate, but it would be nice to have a very visible indicator of this. Some kind of highlighted “fold” line or something at the very bottom of the chart, maybe. If I can deduce the units from context, and the trend is more interesting than absolute numbers, then I’m not going to look at the axes most of the time
That holds only if you assume that random chance decides whether someone votes or not. That is a big assumption to make. A lot of factors that affect your ability or willingness to vote also affect your political leaning, so I highly doubt that it’s a reasonable assumption.
And Romans can’t be homophobic for some reason, or what’s your point?
Maybe we don’t need to resort to casual homophobia to criticize corporations, though
Fortran is Proto-Indo-Germanic or whatever it’s called again
The algorithm is actually tailored to find out if/when you fall asleep while watching videos, and then recommends longer videos in autoplay when it believes you are, because they’ll get to play you more ads and cash out more.
You might be misremembering / misinterpreting a little there. This behavior is not intentional, it’s just a side effect of how the algorithm currently works. Showing you longer videos doesn’t equate to showing you more ads. On the contrary, if you get loads of short videos you’ll have way more opportunities to see pre-roll ads, but with longer videos, you’re limited to just the mid-roll spots in that video. So YouTube doesn’t really have an incentive to make it work like that, it’s just accidental.
Here’s the spiffing Brit video on this, which I think you might have gotten this idea from: https://youtu.be/8iOjeb5DTZI
Edit: to be clear, I fully agree that YouTube will do anything to shove ads down our throats no matter how effective they actually are. I’m just saying that this example you’ve brought is not really that.
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you’re gonna say, and then just output the next token necessary to continue that sentence. It’s going to re-do that process for every single token, which wastes a lot of energy, but for output quality it’s the best approach you can take, and it felt kinda obvious to me that these models must be doing something like this on one level or another.
I’d be interested to see if there’s massive potential for efficiency improvements by letting the model access and reuse the “thinking” it has already done for previous tokens
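To make the redundancy concrete, here’s a toy sketch (emphatically not how real transformers work internally; `plan_continuation` is a made-up stand-in for whatever internal “planning” the model does). It contrasts re-planning the whole continuation for every emitted token versus planning once and reusing the cached result:

```python
def plan_continuation(prompt_tokens):
    """Pretend 'model': deterministically plans the rest of a fixed sentence."""
    target = ["the", "cat", "sat", "on", "the", "mat"]
    return target[len(prompt_tokens):]  # full planned continuation from here on

def generate_naive(n_tokens):
    """Re-plan the entire continuation before emitting each single token."""
    out, work = [], 0
    for _ in range(n_tokens):
        plan = plan_continuation(out)   # full plan recomputed from scratch
        work += len(plan)               # tally 'planning' work done
        out.append(plan[0])             # but only the next token is emitted
    return out, work

def generate_cached(n_tokens):
    """Plan once, then reuse the cached plan for all subsequent tokens."""
    plan = plan_continuation([])        # planning work done a single time
    return plan[:n_tokens], len(plan)

print(generate_naive(6))   # emits 6 tokens, does 6+5+4+3+2+1 = 21 units of work
print(generate_cached(6))  # same 6 tokens, only 6 units of work
```

Real models do already reuse some per-token computation (attention key/value caching), so the honest open question is how much of the deeper “planning” could be cached on top of that.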