

Whichever version it is, I hope that one day I can delete a mail, change my mind, press ctrl-z and it will actually undo the last delete and not some random one from earlier in the day.
8610 Writing the digits in descending order is the best way to write any number.
I like that I can interface with it in ways that I already understand (e.g. rclone, rsync, sshfs).
Being able to run some commands on the server meant that I could use rclone to copy my AWS and OneDrive backups directly cloud-to-cloud.
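In case it helps anyone, once both remotes are set up with rclone config, the cloud-to-cloud copy is a single command; something like this (remote names made up for illustration):

rclone copy aws-s3:backups onedrive:backups --progress

As far as I know, the data still streams through whichever machine runs rclone, so doing it on the server means nothing has to go via your own machine or local disk.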
Based in reality, I think.
And if anyone is wondering, riz is charisma.
All the new slang is just abbreviation, e.g. based, riz.
Are you sure? With my knees, you’ll need to help me up again.
Segmentation fault?
Before even getting to documentation, I see so many projects that don’t have a short summary of what they do (and maybe what not to expect them to do).
As an example, Home Assistant. I can tell that it involves home automation, so can I replace Google Home with it? It seems like it doesn’t do voice recognition without add-ons and it can work with Google Assistant. Do I still need accounts with the providers of smart appliances, or can it control my bulbs directly?
None of that is very clear from the website.
I’ve seen plenty of other projects where it’s assumed there’s no need to explain its overall purpose.
Due to a typo, I ended up with “The Cod War”
https://www.icelandreview.com/travel/the-cod-wars-in-iceland/
That was covered pretty well already!
Or maybe it’s using Fluidic logic.
Well that’s of the same order of magnitude as the quoted figure. I was suggesting that it sounded vastly larger than it should be.
It’s true, I don’t know how large the models are that are being accessed in data centers. Although if the article’s estimate is correct, it’s sad that such excessively-demanding models are always being used for use-cases that could often be handled with much lower power usage.
140 Wh seems off.
It’s possible to run an LLM on a moderately-powered gaming PC (even a Steam Deck).
Those consume power in the range of a few hundred watts, and they can generate a reply in seconds, or maybe a minute or so. Power use throttles down when they’re not actually working.
That means a home PC could generate dozens of email-sized texts an hour using a few hundred watt-hours.
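Rough numbers (my guesses, not the article’s): at ~300 W and ~1 minute per reply, that’s 300 W × 1/60 h ≈ 5 Wh per reply, so an hour of back-to-back generation is about 60 replies for roughly 300 Wh total, nowhere near 140 Wh each.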
I think that the article is missing some factor, such as how many parallel users the racks they’re discussing can support.
You’re thinking of Apple headsets. These are budget things, maybe $300.
Woot made a success of this; it became their most coveted product.
He decided that it was unethical to have an AI/LLM impersonate a real person, but set up the “wizard” as an AI assistant for his fake crypto site helpline.
A government site I visited recently made a point of how it accepts emojis in passwords!
True, poor choice of phrase.
But I was thinking of something like
#define my_macro does not fit \
on one line
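For anyone who hasn’t run into it: the trailing backslash splices the next line onto the macro, so a definition can span as many lines as you like. A generic illustration (not from anything above):

#define MAX(a, b) \
    ((a) > (b) ? (a) : (b))

The preprocessor treats that as one logical line.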