Eh, that’s pretty metal.
It’s definitely pretty, and as thermite is a mixture of metal powder and metal oxide, your statement is entirely correct.
Ah, they were making a nice (if lame) pun: Anova brand == another brand.
They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.
It models only use of language
This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.
If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again, the developers get stopped in their tracks because in order to understand a sentence, you need to understand the universe - or at the least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know what a tree is in order to interpret this correctly.
You can’t really use language *unless* you have a model of the universe.
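The "stolen painting" example can be made concrete: the passive "was found by X" is structurally ambiguous, and grammar alone can't resolve it; you need a fact about X. A toy sketch of that idea (hypothetical rule, not a real NLP pipeline):

```python
# Hypothetical sketch: "The painting was found by X" has two readings,
# and only world knowledge about X picks between them.
def plausible_reading(x_is_animate: bool) -> str:
    """Naive world-knowledge rule: animate things can find; trees cannot."""
    if x_is_animate:
        return "agent (X did the finding)"      # "found by a detective"
    return "location (found near X)"            # "found by a tree"

print(plausible_reading(x_is_animate=False))  # the tree case
print(plausible_reading(x_is_animate=True))   # the detective case
```

The point is that the disambiguating rule lives outside the sentence entirely, which is exactly where earlier procedural parsers got stuck.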
Heroic works really well. I’ve just installed it myself recently, motivated mostly by a desire to finally play the free games I got off Epic. I’ve only installed two EGS games so far - Civ 6 and Guardians of the Galaxy - but they’re working perfectly, running via proton.
The experience is so good I was actually inspired to buy my first game outside of steam in years, namely Wartales which I just bought yesterday on GOG. Installation is a breeze, it runs under proton, and as far as I can tell it is running perfectly.
I sort of prefer Heroic to Steam in fact, because it starts almost immediately: no waiting around for 30 seconds while it tries to connect to the Steam network and so on.
they, in fact, will have some understanding
These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.
What they do not have (and IMO won’t ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It’s very illuminating on the subject of what consciousness is, by providing a new example of what it is not.
They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.
They had a veto and they also had the Tories
I think it’s possible that internal language did exist before it could be vocalised. That is, before we evolved the necessary structures in the throat to make words, we were thinking according to basic grammatical rules e.g subject-verb-object. Words in human language are like labels for internal concepts, and those internal concepts would have existed before language was a thing.
What do you think evolved first - verbal communication or thoughts? Presumably we were able to think before we could speak, no? The words we have in our language are like pointers to internal concepts, and it seems to me that those internal concepts would have existed before language was a thing. The mouth-sounds as you put it are not the thoughts themselves, rather just labels for specific concepts. It might be possible and even convenient to think in mouth-sounds but it’s not necessary for logical thought.
Privacy on that site was horrible, and I stopped de-selecting vendors who want permission to track me after two minutes.
Just open the page in a private window at that point, and click the “yeah sure track everything bro” button.
Mastodon where it’s focused on a person’s single post
This is a good observation. It means that kind of social media (Twitter, Facebook, LinkedIn) is much more egotistical and self-aggrandizing, which in turn explains why people like Musk and Trump are so enamoured with the format.
How would I even know if this is correct?
You’re gonna have to go to a lot of parties
I cannot wait until architecture-agnostic ML libraries are dominant and I can kiss CUDA goodbye for good
I really hope this happens. After being on Nvidia for over a decade (960 for 5 years and similar midrange cards before that), I finally went AMD at the end of last year. Then of course AI burst onto the scene this year, and I’ve not yet managed to get stable diffusion running to the point it’s made me wonder if I might have made a bad choice.
Same. I had an Nvidia 960 for about 5 years on Arch with very few problems. Maybe twice over that time I had to roll back to an older version temporarily due to some incompatibility with Wine or the like.
Towards the end of last year I finally decided to upgrade (mostly to play RDR2) and I went with AMD. I love the feel of using a pure open source gfx stack, but there is no real functional advantage to it.
I taught myself to touch-type when I was a schoolkid using something similar to Mavis Beacon. All the while, I had a voice in my head saying, "This is pointless, everyone will be talking to their computers like in Star Trek in a couple of years". Well, that was the 90s, and it turned out to be one of the most useful skills I ever taught myself. But surely the age of the keyboard must be coming to an end now?