ULA is already a private company. I don’t think the US government has done any of their own work to get to space since the shuttle.
I believe that is correct.
In the book, they also took pains to point out the steps he took afterward to keep the same thing from happening to the other airlocks - actually balancing out their usage a bit more, instead of always using the same one.
How long did you play BoI for if getting burned out on Hades after 40hrs was fairly quick?
But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent.
To solve any problems? Because when I run a computer simulation from a random initial state, that’s technically the computer solving a problem it’s never seen before, and it is trillions of times faster than me. Does that mean the computer is trillions of times more intelligent than me?
> the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)
If we built a true super-genius AI but never let it leave a small container, is it not intelligent because WE never let it manipulate its environment? And regarding the tests in the Merriam Webster definition, I suspect it’s talking about “IQ tests”, which in practice are known to be at least partially not objective. Just as an example, it’s known that you can study for and improve your score on an IQ test. How does studying for a test increase your “ability to apply knowledge”? I can think of some potential pathways, but we’re basically back to it not being clearly defined.
In essence, what I’m trying to say is that even though we can write down some definition for “intelligence”, it’s still not a concept that even humans have a fantastic understanding of, even for other humans. When we try to think of types of non-human intelligence, our current models for intelligence fall apart even more. Not that I think current LLMs are actually “intelligent” by however you would define the term.
If you’re mixing things up in the kitchen, typically you try to be somewhat precise with ratios.
The difference in this case being that because the actual ratio of the blend is unknown, you don’t actually know how it would crystallize. Technically they could even change up the ratio week to week based on the price of high-fructose corn syrup so you wouldn’t even get consistency from it.
If this actually did lead to faster matrix multiplication, then essentially anything that can be done on a GPU would benefit. That definitely could include games, and physics models, along with a bunch of other applications (and yes, also AI stuff).
I’m sure the paper’s authors know all of that, but somewhere along the line the article just became “faster and better AI”.
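To make the point above concrete, here’s a minimal sketch of why matrix multiplication is so central: the naive algorithm below is O(n³), and any algorithm that cuts the number of scalar multiplications speeds up everything built on top of it (graphics transforms, physics solvers, and ML alike). This is just an illustrative toy, not the algorithm from the paper.

```python
def matmul(a, b):
    """Naive O(n^3) matrix multiply for small dense matrices."""
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must match"
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            # Each output entry is a dot product of a row and a column;
            # faster algorithms (Strassen and successors) reduce how many
            # of these scalar multiplications are needed overall.
            out[i][j] = sum(a[i][p] * b[p][j] for p in range(k))
    return out

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```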
The above post is referencing/quoting a line from the show “It’s Always Sunny in Philadelphia”, which is why people are upvoting it.
I agree with many of the other commenters that OP debating their husband might not be the best idea.
But if that’s what they want, “Decoding the Gurus” did at least one Rogan-specific episode, and I think they do a better job covering and dismantling Rogan’s rhetorical approach than the podcasts above.
Those stats are misleading though. Autopilot only runs on highways, which are much safer per mile even for human drivers.
Tesla are basically comparing their system, which only runs in pristine, ideal conditions, against an average human that has to deal with the real world.
As far as I’m aware they haven’t released safety per mile data from the FSD cars yet, and until they do I will remain skeptical about how much safer it currently is.
Yes, but notably you can design to reduce the risk of leaking hydrogen. If the areas around the tanks are designed to allow any leakage to vent before it reaches dangerous levels, you can reduce the risk. Yes, hydrogen is flammable, so tanks of it are dangerous. Jet fuel is also quite flammable, and we’ve used that for a long time.
This is all in contrast to the design of the Hindenburg, which was specifically trying to hold onto a bunch of hydrogen in the flammable regime.
I’m guessing that they are (falsely) equating it to the Hindenburg, when IMO it wouldn’t be much different safety-wise than current fossil-fuel-powered planes.
It’s not like they would be filling the wings and luggage compartment with free-floating hydrogen; it stays in its tank.
It’s optional for beating the game’s story, but required if you are trying for full (112%) completion.
After a few years the orbit will degrade enough that it’ll start to fall back to earth. At that point, the satellite will either burn up completely on re-entry, or partially and the rest will fall to earth.
Either way, each of these satellites will be completely gone from orbit after a few years.