No, we haven’t. We have the appearance of AI. Large language models and diffusion models are just machine learning: statistical engines built out of algorithms.
Nothing thinks, creates, cares, or knows the difference between correct and wrong.
I know enough about how LLMs work to gauge how intelligent they are. The reason I have a different opinion than you is not because you or I lack understanding of how LLMs or diffusion models work; it’s simply that my definition of AI is more “lenient” than yours.
EDIT: Arguing about which definition is more correct is pointless because it’s totally subjective. However, I think that a more lenient definition of AI is more useful in this case, because with a stricter definition we will probably never have something that could be considered AI.
It’s not completely subjective. Think about it from an information theory perspective. We want a word that maximizes the amount of information conveyed, and there are many situations where you need a word that distinguishes AGI, LLMs, deep learning, reinforcement learning, pathfinding, decision trees and the like from the outputs of other computer science subfields. “AI” has historically been that word, so redefining it without a replacement means we don’t have a word for this thing we want to talk about anymore.
I refuse to replace a single commonly used word in my vocabulary with a full sentence. If anyone wants to see this changed, then offer an alternative.
…then we will never have something considered AI. Making the definition more lenient doesn’t magically make something that isn’t AI into something that is.
Or we just use the definition that many people have used for ages and call the code controlling Minecraft creepers AI. It’s only recently that everyone has been getting upset about “AI” being used too much.
There is no intelligence. There are only algorithms. Where we are now is nowhere near artificial intelligence; it is only buzzwords. If you know how this works, this should be clear. I think I was being very objective: we have statistical engines and diffusion formulas. No intelligence, of any kind, is being demonstrated. AI is a marketing term at this point. No original ideas, no real knowledge of past or future events, no ability to distinguish correct answers from false ones. Even the better models that essentially watch the other models are still not that great beyond the basics of “what is the next most likely word here”.
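For what it’s worth, the “next most likely word” mechanism being described can be sketched at toy scale. This is a hypothetical bigram counter, not how a real LLM works internally (LLMs use learned neural networks, not raw counts), but it shows the “statistical engine” idea:

```python
from collections import defaultdict

# Toy "statistical engine": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_most_likely(word):
    """Return the word most frequently observed after `word`, or None."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(next_most_likely("the"))  # "cat" follows "the" twice; "mat"/"fish" once each
```

Whether producing a plausible continuation this way counts as any part of “intelligence” is exactly what the thread is arguing about.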
Like I said, that’s where we disagree. I call the code controlling Creepers in Minecraft AI
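As a concrete point of comparison, the kind of game “AI” being invoked here is typically just a distance check and a tiny state machine. A hypothetical sketch (invented names and numbers, not actual Minecraft code):

```python
import math

# Hypothetical creeper-style mob: idle until the player is close,
# then walk toward the player, then "explode" at point-blank range.
SIGHT_RANGE = 16.0
FUSE_RANGE = 2.0

def creeper_step(mob_pos, player_pos):
    """Return the mob's next action as (state, new_position)."""
    dx = player_pos[0] - mob_pos[0]
    dy = player_pos[1] - mob_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= FUSE_RANGE:
        return "explode", mob_pos
    if dist <= SIGHT_RANGE:
        # Move one unit along the straight line toward the player.
        step = (mob_pos[0] + dx / dist, mob_pos[1] + dy / dist)
        return "chase", step
    return "idle", mob_pos
```

Under the older, broader usage of the term, a hand-written rule like this has always been called game AI, which is the point being made above.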
While I think that is a different meaning than the current fad/bubble-driven meaning that marketing groups would have people believe AI is, it’s interesting how fast people have forgotten the old uses of the term.
Personally I try to avoid using it to describe those now, since AI in popular parlance has been extended to at least imply a lot more lately.
How do you define “intelligence,” precisely?
Is my dog intelligent? What about a horse or dolphin? Macaws or chimpanzees?
Human brains do a number of different things behind the scenes, and some of those things look an awful lot like AI. Do you consider each of them to be intelligence, or is part of intelligence not enough to call it intelligence?
If you don’t consider it sufficient to say that part of intelligence is itself “intelligence,” then can you at least understand that some people do apply metonymy when saying the word “intelligence?”
If I convinced you to consider it or if you already did, then can you clarify:
The thing with machine learning is that it is inexplicable, much like parts of the human brain are inexplicable. Algorithms can be explained and understood, but machine learning, and its efficacy as problem spaces get larger and it is fed more and more data, isn’t truly understood even by people who work deeply with it. These capabilities let such systems solve problems that are otherwise very difficult to solve algorithmically, similar to how we solve problems. Unless you think you have a deeper understanding than those researchers do, how can you, as you claim, understand machine learning and its capabilities well enough to say that it is not at least similar to a part of intelligence?
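The contrast being drawn, solving a problem by learning from examples rather than by writing the rule down, can be illustrated with a toy sketch. Everything here is invented for illustration (the data, the learning rate, the epoch count); it is just the standard logistic-regression gradient update:

```python
import math

# Instead of hand-coding the rule "positive iff x > 5", fit a 1-D logistic
# model from labeled examples and let the learned parameters absorb the rule.
data = [(x, 1 if x > 5 else 0) for x in range(11)]

w, b = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    for x, y in data:
        z = max(-30.0, min(30.0, w * x + b))   # clamp to avoid exp overflow
        p = 1.0 / (1.0 + math.exp(-z))         # predicted P(label = 1)
        w -= lr * (p - y) * x                  # gradient step on
        b -= lr * (p - y)                      # the cross-entropy loss

def predict(x):
    """Classify x with the learned parameters, not a hand-written threshold."""
    return 1 if w * x + b > 0 else 0

print(predict(2), predict(8))  # the model recovers the x > 5 rule from data
```

The learned weights, not any explicit threshold in the source, encode the decision boundary; scaled up by many orders of magnitude, that opacity is what the comment above is pointing at.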
Machine learning is a subset of artificial intelligence, along with things like machine perception, reasoning, and planning. Like I said in a different thread, AI is a really, really broad term. It doesn’t need to actually be Jarvis to be AI. You’re thinking of general AI.
Your definition of intelligence sounds an awful lot like a human; stop being entityist.