slatestarcodex

Probably what *this* should be called.
User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 9:29 pm

Ashenai wrote: Thu Jul 01, 2021 9:28 pm
Rylinks wrote: Thu Jul 01, 2021 9:22 pm general intelligence is not defined with respect to any specific task! that's what makes it general
That doesn't answer my question, unless you're going for the ultimate cop-out answer of "I know it when I see it" or "it is impossible to tell if an entity is intelligent, regardless of its behavior or how many or what types of tests it passes"
your question proves too much. can you give me a one-liner simple test for what counts as 'alive'? does this mean that 'alive' is not a coherent concept

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 9:33 pm

Like I said, the definition that people are using in practice is "I'm not sure what intelligence is, but show me this AI of yours... aha! It can't do <names task achievable by a human>. See, it's just a dumb machine, no intelligence at all."

The result is literally what I said. Intelligence is defined as "what a human can do but a machine can't". This definition is never formally stated like that because it is so obviously flawed, but it is understood to be that. People want it to be that; they assume it to be that.

My answer, then, is that 0 progress will be made on AI until we suddenly have an AI that is capable of everything a human is. (Because anything less won't be accepted as any type of intelligence, due to the above heuristic.)

User avatar
Crunchums
Forum Elf
Posts: 16116
Joined: Aug 24, 2018

Re: slatestarcodex

Post by Crunchums » Thu Jul 01, 2021 9:38 pm

Ashenai wrote: Thu Jul 01, 2021 9:01 pm GPT proves that many problems we thought of as "higher thought" can be reduced to complicated pattern-matching. For example, did you know that GPT-3 can write code?
yes, i knew that
And you can of course try to explain how it's not really writing code, right, it's just "spitting out some output based on some input". But back before you knew that GPT-3 could do this, if I had asked you whether a simple GPT-type deep learning system could, you would have thought obviously not. I thought obviously not. I thought writing code based on a simple and somewhat vague description of the desired result would be evidence of "true AI".

Well, I guess it's not. But what's actually happening is that we've been forced backwards again in our God Of The Gaps-style strategy of defining real thinking as "whatever humans can do but machines can't".

It used to be that the test of truly intelligent AI was the Turing Test. Then machines passed the Turing Test, and we decided that oh, that was not good enough actually.

Writing code is not good enough anymore either.

What is good enough? What defines "true AI" or "real thought"? (It has to be something testable, no handwaving about "being aware of its own existence" please.)

The answer is we don't know, but the REAL answer is that we don't want to say, because the gaps are rapidly getting fewer and narrower. The God Of The Gaps that is "actual thought" or "self-consciousness" or "real understanding" or "true AI" is sort of... vanishing.
yeah i'm already on board with this part of things. the part that i get stuck on is i think "maximize paperclips" is a qualitatively different task than "write code that does X". alphazero plays chess against itself and learns from that. GPT is spitting out things that humans might write based on text created by humans. i don't see how those approaches will ever get you to paperclipping the whole universe. though i guess where this is pointing me is that the real key is getting to the takeoff point where an AI can modify itself
u gotta skate

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 9:38 pm

Rylinks wrote: Thu Jul 01, 2021 9:29 pm
Ashenai wrote: Thu Jul 01, 2021 9:28 pm
Rylinks wrote: Thu Jul 01, 2021 9:22 pm general intelligence is not defined with respect to any specific task! that's what makes it general
That doesn't answer my question, unless you're going for the ultimate cop-out answer of "I know it when I see it" or "it is impossible to tell if an entity is intelligent, regardless of its behavior or how many or what types of tests it passes"
your question proves too much. can you give me a one-liner simple test for what counts as 'alive'? does this mean that 'alive' is not a coherent concept
I didn't ask for a one-liner test! I asked for any test or series of tests.

Also, with the term "life", we've accepted the fact that it's not a clear-cut binary thing. Are viruses alive? Who knows, they're borderline. We're comfortable with that.

We have yet to take this step of acceptance with thinking. We persist in assuming some kind of magic that we call "awareness" but refuse to define, except that an entity either has it or doesn't, and machines don't (for reasons that are also unclear, except for the heuristic that "if we can understand its behavior, then it's not aware").

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 9:44 pm

Crunchums wrote: Thu Jul 01, 2021 9:38 pm yeah i'm already on board with this part of things. the part that i get stuck on is i think "maximize paperclips" is a qualitatively different task than "write code that does X".
The rules of the material world are different and more complicated than the rules of language or code. I'm not sure they are qualitatively different.

AIs have been capable of behaving in intelligent-seeming ways in the real world as well, I'm sure you've read the experiments where AIs learned to team up or lie to each other in order to get "food" etc. There's no sharp barrier between that and buying a paperclip factory. Deciding to buy a paperclip factory and then accomplishing that requires a large number of additional skills, but NONE of those skills are theoretically beyond the reach of an AI.

User avatar
Crunchums
Forum Elf
Posts: 16116
Joined: Aug 24, 2018

Re: slatestarcodex

Post by Crunchums » Thu Jul 01, 2021 9:48 pm

this conversation has been helpful. i think where i've ended up is, maybe the "discrete" distinction that i'm getting at isn't actually important? like yeah chess is simpler than paperclip maximization in some sense, but like you said it's all still inputs and outputs. just more complicated ones. and then the question of "well how would you give an AI enough inputs" is just an engineering problem - feed it tweets from a set of twitter accounts, feed it a webcam feed, whatever.
u gotta skate

User avatar
Crunchums
Forum Elf
Posts: 16116
Joined: Aug 24, 2018

Re: slatestarcodex

Post by Crunchums » Thu Jul 01, 2021 9:49 pm

WHO IS ONLINE
Users browsing this forum: Bing [Bot], Crunchums, Google [Bot] and 0 guests

they're watching 👀
u gotta skate

User avatar
Crunchums
Forum Elf
Posts: 16116
Joined: Aug 24, 2018

Re: slatestarcodex

Post by Crunchums » Thu Jul 01, 2021 9:51 pm

Ashenai wrote: AIs have been capable of behaving in intelligent-seeming ways in the real world as well, I'm sure you've read the experiments where AIs learned to team up or lie to each other in order to get "food" etc.
i have not; link?
u gotta skate

User avatar
Doug
Has anybody seen my parrot
Forum Elf
Posts: 20552
Joined: Aug 23, 2018

Re: slatestarcodex

Post by Doug » Thu Jul 01, 2021 9:56 pm

Ashenai wrote: Thu Jul 01, 2021 9:33 pm My answer, then, is that 0 progress will be made on AI until we suddenly have an AI that is capable of everything a human is. (Because anything less won't be accepted as any type of intelligence, due to the above heuristic.)
That reads strangely, as though an ordinary person recognizing progress is the only thing that counts as progress
It's your turn in Cthulhu Wars
It's your turn in Squirrel Wars
It's your turn in Demon Wars
It's your turn in Wall Street Wars

http://devilsbiscuit.com/

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 9:56 pm

Ashenai wrote: Thu Jul 01, 2021 9:38 pm
Rylinks wrote: Thu Jul 01, 2021 9:29 pm
Ashenai wrote: Thu Jul 01, 2021 9:28 pm

That doesn't answer my question, unless you're going for the ultimate cop-out answer of "I know it when I see it" or "it is impossible to tell if an entity is intelligent, regardless of its behavior or how many or what types of tests it passes"
your question proves too much. can you give me a one-liner simple test for what counts as 'alive'? does this mean that 'alive' is not a coherent concept
I didn't ask for a one-liner test! I asked for any test or series of tests.

Also, with the term "life", we've accepted the fact that it's not a clear-cut binary thing. Are viruses alive? Who knows, they're borderline. We're comfortable with that.

We have yet to take this step of acceptance with thinking. We persist in assuming some kind of magic that we call "awareness" but refuse to define, except that an entity either has it or doesn't, and machines don't (for reasons that are also unclear, except for the heuristic that "if we can understand its behavior, then it's not aware").
i'm not thinking of ambiguity around the edges of the category. There's no test for 'alive' easily explicable in a forum post that will cleanly differentiate clearly alive things such as birds from clearly unalive things such as a robot that makes copies of itself

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 9:59 pm

and like 'alive' there's no series of tests that will clearly differentiate vegetables from nonvegetables or rocks from nonrocks

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 9:59 pm

Crunchums wrote: Thu Jul 01, 2021 9:48 pm this conversation has been helpful. i think where i've ended up is, maybe the "discrete" distinction that i'm getting at isn't actually important? like yeah chess is simpler than paperclip maximization in some sense, but like you said it's all still inputs and outputs. just more complicated ones. and then the question of "well how would you give an AI enough inputs" is just an engineering problem - feed it tweets from a set of twitter accounts, feed it a webcam feed, whatever.
Yeah. A deep learning network is designed to learn a little bit like a human baby. A baby knows nothing at first, except that hunger hurts, cold hurts, heat hurts, being alone hurts, swallowing nutrients is good. For an AI, an analogy to these would be its utility function, e.g.: paperclips are good, a lack of productivity hurts, disobeying orders from humans hurts.

Next, both the baby and the AI get all kinds of information, and their brains rapidly organize this information in ways that help them become more efficient at reaching their utility function. Babies learn that crying is a good way of getting food when hunger hurts. Much later, when they've learned to walk, they'll develop better strategies in an emergent way, as they'll try different things and find out through experimentation that lying down on the ground is not useful when hunger hurts, but finding Mom and asking her for a cookie is.

This is a process of simple steps leading to qualitative improvements. A baby that discovered that smiling at its mommy gets hugs is not qualitatively different from a baby that hasn't yet discovered that, even though its behavior will change drastically.
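If it helps to make the analogy concrete, here is roughly what such a utility function looks like as code. To be clear, this is a toy sketch I'm making up on the spot; the names and every number in it are invented, not from any real system.

Code: Select all

# A made-up utility function in the spirit of "paperclips are good, a lack
# of productivity hurts, disobeying orders from humans hurts". Every number
# here is invented for illustration.
def utility(paperclips: int, idle_hours: float, disobeyed: bool) -> float:
    score = 1.0 * paperclips        # paperclips are good
    score -= 0.5 * idle_hours       # a lack of productivity hurts
    if disobeyed:                   # disobeying orders hurts a lot
        score -= 100.0
    return score

# Everything the agent later "wants" traces back to pushing this number up,
# the way everything the baby wants traces back to hunger/cold/comfort.
print(utility(paperclips=40, idle_hours=2.0, disobeyed=False))  # 39.0

The point is that nothing downstream is hand-coded; the crying-for-food and cookie-asking strategies emerge from chasing that one number.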

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 9:59 pm

Ashenai wrote: Thu Jul 01, 2021 9:59 pm A baby knows nothing at first
this is wrong

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:02 pm

Crunchums wrote: Thu Jul 01, 2021 9:51 pm
Ashenai wrote: AIs have been capable of behaving in intelligent-seeming ways in the real world as well, I'm sure you've read the experiments where AIs learned to team up or lie to each other in order to get "food" etc.
i have not; link?
I was thinking of this experiment. It was done 13 years ago and the results are hardly impressive today; it just shows that even behaviors we think of as highly purposeful (like lying) can be learned in emergent ways.

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:04 pm

Rylinks wrote: Thu Jul 01, 2021 9:59 pm
Ashenai wrote: Thu Jul 01, 2021 9:59 pm A baby knows nothing at first
this is wrong
Obviously, but the wrongness is not important for my point.

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 10:10 pm

Ashenai wrote: Thu Jul 01, 2021 10:04 pm
Rylinks wrote: Thu Jul 01, 2021 9:59 pm
Ashenai wrote: Thu Jul 01, 2021 9:59 pm A baby knows nothing at first
this is wrong
Obviously, but the wrongness is not important for my point.
what finding about baby or human neurology would convince you that they're not analogous

User avatar
Doug
Has anybody seen my parrot
Forum Elf
Posts: 20552
Joined: Aug 23, 2018

Re: slatestarcodex

Post by Doug » Thu Jul 01, 2021 10:11 pm

Rylinks wrote: Thu Jul 01, 2021 9:59 pm and like 'alive' there's no series of tests that will clearly differentiate vegetables from nonvegetables or rocks from nonrocks
Determining what doesn't count as life is only hard sometimes -- most of the time it's easy
It's your turn in Cthulhu Wars
It's your turn in Squirrel Wars
It's your turn in Demon Wars
It's your turn in Wall Street Wars

http://devilsbiscuit.com/

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:12 pm

Rylinks wrote: Thu Jul 01, 2021 10:10 pm
Ashenai wrote: Thu Jul 01, 2021 10:04 pm
Rylinks wrote: Thu Jul 01, 2021 9:59 pm this is wrong
Obviously, but the wrongness is not important for my point.
what finding about baby or human neurology would convince you that they're not analogous
That's an extremely vague question, "analogous" is not really a useful term. There are ways in which babies are different from deep learning neural nets, including some important ways. But there are also parallels.

User avatar
Crunchums
Forum Elf
Posts: 16116
Joined: Aug 24, 2018

Re: slatestarcodex

Post by Crunchums » Thu Jul 01, 2021 10:13 pm

Ashenai wrote:The rules of the material world are different and more complicated than the rules of language or code. I'm not sure they are qualitatively different.
coming back to this, the "more complicated" part is kind of a big deal. like chess, alphazero works, but starcraft it doesn't. and yeah we can keep pushing that boundary - it's not hard to imagine AI eventually hanging with or exceeding the best starcraft players. so now i'm thinking the big concern is the tipping point where it learns to improve by modifying its own programming somehow? which seems like a plausible thing to be worried about
u gotta skate

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 10:13 pm

Ashenai wrote: Thu Jul 01, 2021 10:12 pm
Rylinks wrote: Thu Jul 01, 2021 10:10 pm
Ashenai wrote: Thu Jul 01, 2021 10:04 pm

Obviously, but the wrongness is not important for my point.
what finding about baby or human neurology would convince you that they're not analogous
That's an extremely vague question, "analogous" is not really a useful term. There are ways in which babies are different from deep learning neural nets, including some important ways. But there are also parallels.
i'll rephrase: what finding about baby or human neurology would be important to your point

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:21 pm

Crunchums wrote: Thu Jul 01, 2021 10:13 pm
Ashenai wrote:The rules of the material world are different and more complicated than the rules of language or code. I'm not sure they are qualitatively different.
coming back to this, the "more complicated" part is kind of a big deal. like chess, alphazero works, but starcraft it doesn't. and yeah we can keep pushing that boundary - it's not hard to imagine AI eventually hanging with or exceeding the best starcraft players. so now i'm thinking the big concern is the tipping point where it learns to improve by modifying its own programming somehow? which seems like a plausible thing to be worried about
Modifying its own programming would be extremely dangerous, but not actually required! Just giving itself more hardware (processing power, RAM) might be enough to make it significantly smarter (again, the quantity tipping over into quality thing).

But the usual "singularity" theory as I understand it is that the AI will be able to program a different AI to help it in its task. And since we're talking about an AI that's smarter than people, it'll be able to design an AI smarter than itself (since it was made by people, right?). And then that smarter AI will be able to make an even smarter one, and so on. Hence, a rapid "AI explosion".
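The arithmetic of that loop is worth seeing, even as a cartoon. Assume (totally made-up number) that each generation designs a successor 10% smarter than itself:

Code: Select all

# Toy model of the "AI explosion" loop. The 1.1 improvement factor is an
# arbitrary assumption; the point is only that the growth compounds.
intelligence = 1.0      # 1.0 = roughly the level of its human designers
generations = 0
while intelligence < 1000.0:
    intelligence *= 1.1     # each AI designs a slightly smarter successor
    generations += 1
print(generations, intelligence)  # 73 generations to pass 1000x

Even a modest per-generation gain runs away quickly, which is the whole worry.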

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:24 pm

Rylinks wrote: Thu Jul 01, 2021 10:13 pm
Ashenai wrote: Thu Jul 01, 2021 10:12 pm
Rylinks wrote: Thu Jul 01, 2021 10:10 pm

what finding about baby or human neurology would convince you that they're not analogous
That's an extremely vague question, "analogous" is not really a useful term. There are ways in which babies are different from deep learning neural nets, including some important ways. But there are also parallels.
i'll rephrase: what finding about baby or human neurology would be important to your point
The core of my point is that both AIs and babies develop new skills in an emergent way, by experimenting, finding what actions are helpful (+utility) in what types of situations, and building their own model of useful behaviors based on that. A large part of how they develop is through this highly adaptive process.

I am aware that this is not the only way humans gain function, and that human brains are predisposed to learn language, for example.
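For concreteness, the mechanism I'm describing is basically reinforcement learning. Here's a minimal sketch; the two situations, two actions, and all the reward numbers are invented for the example, and real systems are vastly bigger, but the loop is the same shape:

Code: Select all

import random
from collections import defaultdict

# Toy world: two situations, two actions, made-up rewards.
STATES = ["hungry", "fed"]
ACTIONS = ["cry", "smile"]

def step(state, action):
    # crying while hungry gets food; smiling while fed gets hugs
    if state == "hungry" and action == "cry":
        return "fed", 1.0
    if state == "fed" and action == "smile":
        return "fed", 0.5
    return "hungry", -0.1

# Q[(situation, action)] = learned estimate of how helpful that action is
# in that situation. This table IS the "model of useful behaviors".
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = "hungry"
for _ in range(5000):
    if random.random() < eps:                    # experiment sometimes
        action = random.choice(ACTIONS)
    else:                                        # otherwise use best guess
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

Nobody tells the agent "cry when hungry"; that rule emerges in the table through experimentation, which is the parallel I'm drawing to the baby.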

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:29 pm

I am also aware of mirror neurons. Stuff like that does not invalidate the analogy, it just tells us that humans are not blank slates, they are predisposed to develop in certain ways. This is why e.g. normal people have empathy, which would otherwise be a tricky concept to learn purely through reinforcement learning.

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 10:40 pm

Ashenai wrote: Thu Jul 01, 2021 10:24 pm
Rylinks wrote: Thu Jul 01, 2021 10:13 pm
Ashenai wrote: Thu Jul 01, 2021 10:12 pm

That's an extremely vague question, "analogous" is not really a useful term. There are ways in which babies are different from deep learning neural nets, including some important ways. But there are also parallels.
i'll rephrase: what finding about baby or human neurology would be important to your point
The core of my point is that both AIs and babies develop new skills in an emergent way, by experimenting, finding what actions are helpful (+utility) in what types of situations, and building their own model of useful behaviors based on that. A large part of how they develop is through this highly adaptive process.

I am aware that this is not the only way humans gain function, and that human brains are predisposed to learn language, for example.
i don't understand where this reasoning starts or where it's supposed to go. No AI neural network is similar to a human brain on a local scale--human neurons are complicated chemical systems which perform multiple functions. They are not fully understood, but they are definitely not modeled by neural networks where each neuron is a set of mathematical functions of the inputs. This isn't a criticism of AI work--modeling the human brain is not something they are trying to do--but there is no analogy on the small scale that guarantees biological neurons don't combine to create some qualitative difference between humans and AI
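to be concrete about 'a set of mathematical functions of the inputs', each artificial neuron is roughly this (toy weights, nothing from a real model):

Code: Select all

import math

# an artificial "neuron": a weighted sum of its inputs pushed through a
# fixed squashing function. the weights here are arbitrary toy numbers.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

print(neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.3], bias=0.2))  # ~0.32

a biological neuron is doing far more than that, which is the contrast i mean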

so then there's the external observed behaviors, where babies and AIs both take external input, perform various actions, and learn behaviors based on that. and as you said this is similar in some respects and different in some respects. But unless you have complete knowledge of the differences, 'similar in some ways and different in some ways' does not support a conclusion that humans and AIs don't differ in ways called 'awareness' or what have you

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 10:52 pm

Rylinks wrote: Thu Jul 01, 2021 10:40 pm But unless you have complete knowledge of the differences, 'similar in some ways and different in some ways' does not support a conclusion that humans and AIs don't differ in ways called 'awareness' or what have you
Sure, but that's shifting the burden of proof. I asked what this "awareness" was and how to test it and got nothing usable. "Awareness" is special pleading, in that the proof for humans having it is "well I know I have it, I assume you do too", and the proof for machines not having it is "they're not people".

"Awareness" is an extraneous variable in that it provides zero predictive power.

Humans and AIs certainly differ in a lot of ways! They think in different ways. I'm not disputing that. I am saying that our "awareness", if it exists, does not seem to provide us with any obvious benefit; there is no measurable mental feat we can accomplish that we can confidently say an AI will not be able to surpass.

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 11:08 pm

Ashenai wrote: Thu Jul 01, 2021 10:52 pm
Rylinks wrote: Thu Jul 01, 2021 10:40 pm But unless you have complete knowledge of the differences, 'similar in some ways and different in some ways' does not support a conclusion that humans and AIs don't differ in ways called 'awareness' or what have you
Sure, but that's shifting the burden of proof. I asked what this "awareness" was and how to test it and got nothing usable. "Awareness" is special pleading, in that the proof for humans having it is "well I know I have it, I assume you do too", and the proof for machines not having it is "they're not people".

"Awareness" is an extraneous variable in that it provides zero predictive power.

Humans and AIs certainly differ in a lot of ways! They think in different ways. I'm not disputing that. I am saying that our "awareness", if it exists, does not seem to provide us with any obvious benefit; there is no measurable mental feat we can accomplish that we can confidently say an AI will not be able to surpass.
i'm not too attached to awareness as a term but here is a parable

Plato: a man is a featherless biped
Diogenes [holding a plucked chicken]: behold plato's man!
Plato: okay fine a man is a featherless biped with broad nails
Diogenes [returning from a biology lab where he has grafted toenails onto a chicken]: behold plato's man!
Plato: a man has a >99% dna match with this genome sequence
Diogenes [growing liver cells from a stem cell line]: behold plato's man!
Plato: i can't define what man is but i know one when i see one
Diogenes: this is just a definition of the gaps, clearly these interactions have shown the term 'man' has no predictive power, there is no trait of this "man" that my biology lab cannot replicate

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Thu Jul 01, 2021 11:18 pm

But I'm not the one trying to define awareness! I don't care about awareness! I don't think it's important.

I do believe "awareness" does in fact map to something in the human brain (and/or some animal brains), but it's not an important thing.

My argument is not that machines can have awareness. My argument is that it doesn't matter whether or not they do. Machines that, to use Crunchums' phrasing, only "spit out some output based on some input" have shown themselves capable of increasingly impressive mental feats. I am saying that in the future, there will be no (clearly defined and measurable) mental feat that only humans will be capable of, nothing that will be off-limits to machines. That's the thing I really care about.

User avatar
Rylinks
her skirt got quite a lot smaller,
but her heart is still the same
size it was before
Forum Elf
Posts: 12359
Joined: Jun 13, 2018

Re: slatestarcodex

Post by Rylinks » Thu Jul 01, 2021 11:50 pm

Ashenai wrote: Thu Jul 01, 2021 11:18 pm But I'm not the one trying to define awareness! I don't care about awareness! I don't think it's important.

I do believe "awareness" does in fact map to something in the human brain (and/or some animal brains), but it's not an important thing.

My argument is not that machines can have awareness. My argument is that it doesn't matter whether or not they do. Machines that, to use Crunchums' phrasing, only "spit out some output based on some input" have shown themselves capable of increasingly impressive mental feats. I am saying that in the future, there will be no (clearly defined and measurable) mental feat that only humans will be capable of, nothing that will be off-limits to machines. That's the thing I really care about.
we agree that there are large differences between humans and AIs. How do you know that none of these differences will prevent AIs from performing some mental task? Are you just relying on the fact that machines have become increasingly impressive and concluding this trend must continue until all mental tasks can be done?

User avatar
Ashenai
Forum Elf
Posts: 10522
Joined: May 29, 2019

Re: slatestarcodex

Post by Ashenai » Fri Jul 02, 2021 12:20 am

I think the thing that convinced me the most is... the lack of a counterargument, actually.

I talked about the Turing Test, how it used to be the gold standard for whether a machine could "really think". The machines passed it, so whoops, the Turing Test was scrapped, and replaced by...

Nothing. It was not replaced by anything. To the best of my knowledge, we no longer have a generally accepted test for human-like intelligence. And the most obvious reason for this is that we can't come up with anything we're confident can't be defeated, and quite quickly at that.

Note that as long as we don't have such a test, we can kind of pretend we do! We can say "machines can't write good sonnets", and that is currently correct, but of course the reason is that there's no great incentive to make a sonnet-writing AI, so no one has done it yet. If we ever make a Sonnet Test, with an associated prize and prestige like the Turing Test had, we know perfectly well that we'll see the Sonnet Test defeated within a year. But since it's pretty clear that would happen, there's little point in it, so no Sonnet Test.

To me, having no test at all says that we have resigned. We don't quite acknowledge that machines are going to surpass us in thinking (we're not quite there yet), but it does mean that our defensive positions have been overrun and all we have left is retreating into untestable, non-scientific realms. "Machines will never be able to feel emotion", "Machines will never have qualia", "Machines can never really think, it's all just 1s and 0s, while our brains are a delicate mystery that can never be matched".

I wonder if, before cameras were invented, people thought that the human eye was an incredible and unique device? Instead of what it is: a shitty camera made out of gelatin, which is a terrible material for a camera, but it was all that evolution had to work with.

That's what our brains are, in the grand scheme of things. Shitty computers, made out of gelatin. Not the material you would want to make a computer out of, but, again, evolution didn't have better options.

Rylinks wrote: Thu Jul 01, 2021 11:50 pm Are you just relying on the fact that machines have become increasingly impressive and concluding this trend must continue until all mental tasks can be done?
We saw the God of the Gaps get weaker and weaker as science progressed. Once we understood how lightning works, he became unable to throw lightning bolts, and once we understood earthquakes, he lost that power as well.

I am seeing this exact pattern of regression in what we consider proof of "human thinking". Chess used to be one of those markers; I don't know if you knew that. I read old newspaper articles about how machines would never catch up to humans in chess, because that game requires true human insight.

Well, we lost chess. And then we lost the Turing Test. Then Go. We're beginning to lose writing and coding. I suppose it's possible that our Brain Magic of the Gaps has yet to show its true power, and it'll turn out that computers will never understand sonnets. A lot of things are possible. But that is not what I'm seeing. What I'm seeing is a shitty computer made out of gelatin, slowly being overtaken by better computers, and still trying to argue that gelatin is magic.

User avatar
Shiny Days
...but history refused to change
Posts: 2000
Joined: Feb 08, 2019

Re: slatestarcodex

Post by Shiny Days » Fri Jul 02, 2021 12:53 am

Ashenai wrote:Like I said, the definition that people are using in practice is "I'm not sure what intelligence is, but show me this AI of yours... aha! It can't do <names task achievable by a human>. See, it's just a dumb machine, no intelligence at all."

The result is literally what I said. Intelligence is defined as "what a human can do but a machine can't". This definition is never formally stated like that because it is so obviously flawed, but it is understood to be that. People want it to be that; they assume it to be that.

My answer, then, is that 0 progress will be made on AI until we suddenly have an AI that is capable of everything a human is. (Because anything less won't be accepted as any type of intelligence, due to the above heuristic.)
i will formally state it: being a human is doing what a machine can't, like enjoying a delicious and refreshing chilled coca cola beverage on a hot summer day
