09 January 2010 @ 08:11 pm
Knutsford Scibar  
I gave a talk at a Scibar on Monday evening. For those of you who don't know, a scibar is a meeting in a pub. The assembled throng invite a tame scientist to give a presentation for half an hour and then grill them for the next hour. This whole prospect is kind of scary, should you happen to be a scientist. B. got roped in to do one by the Knutsford crowd and, in a surprising failure of spousal responsibility, blithely said "Oh yes! My wife does Artificial Intelligence and Satellites, she'll come and talk to you."

I then, of course, made things much worse by saying, in response to the email I got sent, "Well, I could talk about artificial intelligence based programming languages for satellites, but everyone* always seems to want to know whether a machine could think". Suddenly I found I'd agreed to give a talk, to a pub full of members of the public accustomed to putting scientists on the spot, on the subject of "Could machines think" when I consider myself, at best, an educated layman on that particular topic.

Various members of my flist were witness to the ensuing panic.

Knutsford Scibar were really nice.

I had calmed down a little when I read their website and discovered that I was billed as an expert in Automated Reasoning and was giving a talk titled "Reasoning Machines". This let me split my talk in two and spend fifteen minutes on Automated Reasoning** before biting the bullet and discussing the Turing Test, the Chinese Room argument and my own beliefs. I've no doubt a machine will, one day, pass almost any variant on the Turing Test you might choose, but the Chinese Room*** argument, and some other attacks on the test, have convinced me that we have a very poor grasp of what we mean by words like intelligence, thought and consciousness. I'm not convinced that the Turing Test actually is a good test of these things, but I've no idea what would be a good test.

I wasn't entirely surprised that we spent the next hour and a half discussing the Turing Test, the difference between behaving like you think or feel something and actually thinking or feeling something, and whether human thought was necessarily analogue, quantum, or non-digital in some other fashion. I didn't feel I got any questions I simply couldn't answer****, though there were a few where I wasn't quite sure what the question was, or at least what statement of mine the questioner was challenging. I think maybe, by starting the talk looking at Automated Reasoning techniques, I gave the impression I thought they were, in some fashion, the one true approach to computational intelligence when there are, of course, statistical and other approaches that are almost certainly going to prove hugely important in producing anything like a machine intelligence.

They then took me out for a very nice meal and put me on a train just as the snow started to fall.

Conclusion: The general public are not nearly as scary as they may, at first, appear.

* By everyone I, of course, meant random roleplayers met in pubs.

** One of the committee very nicely said he thought my explanation of Turing Machines, Logic and Automated Reasoning was one of the most accessible he'd come across.

*** The Chinese Room argument, broadly speaking, points out that behaving intelligently and being intelligent may not be quite the same thing.

**** B spent the preceding week or two pointing out I studied Philosophy at university, did a masters with components on the philosophy of AI, am interested in the subject and have a habit of going to talks and lectures on it when I get the chance and that, really, I'm about as educated as a lay person can get on the subject.
 
 
 
rodlox (Crane & friends) on January 9th, 2010 09:30 pm (UTC)
>he thought my explanation of Turing Machines, Logic and Automated Reasoning was one of the most accessible he'd come across.
you could write a book {on the subject(s)} - and it would be a best-seller.

>behaving intelligently and being intelligent may not be quite the same thing.
not sure I understand.

sort of like how the TARDIS is intelligent, but it can't act on its own, so it doesn't behave intelligently?
louisedennis on January 9th, 2010 11:09 pm (UTC)
Well, in this instance we're mostly looking at the question the other way round.

The Turing Test explicitly tests whether a machine behaves intelligently. The question is whether something that can pass the Turing Test is, necessarily, intelligent. The Chinese Room argument hypothesises a man in a room with a big book of rules. Chinese symbols appear on a screen, he looks rules up in his book and then sends more Chinese symbols out. Someone outside the room thinks the room contains someone who understands Chinese, but it doesn't: it contains an English speaker and a big book of rules. So the room behaves as if it contains someone who understands Chinese, but that doesn't mean that the person in the room actually does understand Chinese.
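The rule-following the room does can be sketched in a few lines of code. This is a toy illustration only: the symbols and replies below are invented for the example, and a real "rule book" would be astronomically larger, but the point survives at any scale. The program emits sensible-looking replies by pure lookup, with no understanding of what the symbols mean.

```python
# A toy Chinese Room: the "rule book" is just a lookup table.
# The entries here are invented purely for illustration.
RULE_BOOK = {
    "你好": "你好！",            # a greeting maps to a greeting back
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" maps to "Yes."
}

FALLBACK = "请再说一遍。"  # "Please say that again."

def chinese_room(symbols: str) -> str:
    """Look the incoming symbols up in the book; emit the listed reply."""
    return RULE_BOOK.get(symbols, FALLBACK)

print(chinese_room("你好"))  # the room answers; nothing in it understands
```

From outside, the replies look like those of a Chinese speaker; inside there is only `dict.get`. That gap, between producing the behaviour and understanding it, is the one the argument points at.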

You can take that argument in several directions, and there is a vast literature refuting it. But I generally take it as highlighting that we have an inadequate understanding of what we mean by understanding (and also thought, consciousness and self-awareness, all of which fall under the same argument more or less).

Our best way of telling if someone understands Chinese is to judge their behaviour, but that still doesn't mean that behaving as if you understand Chinese is the whole story...

EDIT: And, as you point out, it seems plausible that something could also be intelligent without behaving intelligently.

Edited at 2010-01-09 11:09 pm (UTC)
rodlox (Amita) on January 10th, 2010 12:31 am (UTC)
thank you for explaining it.

hm...*me thinking further*...so there'd be no way to know if inside the room was nothing more than Babelfish - unless you tripped it up with figures of speech or entendres.

>it seems plausible that something could also be intelligent without behaving intelligently.
admittedly, my example was of something that could not behave (independently) at all.
wellinghall on January 10th, 2010 12:25 pm (UTC)
Thanks for that. And well done!
fredbassett on January 10th, 2010 07:17 pm (UTC)
You're a brave woman! And yes, B did fail in the spousal responsibility test!

Are you around for a conference call sometime later this week re the VS4 ep? We'll need to coordinate with Munchkin.
louisedennis (primeval) on January 11th, 2010 10:22 am (UTC)
Should be. Today, tomorrow (Tuesday) or Friday would be best.
fredbassett on January 11th, 2010 10:29 am (UTC)
At the moment, Friday is looking best for me :)
nyarbaggytep (the little creep; snufkin) on January 10th, 2010 07:59 pm (UTC)
I imagine you'd be ace at giving exactly that kind of talk. You are a lot smarter than you give yourself credit for after all. So I feel vindicated hearing it went well! :)
louisedennis (acadenia) on January 11th, 2010 10:36 am (UTC)
Thanks!

One of the huge problems with this kind of thing is knowing at what level to pitch the talk - assume too much and you lose people, assume too little and you appear patronising. I took the fact there was a lively debate after I finished the presentation as a sign that I got the level about right. I'm usually fine once I'm up in front of an audience, but a lot of agonising went into preparing the slides.
nyarbaggytep (the little creep) on January 11th, 2010 11:49 am (UTC)
*nods*
That does sound very hard. Lively debate is a good sign!