20 December 2012 @ 02:01 pm
Paper Accepted: Agent Reasoning for Norm Compliance: A Semantic Approach  

We are pleased to inform you that your paper #671
Title: Agent Reasoning for Norm Compliance: A Semantic Approach
has been accepted for full publication and oral presentation in the proceedings of the Twelfth International Conference on Autonomous Agents and Multiagent Systems (AAMAS2013).

This will be my first full AAMAS paper and I'll confess to being pleasantly surprised that it was accepted, since it ended up being a frantic rush to complete, with me tweaking the draft while Birna (the first author) was on a transatlantic flight and then Birna doing a final pass and submitting it at some hideous hour while jet-lagged. So I think it's fair to say it's not the most polished piece of writing I've ever been involved with. Fortunately we now have the time to sort it out before publication. B, at this point, would make disparaging remarks about Computer Science and our fondness for conference publication over journals.

If you have a bunch of agents working together in some kind of organisation, then one of the mechanisms for controlling that work is via norms (e.g., things the agents should or should not do while working for the organisation). We argued that the kind of logic-based cognitive agents we work with can't have their norms programmed in in advance (we want it to be possible to form organisations on the fly), and that norms can't be treated in the same way as the other standard components of logical agents (specifically, they can't be treated as plans or goals). So we proposed an extended logical framework to allow an agent to obey its norms. We tried to prove some properties of the framework but actually had a lot of difficulty finding any (in my role as counter-example finder in chief, I kept finding out that our proposed theorems weren't actually true). We did manage to characterise the situations in which a norm could prevent an agent doing something it might otherwise have done, and to show that that characterisation was accurate.
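To give a flavour of the idea (this is my own illustrative sketch, not the paper's actual formalism: the names `Norm`, `Agent` and `deliberate` are invented for this post), you can think of norms as prohibitions and obligations that the agent consults during deliberation, filtering or extending its planned actions rather than being plans or goals themselves:

```python
from dataclasses import dataclass, field

@dataclass
class Norm:
    kind: str       # "prohibition" or "obligation"
    action: str     # the action the norm refers to
    condition: str  # context in which the norm is in force

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    norms: list = field(default_factory=list)

    def in_force(self, norm):
        # A norm applies when its condition matches the agent's current beliefs.
        return norm.condition in self.beliefs

    def deliberate(self, planned_actions):
        actions = list(planned_actions)
        for n in self.norms:
            if not self.in_force(n):
                continue
            if n.kind == "prohibition" and n.action in actions:
                actions.remove(n.action)     # the norm blocks a planned action
            elif n.kind == "obligation" and n.action not in actions:
                actions.insert(0, n.action)  # the norm forces an extra action
        return actions

# A prohibition in force removes an action the agent would otherwise perform.
agent = Agent(beliefs={"under_supervision"},
              norms=[Norm("prohibition", "move_rubble", "under_supervision")])
print(agent.deliberate(["search_room", "move_rubble"]))  # ['search_room']
```

Because the norms live outside the agent's plan library, the same agent can be dropped into a different organisation simply by handing it a different list of norms, which is the "on the fly" point above.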

It's a four author paper. Birna was the main driver behind it and a lot of the work setting up the framework was hers. I came up with counter-examples, helped with the proofs and wrote the motivating example with which I'm rather pleased:

As an example let us consider an autonomous robot intended for use in emergency search and rescue situations. We can expect this robot to be deployed in a variety of situations as part of a mixture of organisational structures; structures which may well have been assembled rapidly and may well change frequently as events progress. As such, while the robot’s basic capabilities (e.g., searching buildings, removing rubble, etc.) will remain unchanged they will need to fit within different organisational protocols. For instance in some situations a robot may be expected to perform some tasks (e.g. moving rubble) only under supervision from a human; in other situations it may be trusted to complete all its tasks entirely autonomously. It is vital, therefore, that the robot can flexibly incorporate different normative rules into its reasoning.

So for instance, such a robot’s basic behaviour may be to search all rooms in a building exhaustively, proceeding from room to room based upon its internal beliefs about which rooms have been surveyed. Now suppose this robot is placed within an organisational structure where groups of agents are assigned to each building and work together cooperatively to search it. A key to this exploration happening efficiently will be the agents communicating with each other to prevent individual rooms being visited by more than one robot. A typical norm for such a group might be that the robot always communicates that a room has been surveyed before it moves to another room. It is important to note that such a norm should not be hard-wired into the robot in advance since the next time it is deployed, the organisational structure may be different. Also, we should reasonably expect such norms not only to prohibit the execution of actions but also to insist that actions not normally occurring in the agent’s plans take place (such as communicating information, requesting assistance, etc.).
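The communicate-before-moving obligation can be sketched as a rewrite of the robot's plan (again, my own toy illustration rather than anything from the paper: the plan representation and the function name are invented). When the norm is in force, a communication step reporting the last surveyed room is inserted before each move; when it is not, the plan is untouched:

```python
def apply_communication_norm(plan, norm_in_force):
    """Rewrite a plan of (action, room) steps, e.g. ("survey", "A") or
    ("move", "B"), so that every move is preceded by a communication
    announcing the room just surveyed."""
    if not norm_in_force:
        return list(plan)
    rewritten, last_surveyed = [], None
    for action, room in plan:
        if action == "move" and last_surveyed is not None:
            # Obligation: report the surveyed room before moving on.
            rewritten.append(("communicate", last_surveyed))
            last_surveyed = None
        rewritten.append((action, room))
        if action == "survey":
            last_surveyed = room
    return rewritten

plan = [("survey", "A"), ("move", "B"), ("survey", "B")]
print(apply_communication_norm(plan, True))
# [('survey', 'A'), ('communicate', 'A'), ('move', 'B'), ('survey', 'B')]
```

This also illustrates the closing point of the example: the norm obliges an action (communicating) that the robot's basic search behaviour would never generate on its own.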

This entry was originally posted at http://purplecat.dreamwidth.org/84344.html.
fredbassett on December 20th, 2012 03:01 pm (UTC)
I am officially impressed!
louisedennis on December 20th, 2012 03:20 pm (UTC)
It's a nice piece of news to get just before Christmas.
lukadreaming on December 20th, 2012 03:29 pm (UTC)
Splendid news - congratulations! I *think* I see what you explained, but please don't test me on it ...
louisedennis on December 20th, 2012 03:59 pm (UTC)
wellinghallwellinghall on December 20th, 2012 03:37 pm (UTC)
This will be my first full AAMAS paper

Well done!
louisedennis on December 20th, 2012 04:01 pm (UTC)
I have grumped a little about how previous papers, which have represented considerably more work, have only had short presentations. I even complained a bit about how theory always seems to trump implementation, though I don't actually have any data to back that up.
kargicq on December 20th, 2012 06:23 pm (UTC)
Keep that complaint as a theory, not properly developed, and you're obviously onto a winner...

deinonychus_1 on December 20th, 2012 09:46 pm (UTC)
Well done! :-)
louisedennis on December 21st, 2012 09:34 am (UTC)
cordeliadelayne on December 22nd, 2012 12:38 am (UTC)
Very impressive!