19 October 2008 @ 08:09 pm
Swarm Verification  
I keep telling people, well, academics interested in computer verification, about Gerard Holzmann's talk at Automated Software Engineering on Swarm Verification.


Gerard Holzmann develops a model checker called SPIN. In many ways it's the best-known model checker out there. Model checkers check whether a program is correct by running it on all possible inputs (sort of) - obviously this takes a long time. Anyway, Holzmann said that model checkers had been getting better and better, partly because computers had been getting faster and faster.
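(To make "running it on all possible inputs (sort of)" a bit more concrete, here's a toy sketch in Python - this is not SPIN's actual algorithm, just the bare idea of exhaustively exploring every reachable state of a system and checking a property in each one. The little example system and all the names are made up by me.)

from collections import deque

def model_check(initial_state, successors, property_holds):
    """Explore every state reachable from initial_state, breadth first.
    successors(state) gives the possible next states, property_holds(state)
    is the thing we want to be true everywhere.  Returns a violating state
    if there is one, otherwise None."""
    seen = {initial_state}
    frontier = deque([initial_state])
    while frontier:
        state = frontier.popleft()
        if not property_holds(state):
            return state                     # counterexample found
        for nxt in successors(state):
            if nxt not in seen:              # this 'seen' set is what gets huge
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Toy system: a counter that can be bumped by 1 or 2; check it never passes 10.
print(model_check(0, lambda n: [n + 1, n + 2], lambda n: n <= 10))   # prints 11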

However, in the past five to ten years actual CPU speeds have not been getting faster at the same rate (certainly not at the Moore's Law rate); instead computer manufacturers have been building multi-core machines. This means that for model-checking software to keep up it has to exploit multiple cores. The interesting bit was that Holzmann had done some tests running SPIN with several processes at once. The "obvious" idea would be to get one run of SPIN to search several different inputs, using shared memory to avoid duplicating the search. There was then lots of technical stuff about computer hardware, from which I understood that you build a multi-core machine by having groups of four cores clustered around a bit of memory. Accessing the bit of memory a core is close to is "quick", but accessing memory close to a different core cluster takes a long time.

Holzmann had done some experiments with multiple processes and shared memory, and it seemed pretty clear that with up to four processes you got a good speed-up on the multi-core machine, but above four processes the speed-up was wiped out by the slow-down caused by accessing memory in a different core cluster. Worse, once the nearby processes had finished, the far-away ones didn't migrate closer to the memory - they stayed slow.

So instead of using shared memory, Holzmann ran lots of completely separate copies of SPIN. He set them to search randomly - so potentially there was duplication in the search - but in fact the system covered much of the search space. In a simple trial, setting the system to search for 100 numbers in a large set of possibilities, the multi-process version found 98 of the numbers in an hour while the single-process version found only 2.
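(Again, a hedged sketch of the idea rather than Holzmann's actual set-up: lots of completely separate searchers, each with its own random seed and no shared memory at all. The toy state space, the number of copies and the search budget are all invented by me, so the numbers won't match the 98-versus-2 result above, but the union of the independent random searches does find far more than any single copy on its own.)

import random
from multiprocessing import Pool

DEPTH = 20
# Pretend 100 particular leaves of a binary tree are the "bugs" to be found.
TARGETS = {format(i, "020b")
           for i in random.Random(42).sample(range(2 ** DEPTH), 100)}

def successors(state):
    # Toy state space: bit-strings, branching left ("0") or right ("1").
    return [] if len(state) >= DEPTH else [state + "0", state + "1"]

def random_dfs(seed, budget=100_000):
    """One swarm member: a depth-first search that shuffles the branch order
    with its own private random number generator, shares nothing with the
    other copies, and gives up after visiting a fixed number of states."""
    rng = random.Random(seed)
    found, stack, visited = set(), [""], 0
    while stack and visited < budget:
        state = stack.pop()
        visited += 1
        if state in TARGETS:
            found.add(state)
        kids = successors(state)
        rng.shuffle(kids)              # each copy picks branches in its own order
        stack.extend(kids)
    return found

if __name__ == "__main__":
    with Pool(8) as pool:                         # e.g. one copy per core
        swarm = pool.map(random_dfs, range(8))    # eight independent seeds
    print("one copy on its own found", len(random_dfs(0)), "targets")
    print("the swarm between them found", len(set().union(*swarm)), "targets")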
 
 
 
wellinghall on October 20th, 2008 10:16 am (UTC)
model checking

adaese gets cross when I try to check models ... ;-)
louisedennis on October 20th, 2008 10:46 am (UTC)
I bet she does :-)
king_pellinor on October 20th, 2008 10:40 am (UTC)
Why can't the search space be divided up between the versions of SPIN, rather than having them all tackle the same thing randomly?
louisedennis on October 20th, 2008 10:46 am (UTC)
Well, err, basically because you don't know what the search space looks like until you start exploring it. It's a big tree structure - they just get each version of SPIN to pick a branch at random whenever there is a choice, and trust that this will get them a good spread through the possible branches.
king_pellinor on October 20th, 2008 11:07 am (UTC)
OK, fair enough :-)

Is there any way to bias them in different directions, so they don't overlap so much, or would that risk missing out some areas altogether?

I speak from complete ignorance here, of course :-D
louisedennis on October 20th, 2008 11:22 am (UTC)
He has one process that always takes the leftmost branch and one that always takes the rightmost (and they then do standard depth-first search within that) - but I think the others are all just set to random. You could probably weight the searches to tend to the left or the right and then have a spread of such tendencies.

I'm fairly sure they're not doing that but, now you've suggested it, it seems a fairly obvious direction to go in, in order to try and improve coverage.
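(For what it's worth, here's how I imagine that weighting might look - purely my own guess prompted by the comment above, not anything from the talk. Each copy gets a "leftness" between 0 and 1: at every choice point it tries the branches left-to-right with that probability and right-to-left otherwise, so a leftness of 1 is the always-leftmost searcher, 0 is the always-rightmost one, and the rest are spread in between, instead of just shuffling as in the earlier sketch.)

import random

def make_branch_picker(leftness, seed=None):
    """Return a function giving the order in which to try the children at a
    choice point: with probability `leftness` keep them left-to-right,
    otherwise reverse them."""
    rng = random.Random(seed)
    def pick_order(children):
        children = list(children)
        return children if rng.random() < leftness else list(reversed(children))
    return pick_order

# One always-rightmost copy (leftness 0), one always-leftmost (leftness 1),
# and a spread of biases in between.
pickers = [make_branch_picker(i / 7, seed=i) for i in range(8)]

# To plug a picker into the stack-based search sketched earlier you would do
#     stack.extend(reversed(picker(kids)))
# so that the first branch in the returned list really is explored first.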