
Collective Intelligence

10 Dec

Thomas Malone runs a fascinating group called the MIT Center for Collective Intelligence. He is working “to understand the conditions that lead to collective intelligence rather than collective stupidity”.

(He has a good grounding, having studied maths, computer science, economic systems and cognitive psychology.)

Here are the key points from an interview he recently gave about his work.

He describes the internet as a form of collective intelligence drawing particular attention to Linux and Wikipedia. He thinks “they’re just barely the beginning of the story. We’re likely to see lots more examples of Internet-enabled collective intelligence – and other kinds of collective intelligence as well – over the coming decades”.

As such he says his group is trying to find out how “people and computers can be connected so that—collectively—they act more intelligently than any person, group or computer has ever done before”. He says that “if you take that question seriously, the answers you get are often very different from the kinds of organizations and groups we know today”.

A major key to understanding this is finding a measure of collective intelligence using “a single statistical factor that predicts how well a given group will do on a very wide range of different tasks”.

Interestingly “the average and the maximum intelligence of the individual group members was correlated, but only moderately correlated, with the collective intelligence of the group as a whole”.
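Malone’s group extracted that single factor from group scores using factor analysis (much as psychologists extract the “g factor” for individuals). As a rough, minimal sketch of the idea – using the first principal component as a stand-in for the real factor analysis, and invented scores – it might look like this:

```python
import numpy as np

# Hypothetical scores for 6 groups on 4 different tasks (rows = groups).
# The numbers are invented for illustration; the real studies used many
# more groups and tasks.
scores = np.array([
    [7.0, 6.5, 8.0, 7.5],
    [4.0, 3.5, 5.0, 4.5],
    [9.0, 8.0, 9.5, 9.0],
    [5.0, 5.5, 6.0, 5.0],
    [3.0, 2.5, 3.5, 3.0],
    [6.0, 6.0, 7.0, 6.5],
])

# Standardise each task's scores, then take the first principal
# component as a stand-in for the single "c factor".
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s[0] ** 2 / (s ** 2).sum()
c_factor = z @ vt[0]  # one collective-intelligence score per group

print(f"first factor explains {explained:.0%} of the variance")
print("group scores on the factor:", np.round(c_factor, 2))
```

If groups that do well on one task tend to do well on the others, one factor explains most of the variance – which is exactly what Malone found.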

He found that there were two factors that had the most influence on his measure of group intelligence:

The first was the average social perceptiveness of the group members. We measured social perceptiveness in this case using a test developed essentially to measure autism. It’s called the “Reading the Mind in the Eyes Test”. It works by letting people look at pictures of other people’s eyes and try to guess what emotions those people are feeling. People who are good at that work well in groups. When you have a group with a bunch of people like that, the group as a whole is more intelligent.

The second factor we found was the evenness of conversational turn taking. In other words, groups where one person dominated the conversation were, on average, less intelligent than groups where the speaking was more evenly distributed among the different group members.

I find this area of discussion particularly interesting, working as I do on large IT projects. Their success is significantly influenced by how well people work together, given the processes and personalities involved. And as the problem domains of human endeavour become ever more complex, it becomes harder for individuals to solve problems alone, and group efforts are required – the Large Hadron Collider is a great example.

It’s interesting to contrast his work with the principles in James Surowiecki’s book The Wisdom of Crowds, which talks about bringing the knowledge of individuals to bear on a problem. He claims that wise decisions are made by taking the average of the decisions of a group of individuals, provided that three conditions hold: diversity of experience among the individuals, decentralisation of knowledge, and independence of decision-making across the group.
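Surowiecki’s averaging claim is easy to illustrate: when diverse, independent guesses err in different directions, the errors tend to cancel, so the mean can beat every individual. A toy sketch (the numbers are invented):

```python
# Hypothetical independent guesses at a jar containing 100 sweets.
true_value = 100
guesses = [80, 95, 110, 130, 85]  # diverse, independent estimates

crowd_estimate = sum(guesses) / len(guesses)
individual_errors = [abs(g - true_value) for g in guesses]
crowd_error = abs(crowd_estimate - true_value)

print("crowd estimate:", crowd_estimate)                 # 100.0
print("crowd error:", crowd_error)                       # 0.0
print("best individual error:", min(individual_errors))  # 5
```

The cancellation only works if the guesses really are independent and diverse – which is why Surowiecki’s conditions matter.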

In this fun article Tom Stafford writes that we rely on our environment for intelligence more than we like to think – whether that environment is Google and Wikipedia or the people and things around us. He describes how we take so many cognitive shortcuts that we don’t bother remembering many things if, for example, the people around us are likely to remember them for us. He says “our minds are made up just as much by the people and tools around us as they are by the brain cells inside our skull”.

Malone suggests: “You might well argue that human intelligence has all along been primarily a collective phenomenon rather than an individual one. Most of the things we think of as human intelligence really arise in the context of our interactions with other human beings. We learn languages. We learn to communicate. Most of our intellectual achievements as humans really result not just from a single person working all alone by themselves, but from interactions of an individual with a culture, with a body of knowledge, with a whole community and network of other humans.

I think and I hope that this approach to thinking about collective intelligence can help us to understand not only what it means to be individual humans, but what it means for us as humans to be part of some broader collectively intelligent entity.”

On Intelligence by Jeff Hawkins and Sandra Blakeslee

13 Mar

A book with many fascinating insights into how the mind works, sadly flawed by some brash assumptions and a tendency to gloss over important issues.

So his aim is to understand, correctly, how the mind works – he says most other people have got it wrong, and that no one has a grand theory of how the mind works like he does. Ahem, yes, a bit grandiose and martyr-like. The book is that theory, i.e. a grand account of how the mind works, and the first part of his project. The second part will be to figure out how to truly put this algorithm to work inside a machine. He says that while the AI industry has come up with some great applications, it missed the point by starting to implement AI before fully understanding the intelligent brain it was aiming to base it on.

I laud his aims – he made his money building personal gadgets (he designed the Palm Pilot) and is now spending his gains on his intelligence institute.

In summary, he is a long way from his grand claim of a complete understanding of the way “intelligence” works, but what he has produced is a good step in the right direction towards increasing our understanding.

Let’s get the negatives out of the way first:

On p41 he makes the big, big assumption that only the neocortex houses intelligence. This is pretty brash, given how much we’re still learning about how the mind works, and how much else I’ve read shows all the parts of the mind influencing one another. That said, he hadn’t defined intelligence at that point.

And to say on p43 that the mind is produced only by the brain, period, is also bold, given all the research on the body-mind connection and on the nervous system around the body, which many think plays a much larger role in the makeup of the mind than we intuitively assume. It has been argued that the mind wouldn’t function without the body – cf. the feedback mechanisms he himself mentions. And this seems an unnecessary assumption.

And now to his excellent paradigm for understanding how the mind works:

He posits that the mind can take any input – sight, hearing, touch – and learn to process it. It works the same way for each: the mind takes in data over a period of time. Sight is not a snapshot – the eye makes about three “saccades” every second, taking in a small part of the field of vision each time and building up a picture over time. Similarly, and more intuitively, with hearing – we process a series of sounds over time; a snapshot of sound at a single instant wouldn’t make any sense. And so with touch – if you wake up touching something, you can’t figure out what it is until you’ve moved along it, i.e. gathered a sequence of touch input over time.

Then the cortex is made of six layers of neurons, each layer holding an abstraction of the data in the layer below. So, for example, when you hear music the lowest layer registers the notes, the next layer puts those notes into riffs, and so on. Or when you’re reading, you get letters at the lowest level, morphemes at the next, then words at the third, then phrases, and so on, until you have a more abstract understanding at the top.

When we’re learning something new, say reading, even the simple parts – the letters – go right to the top layer, and we’re aware of them. As we learn, the letter processing moves down to a layer of which we’re not aware, and we can think more about the words; as we get a bit more adept, the meanings are all we need to consider at the top layer, and so on. So as we practise something, we need to consider fewer of the details, which means we’re only conscious of the highest, i.e. most abstract, layer.

Unless, of course, there’s an “error”. Only errors filter up the chain: say you are walking into your house the way you always do, but you suddenly notice that a floorboard is loose – that will shoot up the layers until the higher layers become aware of it. Otherwise all the actions are pretty much automatic.
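The idea that routine input is absorbed low down while only surprises reach awareness can be sketched as a tiny hierarchy – this is purely an illustrative toy of the error-propagation principle, not Hawkins’s actual algorithm:

```python
# Toy hierarchy: each layer handles input that matches its learned
# expectation and escalates only surprises. Illustrative only; Hawkins's
# real model is far richer than this.

class Layer:
    def __init__(self, name, expected):
        self.name = name
        self.expected = expected  # patterns this layer predicts

    def process(self, item):
        """Return None if the input was predicted, else escalate it."""
        if item in self.expected:
            return None   # handled silently; nothing filters up
        return item       # surprise: pass up to the layer above

hierarchy = [
    Layer("floorboards", {"firm step"}),
    Layer("walking",     {"normal stride"}),
    Layer("conscious",   set()),  # top layer: everything here is noticed
]

def walk(sensation):
    signal = sensation
    for layer in hierarchy:
        if layer is hierarchy[-1]:
            return f"noticed: {signal}"
        signal = layer.process(signal)
        if signal is None:
            return "handled automatically"

print(walk("firm step"))     # handled automatically
print(walk("loose board!"))  # noticed: loose board!
```

The familiar step never rises above the lowest layer; the loose board fails every layer’s prediction and shoots straight up to awareness, just as in the example above.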

So the abstraction process is what the brain excels at, and something I’ve always intuitively thought the brain did too, so good to see someone else confirming the theory.

Now given his system for how the mind works, the final chapter on applying this algorithm to all walks of life is most inspiring. The idea of this algorithm’s ability to learn given any input is powerful indeed. So just plug it into a camera and a car, for example, and it would use that same algorithm to figure out driving. And once we spend the time training one system we can simply copy it and refine it. It would truly revolutionise our world.
