This week in "Network Epistemology" we looked at collective search in complex landscapes.

A group of people are trying to find the high point in a fitness landscape. The question: how should the group communicate in order to ensure the best outcome?
Imagine you want to find the best place to lie on a beach. You can't anticipate ahead of time how good each spot will be, but you can try one and then move to see if another is better. You ultimately want to find the best spot to plant your umbrella.
Some of your friends are also on the same beach, but they are trying out different spots. You can communicate with some of them, but not all. They can say "here's where I am and here's how good it is." If a friend has a better spot, you might move toward her.
This is the basic idea, and the fundamental question is: how should you communicate with your friends?
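The beach-search setup above can be sketched as a toy agent-based model. This is my own minimal version, not the actual code from any of the papers discussed below: agents occupy spots on a rugged landscape, each listens to a few randomly chosen friends, imitates a better-off friend when there is one, and otherwise explores nearby on its own. All names and parameter values here are illustrative assumptions.

```python
import random

random.seed(0)

N_SPOTS = 100
# A rugged "beach": random spot qualities, so there are many local peaks.
landscape = [random.random() for _ in range(N_SPOTS)]

def run(neighbors_per_agent, n_agents=20, rounds=50):
    """Simulate one group search; return the fraction of agents
    who end up on the single best spot."""
    positions = [random.randrange(N_SPOTS) for _ in range(n_agents)]
    # Fixed random communication network: each agent listens to k others.
    network = [random.sample(range(n_agents), neighbors_per_agent)
               for _ in range(n_agents)]
    for _ in range(rounds):
        new_positions = []
        for i, pos in enumerate(positions):
            # Best spot among your own and your friends' reports.
            candidates = [positions[j] for j in network[i]] + [pos]
            best = max(candidates, key=lambda p: landscape[p])
            if landscape[best] > landscape[pos]:
                new_positions.append(best)  # move toward a better-off friend
            else:
                # No friend is doing better: try a nearby spot yourself.
                trial = (pos + random.choice([-1, 1])) % N_SPOTS
                new_positions.append(trial if landscape[trial] > landscape[pos]
                                     else pos)
        positions = new_positions
    best_spot = max(range(N_SPOTS), key=lambda p: landscape[p])
    return sum(1 for p in positions if p == best_spot) / n_agents

print(run(neighbors_per_agent=2))   # sparse communication
print(run(neighbors_per_agent=19))  # everyone hears nearly everyone
```

Varying `neighbors_per_agent` is the knob the discussion below turns on: how much communication helps or hurts the group's final outcome.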

We started by reading this paper by Grim, @philosophydan, and colleagues.

cambridge.org/core/journals/…

They use computer simulations to address this question.
Their main finding is that groups do better when they communicate less. That is, people should pay attention to a few of their friends, but not too many. This is consistent with several earlier papers on the subject, and collectively the results are quite surprising.
The surprising conclusion: it's better to be less informed. The basic idea is that, by being less informed the group searches the entire beach better than they would if they all talked. Communicating widely makes it likely that you'll find a "local optimum" but not a global one.
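The local-optimum problem can be seen in a tiny example (the heights below are my own made-up numbers, not data from the paper): a simple hill-climber moves uphill one step at a time and stops at the nearest peak, which need not be the highest one.

```python
# A made-up bumpy landscape; the global peak is height 9 at index 4.
heights = [1, 3, 2, 5, 9, 4, 6, 7, 3, 2]

def hill_climb(start):
    """Greedily step to the better adjacent spot until no neighbor improves."""
    pos = start
    while True:
        nbrs = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
        best = max(nbrs, key=lambda p: heights[p])
        if heights[best] <= heights[pos]:
            return pos  # stuck on a (possibly local) peak
        pos = best

print(hill_climb(0))  # returns 1: stuck on the local peak of height 3
print(hill_climb(3))  # returns 4: this start reaches the global peak
```

A group whose members all chase the same early report behaves like many climbers started in the same basin: they converge quickly on one peak, but it may only be a local one.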
The paper explores variations in what they call the "fiendishness" of the fitness landscape, and shows how the conclusions are largely robust to variation in difficulty.
It also seems plausible that by limiting information you slow down group learning. And this is demonstrated in this model: those networks that do better also take longer to succeed.
I've written about this before, and I think this may be a robust trade-off in a lot of learning situations. We can have accurate or fast, but not both, in collective learning. Which we prefer may depend on the type of problem we face.
This model is presented as a model of scientific decision making. The class spent a lot of time talking about whether the model's assumptions fit that situation. In particular, we talked quite a lot about the degree to which the "landscape" metaphor was appropriate.
Shifting gears slightly, we then looked at this paper by @RobertGoldston5 and colleagues, which presents two lab experiments in which humans are tasked with a very similar problem: finding the best place to locate themselves on a landscape.

gureckislab.org/papers/Goldsto…
One of the laboratory experiments is very similar to the model from Grim, Singer, et al. And the conclusions are also very similar. Goldstone et al. find that humans sometimes do better when they have limited information.
We talked a little about what to make of this consonance between results. On the one hand, it confirms some aspects of the Grim, Singer model. That is, if we replace their silicon agents with real humans -- agents made of meat -- we get similar conclusions.
But there still remains a tricky inference problem. Will meat agents (i.e., humans) perform the same in realistic settings as they do in Goldstone's experiment? We have to make some guesses about that in both cases.
This shows how experiments often have many of the same inference problems that mathematical models do. In both cases, you have to infer from the experiment/model to real-world behavior. And in both cases, you have to make your best guess about that connection.
Lastly we looked at the most realistic online experiment, conducted by @winteram and @duncanjwatts. They had people play an online game which was equivalent to the landscape search problems discussed above.

pnas.org/content/109/3/…
They didn't focus on the *amount* of information communicated, but instead investigated other features of network structure. They reached some interesting conclusions. Most notably, networks with low "clustering" did very well.
They also found that low mean path length was highly correlated with success, meaning that information about good locations traveled fast. This contradicted the results of some agent-based models and raises some concern about the applicability of those models.
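For readers unfamiliar with the two network statistics just mentioned, here is a hedged illustration computing both by hand for a small example graph of my own (real analyses would use a library such as NetworkX, but the definitions are simple enough to spell out):

```python
from collections import deque

# A small undirected example graph as adjacency sets (my own toy example).
graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0, 4},
    4: {3},
}

def clustering(g):
    """Average local clustering: for each node, the fraction of its
    neighbor pairs that are themselves connected (0 for degree < 2)."""
    total = 0.0
    for v, nbrs in g.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in g[a])
        total += links / (k * (k - 1) / 2)
    return total / len(g)

def mean_path_length(g):
    """Average shortest-path length over all reachable node pairs,
    via a breadth-first search from each node."""
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in g[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

print(clustering(graph))        # triangle 0-1-2 gives high local clustering
print(mean_path_length(graph))  # short paths: information spreads quickly
```

High clustering means your friends tend to know each other, so their reports are redundant; low mean path length means news of a good spot reaches everyone in few hops.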
This was an interesting week because we had perhaps the clearest comparison between simulation models and human experiments -- which allowed some results to be confirmed while others were challenged.
In addition to offering surprising conclusions about group learning, I think these examples present a really good case study for philosophers interested in the connection between models, lab experiments, and the real world.
Keep Current with Kevin J.S. Zollman