Simple Mental Models

This project draws inspiration from a theoretical model proposed by Haghtalab, Jackson, and Procaccia (PNAS, 2021). They model economic agents as machine learners who use a regularization method to determine which variables to include in their models of the world.

In their model, each agent repeatedly observes information, predicts an uncertain state of the world, and then learns whether the prediction was correct. Once they know the truth, they can refit their model and make a new prediction. However, when information is abundant, the mental cost of incorporating every variable into the model is too high. Because of this, people may resort to "simple" models: models in which only a subset of the relevant variables is used for prediction (as in Lasso regression, where some coefficients are shrunk exactly to zero and the corresponding variables are dropped).
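To make this concrete, here is a minimal sketch (Python, scikit-learn) of the kind of regularized learner the theory has in mind: an L1-penalized logistic regression that, for a strong enough penalty, keeps only a subset of the available variables. The data-generating process below is purely illustrative and is not the one used in the paper or in the experiment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative environment: 5 binary signals, but only the first two
# actually matter for the (red/blue) state. Not the experiment's true process.
n_rounds, n_vars = 200, 5
X = rng.integers(0, 2, size=(n_rounds, n_vars))
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2
y = (rng.random(n_rounds) < 1 / (1 + np.exp(-logits))).astype(int)

# A tighter cognitive budget corresponds to a stronger L1 penalty
# (smaller C): more coefficients are shrunk exactly to zero.
for C in [0.05, 0.5, 5.0]:
    agent = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    agent.fit(X, y)
    kept = np.flatnonzero(agent.coef_[0]).tolist()
    print(f"C={C}: variables kept in the model -> {kept}")
```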

In this project, we investigate the assumption that people use simple models when learning. We test this assumption by examining its implications for the polarization of beliefs. If people do use simple models, then different people might incorporate different variables into their personal models, and we can identify situations in which those differing models will cause them to disagree on the optimal prediction. We find that people are indeed significantly more likely to disagree in precisely the situations where the theory predicts disagreement, suggesting that the use of simple models could be behind persistent disagreements.

To investigate these hypotheses, we designed a simple laboratory experiment in which participants observe the values of 5 different variables and must predict whether an unknown outcome will be red or blue.


This plot shows the share of correct predictions made by subjects in the first stage of the experiment. When participants had access to all 5 variables, they became better at making predictions as they gained experience, and after only 10 rounds of practice their accuracy was significantly better than a random guess.

This positive trend and significant difference from 50% accuracy indicate that subjects are, in fact, learning and possibly forming mental models. 
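As a rough illustration of the kind of check behind that statement, the comparison to chance can be done with a simple binomial test (ignoring, for simplicity, the fact that predictions are clustered within subjects). The counts below are placeholders, not the experimental data.

```python
from scipy.stats import binomtest

# Hypothetical counts for illustration only: say 640 correct predictions
# out of 1,000 made after the first 10 practice rounds.
result = binomtest(k=640, n=1000, p=0.5, alternative="greater")
print(f"share correct = {640 / 1000:.2f}, p-value = {result.pvalue:.2g}")
```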

We allow participants to observe all 5 variables for several periods so they can learn which variables are the most informative. We then randomize the number of variables they can see: in each subsequent round, they are allowed to see only 1, 2, 3, 4, or 5 of the variables. Conditional on the number drawn that round, they choose which of the 5 they want to use. The restriction on the number of variables can be seen as a different regularization parameter in each round, which allows us to identify the size of the models participants use to inform their predictions. Because the assignment is random, a simple regression is enough to estimate the causal effect of allowing them to use an additional variable, as sketched below. If an additional variable has no effect, the subject has reached their cognitive limit and is not incorporating that additional information into their model.
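That regression can be as simple as regressing a correct-prediction indicator on dummies for the number of variables a participant was allowed to see. Here is a hedged sketch assuming a long-format dataset with one row per participant-round; the file and column names (predictions.csv, correct, n_allowed, subject) are mine, not the project's.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant-round:
#   correct   -> 1 if the prediction was right, 0 otherwise
#   n_allowed -> number of variables the participant could see (1-5)
#   subject   -> participant identifier, used for clustered standard errors
df = pd.read_csv("predictions.csv")  # hypothetical file name

# Dummies for each cap on the number of variables; rounds with a single
# variable are the omitted category, so each coefficient is the accuracy
# gain relative to being allowed only one variable.
model = smf.ols("correct ~ C(n_allowed)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["subject"]}
)
print(model.summary())
```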

The figures below show the accuracy of predictions by the number of variables participants chose to see, as well as whether they actively seek to gather as much information as they can.

This figure shows that allowing subjects to use two variables rather than just one improves their prediction accuracy by 10 percentage points over a random guess. However, when allowed to use 3 or more, their accuracy no longer increases.

This figure shows that participants are not exhausting the information available to them. When allowed to observe up to 5 variables, they choose to observe only 4 on average; when allowed to observe 4, they choose an average of 3.7.

We interpret these results as evidence that people do simplify the information they receive and could be using simple mental models. Further evidence for the use of simple models is that we find more polarization precisely in the situations where, given the model each person appears to be using, the theory predicts it.

To measure polarization, we place subjects in pairs at the beginning of the experiment (without their knowledge). Both people within a pair observe the same information and must predict the state in every round. This way, the only possible source of polarization is that they have developed different prediction models, not that they have received different information.

The plot on the right shows that polarization happens more often in situations where the theory predicts it, and the difference is statistically significant at the 1% level.
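For concreteness, that comparison could be computed roughly as follows, assuming a pair-round dataset with a disagreement indicator and a flag for whether the theory predicts disagreement in that round; the file and column names are illustrative, not the ones used in the project.

```python
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# One row per pair-round:
#   disagree        -> 1 if the two paired subjects predicted differently
#   theory_predicts -> 1 if, given each subject's apparent model, the theory
#                      predicts disagreement in this round
df = pd.read_csv("pairs.csv")  # hypothetical file name

predicted = df.loc[df["theory_predicts"] == 1, "disagree"]
other = df.loc[df["theory_predicts"] == 0, "disagree"]

stat, pval = proportions_ztest(
    count=[predicted.sum(), other.sum()], nobs=[len(predicted), len(other)]
)
print(f"disagreement: {predicted.mean():.2f} vs {other.mean():.2f} (p = {pval:.3g})")
```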

Another feature of the mental models people use is that they seem to be sticky: people do not realize that the model they are using might not be the best one, and they keep considering the same variables even after performing poorly. This implies that differences in model use do not disappear, so we can expect polarization to persist even with infinite amounts of information.

The plot on the left shows the average accuracy of the models chosen round by round, disaggregated by the number of variables. The dashed lines indicate the maximal level of accuracy achievable conditional on each model size.

There is no significant upward trend, and average accuracy is significantly below the maximum. Together, these tell us that people do not develop optimal models initially, nor do they converge to them over time.
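The dashed benchmark lines can be approximated by brute force: for each model size k, enumerate every k-variable subset, fit a simple classifier on each, and keep the best accuracy. A sketch under the assumption that the task data sit in arrays X (the 5 variables) and y (the realized color); this is my reconstruction, not necessarily the exact benchmark behind the figure.

```python
from itertools import combinations

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def best_accuracy_by_size(X, y):
    """For each model size k, return the best cross-validated accuracy
    attainable by any k-variable logistic model."""
    n_vars = X.shape[1]
    best = {}
    for k in range(1, n_vars + 1):
        accuracies = []
        for subset in combinations(range(n_vars), k):
            clf = LogisticRegression()
            acc = cross_val_score(clf, X[:, list(subset)], y, cv=5).mean()
            accuracies.append(acc)
        best[k] = max(accuracies)  # accuracy of the best model of size k
    return best
```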

These results bring us closer to understanding how people learn to make predictions about uncertain outcomes. Understanding this is a first step toward addressing learning gaps and designing policies that facilitate correct learning, thus mitigating some of the negative effects of polarization.

As a part of this project, I also compared people's own performance in learning how to predict to the performance of two machine learning algorithms: SVM and Naive Bayes. 

I fed the algorithms exactly the same data the participants observed, for multiple values of the regularization parameters. I was looking for a parametrization that leads to a model with only one or two variables, since that is what the experiment participants seem to be using.

I find that Naive Bayes achieves an accuracy very similar to that of the highest-performing humans. However, with enough information, the algorithm outperforms our participants by about 6 percentage points. SVM, on the other hand, performs significantly worse than humans when little information is available and only catches up with human performance after six rounds of information. This suggests that when information is limited, humans may be better at recognizing the important patterns without overfitting the data; as the amount of data increases, the algorithms stop overfitting and pull ahead. The figures below show human performance (for two groups: those who did better than random in the first part of the experiment and those who did worse), compared to algorithmic performance.
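The algorithmic benchmarks can be reproduced along these lines: train the classifier on the rounds observed so far, predict the next round, and track the running accuracy. The snippet below is a hedged sketch of that rolling comparison, not the exact pipeline behind the figures.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC

def rolling_accuracy(X, y, make_model, min_history=3):
    """Refit the model after every round on all data seen so far, predict
    the next outcome, and return the share of correct predictions."""
    hits = []
    for t in range(min_history, len(y)):
        if len(np.unique(y[:t])) < 2:
            continue  # need both colors in the history to fit a classifier
        model = make_model()
        model.fit(X[:t], y[:t])
        hits.append(model.predict(X[t:t + 1])[0] == y[t])
    return float(np.mean(hits))

# Example usage on arrays X (the 5 variables) and y (the realized color):
# acc_nb  = rolling_accuracy(X, y, BernoulliNB)
# acc_svm = rolling_accuracy(X, y, lambda: SVC(kernel="linear", C=1.0))
```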

A draft of the paper will be available soon in my research section. If you are interested in the code used to generate the plots above and the experimental interface used to collect the data, you can find it in this repository.