Mohammed Alsobay

Empirica Stories: Exploring cultural evolution in the virtual lab

Welcome to “Empirica Stories”, a series in which we highlight innovative research from the Empirica community, showcasing the possibilities of virtual lab experiments!

Today, we highlight the work of Levin Brinkmann and collaborators on their paper Hybrid social learning in human-algorithm cultural transmission, recently published in Philosophical Transactions of the Royal Society A.

Levin Brinkmann is a predoctoral fellow at the Center for Humans and Machines at the Max Planck Institute for Human Development. In 2014, he received a Master's degree in Physics from the University of Goettingen. He joined the center after working for several years as a data scientist in the fashion and advertising industry, where he developed inspirational algorithms for creative workers. Levin's current interests center on the influence of algorithms on collective intelligence and cultural evolution.

Tell us about your experiment!

There is anecdotal evidence that humans have reused algorithmic solutions to their advantage. However, the scope and limits of such social learning between humans and AI are still unknown. To investigate this question, we ran an online study (N=177) with Empirica, in which participants solved a sequential decision-making task. We arranged participants in chains, such that participants' solutions earlier in the chain could influence participants later in the chain. In some of the chains, an algorithm took the place of a human participant, allowing us to observe the extent to which humans adopted what they learned from the solution generated by their algorithmic predecessor.

Screenshot of the experiment interface: Participants were tasked with traversing a network by choosing 8 successive nodes, and were incentivized to choose paths that maximized the cumulative score (each transition’s value is indicated by the arrow connecting the respective nodes). The player’s score updated in real-time as they moved from one node to the next.

What parts of Empirica’s functionality made implementing your experimental design particularly easy?

I love that Empirica is very opinionated, but at the same time, very versatile. The well-structured data model allows you to get started quickly, yet it is impressive how much customization is possible. We made a lot of use of the different event hooks provided by Empirica, and they were all pretty much self-explanatory. For instance, we used a hook to pull a new chain and network structure from the database after each round and display it to the participant.
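For readers who have not used these hooks before, here is a minimal sketch of what such a round-start callback might look like, assuming Empirica v1's classic callback API. The chainId field and the getNextNetwork helper are hypothetical stand-ins for the study's own chain bookkeeping, not part of Empirica itself.

```js
import Empirica from "meteor/empirica:core";

// Hypothetical helper standing in for the study's chain bookkeeping:
// look up the network and the predecessor's solution for this generation.
function getNextNetwork(chainId, generation) {
  // e.g., query a custom "chains" collection here
  return { network: null, previousSolution: null };
}

// Round-start hook: fetch the chain state and expose it to the client
// via round-scoped data.
Empirica.onRoundStart((game, round, players) => {
  const { network, previousSolution } = getNextNetwork(
    game.get("chainId"),
    round.index
  );

  // Store the network and the predecessor's solution on the round so the
  // client can render them to the participant.
  round.set("network", network);
  round.set("previousSolution", previousSolution);
});
```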

How much effort did it take to get your experiment up and running? Did you develop it or outsource?

Both — in our case, we developed the game interface ourselves, and had resources available to hire a developer to help us with the integration into Empirica. That allowed us to be 100% focused on the research question. Later on, we were able to make modifications to the code ourselves.

Are there any interesting workarounds you came up with when Empirica didn’t do quite exactly what you needed?

Our experiment relied on chains of sessions, which was a data model not natively supported by Empirica. However, it was relatively easy to add an additional data model for chains that suited our needs.
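As an illustration of the kind of workaround Levin describes, here is a hedged sketch of one way a chains model could be bolted onto Empirica v1, which runs on Meteor: keep chains in a separate Mongo collection and read from and write to it in the server-side callbacks. The collection, field names, and assignment rule below are illustrative assumptions, not the authors' actual implementation.

```js
import { Mongo } from "meteor/mongo";
import Empirica from "meteor/empirica:core";

// A separate Mongo collection for chains, outside Empirica's built-in
// batch/game/round/stage model. All field names are illustrative.
export const Chains = new Mongo.Collection("chains");

Empirica.gameInit((game) => {
  // Illustrative assignment rule: attach the new game to the chain with the
  // fewest completed generations. (A real gameInit also adds the game's
  // rounds and stages here; omitted for brevity.)
  const chain = Chains.findOne({}, { sort: { generation: 1 } });
  game.set("chainId", chain._id);
  game.set("network", chain.network);
  game.set("previousSolution", chain.lastSolution);
});

Empirica.onGameEnd((game, players) => {
  // Write the finished game's solution back so the next game in the chain
  // can observe it (again, illustrative bookkeeping).
  Chains.update(game.get("chainId"), {
    $set: { lastSolution: game.get("finalSolution") },
    $inc: { generation: 1 },
  });
});
```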

What value do you think virtual lab experiments can add to your field of research?

Generally, the scale of human interactions is increasing steadily, with interactions between humans and machines happening predominantly online. While we do work with data collected in "the wild", I believe that controlled virtual lab experiments will remain the gold standard for research on human-AI interaction.

What’s next?

Since writing up this research, we have already started a new project using Empirica. In the new study, we run an interactive two-player game and ask the question: “What mechanism do humans use to distinguish themselves from bots?”

Mohammed Alsobay

Empirica Stories: Experimenting with partisan bots to study political polarization

Welcome to “Empirica Stories”, a series in which we highlight innovative research from the Empirica community, showcasing the possibilities of virtual lab experiments!

Today, we highlight the work of Eaman Jahani and collaborators on their preprint Exposure to Common Enemies can Increase Political Polarization: Evidence from a Cooperation Experiment with Automated Partisans.

Eaman Jahani is a postdoctoral associate at the UC Berkeley Department of Statistics, and completed his PhD at MIT, where his research focused on micro-level structural factors, such as network structure, that contribute to the unequal distribution of resources or information. As a computational social scientist, he uses methods from network science, statistics, experimental design, and causal inference. He is also interested in understanding collective behavior in institutional settings, and the institutional mechanisms that promote cooperative behavior in networks or lead to unequal outcomes for different groups.

Tell us about your experiment!

Our group ran an experiment with two “people” in each instance to explore political polarization. In each instance, the network was a simple dyad, and the second person was in fact a bot that reacted to the real human, with the two interacting for 5 rounds. The human subject and the bot interacted by updating their responses to a question after seeing the response submitted by the other player. Overall, we recruited about 1,000 subjects into 4 different treatments.

Experimental interface and design: Participants were randomly shown one of three priming articles: (1) “neutral” content on early human carvings in South Africa; (2) “patriotic” content on July 4th celebrations; and (3) “common-enemy” content on the combined threat of Iran, China, and Russia. After reading the article, participants were incentivized to accurately estimate the answer to a question about a political issue (immigration, for example, shown above). After submitting their initial estimate, participants were shown the estimate of a politically-opposed bot (presented as a human), and allowed to revise their estimates.

What parts of Empirica’s functionality made implementing your experimental design particularly easy?

BOTS! Empirica’s flexible bot framework made it easy to program bots that allowed for cleaner interventions.
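To give a flavor of what such a bot can look like, here is a minimal sketch assuming Empirica v1's bot API (Empirica.bot with an onStageTick callback, which we believe fires roughly once per second during a stage). The field names ("estimate", "truth") and the response rule are illustrative assumptions, not the actual bot used in the study.

```js
import Empirica from "meteor/empirica:core";

// Register a bot player. onStageTick is called repeatedly while a stage runs.
Empirica.bot("partisan", {
  onStageTick(bot, game, round, stage, secondsRemaining) {
    // Wait until the human has submitted an initial estimate, and only
    // respond once. "estimate" and "truth" are illustrative field names.
    const human = game.players.find((p) => p._id !== bot._id);
    const humanEstimate = human && human.round.get("estimate");
    if (humanEstimate === undefined || bot.round.get("estimate") !== undefined) {
      return;
    }

    // Illustrative response rule for a politically-opposed bot: answer on
    // the other side of the true value from the human's estimate.
    const truth = round.get("truth");
    const offset = humanEstimate > truth ? -10 : 10;
    bot.round.set("estimate", truth + offset);
  },
});
```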

How much effort did it take to get your experiment up and running? Did you develop it or outsource?

Although I wasn’t very familiar with front-end development, I managed to design the experiments myself. The platform design was very intuitive and the callbacks were clearly explained, making it easy to get up to speed with experiment design. At the time, I believe I was one of the first few users and there was not much documentation, but the user guide has improved significantly since then, and it should be much easier now for someone new to design an experiment from scratch. That being said, even with limited documentation, the framework was intuitive enough that I was able to successfully implement and launch two experiments with no problems.

What value do you think virtual lab experiments can add to your field of research?

Virtual lab experiments enable us to test hypotheses that are nearly impossible or extremely costly to test in the field. Results from Empirica can give us a strong initial understanding of a phenomenon, potentially justifying a follow-up field experiment that is more costly and difficult to implement. The particular setup in which Empirica shines is networked experiments. These experiments can go wrong in so many ways in the field, but here we get to control the environment as much as possible and even create networks with bots. Bots are particularly useful, as they allow us to test counterfactuals that might be rare in reality.

Mohammed Alsobay

Empirica Stories: Using online multiplayer experiments to study team hierarchy

Welcome to “Empirica Stories”, a series in which we highlight innovative research from the Empirica community, showcasing the possibilities of virtual lab experiments!

In our first interview, we highlight the creative work of Christopher To and collaborators on “Victorious and Hierarchical: Past Performance as a Determinant of Team Hierarchical Differentiation”, published in Organization Science. For this project, Christopher To, an incoming Assistant Professor at Rutgers SMLR, was joined by Tom Taiyi Yan (Assistant Professor at the UCL School of Management) and Elad Sherf (Assistant Professor at the University of North Carolina at Chapel Hill's Kenan-Flagler Business School).

Chris’ research explores the antecedents and consequences of competition and inequality. In some of his ongoing work, he asks questions such as "when do hierarchy and inequality reproduce in teams?", "how does inequality change our ethical standards?", and "how does inequality shape the meaning of work?". Tom’s research examines competition and social network effects such as brokerage and structural holes, as well as gender disparity in the workplace. Elad’s research investigates managers’ resistance to feedback and input from others, and the structural and psychological barriers managers face in attempting to be fair to others at work.

Tell us about your experiment!

We recruited approximately 750 participants from an online panel and assigned them to teams of three. Teams completed a spatial judgment task (i.e., guessing the direction of moving dots), and earned points based on the accuracy of each team member's guesses. The "catch" was that teams only had 10 chances to guess, and they needed to decide how many guesses each team member received. How would these guess allocations differ depending on whether a team was told it was performing well or poorly (our experimental manipulation)?

What was helpful for us was that the game was fairly interactive. Participants could watch their teammates guess in real time, and had to decide as a team (via real-time chat) how to allocate their guesses after receiving performance feedback - any decision one team member made would be immediately seen by the others. Relative to other online experiments in our field, this created a genuine sense that participants were part of a real, live team.

Screenshot of the experiment interface: Participants were tasked with guessing the direction in which a collection of dots was moving on the screen, where half of the dots moved together in the same direction, and the remaining half moved in random directions.

What parts of Empirica’s functionality made implementing your experimental design particularly easy?

One neat thing about this study was that we did not need to manipulate network structures. In fact, we approached the platform as teams researchers, and used Empirica primarily to create online teams that could play a game in real time. In a COVID world, Empirica provides a convenient solution to a huge logistical issue faced by teams researchers. It also opens opportunities for non-network researchers who want to study how teams interact in a more digital context.

The flexibility of the platform is incredible - any game/task/design you can think of, you can likely find a way to implement. The data it collects is also wonderful - essentially, any action made by the participants is recorded and can be analyzed. We found this helpful in the review process, as some reviewers were curious about the content of the chat logs and what/how teams were communicating. Luckily, Empirica captured this data and we were able to address the reviewers' concerns.

How much effort did it take to get your experiment up and running? Did you develop it or outsource?

We outsourced a large portion of the experiment to a freelancer. There was a learning curve for the programmer to pick up the system, but it was relatively short, and programmers who know JavaScript are in high supply. It was quite easy to work with the freelancer - for the front-end, we just told him what we wanted and provided some sample screenshots, and he made it work; on the back-end, we told him what actions should be recorded and what we wanted the final output CSV/Excel file to look like. He took care of the rest.

For the smaller back-end details (e.g., changing questionnaire wordings), the lead author had a minor but sufficient programming background to make changes on the fly. Programming experience is extremely helpful, but perhaps not necessary.

Are there any interesting workarounds you came up with when Empirica didn’t do quite exactly what you needed?

Not that we can think of. Perhaps the main challenge was participant recruitment. We recruited respondents from Amazon Mechanical Turk, and they would sometimes populate a room (by clicking the game link) and then walk away from the computer. As a result, sometimes, virtual teams/rooms would be created, but games could not progress because a participant was not present at their computer.

To address this, we scheduled times where respondents had to proactively join (e.g., games took place at 9AM local time). This ensured that the respondents had to actively set aside time to opt into a game during its scheduled time (rather than passively joining or clicking the link while browsing).

What value do you think virtual lab experiments can add to your field of research?

Empirica helped us (as team researchers) in three ways. First, it provided us with a scalable and convenient context to study teams. Naturally, bringing people into a laboratory context during COVID is difficult. Even if you are able to bring people into the lab, data collection is usually slow and cumbersome. With Empirica, the data collection was faster and easily scalable. Second, new methods can introduce new questions. Empirica can introduce factors such as multiple rounds, member turnover, and repeated interactions, which may be more difficult to explore in traditional laboratory contexts. Third, because it is web-based, Empirica provides flexibility in how complex/simple you want the task to be. If you can design a game in a web-browser, you can design it in Empirica.

Abdullah Almaatouq

Empirica has been chosen as the 2021 winner of SAGE’s £15,000 Concept Grant!

Funded by SAGE Publishing, Empirica Will Make Online Experiment Design More Accessible for Social Scientists.

SAGE Publishing — Press Release (PR@sagepub.co.uk)

Six innovative software tools benefitting the social and behavioral sciences will receive SAGE Publishing’s 2021 SAGE Concept Grants, with £15,000 awarded to Empirica. An additional five seed grants of £2,000 will enable the development of earlier-stage tools that support research methods in the social sciences. 

“Despite the rising popularity of online research, social scientists still face many technical and logistical trade-offs when implementing virtual lab experiments,” say Empirica developers Abdullah Almaatouq and Nicolas Paton. “Existing tools that promise ‘build anything’ functionality often require advanced programming skills to design, while more accessible models limit researchers to predetermined research templates. Empirica offers a solution to the usability-functionality trade-off.”

Empirica supports methodological advancement in two areas: First, it enables researchers to test thousands of experimental conditions for any given experiment. Second, it allows users to study groups comprising hundreds of interacting individuals. By using a flexible default structure and settings, Empirica provides modifiable templates for novice programmers and unlimited customization for advanced users. 

The funding from SAGE will allow the team behind Empirica to increase the accessibility of virtual lab experiments, remove barriers to innovation in experiment design, and enable rapid progress in the understanding of human behavior. New features will provide researchers with more options for designing group tasks and studying interpersonal dynamics. 

SAGE has awarded five additional grants of £2,000 to software tools in the early stages of development, to enable concept testing and software development. The 2021 winners are:  

  • Multytude by Hatice Ugurel, Yalin Solmaz, and Hande Enes: A social media platform designed to facilitate meaningful online conversations. Replicating in-person focus groups, the platform aims to help social science researchers collect and track robust public opinion data. 

  • SMIDGen by Matthew Louis Mauriello: A scalable, mixed-initiative dataset generation tool for online social science research – a semi-computational approach to enhance the replicability and scalability of data collection from online social networks. 

  • Intelliplanner by Willem Jan Horninge Roestenburg, Janus Roestenburg, and Emmerentie Oliphant: A software application to guide students and researchers in planning, mapping, and making decisions about the methods used in their social research projects. 

  • AcademicTwitteR Studio by Christopher Barrie and Justin Chun-ting Ho: An R package that makes the new Academic Research Product Track more accessible to researchers. 

  • REFI-QDA Project Exchange Standard by Christina Silver, Kristi Jackson, Fred van Blommestein, and Graham Gibbs: An open access and free standard to enable the transfer and accessibility of qualitatively analysed data across CAQDAS tools. 

“When we launched the SAGE Concept Grants program in 2018, we focused on funding tools that would help computational social scientists,” says Katie Metzler, Vice President of Books and Social Science Innovation at SAGE Publishing. “Now in its fourth year, the program has expanded to fund new tools that support the adoption, development, and application of established and emerging research methods. As the world’s leading research methods publisher, we are excited to broaden the scope of the Concept Grants to empower more social scientists to conduct impactful research.”

To learn more, read an interview with the winners on Methodspace: https://www.methodspace.com/six-new-software-tools-supporting-research-methods-in-the-social-sciences-awarded-sage-concept-grants/  
