Evaluating inter-brain synchrony during (dis)agreement in monastic debate
Monastic debate as practiced by Tibetan monks (van Vugt et al. 2020).
The Quick Take
Inter-brain synchrony is a measure of how synchronised the brain activity of two or more people is. It appears to be affected by social circumstances: for example, inter-brain synchrony between two people increases during cooperation but not during competition. A similar effect was found for periods of agreement and disagreement in monastic debate, but that finding was based on subjective annotations. In this thesis, I focused on explicit utterances of agreement and disagreement. In this case, no significant differences were found. However, I was able to distinguish between the two conditions using a Support-Vector Machine (SVM) classifier, a popular machine learning algorithm. This disconnect between the statistics and the machine learning results opened up new questions about the exact role of inter-brain synchrony in agreement and disagreement.
Introduction
Inter-brain synchrony is the synchronisation of brain activity between two or more people. It is thought to play an important role in interpersonal dynamics and shared experiences, and has therefore been studied to better understand how our brains work during social activities such as conversation, storytelling, and cooperation.
This social phenomenon is particularly interesting in the context of monastic debate, a highly social and interactive form of meditation that helps Buddhist monks deepen their understanding of philosophical material. The practice trains monks to assess their partner's mental and emotional states, and it also cultivates broader cognitive and emotional skills.
Buddhist monks are believed to show strong inter-brain synchrony due to their years of intense training, and this synchrony is thought to increase further during agreement in debate. This is also what my thesis supervisor found when comparing the inter-brain synchrony of debating monks using electroencephalography (EEG). In her study, the debate was annotated by fellow monks to assess whether the debaters were in agreement or disagreement. My thesis aimed to test this with a more standardised approach: focusing only on the standardised reply of the defending monk to determine whether the monks were in agreement or disagreement.
Execution
I approached this in two ways. First, I tested whether there was a significant difference in inter-brain synchrony between moments when the monks agreed and moments when they disagreed. Second, I put this further to the test by training several machine learning models to see whether any of them could distinguish agreement from disagreement on a single-trial basis.
For the statistical analysis, I tested every combination of channel (electrode) and frequency band, as the brain operates at different frequencies. Most statistical tests assume that observations are independent, but since the trials come from pairs of monks, the data within each pair are not independent. I therefore chose a linear mixed-effects model, which accounts for this non-independence by treating differences between pairs as random effects. After correcting the results for multiple comparisons, no significant differences remained, so I could not draw any conclusions about whether inter-brain synchrony differed between agreement and disagreement.
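To give an impression of what this looks like in practice, here is a minimal sketch in Python. It assumes the per-trial synchrony values sit in a long-format table with hypothetical columns `ibs` (synchrony value), `condition` (agree/disagree), `pair` (dyad ID), `channel`, and `band`; the file name, reference level, and correction method are illustrative rather than the exact choices made in the thesis.

```python
# Minimal sketch: one linear mixed-effects model per channel x frequency band,
# with a random intercept per monk pair, followed by multiple-comparison correction.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("ibs_trials.csv")  # hypothetical file with one row per trial

pvalues, labels = [], []
for (channel, band), subset in df.groupby(["channel", "band"]):
    # The random intercept per pair accounts for non-independence within dyads.
    model = smf.mixedlm("ibs ~ condition", data=subset, groups=subset["pair"])
    result = model.fit()
    # Assumes "agree" is the reference level, so the effect of interest is "disagree".
    pvalues.append(result.pvalues["condition[T.disagree]"])
    labels.append((channel, band))

# Correct for testing many channel x band combinations (Benjamini-Hochberg FDR here;
# the thesis may have used a different correction).
rejected, p_corrected, _, _ = multipletests(pvalues, alpha=0.05, method="fdr_bh")
for (channel, band), p, sig in zip(labels, p_corrected, rejected):
    print(f"{channel} / {band}: corrected p = {p:.3f}{' *' if sig else ''}")
```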
After the statistical approach, I moved on to machine learning techniques to see if they could offer a clearer distinction. I selected three classifiers known to handle small sample sizes well: Support-Vector Machines (SVM), shrinkage Linear Discriminant Analysis, and the Extreme Learning Machine. Each of these algorithms learns to classify trials from patterns in the data, and different algorithms may be better suited to pick up different kinds of patterns.
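As a rough illustration, the three classifiers could be set up with scikit-learn as sketched below. The SVM and shrinkage LDA map directly onto scikit-learn estimators; the Extreme Learning Machine has no scikit-learn implementation, so the stand-in here is a minimal random-projection classifier that captures the core idea rather than the exact model used in the thesis.

```python
# Sketch of the three classifiers, assuming scikit-learn is available.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

class ExtremeLearningMachine(BaseEstimator, ClassifierMixin):
    """Random non-linear projection followed by a linear read-out (the core ELM idea)."""
    def __init__(self, n_hidden=200, random_state=0):
        self.n_hidden = n_hidden
        self.random_state = random_state

    def _project(self, X):
        return np.tanh(X @ self.weights_ + self.biases_)

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        self.weights_ = rng.normal(size=(X.shape[1], self.n_hidden))
        self.biases_ = rng.normal(size=self.n_hidden)
        self.readout_ = RidgeClassifier().fit(self._project(X), y)
        return self

    def predict(self, X):
        return self.readout_.predict(self._project(X))

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "shrinkage LDA": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    "ELM": make_pipeline(StandardScaler(), ExtremeLearningMachine()),
}
```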
I fitted the hyper-parameters and trained the models using cross-validation, a method that evaluates performance by repeatedly splitting the data into training and test sets. All of the classifiers performed above chance level, but the SVM clearly outperformed the others with an accuracy of 93%. I also measured the F1-score, the harmonic mean of precision and recall, which gives a more balanced measure of performance. The F1-score was 0.93, which indicates that the model performs very well.
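A hedged sketch of this evaluation, reusing the `classifiers` dictionary from the previous snippet: hyper-parameters are tuned with an inner grid search inside each cross-validation split, and accuracy and F1 are averaged over the outer folds. The feature matrix `X`, the 0/1 agreement labels `y`, the parameter grids, and the fold counts are all placeholders for illustration, not the thesis settings.

```python
# Nested cross-validation: inner grid search for hyper-parameters,
# outer folds for an unbiased accuracy / F1 estimate.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))    # placeholder: 120 trials x 40 synchrony features
y = rng.integers(0, 2, size=120)  # placeholder: 0 = agreement, 1 = disagreement

param_grids = {
    "SVM": {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    "shrinkage LDA": {},  # shrinkage is estimated analytically, nothing to tune here
    "ELM": {"extremelearningmachine__n_hidden": [100, 200, 400]},
}

for name, clf in classifiers.items():
    tuned = GridSearchCV(clf, param_grids[name], cv=5) if param_grids[name] else clf
    scores = cross_validate(tuned, X, y, cv=5, scoring=["accuracy", "f1"])
    print(f"{name}: accuracy = {scores['test_accuracy'].mean():.2f}, "
          f"F1 = {scores['test_f1'].mean():.2f}")
```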
Results
Although I could not find a significant statistical difference between agreement and disagreement, I was able to distinguish the two using machine learning techniques. The machine learning result aligns more closely with the earlier study's findings than the statistics do. Yet rather than simply supporting the earlier finding that agreement increases inter-brain synchrony, my results raise new questions. Why is there such a disconnect between the SVM performance and the statistical findings? And does agreement only lead to increased inter-brain synchrony, or is there more going on?
Nathan