Workshops
Workshop 1: MSc Zak Hussain & Prof. Dr. Rui Mata
A tutorial on open-source large language models for behavioral science
Large language models (LLMs) have the potential to revolutionize behavioral science by accelerating and improving the research cycle, from conceptualization to data analysis. Unlike closed-source solutions, open-source frameworks for LLMs enable transparency, reproducibility, and adherence to data protection standards, giving them a crucial advantage for use in behavioral science. To help researchers harness the promise of LLMs, this tutorial offers a primer on the open-source Hugging Face ecosystem and demonstrates several applications that advance conceptual and empirical work in behavioral science, including feature extraction, fine-tuning of models for prediction, and generation of behavioral responses. Tutorial participants will work in Python notebook environments. Because the notebooks come with the code already written and explained, no Python-specific knowledge is required; however, a basic understanding of general programming concepts is important. Finally, the tutorial opens a discussion of the interpretability and safety challenges facing research with (open-source) LLMs and offers a perspective on future research at the intersection of language modeling and behavioral science.
Workshop 2: Dr. Dirk U. Wulff
Clarifying the organization of psychological constructs and measures with the R package embedR
Embeddings from large language models provide numerical representations that can accurately predict the relatedness between texts and thus offer powerful tools for qualitative data analysis. In this workshop, we will use state-of-the-art embedding models from Hugging Face and other sources, including OpenAI, to clarify the relationships between psychological constructs and measures, with a focus on personality psychology. These analyses will be facilitated by the R package embedR (https://dwulff.github.io/embedR).
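To illustrate the idea behind these analyses: an embedding model maps each text to a numerical vector, and the cosine similarity between two vectors quantifies how related the texts are. The sketch below uses small toy vectors in Python (standing in for real model output, which typically has hundreds of dimensions) and hypothetical construct labels; it is not the embedR API itself, which is an R package.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for real model output;
# the construct labels are illustrative examples, not embedR data.
emb = {
    "extraversion": np.array([0.9, 0.1, 0.2, 0.0]),
    "sociability":  np.array([0.8, 0.2, 0.3, 0.1]),
    "neuroticism":  np.array([0.1, 0.9, 0.0, 0.3]),
}

sim_close = cosine_similarity(emb["extraversion"], emb["sociability"])
sim_far = cosine_similarity(emb["extraversion"], emb["neuroticism"])
print(f"extraversion-sociability: {sim_close:.2f}")
print(f"extraversion-neuroticism: {sim_far:.2f}")
```

Conceptually related constructs (here, extraversion and sociability) receive vectors pointing in similar directions and hence high cosine similarity, while unrelated constructs score lower; clustering or mapping such pairwise similarities is what clarifies the organization of constructs and measures.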