Anthony DeCostanzo

Representational alignment between large language models and neural activity

Team: Lorenzo Fontolan (Inmed/CPT)

His background

December - present |  CENTURI Post-doc, Marseille, France
November 2022 - present | Founder, Stealth Startup, Tokyo, Japan
September 2020 - October 2022 | Product Manager, Toyota, Tokyo, Japan
January 2018 - August 2020 | Product Manager, Ascent Robotics, Tokyo, Japan
October 2012 - December 2017 | Research Scientist, RIKEN, Tokyo, Japan
February 2008 - September 2012 | Post-doc, Columbia University, New York, US


About his postdoctoral project

Large language models (LLMs) trained on naturalistic data learn high-dimensional representations that support flexible, context-dependent inference. In parallel, neural activity underlying human decision-making exhibits complex, distributed dynamics that remain difficult to characterize using traditional experimental approaches. Although LLMs are trained on linguistic data, their emergent behavior suggests that their internal representations may capture abstract computational structures that are also relevant to decision-making processes.

This project investigates whether the internal representations of large language models align with human neural activity during decision-making. We extract layer-wise representations from pretrained models and evaluate how well they predict recorded neural signals. By comparing representations across model architectures and layers, we aim to characterize representational features that may correspond to decision-related neural dynamics.
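To make the approach concrete, here is a minimal sketch of one common way to test whether model representations predict neural signals: a cross-validated ridge-regression encoding model. All data below are synthetic placeholders, and the specific choices (feature dimensions, regularization strength, correlation as the score) are illustrative assumptions, not the project's actual pipeline; a real analysis would use activations extracted from a pretrained LLM and recorded neural data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are trials, columns are features/channels.
n_trials, n_features, n_channels = 200, 64, 16
X = rng.standard_normal((n_trials, n_features))      # "LLM layer activations"
W_true = rng.standard_normal((n_features, n_channels))
# "Neural signals": a linear readout of the activations plus noise.
Y = X @ W_true + 0.5 * rng.standard_normal((n_trials, n_channels))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression weights mapping X to Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Hold out trials to score out-of-sample predictivity.
train, test = slice(0, 150), slice(150, None)
W = ridge_fit(X[train], Y[train])
pred = X[test] @ W

# Per-channel Pearson correlation between predicted and observed signals.
r = np.array([
    np.corrcoef(pred[:, c], Y[test][:, c])[0, 1]
    for c in range(n_channels)
])
print(f"mean out-of-sample correlation: {r.mean():.2f}")
```

Repeating this fit with activations taken from different layers or architectures, and comparing the held-out correlations, is one way to ask which representational features best track decision-related neural dynamics.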