Description
By drawing parallels between linguistic structure and the formulation of Lagrangians, where fields, terms, and symmetry conservation resemble words, sentences, and grammatical structures, we explore the use of transformer architectures, specifically BART, for generating Lagrangians in particle theory. Trained on datasets of approximately 300,000 Lagrangians each, our models achieve high accuracy in generating symmetry-conserving Lagrangians containing up to six fields, and they learn to recognise important features such as dummy indices, spins, and field conjugations. While still in the exploratory phase, this work demonstrates the potential of leveraging machine learning for formal theoretical tasks in particle physics and lays the groundwork for developing foundation models in this field.
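As a rough illustration of the kind of pipeline this abstract describes, the sketch below shows sequence-to-sequence generation with BART using the Hugging Face transformers library: a field-content specification is encoded as input text and a candidate Lagrangian is decoded as a token sequence. The checkpoint name, the plain-text field encoding, and the decoding settings are illustrative assumptions, not the authors' actual setup, which would involve fine-tuning on the Lagrangian datasets with a domain-specific tokenization of fields, indices, and conjugations.

```python
# Minimal sketch: seq2seq Lagrangian generation with BART (Hugging Face).
# Checkpoint, input format, and decoding parameters are assumptions for
# illustration only; they do not reflect the trained models from the talk.
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-base"  # assumed base checkpoint before fine-tuning
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Hypothetical input: gauge quantum numbers for a set of fields, written as
# plain text. A real pipeline would use a purpose-built encoding scheme.
field_content = "SCALAR(3,2,1/2) FERMION(1,1,-1) SCALAR(1,1,0)"
inputs = tokenizer(field_content, return_tensors="pt")

# Decode a candidate Lagrangian as a token sequence via beam search.
output_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```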