The workshop series "Linguistics Meets ChatGPT" starts Dec. 12, 2025; details under "Workshops".
Gauss:AI promotes interdisciplinary dialogue across linguistics, computer science, and education — continuing the Gauss-inspired spirit of uniting human and artificial intelligence.
Gauss:AI is the research platform for the linguistic and theoretical findings of MANOVA AI.
Gauss:AI brings together three thematic Transformers, each dedicated to a core question of human and artificial intelligence:
LingTransformer — Language research in the light of mathematics and Large Language Models (LLMs)
LearningTransformer — Learning in the AI era: human learning versus machine learning
CodeTransformer — Vibe coding: natural languages versus programming languages
Some details
Language and LLMs (The transformer architecture)
Large Language Models (LLMs) do not have grammar of any kind. What does this mean for our understanding of language, its evolution, and its acquisition and processing by the human brain?
In 2021, I posted the paper "The linear order of elements in prominent linguistic sequences: Deriving Tns-Asp-Mood orders and Greenberg's Universal 20 with n-grams" on the linguistics archive LingBuzz. In it I show, among other things, that the syntactic trees used for the formal representation of language in linguistics research (cf. Chomsky's approach) are not hierarchical structures and have an unnatural direction of growth: from the leaves to the root. (The paper has over 6,700 downloads on LingBuzz alone.) Since 2023, Noam Chomsky, Matilde Marcolli and Robert Berwick have been looking for new representations of syntactic structures and have thus confirmed the correctness of my research; their LingBuzz papers can be accessed here. For my other papers on LLMs in relation to linguistic theory, click here.
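The basic n-gram idea can be illustrated with a toy sketch. This is not the paper's actual method or data; the corpus and the scoring function below are invented for illustration. The sketch counts how often adjacent pairs of elements occur in a set of linear sequences and then ranks candidate orders by that bigram support:

```python
from collections import Counter
from itertools import permutations

# Toy corpus of linear sequences (hypothetical data, for illustration only).
corpus = [
    ("Dem", "Num", "Adj", "N"),   # e.g. "these three big cats"
    ("Dem", "Num", "Adj", "N"),
    ("Dem", "Adj", "N"),
    ("Num", "Adj", "N"),
]

# Count bigrams (2-grams) over the corpus.
bigrams = Counter()
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        bigrams[(a, b)] += 1

def score(order):
    """Score a candidate linear order by summing its bigram counts."""
    return sum(bigrams[(a, b)] for a, b in zip(order, order[1:]))

# Rank all permutations of the four elements by bigram support.
ranked = sorted(permutations(["Dem", "Num", "Adj", "N"]), key=score, reverse=True)
print(ranked[0])  # → ('Dem', 'Num', 'Adj', 'N'), the best-supported order
```

Note that nothing here builds a tree: the preferred order falls out of counts over flat, linear sequences alone.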
Human and Machine Learning
Do humans and machines learn in the same way? In particular, what role do linear sequences of elements (the sequential data used in LLMs) play in human learning? This strand also examines text writing with AI: LLMs tend to make specific mistakes when writing texts, especially scientific ones. Why is this the case, and is it possible to produce LLM-generated texts that provide only correct scientific information? This part of the initiative is called LearningTransformer.
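How a machine can pick up patterns purely from linear sequences can be sketched with a minimal bigram "language model". This is a drastic simplification of a real LLM, and the sentences are invented for illustration; it only shows the core idea of learning next-token statistics from flat sequential data:

```python
from collections import Counter, defaultdict

# Training data: plain linear sequences of tokens (invented examples).
sentences = [
    "the cat sleeps", "the dog sleeps", "the cat eats", "a dog eats",
]

# Learn next-token statistics: for each token, count what follows it.
follows = defaultdict(Counter)
for s in sentences:
    tokens = s.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1

def predict_next(token):
    """Return the most frequent continuation seen after `token`."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" (seen twice after "the", "dog" only once)
```

The model never sees a grammar rule; it predicts from frequency alone, which is exactly why the question of how this relates to human learning is worth asking.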
Computer Programming
Why is it so easy to code with an LLM such as, e.g., ChatGPT? Does this mean that human and computer languages work in the same way (see Language and LLMs above)? Is code written by an LLM trustworthy, or may it, like a scientific text written by AI, contain errors and hallucinations?
Why Gauss:AI? My personal story
I received an international and interdisciplinary education spanning mathematics, computer science, languages and literatures, pedagogy and pedagogical psychology, all topped with general linguistics. Research on AI gives me the chance to unite these various fields and to make the world a better place -- by challenging stereotypical ways of thinking about humans and machines, about "better" and "worse" nations, genders, and educational systems. In Gauss:AI Global everybody is welcome. If you are interested in joining the initiative, just drop me a message here.