Competition in science: *stela.manova@univie.ac.at* has been hacked!
🤖 Gauss:AI Global is still in preparation! 🤖
The website of the initiative, https://gaussaiglobal.com, is under construction!
For inquiries: manova@gaussaiglobal.com
Social media: https://x.com/GaussAIGlobal
Gauss:AI Global stands for human intelligence (HI) and artificial intelligence (AI); it honors the genius of the great mathematician Carl Friedrich Gauss as well as recent developments in AI research. That is, it considers HI and AI equally important. Gauss:AI researches AI models in relation to:
Language
Large Language Models (LLMs) do not have grammar of any kind. What does this mean for our understanding of language, its evolution, acquisition and processing by the human brain? This part of the initiative is called LingTransformer*, and its goal is, among other things, to give language research a novel perspective that is comparable and compatible with the development of LLMs in computer science.
In 2021, I posted on the linguistic archive LingBuzz the paper The linear order of elements in prominent linguistic sequences: Deriving Tns-Asp-Mood orders and Greenberg’s Universal 20 with n-grams, in which I show, among other things, that the syntactic trees used for the formal representation of language in linguistics research (cf. Chomsky's approach) are not hierarchical structures and have an unnatural direction of growth: from the leaves to the root. The paper has over 5,000 downloads on LingBuzz alone. Since 2023, Noam Chomsky, Matilde Marcolli and Robert Berwick have been looking for new representations of syntactic structures and have thus confirmed the correctness of my research; their LingBuzz papers can be accessed here. For my other papers on the topic of LLMs in relation to linguistic theory, click here.
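As a purely illustrative sketch of what counting n-grams over linguistic sequences looks like (this is not the paper's actual method or data; the toy corpus of Tns/Asp/Mood-style element sequences below is invented):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Invented toy corpus of element sequences, for illustration only
corpus = [
    ["Mood", "Tns", "Asp", "V"],
    ["Mood", "Tns", "V"],
    ["Tns", "Asp", "V"],
]

# Count bigrams across the corpus; frequent bigrams surface the
# preferred linear orders of adjacent elements
counts = Counter(bg for seq in corpus for bg in ngrams(seq, 2))
print(counts.most_common(3))
```

The point of such counts is that preferred linear orders emerge directly from the frequencies of adjacent elements in sequential data, without any tree structure being assumed.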
Human and Machine Learning
Do humans and machines learn in the same way? In particular, what role do linear sequences of elements (sequential data, as used in LLMs) play in human learning? This part of the initiative also investigates text writing with AI: LLMs tend to make specific mistakes when writing texts, especially scientific ones. Why is this the case, and is it possible to produce LLM-generated texts that provide only correct scientific information? This part of the initiative is called LearningTransformer.
Computer Programming
Why is it so easy to code with an LLM such as ChatGPT? Does this mean that human and computer languages work in the same way (see Language above)? Is code written by an LLM trustworthy, or does it, like a scientific text written by AI, contain mistakes (see Human and Machine Learning above)? Can code produced by AI be improved and, if yes, how? This part of the initiative is called CodeTransformer.
* "A transformer model is a neural network that learns context and thus meaning by tracking relationships in sequential data like the words in this sentence," https://blogs.nvidia.com/blog/what-is-a-transformer-model/
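The "tracking relationships in sequential data" in the quoted definition refers to the attention mechanism. A minimal single-head scaled dot-product attention sketch in NumPy (a toy illustration with random weights, not a production transformer):

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise relationship scores
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V                              # mix each position with its context

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 "words", 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)                            # each position now encodes its relations
```

Each output row is a weighted mixture of the whole sequence, which is how the model learns context, and thus meaning, from linear order alone.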
Why Gauss:AI? My personal story
I received an international and interdisciplinary education that included mathematics, computer science, languages and literatures, pedagogy and pedagogical psychology, all topped with general linguistics. Research on AI gives me the chance to unite the various fields of my education and to make the world a better place by challenging stereotypical ways of thinking about humans and machines, about better and worse nations, genders, and educational systems. In Gauss:AI Global everybody is welcome. If you are interested in joining the initiative, just drop me a message here.
🤖
Meanwhile, you can have a look at the following papers of mine on LLMs and related issues (just click on a title):
(2024). Linguistic theory, psycholinguistics and large language models
(2024). A reply to Moro et alia’s claim that “LLMs can produce ‘impossible’ languages”
(2023). ChatGPT, n-grams and the power of subword units: The future of research in morphology