hype surrounding AI, some misinformed ideas about the nature of LLM intelligence are floating around, and I’d like to address a few of them. I’ll present sources, most of them preprints, and welcome your thoughts on the matter.
Why do I think this topic matters? First, I feel we are creating a new intelligence that in many ways competes with us. Therefore, we should aim to judge it fairly. Second, the topic of AI is deeply introspective. It raises questions about our thinking processes, our uniqueness, and our feelings of superiority over other beings.
Millière and Buckner write [1]:
In particular, we need to understand what LLMs represent about the sentences they produce, and the world those sentences are about. Such an understanding cannot be reached by armchair speculation alone; it requires careful empirical investigation.
LLMs are more than prediction machines
Deep neural networks can form complex structures, with linear-nonlinear paths. Neurons can take on multiple functions in superposition [2]. Furthermore, LLMs build internal world models and mind maps of the context they analyze [3]. Accordingly, they are not just prediction machines for the next word. Their internal activations think ahead to the end of a statement; they have a rudimentary plan in mind [4].
However, all of these capabilities depend on the size and nature of a model, so they may vary, especially in specific contexts. These general capabilities are an active field of research and are probably more similar to the human thought process than to a spellchecker’s algorithm (if you had to pick one of the two).
LLMs show signs of creativity
When confronted with new tasks, LLMs do more than just regurgitate memorized content. Rather, they can produce their own answers [5]. Wang et al. analyzed the relation of a model’s output to the Pile dataset and found that larger models improve both at recalling facts and at creating more novel content.
Yet Salvatore Raieli recently reported on TDS that LLMs are not creative. The quoted studies mostly focused on ChatGPT-3. In contrast, Guzik, Byrge & Gilde found that GPT-4 is in the top percentile of human creativity [6]. Hubert et al. agree with this conclusion [7]. This applies to originality, fluency, and flexibility. Generating new ideas that are unlike anything seen in the model’s training data may be another matter; this is where exceptional humans may still have the edge.
Either way, there is too much debate to dismiss these indications entirely. To learn more about the general topic, you can look up computational creativity.
LLMs have a concept of emotion
LLMs can analyze emotional context and write in different styles and emotional tones. This suggests that they possess internal associations and activations representing emotion. Indeed, there is such correlational evidence: one can probe the activations of their neural networks for certain emotions and even artificially induce them with steering vectors [8]. (One way to identify these steering vectors is to determine the contrastive activations when the model is processing statements with an opposite attribute, e.g., sadness vs. happiness. A minimal sketch of this follows.)
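To make the idea concrete, here is a rough sketch of contrastive activation steering in the spirit of [8]. It assumes a Hugging Face GPT-2 model as a cheap stand-in; the layer index, prompts, and scale are illustrative choices, not values from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER, SCALE = 6, 4.0  # which block to steer, and how hard (illustrative)

def mean_residual(prompt: str) -> torch.Tensor:
    """Mean hidden state of the prompt at the output of block LAYER."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        states = model(**ids, output_hidden_states=True).hidden_states
    return states[LAYER + 1].mean(dim=1)  # average over token positions

# The steering vector is the activation difference of a contrastive pair.
steer = (mean_residual("I am feeling very happy.")
         - mean_residual("I am feeling very sad."))

def steer_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 is the hidden state we shift.
    return (output[0] + SCALE * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
out = model.generate(**tok("Today was", return_tensors="pt"), max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))  # output now skews "happy"
handle.remove()
```

The same machinery run in reverse, probing activations rather than adding to them, is what the correlational studies rely on.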
Accordingly, the concept of emotional attributes and their possible relation to internal world models seems to fall within the scope of what LLM architectures can represent. There is a relation between the emotional representation and the subsequent reasoning, i.e., the world as the LLM understands it.
Furthermore, emotional representations are localized to certain areas of the model, and many intuitive assumptions that apply to humans can also be observed in LLMs; even psychological and cognitive frameworks may apply [9].
Note that the above statements do not imply phenomenology, that is, that LLMs have a subjective experience.
Yes, LLMs don’t learn (post-training)
LLMs are neural networks with static weights. When we are chatting with an LLM chatbot, we are interacting with a model that does not change and only learns in-context of the ongoing chat. This means it can pull additional knowledge from the web or from a database, process our inputs, and so on. But its nature, built-in knowledge, skills, and biases remain unchanged.
Beyond mere long-term memory systems that provide additional in-context knowledge to static LLMs, future approaches could be self-modifying by adapting the core LLM’s weights. This can be achieved by continually pretraining with new data or by continually fine-tuning and overlaying additional weights [10], as sketched below.
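As a rough illustration of the weight-overlay idea, the following sketch freezes a base model and trains only a small low-rank adapter on top of it using the PEFT library. The model choice and hyperparameters are placeholders, not the method of [10].

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)  # base weights stay frozen
model.print_trainable_parameters()    # only the low-rank overlay is trainable
# ...train on new data with any standard trainer, then merge or swap adapters
# to update the otherwise static LLM without retraining it from scratch.
```

Swapping or merging such adapters is one plausible path toward the continual-learning systems discussed next.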
Many other neural network architectures and adaptation approaches are being explored to efficiently implement continual-learning systems [11]. These systems exist; they are just not reliable and economical yet.
Future development
Let’s not forget that the AI systems we are currently seeing are very new. “It’s not good at X” is a statement that may quickly become invalid. Furthermore, we are usually judging the low-priced consumer products, not the top models that are too expensive to run, unpopular, or still kept behind locked doors. Much of the last year and a half of LLM development has focused on creating cheaper, easier-to-scale models for consumers, not just smarter, higher-priced ones.
While computers may lack originality in some areas, they excel at quickly trying different options. And now, LLMs can judge themselves. When we lack an intuitive answer while being creative, aren’t we doing the same thing: cycling through ideas and picking the best? The inherent creativity (or whatever you want to call it) of LLMs, coupled with the ability to rapidly iterate through ideas, is already benefiting scientific research. See my previous article on AlphaEvolve for an example.
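For illustration, such a generate-and-judge loop fits in a few lines of Python. The `ask_llm` helper is a hypothetical stand-in for whatever chat API you use; the prompts and sample count are arbitrary.

```python
def ask_llm(prompt: str, temperature: float = 1.0) -> str:
    raise NotImplementedError("wire this up to your LLM API of choice")

def best_of_n(task: str, n: int = 8) -> str:
    # Diverge: sample n candidate ideas at a high temperature.
    candidates = [ask_llm(f"Propose one solution:\n{task}", temperature=1.2)
                  for _ in range(n)]

    # Converge: let the model judge its own candidates and keep the best.
    def score(candidate: str) -> float:
        reply = ask_llm(f"Rate this solution to '{task}' from 0 to 10. "
                        f"Answer with a number only.\n{candidate}",
                        temperature=0.0)
        return float(reply.strip())

    return max(candidates, key=score)
```

Systems like AlphaEvolve embed far more elaborate versions of this cycle, but the diverge-then-converge shape is the same.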
Weaknesses such as hallucinations, biases, and jailbreaks that confuse LLMs and circumvent their safeguards, as well as safety and reliability issues, are still pervasive. Nevertheless, these systems are so powerful that myriad applications and improvements are possible. LLMs also don’t have to be used in isolation. When combined with additional, traditional approaches, some shortcomings may be mitigated or become irrelevant. For instance, LLMs can generate realistic training data for traditional AI systems that are subsequently used in industrial automation. Even if development were to slow down, I believe there are decades of benefits to be explored, from drug research to education.
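To sketch that synthetic-data pattern: an LLM produces labeled example texts, and a small traditional classifier is trained on them and deployed on its own, with no LLM in the loop at runtime. `ask_llm` is the same hypothetical stand-in as above; the labels and prompt are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def ask_llm(prompt: str, temperature: float = 1.0) -> str:
    raise NotImplementedError("same hypothetical LLM helper as above")

labels, texts, targets = ["normal", "fault"], [], []
for label in labels:
    for _ in range(50):  # generate 50 synthetic log lines per class
        texts.append(ask_llm(
            f"Write one short machine-log line describing a '{label}' state.",
            temperature=1.0))
        targets.append(label)

# A cheap, fast, inspectable classifier trained purely on synthetic data.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, targets)
print(clf.predict(["Bearing temperature exceeds threshold"]))
```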
LLMs are just algorithms. Or are they?
Many researchers are now finding similarities between human thinking processes and LLM information processing (e.g., [12]). It has long been accepted that CNNs can be likened to the layers in the human visual cortex [13], but now we are talking about the neocortex [14, 15]! Don’t get me wrong; there are also clear differences. Nevertheless, the capability explosion of LLMs cannot be denied, and our claims of uniqueness don’t seem to hold up well.
The question now is where this will lead, and where the limits are: at what point must we discuss consciousness? Reputable thought leaders like Geoffrey Hinton and Douglas Hofstadter have begun to perceive the possibility of consciousness in AI in light of recent LLM breakthroughs [16, 17]. Others, like Yann LeCun, are doubtful [18].
Professor James F. O’Brien shared his thoughts on the topic of LLM sentience last year on TDS, and asked:
Will we have a way to test for sentience? If so, how will it work, and what should we do if the result comes out positive?
Moving on
We should be careful when ascribing human traits to machines; anthropomorphism happens all too easily. However, it is also easy to dismiss other beings. We have seen this happen too often with animals.
Therefore, regardless of whether current LLMs turn out to be creative, possess world models, or are sentient, we may want to refrain from belittling them. The next generation of AI could be all three [19].
What do you think?
References
- Millière, Raphaël, and Cameron Buckner, A Philosophical Introduction to Language Models — Part I: Continuity With Classic Debates (2024), arXiv:2401.03910
- Elhage, Nelson, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, et al., Toy Models of Superposition (2022), arXiv:2209.10652v1
- Kenneth Li, Do Large Language Models learn world models or just surface statistics? (2023), The Gradient
- Lindsey, et al., On the Biology of a Large Language Model (2025), Transformer Circuits
- Wang, Xinyi, Antonis Antoniades, Yanai Elazar, Alfonso Amayuelas, Alon Albalak, Kexun Zhang, and William Yang Wang, Generalization v.s. Memorization: Tracing Language Models’ Capabilities Back to Pretraining Data (2025), arXiv:2407.14985
- Guzik, Erik & Byrge, Christian & Gilde, Christian, The Originality of Machines: AI Takes the Torrance Test (2023), Journal of Creativity
- Hubert, K.F., Awa, K.N. & Zabelina, D.L., The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks (2024), Sci Rep 14, 3440
- Turner, Alexander Matt, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid, Activation Addition: Steering Language Models Without Optimization. (2023), arXiv:2308.10248v3
- Tak, Ala N., Amin Banayeeanzade, Anahita Bolourani, Mina Kian, Robin Jia, and Jonathan Gratch, Mechanistic Interpretability of Emotion Inference in Large Language Models (2025), arXiv:2502.05489
- Albert, Paul, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, and Ehsan Abbasnejad, RandLoRA: Full-Rank Parameter-Efficient Fine-Tuning of Large Models (2025), arXiv:2502.00987
- Shi, Haizhou, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, and Hao Wang, Continual Learning of Large Language Models: A Comprehensive Survey (2024), arXiv:2404.16789
- Goldstein, A., Wang, H., Niekerken, L. et al., A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations (2025), Nat Hum Behav 9, 1041–1055
- Yamins, Daniel L. K., Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo, Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex (2014), Proceedings of the National Academy of Sciences of the United States of America 111(23): 8619–24
- Granier, Arno, and Walter Senn, Multihead Self-Attention in Cortico-Thalamic Circuits (2025), arXiv:2504.06354
- Han, Danny Dongyeop, Yunju Cho, Jiook Cha, and Jay-Yoon Lee, Mind the Gap: Aligning the Brain with Language Models Requires a Nonlinear and Multimodal Approach (2025), arXiv:2502.12771
- https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/
- https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai
- Yann LeCun, A Path Towards Autonomous Machine Intelligence (2022), OpenReview
- Butlin, Patrick, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, et al., Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (2023), arXiv:2308.08708