
Are you Clever Hans or are you actually clever, Hans?

Somewhere, an influential research group from a highly respected academic sphere published a report claiming that the model they designed has brought us humans closer to AGI. They named it HANS, short for Human ANd Smart, and documented that the model is a generalist agent that can multi-task: it can play a series of Atari games, demonstrating its ability to interact with a digital environment; hold a quotidian conversation, reflecting its command of the English language; and caption images, showing its prowess in image recognition. Because the model was built using a single transformer neural network trained on diverse data, it was originally advertised as evidence that the path to AGI is clear: create a bigger model trained on a larger dataset. Media communicators, however, may have “misunderstood” the message (intentionally or not), and instead transmitted the overhyped sentiment that this model was a precursor to AGI and suggested it would bring with it an existential crisis for an unprepared human society. Furthermore, they claimed there were primitive signs of sentience in the model after one of the research engineers held, and later released, a private transcript of a conversation with it on topics about physics, emerging from it convinced that he had been talking to a 7-8-year-old child.

The above hypothetical scenario is an amalgamation of real-life cases of research groups at Google that have designed models exhibiting some degree of human intelligence in very constrained areas such as multitasking or domain-specific conversation. Please see Gato, a multitasking agent designed by Reed et al. (2022), and a subsequent report documenting what it is and what effect it instigated in the public (Heikkilä, 2022). I’ve also included the articles by Sparkes (2022) and Tiku (2022), which report on how Google placed one of its engineers on paid administrative leave for claiming that one of its language models, called LaMDA, was as sentient as a 7-8-year-old child after holding a conversation with it on physics [https://www.technologyreview.com/2021/05/20/1025135/ai-large-language-models-bigscience-project/]. I’ve modified some names, such as changing Gato’s name to HANS, to fulfil the (very) important goal of making a pun in the subtitle.

HANS is, for the most part, an arcane model, inaccessible to the common citizen not endowed with high-end computing hardware and software. This ignited a flood of emotions among the public. From opposite ends of the spectrum, there were those who aggrandized their anxiety over existential threats, and there were sceptics who pointed out that such claims were, basically, nonsensical. A model trained on a conveniently available, vast amount of high-quality compiled data reflects not a generalizing capability but rather brittleness, a lack of parsimony, and the ever-present issue of explainability in these black-box models. Another interesting issue is an inconsistent ability to do math:



GPT-3-powered models like ChatGPT are also unable to reliably provide correct links to websites.

In general, GPT-3-powered large language models may struggle to synthesize deterministic output: output of a universal nature, i.e., output that doesn’t change regardless of context. Examples include math equations (as shown above) and links to websites. Hence, there exist at least some sceptics who wonder whether the model HANS is actually clever, or whether it’s simply reminiscent of Clever Hans.
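One intuition for why this happens: an autoregressive language model does not compute an answer, it samples the next token from a probability distribution, and at nonzero temperature the “wrong” token occasionally wins. The toy sketch below (a generic softmax sampler, not the internals of any particular model; the logits are made up for illustration) contrasts this with plain arithmetic, which is deterministic every time.

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Sample one token index from a softmax over logits, the way an
    autoregressive language model decodes at nonzero temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index, weighted by the softmax probabilities.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for the token after "2 + 2 =", where
# index 0 stands for "4" (correct) and index 1 for "5" (wrong).
logits = [2.0, 1.0]

rng = random.Random(0)
draws = [sample_next_token(logits, 1.0, rng) for _ in range(1000)]
print(draws.count(1))  # nonzero: the sampler sometimes answers "5"

# Plain arithmetic, by contrast, gives the same answer every time.
print(all(2 + 2 == 4 for _ in range(1000)))
```

Lowering the temperature toward zero pushes the sampler toward always picking the highest-logit token, which is why greedy decoding looks more consistent, yet the model is still only reproducing a distribution over text, not evaluating the equation.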


References

Adate, A., Arya, D., Shaha, A., & Tripathy, B. K. (2020). Impact of deep neural learning on artificial intelligence research. In Deep Learning (pp. 69–84). https://doi.org/10.1515/9783110670905-004

AI Podcasts. (n.d.). Lex Fridman Podcast; YouTube. https://lexfridman.com/podcast/ (The plethora of podcasts between Lex Fridman and researchers such as Daniel Kahneman, Melanie Mitchell, Noam Chomsky, Roger Penrose, et al.)

Clark, A. (2002). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford University Press.

Clark, A., & Chalmers, D. (2000). The extended mind. https://www.nyu.edu/gsas/dept/philo/courses/concepts/clark.html

Dawkins, R. (1976). The selfish gene. Oxford University Press.

Deutsch, D. (2011). Chapter 5: The reality of abstractions. In The beginning of infinity: Explanations that transform the world. Penguin.

Hawkins, J. (2021). A thousand brains: A new theory of intelligence. Basic Books.

Heikkilä, M. (2022, May 23). The hype around DeepMind’s new AI model misses what’s actually cool about it. MIT Technology Review. https://www.technologyreview.com/2022/05/23/1052627/deepmind-gato-ai-model-hype/

Hofstadter, D. R. (2008). I am a strange loop. Basic Books.

Ng, A. (n.d.). Deep learning specialization. Deeplearning.ai – Coursera. https://www.coursera.org/specializations/deep-learning

Reed, S., Żołna, K., Parisotto, E., Gómez Colmenarejo, S., Novikov, A., Barth-Maron, G., Giménez, M., Sulsky, Y., Kay, J., Springenberg, T., Eccles, T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., Vinyals, O., Bordbar, M., & De Freitas, N. (2022). A generalist agent. https://arxiv.org/pdf/2205.06175.pdf (Not peer-reviewed)

Savage, N. (2020). The race to the top among the world’s leaders in artificial intelligence. Nature, 588(7837), S102–S104. https://doi.org/10.1038/d41586-020-03409-8

Song-Chun, Z. (2017). Qiantan rengongzhineng: Xianzhuang, renwu, goujia yu tongyi [AI: The era of big integration unifying disciplines within artificial intelligence]. Shijiao Qiusuo. English version at: https://dm.ai/ebook/

Sparkes, M. (2022, June 13). Has Google’s LaMDA artificial intelligence really achieved sentience? New Scientist. https://www.newscientist.com/article/2323905-has-googles-lamda-artificial-intelligence-really-achieved-sentience/

Tiku, N. (2022, June 11). The Google engineer who thinks the company’s AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Wilson, R. A., & Foglia, L. (2011). Embodied cognition. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/embodied-cognition/
