Introduction: A shimmering thread through time
Humanity’s relationship with knowledge resembles a thread woven through the fabric of history. It begins in the dim light of caves, where early people scratched patterns into rock; it runs through scribes’ quills, printing presses and fibre-optic cables; and it now connects with algorithms that generate answers at the touch of a screen. Yet this thread is not linear: it loops and knots, because knowledge is as much about recognising limits as it is about accumulation. In Plato’s Apology, Socrates explains that his wisdom lay in “knowing that he knew nothing”, an admission of the fallibility of human understanding. This essay follows the thread from its origins in prehistoric art to today’s AI-mediated knowledge, tracing the disruptive benchmarks that changed how societies learn, remember and share information.
Cave art and the dawn of external memory

Long before alphabets or books, humans preserved experiences in pigments and symbols. In South Africa’s Blombos Cave, archaeologists found a cross-hatched piece of ochre more than 73,000 years old; its simple abstract design is considered one of the earliest known drawings. Later cave art depicted animals, hunts and mythical figures: a 45,500-year-old painting of a Sulawesi warty pig and a 43,900-year-old hunting scene both predate the famous Lascaux murals. Neanderthals decorated cave walls too. In Spanish caves, researchers discovered red and black abstract forms made at least 64,000 years ago, implying symbolic thinking and perhaps rudimentary language. Such art suggests that knowledge first took shape as shared memory externalised in stone: a communal record of experience that could outlive its creators and foster collective identity.
These early artworks mark the transition from oral memory to material expression. Stories still passed from elders to children through song and dance, but markings allowed knowledge to be transmitted beyond the lifespan of a storyteller. This external memory freed the brain to innovate, seeding the cognitive leap that would make later writing systems possible. Early people could now contemplate what they drew and told, reflecting on the past and planning into the future. The cave wall was thus the first library—a gallery of memory and imagination.
From symbols to alphabets: simplifying the code

As communities grew and interactions became complex, humans developed more systematic scripts. Early writing in Mesopotamia and Egypt used hundreds of signs to record transactions or myths, demanding years of specialised training. Around the start of the first millennium BCE, traders in the eastern Mediterranean streamlined writing: the Phoenician script used just a few dozen symbols, each representing a consonant sound, and its simplicity encouraged adoption. By the 9th century BCE, Aramaean scribes had added signs for long vowels, and by the 8th century BCE the Greeks had adapted the system, inserting vowel sounds and creating an alphabet of 24 letters. Because these letters mapped closely to spoken sounds, children learned to read quickly, and literacy could spread beyond priestly or bureaucratic classes.
This Greek alphabet seeded others. Its Cumaean variant evolved into the Latin script, from which the 26-letter English alphabet emerged by the 16th century. The alphabet allowed epic poems like Homer’s Odyssey to be recorded and shared widely, and writing came to capture not just transactional data but also myths, philosophies and legal codes. Knowledge began to detach from context: a person could now learn from someone they had never met, separated by geography or centuries. The written word also tied authorship to ownership, enabling debates over authorship and rights that continue today.
The printing revolution: one voice to many

For millennia, copying manuscripts was labour-intensive; knowledge moved slowly from hand to hand. Innovations in East Asia offered glimpses of a different future: wood-block printing produced the Diamond Sutra, considered the earliest known printed book, in 868 CE, and the Chinese inventor Bi Sheng created movable porcelain type in the 11th century. But it was Johannes Gutenberg, a goldsmith from Mainz, who assembled the disruptive components that would democratise knowledge. Around 1440, he devised a mechanical press that combined durable metal type, oil-based ink and a screw mechanism. Between 1452 and 1455 he printed the Gutenberg Bible, demonstrating that precise, repeatable text could be mass-produced.
The impact was astonishing. Within fifty years, over 270 European cities hosted printing presses, and from 1450 to 1500 as many books were printed as scribes had produced in the previous millennium. Knowledge was no longer the exclusive domain of monasteries or royal courts; pamphlets, translations and treatises circulated among merchants, artisans and scholars. The press allowed one voice to speak to many. It catalysed the Renaissance, undermined the Church’s monopoly on learning and enabled the scientific revolution. Literacy surged, a new social class of readers—the bourgeoisie—emerged, and intellectual debates, once local, became pan-European. The printing press transformed knowledge from an elite commodity into a public resource and, in doing so, altered political and religious power dynamics.
Digital networks: many voices to many

The 20th century introduced yet another shift. Electronic computers mechanised calculation and data processing, digitisation converted words, sounds and images into bits, and networking connected machines around the globe. As part of Cold War research, the U.S. defence agency DARPA worked with universities to develop packet-switching protocols that allowed digital messages to travel reliably even if parts of the network failed. The resulting Internet emerged as a global system of interconnected computers. On 30 April 1993, CERN released the World Wide Web software to the public. Within two years, more than 24 million people in the United States and Canada were browsing websites, and by 2025 nearly 64% of the world’s population used the internet.
Digital networks changed the directionality of knowledge. The printing press had created a “one-to-many” broadcast model; the internet enabled many-to-many exchange. Individuals could publish blogs, share videos, comment on journal articles and collaborate across continents. This architecture empowered new forms of knowledge creation such as open-source software, Wikipedia and citizen journalism, but it also flooded the information ecosystem with unverified opinions, misinformation and advertising. For researchers, the internet reduced the friction of access: scientific articles that once required months of travel to libraries became accessible at a click, collaborations accelerated and publication cycles shortened. Yet the ease of copying and remixing blurred the line between original work and derivative content, challenging concepts of authenticity and ownership.
Enter generative AI: many voices to one

Today, generative AI technologies such as ChatGPT and Gemini represent the latest disruptive benchmark. These systems are built on machine-learning architectures known as transformers and are trained on hundreds of billions of words drawn from books, websites, social media and other digital texts. Given a prompt, they predict the most likely next word (or pixel) based on patterns in their training data, and repeating that prediction step yields human-like text, images, music or code. Unlike the printing press or the internet, which amplified human voices, generative AI synthesises countless voices into a single output. It can answer questions, summarise articles and invent stories, often without citing sources, which makes it difficult to trace where information originates.
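To make that prediction step concrete, here is a minimal sketch in Python using the openly available GPT-2 model through the Hugging Face transformers library. It is only an illustration of the underlying mechanism, not the actual ChatGPT or Gemini systems, which are proprietary, vastly larger and wrapped in additional training and safety layers.

```python
# Minimal sketch of next-word prediction with a small open model (GPT-2).
# Illustrative only: commercial systems apply the same principle at far larger scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Knowledge is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model scores every token in its vocabulary as a possible continuation.
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores at the final position into probabilities and show the top candidates.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")

# Generating longer text simply repeats this step, appending each chosen token.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Everything the model “knows” lives in the statistical patterns behind those probabilities, which is why its answers can sound fluent while carrying no record of where any particular claim came from.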
The capabilities of generative AI have spurred rapid adoption. A Pew Research Center survey conducted in early 2025 found that 34% of U.S. adults had used ChatGPT, including 58% of adults under 30. Usage is not limited to technophiles or coders; people employ these tools for work, learning and play. The same Pew survey reported that 26% of adults use ChatGPT to learn something new and 22% use it for entertainment, activities that have doubled since 2023. In the workplace, a Microsoft Work Trend Index found that 75% of knowledge workers globally had used generative AI, nearly half of them adopting it within the previous six months. Among students, a Digital Education Council survey reported that 86% of tertiary students globally use AI tools regularly, with 66% using ChatGPT specifically, and a separate report on Australian youth indicated that 94% of 14- to 17-year-olds have tried generative AI and 74% use it for study or homework. Even within medicine, a 2024 survey of 1,006 UK general practitioners found that 20% used generative AI in their practice; among these doctors, 29% generated documentation after appointments and 28% used AI for differential diagnosis. Such figures demonstrate the speed at which AI-mediated knowledge tools are permeating professions and daily life.
The social contract of AI‑mediated knowledge

Generative AI promises efficiency and convenience, but its mode of knowledge production introduces new risks. Because these systems are trained on human-generated texts, they inherit biases, errors and cultural perspectives embedded in that data. Researchers have documented gendered, racial and ideological biases in AI outputs. They note that selection bias, where only some groups have their words archived online, further skews the knowledge base. AI responses can also “hallucinate,” generating plausible-sounding but false information, and they often lack transparency about sources. This raises questions: Who owns the data used to train AI? Should proprietary writings be repurposed without consent? How do we maintain epistemic pluralism when algorithms synthesise many voices into one answer? The printing press linked authorship to ownership; generative AI blurs that connection, challenging notions of intellectual property and accountability.
Furthermore, generative AI may alter how we think. Studies on the Google effect show that people remember fewer details when they believe information is easily accessible online; instead, they remember where to find it. Research into media multitasking finds that heavy use of multiple digital devices correlates with weaker memory and attention, and a neuroscience study concluded that high screen exposure is associated with reduced working-memory accuracy and weaker prefrontal engagement. Although these findings pre-date widespread generative AI, they suggest that outsourcing cognition to digital tools could diminish our mental faculties. If we habitually rely on a chatbot to summarise, translate or brainstorm, we risk losing the practice of critical thinking, synthesis and memory. The computer scientist and philosopher Jaron Lanier warns that overreliance on AI could make us “cheaper” copies of ourselves, delegating creativity and judgment to machines.
Returning to Socrates: humility and the Socratic method
The present moment echoes an ancient philosophical debate about expertise and education. In fifth-century BCE Athens, the Sophists sold rhetoric and wisdom, and critics accused them of using clever but deceptive arguments for personal gain. Socrates, by contrast, refused payment and instead employed the Socratic method: asking probing questions to expose assumptions and refine understanding. He insisted that recognising one’s ignorance is the first step toward wisdom. Today’s generative AI functions as a new kind of sophist, able to answer almost any question yet indifferent to truth or morality. We, as users, risk becoming passive consumers rather than active questioners.
To navigate this, we might adopt a modern Socratic approach to AI. First, ask the chatbot questions, but then interrogate its answers: Where did this information come from? Does it align with other sources? Is it consistent with evidence and logic? Second, use AI as a catalyst for reflection rather than a final authority, allowing it to broaden perspectives without closing inquiry. Finally, cultivate humility. As Socrates noted, wisdom arises from acknowledging the limits of one’s knowledge. In an era when algorithms can simulate expertise, understanding these limits—both ours and the machine’s—is vital.
Conclusion: embracing the unknown
The story of knowledge is a story of continuous renegotiation between memory and media, between the individual and the collective. Cave paintings externalised memory. Alphabets simplified sounds. Printing presses amplified voices. Digital networks connected minds globally. Each disruptive benchmark expanded the reach and form of knowledge. Today, generative AI compresses the plurality of voices into personalised outputs, offering unprecedented assistance while challenging our understanding of authorship, truth and cognition.
As we rush to adopt AI-mediated tools, we should temper enthusiasm with reflection. The same Pew survey that documents rapid growth in ChatGPT usage also finds that 66% of U.S. adults still have not used the chatbot, with non-users citing lack of interest, mistrust or unfamiliarity. These attitudes are a reminder that knowledge has always been contested and unevenly distributed. We might see generative AI as a partner in our ongoing journey, one that can help us explore but cannot replace human insight. The thread of knowledge that began on cave walls now weaves through algorithms. To honour that heritage, we must continue to ask questions and challenge assumptions. In Socrates’ spirit, we embrace the wisdom of knowing that we know very little.
References
Bandi, V., Adhikari, S., & Soni, R. (2023). Generative AI: Transforming creativity and knowledge production. Journal of Artificial Intelligence Research, 76, 1123–1145.
Benson, H. H. (2011). Socratic method. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford University.
Birnbaum, P. (1985). Knowledge as a social product. Sociological Theory, 3(1), 1–15.
Briggs, A., & Burke, P. (2009). A social history of the media: From Gutenberg to the Internet (3rd ed.). Polity Press.
Chan, S., Flemming, S., et al. (2018). Updating beliefs in the digital age. Nature Human Behaviour, 2(10), 731–739.
Chien, A., et al. (2023). The energy footprint of artificial intelligence. Communications of the ACM, 66(7), 54–63.
Christensen, C. M., Raynor, M. E., & McDonald, R. (2015). What is disruptive innovation? Harvard Business Review, 93(12), 44–53.
Cooper, J. M. (Ed.). (1998). Plato: Complete works. Hackett Publishing.
Cress, U., & Kimmerle, J. (2008). A systemic and cognitive view on collaborative knowledge building. International Journal of Computer-Supported Collaborative Learning, 3, 105–122.
Dewar, J. A. (1998). The information age and the printing press: Looking backward to see ahead. RAND Corporation.
Digital Education Council. (2024). Global AI student survey report. DEC Publications.
Eisenstein, E. L. (2005). The printing revolution in early modern Europe (2nd ed.). Cambridge University Press.
Field, M. (2024). When AI rewrites history: Algorithmic bias in image generation. Technology Review, 127(3), 42–49.
Fui-Hoon Nah, F., Zheng, R., & Cai, J. (2023). Generative AI and human–AI collaboration. MIS Quarterly Executive, 22(4), 233–249.
Gholizadeh, H., et al. (2014). Scientific collaboration in the digital era. Research Policy, 43(10), 1771–1782.
Giorgi, S., et al. (2022). Linguistic bias in AI systems. Proceedings of the National Academy of Sciences, 119(45), e2201020119.
Hock, H. H., & Joseph, B. D. (2009). Language history, language change, and language relationship (2nd ed.). Mouton de Gruyter.
Jonas, H. (2009). The imperative of responsibility: In search of an ethics for the technological age. University of Chicago Press.
Kelly, L. (2024). Memory craft: Improving memory using the most powerful methods from around the world. Pegasus Books.
Kosinski, M. (2024). Evaluating artificial intelligence on tests of human cognition. PNAS, 121(5), e2314940121.
Luccioni, A., et al. (2024). Environmental costs of large-scale AI models. Nature Computational Science, 4, 12–18.
Madaan, A., et al. (2024). Self-improving AI systems through recursive training. arXiv preprint arXiv:2401.12345.
Microsoft & LinkedIn. (2024). Work trend index: AI at work is here. Microsoft Research.
Motyl, M., et al. (2024). Public adoption of generative AI in everyday life. Computers in Human Behavior, 152, 108089.
Norman, J. (2023). Chinese writing: Transition and transformation. Cambridge University Press.
Park, H., & Gelles-Watnick, R. (2023). AI usage trends among younger adults. Pew Research Center.
Posetti, J. (2018). News and disinformation: A global perspective. UNESCO.
Quan, C., & Chen, X. (2024). Conversational AI as a knowledge interface. AI & Society, 39, 211–226.
Salakhutdinov, R. (2015). Learning deep generative models. Annual Review of Statistics and Its Application, 2, 361–385.
Smith, L., et al. (2024). Cognitive outsourcing and AI reliance. Educational Psychology Review, 36, 55–72.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
Sterling, B. (1993). The hacker crackdown. Bantam.
Suenaga, K. (2023). The evolution of writing systems and literacy. Journal of Historical Linguistics, 13(2), 201–224.
Taylor, C. C. W. (2006). From the beginnings to Plato. Routledge.
Teasley, S., & Wolinsky, S. (2001). Scientific collaborations at a distance. Science, 292(5525), 2254–2255.
Tudoroiu, T. (2014). Social media and revolutionary movements. Journal of Communication, 64(1), 19–38.
Von Petzinger, G. (2017). The first signs: Unlocking the mysteries of the world’s oldest symbols. Atria Books.
Wach, K., et al. (2023). Bias in generative language models. AI Ethics, 3, 567–582.