The Triangle of Knowledge


OpenAI debuted its largest language model, GPT-3, with 175 billion parameters back in May and started opening the API to selected users last week. You can check out the MIT Technology Review article and a blog post by Arram Sabeti for many interesting generated texts (lyrics, stories, conversations, user manuals, and even guitar tabs and JSX code).

As the MIT article points out:

Exactly what’s going on inside GPT-3 isn’t clear. But what it seems to be good at is synthesizing text it has found elsewhere on the internet, making it a kind of vast, eclectic scrapbook created from millions and millions of snippets of text that it then glues together in weird and wonderful ways on demand.

The knowledge embedded in the whopping 175 billion parameters is often referred to as the machine’s dark knowledge, a concept discussed by Geoffrey Hinton in 2014 (slides, talk) and in a 2019 Chinese book, 《暗知识》 (Dark Knowledge), that I recently read. The texts generated by GPT-3 essentially translate the machine’s dark knowledge into knowledge we humans can understand and enjoy.

A few days ago, Elon Musk tweeted about someday streaming music directly to the brain using the brain-machine interface chip developed by one of his startups, Neuralink, which reminds me of the scene in “The Matrix” where Trinity learns to fly a helicopter in a few seconds via a “head plug”.

All this recent news makes me think more about how humans and machines transfer knowledge to each other. About 10 years ago, we leveraged Nonaka’s well-known spiral model of knowledge to study business process design. Nonaka’s model provides a nice framework for studying the creation and conversion of human explicit knowledge (knowledge that is easy to articulate, write down, and share, such as how to operate a microwave or how to write a basic CNN classifier) and tacit/implicit knowledge (knowledge gained from experience that is hard to express and document, such as how to paint, write songs, or bake the best bread in town):

[Figure: Nonaka’s SECI model. Source: Wikipedia]

I added dark knowledge to Nonaka’s model and created the following triangle of knowledge to help me think about knowledge transfer between humans and machines:

For the spiral SECI (Socialization, Externalization, Combination, and Internalization) model, please refer to Wikipedia and/or this Harvard Business Review article for details. I am going to discuss the other parts of the triangle-of-knowledge diagram shown above:

Human → Machine

  • Explicit Knowledge → Dark Knowledge: this includes knowledge engineering, where human rules are coded into expert systems, as well as the more recent practice of knowledge-graph construction. Supervised learning with human-annotated data also belongs in this category.
  • Tacit Knowledge → Dark Knowledge: unsupervised learning uses massive amounts of unlabeled data to teach machines human tacit knowledge such as language (language models such as BERT and GPT-3), music (e.g., Magenta, MuseNet), art (e.g., ArtGAN, StyleGAN, ArtBreeder), and design (e.g., FashionGAN, Algorithm-Driven Design). Another example is reinforcement learning, where machines learn to achieve a goal in an uncertain and complex environment by making a sequence of decisions that maximize human-designed rewards (as in AlphaGo and OpenAI Five).
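To make the Human → Machine path concrete, here is a toy sketch of my own (not from the article): a perceptron trained on human-labeled examples. The labels encode explicit human knowledge (the truth table of logical AND), and training compresses it into opaque numeric weights — a miniature version of the model’s “dark knowledge”.

```python
# Toy illustration of Explicit Knowledge -> Dark Knowledge:
# human-labeled examples are distilled into opaque weights.

def train_perceptron(data, epochs=20, lr=0.1):
    """Learn weights and a bias from labeled binary examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Human-annotated "explicit knowledge": the truth table of logical AND.
labeled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(labeled)
print([predict(w, b, x1, x2) for (x1, x2), _ in labeled])  # [0, 0, 0, 1]
```

After training, the rule “both inputs must be 1” lives only implicitly in the numbers `w` and `b` — easy for the machine to use, but no longer stated in a form a human would write down.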

Machine → Human

  • Dark Knowledge → Explicit Knowledge: the stream of research on Explainable Artificial Intelligence (XAI) tries to make the machine’s dark knowledge interpretable by humans. FYI: AAAI 2020 had a nice tutorial on XAI.
  • Dark Knowledge → Tacit Knowledge: this is where machine-generated results such as music, art, and poetry can inspire humans to improve. For example, check out the samples from the OpenAI Jukebox project, where many lyrics (pretty good ones) are co-written by the machine and OpenAI researchers. Another route for this type of knowledge transfer could be the brain-machine interface mentioned above.
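One simple flavor of XAI can be sketched in a few lines. For a linear model, the prediction decomposes exactly into per-feature contributions, which can be printed as a human-readable explanation — dark knowledge (weights) turned back into explicit knowledge. The spam-scoring weights and feature names below are hypothetical, purely for illustration:

```python
# Toy sketch of Dark Knowledge -> Explicit Knowledge: decompose a
# linear model's score into per-feature contributions a human can read.

def explain_linear(weights, bias, features):
    """Return (score, contributions) where contributions[name] = w * x."""
    contributions = {name: weights[name] * x for name, x in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical weights a spam classifier might have learned.
weights = {"num_links": 0.8, "has_greeting": -0.5, "all_caps_ratio": 1.2}
score, parts = explain_linear(
    weights, bias=-0.3,
    features={"num_links": 3, "has_greeting": 1, "all_caps_ratio": 0.5},
)
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {c:+.2f}")
print(f"total score: {score:+.2f}")  # flagged as spam if score > 0
```

Real XAI methods such as LIME or SHAP generalize this additive-attribution idea to models whose inner workings are far less transparent than a linear scorer.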

Machine → Machine

Transfer learning has been applied to teach machines to tackle different tasks by fine-tuning pre-trained models, such as fine-tuning BERT or GPT-3 for sentiment analysis, reading comprehension, question answering, etc. Reinforcement learning also enables machines to improve their performance by playing against themselves, as in AlphaZero and OpenAI Five.
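The fine-tuning pattern behind this Machine → Machine transfer can be sketched schematically (pure Python, no ML framework — a made-up two-weight model, not how BERT or GPT-3 is actually trained): reuse “pretrained” base parameters, keep them frozen, and train only a small task-specific head on the new task.

```python
# Schematic sketch of transfer learning: frozen pretrained base,
# trainable task-specific head.

def forward(base_w, head_w, x):
    hidden = [bw * x for bw in base_w]          # "pretrained" feature extractor
    return sum(hw * h for hw, h in zip(head_w, hidden))

def fine_tune(base_w, head_w, data, lr=0.01, epochs=200):
    """Gradient descent on the head only; base_w stays frozen."""
    head_w = list(head_w)
    for _ in range(epochs):
        for x, y in data:
            hidden = [bw * x for bw in base_w]
            err = forward(base_w, head_w, x) - y
            head_w = [hw - lr * err * h for hw, h in zip(head_w, hidden)]
    return head_w

base = [1.0, -0.5]                  # weights "pretrained" on some task A
head = [0.0, 0.0]                   # fresh head for downstream task B
task_b = [(1.0, 2.0), (2.0, 4.0)]   # task B data: y = 2x
head = fine_tune(base, head, task_b)
print(round(forward(base, head, 3.0), 2))  # -> 6.0
```

Only the head’s two numbers change; the base weights carry over untouched, which is why fine-tuning typically needs far less data and compute than training from scratch.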

Note that this triangle-of-knowledge framework is just some very preliminary thinking, and all discussion/comments are highly appreciated. Thanks for reading.
