
This Scientific American opinion piece on the remarkable similarities in how machine learning models and hunter-gatherer children acquire knowledge is fascinating.
If we define artificial intelligence as the ability of computer systems to perform tasks that would typically require human intelligence, then it's clear we're just at the beginning of the 'S-curve'. Right now it's all about the ramping up of ANI (Artificial Narrow Intelligence, where machines mimic human capability in a narrow set of contexts), but we're some way off AGI (Artificial General Intelligence, where machines are indistinguishable from humans) and even further off ASI (Artificial Super Intelligence, or the technological singularity).
Machine Learning, of course just a subset of AI, sits at the beginning of this curve, enabling machines to learn quickly and independently from data so that they can benefit from previous experience and improve outputs without explicit instructions. Good application of ML is about collecting, organising and analysing good-quality data in order to learn fast, identify patterns and anomalies, improve outputs based on rules, or classify and segment, predict or optimise. There are, of course, three main types of learning in ML:
- Supervised learning: the algorithm is trained on labelled examples (such as images tagged with the objects they contain), learning a mapping from inputs to known outputs
- Unsupervised learning: the algorithm finds previously unknown patterns in datasets without pre-existing labels or a pre-determined goal, so only input data is needed. Outputs might be segmentation or visualisation
- Reinforcement learning: a sequential learning model in which each decision depends on the current state and the outcome of the last action. Learning is based on practice and feedback (this is how AlphaGo got good enough to beat the world champion at Go)
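The practice-and-feedback loop behind reinforcement learning can be sketched with a toy multi-armed bandit in plain Python. This is a deliberately tiny stand-in for the far larger self-play setup systems like AlphaGo use; the function name, epsilon value and reward numbers below are illustrative, not from the article:

```python
import random

def epsilon_greedy_bandit(true_rewards, episodes=5000, epsilon=0.1, seed=0):
    """Learn which of several actions pays best purely from trial and feedback."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # current value estimate per action
    counts = [0] * len(true_rewards)       # how often each action was tried
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))
        else:
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # Feedback from the environment: a noisy reward for the chosen action.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        # Adjust the estimate towards the observed reward (incremental mean).
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# The learner is never told which action is best; it discovers it through feedback.
estimates = epsilon_greedy_bandit([0.2, 0.8, 0.5])
best_action = max(range(3), key=lambda a: estimates[a])
```

No labelled answers are supplied, only a reward signal after each attempt, which is exactly the try-observe-adjust pattern described above.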
The author of the Scientific American piece, Gul Deniz Salali, draws on her observations of how hunter-gatherer children in the Congo learn skills in the absence of formal education. Direct teaching accounts for only a small proportion (less than ten percent) of the children's learning episodes. Instead, they acquire their skills mainly through free exploration and observation of their environment, feedback, and copying others. For example, parents and the wider community often create a learning opportunity for a child by providing a tool and watching the child's actions without interfering; the child learns to adjust their behaviour through feedback.
This, says Salali, is remarkably similar to how machines acquire learning through inputs and feedback. Just as machines can learn unsupervised from datasets, children learn through exploration and observation. AlphaGo Zero became its own teacher by playing against itself. And just as machines can learn through reinforcement learning and self-play, we should encourage curiosity and exploration, and provide feedback when needed.
The famous 70:20:10 learning model holds that we acquire 70 percent of our knowledge from experience (job-related or otherwise), 20 percent from interactions with others, and 10 percent from formal education. It's perhaps no surprise, then, that it is not only humans but machines too that stand to gain the majority of their knowledge through independent learning, experience, exploration and feedback.