I wrote The A.I. Primer for the Algorithm and Data Literacy Project by Digital Moment (where you can download it in English or French). It’s intended as a very low floor on the learning trajectory of AI literacy, and is suitable for teens, teachers, and any adults who might feel intimidated by the idea of learning about a subject that even specialists feel anxious about.
Here’s the intro:
A is for Attitude

How does the idea of artificial intelligence make you feel? Are you curious or hesitant? Excited or worried? Do you look forward to the convenience of astonishing chatbots and self-driving cars? Do you picture yourself delegating mundane tasks to artificial intelligence (AI) and having more time and attention for family, friends, and stimulating activities? Or are you worried that AI might replace the skills and talents you bring to the world?
Rest assured that whatever you’re feeling, you are not alone.
A recent survey of 120 countries indicates that different regions feel differently about AI. In Europe, North America, and Latin America, negative attitudes dominate. In the Middle East as well as Central and South Asia, positive attitudes slightly outweigh negative ones. An even greater gap emerges in Africa and Southeast Asia, where views towards AI are more positive. In China, only 11% of respondents had negative attitudes towards AI.
The regional split in views about AI likely reflects cultural and economic differences across the globe, and this sparks a further question: do we form our opinions about AI based on knowledge and experience, or based on personal and cultural bias?
Whatever our perspective, AI is not going away. Its impact on our society is accelerating. The changes it brings will affect the way we interact with our economy, our social relationships, and our responsibilities as citizens. This happens because all technological changes, from the invention of writing and the printing press to telephones, the internet, and now AI, eventually impact the way we understand, use, and communicate information. Changes to information processing also change society. We owe it to ourselves to have an informed opinion about AI as well as a basic understanding of how it functions. This will help us work, teach, and live alongside intelligent machines.
Let’s balance healthy skepticism and caution with determined open-mindedness, and stay vigilant about re-evaluating our own preconceptions.
B is for bees and our collective brain power

In 2017, the United Nations declared May 20 World Bee Day. Thirty-five percent of bees die every year, largely due to accelerating climate change trends such as fluctuating seasonal patterns and more frequent natural disasters. Traditional beekeepers are simply not able to keep up with the massive amount of work it will take to restore that balance. This is very bad news, and not just for bees. With 90% of wild plants and 75% of crops dependent on pollination, one out of every three bites of the food we eat depends on bees’ survival. Without bees, a third of the world’s food supply could disappear!
But here is some good news. Thanks to AI innovators, we now have solar-powered autonomous beehives with computer vision and robotics that help beekeepers monitor and thermoregulate their beehives, organically reduce pests, increase bee survival rates, and decrease the amount of human labour needed to care for bees.
If by 2030 bees are thriving and food production is keeping pace with our needs, that may have a lot to do with the power of AI to accelerate positive change. But it will also demonstrate our collective human brain power as we develop ingenious solutions that harness AI to serve ecosystems and human well-being.
C is for caring about kangaroos and all species (including humans) impacted by AI

When Volvo began testing their first self-driving cars in Australia, they ran into a problem. The vehicles could identify and avoid Northern European animals such as deer and elk. But to an AI that does not have intuitive depth perception, kangaroos were a different matter. When kangaroos are in mid-hop they look like they are further away than they actually are. When they land, they seem closer. The AI had no way of explaining or correcting for this confusion. It was up to humans to figure it out and provide enough data on kangaroos to set the AI straight.
But not every challenge is as obvious to the human eye as the kangaroo identification problem. AI takes in a lot of data and recognizes complex patterns within that data, but it doesn’t really understand the information and can’t supply context.
For instance, the AI assistant Alexa can learn to recognize a command like “order me a dollhouse.” But it might not be able to tell whether a parent or a child is placing the order. This actually happened in one family, when it was discovered that their little girl had ordered a very expensive dollhouse for herself. When a commentator later told the story on local radio, saying, “I love the story of the little girl who said, ‘Alexa, order a dollhouse,’” another Alexa in another house heard the command and ordered another dollhouse.
In the past, computers followed rules created by human beings. But now, because of advances in “machine learning,” AI is driven by data. To a large extent it makes up its own rules based on what it deciphers from this data.
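To make that contrast concrete, here is a minimal sketch in Python. The scenario (flagging spam by counting exclamation marks) and the toy data are invented for illustration; they are not from the primer:

```python
# A rule written by a human: the programmer picks the cutoff.
def flag_spam_by_rule(exclamation_count: int) -> bool:
    return exclamation_count > 3  # a person chose the number 3

# A rule derived from data: the program picks its own cutoff
# from labelled examples. (Toy data, invented for illustration.)
examples = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]

def learn_cutoff(examples):
    """Try every cutoff and keep the one that labels the most examples correctly."""
    best_cutoff, best_score = 0, -1
    for cutoff in range(10):
        score = sum((count > cutoff) == label for count, label in examples)
        if score > best_score:
            best_cutoff, best_score = cutoff, score
    return best_cutoff

learned = learn_cutoff(examples)
print("Human rule flags 4 exclamation marks:", flag_spam_by_rule(4))  # True
print("Learned cutoff:", learned, "-> flags 4?", 4 > learned)         # 2 -> True
```

Real machine learning systems fit vastly more complex rules across millions of parameters, but the principle is the same: the numbers come from the data, not from a programmer.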
AI’s value lies in its ability to process enormous amounts of data. However, this very ability presents new problems. As the AI field progresses and learning algorithms become more complex, AI develops strategies that are impossible for a human, and even for the AI system itself, to trace.
This is called a “black box” problem.
You may have heard of the black boxes in airplanes and cars that are recovered after a crash. Those boxes record all the activities and commands leading up to an accident so that we can learn from what went wrong.
If only AI had a similar box! Unfortunately, the sheer volume of data that machine learning algorithms generate means that if there were an actual black box in an AI system that we could open, it would probably just emit a lot of indecipherable white noise.
The black box problem in AI is that there is, effectively, no black box.
Machine learning algorithms process so much information that AI is somewhat mysterious to us, and even to itself. From where we sit, all we can really be sure of is what data goes into algorithms and what data comes out. This means that AI still requires human intervention to check and balance its work.
AI can do a lot to make the world safer and more efficient. It can also do a lot to make the world even more dangerous, confusing, and inequitable than it already is.
Which leads us to one more letter….
D is for data, decisions, and diversity

In the end, AI will only ever be as good, or as trustworthy, as the quality, the diversity, and the explainability of the data it is trained on. Moreover, it will only be as valuable as our ability to make good decisions about how it should, or maybe should not, be used.
We all share the desire to be happy and to put our collective intelligence towards the task of building a world that nurtures safety, well-being, and mutual respect. How AI fits into that goal is up to us!
To keep reading and learn more (e.g. the secret to machine learning, how AI is trained on data, and what we might need to unlearn to better work with AI), visit The Algorithm and Data Literacy Project and peruse or download a copy. Even if you’re already AI literate, consider sharing it with a less confident friend or peer.