Since you bought your first smartphone, an entire generation of kids has been born that will never know a world not mediated by AI algorithms.
Why is this important?
For one thing, AI and kids are serious business. According to a Pew Research Center report released in 2019, YouTube videos aimed at and featuring children were three times more popular than any other content on YouTube.
In 2019, the year Ryan Kaji turned eight, he made $26 million from his YouTube channel, Ryan ToysReview, making him that year’s highest-paid YouTuber (and beating the $11 million he earned the year he turned seven).
A significant part of his success could be attributed to his cuteness and to his (and his parents’) relentless daily video schedule. That reach would not have been possible, however, without the boost he received from YouTube’s owner, Google, whose AI algorithms were tracking children’s viewing habits and assessing when, where and how to best place its most popular content.
In September 2019 those practices were curtailed somewhat when Google agreed to pay a $170 million settlement for violating the Children’s Online Privacy Protection Act (COPPA). That same month, two complaints were filed against Ryan’s parents: the first for failing to disclose that he was receiving money from toy companies, and the second because, in certain ways, Google’s AI was not working. Ryan’s target preschool audience of three- to six-year-olds was also being served ads for R-rated movies and the bodacious spokeswomen of Carl’s Jr.
Kids don’t realize that the adults around them weren’t born into exactly the same world they were. They expect their parents, educators and leaders to understand the power and the mechanisms behind these algorithms. But as the current popularity of Netflix’s The Social Dilemma attests, in too many cases they don’t.
This is why, in the fall of 2019, we at Kids Code Jeunesse, a Canadian charity with a mission to seed digital skills communities across the world, started working with the Canadian Commission for UNESCO on The Algorithm Literacy Project, a campaign to raise awareness around the need to better understand how algorithms work.
We created a PSA to spark a conversation around how AI algorithms use data to predict what kids like. We housed it on a webpage with discussion guides and materials to explore both traditional and AI programming, and the ethical issues that have emerged and will continue to emerge.
We have no illusions that the video will reach the same number of kids that Ryan does. Our objective remains to start a conversation that addresses this blind spot: to inspire teachers and parents to learn more about algorithms, to increase their willingness to learn side by side with their kids, and to build their confidence that they can do so.
Throughout 2020, even with all the challenges of teaching during the COVID-19 pandemic, we still managed to work with teachers to pilot and introduce materials that will seed the literacies kids need to conceptualize and understand AI: data, machine learning and modelling, as well as algorithm literacy.
We’ve worked with kindergarteners to conceptualize the first layer of machine learning by having them assess their confidence over whether the picture they are seeing is the face of a puppy or the topography of a cookie. As they question why this is easier for them than for a computer, we seed the healthy skepticism through which we hope they will continue to see AI.
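The puppy-or-cookie exercise mirrors what a real classifier does: it never simply "knows" the answer, it outputs a confidence score. As a minimal sketch (the feature names, weights and numbers here are invented purely for illustration, not taken from any real model), a hand-built logistic scorer in Python:

```python
import math

def confidence(features, weights, bias):
    """Return the model's confidence (0..1) that the input is a puppy."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 / (1 + math.exp(-score))  # logistic function squashes the score into a probability

# Hypothetical hand-picked weights for three made-up features:
# "has eye-like spots", "looks furry", "is round"
weights = [2.0, 3.0, -1.5]
bias = -1.0

puppy_photo = [1.0, 1.0, 0.2]    # eyes, fur, not very round
cookie_photo = [0.6, 0.3, 1.0]   # chocolate chips can look like eyes!

print(f"puppy photo:  {confidence(puppy_photo, weights, bias):.2f}")
print(f"cookie photo: {confidence(cookie_photo, weights, bias):.2f}")
```

With these invented numbers the puppy photo scores near certainty while the cookie photo lands near 0.40, a genuinely uncertain answer: the same ambiguity the kindergarteners feel is, for a model, just a probability sitting close to the middle.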
We’ve worked with elementary school children classifying data through Google’s Teachable Machine (which, to Google’s credit, runs on TensorFlow.js, a great platform for experimentation that allows the user to keep data within the browser and entirely out of Google’s control or analysis). In doing this we’ve opened up discussions around why a computer might have an easier time identifying the faces of white men than of Black women. Is this a failure of the computer, or a failure to train the computer with a diversity of data?
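That second question can be demonstrated without any real faces at all. The sketch below is a toy, not a real face-recognition system: each "person" is just a random vector, each "photo" that vector plus noise, and recognition is nearest-centroid matching. The only difference between the two groups is how many training photos each person gets, which is enough to produce an accuracy gap.

```python
import random
import statistics

random.seed(0)
DIM = 8      # size of each made-up "face embedding" (invented for illustration)
NOISE = 1.2  # per-photo noise level (also invented)

def photo(true_face):
    """A 'photo' is the person's true embedding plus random noise."""
    return [x + random.gauss(0, NOISE) for x in true_face]

def centroid(photos):
    """Average the training photos into one template per person."""
    return [statistics.mean(vals) for vals in zip(*photos)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# 15 people per group; group A is heavily over-represented in the training set.
people = {}
for group, n_photos in (("A", 40), ("B", 2)):
    for i in range(15):
        true_face = [random.gauss(0, 1) for _ in range(DIM)]
        template = centroid([photo(true_face) for _ in range(n_photos)])
        people[(group, i)] = (true_face, template)

def recognize(test_photo):
    """Nearest-centroid recognition: return whichever person's template is closest."""
    return min(people, key=lambda who: sq_dist(test_photo, people[who][1]))

for group in ("A", "B"):
    trials = [(who, photo(people[who][0]))
              for who in people if who[0] == group
              for _ in range(20)]
    correct = sum(recognize(p) == who for who, p in trials)
    print(f"group {group} accuracy: {correct / len(trials):.0%}")
```

Because group B’s templates are averaged from only two noisy photos, they sit further from each person’s true embedding, and recognition accuracy drops for that group even though the algorithm treats everyone identically. The failure is in the training data, not the arithmetic.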
All of these activities internalize what we call The Big Ideas:
- That AI is everywhere, but often invisible
- That it is not magic
- That it can be wrong
- That training data matters
- That the use of AI is creating and will continue to create ethical issues around bias, transparency, accountability and explainability.
It’s one thing to use AI to help you decide which YouTube video or Netflix movie to watch next. It will be another when teachers or future employers use it to assess a future generation’s work performance. There will always be situations where we should have the right to ask a human to explain how a conclusion about us was reached.
Our algorithm literacy campaign is a conversation starter, but it’s also a call to action to take some small step today towards either learning something about AI, or transferring what you know about AI to someone else.
We are confident we can harness the tremendous power of AI to imagine, create and share what we know and have. But first we have to aim to go beyond digital literacy to acquire digital wisdom.
Further viewing and reading
Pew Research Report on Kids and YouTube
Children’s Online Privacy Protection Act (COPPA)