machine learning Archives - Raspberry Pi Foundation

Experience AI: The excitement of AI in your classroom

We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators to teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML).

Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers and their students. Developed in partnership by the Raspberry Pi Foundation and DeepMind, the programme aims to support teachers in the exciting and fast-moving area of AI, and get young people passionate about the subject.

The importance of AI and machine learning education

Artificial intelligence and machine learning applications are already changing many aspects of our lives. From search engines, social media content recommenders, self-driving cars, and facial recognition software, to AI chatbots and image generation, these technologies are increasingly common in our everyday world.

Young people who understand how AI works will be better equipped to engage with the changes AI applications bring to the world, to make informed decisions about using and creating AI applications, and to choose what role AI should play in their futures. They will also gain critical thinking skills and awareness of how they might use AI to come up with new, creative solutions to problems they care about.

The AI applications people are building today are predicted to affect many career paths. In 2020, the World Economic Forum estimated that AI would replace some 85 million jobs by 2025 and create 97 million new ones. Many of these future jobs will require some knowledge of AI and ML, so it’s important that young people develop a strong understanding from an early age.

A group of young people investigate computer hardware together.
 Develop a strong understanding of the concepts of AI and machine learning with your learners.

Experience AI Lessons

Something we get asked a lot is: “How do I teach AI and machine learning with my class?” To answer this question, we have developed a set of free lessons for secondary school students (age 11 to 14) that give you everything you need, including lesson plans, slide decks, worksheets, and videos.

The lessons focus on relatable applications of AI and are carefully designed so that teachers in a wide range of subjects can use them. You can find out more about how we used research to shape the lessons and how we aim to avoid misconceptions about AI.

The lessons are also for you if you’re an educator or volunteer outside of a school setting, such as in a coding club.

The six lessons

  1. What is AI?: Learners explore the current context of artificial intelligence (AI) and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society.
  2. How computers learn: Learners focus on the role of data-driven models in AI systems. They are introduced to machine learning and find out about three common approaches to creating ML models. Finally, the learners explore classification, a specific application of ML.
  3. Bias in, bias out: Learners create their own machine learning model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
  4. Decision trees: Learners take their first in-depth look at a specific type of machine learning model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means.
  5. Solving problems with ML models: Learners are introduced to the AI project lifecycle and use it to create a machine learning model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
  6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their machine learning model. To finish off the unit, they explore a range of AI-related careers, hear from people working in AI research at DeepMind, and explore how they might apply AI and ML to their interests.

As part of this exciting first phase, we’re inviting teachers to participate in research to help us further develop the resources. All you need to do is sign up through our website, download the lessons, use them in your classroom, and give us your valuable feedback.

An educator points to an image on a student's computer screen.
 Ben Garside, one of our lead educators working on Experience AI, takes a group of students through one of the new lessons.

Support for teachers

We’ve designed the Experience AI lessons with teacher support in mind, and so that you can deliver them to your learners aged 11 to 14 no matter what your subject area is. Each of the lesson plans includes a section that explains new concepts, and the slide decks feature embedded videos in which DeepMind’s AI researchers describe and bring these concepts to life for your learners.

We will also be offering you a range of new teacher training opportunities later this year, including a free online CPD course — Introduction to AI and Machine Learning — and a series of AI-themed webinars.

Tell us your feedback

We will be inviting schools across the UK to test and improve the Experience AI lessons through feedback. We are really looking forward to working with you to shape the future of AI and machine learning education.

Visit the Experience AI website today to get started.

How anthropomorphism hinders AI education

In the 1950s, Alan Turing explored the central question of artificial intelligence (AI). He thought that the original question, “Can machines think?”, would not provide useful answers because the terms “machine” and “think” are hard to define. Instead, he proposed changing the question to something more provable: “Can a computer imitate intelligent behaviour well enough to convince someone they are talking to a human?” This is commonly referred to as the Turing test.

It’s been hard to miss the newest generation of AI chatbots that companies have released over the last year. News articles and stories about them seem to be everywhere at the moment. So you may have heard of machine learning (ML) chatbots such as ChatGPT and LaMDA. These chatbots are advanced enough to have caused renewed discussions about the Turing test and whether the chatbots are sentient.

Chatbots are not sentient

Without any knowledge of how people create such chatbots, it’s easy to imagine how someone might develop an incorrect mental model around these chatbots being living entities. With some awareness of sci-fi stories, you might even start to imagine what they could look like or associate a gender with them.

A person in front of a cloudy sky, seen through a refractive glass grid. Parts of the image are overlaid with a diagram of a neural network.
Image: Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC BY 4.0

The reality is that these new chatbots are applications based on a large language model (LLM) — a type of machine learning model that has been trained with huge quantities of text, written by people and taken from places such as books and the internet, e.g. social media posts. An LLM predicts the probable order of combinations of words, a bit like the autocomplete function on a smartphone. Based on these probabilities, it can produce text outputs. LLM chatbots run on servers with huge amounts of computing power that people have built in data centres around the world.
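To make the autocomplete analogy concrete, here is a minimal, hypothetical sketch of the idea in Python: a bigram model that counts which word tends to follow which in a tiny invented corpus, and then predicts the most frequent continuation. Real LLMs use neural networks trained on vastly more text, but the core idea of choosing probable next words is the same.

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny invented corpus, which words follow it.
corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # "cat": the word that most often follows "the"
```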

Our AI education resources for young people

AI applications are often described as “black boxes” or “closed boxes”: they may be relatively easy to use, but it’s not as easy to understand how they work. We believe that it’s fundamentally important to help everyone, especially young people, to understand the potential of AI technologies and to open these closed boxes to understand how they actually work.

As always, we want to demystify digital technology for young people, to empower them to be thoughtful creators of technology and to make informed choices about how they engage with technology — rather than just being passive consumers.

That’s the goal we have in mind as we’re working on lesson resources to help teachers and other educators introduce KS3 students (ages 11 to 14) to AI and ML. We will release these Experience AI lessons very soon.

Why we avoid describing AI as human-like

Our researchers at the Raspberry Pi Computing Education Research Centre have started investigating the topic of AI and ML, including thinking deeply about how AI and ML applications are described to educators and learners.

To support learners to form accurate mental models of AI and ML, we believe it is important to avoid using words that can lead to learners developing misconceptions around machines being human-like in their abilities. That’s why ‘anthropomorphism’ is a term that comes up regularly in our conversations about the Experience AI lessons we are developing.

To anthropomorphise: “to show or treat an animal, god, or object as if it is human in appearance, character, or behaviour”

https://dictionary.cambridge.org/dictionary/english/anthropomorphize

Anthropomorphising AI in teaching materials might lead to learners believing that there is sentience or intention within AI applications. That misconception would distract learners from the fact that it is people who design AI applications and decide how they are used. It also risks reducing learners’ desire to take an active role in understanding AI applications, and in the design of future applications.

Examples of how anthropomorphism is misleading

Avoiding anthropomorphism helps young people to open the closed box of AI applications. Take the example of a smart speaker. It’s easy to describe a smart speaker’s functionality in anthropomorphic terms such as “it listens” or “it understands”. However, we think it’s more accurate and empowering to explain smart speakers as systems developed by people to process sound and carry out specific tasks. Rather than telling young people that a smart speaker “listens” and “understands”, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.
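As a minimal illustration of this input, process, output framing, here is a hypothetical Python sketch of a ‘smart speaker’ reduced to its simplest possible form. Everything in it is invented for teaching purposes: a real device uses trained speech-to-text and intent-recognition models, not keyword rules.

```python
# Responses for a few known commands (invented for illustration).
RESPONSES = {
    "weather": "Fetching today's weather forecast...",
    "music": "Playing your playlist...",
    "timer": "Starting a timer...",
}

def process(transcribed_text):
    """Pattern-match the input text against the known commands."""
    for keyword, response in RESPONSES.items():
        if keyword in transcribed_text.lower():
            return response
    return "No matching command was detected."

# Input (text standing in for processed audio) -> process -> output
print(process("What's the weather like today?"))
```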

Eight photos of the same tree taken at different times of the year, displayed in a grid. The final photo is highly pixelated. Groups of white blocks run across the grid from left to right, gradually becoming aligned.
Image: David Man & Tristan Ferne / Better Images of AI / Trees / CC BY 4.0

Another example is the use of AI in computer vision. ML models can, for example, be trained to identify when there is a dog or a cat in an image. An accurate ML model, on the surface, displays human-like behaviour. However, the model operates very differently to how a human might identify animals in images. Where humans would point to features such as whiskers and ear shapes, ML models process pixels in images to make predictions based on probabilities.
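The following toy sketch, written purely for illustration, shows what ‘processing pixels’ can mean at its simplest: tiny invented ‘images’ are flattened into lists of numbers, and a new image is assigned to whichever class’s average pixel values it sits closest to. Real models learn far richer patterns than this, but their inputs are still just numbers, not whiskers or ear shapes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Twenty flattened 4x4 "images" per class: darker pixels stand in for
# cats, lighter pixels for dogs (an invented, simplified dataset).
cats = rng.uniform(0.0, 0.4, size=(20, 16))
dogs = rng.uniform(0.6, 1.0, size=(20, 16))

cat_centroid = cats.mean(axis=0)  # average pixel values per class
dog_centroid = dogs.mean(axis=0)

def classify(pixels):
    """Assign the image to the class with the nearest average pixels."""
    d_cat = np.linalg.norm(pixels - cat_centroid)
    d_dog = np.linalg.norm(pixels - dog_centroid)
    return "cat" if d_cat < d_dog else "dog"

print(classify(rng.uniform(0.0, 0.4, size=16)))  # expected: "cat"
```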

Better ways to describe AI

The Experience AI lesson resources we are developing introduce students to AI applications and teach them about the ML models that are used to power them. We have put a lot of work into thinking about the language we use in the lessons and the impact it might have on the emerging mental models of the young people (and their teachers) who will be engaging with our resources.

It’s not easy to avoid anthropomorphism while talking about AI, especially considering the industry standard language in the area: artificial intelligence, machine learning, computer vision, to name but a few examples. At the Foundation, we are still training ourselves not to anthropomorphise AI, and we take a little bit of pleasure in picking each other up on the odd slip-up.

Here are some suggestions to help you describe AI better:

  • Avoid phrases such as “AI learns” or “AI/ML does”; instead use phrases such as “AI applications are designed to…” or “AI developers build applications that…”.
  • Avoid words that describe the behaviour of people (e.g. see, look, recognise, create, make); instead use system-type words (e.g. detect, input, pattern match, generate, produce).
  • Avoid using AI/ML as a countable noun (e.g. “new artificial intelligences emerged in 2022”); instead refer to AI/ML as a scientific discipline, similarly to how you use the term “biology”.

The purpose of our AI education resources

If we are correct in our approach, then whether or not the young people who engage in Experience AI grow up to become AI developers, we will have helped them to become discerning users of AI technologies and to be more likely to see such products for what they are: data-driven applications and not sentient machines.

If you’d like to get involved with Experience AI and use our lessons with your class, you can start by visiting us at experience-ai.org.

Classroom activities to discuss machine learning accuracy and ethics | Hello World #18

In Hello World issue 18, available as a free PDF download, teacher Michael Jones shares how to use Teachable Machine with learners aged 13–14 in your classroom to investigate issues of accuracy and ethics in machine learning models.

Machine learning: Accuracy and ethics

The landscape for working with machine learning/AI/deep learning has grown considerably over the last couple of years. Students are now able to develop their understanding from the hard-coded end via resources such as Machine Learning for Kids, get their hands dirty using relatively inexpensive hardware such as the Nvidia Jetson Nano, and build a classification machine using the Google-driven Teachable Machine resources. I have used all three of the above with my students, and this article focuses on Teachable Machine.

For the worried, there is absolutely no coding involved in this resource; the ‘machine’ behind the portal does the hard work for you. For my Year 9 classes (students aged 13 to 14) undertaking a short, three-week module, this was ideal. The coding is important, but was not my focus. For this module, I’m more concerned with the fuzzy end of AI, including how credible AI decisions are, and the elephant-in-the-room aspect of bias and potential for harm.

Getting started with Teachable Machine activities

There are three possible routes to use in Teachable Machine, and my focus is the ‘Image Project’, and within this, the ‘Standard image model’. From there, you are presented with a basic training scenario template — see Hello World issue 16 (pages 84–86) for a step-by-step set-up and training guide. For this part of the project, my students trained the machine to recognise different breeds of dog, with border collie, labrador, saluki, and so on as classes. Any AI system devoted to recognition requires a substantial set of training data. Fortunately, there are a number of freely available data sets online (for example, download a folder of dog photos separated by breed by accessing helloworld.cc/dogdata). Be warned, these can be large, consisting of thousands of images. If you have more time, you may want to set students off to collect data to upload using a camera (just be aware that this can present safeguarding considerations). This is a key learning point with your students and an opportunity to discuss the time it takes to gather such data, and variations in the data (for example, images of dogs from the front, side, or top).

Drawing of a machine learning Mars rover trying to decide whether it is seeing an alien or a rock.
Image recognition is a common application of machine learning technology.

Once you have downloaded your folders, upload the images to your Teachable Machine project. It is unlikely that you will be able to upload a whole subfolder at once — my students have found that the optimum number of images seems to be twelve. Remember to build this time for downloading and uploading into your lesson plan. This is a good opportunity to discuss the need for balance in the training data. Ask questions such as, “How likely would the model be to identify a saluki if the training set contained 10 salukis and 30 of the other dogs?” This is a left-field way of dropping the idea of bias into the exploration of AI — more on that later!
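A few lines of Python can make the balance question above concrete. This is only an invented illustration: real models are subtler than raw frequencies, but an imbalanced training set does push predictions towards the over-represented classes.

```python
# An invented training set matching the question in the text.
training_counts = {"saluki": 10, "other breeds": 30}
total = sum(training_counts.values())

for label, count in training_counts.items():
    print(f"{label}: {count / total:.0%} of training examples")
# saluki: 25% of training examples
# other breeds: 75% of training examples
```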

Accuracy issues in machine learning models

If you have got this far, the heavy lifting is complete and Google’s training engine will now do the work for you. Once you have set your model on its training, leave the system to complete its work — it takes seconds, even on large sets of data. Once it’s done, you should be ready to test your model. If all has gone well and a webcam is attached to your computer, the Output window will give a prediction of what is being viewed. Again, the article in Hello World issue 16 takes you through the exact steps of this process. Make sure you have several images ready to test. See Figure 1a for the response to an image of a saluki presented to the model. As you might expect, it is showing as a 100 percent prediction.

Screenshots from Teachable Machine showing photos of dogs classified as specific breeds with different degrees of confidence by a machine learning model.
Figure 1: Outputs of a Teachable Machine model classifying photos of dog breeds. 1a (left): Photo of a saluki. 1b (right): Photo of a Samoyed and two people.

It will spark an interesting discussion if you now try the same operation with an image containing items other than the one you’re testing. For example, see Figure 1b, in which two people are in the image along with the Samoyed dog. The model is undecided, as the people are affecting the outcome. This raises the question of accuracy. Which features are being used to identify the dogs as border collie and saluki? Why are the humans in the image throwing the model off the scent?

Getting closer to home, training a model on human faces provides an opportunity to explore AI accuracy through the question of what might differentiate a female from a male face. You can find a model at helloworld.cc/maleorfemale that contains 5418 images almost evenly spread across male and female faces (see Figure 2). Note that this model will take a little longer to train.

Screenshot from Teachable Machine showing two datasets of photos of faces labeled either male or female.
Figure 2: Two photo sets of faces labeled either male or female, uploaded to Teachable Machine.

Once trained, try the model out. Props really help — a top hat, wig, and beard give the model a testing time (pun intended). In this test (see Figure 3), I presented myself to the model face-on and, unsurprisingly, I came out as 100 percent male. However, adding a judge’s wig forces the model into a rethink, and a beard produces a variety of results, but leaves the model unsure. It might be reasonable to assume that our model uses hair length as a strong feature. Adding a top hat to the ensemble brings the model back to a 100 percent prediction that the image is of a male.

Screenshots from Teachable Machine showing a model classifying photos of the same face as either male or female with different degrees of confidence, depending on whether the face is wearing a wig, a fake beard, or a top hat.
Figure 3: Outputs of a Teachable Machine model classifying photos of the author’s face as male or female with different degrees of confidence.

Machine learning uses a best-fit principle. The outputs, in this case whether I am male or female, have a greater certainty of male (65 percent) versus a lesser certainty of female (35 percent) if I wear a beard (Figure 3, second image from the right). Remove the beard and the likelihood of me being female increases by 2 percent (Figure 3, second image from the left).
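Classifiers commonly turn raw internal scores into percentage confidences that sum to 100% using a softmax function. The sketch below is a hedged illustration of that step only (the raw scores are invented so that they reproduce the 65/35 split described above); it is not a description of Teachable Machine’s internals.

```python
import math

scores = {"male": 1.2, "female": 0.58}  # invented raw model scores

# Softmax: exponentiate each score, then normalise so they sum to 1.
exponentials = {label: math.exp(score) for label, score in scores.items()}
total = sum(exponentials.values())

for label, value in exponentials.items():
    print(f"{label}: {value / total:.0%}")  # male: 65%, female: 35%
```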

Bias in machine learning models

Within a fairly small set of parameters, most human faces are similar. However, when you start digging, the research points to there being bias in AI (whether this is conscious or unconscious is a debate for another day!). You can exemplify this by firstly creating classes with labels such as ‘young smart’, ‘old smart’, ‘young not smart’, and ‘old not smart’. Select images that you think would fit the classes, and train them in Teachable Machine. You can then test the model by asking your students to find images they think fit each category. Run them against the model and ask students to debate whether the AI is acting fairly, and if not, why they think that is. Who is training these models? What images are they receiving? Similarly, you could create classes of images of known past criminals and heroes. Train the model before putting yourself in front of it. How far up the percentage scale are you towards being a criminal? It soon becomes frighteningly worrying that unless you are white and seemingly middle class, AI may prove problematic to you, from decisions on financial products such as mortgages through to mistaken arrest and identification.

Encourage your students to discuss how they could influence this issue of race, class, and gender bias — for example, what rules would they use for identifying suitable images for a data set? There are some interesting articles on this issue that you can share with your students at helloworld.cc/aibias1 and helloworld.cc/aibias2.

Where next with your learners?

In the classroom, you could then follow the route of building models that identify letters for words, for example. One of my students built a model that could identify a range of spoons and forks. You may notice that Teachable Machine can also be run on Arduino boards, which adds an extra dimension. Why not get your students to create their own AI assistant that responds to commands? The possibilities are there to be explored. If you’re using webcams to collect photos yourself, why not create a system that will identify students? If you are lucky enough to have a set of identical twins in your class, that adds just a little more flavour! Teachable Machine offers a hands-on way to demonstrate the issues of AI accuracy and bias, and gives students a healthy opportunity for debate.

Michael Jones is director of Computer Science at Northfleet Technology College in the UK. He is a Specialist Leader of Education and a CS Champion for the National Centre for Computing Education.

More resources for AI and data science education

At the Foundation, AI education is one of our focus areas. Here is how we are supporting you and your learners in this area already:

An image demonstrating that AI systems for object recognition do not distinguish between a real banana on a desk and the photo of a banana on a laptop screen.
  • Computing education researchers are working to answer the many open questions about what good AI and data science education looks like for young people. To learn more, you can watch the recordings from our research seminar series focused on this. We ourselves are working on research projects in this area and will share the results freely with the computing education community.
  • You can find a list of free educational resources about these topics that we’ve collated based on our research seminars, seminar participants’ recommendations, and our own work.

299 experiments from young people run on the ISS in Astro Pi Mission Space Lab 2021/22

We and our partners at ESA Education are excited to announce that 299 teams of young people who entered Mission Space Lab this year have achieved flight status as part of the 2021/22 European Astro Pi Challenge. This means that these young people’s programs are the first ever to run on the two upgraded Astro Pi units on board the International Space Station (ISS).

Two Astro Pi units on board the International Space Station.

Mission Space Lab gives teams of young people up to age 19 the opportunity to design and conduct their own scientific experiments that run on board the ISS. It’s an eight-month-long activity that follows the European school year. The exciting hardware upgrades inspired a record number of young people to send us their Mission Space Lab experiment ideas.

Teams who want to take on Mission Space Lab choose between two themes for their experiments, investigating either ‘Life in space’ or ‘Life on Earth’. From this year onwards, thanks to the new Astro Pi hardware, teams can also choose to use new sensors and a Coral machine learning accelerator during their experiment time.

Investigating life in space

Using the Astro Pi units’ sensors, teams can investigate life inside the Columbus module of the ISS. This year, 71 ‘Life in space’ experiments are running on the Astro Pi units. The 71 teams are investigating a wide range of topics: for example, how the Earth’s magnetic field is experienced on the ISS in space, how the environmental conditions that the astronauts experience compare with those on Earth beneath the ISS on its orbit, or whether the conditions in the ISS might be suitable for other lifeforms, such as plants or bacteria.

The mark 2 Astro Pi units spin in microgravity on the International Space Station.

For ‘Life in space’ experiments, teams can collect data about factors such as the colour and intensity of cabin light (using the new colour and luminosity sensor included in the upgraded hardware), astronaut movement in the cabin (using the new PIR sensor), and temperature and humidity (using the Sense HAT add-on board’s standard sensors).

Investigating life on Earth

Using the camera on an Astro Pi unit when it’s positioned to view Earth from a window of the ISS, teams can investigate features on the Earth’s surface. This year, for the first time, teams had the option to use visible-light instead of infrared (IR) photography, thanks to the new Astro Pi cameras.

An Astro Pi unit at a window on board the International Space Station.

228 teams’ ‘Life on Earth’ experiments are running this year. Some teams are using the Astro Pis’ sensors to determine the precise location of the ISS when images are captured, to identify whether the ISS is flying over land or sea, or which country it is passing over. Other teams are using IR photography to examine plant health and the effects of deforestation in different regions. Some teams are using visible-light photography to analyse clouds, calculate the velocity of the ISS, and classify biomes (e.g. desert, forest, grassland, wetland) it is passing over. The new hardware available from this year onward has helped to encourage 144 of the teams to use machine learning techniques in their experiments.
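Plant-health analyses of this kind often rely on the normalised difference vegetation index, NDVI = (NIR − Red) / (NIR + Red), computed per pixel from the red and near-infrared channels. The sketch below is a generic illustration with invented values, not any team’s actual experiment code.

```python
import numpy as np

# Invented stand-ins for the red and near-infrared channels of a photo.
red = np.array([[0.20, 0.30], [0.40, 0.10]])
nir = np.array([[0.60, 0.70], [0.50, 0.80]])

# NDVI per pixel; a tiny epsilon avoids division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)
print(ndvi)  # values close to 1 suggest dense, healthy vegetation
```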

Testing, testing, testing

We received 88% more idea submissions for Mission Space Lab this year compared to last year: during Phase 1, 799 teams sent us their experiment ideas. We invited 502 of the teams to proceed to Phase 2 based on the quality of their ideas. 386 teams wrote their code and submitted computer programs for their experiments during Phase 2 this year. Achieving flight status, and thus progressing to Phase 3 of Mission Space Lab, is really a huge accomplishment for the 299 successful teams.

Three replica Astro Pi units on a wooden shelf.
Three replica Astro Pi units run tests on the Mission Space Lab programs submitted by young people.

For us, Phase 2 involved putting every team’s program through a number of tests to make sure that it follows experiment rules, doesn’t compromise the safety and security of the ISS, and will run without errors on the Astro Pi units. Testing means that April is a very busy time for us in the Astro Pi team every year. We run these tests on a number of exact replicas of the new Astro Pis, including a final test to run every experiment that has passed every test for the full 3 hours allotted to each team. The 299 experiments with flight status will run on board the ISS for over 5 weeks in total during Phase 3, and once they have started running, we can’t rely on astronaut intervention to resolve issues. So we have to make sure that all of the programs will run without any problems.

Part of the South Island (Te Waipounamu) of New Zealand (Aotearoa), photographed from the International Space Station using an Astro Pi unit.

Thanks to the team at ESA, we are delighted that 67 more Mission Space Lab experiments are running on the ISS this year compared to last year. In fact, teams’ experiments using the Astro Pi units are underway right now!

The 299 teams awarded flight status this year represent 23 countries and 1205 young people, with 32% female participants and an average age of 15. Spain has the most teams with experiments progressing to Phase 3 (38), closely followed by the UK (34), Italy (27), Romania (23), and Greece (22).

Four photographs of regions of the Earth taken on the International Space Station using an Astro Pi unit.

Unfortunately, it isn’t possible to run every Mission Space Lab experiment submitted: time on the ISS, and especially on the nadir window where the Astro Pis are positioned, is limited. Eliminating programs was very difficult because of the high quality of this year’s submissions, and many unsuccessful teams’ programs were ruled out over very small issues. 87 teams submitted programs this year that did not pass testing and so could not be awarded flight status.

The teams whose experiments are not progressing to Phase 3 should still be very proud to have designed experiments that passed Phase 1, and to have made a Phase 2 submission. We recognise how much work all Mission Space Lab teams have done, and we hope to see you again in next year’s Astro Pi Challenge.

What’s next?

Once the programs for all the experiments have run, we will send the teams the data collected by their experiments for Phase 4. In this final phase of Mission Space Lab, teams analyse their data and write a short report to describe their findings. Based on these reports, the ESA Education and Raspberry Pi Foundation teams will determine the winner of this year’s Mission Space Lab. The winning and highly commended teams will receive special prizes.

Congratulations to all Mission Space Lab teams who’ve achieved flight status! We are really looking forward to reading your reports.

Logo of the European Astro Pi Challenge.

Bias in the machine: How can we address gender bias in AI?

At the Raspberry Pi Foundation, we’ve been thinking about questions relating to artificial intelligence (AI) education and data science education for several months now, inviting experts to share their perspectives in a series of very well-attended seminars. At the same time, we’ve been running a programme of research trials to find out what interventions in school might successfully improve gender balance in computing. We’re learning a lot, and one primary lesson is that these topics are not discrete: there are relationships between them.

We can’t talk about AI education — or computer science education more generally — without considering the context in which we deliver it, and the societal issues surrounding computing, AI, and data. For this International Women’s Day, I’m writing about the intersection of AI and gender, particularly with respect to gender bias in machine learning.

The quest for gender equality

Gender inequality is everywhere, and researchers, activists, initiatives, and governments themselves have struggled since the 1960s to tackle it. As women and girls around the world continue to suffer from discrimination, the United Nations has pledged, in its Sustainable Development Goals, to achieve gender equality and to empower all women and girls.

While progress has been made, new developments in technology may be threatening to undo this. As Susan Leavy, a machine learning researcher from the Insight Centre for Data Analytics, puts it:

Artificial intelligence is increasingly influencing the opinions and behaviour of people in everyday life. However, the over-representation of men in the design of these technologies could quietly undo decades of advances in gender equality.

Susan Leavy, 2018 [1]

Gender-biased data

In her 2019 award-winning book Invisible Women: Exploring Data Bias in a World Designed for Men [2], Caroline Criado Perez discusses the effects of gender-biased data. She describes, for example, how the designs of cities, workplaces, smartphones, and even crash test dummies are all based on data gathered from men. She also discusses that medical research has historically been conducted by men, on male bodies.

Looking at this problem from a different angle, researcher Mayra Buvinic and her colleagues highlight that in most countries of the world, there are no sources of data that capture the differences between male and female participation in civil society organisations, or in local advisory or decision making bodies [3]. A lack of data about girls and women will surely impact decision making negatively. 

Bias in machine learning

Machine learning (ML) is a type of artificial intelligence technology that relies on vast datasets for training. ML is currently being used in various systems for automated decision making. Bias in datasets for training ML models can arise in several ways. For example, datasets can be biased because they are incomplete or skewed (as is the case in datasets which lack data about women). Another example is that datasets can be biased because of the use of incorrect labels by people who annotate the data. Annotating data is necessary for supervised learning, where machine learning models are trained to categorise data into categories decided upon by people (e.g. pineapples and mangoes).

A banana, a glass flask, and a potted plant on a white surface. Each object is surrounded by a white rectangular frame with a label identifying the object.
Max Gruber / Better Images of AI / Banana / Plant / Flask / CC-BY 4.0

In order for a machine learning model to categorise new data appropriately, it needs to be trained with data that is gathered from everyone, and is, in the case of supervised learning, annotated without bias. Failing to do this creates a biased ML model. Bias has been demonstrated in different types of AI systems that have been released as products. For example:

Facial recognition: AI researcher Joy Buolamwini discovered that existing AI facial recognition systems do not identify dark-skinned and female faces accurately. Her discovery, and her work to push for the first-ever piece of legislation in the USA to govern against bias in the algorithms that impact our lives, are narrated in the 2020 documentary Coded Bias.

Natural language processing: Imagine that an AI system tasked with filling in the missing word in “Man is to king as woman is to X” comes up with “queen”. But what if the system completes “Man is to software developer as woman is to X” with “secretary” or some other word that reflects stereotypical views of gender and careers? AI models called word embeddings learn by identifying patterns in huge collections of texts. In addition to the structural patterns of the text language, word embeddings learn human biases expressed in the texts. You can read more about this issue in this Brookings Institute report.
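The ‘man is to king’ analogy works through simple vector arithmetic over the embeddings. The sketch below uses hand-made 3-dimensional vectors chosen so that the arithmetic works out; real embeddings have hundreds of dimensions learned from text, which is exactly how they absorb the stereotypes present in that text.

```python
import numpy as np

# Hand-made toy embeddings (real ones are learned, not designed).
embeddings = {
    "man":       np.array([1.0, 0.0, 0.0]),
    "woman":     np.array([1.0, 1.0, 0.0]),
    "king":      np.array([1.0, 0.0, 1.0]),
    "queen":     np.array([1.0, 1.0, 1.0]),
    "secretary": np.array([0.9, 1.0, 0.2]),
    "developer": np.array([0.8, 0.1, 0.9]),
}

# "man is to king as woman is to X": king - man + woman
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]

def most_similar(vector, exclude):
    """Find the vocabulary word whose vector is closest in angle."""
    def cosine(word):
        v = embeddings[word]
        return vector @ v / (np.linalg.norm(vector) * np.linalg.norm(v))
    return max((w for w in embeddings if w not in exclude), key=cosine)

print(most_similar(target, exclude={"king", "man", "woman"}))  # queen
```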

Not noticing

There is much debate about the level of bias in systems using artificial intelligence, and some AI researchers worry that this will cause distrust in machine learning systems. Thus, some scientists are keen to emphasise the breadth of their training data across the genders. However, other researchers point out that despite all good intentions, gender disparities are so entrenched in society that we literally are not aware of all of them. White and male dominance in our society may be so unconsciously prevalent that we don’t notice all its effects.

Three women discuss something while looking at a laptop screen.

As sociologist Pierre Bourdieu famously asserted in 1977: “What is essential goes without saying because it comes without saying: the tradition is silent, not least about itself as a tradition.” [4] This view holds that people’s experiences are deeply, or completely, shaped by social conventions, even those conventions that are biased. That means we cannot be sure we have accounted for all disparities when collecting data.

What is being done in the AI sector to address bias?

Developers and researchers of AI systems have been trying to establish rules for how to avoid bias in AI models. An example rule set is given in an article in the Harvard Business Review, which describes how speech recognition systems originally performed poorly for female speakers compared to male ones, because the systems analysed and modelled speech from taller speakers with longer vocal cords and lower-pitched voices (typically men).

A woman looks at a computer screen.

The article recommends four ways for people who work in machine learning to try to avoid gender bias:

  • Ensure diversity in the training data (in the example from the article, including as many female audio samples as male ones)
  • Ensure that a diverse group of people labels the training data
  • Measure the accuracy of an ML model separately for different demographic categories to check whether the model is biased against some of them (see the sketch after this list)
  • Establish techniques to encourage ML models towards unbiased results
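The third point in the list above is straightforward to demonstrate. The sketch below computes accuracy separately per group from a handful of records that are entirely invented for illustration; a real audit would use a held-out test set and a trained model.

```python
from collections import defaultdict

records = [
    # (group, true_label, model_prediction) -- invented data
    ("female", "approve", "approve"), ("female", "approve", "reject"),
    ("female", "reject",  "reject"),  ("female", "approve", "reject"),
    ("male",   "approve", "approve"), ("male",   "reject",  "reject"),
    ("male",   "approve", "approve"), ("male",   "reject",  "reject"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accurate")
# A large gap between groups (here 50% vs 100%) signals a biased model.
```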

What can everybody else do?

The above points can help people in the AI industry, which is of course important — but what about the rest of us? It’s important to raise awareness of the issues around gender data bias and AI lest we find out too late that we are reintroducing gender inequalities we have fought so hard to remove. Awareness is a good start, and some other suggestions, drawn from others’ work in this area, are:

Improve the gender balance in the AI workforce

Having more women in AI and data science, particularly in both technical and leadership roles, will help to reduce gender bias. A 2020 report by the World Economic Forum (WEF) on gender parity found that women account for only 26% of data and AI positions in the workforce. The WEF suggests five ways in which the AI workforce gender balance could be addressed:

  1. Support STEM education
  2. Showcase female AI trailblazers
  3. Mentor women for leadership roles
  4. Create equal opportunities
  5. Ensure a gender-equal reward system

Ensure the collection of and access to high-quality and up-to-date gender data

We need high-quality datasets on women and girls, with good coverage, including country coverage. Data needs to be comparable across countries in terms of concepts, definitions, and measures. Data should have both complexity and granularity, so it can be cross-tabulated and disaggregated, following the recommendations from the Data2x project on mapping gender data gaps.
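As a small illustration of what ‘cross-tabulated and disaggregated’ means in practice, the hypothetical sketch below uses pandas to break a tiny invented dataset down by country and gender. Real gender-data work involves far larger and carefully sourced datasets.

```python
import pandas as pd

# A tiny invented dataset of advisory-body participation.
df = pd.DataFrame({
    "country": ["UK", "UK", "Kenya", "Kenya", "UK", "Kenya"],
    "gender":  ["female", "male", "female", "male", "female", "male"],
    "on_advisory_body": [True, True, False, True, False, True],
})

# Participation rates, disaggregated by country and gender.
rates = pd.crosstab(df["country"], df["gender"],
                    values=df["on_advisory_body"], aggfunc="mean")
print(rates)
```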

A woman works at a multi-screen computer setup on a desk.

Educate young people about AI

At the Raspberry Pi Foundation we believe that introducing some of the potential (positive and negative) impacts of AI systems to young people through their school education may help to build awareness and understanding at a young age. The jury is out on what exactly to teach in AI education, and how to teach it. But we think educating young people about new and future technologies can help them to see AI-related work opportunities as being open to all, and to develop critical and ethical thinking.

Three teenage girls at a laptop

In our AI education seminars we heard a number of perspectives on this topic, and you can revisit the videos, presentation slides, and blog posts. We’ve also been curating a list of resources that can help to further AI education — although there is a long way to go until we understand this area fully. 

We’d love to hear your thoughts on this topic.


References

[1] Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16.

[2] Perez, C. C. (2019). Invisible Women: Exploring Data Bias in a World Designed for Men. Random House.

[3] Buvinic, M., & Levine, R. (2016). Closing the gender data gap. Significance, 13(2), 34–37.

[4] Bourdieu, P. (1977). Outline of a Theory of Practice (No. 16). Cambridge University Press. (p.167)

The AI4K12 project: Big ideas for AI education

What is AI thinking? What concepts should we introduce to young people related to AI, including machine learning (ML), and data science? Should we teach with a glass-box or an opaque-box approach? These are the questions we’ve been grappling with since we started our online research seminar series on AI education at the Raspberry Pi Foundation, co-hosted with The Alan Turing Institute.

Over the past few months, we’d already heard from researchers from the UK, Germany, and Finland. This month we virtually travelled to the USA, to hear from Prof. Dave Touretzky (Carnegie Mellon University) and Prof. Fred G. Martin (University of Massachusetts Lowell), who have pioneered the influential AI4K12 project together with their colleagues Deborah Seehorn and Christina Gardner-McCune.

The AI4K12 project

The AI4K12 project focuses on teaching AI in K-12 in the US. The AI4K12 team have aligned their vision for AI education to the CSTA standards for computer science education. These Standards, published in 2017, describe what should be taught in US schools across the discipline of computer science, but they say very little about AI. This was the stimulus for starting the AI4K12 initiative in 2018. A number of members of the AI4K12 working group are practitioners in the classroom who’ve made a huge contribution in taking this project from ideas into the classroom.

Dave Touretzky presents the five big ideas of the AI4K12 project at our online research seminar.
Dave gave us an overview of the AI4K12 project

The project has a number of goals. One is to develop a curated resource directory for K-12 teachers, and another to create a community of K-12 resource developers. On the AI4K12.org website, you can find links to many resources and sign up for their mailing list. I’ve been subscribed to this list for a while now, and fascinating discussions and resources have been shared. 

Five Big Ideas of AI4K12

If you’ve heard of AI4K12 before, it’s probably because of the Five Big Ideas the team has set out to encompass the AI field from the perspective of school-aged children. These ideas are: 

  1. Perception — the idea that computers perceive the world through sensing
  2. Representation and reasoning — the idea that agents maintain representations of the world and use them for reasoning
  3. Learning — the idea that computers can learn from data
  4. Natural interaction — the idea that intelligent agents require many types of knowledge to interact naturally with humans
  5. Societal impact — the idea that artificial intelligence can impact society in both positive and negative ways

Sometimes we hear concerns that resources being developed to teach AI concepts to young people are narrowly focused on machine learning, particularly supervised learning for classification. It’s clear from the AI4K12 Five Big Ideas that the team’s definition of the AI field encompasses much more than one area of ML. Despite being developed for a US audience, I believe the description laid out in these five ideas is immensely useful to all educators, researchers, and policymakers around the world who are interested in AI education.

Fred Martin presents one of the five big ideas of the AI4K12 project at our online research seminar.
Fred explained how ‘representation and reasoning’ is a big idea in the AI field

During the seminar, Dave and Fred shared some great practical examples. Fred explained how the big ideas translate into learning outcomes at each of the four age groups (ages 5–8, 9–11, 12–14, 15–18). You can find out more about their examples in their presentation slides or the seminar recording (see below). 

I was struck by how much the AI4K12 team has thought about progression — what you learn when, and in which sequence — which we do really need to understand well before we can start to teach AI in any formal way. For example, looking at how we might teach visual perception to young people, children might start when very young by using a tool such as Teachable Machine to understand that they can teach a computer to recognise what they want it to see, then move on to building an application using Scratch plugins or Calypso, and then to learning the different levels of visual structure and understanding the abstraction pipeline — the hierarchy of increasingly abstract things. Talking about visual perception, Fred used the example of self-driving cars and how they represent images.

A diagram of the levels of visual structure.
Fred used this slide to describe how young people might learn abstracted elements of visual structure

AI education with an age-appropriate, glass-box approach

Dave and Fred support teaching AI to children using a glass-box approach. By ‘glass-box approach’ we mean that we should give students information about how AI systems work, and show the inner workings, so to speak. The opposite would be an ‘opaque-box approach’, by which we mean showing students an AI system’s inputs and outputs only, to demonstrate what AI is capable of, without trying to teach any technical detail.

AI4K12 advice for educators supporting K-12 students: 1. Use transparent AI demonstrations. 2. Help students build mental models. 3. Encourage students to build AI applications.
AI4K12 teacher guidelines for AI education

Our speakers are keen for learners to understand, at an age-appropriate level, what is going on “inside” an AI system, not just what the system can do. They believe it’s important for young people to build mental models of how AI systems work, and that when the young people get older, they should be able to use their increasing knowledge and skills to develop their own AI applications. This aligns with the views of some of our previous seminar speakers, including Finnish researchers Matti Tedre and Henriikka Vartiainen, who presented at our seminar series in November.

What is AI thinking?

Dave addressed the question of what AI thinking looks like in school. His approach was to start with computational thinking (he used the example of the Barefoot project’s description of computational thinking as a starting point) and describe AI thinking as an extension that includes the following skills:

  • Perception 
  • Reasoning
  • Representation
  • Machine learning
  • Language understanding
  • Autonomous robots

Dave described AI thinking as furthering the ideas of abstraction and algorithmic thinking commonly associated with computational thinking, stating that in the case of AI, computation actually is thinking. My own view is that to fully define AI thinking, we need to dig a bit deeper into, for example, what is involved in developing an understanding of perception and representation.

An image demonstrating that AI systems for object recognition may not distinguish between a real banana on a desk and the photo of a banana on a laptop screen.
Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

Thinking back to Matti Tedre and Henriikka Vartiainen’s description of CT 2.0, which focuses only on the ‘Learning’ aspect of the AI4K12 Five Big Ideas, and on the distinct ways of thinking underlying data-driven programming and traditional programming, we can see some differences between how the two groups of researchers describe the thinking skills young people need in order to understand and develop AI systems. Tedre and Vartiainen are working on a more finely granular description of ML thinking, which has the potential to impact the way we teach ML in school.

There is also another description of AI thinking. Back in 2020, Juan David Rodríguez García presented his system LearningML at one of our seminars. Juan David drew on a paper by Brummelen, Shen, and Patton, who extended Brennan and Resnick’s CT framework of concepts, practices, and perspectives, to include concepts such as classification, prediction, and generation, together with practices such as training, validating, and testing.

What I take from this is that there is much still to research and discuss in this area! It’s a real privilege to be able to hear from experts in the field and compare and contrast different standpoints and views.

Resources for AI education

The AI4K12 project has already made a massive contribution to the field of AI education, and we were delighted to hear that Dave, Fred, and their colleagues have just been awarded the AAAI/EAAI Outstanding Educator Award for 2022 for AI4K12.org. An amazing achievement! What is particularly useful about the website is that it links to many resources, and that the Five Big Ideas give a framework for these resources.

Through our seminars series, we are developing our own list of AI education resources shared by seminar speakers or attendees, or developed by us. Please do take a look.

Join our next seminar

Through these seminars, we’re learning a lot about AI education and what it might look like in school, and we’re having great discussions during the Q&A section.

On Tuesday 1 February at 17:00–18:30 GMT, we’ll hear from Tara Chklovski, who will talk about AI education in the context of the Sustainable Development Goals. To participate, click the button below to sign up, and we will send you information about joining. I really hope you’ll be there for this seminar!

The schedule of our upcoming seminars is online. You can also (re)visit past seminars and recordings on the blog.

Snapshots from the history of AI, plus AI education resources

The post Snapshots from the history of AI, plus AI education resources appeared first on Raspberry Pi Foundation.

In Hello World issue 12, our free magazine for computing educators, George Boukeas, DevOps Engineer for the Astro Pi Challenge here at the Foundation, introduces big moments in the history of artificial intelligence (AI) to share with your learners:

The story of artificial intelligence (AI) is a story about humans trying to understand what makes them human. Some of the episodes in this story are fascinating. These could help your learners catch a glimpse of what this field is about and, with luck, compel them to investigate further.                   

The imitation game

In 1950, Alan Turing published a philosophical essay titled Computing Machinery and Intelligence, which started with the words: “I propose to consider the question: Can machines think?” Yet Turing did not attempt to define what it means to think. Instead, he suggested a game as a proxy for answering the question: the imitation game. In modern terms, you can imagine a human interrogator chatting online with another human and a machine. If the interrogator does not successfully determine which of the other two is the human and which is the machine, then the question has been answered: this is a machine that can think.

A statue of Alan Turing on a park bench in Manchester.
The Alan Turing Memorial in Manchester

This imitation game is now a fiercely debated benchmark of artificial intelligence called the Turing test. Notice the shift in focus that Turing suggests: thinking is to be identified in terms of external behaviour, not in terms of any internal processes. Humans are still the yardstick for intelligence, but there is no requirement that a machine should think the way humans do, as long as it behaves in a way that suggests some sort of thinking to humans.

In his essay, Turing also discusses learning machines. Instead of building highly complex programs that would prescribe every aspect of a machine’s behaviour, we could build simpler programs that would prescribe mechanisms for learning, and then train the machine to learn the desired behaviour. Turing’s text provides an excellent metaphor that could be used in class to describe the essence of machine learning: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. We have thus divided our problem into two parts: the child-programme and the education process.”

A chess board with two pieces of each colour left.
Chess was among the games that early AI researchers like Alan Turing developed algorithms for.

It is remarkable how Turing even describes approaches that have since evolved into established machine learning methods: evolution (genetic algorithms), punishments and rewards (reinforcement learning), randomness (Monte Carlo tree search). He even forecasts the main issue with some forms of machine learning: opacity. “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour.”

The evolution of a definition

The term ‘artificial intelligence’ was coined in 1956, at an event called the Dartmouth workshop. It was a gathering of the field’s founders, researchers who would later have a huge impact, including John McCarthy, Claude Shannon, Marvin Minsky, Herbert Simon, Allen Newell, Arthur Samuel, Ray Solomonoff, and W.S. McCulloch.   

Go has vastly more possible moves than chess and was expected to remain beyond the reach of AI for much longer than it did.

The simple and ambitious definition of artificial intelligence included in the proposal for the workshop is illuminating: ‘making a machine behave in ways that would be called intelligent if a human were so behaving’. These pioneers assumed that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’. This assumption turned out to be patently false and led to unrealistic expectations and forecasts. Fifty years later, McCarthy himself stated that ‘it was harder than we thought’.

Modern definitions of intelligence are of a distinctly different flavour from the original one: ‘Intelligence is the quality that enables an entity to function appropriately and with foresight in its environment’ (Nilsson). Some even speak of rationality, rather than intelligence: ‘doing the right thing, given what it knows’ (Russell and Norvig).

A computer screen showing a complicated graph.
The amount of training data AI developers have access to has skyrocketed in the past decade.

Read the whole of this brief history of AI in Hello World #12

In the full article, which you can read in the free PDF copy of the issue, George looks at:

  • Early advances researchers made from the 1950s onwards while developing game algorithms, e.g. for chess.
  • The 1997 moment when Deep Blue, a purpose-built IBM computer, beat chess world champion Garry Kasparov using a search approach.
  • The 2011 moment when Watson, another IBM computer system, beat two human Jeopardy! champions using multiple techniques to answer questions posed in natural language.
  • The principles behind artificial neural networks, which have been around for decades and now underlie many AI/machine learning breakthroughs thanks to the growth in computing power and the availability of vast datasets for training.
  • The 2017 moment when AlphaGo, an artificial neural network–based computer program by Alphabet’s DeepMind, beat Ke Jie, the world’s top-ranked Go player at the time.

Stacks of server hardware behind metal fencing in a data centre.
Machine learning systems need vast amounts of training data, the collection and storage of which has only become technically possible in the last decade.

More on machine learning and AI education in Hello World #12

In your free PDF of Hello World issue 12, you’ll also find:

  • An interview with University of Cambridge statistician David Spiegelhalter, whose work shaped some of the foundations of AI, and who shares his thoughts on data science in schools and the limits of AI 
  • An introduction to Popbots, an innovative project by MIT to open AI to the youngest learners
  • An article by Ken Kahn, researcher in the Department of Education at the University of Oxford, on using the block-based Snap! language to introduce your learners to natural language processing
  • Unplugged and online machine learning activities for learners age 7 to 16 in the regular ‘Lesson plans’ section
  • And lots of other relevant articles

You can also read many of these articles online on the Hello World website.

Find more resources for AI and data science education

In Hello World issue 16, the focus is on all things data science and data literacy for your learners. As always, you can download a free copy of the issue. And on our Hello World podcast, we chat with practising computing educators about how they bring AI, AI ethics, machine learning, and data science to the young people they teach.

If you want a practical introduction to the basics of machine learning and how to use it, take our free online course.

Drawing of a machine learning Mars rover trying to decide whether it is seeing an alien or a rock.

There are still many open questions about what good AI and data science education looks like for young people. To learn more, you can watch our panel discussion about the topic, and join our monthly seminar series to hear insights from computing education researchers around the world.

We are also collating a growing list of educational resources about these topics based on our research seminars, seminar participants’ recommendations, and our own work. Find the resource list here.

The post Snapshots from the history of AI, plus AI education resources appeared first on Raspberry Pi Foundation.

How do we develop AI education in schools? A panel discussion https://www.raspberrypi.org/blog/ai-education-schools-panel-uk-policy/ https://www.raspberrypi.org/blog/ai-education-schools-panel-uk-policy/#comments Tue, 30 Nov 2021 14:11:05 +0000 https://www.raspberrypi.org/?p=77394 AI is a broad and rapidly developing field of technology. Our goal is to make sure all young people have the skills, knowledge, and confidence to use and create AI systems. So what should AI education in schools look like? To hear a range of insights into this, we organised a panel discussion as part…

The post How do we develop AI education in schools? A panel discussion appeared first on Raspberry Pi Foundation.

AI is a broad and rapidly developing field of technology. Our goal is to make sure all young people have the skills, knowledge, and confidence to use and create AI systems. So what should AI education in schools look like?

To hear a range of insights into this, we organised a panel discussion as part of our seminar series on AI and data science education, which we co-host with The Alan Turing Institute. Here our panel chair Tabitha Goldstaub, Co-founder of CogX and Chair of the UK government’s AI Council, summarises the event. You can also watch the recording below.

As part of the Raspberry Pi Foundation’s monthly AI education seminar series, I was delighted to chair a special panel session to broaden the range of perspectives on the subject. The members of the panel were:

  • Chris Philp, UK Minister for Tech and the Digital Economy
  • Philip Colligan, CEO of the Raspberry Pi Foundation 
  • Danielle Belgrave, Research Scientist, DeepMind
  • Caitlin Glover, A level student, Sandon School, Chelmsford
  • Alice Ashby, student, University of Brighton

The session explored the commitment in the UK government’s recently published UK National AI Strategy that “the [UK] government will continue to ensure programmes that engage children with AI concepts are accessible and reach the widest demographic.” We discussed what it will take to make this a reality, and how we will ensure young people have a seat at the table.

Two teenage girls do coding during a computer science lesson.

Why AI education for young people?

It was clear that the Minister felt it is very important for young people to understand AI. He said, “The government takes the view that AI is going to be one of the foundation stones of our future prosperity and our future growth. It’s an enabling technology that’s going to have almost universal applicability across our entire economy, and that is why it’s so important that the United Kingdom leads the world in this area. Young people are the country’s future, so nothing is complete without them being at the heart of it.”

A teacher watches two female learners code in Code Club session in the classroom.

Our panelist Caitlin Glover, an A level student at Sandon School, reiterated this from her perspective as a young person. She told us that her passion for AI started initially because she wanted to help neurodiverse young people like herself. Her idea was to start a company that would build AI-powered products to help neurodiverse students.

What careers will AI education lead to?

A theme of the Foundation’s seminar series so far has been how learning about AI early may impact young people’s career choices. Our panelist Alice Ashby, who studies Computer Science and AI at the University of Brighton, told us about her own process of deciding on her course of study. She pointed to the fact that terms such as machine learning, natural language processing, self-driving cars, chatbots, and many others are currently all under the umbrella of artificial intelligence, but they’re all very different. Alice thinks it’s hard for young people to know whether it’s the right decision to study something that’s still so ambiguous.

A young person codes at a Raspberry Pi computer.

When I asked Alice what gave her the courage to take a leap of faith with her university course, she said, “I didn’t know it was the right move for me, honestly. I took a gamble, I knew I wanted to be in computer science, but I wanted to spice it up.” The AI ecosystem is very lucky that people like Alice choose to enter the field even without being taught what precisely it comprises.

We also heard from Danielle Belgrave, a Research Scientist at DeepMind with a remarkable career in AI for healthcare. Danielle explained that she was lucky to have had a Mathematics teacher who encouraged her to work in statistics for healthcare. She said she wanted to ensure she could use her technical skills and her love for maths to make an impact on society, and to really help make the world a better place. Danielle works with biologists, mathematicians, philosophers, and ethicists as well as with data scientists and AI researchers at DeepMind. One possibility she suggested for improving young people’s understanding of what roles are available was industry mentorship. Linking people who work in the field of AI with school students was an idea that Caitlin was eager to confirm as very useful for young people her age.

We need investment in AI education in school

The AI Council’s Roadmap stresses how important it is to not only teach the skills needed to foster a pool of people who are able to research and build AI, but also to ensure that every child leaves school with the necessary AI and data literacy to become an engaged, informed, and empowered user of the technology. During the panel, the Minister, Chris Philp, made the point that people don’t have to be technical experts to come up with brilliant ideas, that we need more people who can think creatively and have the confidence to adopt AI, and that this starts in schools.

A class of primary school students do coding at laptops.

Caitlin is a perfect example of a young person who has been inspired about AI while in school. Sadly, though, she is in the minority among young people, and especially among girls, in choosing to take computer science, the subject that gave her the chance to hear about AI in the classroom. But even for young people who choose computer science in school, at the moment AI isn’t in the national Computing curriculum or part of GCSE computer science, so much of their learning currently takes place outside of the classroom. Caitlin added that she had had to go out of her way to find information about AI; the majority of her peers are not even aware of opportunities that may be out there. She suggested that we ensure AI is taught across all subjects, so that every learner sees how it can make their favourite subject even more magical and thinks “AI’s cool!”.

A primary school boy codes at a laptop with the help of an educator.

Philip Colligan, the CEO here at the Foundation, also described how AI could be integrated into existing subjects including maths, geography, biology, and citizenship classes. Danielle thoroughly agreed and made the very good point that teaching this way across the school would help prepare young people for the world of work in AI, where cross-disciplinary science is so important. She reminded us that AI is not one single discipline. Instead, many different skill sets are needed, including engineering new AI systems, integrating AI systems into products, researching problems to be addressed through AI, or investigating AI’s societal impacts and how humans interact with AI systems.

On hearing about this multitude of different skills, our discussion turned to the teachers who are responsible for imparting this knowledge, and to the challenges they face. 

The challenge of AI education for teachers

When we shifted the focus of the discussion to teachers, Philip said: “If we really want to equip every young person with the knowledge and skills to thrive in a world that is shaped by these technologies, then we have to find ways to evolve the curriculum and support teachers to develop the skills and confidence to teach that curriculum.”

Teenage students and a teacher do coding during a computer science lesson.

I asked the Minister what he thought needed to happen to ensure we achieved data and AI literacy for all young people. He said, “We need to work across government, but also across business and society more widely as well.” He went on to explain how important it was that the Department for Education (DfE) gets the support to make the changes needed, and that he and the Office for AI were ready to help.

Philip explained that the Raspberry Pi Foundation is one of the organisations in the consortium running the National Centre for Computing Education (NCCE), which is funded by the DfE in England. Through the NCCE, the Foundation has already supported thousands of teachers to develop their subject knowledge and pedagogy around computer science.

A recent study recognises that the investment made by the DfE in England is the most comprehensive effort globally to implement the computing curriculum, so we are starting from a good base. But Philip made it clear that now we need to expand this investment to cover AI.

Young people engaging with AI out of school

Philip described how brilliant it is to witness young people who choose to get creative with new technologies. As an example, he shared that the Foundation is seeing more and more young people employ machine learning in the European Astro Pi Challenge, where participants run experiments using Raspberry Pi computers on board the International Space Station. 

Three teenage boys do coding at a shared computer during a computer science lesson.

Philip also explained that, in the Foundation’s non-formal CoderDojo club network and its Coolest Projects tech showcase events, young people build their dream AI products supported by volunteers and mentors. Among these have been autonomous recycling robots and AI anti-collision alarms for bicycles. Like Caitlin with her company idea, this shows that young people are ready and eager to engage and create with AI.

We closed out the panel by going back to a point raised by Mhairi Aitken, who presented at the Foundation’s research seminar in September. Mhairi, an Alan Turing Institute ethics fellow, argues that children don’t just need to learn about AI, but that they should actually shape the direction of AI. All our panelists agreed on this point, and we discussed what it would take for young people to have a seat at the table.

A Black boy uses a Raspberry Pi computer at school.

Alice advised that we start by looking at our existing systems for engaging young people, such as Youth Parliament, student unions, and school groups. She also suggested adding young people to the AI Council, which I’m going to look into right away! Caitlin agreed and added that it would be great to make these forums virtual, so that young people from all over the country could participate.

The panel session was full of insight and felt very positive. Although the challenge of ensuring we have a data- and AI-literate generation of young people is tough, it’s clear that if we include them in finding the solution, we are in for a bright future. 

What’s next for AI education at the Raspberry Pi Foundation?

In the coming months, our goal at the Foundation is to increase our understanding of the concepts underlying AI education and how to teach them in an age-appropriate way. To that end, we will start to conduct a series of small AI education research projects, which will involve gathering the perspectives of a variety of stakeholders, including young people. We’ll make more information available on our research pages soon.

In the meantime, you can sign up for our upcoming research seminars on AI and data science education, and peruse the collection of related resources we’ve put together.

The post How do we develop AI education in schools? A panel discussion appeared first on Raspberry Pi Foundation.

The machine learning effect: Magic boxes and computational thinking 2.0 https://www.raspberrypi.org/blog/machine-learning-education-school-computational-thinking-2-0-research-seminar/ https://www.raspberrypi.org/blog/machine-learning-education-school-computational-thinking-2-0-research-seminar/#comments Wed, 17 Nov 2021 12:28:14 +0000 https://www.raspberrypi.org/?p=77173 How does teaching children and young people about machine learning (ML) differ from teaching them about other aspects of computing? Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland shared some answers at our latest research seminar. Their presentation, titled ‘ML education for K-12: emerging trajectories’, had a profound impact on…

The post The machine learning effect: Magic boxes and computational thinking 2.0 appeared first on Raspberry Pi Foundation.

How does teaching children and young people about machine learning (ML) differ from teaching them about other aspects of computing? Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland shared some answers at our latest research seminar.

A young girl and boy do a Scratch coding activity together at a desktop computer.

Their presentation, titled ‘ML education for K-12: emerging trajectories’, had a profound impact on my thinking about how we teach computational thinking and programming. For this blog post, I have simplified some of the complexity associated with machine learning for the benefit of readers who are new to the topic.

A 3D-rendered grey box.
Some learners may think machine learning (ML) is like a magic box, but ML is not magic. Research is needed to find out what mental models are most useful for learning about ML.

Our seminars on teaching AI, ML, and data science

We’re currently partnering with The Alan Turing Institute to host a series of free research seminars about how to teach artificial intelligence (AI) and data science to young people.

The seminar with Matti and Henriikka, the third one of the series, was very well attended. Over 100 participants from San Francisco to Rajasthan, including teachers, researchers, and industry professionals, contributed to a lively and thought-provoking discussion.

Representing a large interdisciplinary team of researchers, Matti and Henriikka have been working on how to teach AI and machine learning for more than three years, which in this new area of study is a long time. So far, the Finnish team has written over a dozen academic papers based on their pilot studies with kindergarten-, primary-, and secondary-aged learners.

Current teaching in schools: classical rule-driven programming

Matti and Henriikka started by giving an overview of classical programming and how it is currently taught in schools. Classical programming can be described as rule-driven. Example features of classical computer programs and programming languages are:

  • A classical language has a strict syntax, and a limited set of commands that can only be used in a predetermined way
  • A classical language is deterministic, meaning we can guarantee what will happen when each line of code is run
  • A classical program is executed in a strict, step-wise order following a known set of rules

When we teach this type of programming, we show learners how to use a deductive problem solving approach or workflow: defining the task, designing a possible solution, and implementing the solution by writing a stepwise program that is then run on a computer. We encourage learners to avoid using trial and error to write programs. Instead, as they develop and test a program, we ask them to trace it line by line in order to predict what will happen when each line is run (glass-box testing).
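To make these features concrete, here is a minimal sketch of a classical rule-driven program (my own example in Python, not one from the seminar). Every rule is explicit, and we can trace exactly what will happen on each run:

```python
# A classical, rule-driven program: an explicit set of rules,
# executed step by step, with a fully predictable outcome.

def grade(score):
    # The rules are explicit and exhaustive; for any input we can
    # trace exactly which branch will run (glass-box testing).
    if score >= 70:
        return "distinction"
    elif score >= 50:
        return "pass"
    else:
        return "fail"

# Deterministic: running this produces the same output every time.
for score in [45, 62, 88]:
    print(score, grade(score))
```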

A list of features of rule-driven computer programming, also included in the text.
The features of classical (rule-driven) programming approaches as taught in computer science education (CSE) (Tedre & Vartiainen, 2021).

Classical programming underpins the current view of computational thinking (CT). Our speakers called this version of CT ‘CT 1.0’. So what’s the alternative Matti and Henriikka presented, and how does it affect what computational thinking is or may become?

Machine learning (data-driven) models and new computational thinking (CT 2.0) 

Rule-based programming languages are not being eradicated. Instead, software systems are being augmented through the addition of machine learning (data-driven) elements. Many of today’s successful software products, such as search engines, image classifiers, and speech recognition programs, combine rule-driven software and data-driven models. However, the workflows for these two approaches to solving problems through computing are very different.

A table comparing problem solving workflows using computational thinking 1.0 versus computational thinking 2.0, info also included in the text.
Problem solving is very different depending on whether a rule-driven computational thinking (CT 1.0) approach or a data-driven computational thinking (CT 2.0) approach is used (Tedre & Vartiainen, 2021).

Significantly, while in rule-based programming (and CT 1.0), the focus is on solving problems by creating algorithms, in data-driven approaches, the problem solving workflow is all about the data. To highlight the profound impact this shift in focus has on teaching and learning computing, Matti introduced us to a new version of computational thinking for machine learning, CT 2.0, which is detailed in a forthcoming research paper.

Because of the focus on data rather than algorithms, developing a machine learning model is not at all like developing a classical rule-driven program. In classical programming, programs can be traced, and we can predict what will happen when they run. But in data-driven development, there is no flow of rules, and no absolutely right or wrong answer.

A table comparing conceptual differences between computational thinking 1.0 versus computational thinking 2.0, info also included in the text.
There are major differences between rule-driven computational thinking (CT 1.0) and data-driven computational thinking (CT 2.0), which impact what computing education needs to take into account (Tedre & Vartiainen, 2021).

Machine learning models are created iteratively using training data and must be cross-validated with test data. A tiny change in the data provided can make a model useless. We rarely know exactly why the output of an ML model is as it is, and we cannot explain each individual decision that the model might have made. When evaluating a machine learning system, we can only say how well it works based on statistical confidence and efficiency. 
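As a rough sketch of this data-driven workflow (my own example, using scikit-learn and its built-in iris dataset, rather than any tool the speakers demonstrated), training, testing, and cross-validation might look like this:

```python
# A data-driven workflow: the behaviour comes from training data,
# and evaluation is statistical rather than line-by-line tracing.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Hold out test data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)

# We cannot trace 'why' each prediction is made the way we trace a
# classical program; we can only report statistical performance.
print("Test accuracy:", model.score(X_test, y_test))
print("Cross-validation scores:", cross_val_score(model, X_train, y_train, cv=5))
```

The point to notice is the absence of any traceable flow of rules: the ‘program’ is the trained model, and the only meaningful evaluation of it is statistical.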

Machine learning education must cover ethical and societal implications 

The ethical and societal implications of computer science have always been important for students to understand. But machine learning models open up a whole new set of topics for teachers and students to consider, because of these models’ reliance on large datasets, the difficulty of explaining their decisions, and their usefulness for automating very complex processes. This includes privacy, surveillance, diversity, bias, job losses, misinformation, accountability, democracy, and veracity, to name but a few.

I see the shift in problem solving approach as a chance to strengthen the teaching of computing in general, because it opens up opportunities to teach about systems, uncertainty, data, and society.

Jane Waite

Teaching machine learning: the challenges of magic boxes and new mental models

For teaching classical rule-driven programming, much time and effort has been put into researching learners’ understanding of what a program will do when it is run. This kind of understanding is called a learner’s mental model or notional machine. An approach teachers often use to help students develop a useful mental model of a program is to hide the detail of how the program works and only gradually reveal its complexity. This approach is described with the metaphor of hiding the detail of elements of the program in a box. 

Data-driven models in machine learning systems are highly complex and make little sense to humans. Therefore, they may appear like magic boxes to students. This view needs to be banished. Machine learning is not magic. We just haven’t yet figured out how to explain the detail of data-driven models in a way that allows learners to form useful mental models.

An example of a representation of a machine learning model in TensorFlow, an online machine learning tool (Tedre & Vartiainen, 2021).

Some existing ML tools aim to help learners form mental models of ML, for example through visual representations of how a neural network works (see above). But these explanations are still very complex. Clearly, we need to find new ways to help learners of all ages form useful mental models of machine learning, so that teachers can explain to them how machine learning systems work and banish the view that machine learning is magic.
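To see why these explanations are so hard, consider this small illustrative sketch (my own, in Python with Keras; the network and task are my assumptions, not material from the seminar): even a tiny neural network that learns the XOR function stores everything it has ‘learned’ in arrays of numbers, which is all we get if we ask it to explain itself.

```python
# A tiny neural network learning XOR. Even at this scale, its
# behaviour lives in learned weights rather than readable rules.
import numpy as np
from tensorflow import keras

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1000, verbose=0)

print(model.predict(X, verbose=0).round().ravel())  # ideally [0, 1, 1, 0]
print(model.get_weights())  # the 'explanation': opaque arrays of numbers
```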

Some tools and teaching approaches for ML education

Matti and Henriikka’s team piloted different tools and pedagogical approaches with different age groups of learners. In terms of tools, since large amounts of data are needed for machine learning projects, our presenters suggested that tools that enable lots of data to be easily collected are ideal for teaching activities. Media-rich education tools provide an opportunity to capture still images, movements, sounds, or sense other inputs and then use these as data in machine learning teaching activities. For example, to create a machine learning–based rock-paper-scissors game, students can take photographs of their hands to train a machine learning model using Google Teachable Machine.

Photos of hands are used to train a machine learning model as part of a project to create a rock-paper-scissors game.
Photos of hands are used to train a Teachable Machine machine learning model as part of a project to create a rock-paper-scissors game (Tedre & Vartiainen, 2021).
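Teachable Machine trains models in the browser, but a trained image model can also be exported and used from code. Here is a rough sketch of classifying a new photo with such an export in Python with Keras; the file names (keras_model.h5, labels.txt), the 224×224 input size, the pixel scaling, and the image my_hand.jpg are assumptions based on the export format at the time of writing:

```python
# Rough sketch: classify a photo with a model exported from Google
# Teachable Machine. File names, input size, and pixel scaling are
# assumptions based on the Keras export format at the time of writing.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5")                    # exported model
labels = [line.strip() for line in open("labels.txt")]  # e.g. rock, paper, scissors

img = Image.open("my_hand.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 127.5 - 1       # scale pixels to [-1, 1]
x = x.reshape(1, 224, 224, 3)                           # a batch of one image

scores = model.predict(x, verbose=0)[0]
print(labels[int(np.argmax(scores))])                   # the most likely label
```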

Similar to tools that teach classical programming to novice students (e.g. Scratch), some of the new classroom tools for teaching machine learning have a drag-and-drop interface (e.g. Cognimates). Using such tools means that in lessons, there can be less focus on one of the more complex aspects of learning to program: learning programming language syntax. However, not all machine learning education products include drag-and-drop interaction; some instead have their own complex languages (e.g. Wolfram Programming Lab), which are less attractive to teachers and learners. In their pilot studies, the Finnish team found that drag-and-drop machine learning tools appeared to work well with students of all ages.

The different pedagogical approaches the Finnish research team used in their pilot studies included an exploratory approach with preschool children, who investigated machine learning recognition of happy or sad faces; and a project-based approach with older students, who co-created machine learning apps with web-based tools such as Teachable Machine and Learn Machine Learning (built by the research team), supported by machine learning experts.

Example of a middle school (age 8 to 11) student’s pen and paper design for a machine learning app that recognises different instruments and chords.
Example of a middle school (age 8 to 11) student’s design for a machine learning app that recognises different instruments and chords (Tedre & Vartiainen, 2021).

What impact these pedagogies have on students’ long-term mental models about machine learning has yet to be researched. If you want to find out more about the classroom pilot studies, the academic paper is a very accessible read.

My take-aways: new opportunities, new research questions

We all learned a tremendous amount from Matti and Henriikka and their perspectives on this important topic. Our seminar participants asked them many questions about the pedagogies and practicalities of teaching machine learning in class, and raised concerns about squeezing more into an already packed computing curriculum.

For me, the most significant take-away from the seminar was the need to shift focus from algorithms to data and from CT 1.0 to CT 2.0. Learning how to best teach classical rule-driven programming has been a long journey that we have not yet completed. We are forming an understanding of what concepts learners need to be taught, the progression of learning, key mental models, pedagogical options, and assessment approaches. For teaching data-driven development, we need to do the same.  

The question of how we make sure teachers have the necessary understanding is key.

Jane Waite

I see the shift in problem solving approach as a chance to strengthen the teaching of computing in general, because it opens up opportunities to teach about systems, uncertainty, data, and society. I think it will help us raise awareness about design, context, creativity, and student agency. But I worry about how we will introduce this shift. In my view, there is a considerable risk that we will be sucked into open-ended, project-based learning, with busy and fun but shallow learning experiences that result in restricted conceptual development for students.

I also worry about how we can best help teachers build up the knowledge and experience to support their students. In the Q&A after the seminar, I asked Matti and Henriikka about the role of their team’s machine learning experts in their pilot studies. It seemed to me that without them, the pilot lessons would not have worked, as the participating teachers and students would not have had the vocabulary to talk about the process and would not have known what was doable given the available time, tools, and student knowledge.

The question of how we make sure teachers have the necessary understanding is key. Many existing professional development resources for teachers wanting to learn about ML seem to imply that teachers will all need a PhD in statistics and neural network optimisation to engage with machine learning education. This is misleading. But teachers do need to understand the machine learning concepts that their students need to learn about, and I think we don’t yet know exactly what these concepts are. 

In summary, clearly more research is needed. There are fundamental questions still to be answered about what, when, and how we teach data-driven approaches to software systems development and how this impacts what we teach about classical, rule-based programming. But to me, that is exciting, and I am very much looking forward to the journey ahead.

Join our next free seminar

To find out what others recommend about teaching AI and ML, catch up on last month’s seminar with Professor Carsten Schulte and colleagues on centring data instead of code in the teaching of AI.

We have another four seminars in our monthly series on AI, machine learning, and data science education. Find out more about them on this page, and catch up on past seminar blogs and recordings here.

At our next seminar on Tuesday 7 December at 17:00–18:30 GMT, we will welcome Professor Rose Luckin from University College London. She will be presenting on what it is about AI that makes it useful for teachers and learners.

We look forward to meeting you there!

The post The machine learning effect: Magic boxes and computational thinking 2.0 appeared first on Raspberry Pi Foundation.

Learn the fundamentals of AI and machine learning with our free online course https://www.raspberrypi.org/blog/fundamentals-ai-machine-learning-free-online-course/ https://www.raspberrypi.org/blog/fundamentals-ai-machine-learning-free-online-course/#comments Mon, 18 Oct 2021 14:00:35 +0000 https://www.raspberrypi.org/?p=76518 Join our free online course Introduction to Machine Learning and AI to discover the fundamentals of machine learning and learn to train your own machine learning models using free online tools. Although artificial intelligence (AI) was once the province of science fiction, these days you’re very likely to hear the term in relation to new…

The post Learn the fundamentals of AI and machine learning with our free online course appeared first on Raspberry Pi Foundation.

Join our free online course Introduction to Machine Learning and AI to discover the fundamentals of machine learning and learn to train your own machine learning models using free online tools.

Drawing of a machine learning robot helping a human identify spam at a computer.

Although artificial intelligence (AI) was once the province of science fiction, these days you’re very likely to hear the term in relation to new technologies, whether that’s facial recognition, medical diagnostic tools, or self-driving cars, which use AI systems to make decisions or predictions.

By the end of this free, online, self-paced course, you will have an appreciation for what goes into machine learning and artificial intelligence systems — and why you should think carefully about what comes out.

Machine learning — a brief overview

You’ll also often hear about AI systems that use machine learning (ML). Very simply, we can say that programs created using ML are ‘trained’ on large collections of data to ‘learn’ to produce more accurate outputs over time. One rather funny application you might have heard of is the ‘muffin or chihuahua?’ image recognition task.

Drawing of a machine learning Mars rover trying to decide whether it is seeing an alien or a rock.

More precisely, we would say that an ML algorithm builds a model based on large collections of data (the training data), without being explicitly programmed to do so. The model is ‘finished’ when it makes predictions or decisions with an acceptable level of accuracy. (For example, it rarely mistakes a muffin for a chihuahua in a photo.) It is then considered able to make predictions or decisions using new data in the real world.

It’s important to understand AI and ML — especially for educators

But how does all this actually work? If you don’t know, it’s hard to judge what the impacts of these technologies might be, and how we can be sure they benefit everyone — an important discussion that needs to involve people from across all of society. Not knowing can also be a barrier to using AI, whether that’s for a hobby, as part of your job, or to help your community solve a problem.

Some things that machine learning and AI systems can be built into: streetlamps, waste-collecting vehicles, cars, traffic lights.

For teachers and educators it’s particularly important to have a good foundational knowledge of AI and ML, as they need to teach their learners what young people need to know about these technologies and how they impact their lives. (We’ve also got a free seminar series about teaching these topics.)

To help you understand the fundamentals of AI and ML, we’ve put together a free online course: Introduction to Machine Learning and AI. Over four weeks, spending two hours per week and learning at your own pace, you’ll find out how machine learning can be used to solve problems, without going too deeply into the mathematical details. You’ll also get to grips with the different ways that machines ‘learn’, and you will try out online tools such as Machine Learning for Kids and Teachable Machine to design and train your own machine learning programs.

What types of problems and tasks are AI systems used for?

As well as finding out how these AI systems work, you’ll look at the different types of tasks that they can help us address. One of these is classification — working out which group (or groups) something fits in, such as distinguishing between positive and negative product reviews, identifying an animal (or a muffin) in an image, or spotting potential medical problems in patient data.

You’ll also learn about other types of tasks ML programs are used for, such as regression (predicting a numerical value from a continuous range) and knowledge organisation (spotting links between different pieces of data or clusters of similar data). Towards the end of the course you’ll dive into one of the hottest topics in AI today: neural networks, which are ML models whose design is inspired by networks of brain cells (neurons).

Drawing of a small machine learning neural network.
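To give a flavour of these task types in code, here is a small sketch (our own example using scikit-learn and its built-in toy datasets; it is not part of the course, which uses beginner-friendly tools instead):

```python
# Tiny illustrations of three ML task types mentioned above,
# using scikit-learn's built-in toy datasets.
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans

# Classification: working out which group something fits in.
X, y = load_iris(return_X_y=True)
print(LogisticRegression(max_iter=1000).fit(X, y).predict(X[:3]))

# Regression: predicting a numerical value from a continuous range.
Xd, yd = load_diabetes(return_X_y=True)
print(LinearRegression().fit(Xd, yd).predict(Xd[:3]))

# Knowledge organisation: clustering similar data points together.
print(KMeans(n_clusters=3, n_init=10).fit(X).labels_[:10])
```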

Before an ML program can be trained, you need to collect data to train it with. During the self-paced course you’ll see how tools from statistics and data science are important for ML — but also how ethical issues can arise both when data is collected and when the outputs of an ML program are used.

By the end of the course, you will have an appreciation for what goes into machine learning and artificial intelligence systems — and why you should think carefully about what comes out.

Sign up today to take the course for free

The Introduction to Machine Learning and AI course is open for you to sign up to now. Sign-ups will pause after 12 December. Once you sign up, you’ll have access for six weeks. During this time you’ll be able to interact with your fellow learners, and before 25 October, you’ll also benefit from the support of our expert facilitators. So what are you waiting for?

Share your views as part of our research

As part of our research on computing education, we would like to find out about educators’ views on machine learning. Before you start the course, we will ask you to complete a short survey. As a thank you for helping us with our research, you will be offered the chance to take part in a prize draw for a £50 book token!

Learn more about AI, its impacts, and teaching learners about them

To develop your computing knowledge and skills, you might also want to:

If you are a teacher in England, you can develop your teaching skills through the National Centre for Computing Education, which will give you free upgrades for our courses (including Introduction to Machine Learning and AI) so you’ll receive certificates and unlimited access.

The post Learn the fundamentals of AI and machine learning with our free online course appeared first on Raspberry Pi Foundation.
