Widening Access to Applied Machine Learning with TinyML

Volume 4 Issue 1 DOI: 10.1162/99608f92.762d171a ISSN: 2644-2353
Widening Access to Applied Machine Learning With TinyML
Vijay Janapa Reddi†,* Brian Plancher†,* Susan Kennedy† Laurence Moroney‡ Pete Warden‡ Lara Suzuki‡ Anant Agarwal⋄,§ Colby Banbury† Massimo Banzi∥ Matthew Bennett⋄
Benjamin Brown† Sharad Chitlangia† Radhika Ghosal† Sarah Grafman† Rupert Jaeger⊥ Srivatsan Krishnan† Maximilian Lam† Daniel Leiker⊥ Cara Mann⋄ Mark Mazumder†
Dominic Pajak∥ Dhilan Ramaprasad† J. Evan Smith† Matthew Stewart† Dustin Tingley† † Harvard University ‡ Google ⋄ edX § MIT ∥ Arduino ⊥ CreativeClass.ai
Abstract. Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML leverages low-cost and globally accessible hardware and encourages the development of complete, self-contained applications, from data collection to deployment. To this end, a collaboration between academia and industry produced a four-part MOOC that provides application-oriented instruction on how to develop solutions using TinyML. The series is openly available on the edX MOOC platform, has no prerequisites beyond basic programming, and is designed for global learners from a variety of backgrounds. It introduces real-world applications, ML algorithms, data-set engineering, and the ethical considerations of these technologies through hands-on programming and deployment of TinyML applications both in the cloud and on learners' own microcontrollers. To facilitate continued learning, community building, and collaboration beyond the courses, we launched a standalone website, a forum, a chat, and an optional course-project competition. We also open-sourced the course materials, hoping they will inspire the next generation of ML practitioners and educators and further broaden access to cutting-edge ML technologies.
Keywords: Applied ML, TinyML, STEM Education, Edge AI, MOOC
*{[email protected], [email protected]}.harvard.edu
This article is © 2022 by author(s) as listed above. The article is licensed under a Creative Commons Attribution (CC BY 4.0) International license (https://creativecommons.org/licenses/by/4.0/legalcode), except where otherwise indicated with respect to particular material included in the article. The article should be attributed to the author(s) identified above.

Media Summary
Removing barriers and widening access to machine learning educational resources is vital for empowering future generations as the world adapts to the widespread and rapid adoption of AI (artificial intelligence). Such resources and expertise are currently concentrated in a handful of universities and corporations, resulting in a lack of accessibility for the broader public. To provide a more global outreach, a four-part, freely available online course series was developed through a collaboration between academia and industry on the topic of tiny machine learning (TinyML). This field of machine learning is naturally oriented toward accessibility because it requires only low-cost, globally accessible hardware and minimal technical background in computer science or machine learning. The course allows students to focus on developing data sets and algorithms, and on weighing ethical considerations, by learning through real-world applications. Furthermore, a set of resources, including a forum, chat, project competition, and website, were set up to facilitate continued learning, networking, and collaboration among former participants. We hope that these resources inspire additional efforts to make machine learning accessible to all, regardless of geographic or socioeconomic background.

1. Introduction
The past two decades have seen machine learning (ML) progress dramatically from a purely academic discipline to a widespread commercial technology that serves a range of sectors. ML allows developers to improve business processes and human productivity through data-driven automation. Given the ubiquity and success of ML applications, its commercial use is expected only to increase. Existing ML applications cover a wide spectrum that includes digital assistants (Maedche et al., 2019; T. M. Mitchell et al., 1994), autonomous vehicles (C. Chen et al., 2015; Dogan et al., 2011), robotics (A. Zeng et al., 2019), health care (Ahmad et al., 2018), transportation (Jahangiri & Rakha, 2015; Zantalis et al., 2019), security (Buczak & Guven, 2015), and education (Alenezi & Faisal, 2020; Kolachalama & Garg, 2018), with a continual emergence of new use cases.
The proliferation of this technology and associated jobs have great potential to improve society and uncover new opportunities for technological innovation, societal prosperity, and individual growth. However, integral to this vision is the assumption that everyone, globally, has unfettered access to ML technologies, which is not the case. Expanding access to applied ML faces three significant challenges that must be overcome. First is a shortage of ML educators at all levels (Brown, 2019; Gagné, 2019). Second is insufficient resources, as training and running ML models often requires costly, high-performance hardware, especially as data sets continue to grow in size. Third is a growing gap between industry and academia, as even the best academic institutions and research labs are struggling to keep pace with the rapid progress and needs of industry.
Addressing these critical issues requires innovative education and workforce training programs to prepare the next generation of applied-ML engineers. To that end, this article presents a pedagogical approach that cross-cuts from research to education to address these challenges and thereby increase global access to applied ML. Developed as an academic and industry collaboration, the resulting massive open online course (MOOC), TinyML on edX, not only teaches hands-on applied ML through the lens of real-world Tiny Machine Learning (TinyML) applications, but also considers the ethical and life-cycle challenges of industrial product development and deployment (see Figure 1). This effort is built on five guiding principles:

Figure 1. We designed a new applied-ML course. The course is motivated by real-world applications, covering not only the software (algorithms) and hardware (embedded systems) but also the product life cycle and responsible AI considerations needed to deploy these applications. To make it globally accessible and scalable, we focused on the emerging TinyML domain and released the course as a MOOC on edX.
(1) Focus on application-based pedagogy that covers all ML aspects: instead of teaching isolated theory and ML-model training, show how to physically design, develop, deploy, and manage trained ML models.
(2) Through academic and industry collaboration, aid ML learners in developing the full-stack, end-to-end skills needed today, tomorrow, and in the foreseeable future.
(3) Raise awareness of the ethical challenges associated with ML and familiarize learners with ethical reasoning skills to identify and address these challenges.
(4) Prioritize accessibility to students worldwide by teaching ML at a global scale through an open-access MOOC platform, using low-cost hardware that is available globally.
(5) Build community by providing a variety of platforms that allow participants to learn collaboratively and showcase their work.
We focus on teaching applied ML through the lens of TinyML as it enables easier global access to hands-on, end-to-end, application-based instruction. We define TinyML as a fast-growing field of machine learning technologies and applications that includes algorithms, hardware, and software capable of performing on-device sensor data analytics at extremely low power consumption, typically in the mW range and below, hence enabling a variety of always-on ML use cases on battery-operated devices. TinyML sits at the intersection of embedded systems and machine learning and is targeted at enabling new embedded device applications (Figure 2). TinyML focuses on deploying simple yet powerful models onto microcontrollers at the network edge. These microcontrollers consume very little power (<1 mW, 1,000x lower than smartphones) and have very low cost (US$0.50 when ordered in bulk, orders of magnitude cheaper than conventional microprocessors such as those found in smartphones, desktops, and servers). Since TinyML can run on microcontroller development boards with extensive hardware abstraction, the barrier to integrating an application with hardware is minimal. TinyML therefore enables a myriad of applications, from agriculture to conservation to industrial predictive maintenance, and provides an attractive and accessible entry point into applied ML. Because TinyML systems are powerful enough for many commercial tasks, learners can acquire full-stack developer skills that apply to their future jobs in relevant adjacent areas. For example, the lessons learned from end-to-end TinyML application design, development, deployment, and management are transferable to working with large-scale ML systems and applications, such as those found in data centers, smartphones, and laptops. Importantly, TinyML requires little data, and model training can employ simple procedures. As such, unlike with large-scale ML, it is possible for learners to complete the full applied ML workflow, from data collection to deployment.

Figure 2. Tiny Machine Learning (TinyML) is focused on enabling new applications through the deployment of machine learning models onto embedded systems.
Understanding ethical reasoning is a crucial skill for ML engineers as inaccurate or unpredictable model performance can erode consumer trust and reduce the chance of success. To this end, we collaborated with the Harvard Embedded EthiCS program to integrate a responsible-AI curriculum into each course, providing opportunities to practice identifying ethical challenges and thinking through potential solutions to concrete problems based on real-world case studies.
Widening access requires both a globally accessible instructional platform and ubiquitous hardware resources that let users benefit from instructional materials at minimal cost. We therefore deployed our pedagogical approach on edX, a MOOC provider that hosts university-level courses across many disciplines. Instruction is based in the free-to-use Google Colaboratory cloud programming environment and on widely available, low-cost Arm Cortex-M microcontrollers. We also worked with Arduino, an open-source hardware and software company that designs and manufactures microcontrollers and microcontroller kits for building digital devices, to develop a low-cost, globally accessible, all-in-one TinyML Kit.
To foster collaboration and continued learning beyond this edX course, we developed a standalone website, discourse forum, discord chat, and an optional course-project competition. We hope the approach we devised brings ML to more people, not only to learn but also to teach applied ML and data science engineering principles to a broader audience. To that end, we have made all of our resources publicly available at https://github.com/tinyMLx/courseware and launched a broader TinyMLedu open education initiative that supports numerous outreach activities (see Section 8).
We launched the core TinyML edX series, comprising three sequential courses, between October 2020 and March 2021; an optional fourth course is under development. As of December 2021, on average, more than 1,000 new students enroll each week, and over 60,000 total students have enrolled from 176 countries. They come from diverse backgrounds and experiences, ranging from complete novices to experts who want to master an emerging field. Feedback suggests this substantial enrollment may stem from the unique collaborative structure we foster among students, teachers, and industry leaders. Shared ownership between academia and industry appears to give participants confidence that they are gaining skills that industry needs both today and tomorrow. Moreover, we recognize that opportunities to interact with experts are both encouraging and validating.

As part of our effort to widen access to applied ML, we have designed this article to target multiple audiences. As such, different sections of the article will be more or less relevant to different target audiences. Sections 2 and 3 are broadly applicable as they help to outline the opportunities and challenges associated with TinyML as a subdiscipline of ML, as well as its instruction. Sections 4 and 5, detailing the course structure and content, are largely targeted at both prospective students and teaching practitioners to help outline the pedagogical approach taken. Section 6 focuses on individuals involved in multimedia development, as well as accessibility researchers, helping outline how the course was developed with remote learning and accessibility in mind. The remainder of the sections, which outline future initiatives, demographic information associated with course participants, limitations, and related work, are broadly applicable to all of the above audiences.

2. Challenges and Opportunities of Applied ML
We propose three criteria to empower applied-ML practitioners. First, we believe that no one size fits all with regard to interest, experience, and motivation, especially when broadening participation. Second, given ML innovation’s breakneck pace, academic/industrial collaboration on cutting-edge technologies is paramount. Third, learners who wish to prepare for ML careers need experience with the entire development process from data collection to deployment. Moreover, they must understand the ethical implications of their designs before deploying them into the wild.
2.1. Student Background Diversity. A major challenge in expanding ML access is that participants begin applied-ML courses with diverse background knowledge, work experience, learning styles, and goals. Hence, we must provide multiple on-ramps to meet their needs.
Participants include high-school and university students who want to learn about AI to develop cutting-edge applications and gain an edge in their careers, as many employers increasingly expect new hires to have some ML background, whether theoretical or applied. Other participants are industry veterans looking either to pivot their careers toward ML or to survey the landscape of the TinyML field. For example, some are computer-systems engineers who want to learn about ML in general. Others are ML engineers and data scientists who want to expand their applied skills. Others are doctors, scientists, or environmentalists who are curious about how TinyML technology could transform their fields. Still others are self-taught makers, tinkerers, and hobbyists who want to build smart “things” based on emerging technologies.
Given this broad spectrum, we have a unique opportunity to enable inclusive learning for all despite differing backgrounds and expertise. But we must provide multiple on-ramps. Specifically, we chose to structure the course in a spiral that sequentially addresses the same concepts with increasing complexity (Harden, 1999). Doing so ensures that not only do participants reinforce fundamentals while picking up new details, but they also master important objectives at every stage. This approach has been shown to improve learning while meeting each individual’s objectives (Neumann et al., 2017).
2.2. Need for Academia/Industry Collaboration. Expanding ML access requires the expertise of academia and industry. Academia is strong in structured teaching: it creates in-depth, rigorous curricula to impart a deep understanding of a field. Conversely, industry is more pragmatic, developing the skills necessary for employment. These approaches are complementary.
Also, ML is moving rapidly thanks largely to industry’s access to rich data. Analysis of ML research publications at the NeurIPS ML conference suggests industry leads ML innovation (Chuvpilo, 2020). As such, industry has essential domain-specific knowledge that helps ground ML pedagogy in practical skills and real-world applications.

Figure 3. Example TinyML devices: (a) Pico4ML, (b) Arduino Nano 33 BLE Sense, and (c) STMicroelectronics Sensor Tile. These and many other embedded devices are being built with integrated sensors, specifically for enabling on-device real-time data processing.

We believe that academia and industry must work in tandem to deliver high-quality, accessible, foundational, and skills-based ML content. Joining a strong academic institution and an industry leader in technology innovation, with a history of releasing free and accessible resources, may make students more confident that they are learning relevant skills for their careers.
2.3. Demand for Full-Stack ML Expertise. In ML, we believe that the “full stack”1 approach to building and using ML models is the core skill that will define future engineers. The engineers who bring long-term value to their industry are those who have the in-depth knowledge to innovate beyond well-known applications and scenarios. In fact, full-stack developers are now more numerous than all other developers combined, with 55% of developers identifying as full-stack in a 2020 report (Stack Overflow, 2020).
Our academia and industry collaboration can ensure the course series imparts the full-stack abilities that industry demands. Doing so requires content beyond the narrow, well-lit path of ML-model training, optimization, and inference. We therefore also focus on acquiring and cleansing data, deploying models in hardware, and managing continuous model updates on the basis of field results. Our hope is that learners will gain a whole new set of applied-ML skills that they can leverage in their varied future careers.
3. ML’s Future Is Tiny and Bright
We employ Tiny Machine Learning (TinyML), a cutting-edge applied-ML field that brings the potential of ML to low-cost, low-performance, power-constrained embedded systems and, in conjunction with the resources we have created, enables hands-on learning. TinyML lets us impart ML-application design, development, deployment, and life-cycle-management skills.
3.1. Introduction to TinyML. TinyML refers to the deployment of ML resources on small, resource-constrained devices (Figure 3). It starkly contrasts with traditional ML, which increasingly focuses on large-scale implementations that are often confined to the cloud. TinyML is neither a specific technology nor a method per se, but it acts in many ways as a proto-engineering discipline that combines machine learning, embedded systems, and performance engineering. Similar to how chemical engineering evolved from chemistry and how electrical engineering evolved from electromagnetism, TinyML has evolved from machine learning in cloud and mobile computing systems.
The TinyML approach dispels the barriers of traditional ML, such as the high cost of suitable computing hardware and the availability of data. As Table 1 shows, TinyML systems are nearly two to three orders of magnitude cheaper and more power efficient than traditional ML systems. As such, this approach can reduce the cost of ML and can handle tasks that go beyond traditional ML. The TinyML approach also makes it easy to emphasize responsible AI (Section 5).

1The term full-stack comes from historic career growth in web technologies and the Internet. It began as a series of loosely linked skills but now encompasses web development from the lowest level, the server, to the highest level, the web browser or mobile app.

Table 1. Cloud and mobile ML systems versus TinyML systems

                Cloud                 Mobile                 TinyML
                E.g., Nvidia V100     E.g., cellphone        E.g., Arduino Nano 33 BLE Sense
Architecture    GPU (Nvidia Volta)    CPU (Arm Cortex-A78)   MCU (Arm Cortex-M4)
Memory          HBM ~16GB             DRAM ~4GB              SRAM 256KB
Storage         SSD/disk TB–PB        Flash 64GB             eFlash 1MB
Power           ~250W                 ~8W                    ~0.05W
Price           ~$9,000               ~$750                  ~$3
TinyML supports large-scale, distributed, and local ML tasks. Inference on low-cost embedded devices allows scalability, and their low power consumption enables their operation in remote locations far from the electric grid. Since the number of tiny devices in the wild far exceeds the number of traditional cloud and mobile systems (IC Insights, 2020), TinyML is a prime candidate for local ML tasks that were once prohibitively expensive, such as distributed sensor networks and predictive maintenance systems in industrial manufacturing settings.
TinyML applications are broad and continue to expand as the field gains traction. The approach’s unique value stems primarily from bringing ML close to the sensor, right where the data stream originates. Therefore, TinyML permits a wide range of new applications that traditional ML cannot deliver because of bandwidth, latency, economics, reliability, and privacy (BLERP) limitations.
Common TinyML applications include keyword spotting, visual wake words, and anomaly detection. Keyword spotting generally refers to identification of words that typically act as part of a cascade architecture to kick-start or control a system, such as a mobile phone responding to voice commands (Choi et al., 2019; Y. Zhang et al., 2017). Visual wake words involve parsing image data to find an individual (human or animal) or object. This task can potentially serve in security systems (Buczak & Guven, 2015), intelligent lighting (Gopalakrishna et al., 2012), wildlife conservation (Di Minin et al., 2018; Duhart et al., 2019), and more. Anomaly detection looks for abnormalities in persistent activities (Chandola et al., 2009). It has many applications in both consumer and commercial markets, such as checking for abnormal vibrations (Turnbull et al., 2021) or temperatures (X. Zeng et al., 2020) to provide early warnings of potential failures and to enable preventive maintenance (Gupta et al., 2015; Yairi et al., 2006).
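The anomaly-detection idea above can be illustrated with a minimal, framework-free sketch: learn the statistics of “normal” sensor readings and flag readings that fall far outside them. The function names and simulated data below are ours, purely for illustration; deployed TinyML anomaly detectors often use learned models (e.g., small autoencoders) rather than a fixed statistical threshold.

```python
import numpy as np

def fit_baseline(samples):
    """Learn the mean and spread of normal sensor readings."""
    return float(np.mean(samples)), float(np.std(samples))

def is_anomaly(reading, mean, std, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(reading - mean) > threshold * std

# Simulated "normal" vibration amplitudes from a healthy machine
rng = np.random.default_rng(0)
normal = rng.normal(loc=1.0, scale=0.1, size=500)

mean, std = fit_baseline(normal)
print(is_anomaly(1.02, mean, std))  # a typical reading
print(is_anomaly(2.50, mean, std))  # far outside the normal band
```

On a microcontroller, the same gate would run continuously over a rolling window of sensor samples, raising a preventive-maintenance alert only when readings drift out of the learned band.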
3.2. TinyML for Applied ML. An applied-ML engineer should have full-stack experience to appreciate the end-user impact of the various stages of the ML-development workflow. In prototypical ML, such as training large neural-network models in the cloud, learners are unable to participate locally in end-to-end ML development. For example, it is impossible to require them to collect millions of images, akin to ImageNet (Deng et al., 2009), for large and complex tasks such as general image classification. Even more difficult is asking all learners to buy the computational resources to train a complex ML model and then evaluate its performance in the real world.2
By contrast, the small form factor and domain-specific tasks of TinyML enable the full ML workflow, starting from data collection and ending with model deployment on embedded devices. Students thereby gain a unique experience. For example, to implement keyword spotting in their native language, course participants learn to collect their own speech data, train a model on that data, deploy it in an embedded device, and test the device in their community.

2A model may perform well on a test data set and still perform poorly in the real world.

Figure 4. The number of courses is disproportionate to the number of systems.
Such activities create an immersive learning experience. They are feasible with TinyML because they require only about 30–40 samples3 of spoken keywords, which are easy to collect (only from people who give their explicit consent) using a laptop with a web browser and microphone. Learners can then train the model using Google’s free Colab environment (Bisong, 2019) and deploy it in a TinyML device using TensorFlow Lite for Microcontrollers (David et al., 2020) or another open-source software technology. This approach allows small keyword-spotting models (about 16KB) to run efficiently on low-cost, highly constrained hardware (less than 256KB of RAM) (Warden, 2018).
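A back-of-the-envelope calculation shows why a keyword-spotting model of this size fits comfortably within such memory limits. The layer shapes below are hypothetical, chosen only to illustrate the arithmetic of counting parameters; the course's actual models are defined in TensorFlow.

```python
# Sanity check: can a tiny keyword-spotting model fit on a microcontroller
# with less than 256KB of RAM? Layer sizes here are hypothetical.

def dense_params(n_in, n_out):
    """Weights plus biases for a fully connected layer."""
    return n_in * n_out + n_out

# A hypothetical model: flattened audio spectrogram (49 x 40 = 1960 values)
# -> small hidden layer -> a handful of keyword classes.
params = dense_params(1960, 8) + dense_params(8, 4)

# With 8-bit (1-byte) quantized weights, the model size is roughly the
# parameter count in bytes.
model_kb = params / 1024
print(f"{params} parameters ~= {model_kb:.1f} KB quantized")
```

The result, on the order of 15 KB, is consistent with the roughly 16KB models described above, leaving most of a 256KB device free for the runtime, buffers, and application code.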
3.3. TinyML for Expanding Access. The most challenging task in expanding applied-ML access is making low-cost hardware available anywhere. Prior art estimates the number of costly cloud servers to be on the order of 10 million (Reddi, 2017). Cloud-ML technologies cost thousands of dollars, and their physical power, scale, and operational requirements limit their accessibility. Mobile-ML devices are relatively more affordable and pervasive. Google announced that, as of May 2021, there were over three billion active Android devices (Google, 2021b), which, combined with the one billion iPhone users (Apple, 2016), totals over four billion devices.
While smartphones are generally more pervasive, their availability is still limited because of network-infrastructure conditions and other factors. Cost remains a significant barrier in many low- and middle-income countries (LMICs) (Bahia & Suardi, 2020). Statista estimates only 59.5% of the world’s population has Internet access, with large offline populations residing in both India and China (Johnson, 2021). According to Pew Research, 76% of individuals in advanced economies have smartphones compared with 45% in emerging economies. Students and teachers in many developing countries lack the resources needed to learn and use traditional ML.
In contrast, TinyML devices are low cost and pervasive. They are readily accessible, enabling hands-on learning anywhere in the world, and their portability eases demonstration of the complete applied-ML workflow in a realistic setting. Furthermore, TinyML applications are more numerous and easier to deploy than mobile- and cloud-ML applications. However, despite the wide availability of tiny devices, there is little material for teaching TinyML (see Figure 4). The number of general ML courses far exceeds the number of TinyML courses (or, more generally, embedded-ML courses).
3For the specific task of keyword spotting as an immersive learning experience, 30–40 samples of 2-4 keywords should be enough to train a model from scratch to do a fairly decent job of disambiguating those few words for one speaker. We do not mean to suggest that this would suffice for a production system.

Figure 5. The ML workflow from data collection to model training to inference. The spiral course design focuses on the neural-network model in Course 1, model application in Course 2, application deployment in Course 3, and, finally, Course 4 “closes the loop” by covering ML operations (MLOps) which enable the scaled management, deployment, and continual improvement of TinyML applications.
4. An Applied-TinyML Specialization
We developed an applied-ML course specialization focusing on TinyML. Our specialization provides multiple on-ramps to enable a diverse learner population. Moreover, because TinyML is easy to deploy on hardware and test in the real world, it allows us to systematically explore applied ML’s vast design space (algorithms, optimization techniques, etc.). It also lets us incorporate responsible AI in all four ML stages: design, development, deployment, and management at scale, which we discuss in greater depth in Section 5. We hope our description of this applied-ML specialization serves as a roadmap for anyone wishing to adopt the program.
4.1. A Four-Course Spiral Design. The TinyML specialization comprises three foundational courses and one advanced course, which we consider optional. Participants would ideally start with the first course and work through the natural progression, but we allow them to go in any order they choose. Depending on their background, they can skip some courses and take the one most relevant to their knowledge and expertise.
We structured our course using a spiral design (Bruner, 2009) in which key concepts are presented repeatedly, with increasing complexity, as students progress through the courses. This design allows us to continuously reinforce key concepts, provide multiple on-ramps to the specialization depending on student background, and support well-scaffolded hands-on exercises throughout the specialization. As we mentioned earlier, our application-focused spiral design covers the complete ML workflow, going outward from the middle. The curriculum begins with neural networks for TinyML in Course 1, expands to cover the details of TinyML applications in Course 2, then deploys full TinyML applications in Course 3, and finally considers application management and scaled deployment in Course 4 (Figure 5). Our application focus increases learner engagement and enthusiasm (Yang, 2017) and enables students not only to learn core concepts but also to create their own TinyML applications and deploy them onto physical microcontrollers.
Table 2 shows a breakdown of the courses. Roughly, each course takes 10 to 12 hours a week for 5 to 6 weeks to complete. For a more detailed and up-to-date overview and links to all materials, visit our open-source courseware at https://github.com/tinyMLx/courseware.

4.2. Fundamentals of TinyML (Course 1). Course 1 is titled Fundamentals of TinyML. Its objective is to ensure students understand the “language” of (tiny) ML so they can dive into future courses. TinyML differs from mainstream (e.g., cloud-based) ML in that it requires not only software expertise but also embedded-hardware expertise. It sits at the intersection of embedded ML applications, algorithms, hardware, and software, so we cover each of these topics. As Figure 5 shows, the course focuses on a portion of the complete ML workflow. Moving to subsequent courses, we progressively expand participants’ understanding of the rest of that workflow.
The course introduces students to basic concepts of embedded systems (e.g., latency, memory, embedded operating systems, and software libraries) and ML (e.g., gradient descent and convolution). The first portion emphasizes the relevance of embedded systems to TinyML. It describes embedded-system concepts through the lens of TinyML, exploring the memory, latency, and portability tradeoffs of deploying ML models in resource-constrained devices versus deploying them in cloud- and mobile-based systems.
The second portion goes deeper by focusing on the theory and practice of ML and deep learning, ensuring students gain the requisite ML knowledge for later courses. Students explore central ML concepts through hands-on coding exercises, training ML models to perform classification using Python and the TensorFlow library in Google’s Colaboratory programming environment. Although we teach with TensorFlow, the lessons students learn and practice apply broadly outside of TensorFlow and the free Google ecosystem we leverage. For instance, the fundamentals students learn with TensorFlow are also relevant to other popular frameworks such as PyTorch; the only change students must adapt to is the application programming interface (API).
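To make the training loop concrete, here is a framework-free sketch of gradient descent, the optimization procedure underlying these exercises. It fits a single weight w in the toy model y = w * x by repeatedly stepping against the gradient of the mean squared error; the data and hyperparameters are illustrative values of our choosing, not course material.

```python
import numpy as np

# Gradient descent on a one-parameter model y = w * x, minimizing
# mean squared error over a tiny data set.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x          # ground truth: w should converge toward 2.0

w = 0.0              # initial guess
learning_rate = 0.01
for _ in range(500):
    error = w * x - y
    gradient = 2 * np.mean(error * x)   # d/dw of mean((w*x - y)^2)
    w -= learning_rate * gradient       # step against the gradient

print(round(w, 3))  # → 2.0
```

A framework like TensorFlow or PyTorch automates exactly this loop (computing gradients and applying updates) for models with millions of parameters, which is why the same fundamentals transfer across APIs.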
We provide an overview of embedded systems and ML to ensure students recognize that the topics we cover in the specialization are relevant to their lives and careers, boosting motivation and retention (Dyrberg & Holmegaard, 2019; Wladis et al., 2014). For those with sufficient ML and embedded-systems experience, Course 1 is optional. By designing the series with these multiple on-ramps, we can meet participants wherever they are, regardless of their background and expertise.

4.3. Applications of TinyML (Course 2). The objective of the second course is to give learners the opportunity to see practical (tiny) ML applications. Nearly all such applications differ from traditional ML because TinyML is all about real-time processing of time-series data that comes directly from sensors. As Figure 5 shows, we help students understand the complete end-to-end ML workflow by including additional stages, such as data preprocessing and model optimization. Moreover, when we revisit the same stages (e.g., model design and training), we employ spiral design to broach advanced concepts that build on Course 1.
Course 2 examines ML applications in embedded devices. Participants study the code behind common TinyML use cases, such as keyword spotting (e.g., “OK Google”), as well as how such front-end, user-facing technologies integrate with more complex smartphone functions, such as natural-language processing (NLP). They also examine other industry applications and full-stack topics, including visual wake words, anomaly detection, data-set engineering, and responsible AI.
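The cascade pattern behind such use cases, where a tiny always-on model gates a heavier back end, can be sketched in a few lines. Everything here (function names, score fields, the threshold) is a hypothetical placeholder; in a real deployment the gate would be an on-device keyword-spotting model and the second stage a smartphone or cloud NLP service.

```python
# Sketch of a cascade: a cheap, always-on keyword spotter scores every
# audio frame, and only frames that clear the threshold reach the
# expensive full pipeline. All names and scores are placeholders.
WAKE_THRESHOLD = 0.8

def tiny_keyword_score(frame):
    """Stand-in for an on-device model returning P(wake word)."""
    return frame.get("score", 0.0)

def run_full_pipeline(frame):
    """Stand-in for the heavyweight stage (e.g., cloud NLP)."""
    return f"processed:{frame['id']}"

def cascade(frames):
    results = []
    for frame in frames:
        if tiny_keyword_score(frame) >= WAKE_THRESHOLD:  # cheap gate
            results.append(run_full_pipeline(frame))     # expensive stage
    return results

frames = [{"id": 0, "score": 0.1}, {"id": 1, "score": 0.95}, {"id": 2, "score": 0.4}]
print(cascade(frames))  # only the high-scoring frame reaches the full pipeline
```

The design choice is economic as much as algorithmic: the tiny gate runs continuously within a sub-milliwatt budget, so the power-hungry stage wakes only when it is likely to be needed.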
