Wednesday, March 4, 2026

Cornell students’ 2025 Pico projects

This past Maker Monday, we shared a selection of Raspberry Pi Pico projects from the latest issue of Raspberry Pi Official Magazine. This reminded us that it’s that time of year when Professor V. Hunter Adams gets in touch to tell us what his Electrical and Computer Engineering students at Cornell University have built. Each cohort learns by creating (often fun and silly) things with a Raspberry Pi Pico.

Cornell’s entire Digital Systems Design Using Microcontrollers course was written around RP2040, the chip embedded in our Pico boards. The Raspberry Pi Pico range is perfect for these sorts of projects — it’s tiny, fast, and versatile enough for both beginners and more experienced users. Here are just a few of the projects from last semester:

Pico-Pasture: A cow herding simulation

This project features lovingly generated pixel art of cows (plus some barn buildings, hay, and background animations). Pico-Pasture is a model-based simulation of cattle movement over time, inspired by the student team’s shared love for cows.

PicoChess

PicoChess allows you to play chess against your Pico using a physical chessboard. Your computer opponent detects the magnetic chess pieces on the board via a series of reed switches, then asks the ‘Chess Engine’ to identify which piece was lifted and to supply all legal moves for that piece.

Monkey tower defence

The student team behind this project wanted to explore how tangible interactions can make technology more engaging and inclusive for people who learn best through tactile experiences; they turned a beloved online tower defence game into a real-life encounter with a 3D-printed mechanism.

Penny the Plotter

We love a good pen-plotter project, and this one utilises a differential swerve-drive robot and Wi-Fi to draw geometric patterns. The sophisticated omnidirectional movement capabilities of the robot allow it to draw complex paths, including curves and sharp turns. It’s quite the artist.

Heat-seeking quadruped robot

This Raspberry Pi Pico W–controlled robot features three servos per leg, integrated environmental sensing, and a wireless Wi-Fi controller. The custom 3D-printed frame is capable of environmental mapping and target detection with the help of several sensors, including a solid-state LiDAR sensor and a contactless infrared sensor.

Earie: TinyML audio localisation

Earie is such a clever and complex project that I’m just going to gracefully leave it to Professor V. Hunter Adams to explain: “These students built a fake head with ears, then trained a neural network to localise audio with those ears. Really hard, and really interesting, because this is probably (?) pretty much how our brains do it.” We’ll take your word for it, Prof.

If you’d like to see all of the Pico projects created since the course began, they’re compiled in this handy YouTube playlist. You can also find all of the students’ own write-ups for the projects, dating back to 2022, here.

The post Cornell students’ 2025 Pico projects appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/X3T170L


Monday, March 2, 2026

Raspberry Pi Pico projects

To inspire you this #MakerMonday, we’re showcasing just a few of the best Raspberry Pi Pico projects around — from information dashboards, IoT sensors, and LED lighting to robots, drones, musical instruments, and wearables.

If you need a tiny, low-cost microcontroller board with ultra-low power drain to embed in a project, Raspberry Pi Pico is ideal. It comes in several flavours, depending on how much processing power you need and whether you require wireless connectivity. All models come with a couple of bonus features: analogue inputs and PIO (programmable input/output) state machines that can handle some tasks in the background.
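As a quick taste of those analogue inputs, here’s a hedged sketch of reading one in MicroPython. The conversion below is the one the RP2040 datasheet gives for the on-chip temperature sensor (ADC channel 4); the code runs as plain Python so you can see the maths, with the Pico-specific call shown in a comment.

```python
# Convert a 16-bit ADC reading into a temperature, using the conversion
# formula from the RP2040 datasheet for the on-chip temperature sensor.
# On a Pico you would obtain the raw value with:
#   machine.ADC(4).read_u16()
VREF = 3.3  # Pico's ADC reference voltage

def adc_to_celsius(raw_u16):
    voltage = raw_u16 * VREF / 65535          # scale 0..65535 to 0..3.3 V
    return 27 - (voltage - 0.706) / 0.001721  # datasheet formula

# A raw reading near 14021 corresponds to ~0.706 V, i.e. roughly 27 °C
```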

Information dashboard

Fetch online data and show it on a display

Simple weather dashboard

With its wireless connectivity, Raspberry Pi Pico W (or 2 W) can retrieve information from the World Wide Web, such as the current weather conditions. A weather dashboard is a popular Pico project, and Pete Cornish’s simple example is easy to replicate. All you need is a Pico W and a small screen to show the info — he’s used a Waveshare 2.13-inch e-ink display.

To connect Pico W to the web, you’ll need to add your wireless router’s name and password to a config file. In this example, the latter also contains the latitude and longitude of a set location for local weather info. To fetch data, you’ll usually need to obtain a key for the API of whatever online database you’re using — in this case, OpenWeatherMap.
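To make the config-plus-API-key idea concrete, here’s a minimal sketch. The config keys below are hypothetical stand-ins for whatever your own config file holds, but the endpoint and the lat/lon/appid parameters are OpenWeatherMap’s standard current-weather API.

```python
# Hypothetical config values; replace with your own network details,
# location, and OpenWeatherMap API key.
config = {
    "ssid": "MyNetwork",        # wireless router name
    "password": "secret",       # wireless router password
    "lat": 52.2053,             # set location: latitude
    "lon": 0.1218,              # set location: longitude
    "api_key": "YOUR_OWM_KEY",  # key obtained from OpenWeatherMap
}

def weather_url(cfg):
    """Assemble an OpenWeatherMap current-weather request URL."""
    return (
        "https://api.openweathermap.org/data/2.5/weather"
        f"?lat={cfg['lat']}&lon={cfg['lon']}&appid={cfg['api_key']}&units=metric"
    )

# On a Pico W you would join the network with the `network` module, then
# fetch this URL with `urequests` and parse the JSON response.
```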

Inky Dashboard

As showcased in issue 152 of Raspberry Pi Official Magazine, Jaeheon Shim’s Inky Dashboard features a calendar and a to-do list. It runs on a Raspberry Pi Pico–powered 7.3-inch Inky Frame colour e-ink display. Since the calendar layout uses the LVGL graphics library, modifying the code to work with other e-ink displays shouldn’t be too difficult.

Calendar data is fetched from iCal, but it can work with other services such as Google Calendar or Microsoft Outlook. Since Pico W can’t handle this directly, it fetches data via a server running on a computer or hosted in the cloud. As Pico W only fetches data every 30 minutes or so, going into a deep sleep in between, it’s a very power-efficient project.

IoT sensor

Use Pico as an IoT device for sensor data

Texting pot plant

Sandeep Mistry’s project combines a Raspberry Pi Pico W with a Pimoroni Grow Kit to monitor the moisture level of soil. If it’s too dry, a text notification is sent — using the Twilio API with a free account — to the owner’s phone to remind them to water it. For some extra personality, Sandeep suggests adding random messages, along with a light sensor to detect sunrise/sunset and say ‘good morning/night’ accordingly.

Alternatively, you could set up a Pico-based self-watering system like the one created by Veeb. If its sensor detects dry soil, it triggers a relay switch to activate a fish-tank water pump to squirt some much-needed H2O into the pot.

IoT dashboard

With one or more sensors connected to a Raspberry Pi Pico W, you can monitor the local environment for aspects such as temperature, atmospheric pressure, and humidity. The data collected can then be used however you want — in this project by Mahmood M Shilleh, it’s sent to an IoT dashboard in the cloud that can be accessed from any device.

There are numerous services and methods that can be used for this; Mahmood opted for ThingSpeak, a popular open source IoT platform. In his guide, he shows how to set up a Pico W to send data from a BME280 sensor to ThingSpeak by generating an API key and creating channels for the data feeds to show them on a web dashboard.
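A rough sketch of the ThingSpeak side of that setup: readings are pushed to a channel via ThingSpeak’s HTTP update API, which takes a write API key and numbered field parameters. The key and field assignments below are placeholders; the fields must match however you’ve configured your own channel.

```python
def thingspeak_update_url(api_key, temperature, pressure, humidity):
    """Build a ThingSpeak channel-update URL for three BME280 readings.

    field1..field3 here are illustrative; use whichever field numbers
    you created for your channel's data feeds.
    """
    return (
        "https://api.thingspeak.com/update"
        f"?api_key={api_key}"
        f"&field1={temperature}&field2={pressure}&field3={humidity}"
    )

url = thingspeak_update_url("WRITE_KEY", 21.4, 1013.2, 48.0)
# On a Pico W you would fetch this URL with urequests after connecting
# to Wi-Fi; ThingSpeak responds with the new entry number.
```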

Robots and drones

Make mechanical marvels with Pico

PicoSMARS

Pico’s tiny footprint means it can be used in much smaller robots than a standard Raspberry Pi computer. To prove the point, RoboticBits even made one with a cut-down Pico board equipped with beads for wheels.

They don’t have to be small, however. Kevin McAleer’s PicoSMARS robot is a more standard-sized robot car with a Raspberry Pi Pico for a brain. ‘SMARS’ is short for ‘Screwless Modular Assemblable Robotic System’, a 3D-printable robot that’s modular and customisable. Pico is connected to a motor board to drive the motors, as well as an ultrasonic distance sensor to detect obstacles. Code and STL files for 3D printing can be found via this link.

Quadcopter drone

Again, Pico’s small size and weight (4g without pin headers attached) make it perfect for use in drones. Created by Tim Hanewich, this impressive quadcopter drone uses a Pico as a flight controller, reading telemetry from an MPU-6050 accelerometer and gyroscope, interpreting radio commands from an on-board receiver, and controlling four independent motors through an electronic speed controller (ESC) using pulse-width modulation (PWM).

The project may look a little daunting to recreate, but Tim has written a free and open source twelve-part guide to the build process, including MicroPython coding and testing, to help you achieve stable flight.
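To give a flavour of the ESC side of such a build (not Tim’s actual code): a standard hobby ESC expects a pulse of roughly 1000 µs (idle) to 2000 µs (full throttle), repeated at around 50 Hz. MicroPython’s `machine.PWM` takes a 16-bit duty value, so the pulse width maps to `duty_u16` like this.

```python
PERIOD_US = 20_000  # one 50 Hz PWM frame in microseconds

def throttle_to_duty_u16(throttle):
    """Map throttle 0.0..1.0 to a 16-bit duty for a 1000-2000 us pulse."""
    throttle = min(max(throttle, 0.0), 1.0)  # clamp for safety
    pulse_us = 1000 + 1000 * throttle
    return round(pulse_us / PERIOD_US * 65535)

# On the Pico (MicroPython), with the pin choice purely illustrative:
#   pwm = machine.PWM(machine.Pin(2))
#   pwm.freq(50)
#   pwm.duty_u16(throttle_to_duty_u16(0.25))  # quarter throttle
```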

Model railway

Choo-choose Pico to stay on track

Automated model railroad

Raspberry Pi boards have been integrated into many hobbyists’ model railway setups to enable features such as smart, controllable lighting. This project features a ‘sensored track’ equipped with IR proximity sensors that detect a train passing. Raspberry Pi Pico acts as the brains, reading the sensors and controlling the track voltage — via a motor driver board — to alter the speed of a locomotive using pulse-width modulation (PWM). In this way, it can make it speed up, slow down, or come to a halt.
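The speed-control idea boils down to varying the PWM duty cycle fed to the motor driver, which sets the effective track voltage. A minimal sketch, with the pin and frequency chosen for illustration rather than taken from the project:

```python
def speed_to_duty_u16(speed_percent):
    """Map a 0-100% locomotive speed to machine.PWM's 16-bit duty range."""
    speed_percent = min(max(speed_percent, 0), 100)
    return round(speed_percent / 100 * 65535)

# On the Pico itself (MicroPython):
#   pwm = machine.PWM(machine.Pin(0))
#   pwm.freq(1000)
#   pwm.duty_u16(speed_to_duty_u16(60))  # roughly 60% track voltage
# Ramping the duty up or down over time makes the train accelerate,
# slow down, or come to a halt.
```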

Level crossing lights

By connecting Raspberry Pi Pico to a set of tiny level crossing lights for a model railway, this project by Pater Practicus (aka Brendan McGrath) detects passing trains and determines when it’s safe for the model cars or pedestrians to cross. As such, it combines two of Pico’s popular uses: the ability to control LED lighting and take readings from a sensor.

It’s been upgraded from the original work-in-progress version on a wooden track to a more polished setup for a powered railroad, yet still only costs £23 ($32) to make. The build process, including wiring, MicroPython coding, and making the flashing signs, is covered in the linked YouTube video.

Gaming

Pico is ideal for a handheld console

Tiny game console

Maker Twan37 took only a few days to make this Raspberry Pi Pico game console. It’s more of a proof-of-concept and lacks a built-in battery, but is impressive nonetheless and shows what kind of handheld you can easily make with Pico. Housed in a 3D-printed case, Pico is connected to a monochrome 0.96-inch I2C OLED display and a few push-buttons for controls, four of them comprising a makeshift D-pad.

In order to demonstrate the compact console working, the maker created a couple of simple games in MicroPython — clones of Snake and Tetris — along with a maze generator.

PicoZX handheld

An altogether more sophisticated gaming project, this Sinclair ZX Spectrum–emulating handheld console is based around several custom PCBs, with a Raspberry Pi Pico soldered to the main board. Inspired by Peter Misenko’s original PicoZX Spectrum emulator, maker Ken St Cyr (from the YouTube channel What’s Ken Making) even created a miniature QWERTY keyboard that sits below a 2.8-inch colour IPS display. There’s also an on-board D-pad, though it can be used with a separate joystick as well. Ken’s YouTube video documents the whole build process if you fancy having a go.

Music

Make beautiful music with Pico

PicoStepSeq

While Pico doesn’t have an on-board audio output, it can be very useful for music projects. Showcased in issue 123 of Raspberry Pi Official Magazine, this one is a compact eight-step MIDI sequencer with eight lever switches (equipped with status LEDs), a mini I2C OLED screen, a rotary encoder, and a Raspberry Pi Pico in a 3D-printed case.

Maker Tod Kurt says it’s designed as a potential DIY kit for people with beginner-level soldering skills — except for the two MIDI ports, all of the parts are of the through-hole type. All the details and code can be found in the GitHub repo.

MIDI gesture controller

Another MIDI project using Raspberry Pi Pico is GaryRigg’s musical expression pedal for a guitar (or other instrument), which rotates and rolls around a ball joint, its position being read by a six-axis AHRS IMU sensor. Unlike a standard pedal, this enables it to control three parameters at once using yaw, pitch, and roll, meaning far more musical effect variations can be produced. It can also operate as a hand controller, so could be used by DJs or in a studio. For more details, see the showcase in issue 149 of Raspberry Pi Official Magazine.

Input device

Build a Pico-based controller

Gamepad 2.0

Microcontrollers like Pico are ideal for powering a gamepad or other control device. You could try building your own from scratch, but this kit from Kevin McAleer makes it a lot easier and is ideal for controlling robots via Bluetooth or Wi-Fi.

The custom PCB enables you to simply solder a Raspberry Pi Pico W or 2 W onto it, along with eleven tactile switches for two D-pads and three UI buttons. There are also connections for a LiPo battery and an optional mini OLED display. A MicroPython library makes programming easy.

An alternative kit is the Alpakka 1, which is based around an RP2040-powered module.

Macro pad

A macro pad can be a very useful addition to a computer setup, enabling you to easily trigger custom keyboard shortcuts and sequences with the press of a single key.

There are some RP2040/Pico-based macro pad kits available from the likes of Pimoroni and Adafruit. Or you can build your own, like this nine-key example made using a Raspberry Pi Pico, an Arduino Nano R3, some Gateron switches, and a few keycaps. Programmed in CircuitPython, it enables you to record macros to trigger with each key.

Alternatively, you can even build a full keyboard powered by Pico, such as with the pi40 kit.

Wearables

Pico projects you can wear

Cyber Glasses

Pico’s small size and low power drain make it ideal for wearable projects, such as adding LED lighting to cosplay outfits — or hats, like the famous Raspberry Pi Beret. It’s also been used for interactive conference badges, including Pimoroni’s Badgerware range.

Kevin McAleer, on the other hand, opted to bling up some specs for his hackable Cyber Glasses. A Raspberry Pi Pico is mounted onto one arm of the 3D-printed glasses, along with a servo that moves a monocle-like NeoPixel ring in front of the wearer’s right eye.

Pip-Boy 2040

Along with the official Raspberry Pi Pico product line, there are numerous third-party boards that make use of the same RP2040 and RP2350 chips. This project from Adafruit uses the firm’s Feather 2040 board, for example. Inspired by the game Fallout, it’s a wrist-mounted prop ready for the apocalypse. It features a rounded rectangular IPS TFT display in a 3D-printed case.

Using the demo software written in CircuitPython, you can switch between graphic screens using the directional buttons, and move the cursor with the mini joystick. With a bit of imagination, you could easily create more interactive programs for it.

LED lighting

Shine a light with some LEDs

Christmas lights

Strings or strips of individually addressable LEDs are perfect for creating all sorts of funky multicoloured effects. Our LED lighting expert Ben Everard recently put together a two-part guide on controlling Christmas lights with Pico, starting in issue 159 of Raspberry Pi Official Magazine. It’s easy to connect a NeoPixel (aka WS2812B) strip with just three wires: power, ground, and data-in. If you have a whole lot of NeoPixels (more than around 30), though, you may need to inject some extra power into the circuit to light them all up at full brightness.
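A quick back-of-envelope calculation shows why longer strips need that extra power. Each WS2812B pixel draws up to roughly 60 mA at full white; that 60 mA figure is the commonly quoted worst case rather than a measurement of any particular strip.

```python
MA_PER_PIXEL_FULL_WHITE = 60  # commonly quoted worst case per WS2812B

def strip_current_ma(num_pixels, brightness=1.0):
    """Estimate worst-case strip current draw in milliamps."""
    return num_pixels * MA_PER_PIXEL_FULL_WHITE * brightness

# Around 30 pixels at full white already approaches 1.8 A, far more
# than you should pull through the Pico's own 5 V pin, hence injecting
# power directly into the strip.
```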

Stair lights

Adding LED lighting to a staircase has long been a popular lighting project for Raspberry Pi devices. Craig Robertson’s version makes use of a Raspberry Pi Pico and a 30-pixel NeoPixel strip. A passive infrared sensor (PIR) detects a person approaching and lights the way for them. He also added a light-dependent resistor (LDR) to monitor the ambient light level so that the lights are only triggered when it’s dark, not in daylight. He programmed Pico to light the LEDs in sequence, fading them from a colour to white.
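That colour-to-white fade can be sketched as a sequence of RGB frames that you write to the strip one at a time. This is an illustrative reconstruction, not Craig’s code; the starting colour and step count are arbitrary.

```python
def fade_to_white(colour, steps):
    """Return a list of RGB tuples blending `colour` linearly to white."""
    frames = []
    for i in range(steps + 1):
        t = i / steps  # 0.0 at the start colour, 1.0 at pure white
        frames.append(tuple(round(c + (255 - c) * t) for c in colour))
    return frames

frames = fade_to_white((0, 80, 255), 10)
# frames[0] is the original colour and frames[-1] is (255, 255, 255);
# on the Pico you would assign each frame to a neopixel.NeoPixel object
# and call write(), with a short sleep between frames.
```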

The post Raspberry Pi Pico projects appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/IhGfq5R


Thursday, February 26, 2026

When and why you might need the Raspberry Pi AI HAT+ 2

Our friends at Hailo wrote this article about how to make the most of the Raspberry Pi AI HAT+ 2, pinpointing some of their favourite generative AI use cases.

The Raspberry Pi AI HAT+ 2 is the official generative AI PCIe add-on for Raspberry Pi 5, released on 15 January 2026. It pairs a Hailo-10H AI accelerator capable of up to 40 TOPS of inference performance (INT4) with 8GB of dedicated on-board LPDDR4X memory, enabling local vision and small generative AI workloads on one of the most popular single-board computers ever made.

This hardware combination is designed to enable efficient on-device generative AI while allowing the AI HAT+ 2 to operate within edge device requirements. These include low power consumption, no cloud connectivity, low latency, and maximum data privacy. However, as with any embedded hardware, performance trade-offs matter: edge devices are limited in memory, compute resources, and power budget (typically single-digit W).

For this reason, generative AI applications that require general world awareness, continuous learning, or conversations based on extensive context and knowledge-heavy reasoning are better suited to run in the cloud. For latency-sensitive, privacy-critical, knowledge-confined applications, the new AI HAT+ 2 is an ideal fit.

Let’s break down when and where the AI HAT+ 2 is most powerful, and why it’s not just another niche gadget.

Where the AI HAT+ 2 really excels

The AI HAT+ 2 is strongest when running workloads that are compute-heavy up front, rather than workloads that are dominated by token-by-token (TBT) generation. In practice, this means it shines when you need the Raspberry Pi’s CPU to be available and responsive while running generative AI applications with the following profiles:

  1. Fast execution of encoders — when turning a visual, audio, or text input into a prompt embedding
  2. Short time to first token (TTFT)* — when interactivity and user experience are critical
  3. Large prefill — when the input context is larger than the output response
  4. Multi-stage pipelines — when sequential processing is needed, in which the output of one model becomes the input of the next

*Example benchmark figures for 96 prefill tokens; the Raspberry Pi 5 CPU figures were measured using llama.cpp:

Model             | Raspberry Pi 5 CPU | Hailo-10H
QWEN2.5-1.5B-4int | 2039 ms            | 320 ms

Ideal use cases

Vision-language models (VLMs)

VLMs map naturally to the AI HAT+ 2’s strengths, as the image encoder is a high-compute stage that generates compact token embeddings as output. The Hailo-10H accelerator enables event triggering, logging, indexing, captioning, and smart searching with free text, using a 2B-parameter model that would be prohibitively slow to run on the Raspberry Pi’s CPU alone.

We can think of countless applications in home security and surveillance, such as turning off your alarm when your package is being delivered and notifying you once the delivery is complete, or sending you a log of meaningful pet-monitoring events at the end of each day. The AI HAT+ 2 is also ideal for security and monitoring applications in industries like quality assurance, healthcare, and industrial automation.

Voice to action

Another strong application of the AI HAT+ 2 is a local voice-to-action agent, combining high-compute inference with relatively low-bandwidth interaction. These workflows often rely on a large prefill step, i.e. processing a big, changing input context before generating a short response, which can be much slower on the Raspberry Pi’s CPU alone. This is particularly useful for agents that continuously ingest fresh data (including sensor readings, device states, logs, schedules, and recent events) and then respond locally with a short command or action.

The full sequential pipeline first converts free speech to text using a Whisper-class model, after which a small LLM handles intent understanding, decision-making, and natural free-text interaction, triggering real-world actions locally and reliably. This architecture enables agentic AI and physical AI at the edge by supporting larger Whisper models for improved accuracy, delivering low-cost, responsive, privacy-preserving, real-time voice control for a seamless user experience.
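As a toy stand-in for the final stage of that pipeline: after speech-to-text, something has to turn free text into a local action. In a real build that role falls to a small LLM running on the Hailo-10H; the keyword matcher below merely illustrates the text-in, command-out shape of the stage, with the actions and keywords entirely made up.

```python
# Hypothetical mapping from keywords to local device commands.
ACTIONS = {
    "light": "toggle_lights",
    "temperature": "read_thermostat",
    "door": "unlock_door",
}

def text_to_action(transcript):
    """Map a transcribed utterance to a device command, or None."""
    words = transcript.lower().split()
    for keyword, command in ACTIONS.items():
        if any(keyword in word for word in words):
            return command
    return None

# text_to_action("turn the lights off") yields "toggle_lights"; an
# unrecognised request falls through to None rather than acting.
```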

There are endless applications here too. For example, local voice to action enables natural, touchless control of devices, eliminating the need to navigate between elaborate menus and submenus or flip through tedious manuals. Another example application is intuitive wayfinding and navigation in public spaces, such as shopping centres, airports, and campuses, where users can state what they want to do rather than the exact location they need to find (e.g. “Where can I buy sunglasses?”, “Where can I get lunch?”, or “How do I reach my gate?”). In robotics and industrial systems, voice to action can facilitate more responsive human–machine interactions and more seamless cooperation.

Advanced vision applications

When it comes to demanding vision workloads, the AI HAT+ 2 enables a step change in performance. Its high compute power and efficient on-device execution translate directly into large performance gains — as much as 100% faster than the previous Raspberry Pi AI HAT+.

The Hailo-10H chip accelerates large convolutional neural networks (CNNs) and transformer-based vision models, including CLIP, zero-shot detection, and high-capacity object detectors, enabling richer perception without increasing bandwidth or power. This makes it possible to build physical AI systems that combine multiple vision stages — detection, embedding, semantic matching, and reasoning — entirely at the edge, unlocking more capable and responsive applications in home automation, security, robotics, retail, industrial automation, and more. With no cloud connectivity, no data leaves the device, and there are no network lags or costs.

Play to its strengths

The Raspberry Pi AI HAT+ 2 is at its most powerful when certain strengths are harnessed for the right applications. Some examples include:

Strengths | Ideal use cases
Free text operation without cloud dependency | Offline home automation and robotics
Small language outputs for event triggering, captioning, and summarisation on top of real-time vision | Home security
Air-gapped generative summarisation of logs and sensor data | Secure industrial monitoring
Natural speech and zero-queue interaction with information agents | Information kiosks

Bottom line: Don’t ask your toaster for history lessons…

The Raspberry Pi AI HAT+ 2 isn’t designed to compete with cloud inferencing; large LLMs will always run better where compute and memory are effectively unconstrained. However, for edge scenarios that value privacy, offline operation, low latency, and low power consumption, it unlocks real capabilities that weren’t feasible on the Raspberry Pi platform before, with or without the original AI HAT+.

You will make the best use of it when you need to run tightly scoped, on-device generative tasks alongside vision or real-world sensor input, particularly when the alternative is cloud dependency or far larger and more expensive hardware.

The robust Hailo Community has thousands of active developers. Recent integrations with Frigate and Home Assistant make the AI HAT+ 2 the most attractive option for anyone looking to take their first steps in physical AI and home automation.

The post When and why you might need the Raspberry Pi AI HAT+ 2 appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/f0G3mXB


Monday, February 23, 2026

Meet the organiser of one of the longest-running Raspberry Pi community events

An automation engineer discovered the joy of quick prototyping on Raspberry Pi, leading him to organise one of the longest-running community events. This #MakerMonday, we meet Richard Kirby, the Raspberry Pi community member behind Raspberry Pint.

Automation is something a lot of Raspberry Pi enthusiasts are into, but this month’s subject, Richard Kirby, takes it to a whole other level: “My work time is consumed as a test manager at a multinational company that develops and deploys large railway automation systems throughout the world,” he tells us. “The system is largely automated, with signallers and train operators only intervening as needed — this includes automated driving of the trains.”

A lot of his work has involved the London Underground, and after moving to the city from Canada for a ‘two-year stint’ in 2009, he’s stuck around. His projects have been featured in Raspberry Pi Official Magazine before — Pi Fighter was a particular highlight — and he also organises the Raspberry Pint meetup in London.

What is your history with making?

In 2014, my daughter asked for a bit of help with a school project. She had decided to use a Raspberry Pi 1 Model B and was struggling with making it drive a DC motor. I was shocked at how quickly we sorted out a simple DC motor system. The DC motor spun a table tennis ball at different speeds using pulse-width modulation (PWM). The ball was then manually catapulted to measure the Magnus effect. I considered it a major victory, but was brought back down to earth when she complained we had no idea of the rotational speed. We ended up building a Raspberry Pi strobe to determine rotational speed; you increase the flashing frequency until the ball appears stationary. This was all over a weekend. 

Needless to say, I got hooked on the pleasure of quickly building something with a Raspberry Pi, followed by spending ages making it more polished. It’s addictive to get a quick win, followed by a grind to make something good enough to tell others about. It keeps me sane after working on projects that are four to twelve years in duration. 

How did Raspberry Pint start?

Matt Mapleston started Raspberry Pint in a London pub in 2013. I went to my first Pint in 2016 and took over organising the meetings in 2017. I wanted to make sure it wouldn’t fade to black — I had found my people! We have grown since the early days, from a handful of people to hybrid in-person and online meetups with between 30 and 80 people.

A photo from a recent, Christmassy Raspberry Pint

What are some of your favourite Raspberry Pint memories?

There is a lot of joy and fun at Raspberry Pint, as the format is makers telling everyone about their projects. The makers are excited and rightfully proud to explain their creations. We have had a lot of great local presentations of weird and wonderful projects. It’s a relaxed venue where the maker can tell their making story: all their trials and tribulations, eventually ending in sweet victory. Even highly accomplished people like the NASA engineers behind the ISS Mimic project were brimming with an obvious love for what they had built.

I never imagined a great community from around the world joining us to tell us about their experiences. Even the Australian and Japanese makers are getting up very early in the morning to tell us about their builds. It’s a fantastic way to spend a weeknight with a pint. 

A key highlight was Eben Upton joining us for an informal discussion. He was relaxed and engaging, able to answer all the questions on a huge range of topics at a surprising level of detail — everything from detailed design decisions to the stock levels of Raspberry Pi 3B+ across the world. It was during the chip shortage, so it was a hot topic.

The Pi Fighter punching bag, powered by Raspberry Pi

What are some of your favourite Raspberry Pi/Pico creations?

‘Pi Fighter’, featured in The MagPi issue #85, is still a favourite of mine. It gamified heavy bag workouts. It still works, and was a big hit (and took big hits) at the Cambridge Raspberry Jams.

I still regularly use the ‘Talkative Tube Dashboard’ that was featured in The MagPi issue #120. It provides real-time statuses for the Tube lines. It’s a nice background for work video calls, as we can see railway problems in real time.

My current favourite is my ‘Diet Tracker’ project, which I’ve started using seriously again. It tracks what I eat and the number of calories I consume. I have a fairly healthy lifestyle, but I needed some help improving my fitness. Naturally, this meant building something using Raspberry Pi. Two years later, I’m significantly healthier, and I’m now using it to get to the next level by further fine-tuning my food intake, closely monitoring protein, fat, and carbohydrates.

Issue 162 of Raspberry Pi Official Magazine is out now!

You can grab this issue from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available from our online store, which ships around the world. And you can get a digital version via our app on Android or iOS.

You can also subscribe to the print version of our magazine. Not only do we deliver worldwide, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico 2 W!

The post Meet the organiser of one of the longest-running Raspberry Pi community events appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/MZCgL49


Thursday, February 19, 2026

Turn your Raspberry Pi into an AI agent with OpenClaw

The tech corners of the internet are buzzing with talk of OpenClaw, an open source AI agent. I’ve been playing around with it in the Maker Lab at Pi Towers for the past couple of weeks to find out what it’s really capable of.

By now, most of us are familiar with generative AI chatbots such as ChatGPT or Claude. These tools simulate conversation and generate responses based on prompts, using large language models (LLMs) to answer questions, write code, brainstorm ideas, or help analyse information. They’re incredibly useful — like having a knowledgeable assistant you can ask anything.

But traditional chatbots are fundamentally reactive; you ask a question, they respond. They can help you think through a problem, but the actual execution is still up to you.

This is where AI agents come in

OpenClaw takes the same generative AI capabilities and adds the missing piece: action. Instead of just generating text, an AI agent can use tools, run commands, interact with APIs, manage workflows, and carry out tasks on your behalf.

But, as Spider-Man wisely reminds us, with great power comes great responsibility. Installing OpenClaw on your main computer gives it deep access to your system, potentially allowing it to browse websites, fill in forms, and interact with personal data. That level of capability is incredibly powerful, but it can also pose a very real security risk.

Running OpenClaw on a standalone device like a Raspberry Pi is a great way to mitigate these security concerns. You gain isolation, control, and peace of mind, all while benefiting from a system that’s always on, energy efficient, and quietly ‘doing’ in the background.

Installing OpenClaw

On a freshly installed and updated Raspberry Pi OS, run the following terminal command:

curl -fsSL https://openclaw.ai/install.sh | bash

This will install everything you need and take you through the setup process.

Nice day for a wedding photo booth

As an illustrative example, I’ll share my own first OpenClaw experiment using a Raspberry Pi: a wedding photo booth. You know the type — guests step up, photos are taken, and then they’re instantly shared. I’d previously built one myself in Python (though “built” might be generous). It worked, but it wasn’t pretty.

Later, I experimented with ‘vibe coding’, copying and pasting code between ChatGPT and my Raspberry Pi’s file system. The outcome was much better than my original attempt, but it still required a fair bit of time and manual effort.

Finally, I decided to give it the OpenClaw treatment. I installed the agent on my Raspberry Pi 5 (though a Raspberry Pi 4 with 8GB of RAM works well too), added a VPN service (Tailscale integrates seamlessly with OpenClaw), and configured my OpenAI API key as the primary AI provider.

Next, I set up a fresh installation of Raspberry Pi OS on another Raspberry Pi, which I planned to use as the brains of my photo booth with a Raspberry Pi Camera Module 2 attached. I provided OpenClaw with the login credentials for this second Raspberry Pi and asked it to SSH into the device.

From there, I simply chatted with OpenClaw in plain English, explaining exactly how I wanted the photo booth to behave with simple prompts like “Change the font to…”, “Centre the text…”, and so on. Within just a couple of hours, I had created this:

Everything was completed without a single Bash or Python command, and no coding on my part whatsoever. The AI agent created all the necessary files, built the webpage, configured the Wi-Fi hotspot for photo downloads, and set up admin access. From start to finish, it handled everything I needed.

Top tip:

We recommend using a high-quality SD card for your OpenClaw build. Better yet, you could add an M.2 HAT+ and run the OS from an SSD (just use ‘SD Card Copier’ in ‘Accessories’ on Raspberry Pi OS). This makes OpenClaw super snappy.

Using OpenClaw offline

By connecting OpenClaw to a locally hosted model via tools like Ollama, llama.cpp, or LocalAI, all reasoning and processing can happen directly on your Raspberry Pi, keeping your data private, reducing latency, and eliminating API costs. While local AI models may not always match the raw capabilities of large cloud-based models, they excel at fast, iterative tasks and can be combined with cloud providers as an intelligent fallback.

PicoClaw on Raspberry Pi Zero 2 W

While OpenClaw is a powerful AI system for managing workflows and tools, PicoClaw is a slimmed-down agent designed to run locally and execute tasks on minimal hardware, making it perfect for devices like Raspberry Pi Zero, Raspberry Pi Zero 2 W, or Raspberry Pi 3. Since these boards don’t use LPDDR4 memory, you can build an AI agent that’s insulated from supply constraints and price fluctuations in that market.

To try it out, I installed PicoClaw on a Raspberry Pi Zero 2 W and, 30 seconds later, created a test webpage…

The shift towards edge-driven intelligence

Starting with something as simple as hosting webpages quickly shows that OpenClaw is less about replacing tools and more about changing how we interact with them. Tools like OpenClaw, whether they’re used for testing new concepts, managing infrastructure, or supporting real-world deployments, illustrate the potential for shifting inferencing from cloud-based LLMs to low-cost, local devices like Raspberry Pi.

The post Turn your Raspberry Pi into an AI agent with OpenClaw appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/8H5R2fi


Monday, February 16, 2026

Object detection with Ultralytics YOLO26 on Raspberry Pi

In celebration of #MakerMonday this week, we’re taking a look at how well YOLO’s AI models deploy and run on Raspberry Pi. This is an exceptionally in-depth tutorial, so we have just shared part of the installation with you here; the rest of the tutorial can be found in the latest issue of Raspberry Pi Official Magazine, pages 70–77.

YOLO (You Only Look Once) is a powerful object detection model created by Ultralytics that enables you to identify content in images and videos from the command line and Python. From here, you can perform classification and respond to images or videos with your code.

When paired with a Raspberry Pi Camera Module, YOLO forms a powerful means of identifying objects that your Raspberry Pi board can react to; you can use it with sensors and actuators connected to the Raspberry Pi to perform real-time identification and reaction. You can also use it to analyse images and video files.

Using YOLO to download an image and perform inference on it

YOLO26 has just been released, and the YOLO26n model is what we are using here. It’s custom-built for speed, accuracy, and versatility. You can use YOLO out of the box or train your own datasets on it.

In this tutorial, we’re going to look at installing the Ultralytics framework and using it with images and video files, both online and stored locally on our system. We’ll also look at setting up Docker to provide the environment and programs needed.

The YOLO26n model performing image inference alongside the image of a bus, downloaded from the Ultralytics website

You don’t need a Raspberry Pi Camera Module for this, but a reasonably powerful Raspberry Pi will help — we are using a Raspberry Pi 5 for this tutorial. In future tutorials, we will look at integrating a Raspberry Pi Camera Module.

Install Docker

Set up your Raspberry Pi 5 with Raspberry Pi OS (see Raspberry Pi Documentation for help with these steps). We start by installing Docker Engine in Raspberry Pi OS.

Add Docker to apt

To install Docker Engine, you should be running the latest version of Raspberry Pi OS based on Debian Trixie (it will also work on Bookworm and Bullseye, however). These instructions follow the Docker documentation guide for Debian. Docker provides separate Raspberry Pi installation instructions, but these are geared towards the old 32-bit version of Raspberry Pi OS, so stick with the Debian install.

Our Python code in Thonny alongside the YOLO26 Docker instance performing image recognition

First, make sure you have uninstalled any old Docker packages. Open a terminal window and enter:

$ sudo apt remove $(dpkg --get-selections docker.io docker-compose docker-doc podman-docker containerd runc | cut -f1)

Unless you have Docker already installed, apt will report that these packages are not found.
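To see why the cut -f1 part of that command is there: dpkg --get-selections prints tab-separated name/status pairs, and cut keeps only the first field. Here is a minimal sketch of the same filtering, using printf to stand in for dpkg output:

```shell
# dpkg --get-selections emits lines like "docker.io<TAB>install";
# cut -f1 (cut's default delimiter is a tab) strips the status column,
# leaving just the package names for apt remove to consume.
printf 'docker.io\tinstall\npodman-docker\tdeinstall\n' | cut -f1
```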

Next we’ll add Docker’s official GNU Privacy Guard (GPG) key to the keyrings folder. First, we update apt, then install ca-certificates and curl:

$ sudo apt update
$ sudo apt install ca-certificates curl

These should already be installed. We make sure our keyrings directory has the correct permissions: 0755. This enables the file owner (our admin account) to read, write, and execute; just read and execute permissions are set for groups and others. We do this with a funky install command that is normally used for copying files, but in this instance it’s being used to adjust permissions:

$ sudo install -m 0755 -d /etc/apt/keyrings
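If you’d like to convince yourself of what install -m 0755 -d does before running it against /etc/apt, you can try the same pattern on a throwaway directory (this sketch uses mktemp so nothing system-wide is touched):

```shell
# 'install -d' creates the directory and sets the mode in one step,
# equivalent to mkdir -p followed by chmod 0755.
tmp=$(mktemp -d)
install -m 0755 -d "$tmp/keyrings"
stat -c '%a' "$tmp/keyrings"   # prints: 755
```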

Now we use curl to download Docker’s GPG key and place it into our keyrings directory with the file name docker.asc:

$ sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc

We need to ensure that all users can read the docker.asc file. We do this with the standard chmod command with a+r options:

$ sudo chmod a+r /etc/apt/keyrings/docker.asc

Next comes a funky multi-line piece of code that creates a file called docker.sources in our /etc/apt/sources.list.d/ directory, containing the details of the Docker repository. Enter the first line and you will see a > prompt in the terminal. Enter each subsequent line carefully and press RETURN after each one. Each line is added to the docker.sources text file until you enter EOF (at which point you return to the command line):

$ sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
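If the heredoc-into-tee pattern is new to you, you can try it safely on a temporary file before touching /etc/apt:

```shell
# Same 'tee <<EOF' pattern, written to a throwaway file: everything up
# to the EOF marker is fed to tee on stdin, which writes it to the file
# (and echoes it to the terminal).
demo=$(mktemp)
tee "$demo" <<EOF
Types: deb
Components: stable
EOF
cat "$demo"
```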

Check that the docker.sources file has been created correctly:

$ cat /etc/apt/sources.list.d/docker.sources

The output should list the following, where the Suites value is the VERSION_CODENAME of your operating system (here, trixie):

Types: deb
URIs: https://download.docker.com/linux/debian
Suites: trixie
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc 
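The Suites value was filled in at the moment you ran the heredoc: the $(. /etc/os-release && echo "$VERSION_CODENAME") expression sources /etc/os-release in a subshell and prints its VERSION_CODENAME variable. You can check what your system substitutes with:

```shell
# Print the codename apt will use for the Suites field
# (e.g. 'trixie' on the current Raspberry Pi OS).
. /etc/os-release && echo "$VERSION_CODENAME"
```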

If there’s a problem, use vim or nano to edit your file:

$ sudo nano /etc/apt/sources.list.d/docker.sources

Check the update

Now update the system and check access to Docker downloads:

$ sudo apt update

The output should include a line like this:

Get:5 https://download.docker.com/linux/debian trixie InRelease [32.5 kB]

Install Docker

Now that the Docker repository is in apt, it’s time to install the various elements. Enter this line in the terminal:

$ sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Docker should run automatically after installation. To verify that Docker is running, use:

$ sudo systemctl status docker

Press q to exit the status view and return to the command line. Some systems may have this behaviour disabled and will require a manual start:

$ sudo systemctl start docker

Finally, verify that the installation is successful by running the hello-world image:

$ sudo docker run hello-world

If this is the first run, it will pull the library/hello-world container from the Docker Hub. You will see a message containing:

Hello from Docker!

This message shows that your installation appears to be working correctly.

Check out the latest issue of Raspberry Pi Official Magazine to learn how to finish setting up Docker and start using YOLO26n.

Issue 162 of Raspberry Pi Official Magazine is out now!

You can grab this issue from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available from our online store, which ships around the world. And you can get a digital version via our app on Android or iOS.

You can also subscribe to the print version of our magazine. Not only do we deliver worldwide, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico 2 W!

The post Object detection with Ultralytics YOLO26 on Raspberry Pi appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/jrH6OtA


Friday, February 13, 2026

Certifying third-party antennas for use with Raspberry Pi Compute Modules

When designing and producing Raspberry Pi devices, we consider as many potential use cases as possible — particularly when it comes to criteria like wireless (WLAN and Bluetooth) performance and antenna usage. While our single-board computers (such as Raspberry Pi 5) include only an on-board PCB antenna, our Raspberry Pi Compute Module range offers two pre-approved options: an on-board PCB antenna and the external whip antenna from the official Raspberry Pi Antenna Kit.

However, we recognise that some industrial and commercial customers may need to employ third-party antennas for their applications. Example scenarios include:

  • Embedding a Compute Module within a metal enclosure, where the PCB antenna would perform poorly due to the Faraday cage effect
  • Extending the communication distance of a device, which requires increased antenna gain
  • Integrating an antenna with a different form factor, such as a flexible PCB antenna

In such cases, the Compute Module and new antenna may be required to undergo additional testing and certification before the product can be sold. While procedures vary depending on the market and the device’s features, Raspberry Pi is well placed to support our customers in meeting these additional requirements — either by updating our existing certifications or by obtaining new certifications on their behalf.

Compliance requirements

For new antennas, compliance requirements depend on whether the antenna gain is less than, equal to, or higher than the approved gain value. Alternative antenna options are therefore split into two categories:

  • Antenna gain is equal to or less than the approved antenna gain
  • Antenna gain is higher than the approved antenna gain

Antenna gain plot for the external whip antenna included in the Raspberry Pi Antenna Kit (Source: Antenna Patterns)

To help our commercial and industrial customers meet regulatory requirements in either gain scenario, we’ve put together a white paper outlining the certifications and testing procedures required in a number of our key markets.

Different markets, different regulations

For example, in the UK and EU, integrators can adopt an antenna with a gain less than or equal to the gain of the antenna used for the original certification without needing to carry out any further spectrum usage testing. For antennas with higher gain, the required action depends on how much higher the new antenna’s gain is, as this determines whether some or all of the spectrum usage tests need to be repeated. Integrators are, however, encouraged to carry out spurious emissions tests and other electromagnetic compatibility tests on all alternative antennas, regardless of their gain.
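The reason gain matters is that these limits are typically defined on effective isotropic radiated power (EIRP) rather than the conducted power at the module’s antenna connector. As a rough, illustrative calculation with hypothetical numbers (only the 20 dBm figure, the EU/UK EIRP limit for the 2.4 GHz band, is a real regulatory value):

```latex
\mathrm{EIRP}\,(\mathrm{dBm}) = P_{\mathrm{tx}}\,(\mathrm{dBm}) + G_{\mathrm{ant}}\,(\mathrm{dBi}) - L_{\mathrm{cable}}\,(\mathrm{dB})
```

With a transmit power of 17 dBm and a 2 dBi antenna on a negligible cable run, the EIRP is 19 dBm, inside the 20 dBm limit; swapping in a 5 dBi antenna at the same transmit power gives 22 dBm, exceeding it — which is why a higher-gain antenna can trigger repeat testing or a reduction in transmit power.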

Antenna gain plot for our on-board PCB Niche antennas (Source: Antenna Patterns)

In Japan, all antennas must be approved by the country’s Ministry of Internal Affairs and Communications (MIC), and all antenna options must be listed, but no additional testing is required. Similarly, in South Korea and Taiwan, all antennas must comply with each country’s regulations — but further testing is required for antennas with higher gain. In Vietnam and Mexico, no modifications to the device’s existing certifications are required; however, manufacturers must ensure that the radiated output power of the antenna does not exceed the regulatory limits.

For a full list of requirements in several of Raspberry Pi’s key markets, refer to the handy table in our white paper.

Using pre-approved Raspberry Pi antennas

To avoid potential compliance issues or additional costs altogether, manufacturers, integrators, and end users can employ Raspberry Pi’s existing antenna architecture, which is already fully compliant in all of our key markets.

Newer Raspberry Pi single-board computers and microcontrollers include an integrated PCB Niche antenna, providing on-board Wi-Fi and Bluetooth connectivity as standard. Raspberry Pi Compute Module 4 and 5 also feature one of these PCB Niche antennas, along with a built-in U.FL connector for attaching an external antenna.

The U.FL connector on Compute Module 4 and 5 can be fitted with the omnidirectional external whip antenna included in our pre-approved Raspberry Pi Antenna Kit, or with another compatible third-party antenna.

Next steps: How Raspberry Pi can help

Should you need further assistance with integrating an alternative antenna — either during the product design process or after launch — our in-house Global Market Access (GMA) team is fully equipped to handle any additional tests, documentation submissions, or approvals on your behalf. Contact gma@raspberrypi.com with your product requirements, including the proposed antenna options and a list of your target markets (including any not listed above).

The GMA team will review your antenna specifications and advise whether compliance with the relevant market regulations is possible. Once confirmed, the team will update the existing approvals or obtain new ones to include the new antenna, carrying out any additional testing as required.

Disclaimer:

The information provided here and in our white paper is intended to be used as initial guidance only. Customers should always refer to the official regulations and publications issued by the relevant authorities.

The post Certifying third-party antennas for use with Raspberry Pi Compute Modules appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/oi1LhCr


Thursday, February 12, 2026

Accessibility improvements for screen readers on raspberrypi.com

We’re committed to making raspberrypi.com work well for everyone. Recently, we’ve been endeavouring to make the site more accessible to people using screen readers, such as JAWS and NVDA on Windows or VoiceOver on macOS and iOS.

Previously, screen reader users had no way to quickly identify the main parts of pages or skip banners and navigation. To address this, we’ve now given each region proper labels and ‘landmarks’ (standard markup recognised by screen readers), allowing you to jump between different parts of the page.

We’ve also corrected our headings so that they follow a logical order and improved our links to reduce duplication; each link is now labelled clearly so that it makes sense on its own, no matter where it appears. Hints and error messages are now associated with their relevant form fields, making it easier to complete forms while using a screen reader.

Simplifying our CAPTCHA protections

The high volume of automated traffic we receive means we often need to distinguish between human users and bots. While we previously relied on hCaptcha to do this, we’ve now implemented Cloudflare Turnstile instead. Rather than presenting users with frustrating visual challenges, Turnstile verifies them behind the scenes using automatic checks.

While hCaptcha does offer an accessibility mode, it requires users to sign up separately and complete extra setup. Turnstile, however, works with screen readers without any extra steps. This change has been applied across raspberrypi.com, including on our forums, the Raspberry Pi ID page, and the Raspberry Pi Connect page. Our board member Chris Mairs found it to be a helpful improvement; he discusses his experience and encourages others to make the switch in a recent post on his blog, The Open Eyed Man.

We now test new and updated pages with a screen reader as part of our development process, checking landmarks, headings, links, and forms. We found Adam Liptrot’s guide to VoiceOver and Deque’s axe accessibility tools particularly helpful here.

Get in touch

If you use a screen reader and run into any issues on our site, or have ideas for further improvements, please get in touch. And if you’d like to improve your own website’s accessibility for screen readers and want to know more details, we’ll try to answer your questions in the comments below.

The post Accessibility improvements for screen readers on raspberrypi.com appeared first on Raspberry Pi.



from News - Raspberry Pi https://ift.tt/tT3Ve0N
