Thursday, January 29, 2026
SmartCoop: Controlling chickens with Java
The new issue of Raspberry Pi Official Magazine is here, and with it, this smart chicken coop project. With SmartCoop, a Raspberry Pi monitors feed and water levels, and schedules the opening and closing of the main door based on preconfigured times and weather data.
Owning a small flock of chickens means regularly opening and closing the coop’s main door, collecting the eggs, and making sure there is enough food and water. Given that most of this needs to be done daily, you’ll need to arrange for someone to perform these tasks if you want to get away for more than a day or two.
Enter SmartCoop. One of the key design goals behind this project was to ensure the system was robust enough that its creator, Dave Duncanson, could be away for up to a week without anyone physically attending to it, while still preventing the local foxes from getting to the chickens.
The main gate is opened and closed automatically;
sensors measure things like water and food levels
Dave started working on SmartCoop over ten years ago, and the current version contains the fourth generation of his custom-made PCB. With this new design, he could bump the system to use a Raspberry Pi Zero 2 W.
The full system contains an array of automated doors, light sensors, manual push buttons, water tank measurement tools, feeders, and so on. On the software side, an MQTT broker distributes the data, while a Java application based on Pi4J uses live weather data from an API, along with measurements from the sensors, to open and close the gates, keep track of feeding, and perform other tasks.
Raspberry Pi Zero 2 W is mounted on a custom PCB with ports connected to multiple sensors
The project evolved not only because the technology changed, but also because it was shaped by nature. Dave was struggling with a fox that loved to hunt the chickens and had learnt when the gates would open automatically. Because of this, the system was adapted to combine the expected dawn and dusk times with light sensor measurements when deciding when to open and close the gate.
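The idea behind that change can be sketched in a few lines of Python (a hypothetical simplification; the real system is a Java/Pi4J application, and the 45-minute window here is an invented figure): the light sensor still triggers the gate, but only within a window around the expected dawn or dusk time, so the opening time varies from day to day and a fox cannot learn a fixed schedule.

```python
from datetime import datetime, timedelta

def gate_should_open(now, expected_dawn, light_detected, window_minutes=45):
    """Open the gate only if the light sensor fires inside a window
    around the expected dawn time. Times and window are illustrative."""
    earliest = expected_dawn - timedelta(minutes=window_minutes)
    latest = expected_dawn + timedelta(minutes=window_minutes)
    return light_detected and earliest <= now <= latest
```

The same check, mirrored around the expected dusk time, would govern closing.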
Another problem was caused by Dave’s teenager. As anyone with kids will confirm, teenagers tend to forget a lot of important things, like closing the gate of the coop. To combat this, the SmartCoop monitors the gate and the status of the food and water supply, alerting a configurable number of people when something is wrong.
The chicken coop door is controlled by a daylight sensor
In the future, a UHF RFID reader, combined with an RFID ring for each of the chickens, could be added to the system to monitor whether they are all inside at night. By installing another one of these readers in each of the laying boxes, it would even be possible to keep track of the most (or least) productive chickens.
Raspberry Pi + ESP32
Around 80% of the core functionality is handled by a Raspberry Pi Zero 2 W running a Java application, which uses the Pi4J library to control the GPIO pins and interact with I2C devices. It also stores data in an H2 database and provides GPS and NTP functionality, event scheduling, and a template-based web interface.
Several sensors, like this water level sensor, monitor various conditions to keep the chickens safe, fed, and watered
The feed level is monitored by sensing the weight of the container
When it gets dark or when intruders are detected, the gate closes to keep the chickens inside
The remaining work is handled by the ESP32. Its initial role was simply to power the Raspberry Pi on and off at preset, configurable times via RTC interrupts, to conserve battery; its functionality has since been extended, and it now also checks the door positions and motor encoders. Because many existing Arduino examples also work on the ESP32, these were used to understand how some components are controlled before the code was ported to Java and Pi4J.
Chicken town
Dave is the first to admit that this solution is most likely over-engineered and therefore not cost-effective — but it’s the perfect way to fully automate his chicken coop. He has no plans to turn it into a commercial product either; instead, he shares both the software and hardware on Bitbucket.
The new issue of Raspberry Pi Official Magazine is out now!
You can grab this issue from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available from our online store, which ships around the world. And you can get a digital version via our app on Android or iOS.
You can also subscribe to the print version of our magazine. Not only do we deliver worldwide, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico 2 W!
For those attending Integrated Systems Europe (ISE) 2026 in Barcelona, a visit to the Sharp booth might reveal something new, exciting, and not yet released…
We’ve been working with Sharp Display Solutions Europe to develop the Raspberry Pi Smart Display Module: an adapter board for Raspberry Pi Compute Module 5 that is designed to deliver high-quality, low-power display experiences for professional signage applications.
The Raspberry Pi Smart Display Module enables users in the audio-visual and digital signage markets to integrate the power, flexibility, and energy efficiency of Compute Module 5 into compatible display screens, with no external media player, cabling, or power source required. The module also provides HDMI output to support a second independent video stream, along with an M.2 expansion slot for optional AI acceleration.
Conforming to the Intel® SDM specification, the Raspberry Pi Smart Display Module slots directly into displays that support Intel’s standard, drawing power from the display itself. With the computer embedded inside the screen, installations are clean, reliable, and easy to maintain, making the Smart Display Module ideal for applications such as flight information systems, retail and corporate signage, and industrial displays.
We designed the Raspberry Pi Smart Display Module to be as straightforward to assemble as possible — customers can install it themselves without any specialist tools.
Enabling edge AI for digital signage
As organisations increasingly explore AI-powered digital signage, the Raspberry Pi Smart Display Module offers an efficient and practical solution. Able to integrate easily with compatible AI accelerators, the module enables edge AI processing to take place directly inside the screen it is paired with. This allows users to run analytics and AI-driven applications locally, privately, and in real time, without reliance on cloud-based services.
Raspberry Pi technology is already used by thousands of businesses and powers hundreds of thousands of screens worldwide; the introduction of the Raspberry Pi Smart Display Module further expands that ecosystem. By embedding AI capability directly into their display solutions, businesses can innovate rapidly and adapt to changing requirements with an energy-efficient, easy-to-integrate modular solution.
See it for yourself
ISE 2026 is taking place from 3–6 February 2026 at Fira de Barcelona, Gran Via. Visitors to the Sharp booth will be able to see the Raspberry Pi Smart Display Module in action ahead of its launch later this year.
Streamline dataset creation for the Raspberry Pi AI Camera
An AI project often begins with building a quality dataset, which can be a complex and time-consuming task. This dataset contains the data you want to use to train, test, and verify that your AI model works. This tutorial introduces a practical approach to help simplify the process.
With the Sony IMX500 sensor on the Raspberry Pi AI Camera, you can use your own datasets to improve your AI models. Whether you’re an experienced maker or just beginning to explore the world of edge AI, this guide will help you organise, refine, and export datasets with ease. Let’s look at how this tool can support you in building smarter AI models, faster.
The challenge of dataset creation
Dataset preparation is an important yet sometimes challenging aspect of vision AI projects. Capturing images, organising them, cropping out irrelevant details, and ensuring they’re formatted correctly is a lot of work. This process can be a roadblock that slows down progress or discourages you from starting in the first place. But with the right setup and tools, you can simplify these tasks and focus on your AI development.
Figure 1: The GUI Tool web interface
Setting up and getting started
For this tutorial, we will use a tool that provides some convenient features for dataset creation: GUI Tool. This makes it easier to capture images that are very close to the deployment environment and highly suitable for training, since the data comes directly from the IMX500 image sensor.
GUI Tool runs on a Raspberry Pi with an AI Camera attached, and you access it via a web browser using another computer on the same network.
To run the tool, you’ll need Node.js and uv installed:
Navigate into the new folder and install the software in the root of the folder:
$ make setup
To start the GUI Tool, run:
$ uv run main.py
To access it on the network, you’ll need your Raspberry Pi’s IP address:
$ hostname -I
…or its hostname:
$ hostname
Access the GUI Tool
Now move to the second computer on your local network and open a browser. Navigate to:
http://<your-raspberrypi-IP-address>:3001
…to access the tool’s interface.
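`hostname -I` can print several space-separated addresses; the first is usually the one you want. A small helper (my own, not part of GUI Tool) that turns that output into the tool’s URL:

```python
def gui_tool_url(hostname_i_output, port=3001):
    """Build the GUI Tool URL from the output of `hostname -I`,
    which may list several space-separated addresses; the first
    is typically the LAN address."""
    ip = hostname_i_output.split()[0]
    return f"http://{ip}:{port}"
```

For example, if `hostname -I` prints `192.168.1.42 172.17.0.1`, the URL to open is `http://192.168.1.42:3001`.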
You can also access the GUI Tool directly from your Raspberry Pi and AI Camera via:
http://127.0.0.1:3001
You will see the GUI Tool web interface as shown in Figure 1.
Creating a dataset using the IMX500 sensor
Once the setup is complete, you can use the GUI Tool to create and organise your dataset. Choose the ‘Images’ tab in the sidebar and click ‘Add’ to create a new dataset. Give the dataset a name in the pop-up window; for example, ‘car-dataset’ (Figure 2). Click ‘Add’ to create the dataset.
Now we need to add images by uploading them from your computer. For this tutorial, we have used the Vehicles-OpenImages Dataset from Roboflow (Figure 3).
Click ‘Upload’ and choose an image from your Raspberry Pi OS file system. The image will appear in the car dataset (as in Figure 4).
Figure 2: Create a new dataset
Capture images with the camera
It is also possible to use the GUI Tool to automate image capture directly from a camera attached to your Raspberry Pi. If you have a Raspberry Pi AI Camera connected, you can also gather input tensor data alongside the raw image.
Choose the ‘Camera preview’ tab to view the image from your camera.
Select collection: Click ‘Select Collection’ and choose a dataset to add the images to.
Input: Click the ‘Timer’ switch to automate image capture at set intervals. For example, to capture a frame every 10 seconds for 50 images, set the capture rate to 0.1 and the number of photos to 50. Activate the image capture and let the tool handle the rest.
Input tensor: The Raspberry Pi AI Camera works differently to traditional image processing systems. The IMX500 sensor includes an internal ISP that preprocesses the sensor data and supplies the input tensor directly to its on-board AI accelerator chip. So, for optimal performance, it’s highly recommended that you train models using the exact input tensor data produced by the IMX500 sensor, rather than relying on raw images or preprocessed images only. This ensures that the model learns from data that precisely matches the runtime conditions, which leads to better model performance.
Fortunately, we can very quickly get this input tensor data by enabling the ‘Input Tensor’ flag during the image capturing process.
Start capture: Click the camera icon to start the image capture process.
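The timer arithmetic above is worth a sanity check: the capture rate is expressed in frames per second, so one frame every 10 seconds corresponds to a rate of 0.1, and 50 frames take 500 seconds in total. A tiny helper (my own, not part of GUI Tool) makes the conversion explicit:

```python
def capture_plan(interval_seconds, num_photos):
    """Convert a desired capture interval into a rate (frames per
    second) and the total session duration in seconds."""
    rate = 1.0 / interval_seconds
    duration = interval_seconds * num_photos
    return rate, duration
```

So `capture_plan(10, 50)` gives a rate of 0.1 and a 500-second session, matching the example settings above.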
Figure 3: The Vehicles-OpenImages Dataset from Roboflow is a good test bed of images for training a vehicle detection model
Manage images
Head to the ‘Images’ tab to upload, delete, or capture images directly into your dataset to keep it organised.
Once your dataset is ready, click ‘Images’, then click the cog icon next to your dataset. Select ‘Download’ to save the images as a ZIP file on your computer.
Practical example: Recognising cars
Imagine you’re developing an AI model for car recognition with the IMX500 sensor. Here’s what the process might look like:
Create a ‘car-dataset’ dataset
Capture images of cars using the IMX500 sensor
Automate the capture process to ensure consistency
If needed, crop images to focus on relevant areas, such as individual cars
Organise and manage these images within the tool
Export the dataset and use it to annotate and train your AI model
Figure 4: The vehicle dataset added to GUI Tool
Training your AI model
Once your dataset is ready, the next step is annotation, followed by training with TensorFlow or PyTorch. Alternatively, for a streamlined and user-friendly experience, you can use a dedicated tool to simplify these steps. One tool that can assist you is Brain Builder for AITRIOS (Figure 5) from the Studio Series of AI tools and services for AITRIOS.
Annotating
Annotating your dataset is a critical step in training an AI model because it teaches the AI exactly what you want it to learn. If the annotations contain mistakes, the model will learn those mistakes as well, which can reduce its accuracy.
There are many tools available for annotation, such as Roboflow or CocoAnnotator, that help you label your datasets according to the type of model you plan to train.
When choosing an annotation tool, make sure to check which export formats it supports. Your dataset must be exported in a format compatible with the AI model you want to train.
Figure 5: Sony AITRIOS Brain Builder software can simplify the process of training AI models
Training
Once your dataset is annotated and exported, you are ready to start training. We suggest you follow your chosen framework’s guides on how to create a training script and what hardware you might need.
Brain Builder for AITRIOS
This tool is designed to simplify the annotation and training process, which might be helpful for users with varying levels of AI expertise. With Brain Builder for AITRIOS, you can annotate and train your AI models in a few steps, all inside the same tool. This means your annotated dataset can be sent straight into training, already in the right format.
Brain Builder for AITRIOS currently supports three types of models: Classification, Object Detection, and Anomaly Hi-Fi. You can train and evaluate your model and, when you are happy with the accuracy, export it for IMX500 without any hassle.
Deploying your AI model
Once your model is trained, you can package it and then deploy it on the IMX500:
Package your model on your Raspberry Pi
Build an application to visualise the results, such as counting cars
Creating datasets isn’t just a technical task — it’s a gateway to collaboration, learning, and real-world innovation. The possibilities are wide-ranging: educators can introduce students to AI and machine learning; makers can build smarter IoT devices, such as home security systems or gesture recognition tools; and researchers can accelerate their work on projects including wildlife conservation, medical imaging, and more.
This tutorial featured in Raspberry Pi Official Magazine #161
Raspberry Pi Flash Drive available now from $30: a high-quality essential accessory
A USB flash drive is one of those small essentials you reach for from time to time to back up data or transfer files between your Raspberry Pi and other computers. For basics like these, it’s tempting to reach for the cheapest thing on Amazon or whatever you find in your local supermarket, but you can easily end up with a device that has sluggish read and write speeds, fragile casing, or – worst of all – far less storage capacity than it claims. Better to go with something you can rely on: introducing the Raspberry Pi Flash Drive, a compact, high-capacity USB 3.0 USB‑A device with fast data transfer and an all‑aluminium enclosure. It’s available now at $30 for 128GB, or $55 for 256GB.
We’ve brought our usual exacting standards and attention to detail to our new accessory. It can sustain a write speed of 75MB/s (128GB variant) or 150MB/s (256GB variant), and our thorough testing has made sure it can handle the demands of real life when it comes to sudden disconnection and power failure. Its ergonomic all-aluminium enclosure is easy to grasp and almost impossible to break, although you’ll manage it if, like jdb of this parish, you go at it with a blowtorch. It has an attachment hole so you can keep it on a keyring or similar. The Raspberry Pi logo is etched with classy understatement onto its upper surface.
Fast and robust
Like many high-density NAND flash storage devices, the Raspberry Pi Flash Drive employs a small reservation of pseudo-SLC cache to improve performance under bursty write workloads. In the background, any writes that were allocated in pSLC are streamed out to the higher-density, but slower, QLC flash. There are significant advantages to doing this: for short periods, the sequential write speed can be almost as fast as USB 3.0 will go.
This cache does, however, make benchmarking challenging. For this reason, the USB 3.0 performance figures we quote are sustained figures, where writes are measured when the cache is forced to do write‑through due to the volume of writes already committed, and reads are measured with the cache empty.
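A toy model shows why burst and sustained figures differ (the cache size and speeds below are invented for illustration; only the 75MB/s QLC-class sustained figure comes from the 128GB specification above): writes land in the fast pSLC cache until it fills, after which throughput drops to the QLC rate, so the average converges on the sustained figure as the transfer grows.

```python
def average_write_speed(total_mb, cache_mb, pslc_mbps, qlc_mbps):
    """Average MB/s for one large sequential write: the first
    `cache_mb` go at pSLC speed, the remainder at QLC speed.
    All figures here are illustrative, not the drive's real geometry."""
    fast = min(total_mb, cache_mb)
    slow = total_mb - fast
    seconds = fast / pslc_mbps + slow / qlc_mbps
    return total_mb / seconds
```

A 10MB burst that fits in a hypothetical 20MB cache averages the full pSLC speed, while a 10GB transfer averages only fractionally above the QLC rate, which is why honest benchmarks quote the sustained number.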
It goes without saying that whatever internal storage arrangement is used, it must be robust against surprise removal or power failure. We verified that our new flash drive meets this requirement over tens of thousands of random power cycles while running intermittently intensive I/O workloads.
Bonus features
In addition to being fast, we made sure that these drives support SSD-style SMART health reporting to help you to manage the device lifespan, as well as supporting TRIM operations. They will also autonomously enter low-power USB 3.0 states when idle.
More handy essentials from Raspberry Pi
Our new flash drive joins a growing range of rigorously specified and robustly tested Raspberry Pi accessories designed to make your day-to-day computing life as friction-free as possible. Raspberry Pi SD Cards and Raspberry Pi SSDs offer you a choice of storage solutions; the four-way Raspberry Pi USB 3 Hub provides an excellent alternative to unsatisfactory price/quality compromises elsewhere; and the Raspberry Pi Bumper is exactly what you need to protect the base and edges of your Raspberry Pi 5, without obstructing access to anything else.
The new Raspberry Pi Flash Drive gives you compact, portable storage with reliable performance for both 128GB and 256GB capacity options. Grab one from a Raspberry Pi Approved Reseller today.
If you’ve ever tried using a Raspberry Pi — or any single-board computer — while travelling, you probably know how frustrating it can be. Hotel rooms with no spare Ethernet ports, conference Wi-Fi behind captive portals, networks that block local discovery tools, or simply not knowing what IP address your headless board received can all turn a simple task into a hassle.
Last year, I came across a concept that sounded like the ideal solution: Ethernet over USB. The idea is beautifully simple — plug the Raspberry Pi into a laptop and it appears as a USB network adapter, just like when you enable USB tethering on a smartphone. That would mean no Wi-Fi setup, no IP scanning, no captive portal headaches — just plug in, SSH, and start working. Bonus: The host computer could even share its internet connection over that same cable.
At least, that’s the theory.
In reality, getting this to work has traditionally involved a mix of outdated scripts, manual configuration steps, and platform-specific instructions that only reliably supported one host OS at a time — Windows, macOS, or Linux, but rarely all three. Many great community efforts exist, but they often require you to clone repositories, edit system files, or manually switch the Raspberry Pi between Internet Connection Sharing (ICS) and normal local networking — and ICS is typically treated as an optional afterthought, rather than part of a unified workflow.
I wanted to streamline that experience — not to replace community solutions, but to offer a clean, all-in-one option that “just works”, regardless of whether the user is a first-time Raspberry Pi owner or someone deploying a fleet of headless boards.
So I started a project with a clear goal in mind: to make a single Debian package that enables USB gadget networking straight out of the box on all supported Raspberry Pi boards, and across all major host operating systems.
Introducing rpi-usb-gadget
Starting with Raspberry Pi OS Trixie images dated 20.10.2025 and later, a new package called rpi-usb-gadget is included by default. It can be enabled with a single toggle in Raspberry Pi Imager, making USB networking setup drastically simpler.
Once enabled:
Your Raspberry Pi will present itself as a USB Ethernet device when connected to a PC
You can SSH directly using the hostname you set in Raspberry Pi Imager — no Wi-Fi or Ethernet setup required
If your PC has an active internet connection and ICS is enabled, the Raspberry Pi will automatically receive internet access through the same USB cable
A lightweight background service runs on the Raspberry Pi to detect host connectivity and automatically switch between standalone mode and ICS-backed networking
In practice, it behaves very similarly to USB tethering on a smartphone — but for Raspberry Pi
The package is supported on all major host systems: Windows, macOS, and Linux.
Important hardware note
To use USB gadget mode, the Raspberry Pi must be connected to a USB port that supports OTG (device mode):
Raspberry Pi Zero, Zero W, Zero 2 W: the micro USB port closest to HDMI — not ‘PWR IN’
Raspberry Pi 4, 5, 500, 500+: the USB-C port directly on the board
Compute Module 5: the USB-C port on the Raspberry Pi CM5 IO Board
Compute Module 4: requires additional manual setup and is not auto-configured
⚠ Warning:
Once gadget mode is enabled, the selected port will function exclusively as USB networking + power input; it will no longer operate as a regular USB host port. This means that keyboards, storage devices, or other peripherals cannot be connected to that port while gadget mode is active.
Supported boards:
Raspberry Pi Zero (W) and Zero 2 W
Raspberry Pi 3 Model A+
Raspberry Pi 4 Model B
Raspberry Pi 5, 500, and 500+
Compute Module 5
Compute Module 4 (technically supported, but additional manual setup is required)
For optimal stability — especially on Raspberry Pi 4, 5, and 500/500+ — connect the Raspberry Pi directly to a USB port on your PC. Some laptop USB ports cannot provide sufficient power, which may cause reboots or USB link drops.
Recommended accessory: The Raspberry Pi USB 3 Hub allows you to power the device externally while still passing only data over the USB connection; this is ideal for laptops with weak USB power delivery.
Enabling gadget mode the easy way: Raspberry Pi Imager 2.0
Double-click the generated os_list_local.rpi-imager-manifest file
Select a Raspberry Pi OS Trixie image (20.10.2025 or newer)
In the ‘Customisation’ menu, set a hostname
This is the name you’ll use to SSH into the Raspberry Pi
If ICS is disabled on the host, the fallback IP will be 10.12.194.1
Go to ‘Interfaces & Features’ and toggle ‘Enable USB Gadget Mode’
Write the image, insert the card into your Raspberry Pi, and connect it to your PC using the correct USB/OTG port (not just the power input)
Power on the Raspberry Pi (the first boot may take longer than usual and might reboot once — this is expected)
Once booted, your Raspberry Pi should appear as a new Ethernet adapter on your host machine
You can now SSH using the hostname you set
Windows driver requirement
Windows does not include a generic driver for USB Ethernet gadget devices. To avoid relying on impersonated vendor IDs or unofficial drivers, a dedicated installer is provided: download and run rpi-usb-gadget-driver-setup.exe from the project’s releases page on GitHub.
This only needs to be done once per Windows machine.
Internet Connection Sharing (ICS)
If you want your Raspberry Pi to access the internet through the USB connection, enable Internet Connection Sharing (ICS) on your host computer.
On Windows
Plug in the Raspberry Pi and confirm it shows up as a new Ethernet adapter
→ It will appear under a name like ‘Ethernet 7 — Raspberry Pi USB Remote NDIS Network Device’
Open ‘Network Connections’:
→ Control Panel → Network and Internet → Network and Sharing Center → Change adapter settings
→ Or press Win + R and enter ncpa.cpl
Identify your primary internet-connected adapter (e.g. Wi-Fi) and open ‘Properties’
Go to the ‘Sharing’ tab, enable ‘Allow other network users to connect…’, and, in the dropdown, choose the Raspberry Pi USB Ethernet adapter
Confirm your selection; within around 60 seconds, the Raspberry Pi should obtain an IP address
⚠ Note on Windows ICS behavior:
If ICS is enabled for the Raspberry Pi’s adapter while the Raspberry Pi is not connected, Windows may bind its DHCP service to another interface (such as a Hyper-V adapter). In that case, the Raspberry Pi interface may show as shared but will not receive a DHCP lease. To fix this, fully disable ICS on all adapters you shared the network from, plug in the Raspberry Pi, and then re-enable ICS.
On macOS
Connect the Raspberry Pi and wait for it to appear as a new USB Ethernet device
Open System Settings → General → Sharing → Internet Sharing
Before enabling, click the info icon to configure:
→ Your internet source (e.g. Wi-Fi)
→ The Raspberry Pi USB Gadget interface as the target to share to
Save and toggle ‘Internet Sharing’ on
On Linux
Enable routing and NAT from your primary internet connection to the Raspberry Pi USB Gadget network interface using your distribution’s NetworkManager or equivalent. Instructions vary depending on the desktop environment or init system.
Enabling USB gadget mode without Imager
This only applies to Raspberry Pi OS Trixie–based images.
Fresh images (before first boot):
Cloud-init can enable USB gadget mode automatically on first boot:
Mount the boot partition
Edit user-data and append:
rpi:
enable_usb_gadget: true
enable_ssh: true # Optional but recommended when using gadget mode
It’s recommended that you also define a user and an SSH key in the same file, as the setup wizard cannot be used over SSH, and connecting USB peripherals is not possible on Zero boards while gadget mode is active.
Existing installation:
1. Verify that you are running Raspberry Pi OS Trixie:
After reboot, the USB port will switch into gadget mode. Any active SSH sessions will temporarily drop and then reconnect once the USB Ethernet interface becomes available. Depending on the host system, give it up to one minute for DHCP/ICS negotiation to settle.
Technical details
The rpi-usb-gadget package configures the g_ether USB gadget kernel module, which exposes a virtual Ethernet interface according to the host OS’s capabilities:
Windows hosts → RNDIS mode
macOS/Linux hosts → CDC-ECM mode
The correct mode is automatically selected at runtime based on USB descriptor negotiation — no manual selection is required.
Why no USB serial console?
CDC-ACM (serial over USB) is not included, as Windows cannot bind both RNDIS/ECM and ACM to a single composite USB device using one .inf file without vendor-specific drivers.
A lightweight background service runs on the Raspberry Pi and continuously monitors:
USB link state
DHCP/ICS availability on the host
Routing and DNS status
If ICS is detected on the host, gateway and DNS configurations are automatically applied to provide seamless internet access over USB.
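The switching decision can be imagined roughly like this (a hypothetical sketch, not the package’s actual implementation; the dict shapes are invented, though the 10.12.194.1 standalone fallback is the documented one): if the host’s ICS DHCP server answers with a gateway, apply it along with the offered DNS servers; otherwise fall back to the standalone address.

```python
def select_network_config(lease):
    """Choose a networking mode from the host's DHCP answer.
    A lease with a gateway means ICS is active on the host;
    no lease means standalone mode on the documented fallback IP."""
    if lease and lease.get("gateway"):
        return {"mode": "ics",
                "gateway": lease["gateway"],
                "dns": lease.get("dns", [])}
    return {"mode": "standalone", "address": "10.12.194.1"}
```

The real service also has to re-run this decision whenever the USB link state changes, which is why SSH sessions briefly drop and reconnect when the mode switches.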
Troubleshooting
If the connection doesn’t work immediately, work through the following checks:
Hardware and cabling
Ensure you are using the correct USB/OTG port:
→ On Zero models, this must be the port labeled USB, not PWR IN
Connect directly to a USB port on the host PC — avoid hubs or docks that may block OTG negotiation or limit power
Only connect one Raspberry Pi at a time in gadget mode to avoid host-side interface conflicts
If the Raspberry Pi reboots repeatedly or disconnects, it may not be receiving enough power
→ Use a powered USB hub or the Raspberry Pi USB 3 Hub to supply external power while keeping data routed through the host
ICS and DHCP behavior
After enabling ICS, wait up to one minute for DHCP to issue an IP, and for hostname resolution to begin working
On Windows, if the gadget adapter shows as ‘Shared’ but no IP is assigned:
→ Disable ICS completely on all interfaces, plug in the Raspberry Pi, then re-enable ICS and reassign the correct adapter
Windows may list multiple ‘Ethernet X’ adapters from previous attempts
→ Consider removing or disabling unused adapters to prevent routing conflicts
Hostname and mDNS
If ssh pi@hostname.local does not resolve on Windows, install Bonjour/mDNS support or use the assigned IP address directly (shown in arp -a)
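Picking the Pi’s address out of `arp -a` by hand is fiddly; a rough filter like the following can help (my own sketch: it assumes the Windows ICS default subnet of 192.168.137.x, which you should adjust to match your shared adapter, and it simply drops the host’s .1 address and the broadcast address):

```python
import re

def candidate_ips(arp_output, subnet_prefix="192.168.137."):
    """Pull likely device addresses on the ICS subnet out of
    `arp -a` output. The default prefix is an assumption based on
    the usual Windows ICS range; change it for your setup."""
    ips = re.findall(r"\d+\.\d+\.\d+\.\d+", arp_output)
    return [ip for ip in ips
            if ip.startswith(subnet_prefix)
            and not ip.endswith(".1")      # host side of the share
            and not ip.endswith(".255")]   # broadcast
```

Whatever survives the filter is worth trying with `ssh pi@<address>`.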
Source code and driver downloads
This feature is now included by default in Raspberry Pi OS Trixie images, but all code and tooling remain open for inspection and contribution. You can find the full source, the documentation, and the Windows driver installer (rpi-usb-gadget-driver-setup.exe) here.
The repository also contains an issue tracker, as well as reports from real-world setups — particularly involving ICS on Windows and macOS — which are extremely valuable. Different host systems behave slightly differently when assigning DHCP, routing, or firewall rules, so community feedback helps make the experience more reliable for everyone.
Wrapping up
USB gadget mode brings a Raspberry Pi much closer to being a plug-and-go development device — no Wi-Fi setup, no IP scanning, no HDMI, and no keyboard required. Just connect a single USB cable, SSH in, and start building.
This feature has been designed to be simple enough for beginners to use, yet robust and scriptable enough for power users and large-scale deployments. Future updates will continue to refine host detection, ICS handling, and diagnostics. If you have ideas or edge cases to share, the GitHub issue tracker and the Raspberry Pi Forums are open.
Introducing the Raspberry Pi AI HAT+ 2: Generative AI on Raspberry Pi 5
A little over a year ago, we introduced the Raspberry Pi AI HAT+, an add-on board for Raspberry Pi 5 featuring the Hailo-8 (26-TOPS variant) and Hailo-8L (13-TOPS variant) neural network accelerators. With all AI processing happening directly on the device, the AI HAT+ delivered true edge AI capabilities to our users, giving them data privacy and security while eliminating the need to subscribe to expensive cloud-based AI services.
While the AI HAT+ provides best-in-class acceleration for vision-based neural network models, including object detection, pose estimation, and scene segmentation (see it in action here), it lacks the capability to run the increasingly popular generative AI (GenAI) models. Today, we are excited to announce the Raspberry Pi AI HAT+ 2, our first AI product designed to fill the generative AI gap.
Unlock generative AI on your Raspberry Pi 5
Featuring the new Hailo-10H neural network accelerator, the Raspberry Pi AI HAT+ 2 delivers 40 TOPS (INT4) of inferencing performance, ensuring generative AI workloads run smoothly on Raspberry Pi 5. Performing all AI processing locally and without a network connection, the AI HAT+ 2 operates reliably and with low latency, maintaining the privacy, security, and cost-efficiency of cloud-free AI computing that we introduced with the original AI HAT+.
Unlike its predecessor, the AI HAT+ 2 features 8GB of dedicated on-board RAM, enabling the accelerator to efficiently handle much larger models than previously possible. This, along with an updated hardware architecture, allows the Hailo-10H chip to accelerate large language models (LLMs), vision-language models (VLMs), and other generative AI applications.
For vision-based models — such as Yolo-based object recognition, pose estimation, and scene segmentation — the AI HAT+ 2’s computer vision performance is broadly equivalent to that of its 26-TOPS predecessor, thanks to the on-board RAM. It also benefits from the same tight integration with our camera software stack (libcamera, rpicam-apps, and Picamera2) as the original AI HAT+. For users already working with the AI HAT+ software, transitioning to the AI HAT+ 2 is mostly seamless and transparent.
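The link between parameter count, quantisation, and on-board RAM can be checked with back-of-the-envelope arithmetic. The sketch below is a rough illustration with an assumed runtime overhead factor, not a Hailo-published sizing formula:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough estimate of RAM needed to hold a quantised model.

    1e9 parameters times bytes-per-weight gives bytes; dividing by 1e9
    bytes-per-GB cancels, so params_billion * bytes_per_weight is GB.
    The overhead factor (an assumption, not a Hailo figure) loosely
    covers activations, KV cache, and runtime buffers.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# A 1.5-billion-parameter model at INT4 needs roughly 0.9 GB,
# and even a 7-billion-parameter model at INT4 fits within 8 GB.
```

This is why the jump to 8GB of dedicated RAM matters more for generative workloads than raw TOPS alone: the model's weights have to fit before any acceleration can happen.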
Some example applications
The following LLMs will be available to install at launch:
DeepSeek-R1-Distill: 1.5 billion parameters
Llama3.2: 1 billion parameters
Qwen2.5-Coder: 1.5 billion parameters
Qwen2.5-Instruct: 1.5 billion parameters
Qwen2: 1.5 billion parameters
More (and larger) models are being readied and should be available to install soon after launch.
Let’s take a quick look at some of these models in action. The following examples use the hailo-ollama LLM backend (available in Hailo’s Developer Zone) and the Open WebUI frontend, providing a familiar chat interface via a browser. All of these examples are running entirely locally on a Raspberry Pi AI HAT+ 2 connected to a Raspberry Pi 5.
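You don't need a browser frontend to talk to the backend, either. Assuming hailo-ollama speaks the standard Ollama REST API on the default port (the host, port, and model tag below are assumptions; check Hailo's Developer Zone documentation for your setup), a request can be built with nothing but the standard library:

```python
import json
from urllib import request


def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> request.Request:
    """Build a POST to an Ollama-compatible /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial tokens.
    """
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    return request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )


# With the server running, send the request and print the reply:
# resp = request.urlopen(build_generate_request("qwen2:1.5b", "Why is the sky blue?"))
# print(json.loads(resp.read())["response"])
```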
The first example uses the Qwen2 model to answer a few simple questions:
The next example uses the Qwen2.5-Coder model to perform a coding task:
This example does some simple French-to-English translation using Qwen2:
The final example shows a VLM describing the scene from a camera stream:
Fine-tune your AI models
By far the most popular examples of generative AI models are LLMs like ChatGPT and Claude, text-to-image/video models like Stable Diffusion and DALL-E, and, more recently, VLMs that combine the capabilities of vision models and LLMs. Although the examples above showcase the capabilities of the available AI models, one must keep their limitations in mind: cloud-based LLMs from OpenAI, Meta, and Anthropic range from 500 billion to 2 trillion parameters; the edge-based LLMs running on the Raspberry Pi AI HAT+ 2, which are sized to fit into the available on-board RAM, typically run at 1–7 billion parameters. Smaller LLMs like these are not designed to match the knowledge set available to the larger models, but rather to operate within a constrained dataset.
This limitation can be overcome by fine-tuning the AI models for your specific use case. On the original Raspberry Pi AI HAT+, visual models (such as Yolo) can be retrained using image datasets suited to the HAT’s intended application — this is also the case for the Raspberry Pi AI HAT+ 2, and can be done using the Hailo Dataflow Compiler.
Similarly, the AI HAT+ 2 supports Low-Rank Adaptation (LoRA)–based fine-tuning of the language models, enabling efficient, task-specific customisation of pre-trained LLMs while keeping most of the base model parameters frozen. Users can compile adapters for their particular tasks using the Hailo Dataflow Compiler and run the adapted models on the Raspberry Pi AI HAT+ 2.
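The idea behind LoRA is compact enough to show in a few lines: the frozen base weight W is never modified, and training only updates two small matrices A (rank x input) and B (output x rank), so the effective weight becomes W + (alpha / r) * BA. A pure-Python illustration of that arithmetic (not Hailo's implementation):

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]


def lora_effective_weight(W, A, B, alpha: float = 1.0):
    """Combine a frozen weight W with a low-rank LoRA update B @ A.

    Only A and B are trained; W stays frozen, which is why LoRA
    adapters are so much smaller than the base model.
    """
    r = len(A)                       # LoRA rank = number of rows in A
    scale = alpha / r
    delta = matmul(B, A)             # low-rank update, same shape as W
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

A rank-1 adapter on a 2x2 weight stores 4 numbers instead of retraining all 4 base weights; at LLM scale the same ratio is what makes on-device adaptation practical.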
Available to buy now
The Raspberry Pi AI HAT+ 2 is available now at $130. For help setting yours up, check out our AI HAT guide.
Hailo’s GitHub repo provides plenty of examples, demos, and frameworks for vision- and GenAI-based applications, such as VLMs, voice assistants, and speech recognition. You can also find documentation, tutorials, and downloads for the Dataflow Compiler and the hailo-ollama server in Hailo’s Developer Zone.
Keeping tabs on cattle ranging over enormous ranches in rural Canada is a serious challenge. “Ranching operates outdoors, in all weather, in remote locations where there is no power or data connectivity,” explains Flokk founder and CEO Mark Olson. Record-keeping has traditionally been done using pocket notebooks, which are easily lost, damaged or trampled underfoot while the rancher struggles to check on each animal’s health and well-being. It’s the sort of task that hapless TV vets might have nightmares about. Nonetheless, detailed and fully up-to-date information about each cow is mandatory in the highly regulated world of animal husbandry.
Flokk’s ruggedised handheld scanner records crucial animal data;
details are saved to a Raspberry Pi Zero W inside and sent to a central server via Starlink satellite broadband
Computer scientist and farmer Mark Olson applied his knowledge of data collection to address the issue, coming up with a ruggedised scanner that has a Raspberry Pi Zero W at its heart and which automates much of the record collection process.
Rural reach
Mark grew up on a farm in Alberta, Canada, and, after briefly exploring the computerisation of agriculture when he graduated in the mid-1980s, went on to work in enterprise IT and management. Open-source software has always interested Mark, not least because his early forays into building home servers were of the DIY, fish-the-hardware-out-of-the-dumpster variety. Linux was the only platform he could afford that would run on the hardware he’d sourced. An interest in Raspberry Pi is therefore no surprise. It’s “a logical extension of my interest and aptitudes in open source,” says Mark, who uses Debian Linux for both Flokk’s website and online services.
RFID tags are read by the Flokk handheld scanner and the details are uploaded to a central database
This contrasts with “digital solutions for ranching [that] currently attempt to use hardware built for offices, not ranches. Consumer hardware is expensive and delicate, and unsuitable for the harsh environment of ranches.”
Traceability is not only desirable — it’s legally mandated. All Canadian cattle are fitted with a tag with an RFID code so that, if an issue arises with the animal, it can be traced back, and any other animals that might have been impacted by the issue can be located and investigated.
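As a minimal sketch of what tag-level traceability looks like in code, here is a hypothetical parser for a 15-digit ISO 11784-style tag number. Canada's numeric country code is 124; the real CCIA format carries additional flag bits, so treat this as an illustration rather than the specification:

```python
def parse_iso_tag(tag: str) -> dict:
    """Split a 15-digit ISO 11784-style tag number into its parts.

    Simplified layout assumed here: 3-digit country code followed by
    a 12-digit national ID.
    """
    digits = tag.replace(" ", "")
    if len(digits) != 15 or not digits.isdigit():
        raise ValueError(f"expected 15 digits, got {tag!r}")
    return {
        "country": digits[:3],
        "national_id": digits[3:],
        "canadian": digits[:3] == "124",   # Canada's ISO country code
    }
```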
Flokk digitises data collection and record-keeping for cow/calf ranching (a $5 billion industry in Canada and a trillion-dollar industry globally), moving it from the small pocket notebooks ranchers use today to a digital solution.
Mark’s LinkedIn article about the importance of such record-keeping for the Canadian agricultural economy offers some valuable insights. He talks of the clear need for beef traceability and for Canada to secure its future food security via agricultural exports. He also emphasises the importance of keeping data (including saleability, individual animal traceability, pregnancy, and other health stats) on-device, freeing up the rancher’s time by giving them the autonomy that comes from not having to laboriously fill out paperwork. Ranchers can now capture data using in-field scans and have it digitised and stored immediately, keeping it safe.
Quick FACTS
Every livestock animal in Canada should have an in-ear RFID tag
Most ranchers have no means of reading the CCIA (Canadian Cattle Identification Agency) tags, so they keep paper records
Missing and inaccurate records impact traceability efforts
Dedicated digital readers are expensive and have no other functions
Flokk works with record management software, making compliance easier
He points out that Canada spends around 40% of its agricultural services budget on inspection and control, and says this could be massively reduced by using technology to tag and monitor agricultural assets. Starlink provides “a ready answer for rural broadband and wireless connectivity”, and he rates it above 5G connectivity, “which is both far more expensive to install and requires time to roll out, along with ongoing maintenance”.
No platform other than Debian — the flavour of Linux behind Raspberry Pi OS — lets you take the same software and skill set and apply it all the way from a battery-powered handheld, through an SBC desktop, to a multi-node server.
Hard-headed approach
Flokk chose Raspberry Pi for its affordability, high capability, and compatibility with other open-source products. “Raspberry Pi Zero W uses a full-function Broadcom system on a chip, not a microcontroller, and Linux as the OS, [so] Flokk can leverage standard open-source tools. We can add functionality in days and be confident our platform will support whatever software or functionality we will need to deliver in future.”
The ruggedised handset is ideal for ranchers on the move
It was important that the cost of the handheld scanners wouldn’t be a barrier to ranchers adopting them; Flokk will make its money from subscriptions, rather than the hardware. The end of life for a Flokk handheld will most likely not be the result of wear and tear or obsolescence — rather, it will be dropped into a pen of cattle and stomped on. Flokk has to be able to rapidly, and affordably, recover a customer’s data from offsite backup and ship them a new Flokk ready to use.
Mark wrote all the code himself, devoting more than 1000 hours to code and platform development. When he began, Raspbian/Raspberry Pi OS would not run on the handheld scanner, so he used Tiny Core Linux instead. Future iterations may run Tiny Core Linux alongside Raspberry Pi OS.
Flokk runs Tiny Core Linux code on Raspberry Pi Zero W, alongside an RFID reader inside a Hammond Hand-Held T ruggedised case
Mark says Raspberry Pi Zero’s capabilities were “a perfect match; right price, power efficient, and exactly the I/O we needed”. Flokk is currently preparing its next iteration of hardware, migrating to Raspberry Pi Zero 2 W, upgrading the handheld scanner’s display, and adding a GPS receiver and, perhaps, a camera.
For now, Mark is busy garnering investor interest in Flokk so that it can be rolled out at scale across Canada’s ranches. Demonstrating its real-world use is imperative. “The Flokk I proudly use for investor presentations has brown stains on it; and the source of those brown stains is exactly what you think it is.”
This article is from Raspberry Pi Official Magazine #161
You can grab this issue from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, including the Raspberry Pi Store in Cambridge. It’s also available from our online store, which ships around the world. And you can get a digital version via our app on Android or iOS.
You can also subscribe to the print version of our magazine. Not only do we deliver worldwide, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico 2 W!
At this time of year, lots of new Raspberry Pi users are wondering what to do with the tiny computer they were gifted at Christmas. The excellent team behind Raspberry Pi Official Magazine thought ahead and included a collection of fun, beginner-friendly projects for you to try in the latest issue. Happy #MakerMonday to all of our new friends.
Do citizen science with Raspberry Shake
Help map tremors around the world with this simple kit
This seismometer kit includes a special geophone sensor that lets you study the Earth and its movements very accurately. Not only that, but you can hook it up to a network of like-minded amateur — and professional — seismologists and see how seismic activity spreads around the world, in real time.
Add to a global network of citizen scientists monitoring seismic activity
Assembly is very easy — just connect the geophone to the add-on board, put the board on a Raspberry Pi, then assemble the acrylic case. There’s a handy video that runs you through the process, too, and goes on to cover the software setup.
The kit for Raspberry Shake is nice and compact
Monitor your plants with Pico
Get a text when your hydrangeas need some hydration
Home automation doesn’t just have to involve controlling your lights and heating — you can also use similar techniques to maintain your plants. Moisture sensors are a very common component for Raspberry Pi projects, and the Grow HAT from Pimoroni makes it easier to use them.
A Grow Kit monitors the soil moisture
The extra trick to this project is that it will send you emails with updates — inspiring messages in the morning, as well as reminders that your plant needs a bit of water. This project also makes use of a little hack to get a Raspberry Pi HAT to work with Pico — it uses a lot less power than a full Raspberry Pi too, making the project much greener.
A Perma-Proto HAT is used to connect Pico to the Grow HAT
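The alerting logic itself is simple; the main subtlety is not emailing on every loop iteration while the soil stays dry. A minimal sketch using hypothetical raw sensor counts (calibrate the thresholds against your actual sensor, which may report on a different scale):

```python
class MoistureAlert:
    """Send at most one alert per dry spell.

    Two thresholds (hysteresis) stop the alert re-firing while the
    reading hovers around a single cut-off. Threshold units are
    hypothetical raw counts, lower meaning drier.
    """

    def __init__(self, dry_below: int = 300, wet_above: int = 400):
        self.dry_below = dry_below
        self.wet_above = wet_above
        self.alerted = False

    def update(self, reading: int) -> bool:
        """Return True exactly when an alert email should go out."""
        if reading < self.dry_below and not self.alerted:
            self.alerted = True
            return True
        if reading > self.wet_above:
            self.alerted = False     # plant watered; re-arm the alert
        return False
```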
Record stop-motion videos with a Camera Module
Become a movie-maker with some Lego and a lot of patience
We can show you how to take simple pictures or videos with a Raspberry Pi and a Camera Module, but we think it’s a lot more fun to create a hybrid setup that aids in stop-motion photography.
Orange you glad you made a film?
Very simply, this project lets you take a photo at the press of a button, and then waits for you to press the button again for the next photo. In that time, you rearrange what’s in the preview window on your monitor, creating a frame of animation each time. You can then stitch these together with some code to create a final product. With a few programming tricks, you can even have a ghostly version of the previous frame on screen to aid you in setting up the next shot.
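The stitching step can be as simple as naming frames in sequence and handing them to an encoder. A sketch, assuming ffmpeg is installed (the folder layout and 12 fps frame rate are illustrative choices, not the magazine's exact tutorial):

```python
import subprocess
from pathlib import Path


def frame_name(index: int, folder: str = "frames") -> Path:
    """Zero-padded names keep frames in capture order for the encoder."""
    return Path(folder) / f"frame_{index:04d}.jpg"


def stitch(folder: str = "frames", fps: int = 12,
           out: str = "animation.mp4") -> None:
    """Stitch captured frames into a video with ffmpeg.

    12 fps is a common stop-motion rate; raise it for smoother motion.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", str(fps),
         "-i", f"{folder}/frame_%04d.jpg",
         "-pix_fmt", "yuv420p", out],
        check=True,
    )
```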
Create a robot
Make a friendly automaton with a suite of custom sensors
The CamJam EduKit 3 is the perfect introduction to building your own robot. It’s a small and inexpensive kit that pairs with a full-size Raspberry Pi or Raspberry Pi Zero to create a customisable machine that you can either use as a remote-control car or to experiment with robot automation.
A 3D-printed chassis is available, but you can actually use the box from the kit to make the robot
It comes with an ultrasonic distance sensor and line followers — classic robotics sensors used for navigation. Assembly is fairly straightforward, and the Raspberry Pi Foundation’s tutorial even makes use of the box it comes in to build it; no parts wasted!
There are a lot of parts, but minimal soldering involved
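The ultrasonic sensor's raw reading is just an echo time; turning it into a distance is simple arithmetic that libraries such as gpiozero's DistanceSensor normally handle for you. The maths, shown here for illustration:

```python
SPEED_OF_SOUND_M_S = 343.0   # in air at roughly 20 degrees C


def echo_to_distance_cm(pulse_s: float) -> float:
    """Convert an HC-SR04-style echo pulse duration to distance in cm.

    The echo pulse covers the round trip to the obstacle and back,
    so the one-way distance is half of time * speed.
    """
    return (pulse_s * SPEED_OF_SOUND_M_S / 2) * 100


def too_close(pulse_s: float, stop_cm: float = 10.0) -> bool:
    """Simple obstacle check for a robot's drive loop."""
    return echo_to_distance_cm(pulse_s) < stop_cm
```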
Build a smart mirror
Check on yourself and your day with one futuristic piece of furniture
Smart mirrors, aka magic mirrors, are one of those projects that every Raspberry Pi maker needs to do once. On the surface, it may seem like a complex and advanced project; however, it’s actually fairly straightforward. The hardest part can be constructing the frame, which, if you can’t find a suitable pre-made frame at IKEA, involves a simple bit of carpentry.
There aren’t many components to a smart mirror
Putting some reflective two-way mirrored wrap on a big old TV and installing the Magic Mirror software are the other main steps; the latter is very easy to configure and has plenty of add-ons too.
A small IKEA frame with a compact monitor does the trick for a smaller version
This article is from Raspberry Pi Official Magazine #161