AI-Enabled Safe Locker with BrainChip Project

Build a smart locker system that opens only when both the user’s face and a spoken command match authorized patterns, with all processing done on-device by low-power neuromorphic AI. This article walks through the features, required components, and step-by-step implementation of an AI-enabled safe locker built around the BrainChip Akida.


AI-Enabled Safe Locker with BrainChip

Features

  • Face recognition (Vision AI using SNN)
  • Wake word detection (Audio SNN)
  • Servo-controlled locking mechanism
  • Local, low-latency inference (no cloud)
  • On-chip learning (for adding new users)

Components Required

  • BrainChip Akida USB Dev Kit: neuromorphic processor (main AI engine)
  • USB microphone: audio input (wake-word detection)
  • USB camera: visual input (face recognition)
  • Servo motor (SG90/MG996R): physical lock control
  • Raspberry Pi 4 / Jetson Nano: host controller running Linux (Ubuntu 20.04)
  • Breadboard + jumper wires: to connect the servo motor
  • Power source: USB power bank or adapter

Software Setup

1. Install Dependencies

  • sudo apt update
  • sudo apt install python3-pip libatlas-base-dev
  • pip3 install akida speechrecognition opencv-python numpy pyserial

Install Akida SDK:

Download the BrainChip Akida SDK from the official site. Follow their instructions to install the Python SDK and runtime.

AI-Enabled Safe Locker Project Architecture

AI-Enabled Safe Locker

Step-by-Step Implementation

Step 1: Data Collection & Preprocessing

a. Face Dataset (Images)

Collect 20–30 frontal face images per authorized person using OpenCV:

import cv2

cap = cv2.VideoCapture(0)
for i in range(30):
    ret, frame = cap.read()
    if ret:  # only save frames the camera actually delivered
        cv2.imwrite(f"user_face_{i}.jpg", frame)
cap.release()

b. Voice Samples (Wake Word)

Record your custom phrase (e.g., “Unlock Akida”) using PyAudio or Audacity.
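If you prefer a scripted capture, the sketch below shows the target file format (16 kHz, mono, 16-bit PCM WAV) using only the standard library. It synthesizes a placeholder tone in place of real microphone input, so the actual capture step (e.g. via PyAudio or Audacity, as above) is left to you:

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # 16 kHz mono is a common input format for audio models

def write_wav(path, samples, sample_rate=SAMPLE_RATE):
    """Write 16-bit mono PCM samples (floats in [-1, 1]) to a WAV file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)           # 2 bytes = 16-bit samples
        wf.setframerate(sample_rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wf.writeframes(frames)

# Placeholder: a one-second 440 Hz tone standing in for a real recording
tone = [0.3 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
        for t in range(SAMPLE_RATE)]
write_wav("wake_word_sample.wav", tone)
```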


Step 2: Train SNN Models with Akida

a. Convert Face Classifier to SNN

Use MobileNet or a custom CNN for feature extraction, then convert the trained network to an SNN with the Akida tools.

from akida import Model

# Note: the exact quantize/convert API depends on your Akida SDK version
# (recent MetaTF releases do this via the cnn2snn/quantizeml packages);
# the calls below follow the outline above, so check the SDK docs.
model = Model("cnn_model.h5")
model.quantize()
model_to_akida = model.convert()
model_to_akida.save("face_model.akd")

b. Convert Wake Word Classifier

  • Use MFCC preprocessing → CNN → SNN
  • Convert the audio classifier to an Akida model using the Akida tools.
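As a sketch of the MFCC-style front end (frame sizes, hop length, and band count here are illustrative choices, not BrainChip’s reference pipeline), the following frames the signal, takes magnitude spectra, and applies a triangular mel filterbank with NumPy:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sample_rate):
    """Triangular mel filters mapping an FFT magnitude spectrum to n_mels bands."""
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_features(signal, sample_rate=16000, n_fft=512, hop=160, n_mels=40):
    """Frame the signal, compute magnitude spectra, and apply the mel filterbank."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    spectra = np.abs(np.fft.rfft(frames, n=n_fft))   # (n_frames, n_fft//2 + 1)
    mel = spectra @ mel_filterbank(n_mels, n_fft, sample_rate).T
    return np.log(mel + 1e-6)                        # (n_frames, n_mels)
```

Stack a fixed number of these frames into a 2-D “image” and that becomes the CNN’s input before conversion.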

Step 3: Load Models and Infer

from akida import Model

face_model = Model("face_model.akd")
audio_model = Model("wake_model.akd")

Audio Inference (Wake Word)

def is_wake_word(audio):
    prediction = audio_model.predict(audio)
    return prediction == "unlock_akida"

Face Inference (Real-Time Face Match)

def is_authorized_face(frame):
    face = detect_and_crop_face(frame)
    prediction = face_model.predict(face)
    return prediction == "authorized_user"

Step 4: Control the Servo Lock

import RPi.GPIO as GPIO
import time

servo_pin = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(servo_pin, GPIO.OUT)
servo = GPIO.PWM(servo_pin, 50)  # 50 Hz PWM for a standard hobby servo
servo.start(0)

def open_locker():
    servo.ChangeDutyCycle(7.5)  # adjust duty cycle to suit your lock
    time.sleep(1)
    servo.ChangeDutyCycle(0)    # stop pulsing to avoid servo jitter

def close_locker():
    servo.ChangeDutyCycle(2.5)
    time.sleep(1)
    servo.ChangeDutyCycle(0)
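The duty cycles above (2.5 % and 7.5 %) correspond to roughly 0° and 90° for a 50 Hz hobby servo. If your lock needs a different angle, a small conversion helper avoids magic numbers; the 2.5–12.5 % range used here is the common SG90 calibration, so verify it against your servo’s datasheet:

```python
def angle_to_duty(angle_deg):
    """Map a servo angle in [0, 180] degrees to a PWM duty cycle in percent.

    Assumes the common SG90 calibration: 2.5 % duty = 0 deg and
    12.5 % = 180 deg at 50 Hz. Out-of-range angles are clamped.
    """
    angle = max(0.0, min(180.0, angle_deg))
    return 2.5 + angle * (12.5 - 2.5) / 180.0

# e.g. servo.ChangeDutyCycle(angle_to_duty(90))  # ~7.5, as in open_locker()
```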

Step 5: Integration Logic

import cv2
import speech_recognition as sr

cam = cv2.VideoCapture(0)
while True:
    # Wake word check (record_audio_sample() captures a short
    # clip from the USB microphone, as set up in Step 1)
    audio = record_audio_sample()
    if not is_wake_word(audio):
        continue
    # Face check
    ret, frame = cam.read()
    if not ret:
        continue
    if is_authorized_face(frame):
        open_locker()
        print("Locker opened!")
    else:
        print("Face not recognized.")

Testing and Validation

  • Add a new user using BrainChip Akida’s on-chip learning API.
  • Try unlocking with the wrong voice or face → the system should deny access.
  • Log each attempt (success/failure) for analytics.
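For the logging step, a minimal attempt logger using only the standard library could look like this (the file name and fields are illustrative):

```python
import csv
from datetime import datetime, timezone

LOG_PATH = "access_log.csv"  # illustrative file name

def log_attempt(granted, reason="", path=LOG_PATH):
    """Append one access attempt (UTC timestamp, result, reason) to a CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            "granted" if granted else "denied",
            reason,
        ])

# e.g. log_attempt(True, "face match") after open_locker(), and
#      log_attempt(False, "face not recognized") in the else branch
```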

Try implementing the above project and let us know your results.