Fruit selection machine made with T2-OLinuXino-LIME2 and MOD-IO, runs OpenCV

As part of the OSIE project, Nexedi SA (France) built a small conveyor belt using entirely open-source products and technologies.

Their goal was to build a fruit selection machine that uses AI (starting with OpenCV) to inspect fruits (and potentially other small objects) and physically separate (select) them.

Olimex was chosen as a partner in the project because of the many OSHW solutions the company offers.

T2-OLinuXino-LIME2 and MOD-IO were used in the implementation.

Details about the project are available here.

OpenCV 4.0 is now available for download


OpenCV 4.0 is officially announced.

What’s new?

  • OpenCV is now a C++11 library and requires a C++11-compliant compiler.
  • Most of the C API has been removed
  • The dnn (deep neural network) module includes an experimental Vulkan backend for platforms lacking OpenCL
  • A QR code detector and decoder have been added to the objdetect module

You can see everything in the OpenCV change log.
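To give a taste of the new 4.0 API, here is a minimal sketch of the QR code detector mentioned above. The image filename and the `qr_payload` helper are just for illustration, not from the release notes:

```python
def qr_payload(decoded):
    """detectAndDecode() returns an empty string when no QR code is found;
    normalise that to None so the result can be tested directly."""
    return decoded if decoded else None

if __name__ == "__main__":
    import cv2
    img = cv2.imread("qr.jpg")          # any image that may contain a QR code
    detector = cv2.QRCodeDetector()     # new in OpenCV 4.0
    data, points, straight = detector.detectAndDecode(img)
    print(qr_payload(data))             # decoded text, or None
```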



Hackaday Belgrade 2018 is this weekend, the 26th of May. Get ready for retro computing BASIC badge hacking.


This weekend, on the 26th of May, Hackaday will hold their second conference in Belgrade.

You can see the program here.

The conference badge is a cool retro computer running BASIC. There will be a badge hacking workshop, so we will bring some PIC-KIT3s with us.

I will give a talk about how we hacked a soldering robot with an ugly programming interface using a TERES-I laptop, an FPGA and Sigrok, and how we replaced the soldering robot's brain with an OLinuXino-MICRO, so it is now ready to take CAD CNC files, use fiducials and do AOI inspection after soldering.


Amazing quadcopter video shows how compact and powerful our computers can be today


An interesting video at TED shows how advanced the technology we now have in our hands is.

Computers are getting smaller and lighter but ever more powerful, so we can attach them even to flying quadcopters and have them process video and control multiple motors.

A13-OLinuXino-WIFI and OpenCV Face detection project


We continue our experiments with OpenCV; this new project detects the position of a human face, mouth, eyes and upper head.

When a human face is captured by the camera, its coordinates are detected; then, with a push button switch, you can attach a mustache, glasses or horns to the detected face.
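As a rough sketch of how such an overlay could be positioned from a detected face rectangle (the Haar cascade path, the fractions and the `overlay_rect` helper below are our illustrative assumptions, not the project's actual code):

```python
def overlay_rect(face, scale_w=1.0, scale_h=0.25, y_offset=0.7):
    """Given a detected face rectangle (x, y, w, h), return the rectangle
    where an overlay (mustache, glasses, horns) would be drawn.
    The default fractions place it roughly over the mouth."""
    x, y, w, h = face
    ow = int(w * scale_w)
    oh = int(h * scale_h)
    return x + (w - ow) // 2, y + int(h * y_offset), ow, oh

if __name__ == "__main__":
    import cv2
    # the cascade path varies between images; adjust it to your installation
    cascade = cv2.CascadeClassifier(
        "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    ok, img = cam.read()
    if ok:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        for face in cascade.detectMultiScale(gray, 1.3, 5):
            x, y, w, h = overlay_rect(tuple(face))
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detected.jpg", img)
```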

The project sources are on GitHub:

Here is a video:

A13-OLinuXino-WIFI running SCRATCH + OPEN-CV demonstration


Now that we know how to interface new blocks in the Scratch IDE with Python, and with the OpenCV library up and running on OLinuXino, it was only a matter of time before we made a new project.

There is already a nice project written in Scratch with eyes that track your mouse pointer:

We have also shown you OpenCV object tracking.

Now we decided to combine both in a single project:

1. Install scratchpy

# git clone git://
# cd scratchpy

    !!!IMPORTANT: Make sure that your date is set:
# date -s "3 APR 2013"
# python setup.py install

2. Install OpenCV

# apt-get install libopencv-dev python-opencv

3. Install Scratch

# apt-get install scratch

4. Open Scratch and open the demo project

5. A message should appear that remote sensor connection is enabled.

If it doesn’t, click on “Sensing”, right-click on a sensor value and click “Enable remote sensor connection”.

6. Open terminal and start the tracking program

# python

7. This will track a yellow ball and the eyes will look in its direction

8. If you want to use some other object with a different color:

Start the config:

# python
Move the sliders until only the desired color is white and everything else is black.

Open and edit the following lines:
 cv.Scalar(0, 100, 255),
 cv.Scalar(40, 255, 256),

Write your values, for example:

 cv.Scalar(110, 98, 100),
 cv.Scalar(131, 255, 256),

Then just start the file:

# python

If you wonder what line 77, k = cv.WaitKey(70), does: this is a delay. OpenCV tracks the object very fast and sends coordinates faster than scratchpy + Scratch can handle, which leads to a buffer overflow; with this delay the object tracking is artificially slowed down so that Scratch has time to update the animation properly.
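The whole tracking loop can be sketched roughly like this. The `to_scratch_coords` helper, the scratchpy calls and the sensor variable names are our assumptions based on the description above, not the project's exact code; the HSV bounds are the example values from the config step:

```python
import time

def to_scratch_coords(cx, cy, frame_w, frame_h):
    """Map a camera pixel position to Scratch stage coordinates
    (the stage is 480x360: x runs -240..240, y runs 180..-180 top to bottom)."""
    sx = int(cx * 480.0 / frame_w) - 240
    sy = 180 - int(cy * 360.0 / frame_h)
    return sx, sy

if __name__ == "__main__":
    import cv2
    import scratch                      # scratchpy client
    s = scratch.Scratch()               # Scratch must have remote sensors enabled
    cam = cv2.VideoCapture(0)
    while True:
        ok, img = cam.read()
        if not ok:
            continue
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 100, 255), (40, 255, 256))  # bounds from config
        m = cv2.moments(mask)
        if m["m00"] > 0:                # something of the right colour is visible
            x, y = to_scratch_coords(m["m10"] / m["m00"], m["m01"] / m["m00"],
                                     img.shape[1], img.shape[0])
            s.sensorupdate({"x": x, "y": y})   # Scratch reads these as remote sensors
        time.sleep(0.07)                # same role as the cv.WaitKey(70) delay
```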

Here is video of this project:

Note that the camera is placed on top of the monitor and shows a mirrored image, i.e. when you move the heart to the right, the animation moves the eyes to the right, but on the camera you see a mirrored image 🙂

And the GitHub sources are HERE

Next OpenCV project – Face tracking is in progress, stay tuned 🙂

Make Door Security Logger with A13-OLinuXino-WIFI + OpenCV


This is a cool little project done in minutes with an A13-OLinuXino running OpenCV. We were thinking about what to make with OpenCV and the GPIOs on the A13-OLinuXino, and decided to put a small switch on our laboratory door, connected to an A13-OLinuXino GPIO:


Then we wired the A13-OLinuXino to a web cam on the old ping-pong table in front of the door, so we can sense every time the door is opened and closed:


OK, now we are ready and just have to write the Python code to log pictures from the web cam every time somebody enters the lab:

from cv2 import *
import sys
import time
import datetime
import A13_GPIO as gpio

def main():
    #init gpio module
    gpio.init()
    gpio.setcfg(gpio.PIN36, gpio.INP)

    #select /dev/video0 as source
    cam = VideoCapture(0)

    while True:
        #wait for low level (door open)
        while True:
            g = gpio.input(gpio.PIN36)
            if g == 0:
                break

        #take 15 pictures, and use only the last one
        for i in range(15):
            s, img = cam.read()

        #get the current system time
        now = datetime.datetime.now()
        k = str(now)
        if s:
            imwrite(k + ".jpg", img)
            print(k + " -> New image saved...")

        #wait for high level (door closed)
        while True:
            g = gpio.input(gpio.PIN36)
            if g == 1:
                break

        #wait some time (debounce)
        time.sleep(0.5)

if __name__ == '__main__':
    main()

You can download the project code and OpenCV installation instructions on GitHub:

Scratch + A13-OLinuXino-WIFI Physical Computing


Scratch is a great visual environment that is very easy for children to understand.

I have already blogged here that Scratch runs fine on OLinuXino.

What bothered us, though, was that we couldn't use all the GPIOs and hardware resources which OLinuXino offers; the basic Scratch installation can only play music and make cool animations.

So we started searching for info on how to implement this.

Soon we found SNAP, which is based on BYOB (Build Your Own Blocks), which in turn is based on Scratch but adds the ability to make your own blocks, recursion and other new features.

SNAP is implemented in JavaScript, so even better, it runs in your browser without the need for installation. Based on SNAP there are a few cool projects made by Technoboy10: interfacing a Wii Nunchuk, Arduino and Lego NXT robots.

I asked Technoboy10 whether there is documentation on how to connect hardware to SNAP, and soon he published this info in the Wiki, which is very detailed.

So how cool is this? Kids are able to program robots with a visual IDE!

Looking at Technoboy10's GitHub repository, we figured out how to do this, but the basic SNAP installation doesn't have such a great choice of sprites and animation examples, so for a start I decided to try to bring the OLinuXino hardware features to Scratch.

So I posted a question on the Scratch forum and started waiting for a reply. After a few days of waiting, 30+ views and no replies, I did what I should have done at the beginning: searched GitHub for “scratch c”, then for “scratch python”, and found what we needed:

This is a cool Python client for Scratch. When you enable remote sensors in Scratch, it creates a server; then with scratchpy you can poll given variables and read their values or change them. This way you can connect sensors or actuators.

As opposed to SNAP (where SNAP is the client and snap-server is made with Python, so SNAP connects to the server), here Scratch creates the server and the scratchpy client connects to it and interacts through variables.

So here is our first “Hello World” blinking LED with Scratch:
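A rough sketch of how such a Scratch-to-LED bridge could look (this is not the project's actual code; the "led" variable name, the `gpio.init()`/`gpio.OUT` calls and the pin choice are our assumptions, so check the GPIO module's README for the exact API):

```python
def led_state(message):
    """Pull the 'led' value out of a scratchpy message,
    e.g. ('sensor-update', {'led': 1}) -> 1; anything else -> None."""
    if message and message[0] == "sensor-update":
        return message[1].get("led")
    return None

if __name__ == "__main__":
    import scratch                      # scratchpy client
    import A13_GPIO as gpio             # the OLinuXino GPIO module used elsewhere in this post
    gpio.init()                         # assumed init call
    gpio.setcfg(gpio.PIN36, gpio.OUT)   # pin and constant names are assumptions
    s = scratch.Scratch()               # Scratch must have remote sensors enabled
    while True:
        state = led_state(s.receive())  # blocks until Scratch sends a message
        if state is not None:
            gpio.output(gpio.PIN36, 1 if state else 0)
```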

Instructions on how to install the OLinuXino GPIO interface for Scratch are on GitHub:

Our second project animates the GPIO connector in Scratch based on real button inputs and LED outputs:


Here is a video of the GPIO interaction with Scratch:

Now that we have this powerful tool, scratchpy, to interface Scratch to anything via Python, just imagine what would happen if we connect Scratch to … OpenCV 🙂

In my previous blog post I showed you some videos which demonstrate the power of OpenCV: face recognition, facial expression recognition, colored object tracking. You can make a Scratch script, for instance, which recognizes the face of the person in front of the web cam and runs an animation to welcome them by name.

Or you can program your robot to chase a yellow ping-pong ball in front of it.

Or you can make different animations depending on the facial expression of the person in front of the camera… the applications and the fun are endless!

New Debian Images with UVC camera support, Python and OpenCV in Wiki


As you may have noticed, we have been playing a lot recently with video processing and OpenCV + Python, so we decided to release new images which include these packages.

Unfortunately we are at the limit of the current 2GB cards, so we had to release two images:

1. Debian with OpenCV + UVC camera support + Python, without XFCE4, which still fits on the 2GB cards we ship now.

2. Debian with XFCE4 + OpenCV + UVC + Python as a 4GB image. We recommend you use a Class 10 card, as with Class 4 things work many times slower.

We are now working to source new fast 4GB cards to ship with the new A13-OLinuXino image.

You can download the images from our WIKI

OpenCV + Python fun with A13-OLinuXino-WIFI


OpenCV is a huge open-source image processing library. It has been built by thousands of contributors over many years.

OpenCV is now so powerful that with the latest 2.4 revision you can even do face recognition, gesture analysis and any kind of image filtering and enhancement.

There are tons of funny projects based on OpenCV like these videos:

Again, OpenCV is nothing new; it has been used for years on desktop computers, but now you can have OpenCV running on a small 2W embedded OLinuXino which you can add to your robot or interactive construction.

Installing OpenCV on A13-OLinuXino

1. Make sure that your A13 Linux image supports cameras:

# ls -l /dev/video*

You should have video0 or video1. The demo uses video0.

2. Install OpenCV

# apt-get install libopencv-dev

(If you don't want all the packages, install only the core ones.)

3. Install Python

# apt-get install python-dev

(This will install Python 2.7.)

4. Get OpenCV support for Python

# apt-get install python-opencv

Now you are ready to develop with OpenCV on A13-OLinuXino!

Our first “Hello World” example is simple: we run a web server, then with OpenCV take a picture every 5 seconds, and once per minute store it to the SD card.

This way you can make time-lapse videos like this one:

Here is the code with comments:

from cv2 import * #import the opencv module
import sys        #import the system module

def main():
    cam = VideoCapture(0)   #tell opencv where to take pictures from, in this case /dev/video0
    index = 1               #this is the picture index

    #search the SD-card for previous pictures to calculate the last picture index
    #(in case a power failure interrupted the picture saving process)
    while True:
        test = imread("img" + str(index) + ".jpg")
        if test is None:
            break
        index += 1

    while True:
        for i in range(12): #wait 12 * 5 seconds = 1 minute before saving a picture
            s, img = cam.read() #capture a picture; s is a flag which is True if the capture succeeded
            if s:
                imwrite("capture.jpg", img)
            waitKey(5000) #just wait 5 seconds and do nothing

        imwrite("img" + str(index) + ".jpg", img) #save the picture to the SD-card and increase the picture index
        print("img" + str(index) + ".jpg")
        index += 1

    return 0

if __name__ == '__main__':
    main()

Now let's set up the web server so we can see the pictures through a web interface:

5. Get Apache (or whatever else you want)

# apt-get install apache2

The code of the web page is simple:

<!DOCTYPE html>
<h2>CAM FEED</h2>
<img border="0" src="images/capture.jpg" alt="Capture go here...">

This code is already in the tar.gz file on GitHub.

6. Go to the www directory

# cd /var/www/
# mkdir CAM
# cd CAM
# tar zxf demo.tar.gz

7. Go to the images folder and start the Python script:

# python images/ &

8. Open a browser and enter the address where your A13 is connected:


9. You should see a new image every 5 seconds. Additionally, every minute an image is saved to the SD card.