For my final project, I’ve been working on sunglasses that track your eyes and display on the screen of the glasses where the user is looking:


LEDs on the screen of sunglasses

As I brainstormed how to make the lights work on the screen, I thought about using an LED matrix, since it’s a ready-made grid. It would be easier to use, because I wouldn’t have to deal with so many wires and a messy circuit. But once I got one, I realized it wouldn’t be possible to put the matrix on the screen and still see through it. So the LED matrix idea fell through.


As I started building my grid of LEDs, I also realized I couldn’t wire three LEDs into the same breadboard row, because then they would all light up together instead of one at a time. So I decided to line them all up in a single row, each LED on its own Arduino pin.


I wrote an Arduino sketch that reads a number sent through the serial monitor and lights up the specific LED assigned to that number. Here is the code:

void setup() {
  Serial.begin(9600);
  // Pins 2 through 10 each drive one of the nine LEDs.
  for (int thisPin = 2; thisPin < 11; thisPin++) {
    pinMode(thisPin, OUTPUT);
  }
}

void loop() {
  if (Serial.available() > 0) {
    int inByte = Serial.read();
    // Start with every LED off, then light the one for the zone we received.
    for (int thisPin = 2; thisPin < 11; thisPin++) {
      digitalWrite(thisPin, LOW);
    }
    switch (inByte) {
      case '1': digitalWrite(2, HIGH); break;
      case '2': digitalWrite(3, HIGH); break;
      case '3': digitalWrite(4, HIGH); break;
      case '4': digitalWrite(5, HIGH); break;
      case '5': digitalWrite(6, HIGH); break;
      case '6': digitalWrite(7, HIGH); break;
      case '7': digitalWrite(8, HIGH); break;
      case '8': digitalWrite(9, HIGH); break;
      case '9': digitalWrite(10, HIGH); break;
      // Any other byte just leaves all the LEDs off.
    }
  }
}

Eye tracking

Once I had my LEDs figured out, I started working on eye tracking. I spent quite a bit of time researching face recognition; I tried trackingjs.com and webgazer.cs.brown.edu. Neither worked for what I was trying to do, because the goal was to have a camera inside the glasses looking only at the eye(s). The code from WebGazer and tracking.js looks at the face as a whole, and then tracks eye movement within the face.

With the image from my vestibular therapy goggles in mind, I decided to try color detection instead and locate the pupil inside the iris. Since the pupil is the darkest area, I can tell my code to look for black, compute the average X and average Y of the black pixels it finds, and hopefully that’s where my pupil is.
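That dark-pixel averaging can be checked on a plain array before touching any camera frames. Here is a minimal sketch of the idea in plain Java (the class and method names are mine, not from the project code):

```java
public class PupilCentroid {
    // Return {avgX, avgY} of all pixels whose brightness is below the
    // threshold, or null when no pixel qualifies. gray[y][x] is 0..255.
    public static float[] darkCentroid(int[][] gray, int threshold) {
        float sumX = 0, sumY = 0;
        int count = 0;
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[y].length; x++) {
                if (gray[y][x] < threshold) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        return count == 0 ? null : new float[] { sumX / count, sumY / count };
    }

    public static void main(String[] args) {
        // A tiny "frame" with two dark pixels in the middle row.
        int[][] frame = {
            {255, 255, 255, 255},
            {255,  10,  10, 255},
            {255, 255, 255, 255},
        };
        float[] c = darkCentroid(frame, 20);
        System.out.println(c[0] + "," + c[1]); // the two dark pixels average to 1.5,1.0
    }
}
```

The circle drawn by the Processing sketch sits at exactly this kind of centroid.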


Since I have a 3-by-3 grid of LEDs, I divided the image of the eye into a 3×3 grid as well, which gave me nine zones, and worked out the math for the boundaries of each zone.
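The zone lookup boils down to two integer divisions: which third of the width the X falls in, and which third of the height the Y falls in. A small standalone version of that arithmetic in plain Java (the class and method names are mine, not from the project code):

```java
public class ZoneMapper {
    // Map a point to one of nine zones, numbered 1..9
    // left-to-right, top-to-bottom.
    public static int zone(float x, float y, float width, float height) {
        int col = Math.min((int) (3 * x / width), 2);   // 0, 1, or 2
        int row = Math.min((int) (3 * y / height), 2);  // 0, 1, or 2
        return row * 3 + col + 1;
    }

    public static void main(String[] args) {
        // A pupil far left, vertically centered, in a 909x613 image: zone 4.
        System.out.println(zone(10, 300, 909, 613)); // prints 4
    }
}
```

The chain of nine if-statements in the Processing sketch below spells out the same mapping condition by condition.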

For the color tracking, I used Processing with some help from Coding Rainbow. The code looks for black pixels, divides the image into nine zones, detects which zone contains the average X and average Y of all the black pixels, and sends that zone’s number to the Arduino with Serial.write(). It currently works on still images, but can easily be adapted to live video. Here is the code:

//import processing.video.*;

//Capture video;
import processing.serial.*;

PImage img;
Serial myPort;

color trackColor;
float threshold = 20;

void setup() {
  size(909, 613);
  //String[] cameras = Capture.list();
  //printArray(cameras);
  //video = new Capture(this, cameras[3]);
  //video.start();
  trackColor = color(0, 0, 0);
  img = loadImage("eyes/4.jpg");
  println("width " + img.width);
  println("height " + img.height);
  String portName = "/dev/tty.usbmodem1421";
  myPort = new Serial(this, portName, 9600);
}

//void captureEvent(Capture video) {
//  video.read();
//}

void draw() {
  //video.loadPixels();
  //image(video, 0, 0);
  img.loadPixels();
  image(img, 0, 0);

  float avgX = 0;
  float avgY = 0;
  int count = 0;

  // Walk through every pixel and compare it to the tracked color (black).
  for (int x = 0; x < img.width; x++) {
    for (int y = 0; y < img.height; y++) {
      int loc = x + y * img.width;
      color currentColor = img.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      // A pixel only counts as a match if its squared color distance is
      // below threshold squared. The threshold is arbitrary; adjust it
      // depending on how accurate you need the tracking to be.
      float d = distSq(r1, g1, b1, r2, g2, b2);
      if (d < threshold*threshold) {
        stroke(255);
        strokeWeight(1);
        point(x, y);
        avgX += x;
        avgY += y;
        count++;
      }
    }
  }

  if (count > 0) {
    avgX = avgX / count;
    avgY = avgY / count;
    // Draw a circle at the average position of the matched pixels.
    fill(255);
    strokeWeight(4.0);
    stroke(0);
    ellipse(avgX, avgY, 24, 24);
    println(avgX + "," + avgY);
    if ((avgX < img.width/3) && (avgY < img.height/3)) {
      //the pupil is in zone 1:
      myPort.write('1');
    }
    if ((avgX >= img.width/3) && (avgX < img.width*2/3) && (avgY < img.height/3)) {
      //it's in zone 2:
      myPort.write('2');
    }
    if ((avgX >= img.width*2/3) && (avgY < img.height/3)) {
      //it's in zone 3:
      myPort.write('3');
    }
    if ((avgX < img.width/3) && (avgY >= img.height/3) && (avgY < img.height*2/3)) {
      //it's in zone 4:
      myPort.write('4');
    }
    if ((avgX >= img.width/3) && (avgX < img.width*2/3) && (avgY >= img.height/3) && (avgY < img.height*2/3)) {
      //it's in zone 5:
      myPort.write('5');
    }
    if ((avgX >= img.width*2/3) && (avgY >= img.height/3) && (avgY < img.height*2/3)) {
      //it's in zone 6:
      myPort.write('6');
    }
    if ((avgX < img.width/3) && (avgY >= img.height*2/3)) {
      //it's in zone 7:
      myPort.write('7');
    }
    if ((avgX >= img.width/3) && (avgX < img.width*2/3) && (avgY >= img.height*2/3)) {
      //it's in zone 8:
      myPort.write('8');
    }
    if ((avgX >= img.width*2/3) && (avgY >= img.height*2/3)) {
      //it's in zone 9:
      myPort.write('9');
    }
  }
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  return (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) + (z2-z1)*(z2-z1);
}
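The matching test compares the squared color distance against threshold squared, which avoids taking a square root for every pixel. The logic in isolation, in plain Java (distSq mirrors the Processing function above; the class and the matches() helper are my own names):

```java
public class ColorMatch {
    // Squared Euclidean distance between two RGB colors.
    public static float distSq(float r1, float g1, float b1,
                               float r2, float g2, float b2) {
        return (r2 - r1) * (r2 - r1)
             + (g2 - g1) * (g2 - g1)
             + (b2 - b1) * (b2 - b1);
    }

    // True when the pixel color is within `threshold` of the tracked color.
    public static boolean matches(float r1, float g1, float b1,
                                  float r2, float g2, float b2,
                                  float threshold) {
        return distSq(r1, g1, b1, r2, g2, b2) < threshold * threshold;
    }

    public static void main(String[] args) {
        // Tracking pure black (0,0,0) with threshold 20:
        System.out.println(matches(10, 10, 10, 0, 0, 0, 20));   // prints true  (near-black pixel)
        System.out.println(matches(100, 100, 100, 0, 0, 0, 20)); // prints false (mid gray rejected)
    }
}
```

Raising the threshold pulls more of the iris into the match; lowering it keeps only the deepest shadows and the pupil.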

Here are a few examples. Please note that the camera inside the glasses would be infrared, with infrared lighting, so the eyes wouldn’t have shadows like in the first and second examples.

The circle in the images in the right column marks the average X and Y of the black pixels, which finds the pupil pretty accurately. Depending on which zone the circle falls in, the assigned LED is triggered. For example, in the first image the eye is looking left; that’s zone 4, so the Processing code sends a “4” to the Arduino, which then turns on LED #4:


It works!
