
CMUcam5 Pixy to track a color with C++

I have created a Matlab code and a C++ code for tracking a color, each separately. Moreover, since the Pixy CMUcam5 is open source, how can I implement this Matlab and C++ code on the Pixy CMUcam5 to track a color? I do not intend to use the PixyMon application software; I want to learn some extra things.
Please give me your suggestions.
Thank you.

Hello Biswas,
You might consider writing your own firmware, but that's not the easiest way to test your ideas/algorithms. Testing through PixyMon is an easier way to get started.

Hope this helps!

Edward

Hello Edward,

Thanks for the answer.

Yes, I think it's better to work with PixyMon at the beginning.
I found some very useful tips in the following links posted by “Sondre”.

http://cmucam.org/boards/9/topics/5320
http://cmucam.org/boards/9/topics/5308

So I am also planning to follow this technique to understand how he modified or worked on renderer.cpp, monmodule.cpp (please explain these terms to me in simple language), etc. Actually, I am very new to programming and have only a little knowledge of C++ and Qt Creator, but I am still excited to build or modify PixyMon.

So I would like to ask you for some ideas/steps on how I can start from the beginning.

Moreover, from some reading on this forum I learned that building your own PixyMon is easier than writing your own firmware. Is there any specific reason behind that? This might be a silly question, but I want to be clear on these basic things, so please explain in simple language so that I can understand.

Hope for your positive response.

Thank you,

Biswas

Hello Biswas,
I’ll try to answer as many questions as I can, and if I don’t know the answer I’ll try to find someone who knows.

It’s usually easier to develop code that runs on a PC or Mac than it is to develop code that runs on a small embedded processor (Pixy’s processor). That’s why most customers start by testing their ideas on PixyMon.

Hope this helps!

Edward

Hello Edward,

Now I have some questions regarding the CMUcam5 Pixy. As I understand it, Pixy uses a hue-based color filtering algorithm to detect objects. If you don't mind, could you please help me understand how this algorithm works? I want to learn about the algorithm in more depth. How exactly does Pixy detect a colored object with this algorithm?

Next question: it stores a block of information about the detected object. What does that mean?
What information does it store, and if I call the Pixy library, what information does it send to the Arduino?
I hope to get either detailed information or a related link so that I can do some research on it.

If my questions are not clear, please let me know and I will try to write them again.

Thank you.

Biswas Lohani V K wrote:

How exactly does Pixy detect a colored object with this algorithm?

It uses a complementary color space (red-green and blue-green). It plots the pixels in this color space and forms boundaries in the color space; these boundaries are the color signatures. It's based on work between Charmed Labs and CMU. There is no document that describes the algorithm, but there has been talk of publishing a paper.
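To make that concrete, here is a rough sketch of the idea in C++: each pixel is projected onto a 2-D point (r − g, b − g), and a signature is a region in that plane. The rectangular boundary below is purely illustrative; the actual boundary shapes Pixy learns are not documented.

```cpp
// Illustrative only: the real boundary shapes Pixy learns are not public.
struct OppPoint { int rg, bg; };

// Project an RGB pixel onto the complementary (opponent) plane.
OppPoint toOpponent(int r, int g, int b) {
  return { r - g, b - g };
}

// A "signature" modeled here as a simple rectangular region of the plane.
struct Signature { int rgMin, rgMax, bgMin, bgMax; };

// A pixel matches a signature if its opponent-space point falls inside
// the signature's boundary.
bool matches(const Signature& s, OppPoint p) {
  return p.rg >= s.rgMin && p.rg <= s.rgMax &&
         p.bg >= s.bgMin && p.bg <= s.bgMax;
}
```

For example, a "reddish" signature might cover large positive r − g values: a pixel like (200, 40, 30) lands at (160, −10) and matches, while a green pixel does not.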

Next question: it stores a block of information about the detected object. What does that mean?

Each detected object is returned as a “block”: x and y location, plus width and height.
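As a sketch, such a block can be modeled as a plain C++ struct. The field names follow the Pixy Arduino library's block data; the centerOffset helper is illustrative, not part of the library.

```cpp
#include <cstdint>

// Mirrors the data Pixy reports per detected object
// (field names follow the Pixy Arduino library).
struct Block {
  uint16_t signature; // which color signature (1-7) matched
  uint16_t x;         // center x of the object, 0..319
  uint16_t y;         // center y of the object, 0..199
  uint16_t width;     // bounding-box width in pixels
  uint16_t height;    // bounding-box height in pixels
};

// Illustrative helper: signed horizontal offset from the image center,
// negative when the object is left of center, positive when right.
int centerOffset(const Block& b) {
  return static_cast<int>(b.x) - 160;
}
```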

What information does it store, and if I call the Pixy library, what information does it send to the Arduino?

You might find these pages useful:

http://cmucam.org/projects/cmucam5/wiki/Arduino_API
http://cmucam.org/projects/cmucam5/wiki/Porting_Guide

Hope this helps!

Edward

Hello Edward,

Thank you for giving me some idea about how Pixy detects objects. But a couple of questions arose, and some new questions came to my mind, so I hope you will help me understand these as well.

Like you said, it uses a complementary color space. Does that mean it uses the “opponent process theory” developed by Ewald Hering?
From my reading, the purpose of a complementary color space is to produce a grayscale color like white or black,
or we could also say the purpose is to make the image appear brighter by adjusting the contrast.

I just want to know why a complementary color space was used in Pixy, for my better understanding.

Next, regarding “plots the pixels on this color space”: does that mean a color histogram?

My overall purpose in working with Pixy is to understand its algorithm, how it works, and to read out its x-coordinate value so that I can run my robot only in the x direction.
But I think it is very hard to understand Pixy's complete color detection and tracking algorithm, because no paper has been published yet. If possible, I would like to ask for a link so that I can request it for academic purposes.

Moreover, in pixy.blocks[i].x, x is the location of the center of the detected object, with values ranging from 0 to 319. I didn't understand what these values represent. Are they pixel values in the x direction, or something else?

And the last question for this post: is it possible to record or capture video, either directly from the PixyMon app or by using the CMUcam5 Pixy while the program is running?

Sorry for asking so many questions, but I believe innovation starts by asking questions.

Like you said, it uses a complementary color space. Does that mean it uses the “opponent process theory” developed by Ewald Hering?
From my reading, the purpose of a complementary color space is to produce a grayscale color like white or black,
or we could also say the purpose is to make the image appear brighter by adjusting the contrast.

I just want to know why a complementary color space was used in Pixy, for my better understanding.

I believe it was chosen because it is a 2-dimensional space that is easy to use.

Next, regarding “plots the pixels on this color space”: does that mean a color histogram?

I suppose you could think of it like that.

My overall purpose in working with Pixy is to understand its algorithm, how it works, and to read out its x-coordinate value so that I can run my robot only in the x direction.
But I think it is very hard to understand Pixy's complete color detection and tracking algorithm, because no paper has been published yet. If possible, I would like to ask for a link so that I can request it for academic purposes.

I’m sorry Biswas, I don’t have a link. You might look here:

this algorithm is similar.

Moreover, in pixy.blocks[i].x, x is the location of the center of the detected object, with values ranging from 0 to 319. I didn't understand what these values represent. Are they pixel values in the x direction, or something else?

The x and y locations correspond to the center of the detected object.

And the last question for this post: is it possible to record or capture video, either directly from the PixyMon app or by using the CMUcam5 Pixy while the program is running?

You can use a 3rd-party application to do this, but PixyMon doesn't support video recording, just grabbing frames.

http://cmucam.org/projects/cmucam5/wiki/How_to_Grab_a_Frame

Hello Edward,

Thank you for providing lots of information regarding my curiosity, and especially for the link you provided for understanding the algorithm. It is very useful. I understand the meaning of x and y now: it is the centroid of the detected object for the selected signature. My confusion is about the range of these values; for example, x ranges from 0 to 319. What does that mean?
Also, I am a bit confused here: does “third-party application” mean capturing video from the same Pixy camera by using that app, or do you mean some other image source, like a mobile camera or another camera and app, for capturing the video? Because if I can use a third-party app for capturing video from Pixy, that would be wonderful.
If not, I suggest adding a video recording feature in the next version of the Pixy cam, if applicable.
Anyway, thanks for your answer.

Biswas Lohani V K wrote:

Thank you for providing lots of information regarding my curiosity, and especially for the link you provided for understanding the algorithm. It is very useful. I understand the meaning of x and y now: it is the centroid of the detected object for the selected signature. My confusion is about the range of these values; for example, x ranges from 0 to 319. What does that mean?

The x value can be any value between 0 and 319. If the object is on the far left of Pixy's image, it will be closer to 0. If the object is on the far right of Pixy's image, it will be closer to 319. The values correspond to pixel locations.
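One common way to use this pixel coordinate is to normalize it into a signed offset, so control code doesn't depend on the 0..319 range. A minimal sketch (the function name and the midpoint math are my own, not from the Pixy docs):

```cpp
// Hedged sketch: map Pixy's 0..319 pixel x-coordinate to a signed
// offset in [-1, 1], with 0 near the middle of the image.
// 159.5 is the midpoint of the 0..319 range.
double normalizedOffset(int x) {
  return (x - 159.5) / 159.5;
}
```

An object at the left edge gives roughly -1, at the right edge roughly +1, and near-zero values mean the object is close to centered.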

Also, I am a bit confused here: does “third-party application” mean capturing video from the same Pixy camera by using that app, or do you mean some other image source, like a mobile camera or another camera and app, for capturing the video? Because if I can use a third-party app for capturing video from Pixy, that would be wonderful.
If not, I suggest adding a video recording feature in the next version of the Pixy cam, if applicable.
Anyway, thanks for your answer.

There are 3rd-party applications for capturing video of your computer's screen. Just google “desktop recording program”.

Edward

Awesome! Now I am clear about the location of x and y.

0--------------------160--------------------319
Left                 Stop                Right

          <<<<<< ROBOT >>>>>>
            Line of Sight

For the middle location, x = 160, I want to stop my robot in the line of sight of the detected object.
For x > 160, I want to move my robot towards the left, perpendicular to the detected object, at the same time.
For x < 160, I want to move my robot towards the right, perpendicular to the detected object, at the same time.
For that I have done this:

if (pixy.blocks[0].signature == 1) {

  int xx = pixy.blocks[0].x;  // center x of the detected object, 0..319

  if (xx > 160) {
    xStepper(left, 100);      // object right of center: move robot left
  }
  if (xx < 160) {
    xStepper(right, 100);     // object left of center: move robot right
  }
  // xx == 160: neither branch runs, so the robot stays still
}

I am able to move the robot left and right as it detects the object moving in the respective direction.
But there is some delay, which means the detected object and my robot are not moving simultaneously along the line of sight.

Also, when I stop the moving object, my robot moves to and fro frequently. How can I modify this code so that my robot stops? When it first detects the object, I want my robot to stop. When the object starts moving, I want to move my robot in line of sight with the object, and when I stop the moving object, I want my robot to stop again. My robot is like a camera dolly. Sorry for the poor writing. Thank you.
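(A common fix for this kind of to-and-fro behavior, not given in the thread itself, is a dead band around the center: small pixel jitter near x = 160 then no longer toggles the motors. A minimal sketch of just the decision logic; the dead-band width of 15 is a tuning guess, not a value from the Pixy docs:)

```cpp
// Possible motions of the robot base.
enum class Move { Stop, Left, Right };

// Decide a motion from the block's x coordinate. Anything within
// deadBand pixels of the center counts as "centered" and stops the
// robot, which suppresses oscillation from measurement jitter.
Move decide(int x, int center = 160, int deadBand = 15) {
  if (x > center + deadBand) return Move::Left;   // object well right of center
  if (x < center - deadBand) return Move::Right;  // object well left of center
  return Move::Stop;                              // close enough: hold still
}
```

In the loop above this would replace the two bare comparisons, e.g. `if (decide(xx) == Move::Left) xStepper(left, 100);`.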

Hello Edward, I am writing a report on my work and planning to acknowledge you by name, since you also indirectly supervised my project through this forum. So I would like to request your permission to use your name, and if you don't mind, I would also like to ask your designation.
Hope for your positive response. Thank you.

Hello Biswas,
Feel free to use my name! My title is probably “Support Engineer”.

Edward

Hello Edward,

Thank you.

Biswas