
New Pixy 2. Do something when signature X approaches

Hello, I’m new at this, and this might be simple, but:

  1. I need to know if there is a worldwide forum or live chat (IRC) where I can talk to people who are working with Pixy, or if this is the right place.
  2. I want to have the Pixy camera installed so that when an object approaches Pixy and Pixy recognizes it, the robot does something, like, for example, moving an arm attached to the same structure.

Let’s say I want an arm to pick up something from a table only when an object reaches the Pixy camera.

I saw this video, and it shows an arm moving with Pixy recognition. But it looks like the arm already knows the positions of the dots where it puts the Lego square.

I want to have an arm that holds two balls, and depending on the object that approaches the robot, Pixy recognizes it and the arm picks up one ball or the other.

Do you think this is possible?

Any ideas?
Thank you! And hello to all.

Pixy can only control two servos itself. For a robot arm you’d need more servos; many microcontrollers (like Arduinos) can control lots of servos.
You have to read the object coordinates from Pixy and guide the arm to the object. That means the arm (or the hand) has to be an object Pixy can track, too.

First, thank you for your reply.
So is Pixy so precise that it knows exactly where the object is and can send the arm to that position?
For example, let’s say I push a ball towards the robot. Pixy, of course, follows it; would the arm push the ball precisely once the ball has stopped, not during the movement?

Thank you

Hi Juan,
Thanks for the message. It’s a good question. I’ve been told this is a classic computer vision problem: recovering 3D information from the 2D camera image. To recover the 3D information, you can use more than one camera (stereo), or you can take a picture, move the camera, take another picture, and so on. That lets you use some math to recover the 3D information. It can be challenging.

Probably the simplest way to do this is with the “ground plane assumption”. That is, if you have Pixy looking down on the floor (or any plane, like a table) at a 45-degree angle (for example) and you assume that detected objects are on the floor, you can create a simple mapping between Pixy’s y-coordinate (in the image) and the distance from the lens. That is, objects that are high in the image (low y-coordinate) are far away, and objects low in the image (high y-coordinate) are close. Once you have the distance from the lens, the other coordinates are fairly simple to recover, and you’ve effectively solved the 3D recovery problem.
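
To make the mapping concrete, here’s a minimal Python sketch of the ground-plane idea under a simple pinhole model. All the numbers (camera height, tilt, field of view, frame height) are made-up calibration values for illustration, not real Pixy specs; you’d measure your own:

```python
import math

# Hypothetical camera parameters -- replace with your own calibration.
CAM_HEIGHT = 0.30             # lens height above the table, metres
CAM_TILT = math.radians(45)   # downward tilt from horizontal
VFOV = math.radians(40)       # assumed vertical field of view
IMG_HEIGHT = 208              # assumed frame height in pixels

def ground_distance(y_pixel):
    """Map an image y-coordinate to horizontal distance from the lens,
    assuming the detected object sits on the ground plane."""
    # Angle of this pixel's ray below horizontal: the centre row looks
    # down by CAM_TILT; rows above centre (smaller y) look down less.
    ray_angle = CAM_TILT + (y_pixel - IMG_HEIGHT / 2) / IMG_HEIGHT * VFOV
    return CAM_HEIGHT / math.tan(ray_angle)

far = ground_distance(50)    # high in the image -> farther away
near = ground_distance(180)  # low in the image -> closer
```

With this one assumption, a single 2D coordinate is enough to recover distance, which is why the ground-plane trick is so popular for tabletop robots.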

Hope this helps!

Edward

If Pixy looked straight down and the robot hand had a fixed Z-height, the hand would only have to navigate in the XY plane. That's much easier than 3D recovery calculations.
But it also limits your possibilities. (The arm must not block the view of the ball.)
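
For the straight-down case, the pixel-to-table mapping is just a linear calibration. A sketch, where the frame size, offsets, and scale are all made-up numbers you’d replace by measuring two known table positions:

```python
# Hypothetical two-point calibration for a straight-down camera:
# with the lens looking vertically at the table, the mapping from
# image pixels to table XY is linear (offset + scale).

X0, Y0 = -0.10, 0.15   # table coords (metres) seen at pixel (0, 0) -- made up
MM_PER_PX = 0.6        # assumed scale, millimetres per pixel

def pixel_to_table(px, py):
    """Convert image pixel coordinates to table XY coordinates (metres)."""
    return (X0 + px * MM_PER_PX / 1000.0,
            Y0 - py * MM_PER_PX / 1000.0)  # image y grows downward
```

The arm controller can then drive to that XY at its fixed Z-height, with no 3D recovery needed.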

…. Also, the size of the object gives you the distance, assuming the object never changes its actual physical shape or size.
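
That size-to-distance idea follows from the pinhole model: apparent width in pixels is inversely proportional to distance. A sketch, where the focal length and ball diameter are assumed values (you’d calibrate the focal length once by measuring the pixel width at a known distance):

```python
# Pinhole-camera sketch: for an object of known, fixed physical size,
# distance = focal_length_px * real_width / pixel_width.

FOCAL_PX = 270.0      # assumed focal length in pixels (calibrate this)
BALL_WIDTH_M = 0.04   # assumed real ball diameter, metres (fixed)

def distance_from_width(pixel_width):
    """Estimate distance to the ball from its apparent width in pixels."""
    return FOCAL_PX * BALL_WIDTH_M / pixel_width
```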

It’s turning out to be more difficult than I thought to find a solution for this. Or maybe Pixy is not the solution, but I think Pixy can help.
For example, if another robot stops in front of my robot and the approaching robot has three balls, I want to pick up the red one or the round one.
Does anyone know, for example, if there are:

  1. Magnets that some sensor could detect.
  2. I’ve thought of yellow + black tape so that Pixy recognizes it.
  3. Did you know about this game?
    https://www.amazon.com/Milton-Bradley-Classic-Operation-Exclusive/dp/B00000DMFM

How would you make the robot pick up only things from the holes?
I’m trying to figure out how to make the robot understand that it should pick things up from the holes.

Thank you everyone for your help, I think it is challenging.

Edited:

What about putting some lights in the holes, or some type of light signal that my robot understands to mean “the place is over there”? Is there any sensor that communicates like that?

All I can see are things like this

As you can see in that video, there are two technologies: one that finds the right piece and picks it up, and a second that puts pieces in the right place because the X-Y-Z coordinates are already programmed.
I want to know about the first one: how and where to position the arm based on some type of shape or knowledge.

:slight_smile:

This one is better and, curiously, from the same company.

As you can see in this one there is a camera (that could be Pixy one in my robot) and the arm picks it up.

Hi Juan,

The vision system in those videos is designed for industrial applications. I don’t know the price for sure, but it’s probably several orders of magnitude more expensive than Pixy :slight_smile: and uses a more complex algorithm.

Pixy is great at tracking colored objects, so if your application involves colors, great! Tracking the location of a colored ball (or several colored balls) should be pretty easy. As others have pointed out, calculating the 3D coordinates might prove tricky, but you can probably extrapolate / calibrate based on apparent size of the object. Pixy does provide the area of each object it detects: the pixel height × width in the image.
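
As a sketch of how you might use that: pick the largest detected block of the colour you want, since larger pixel area usually means closer. The `Block` class below is a hypothetical stand-in with the same field names the Pixy2 API reports, not the real API object:

```python
from dataclasses import dataclass

@dataclass
class Block:
    """Hypothetical stand-in for a Pixy2 detection record."""
    signature: int  # colour signature taught to Pixy
    x: int          # centre x in the image
    y: int          # centre y in the image
    width: int      # pixel width
    height: int     # pixel height

def pick_target(blocks, wanted_sig):
    """Among blocks of the wanted signature, return the one with the
    largest pixel area (width x height), i.e. probably the closest."""
    candidates = [b for b in blocks if b.signature == wanted_sig]
    return max(candidates, key=lambda b: b.width * b.height, default=None)
```

Your main loop would read the block list each frame, call something like `pick_target`, and steer the arm toward the winner’s `x`/`y`.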

Hope this helps!

Jesse


Sure, every reply helps. Thank you.

Is there any other type of sensor, or could Pixy detect lights, for example?
Or magnets?
I’m thinking of a square made of light, for example.

Pixy can also detect light sources like LEDs. That’s probably the best solution when the ambient light changes: the LED will keep its brightness.

Cool! Like one LED or a shape made of LEDs? I have to investigate this approach. I think it’s better.
Thank you.
Thank you.

I tested it with three different colours.
You have to turn the brightness down very low, and the LEDs can cause overexposure pretty easily.
To make Pixy see them as a Color Code, the LEDs have to be very close to each other (closer than in the picture).

(image: rgb-led)
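
If the colour code is reported to your program as a single number whose octal digits are the signatures in order (e.g. signatures 1 then 2 reported as octal 12), a tiny decoder might look like this. That encoding is an assumption here; check what your firmware actually returns:

```python
def decode_color_code(code):
    """Split a colour-code value into its ordered signature digits,
    assuming the code is a number whose base-8 digits are the colour
    signatures in order (e.g. 0o12 -> [1, 2])."""
    digits = []
    while code:
        digits.append(code % 8)
        code //= 8
    return digits[::-1]
```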

I’ll try in daylight. Did you?
I’ll post my results here as soon as I get the cam.
Thanks for all the help.

Post more if you have more ideas; I’d appreciate it.