Image Processing and Object Tracking Robot


#21

Hey,

Have you guys played with these open-source robotics development platforms with huge global developer communities? They seem worth knowing about at the very least. I'll install them and start familiarizing myself with them today.

  1. ROS - http://www.ros.org/about-ros/
    “The open-source, Ubuntu-based Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.”
  2. Gazebo - http://gazebosim.org/
    “Robot simulation is an essential tool in every roboticist’s toolbox. A well-designed simulator makes it possible to rapidly test algorithms, design robots, and perform regression testing using realistic scenarios. Gazebo offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments. At your fingertips is a robust physics engine, high-quality graphics, and convenient programmatic and graphical interfaces. Best of all, Gazebo is free with a vibrant community.”

Here’s a comparison of the two: http://www.generationrobots.com/blog/en/2015/02/robotic-simulation-scenarios-with-gazebo-and-ros/


#22

@sande thanks for that info. Very interesting, and a stepping stone to get started. I will look at them over a couple of days just to get familiar. I am eager to start toying around.


#23

Hi, I am using some ROS packages for a project I’m doing. I use MAVROS (MAVLink over ROS) to enable messaging between my autopilot, on-board computer and ground-based GCS. I’m happy to contribute to a project using ROS, particularly PTAM and ORB-SLAM.


#24

This is awesome @Oscar!

From your practical experience, and with the global picture in mind (roboticists should be able to find work worldwide), what do you think about basing a learning project on ROS?


#25

Thanks @AYSande, I will install them too and familiarize myself with them.


#26

For robot navigation we will need a mini computer for image processing (in this case a Raspberry Pi) and a controller for the wheels and camera gimbal, which will consist of a motor driver to control the four wheels on the robot and at least two servos for the gimbal.

I propose differential steering for the robot’s movement. This can also be a stepping stone to a self-driving car.

I have done a little design of how the AGV part of the robot might look, though it is not yet complete.
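A minimal sketch of the differential-steering idea: each side of the four-wheel base is commanded from a desired forward speed and turn rate. The function name and the default track width are my own placeholders, not values from the design.

```python
def differential_drive(v, omega, track_width=0.2):
    """Convert a desired forward speed v (m/s) and turn rate omega
    (rad/s) into left/right wheel speeds for a differential-drive
    robot whose wheel tracks are track_width metres apart."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

# Driving straight gives equal speeds; turning in place gives
# opposite signs on the two sides.
```

The two speeds would then be mapped to PWM duty cycles on the motor driver, with each wheel pair on one side sharing a command.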


#27

Hello Guys,
Sorry, I was away from keyboard.

@cyrus I think the execution timing is subject to testing and practical experiments.


#28

Hey @Cyrus,

What are your requirements for this project, and what workflow do you generally apply to convert the requirements into a product?

Regards.


#29

The requirements for this project are as shown below.

Option A consists of what I have at my disposal. The chassis of the robot is made locally using Perspex. In this option we use a Raspberry Pi camera and buy 4 motors and 4 wheels individually, so the cost is a little higher.

https://www.dropbox.com/s/cnwuufv14mmaxse/Requirments%20option%20A.xlsx?dl=0

The other option is to buy a ready-made chassis with wheels and motors and use a USB camera instead of the Pi camera. This reduces the cost.

https://www.dropbox.com/s/takcghkg2oe8jkf/Requirments%20option%20b.xlsx?dl=0

NB: For image processing alone you just need the Pi and camera.

I will post the work schedule later on.


#30

Fun way to learn robotics!


#31

Learn robotics for free on edX. The course is self-paced and covers the following:

  • Represent 2D and 3D spatial relationships, homogeneous coordinates
  • Manipulate robot arms: kinematic chains, forward and inverse kinematics, differential kinematics
  • Program and navigate mobile robots: robot and map representations, motion planning
  • Plan complete robot systems
  • Develop present and future applications for robots

Register and enroll here
https://www.edx.org/course/robotics-columbiax-csmm-103x#!


#32

Hi guys, I usually have a personal handicap in how I tackle problems or learn: if I don’t know what I am trying to solve, I am unable to learn anything. What I mean is that we ought to know what we want our robot to do and what kind of application we need it for. Then we can outline the control strategies we need, and hence the platform that supports those strategies. For us to be able to focus on any robotic application, I would suggest we outline the following:

  1. What task is to be accomplished by our robot
  2. What control strategies we require, for both the mechanics and the end effectors

#33

Hello Guys, below is a suggestion of the tasks the robot is going to carry out.
The robot is placed in a field containing three objects of different colors but the same shape, and collection points of the same colors as the objects but a different shape, as shown below.

The robot is to identify the circular object using the camera, pick it up, and place it at the collection point matching the object’s color (I have just explained it simply).

#Mechanics
The robot will have two main parts: an AGV and an arm (manipulator). The AGV is for navigation and the arm is for pick-and-place.
The camera used to see the object is mounted on the robot on a gimbal that can rotate at least 150 degrees, coupled with a gyroscope (MPU6050). The gyro takes the reading of the angle rotated by the gimbal after the object has been identified; these readings are passed to the MCU, which uses another gyro mounted on the AGV to navigate to the location of the object.

A ping sensor is used so the AGV stops at the right distance for the picking or placing process. The steering of the robot is differential steering using four powered wheels (each wheel is driven independently). Identification by the camera is based on the color and shape of the object, hence the different shapes for the picked objects and the collection points.
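As a sketch of the color-and-shape idea: with OpenCV one would typically threshold by color, find contours, and then decide circle vs. rectangle from each contour’s geometry. The decision step can be a pure function of quantities OpenCV provides (area from `cv2.contourArea`, perimeter from `cv2.arcLength`, vertex count from `cv2.approxPolyDP`); the 0.85 circularity threshold below is an assumed tuning value, not a tested one.

```python
import math

def classify_shape(area, perimeter, vertex_count):
    """Classify a detected contour as 'circle' or 'rectangle' using
    circularity = 4*pi*A / P^2 (about 1.0 for a circle, about 0.785
    for a square) plus the polygon-approximation vertex count."""
    circularity = 4.0 * math.pi * area / (perimeter ** 2)
    if vertex_count == 4 and circularity < 0.85:
        return "rectangle"
    if circularity >= 0.85:
        return "circle"
    return "unknown"

# Unit circle: area = pi, perimeter = 2*pi -> circularity 1.0
# Unit square: area = 1,  perimeter = 4    -> circularity ~0.785
```

Combining this with a color mask then distinguishes the round objects from the rectangular collection containers of the same color.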

@Simo I hope I have been clear enough in both explanations.

I think this is simple enough for us to begin with; then at the meetup scheduled for 5th November we can discuss more.
I will be making a prototype based on the above and will post the progress here; please provide as many suggestions as you can.

Thanks :slight_smile:


#34

Thanks @Cyrus, that seems like a good start. And it looks like quite a challenge. I have a few questions.

  1. Will the robot be pre-programmed with the coordinates, or will it have to search for the objects?

  2. Will the collection points take a certain form, or will the robot have to figure out the best way to approach and drop the collected items?

  3. Will the movement be autonomous, or will we have to give a few guided clues such as lines, dots and so forth?


#35

@Simo

  1. The robot will not be pre-programmed with the coordinates. The gyro on the camera will give the (X, Y) position of the object; this will be passed to the second gyro on the AGV. The AGV will then turn to face the object and move towards it with the help of the sonar range finder until it is close enough (the sonar thus gives the third dimension, i.e. depth (Z)).
    The gyro on the AGV ensures the robot maintains a straight line as it moves.

  2. The collection points will be rectangular (rectangular containers) to distinguish them from the objects.

  3. The movement will be autonomous, with no guiding lines or anything of the sort; however, the robot will be pre-programmed with which object to search for first, second and third, but that’s all.
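Given those answers, the object’s position in the robot frame reduces to simple trigonometry on the gimbal pan angle and the sonar range. The function name and the frame convention (x forward, y to the left) are my assumptions for illustration, not part of the agreed design.

```python
import math

def target_position(pan_angle_deg, sonar_range_m):
    """Estimate the object's (x, y) offset in the robot frame from
    the gimbal pan angle reported by the gyro and the sonar range.
    Convention assumed here: x is forward, y is to the robot's left."""
    theta = math.radians(pan_angle_deg)
    return (sonar_range_m * math.cos(theta),
            sonar_range_m * math.sin(theta))

# A pan angle of 0 degrees means the object is dead ahead.
```

The AGV’s own gyro would then be used to turn through `pan_angle_deg` and hold that heading while the sonar closes the range.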


#36

Great @Cyrus!

That makes for a good set of requirements.

How can we jump in and assist before the meetup? Do you have design files to share?

Regards.


#37

Hello all,
I have liked this discussion!
From what I know about image processing, OpenCV is cool! But there’s a small problem… In the future, you may want to advance your vision system so that you just show the robot a few input images and it figures out (on its own) what a million other similar images look like. This is (supervised) deep learning, which is essentially machine learning.
OpenCV is general-purpose, and you will have a hard time training deep learning algorithms with it (should you ever want to get your hands dirty with ~a million images). In that case, the TensorFlow library wins in computer vision (in my opinion).
In short, I prefer TensorFlow.
By the way, I have not read this thread keenly. I was mostly attracted by computer vision and image processing, which is common in machine learning (a component of my research area).
I think it’s fine to go with OpenCV for computer vision for now! However, keep an eye on machine learning techniques for computer vision.


#38

Welcome aboard @Sirmaxford :slight_smile:, great to have you. Great insight there, we appreciate it.

Thank you


#39

Just joined the group and I hope I’m not late to the Image Processing party. I have some experience with OpenCV and it’s great, despite the steep installation and learning curve on the Raspberry Pi + Arduino. @Sirmaxford I’m very interested in TensorFlow and hope to learn more here.


#40

Hello guys, on Saturday we will have a meetup on image processing. We will explore the different image processing and machine learning libraries available for us to utilize. Some have been mentioned here in the forum, e.g.

  1. OpenCV - http://opencv.org/

  2. TensorFlow - https://www.tensorflow.org/

Among others that we will suggest. We will be using the Raspberry Pi as our computing platform; if you have one, bring it with you :slight_smile:
At the meetup we will also discuss possible applications of image processing in industry and robotics.
The meetup will also mark the beginning of a collaborative build of a vision robot.

See you at gearbox:slight_smile:

Thanks