
Maze Navigator: Machine Learning 

ROS2

Group of 3: 3/07-3/14

Overview

For this project, we had to teach our robot to recognize 7 predetermined objects in order to successfully navigate a path. Once the robot sees one of these objects from 6 inches away, it turns 90 degrees left or right to travel to the next object.


To train the robot to recognize the objects, we used Google's Teachable Machine. We used a camera to recognize the objects and an ultrasonic sensor to determine when the robot was 6 inches away.
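In pseudocode, the behavior looks roughly like the sketch below. The helper names (classify_frame, read_distance_in, turn_degrees) are illustrative stand-ins for the camera, ultrasonic, and Create 3 calls covered later on this page, stubbed out here so the loop itself is clear:

```python
# Hypothetical sketch of the navigation loop: watch for a trained object,
# and turn 90 degrees once the ultrasonic sensor reads 6 inches or less.
import time

TURN_FOR_OBJECT = {"cone": -90, "duck": 90}   # illustrative object names

def classify_frame():                          # stub for the camera model
    return "cone", 0.95                        # (label, confidence)

def read_distance_in():                        # stub for the ultrasonic read
    return 5.8                                 # inches

def turn_degrees(degrees):                     # stub for the rotate action
    print(f"turning {degrees} degrees")

while True:
    label, confidence = classify_frame()
    if confidence > 0.8 and read_distance_in() <= 6.0:
        turn_degrees(TURN_FOR_OBJECT.get(label, 90))
    time.sleep(0.1)                            # poll at ~10 Hz
```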


Creating the Object Recognition Model

We used Google's online platform Teachable Machine to create the machine learning model. We took about 100-200 photos of each object in varying orientations. We knew where the maze would take place, so all of the photos were taken there to ensure the ambient lighting stayed the same.
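For reference, here is a minimal sketch of running a Teachable Machine image model, assuming its standard Keras export (keras_model.h5 plus labels.txt) and the 224x224, [-1, 1] preprocessing from Teachable Machine's own sample code; it is illustrative rather than our exact code:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
# labels.txt lines look like "0 ObjectName"; keep just the name
labels = [line.strip().split(" ", 1)[1] for line in open("labels.txt")]

def classify(image_path):
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0  # scale to [-1, 1]
    pred = model.predict(arr[np.newaxis, ...], verbose=0)[0]
    best = int(np.argmax(pred))
    return labels[best], float(pred[best])  # (object name, confidence)
```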

Coding: Action Clients and Action Servers

One of the biggest challenges in this assignment was creating action clients and servers that worked the way we wanted. Due to the asynchronous nature of action clients, our team struggled to get all of the action clients and functions to run in the order we intended. In particular, the action client that drove the robot forward would keep running when the rotation action client was called, which we did not want. Our final code can be found at the following GitHub link:
https://github.com/theresangyn778/ME35-Intro-to-Robotics/tree/main/Create3MachineLearning
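One way to keep the actions sequential is to block on each goal's result before sending the next one. Below is a minimal sketch of that pattern using the Create 3's DriveDistance and RotateAngle actions from irobot_create_msgs; the node name, distances, and angle values are illustrative, not our exact code:

```python
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from irobot_create_msgs.action import DriveDistance, RotateAngle

class MazeDriver(Node):
    def __init__(self):
        super().__init__('maze_driver')
        self.drive = ActionClient(self, DriveDistance, 'drive_distance')
        self.rotate = ActionClient(self, RotateAngle, 'rotate_angle')

    def run_action(self, client, goal):
        """Send one goal and block until it finishes, so the next
        action cannot start while this one is still running."""
        client.wait_for_server()
        send_future = client.send_goal_async(goal)
        rclpy.spin_until_future_complete(self, send_future)
        result_future = send_future.result().get_result_async()
        rclpy.spin_until_future_complete(self, result_future)

def main():
    rclpy.init()
    node = MazeDriver()
    node.run_action(node.drive, DriveDistance.Goal(distance=0.5))  # meters
    node.run_action(node.rotate, RotateAngle.Goal(angle=1.57))     # ~90 deg left
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```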

Design

At first we wanted to use only the Pi camera and the image recognition model to determine when an object was 6 inches away. However, we realized that the model is built to identify objects and report confidence levels for those identifications, which reveal nothing about distance. Therefore we incorporated the ultrasonic sensor.
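The distance check itself is simple. Here is a minimal sketch, assuming an HC-SR04-style ultrasonic sensor read through the gpiozero library (the GPIO pin numbers are illustrative):

```python
from gpiozero import DistanceSensor

sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)

STOP_DISTANCE_M = 6 * 0.0254  # 6 inches converted to meters

def at_landmark():
    # gpiozero reports distance in meters, capped at max_distance
    return sensor.distance <= STOP_DISTANCE_M
```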

We primarily used 80/20 extrusion as the support structure for the camera and ultrasonic sensor components. We then 3D-printed or laser-cut mounts for the camera and ultrasonic sensor to screw into.

Results

Our robot ran smoothly and was able to successfully identify objects and turn correctly the majority of the time.
