Day Fourteen – Project Conclusion

So the project has come to an end… but what have we achieved?

 

Our final objective was to get the Pi/Roomba/ROS/laser combination set up and working, such that the robot can move around an environment and avoid obstacles. As a further goal, we aimed to have the robot move around a room, map it out and then navigate around it. Most of this has been completed, apart from the part that enables the robot to navigate around a self-created map and be told to move to certain locations within it.

 

In fact, by the end of this week, we managed to finish all of the initial goals that we set out to complete on the first day of the project. Along the way, a lot of changes were made to how we achieved this goal (e.g. using a laser scanner instead of a Kinect), but nevertheless we managed it. Most of the last week was spent implementing the navigation stack into the project, and the first goal was to get the robot mapping out the room. By the end of the week we managed this, after a lot of complications! Simply understanding how the navigation stack works was a feat in itself, due to the vast amount of documentation and the number of files that need to be set up, particularly around transformation trees. After some guidance from our supervisor, we fixed a lot of problems and discovered that the root of one big problem was that the time and date set on the Raspberry Pi were not accurate, which made any recorded ROS messages unusable. Combining this understanding with our existing obstacle-avoidance package, we were able to create a number of maps, two of which are shown below!
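For anyone hitting the same problem: the Raspberry Pi has no real-time clock, so its date drifts between boots, and any bag recorded with a wrong clock ends up with useless timestamps. As a rough sketch (assuming the Pi has network access and the ntpdate tool installed; the server address is just an example), the clock can be synced before launching any ROS nodes like so:

sudo ntpdate pool.ntp.org
date

The second command simply confirms the time has been set correctly.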

 

This first one was created quite simply by setting up a small, roughly square enclosure in which the robot was placed. After recording it moving around for a short period of time, the rosbag file was played back into the gmapping stack, which uses SLAM, and thus the map was created. While the map was being built, it was impressive to watch it all play back in RViz, which is something that could quite easily be shown as a demonstration and/or visualization at Open Days etc.
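For reference, the playback step looks roughly like the following (the bag and map filenames below are placeholders; use_sim_time tells the nodes to use the bag's recorded timestamps rather than the wall clock):

rosparam set use_sim_time true
rosrun gmapping slam_gmapping scan:=scan
rosbag play --clock square_enclosure.bag
rosrun map_server map_saver -f square_map

Once the bag has finished playing, map_saver writes out the occupancy grid as an image plus a YAML metadata file.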

This second map is our attempt at mapping an entire room, which didn’t turn out perfectly and has some notable faults. The empty section in the centre, where no map has been created, is an area where obstacles are located, but they do not appear as they should. This is most likely due to the number of rotations the program performed while mapping the area, which meant the odometry became very inaccurate. It could be greatly improved simply by making the program rotate less often, or more slowly. Unfortunately, we did not manage to implement our ultimate goal: a system which allows the user to specify points in the map and have the robot autonomously navigate to that location. From inspection, it would not be hard to add this, because the navigation stack within ROS handles a lot of it by itself.
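As a sketch of what that would involve (we did not implement this; the coordinates below are just an example and assume a configured move_base), a goal can be sent to the navigation stack either with RViz’s 2D Nav Goal tool or by publishing a PoseStamped directly:

rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}'

move_base then plans a path through the saved map and drives the robot towards that point, re-planning around obstacles as it goes.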

In terms of the actual robot set-up itself, the final product looks something like the following!

The top half of the image shows the guts of the robot, which also highlights the set-up used during the project. Because we have a lot of components that draw a lot of power, two battery packs were needed to ensure that constant and sufficient power was being supplied to the system. It is possible to power it from just one battery pack, but it then becomes unreliable. At the moment this is quite a hacked-up and janky set-up, but it does the job and the aesthetics can be sorted later ;-). A lot of thought went into the set-up of this robot, and various different arrangements were tried and tested.

 

Now that the project is over, the whole team is happy that we managed to achieve a lot in just three weeks, and we have genuinely enjoyed the project throughout. There have certainly been frustrating times (most commonly due to the Pi causing problems!) but the team managed to work through them and complete everything in a timely fashion. We hope we have set a strong foundation for some other budding ROS users to come along and continue working on this project!

7 thoughts on “Day Fourteen – Project Conclusion”

  1. Ah, OK. I wrote the original ROS Hokuyo driver but don’t recall that parameter, perhaps it has changed with updates.

    This quote is key: “where one member of our team spent a lot of the first week of the project simply compiling dependencies on the pi.” That is what I’d like to avoid. So if you ever decide to document that process and would like to share that experience in the ROS community, it would be great to post it into the ROS wiki and let folks know via answers.ros. I’d be happy to help with that, as I think many ROS users would gain from that documentation.

  2. Thanks, very helpful. Looks like it runs on the R Pi just like on a standard machine, except for needing this line:
    rosparam set hokuyo_node/calibrate_time false
    How did you know you needed that parameter set?
    Is there an overall document of your project? I am specifically interested in lessons learned running the code on the R Pi. I am very familiar with ROS and using the Hokuyo with it, but I imagine that there were roadblocks related to the R Pi and its version of Linux.

    1. No problem.
      I think that command was taken from the ROS wiki page for the scanner and speeds up the startup of the driver slightly.
      Unfortunately there is no overall document for the project, apart from what we documented on the GitHub wiki pages, as this project was merely a base system that other projects (which are currently being undertaken) are designed to build on top of.
      Ha, yes, there were many setbacks in terms of using the pi and the wheezy version of Debian we used, to the point where one member of our team spent a lot of the first week of the project simply compiling dependencies on the pi. Also, we were originally going to mount a Kinect/Asus Xtion onto the robot, but due to time limits and problems with the drivers we scrapped that.

    1. Hi Dan,

      I’m guessing you can’t see it because it leads to a link on a private GitHub repository, which was a silly mistake on my part. There isn’t a whole lot of information on that page, so I’ve just pasted its entire contents below:

      1. Check out an SVN copy of the driver

      – Checkout from: http://www.ros.org/wiki/hokuyo_node
      cd ~/ros_workspace OR roscd OR Any place where $ROS_PACKAGE_PATH points to
      svn co https://code.ros.org/svn/ros-pkg/stacks/laser_drivers/trunk/hokuyo_node

      2. Get Dependencies and compile

      rosdep install hokuyo_node rviz
      rosmake hokuyo_node rviz

      3. Power on the Hokuyo laser scanner and ensure the power light is on.

      4. Configure Hokuyo laser scanner

      Make sure that hokuyo_node will be able to access the laser scanner itself.
      – List permissions using ls -l /dev/ttyACM0
      – You should see something similar to this:
      crw-rw-rw- 1 root dialout 166, 0 Aug 30 15:31 /dev/ttyACM0
      – If you don’t see this or only see crw-rw---- you need to run:
      sudo chmod a+rw /dev/ttyACM0

      5. Open a new terminal and run roscore

      6. Set up hokuyo parameters correctly

      – The hokuyo node needs to be set up with the right port; this can be done by running the following:
      rosparam set hokuyo_node/calibrate_time false
      rosparam set hokuyo_node/port /dev/ttyACM0

      7. In a new terminal, run the Hokuyo node

      Run:
      rosrun hokuyo_node hokuyo_node

      You should see something like the following if it is working correctly:
      [INFO] [1346330858.255866180]: Connected to device with ID: H0613856
      [INFO] [1346330858.664850549]: Streaming data.

      8. Viewing the data

      To see that everything is working and being published in ROS, in a new terminal run:
      rosrun rviz rviz -d `rospack find hokuyo_node`/hokuyo_test.vcg
      Zoom out in the center viewing area and zoom into the grid to see the LaserScan streaming.

  3. Cool project, this is very promising. I am wondering about a few things.
    Did you run gmapping on the Raspberry Pi, or was the rosbag played back on a different computer running gmapping?
    Do you think it could be possible to stream the laser data to a laptop and have that running gmapping and the navigation stack?

    1. The only things running on the Raspberry Pi are the laser scanner driver and the Roomba driver. All data is being streamed back to a different PC, which is running gmapping.
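      If you wanted to reproduce that split, it is just the standard ROS multi-machine configuration; roughly (the IP addresses here are placeholders), the PC runs the master and gmapping while the Pi only runs the drivers:

      # On the PC (master, gmapping, RViz)
      export ROS_IP=192.168.1.10
      roscore

      # On the Pi (laser and Roomba drivers)
      export ROS_MASTER_URI=http://192.168.1.10:11311
      export ROS_IP=192.168.1.20
      rosrun hokuyo_node hokuyo_node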
