Why ROS?

I have built a remote-controlled (RC) car and have done pretty much everything myself, starting from scratch (not the fastest way to get going). I designed my own microcontroller board, soldered it, and wrote the software for it. On the Jetson I run my own Qt-based “daemon” that communicates with the MCU board over USB and streams H.264 video to a PC using GStreamer. The PC runs a Qt-based controller application, also written by me (and it is very ugly). Even the communication protocol on top of UDP is self-written, because I wanted “smart retransmissions” on it.

I’ve learned a lot by making everything myself, but it’s also been slow going. So now I’m wondering whether I should try using e.g. ROS in my project.

So, I would be very interested to hear why, and especially how, you are using ROS in your projects.


I can answer by telling the history of my robotics project: MyzharBot.
I started writing code for my robot back in 2012. I had a motor control board and I needed high-level software to communicate with it.
“It’s easy,” I thought… and it was easy, so the first version of my SDK was born:
it was fully based on Qt, it used the Modbus protocol to communicate with the motor board, and I created a few widgets to control the robot.
After this step I started the “remoting” phase: QTcpSocket and QUdpSocket were my daily bread for a long time. I was not writing “robotics” code, but “computer science” code.

So last September I decided it was time to move to ROS… I had known about it since it was born, but I did not want to use something ready-made… I was wrong.
In less than one month I reached the same level that had taken me more than two years with my own code, and today I’m fairly close to full autonomy.

You can read the full story on my blog… this is the post I wrote when I decided to migrate to ROS:

Following the posts on my blog, you can see how fast my project has evolved since last September.

There is a phrase that sums up the “ROS philosophy”: “Why do we always need to reinvent the wheel?”

Thanks for the comments. I guess I need to polish the current version a bit so I can demo it better, and then start actually testing ROS. I still don’t quite understand what steps I’d need to take to get the same functionality I have now, but I’m sure it’ll become clear quickly :)

Our projects have some things in common: a Jetson for Linux, an STM32F4 for the RTOS. My IMU (LSM9DS0) is only half-working, but I’m hoping it won’t take too much to get it working properly. I could probably use something better for power distribution; right now I’ve just connected a bunch of DC-DC boosters to the 6.7 V battery.

The goals seem a bit different, though. I guess you are aiming for autonomous operation, while I would like good remote control.

EDIT: I noticed that your project also has some remote control possibility. Does ROS have some sort of remote control GUI and some fancy communication protocol between the GUI and the remote robot?

ROS has the Rviz GUI, which can be used to visualize the robot state, and there are a few ways to control the robot. If you watch my latest video you will see one of them: the Interactive Marker Twist Server.

A word of advice: although the Jetson is super fast, unless you manually disable all power-saving features (for example, by default only one of the Jetson’s four CPU cores is active), I wouldn’t recommend running rviz on the Jetson itself, as it’ll take performance away from the things you need (path planning, obstacle avoidance, etc.).

Through ROS you can easily set things up so that, as long as the two devices are on the same network, you can monitor all the ROS data being produced on the Jetson from your desktop (which runs rviz). You do this by exporting the ROS_MASTER_URI environment variable on your desktop to point to the IP of the Jetson (it’s a master/slave type of deal, where the Jetson is the master and your desktop is the slave). Here is a good example of how to set this up: http://wiki.ros.org/Robots/TurtleBot/Network%20Setup
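As a sketch of that master/slave setup (the IP addresses below are placeholders; substitute your own, and note that ROS_IP should be set on both machines so nodes advertise reachable addresses):

```shell
# On the Jetson (the ROS master) -- 192.168.1.10 is a placeholder address:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10
roscore &

# On the desktop ("slave") -- point at the Jetson's master,
# but advertise the desktop's own IP:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.20
rosrun rviz rviz
```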

Also, adding onto my previous comment, a cool thing I like about ROS is that you can stack multiple Jetsons (or whatever ARM/x86 machines you are running), set them all up as master/slave, and distribute different processes amongst them. I haven’t done it with the Jetson yet, but I have with the Radxa Rock (four of them). One board runs all the image processing, another runs just the navigation stack (about 130% CPU total), another runs my sensor drivers, and the last one is for manual control. I don’t think I really need a fourth one, but hey, the more the merrier, right?

I completely agree with you; in fact, in my guide to configuring the Jetson (http://myzharbot.robot-home.it/blog/software/configuration-nvidia-jetson-tk1/) you can find all of these settings.
I’ll add another piece of advice: running Rviz over a “home Wi-Fi network” is not really reliable. Sometimes it works really well, sometimes you get high latency. I’m going to connect my laptop directly to the Jetson, using it as a Wi-Fi access point.
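For reference, the power-saving settings mentioned above can be disabled along these lines. This is a sketch for the Jetson TK1’s L4T sysfs interface; the exact paths can differ between L4T releases, so verify them against the linked guide:

```shell
# Run as root on the Jetson TK1.
# Disable Tegra's automatic core shutdown so all four cores can stay online:
echo 0 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
# Bring the remaining cores online:
for i in 1 2 3; do
  echo 1 > /sys/devices/system/cpu/cpu$i/online
done
# Pin the CPU frequency governor to "performance":
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```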

I set the Jetson processor performance to maximum, but rviz still did not run. It’s a “segmentation fault” every time, and I have tried so much by now. Can someone please help or suggest a better way to do this?

I’m not familiar with rviz. In the case of a segmentation fault, though, it’s best to have a version compiled with debug symbols. You can try to run it under gdb and get a backtrace when it faults. strace can also log system calls, which may suggest where the failure is. In any case, processor performance settings are unlikely to change the fault (in the case of call-stack corruption, changes to debug symbols or environment could possibly change the error, but this is unlikely).

So… is rviz something you can compile with debug symbols? Can you try running it under gdb (launch it with gdb, ‘r’ to run), then get a backtrace (‘bt’) after it fails (‘q’ to quit gdb)?
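A possible session, assuming a ROS Indigo install under /opt/ros/indigo (adjust the path for your distro):

```shell
# rosrun just locates and executes the binary, so gdb can wrap it directly.
# The path below assumes ROS Indigo; alternatively,
# `rosrun --prefix 'gdb -ex run --args' rviz rviz` achieves the same thing.
gdb -ex run /opt/ros/indigo/lib/rviz/rviz
# After the SIGSEGV, at the (gdb) prompt:
#   bt      # print the backtrace
#   quit    # leave gdb
```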

I do not think it’s a performance problem.

Try this:

rosrun rviz rviz

if it does not work, you can try this:

export OGRE_RTT_MODE=Copy
rosrun rviz rviz

or other solutions from here:

I tried both these methods, but neither helped.
I used gdb to debug:
“Program received signal SIGSEGV, Segmentation fault”
from /lib/arm-linux-gnueabihf/libpcre.so.3, and the backtrace says “previous frame identical to this frame (corrupt stack)”.

Also, I flashed the Jetson using the flash.sh script. It is version R21.3 and has the Hokuyo and navigation stacks installed.

A corrupted stack frame is an interesting bug. The error at least points to libpcre.so.3. The debug symbols for it are in the package “libpcre3-dbg”, which you could install… not sure it will help much, though.

The trouble with a corrupted stack is that the point of failure is not necessarily the point seen as the symptom. Since gdb detects the failure in the library and not in the main program, the library’s debug symbol package might help figure out what part of the library was called, although the actual bug is probably in its caller and not in libpcre itself. Add that debug symbol package, then try gdb again and see if more information is offered.

One thing I think about with stack corruption is that perhaps there is a mismatched version of some package in the group of packages working together, or perhaps bad data where a NULL-terminated string was expected. So make sure packages in general are updated: apt-get update, apt-get upgrade. I see libpcre.so.3.13.1 links only to libc.so.6, and I’d have very high confidence that libc is not the source of the stack corruption.
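A quick way to check both points; the library path matches the error message above, and “libpcre3-dbg” is the Ubuntu debug-symbol package name:

```shell
# Confirm what the library links against (expect only libc and the loader):
ldd /lib/arm-linux-gnueabihf/libpcre.so.3
# Bring all packages up to date in case of a version mismatch:
sudo apt-get update && sudo apt-get upgrade
# Optionally install the libpcre debug symbols for a more readable backtrace:
sudo apt-get install libpcre3-dbg
```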

I will look into this again sometime. I got rviz running on a remote desktop, and that serves my purpose pretty well for now. Thanks for your help; I will try to do as suggested ASAP.

@jaghvi - I found that if I recompiled robot_model from source, the rviz segmentation fault was fixed. This apparently is an issue on ARM with the Collada library doing a string comparison that seg faults. This is for ROS Indigo.
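A sketch of rebuilding robot_model from source in a catkin workspace on ROS Indigo; the repository URL and branch name are my assumptions, so check the package’s wiki page for the actual source location:

```shell
source /opt/ros/indigo/setup.bash
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
# Repository URL and branch assumed; verify against the robot_model wiki page:
git clone -b indigo-devel https://github.com/ros/robot_model.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y   # install build deps
catkin_make
source devel/setup.bash   # the source build now shadows the binary package
```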

@Kangalow have you recompiled the whole ROS from source or only the robot_model module?

@Myzhar Just robot_model.

Did you try that?
I also have a segfault with libpcre.so.3.13.1 :-(

Any other hints?


Kangalow has more info about how he puts together ROS on his site: http://jetsonhacks.com/2015/05/27/robot-operating-system-ros-on-nvidia-jetson-tk1/ though I’m not sure if anything’s changed significantly in ~1 year.

libpcre segfaulting seems rather odd; Perl-compatible regular expressions are pretty common. I’m not sure whether it was compiled from source or installed as a package, but however it was installed, it might be good to remove it and try the other installation method.