I’m interested in 3D object pose estimation with the Pose CNN Decoder, to use it for recognizing metal parts in a bin. Is it possible to use a Jetson Nano for this purpose with the supported D435 3D camera?
How can I get the grisp info that I want to send to the robot so it can pick a part?
Is the Nano suitable for segmentation etc.?
Next question, about laptops: which laptop do you recommend for installing the Isaac SDK?
How much RAM, and what CPU, GPU, etc.?
Yes, it should do the job, at least if you don’t need 60 fps.
What is “grisp”?
Isaac SDK requires that your desktop system include a GPU with a compute capability of 3.5 or higher.
The CPU has to be powerful enough not to bottleneck the GPU.
8 GB of RAM (it might do the job with 4 GB, but that is not recommended).
Hope I could help a bit
Excellent, Planktos, thank you for the quick reply.
I was thinking about gripping the recognized part either with a gripper or with suction cups.
So can you describe the role of, for example, the Nano or the Xavier? Does data from the Jetson fly to the PC to be computed on the PC’s GPUs, and then get sent back to the Nano? Am I right?
To use Isaac, do I need a strong laptop or a Xavier?
You are welcome,
For suction, if your object is a cube for example, you may want the arm to aim the suction system so it is aligned with one of the cube’s faces; to do that you can use Isaac manipulation, use ROS, or do it yourself.
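If you go the “do it yourself” route, the alignment step boils down to pointing the suction tool’s approach axis along the negative of the detected surface normal. Here is a minimal stdlib-only Python sketch of that idea (my own helper names, not an Isaac or ROS API):

```python
import math

def quat_from_two_vectors(a, b):
    # Shortest-arc unit quaternion (w, x, y, z) rotating unit vector a onto b.
    dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    if dot < -1.0 + 1e-8:
        # Opposite vectors: 180-degree turn about any axis perpendicular to a.
        ax = (0.0, -a[2], a[1]) if abs(a[0]) < 0.9 else (-a[1], a[0], 0.0)
        n = math.sqrt(ax[0]**2 + ax[1]**2 + ax[2]**2)
        return (0.0, ax[0]/n, ax[1]/n, ax[2]/n)
    cross = (a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0])
    w = 1.0 + dot
    n = math.sqrt(w*w + cross[0]**2 + cross[1]**2 + cross[2]**2)
    return (w/n, cross[0]/n, cross[1]/n, cross[2]/n)

def suction_orientation(plane_normal):
    # Approach along the negative surface normal so the cup lands flat
    # on the detected plane; the tool z-axis is assumed to be the
    # approach axis (a convention, not something Isaac mandates).
    n = math.sqrt(sum(c*c for c in plane_normal))
    approach = tuple(-c/n for c in plane_normal)
    return quat_from_two_vectors((0.0, 0.0, 1.0), approach)
```

With a normal pointing down at the camera, `suction_orientation((0, 0, -1))` returns the identity quaternion, i.e. no reorientation needed.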
You need to train the AI on the PC and then export your app to the Nano to run it there. Running it on the Nano is not strictly necessary, because the PC can run the app as well, so you can try it without a Nano. And once the app runs on the Nano you don’t need the PC anymore, because the Nano is autonomous.
So you need a “strong” PC to train the AI, try your app, and send the compiled app to your NVIDIA hardware; then you can run your app on a Nano, a Xavier, or other compatible NVIDIA Jetson hardware without the PC.
Good morning Planktos.
We have three ways to do my task: graphically using Unity, using ROS, or writing codelets.
I’m glad Isaac implements Python codelets. For now I’m not familiar with the Isaac object names and have no idea how to dig in. But since NVIDIA made Isaac so integrated with Unity, maybe that way is the most convenient and straightforward. I saw that BMW is using Isaac for five main bots; one of them is a pick bot. I don’t think anyone at BMW was writing codelets when they had access to Unity.
OK. How about 3D sensors? I see the list of supported devices is quite short. Can you advise me which sensor is the most popular?
Yes, it is easier to use a surface gripper.
There are three types of 3D sensors: ToF, stereo camera, and projection (lidar and other IR tech). I think a camera is best when you don’t need more than 50 m of range and have a small budget, but that’s just me.
A 3D camera sensor consists of two cameras for stereo, which enables depth computation by looking at the shift of objects between the two views, plus an IMU, which computes translation and rotation with a gyroscope, an accelerometer, and sometimes a compass. So you just need two cameras and an IMU.
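The “shift of objects between the two cameras” is the disparity, and depth follows from the standard pinhole stereo relation Z = f·B/d. A tiny self-contained sketch (my own helper, not an Isaac API):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo: depth Z = f * B / d, with the focal length f in
    # pixels, the baseline B between the two cameras in meters, and the
    # disparity d in pixels. Larger disparity means a closer object.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 640 px focal length and a 5 cm baseline, a 10 px disparity corresponds to a depth of about 3.2 m.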
Isaac supports a lot of sensors, but you need to modify JSON files and codelets to make them work.
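The JSON edits in question are ordinary config-file edits, so plain Python works; note that the node and key names below are invented for illustration and do not match any shipped Isaac app’s actual schema:

```python
import json

# Hypothetical Isaac-style app config; the "camera"/"driver" names here
# are illustrative only, not the schema of a real Isaac application.
app = {
    "name": "pose_estimation",
    "config": {
        "camera": {
            "driver": {"rows": 720, "cols": 1280, "fps": 30}
        }
    }
}

# Switch the (hypothetical) camera node to a lower resolution and framerate.
app["config"]["camera"]["driver"].update({"rows": 360, "cols": 640, "fps": 15})

# Write the modified config back out for the app to load.
with open("pose_estimation.config.json", "w") as f:
    json.dump(app, f, indent=2)
```

The point is just that swapping in an unsupported sensor usually means pointing the existing app graph at a different driver node and matching its parameters, rather than writing everything from scratch.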
NVIDIA’s Kaya and Carter robots use the Intel RealSense Depth Camera D435, but popular sensors like the RealSense and the ZED are expensive; you can instead use two cameras, or a stereo camera that costs around $90, plus a basic IMU for $5.
If you are doing this just for fun I would go for a RealSense, but if you want a product to sell I would use a stereo camera and a separate IMU, so you can trade off the accuracy of the IMU and the cameras and get a better price.
If you need high-speed pose estimation, you will need a synchronized global-shutter camera and a backup IMU.
I can give you more details about custom hardware if you are doing a commercial project; if not, just go for a D435 or a ZED 3D sensor.
Also, you have to choose the FOV of your camera carefully, because it greatly changes the accuracy of the tracking; the best choice depends on the environment: surrounding objects and distances. Here is a paper about that if you are interested.
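To make the FOV/accuracy trade-off concrete: for a fixed sensor width, a wider FOV means a shorter focal length in pixels, and the first-order stereo depth error dZ ≈ Z²·Δd/(f·B) grows as f shrinks. A hedged back-of-envelope sketch (helper names and the 0.25 px disparity-error assumption are mine):

```python
import math

def focal_px(width_px, hfov_deg):
    # Pinhole focal length in pixels from image width and horizontal FOV.
    return (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def stereo_depth_error(z_m, baseline_m, width_px, hfov_deg, disp_err_px=0.25):
    # First-order stereo depth uncertainty: dZ ~= Z^2 * dd / (f * B).
    # disp_err_px is an assumed subpixel matching error, not a measured spec.
    f = focal_px(width_px, hfov_deg)
    return z_m * z_m * disp_err_px / (f * baseline_m)
```

At 2 m range with a 5 cm baseline and 1280 px width, a 90° lens gives roughly 3 cm of depth uncertainty versus under 2 cm for a 60° lens, which is why a wide FOV costs you tracking accuracy.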
You gave me a lot of info I have to systemize :)
About the surface gripper feature: I don’t think I need more info than position (XYZ) and orientation (ABC) for gripping. For now I’m not able to make a full pick path controlled by Isaac, but it is interesting. What matters to me is finding the position for a proper grip; that’s all. But you gave me more to think about.

For KUKA robots, for example, there is the RSI tech package (Robot Sensor Interface); simply put, it is the ability to control the path not in the traditional way, through fixed points (programmed and checked by a person) along the trajectory of the robot and its tool, but through external correction. This feature is refreshing in how the trajectory can be computed. The Isaac system is almost newborn (I’m new, sorry if I said that wrong) and the list of supported (industrial etc.) robots is short, but I hope that will change. For a long time I worked with robots that have fixed paths (constant points, with no ability to change them from an external system). I’m not even touching on safety when using path planning with Isaac; that’s a long story, I guess.
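Since RSI came up: KUKA RSI exchanges small XML telegrams with an external system over UDP every interpolation cycle, with the external side sending corrections and echoing the robot’s IPOC counter. The telegram layout is configured per project, so the tag and attribute names below are only illustrative, not a fixed KUKA schema:

```python
import xml.etree.ElementTree as ET

def rsi_correction(ipoc, dx=0.0, dy=0.0, dz=0.0, da=0.0, db=0.0, dc=0.0):
    # Build an RSI-style XML telegram carrying a Cartesian correction.
    # "Sen", "RKorr", and the axis attributes are project-configured in
    # RSI; the names here are illustrative.
    root = ET.Element("Sen", {"Type": "ImFree"})
    ET.SubElement(root, "RKorr", {
        "X": f"{dx:.4f}", "Y": f"{dy:.4f}", "Z": f"{dz:.4f}",
        "A": f"{da:.4f}", "B": f"{db:.4f}", "C": f"{dc:.4f}",
    })
    # IPOC echoes the robot's interpolation-cycle counter back to it.
    ET.SubElement(root, "IPOC").text = str(ipoc)
    return ET.tostring(root, encoding="unicode")
```

In a real setup a string like this would be sent back over the same UDP socket within the cycle deadline; missing the deadline makes the controller stop the motion, which is part of why external path correction is treated so carefully.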
I see the Intel RealSense sensors are popular. A specialist wrote me that those sensors are appropriate for recognizing parts of at least laptop size. I’m not a pro in this area so I can’t judge, but I saw a comparison of the D435 and the good old, proven-by-millions Xbox 360 Kinect sensor, and the Kinect had better, more stable depth measurement.
You mentioned the Carter robot, and NVIDIA decided the D435 covers the system’s needs for optical and depth information to recognize where the robot is.
About using this leading-edge technology: it depends on how deeply you go into this area. I don’t have the budget or a huge team of specialists to implement this project on a large scale. All I know is that this autonomous technology will have a critical impact in factories etc. in the future. I’ve seen many small and mid-size warehouses that need some substitute for small transport because of the lack of workers and the costs, in the lean-management spirit ;)
The future pushes us toward autonomy in many areas of life. Period.