Abstract
This page explains how to use OpenRAVE. OpenRAVE provides a development environment for motion planning in robot applications. With OpenRAVE, you can simulate tasks such as manipulation by a robot arm and path planning by an autonomous ground vehicle.
Table of contents

Installation of OpenRAVE

This section describes how to install OpenRAVE on Linux.

Installation with apt-get


To install OpenRAVE, run the following commands:

  $ sudo add-apt-repository ppa:openrave/release
  $ sudo apt-get update
  $ sudo apt-get install openrave

Sample program


This section briefly explains the sample program and how to run it.

Location of the sample program


The sample programs are located in //jamaica/share/Material/openrave/; copy them yourself.

Explanation of the sample program (.py)

This section describes the sample program sample08-grab.py, in which an arm robot grasps a glass and lifts it.

  from openravepy import *
  import numpy
  env = Environment()
  env.SetViewer('qtcoin')
  env.Load('data/lab1.env.xml')

  raw_input("Press Enter to start...")

  robot = env.GetRobots()[0]
  manip = robot.GetActiveManipulator()
  ikmodel = databases.inversekinematics.InverseKinematicsModel(robot,iktype=IkParameterization.Type.Transform6D)
  if not ikmodel.load():
      ikmodel.autogenerate()

  manipprob = interfaces.BaseManipulation(robot)
  Tgoal = numpy.array([[0,-1,0,-0.23],[-1,0,0,-0.1446],[0,0,-1,0.85],[0,0,0,1]])
  res = manipprob.MoveToHandPosition(matrices=[Tgoal],seedik=10)
  robot.WaitForController(0)

  taskprob = interfaces.TaskManipulation(robot)
  taskprob.CloseFingers()
  robot.WaitForController(0)
  with env:
      robot.Grab(env.GetKinBody('mug1'))   # the glass as named in lab1.env.xml

  manipprob.MoveManipulator(numpy.zeros(len(manip.GetArmIndices())))

  raw_input("Press Enter to exit...")
  env.Destroy()
  • Lines 1-5: Set up the environment for the simulation.
  • 1: Import the modules from OpenRAVE.
  • 2: Import numpy, the array package for Python.
  • 3: Create an instance of Environment, the basis of any simulation in OpenRAVE.
  • 4: Set "qtcoin" as the viewer.
  • 5: Several xml files defining robots and objects are provided in the folder /usr/share/openrave-0.4/data; load lab1.env.xml from it.
  • Lines 7, 28: Wait until the Enter key is pressed.
  • Lines 9-13: Set up the inverse kinematics solver.
  • 9: Get the robot from the environment.
  • 10: Get the manipulator of the robot.
  • 11: Assign the inverse kinematics model to the robot.
  • 12-13: If the inverse kinematics model could not be loaded, generate it automatically.
  • Lines 15-18: Move the arm to the position of the glass.
  • 15: Create the base manipulation interface of the robot.
  • 16: Create the homogeneous transformation matrix of the target position for the inverse kinematics.
  • 17: Solve the inverse kinematics and move the arm.
  • 18: Wait.
  • Lines 20-24: Grasp the glass.
  • 20: Create the task manipulation interface of the robot.
  • 21: Close the fingers of the arm.
  • 22: Wait.
  • 23-24: Make the grasped glass follow the motion of the robot.
  • Line 26: Return the arm to its initial position.
  • Line 29: Shut down OpenRAVE safely.
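The Tgoal on line 16 packs the goal rotation and position of the hand into one 4x4 homogeneous transformation matrix. A standalone numpy sketch (independent of OpenRAVE) shows how it decomposes:

```python
import numpy

# goal pose from line 16 of the sample program
Tgoal = numpy.array([[0., -1., 0., -0.23],
                     [-1., 0., 0., -0.1446],
                     [0., 0., -1., 0.85],
                     [0., 0., 0., 1.]])

R = Tgoal[0:3, 0:3]  # rotation of the hand frame
t = Tgoal[0:3, 3]    # position of the hand in world coordinates [m]

# a proper rotation matrix satisfies R^T R = I and det(R) = +1
is_rotation = (numpy.allclose(R.T.dot(R), numpy.eye(3))
               and numpy.isclose(numpy.linalg.det(R), 1.0))
```

Any goal matrix you pass to MoveToHandPosition should satisfy this rotation check; otherwise the IK solver has no valid pose to reach.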

Explanation of the sample program (.xml)


Next, we explain the xml file (lab1.env.xml) that defines the environment.

lab1.env.xml, ll.5-7: Definition of the robot:

  <Robot file="robots/barrettsegway.robot.xml" name="BarrettWAM">
    <translation>-0.8 0.14 0.01</translation>
  </Robot>
  • Line 5: Read the xml file of the robot and set its name.
  • Line 6: Set the coordinates of the robot position.

lab1.env.xml, ll.44-52: Definition of objects (when you create your own):

  <KinBody name="pole">
    <translation>-0.312 0.416 1</translation>
    <Body type="static">
      <Geom type="box">
        <extents>0.05 0.05 1</extents>
        <diffuseColor>1 .2 .2</diffuseColor>
      </Geom>
    </Body>
  </KinBody>
  • Line 44: Set the object name.
  • Line 45: Set the coordinates (x, y, z) of the object position.
  • Line 46: Set the type of the body.
  • Line 47: Set the type of the body's geometry.
  • Line 48: Set the box extents (half of width, length, height) [m].
  • Line 49: Set the diffuse reflection color (R, G, B).
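Note that, by OpenRAVE convention, the values in <extents> are half-extents: each dimension of the rendered box is twice the corresponding value. For the pole above:

```python
# half-extents from the <extents> tag of the pole
half_extents = [0.05, 0.05, 1.0]

# full size of the box: width, length, height in metres
full_size = [2 * e for e in half_extents]
```

So the pole is actually 0.1 m x 0.1 m in cross-section and 2 m tall.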

* To create a HIRO simulator, write the xml file by referring to ./data2/hiro_test.env.xml in the sample program folder (openrave).

Running the sample program

Now let's actually run the sample program sample08-grab.py. To run it, execute the following command:

  $ python sample08-grab.py


The execution result is shown below.

Initial state
Default value


Before using OpenRAVE, it is necessary to create the models of the environment and the robots according to a predefined XML format or COLLADA format.

For simplicity, let's use the XML format.

Let's consider an environment file called myenv.xml:

  <!-- set the background color of the environment-->
  <bkgndcolor>0.9 0.9 0.9</bkgndcolor>
  <!-- set the initial camera translation-->
  <camtrans>3.187209 0.146988 2.772924</camtrans>
  <!-- set the initial camera rotation specified by rotation-axis-->
  <camrotationaxis>-0.676235 -0.664169 0.318726 143.890963</camrotationaxis>

  <Robot name="SimpleBot">
    <kinbody name="SimpleBotKin">
      <body name="base" type="dynamic">
        <mass type="box">
          <extents>0.1 0.2 0.25</extents>
        </mass>
        <geom type="box">
          <extents>0.1 0.2 0.25</extents>
          <translation>0 0 0.25</translation>
          <diffuseColor>0.6 0.6 1.0</diffuseColor>
          <ambientColor>0.6 0.6 1.0</ambientColor>
        </geom>
      </body>
    </kinbody>
    <translation>-0.6 0.0 0.0</translation>
  </Robot>

  <KinBody name="FloorWalls">
    <Body type="static">
      <Translation>0 0 0</Translation>
      <Geom type="box">
        <extents>2.5 2.5 0.005</extents>
        <translation>0 0 -0.005</translation>
        <diffuseColor>0.6 0.6 0.6</diffuseColor>
        <ambientColor>0.6 0.6 0.6</ambientColor>
      </Geom>
      <Geom type="box">
        <extents>2.5 0.01 0.2</extents>
        <translation>0 -2.5 0.2</translation>
        <diffuseColor>0.6 0.6 0.6</diffuseColor>
        <ambientColor>0.6 0.6 0.6</ambientColor>
      </Geom>
      <Geom type="box">
        <extents>2.5 0.01 0.2</extents>
        <translation>0 2.5 0.2</translation>
        <diffuseColor>0.6 0.6 0.6</diffuseColor>
        <ambientColor>0.6 0.6 0.6</ambientColor>
      </Geom>
      <Geom type="box">
        <extents>0.01 2.5 0.2</extents>
        <translation>2.5 0 0.2</translation>
        <diffuseColor>0.6 0.6 0.6</diffuseColor>
        <ambientColor>0.6 0.6 0.6</ambientColor>
      </Geom>
      <Geom type="box">
        <extents>0.01 2.5 0.2</extents>
        <translation>-2.5 0 0.2</translation>
        <diffuseColor>0.6 0.6 0.6</diffuseColor>
        <ambientColor>0.6 0.6 0.6</ambientColor>
      </Geom>
    </Body>
  </KinBody>


Some GUI parameters can be set in this file, such as camera translation, rotation, background, etc.
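For reference, the camrotationaxis tag encodes the rotation as an axis plus an angle in degrees; the equivalent rotation matrix can be built with Rodrigues' formula. A standalone numpy sketch (not an OpenRAVE API):

```python
import numpy

def rotation_from_axis_angle(axis, angle_deg):
    # Rodrigues' formula: R = I + sin(a)*K + (1 - cos(a))*K^2,
    # where K is the skew-symmetric (cross-product) matrix of the unit axis
    axis = numpy.asarray(axis, dtype=float)
    axis = axis / numpy.linalg.norm(axis)
    a = numpy.radians(angle_deg)
    K = numpy.array([[0., -axis[2], axis[1]],
                     [axis[2], 0., -axis[0]],
                     [-axis[1], axis[0], 0.]])
    return numpy.eye(3) + numpy.sin(a) * K + (1. - numpy.cos(a)) * K.dot(K)

# e.g. a 90-degree rotation about z maps the x axis onto the y axis
R = rotation_from_axis_angle([0, 0, 1], 90.0)
```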

The robot tag must contain at least one kinbody which will consist of one or more rigid bodies and joints.

Also, we can add other environment objects such as walls, floor, etc.

Basic script

from openravepy import *

env = Environment()
env.SetViewer('qtcoin')     # use the qtcoin viewer
env.Load('myenv.xml')       # load the environment defined above
robot = env.GetRobots()[0]
raw_input("Press Enter to Exit...")
env.Destroy()


It is possible to define the manipulators and sensors of the robot.


<Manipulator name="leftarm">
   <joints>HRP4_L_HAND_J0 HRP4_L_HAND_J1</joints>
   <closingdirection>0 -1</closingdirection>
   <direction>1 0 0</direction>
   <!-- grasp goal with respect to the effector -->
   <Translation>0.02 0.0 -0.11</Translation>
</Manipulator>
<Manipulator name="rightarm">
   <!-- ... analogous definition for the right arm ... -->
</Manipulator>


<AttachedSensor name="sensor1">
    <translation>0 -0.2 0</translation>
    <rotationaxis>0 1 0 90</rotationaxis>
    <sensor file="sensor1.xml"/>
</AttachedSensor>
<AttachedSensor name="sensor2">
    <!-- ... analogous definition for a second sensor ... -->
</AttachedSensor>


OpenRAVE includes some models that we can use in our environments:

  <KinBody name="bottle">
    <Body name="bottle_body" type="static">
      <Translation>0.35 -0.16 0.701</Translation>
      <rotationaxis>1 0 0 90</rotationaxis>
      <Geom type="trimesh">
        <Data>models/objects/bottle.wrl 0.006</Data>
        <Render>models/objects/bottle.wrl 0.006</Render>
      </Geom>
      <mass type="custom">
        <!-- ... mass properties ... -->
      </mass>
    </Body>
  </KinBody>

Creating objects dynamically

import numpy
import openravepy

for i in range(0,10):
  body = openravepy.RaveCreateKinBody(env,'')
  boxinfo = numpy.array([[0,0,0.3, 0.1,0.15,0.3]])  # x,y,z center, then half-extents
  boxinfo[0,0] = 4.5*(numpy.random.rand()-0.5)      # random x position
  boxinfo[0,1] = 4.5*(numpy.random.rand()-0.5)      # random y position
  render = True
  body.InitFromBoxes(boxinfo,render)                # build the box geometry
  body.SetName('box%d'%i)                           # each body needs a unique name
  env.AddKinBody(body)
  print boxinfo



You can use the class we have prepared for navigation:

import numpy
import openravepy

class MyNavigationPlanning:
  def __init__(self,robot):
    self.env = robot.GetEnv()
    self.robot = robot
    self.cdmodel = openravepy.databases.convexdecomposition.ConvexDecompositionModel(self.robot)
    if not self.cdmodel.load():
      self.cdmodel.autogenerate()
    self.basemanip = openravepy.interfaces.BaseManipulation(self.robot)

  def moveTo(self,goal,envmin=[-2.5,-2.5,0],envmax=[2.5,2.5,1]):
    with self.env:
      # plan over the base: x, y translation and rotation about the z axis
      self.robot.SetAffineTranslationLimits(envmin,envmax)
      self.robot.SetActiveDOFs([],openravepy.DOFAffine.X|openravepy.DOFAffine.Y|openravepy.DOFAffine.RotationAxis,[0,0,1])
      with self.robot:
        # check that the goal configuration is collision-free
        self.robot.SetActiveDOFValues(goal)
        if self.env.CheckCollision(self.robot):
          print 'invalid goal is specified: ',goal
          return False
    print 'planning to: ',goal
    # draw the marker
    center = numpy.r_[goal[0:2],0.2]
    xaxis = 0.5*numpy.array((numpy.cos(goal[2]),numpy.sin(goal[2]),0))
    yaxis = 0.25*numpy.array((-numpy.sin(goal[2]),numpy.cos(goal[2]),0))
    h = self.env.drawlinelist(numpy.transpose(numpy.c_[center-xaxis,center+xaxis,center-yaxis,center+yaxis,center+xaxis,center+0.5*xaxis+0.5*yaxis,center+xaxis,center+0.5*xaxis-0.5*yaxis]),linewidth=5.0,colors=numpy.array((0,1,0)))
    # start to move
    if self.basemanip.MoveActiveJoints(goal=goal,maxiter=3000,steplength=0.1) is None:
      print 'failed to plan to the goal: ',goal
      return False
    print 'waiting for controller'
    self.robot.WaitForController(0)
    return True

Then just include it in your Python script:

nav = MyNavigationPlanning(robot)
nav.moveTo([1.0, 0.5, 0.0])   # hypothetical goal: x, y, rotation about z

The first two parameters are the x and y coordinates (following OpenRAVE conventions) and the third is the direction (rotation along z axis).
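For instance, a hypothetical goal one metre ahead, half a metre to the right, facing along +y (values here are illustrative only):

```python
import numpy

# goal = [x, y, rotation about z in radians] -- illustrative values
goal = [1.0, -0.5, numpy.pi / 2]
x, y, heading = goal

# unit vector the base faces once the goal is reached
facing = numpy.array([numpy.cos(heading), numpy.sin(heading)])
```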


manip = robot.SetActiveManipulator('leftarm') # set the manipulator to leftarm

ikmodel = databases.inversekinematics.InverseKinematicsModel(robot,iktype=IkParameterization.Type.Transform6D)
if not ikmodel.load():
    ikmodel.autogenerate() # generate the IK model if it is not cached yet

basemanip = interfaces.BaseManipulation(robot) # create the interface for basic manipulation

Tgoal = numpy.array([[0,0,-1,-0.4],[0,1,0,-1.3],[1,0,0,1.2],[0,0,0,1]]) #target
h = env.plot3(Tgoal[0:3,3], 10) #show the target in GUI
raw_input("Press Enter to continue...")
basemanip.MoveToHandPosition(matrices=[Tgoal]) # call motion planner with target
robot.WaitForController(0) # wait

When moving a manipulator (e.g. using MoveToHandPosition), it is also possible to get a trajectory object without making the robot move.

with robot:
    Tgoal = numpy.array([[0,0,-1,0.15],[0,1,0,-0.22],[1,0,0,0.9],[0,0,0,1]])
    h.append(env.plot3(Tgoal[0:3,3], 10))
    traj = basemanip.MoveToHandPosition(matrices=[Tgoal],execute=False,outputtrajobj=True)
    if traj is None:
        print 'Target unreachable!'

Please check the trajectory concepts from the official site.

Some of the advantages of working with trajectory objects are:

  • Merging trajectories
    traj1.Insert(traj1.GetNumWaypoints(), traj2.GetWaypoints(0, traj2.GetNumWaypoints()), False)
    which inserts all the waypoints of traj2 (2nd argument) at the end of traj1 (first argument), without overwriting (3rd argument).
  • Calculating trajectory duration
    print traj.GetDuration()
  • Get number of waypoints
    print traj.GetNumWaypoints()
  • Retiming a trajectory
    • trajectory
    • robot
    • hastimestamps
    • fmaxvelmult

If fmaxvelmult is 1, the trajectory will keep its original duration.

If fmaxvelmult > 1, the maximum velocity will increase and consequently the robot will move faster and the trajectory duration will decrease.

If fmaxvelmult < 1, the maximum velocity will decrease and consequently the robot will move slower and the trajectory duration will increase.
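In short, the duration scales with the inverse of the velocity multiplier. The sketch below is a simplified model of this relationship only; actual retiming also honours acceleration limits:

```python
def retimed_duration(original_duration, fmaxvelmult):
    # maximum velocity scaled by fmaxvelmult -> time scaled by 1/fmaxvelmult
    return original_duration / float(fmaxvelmult)

# doubling the velocity multiplier halves a 10 s trajectory to 5 s
faster = retimed_duration(10.0, 2.0)
# a multiplier of 1 keeps the original duration
same = retimed_duration(10.0, 1.0)
```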

  • Sample a trajectory
    chosenTime = 12.4
    vtrajdata = traj.Sample(chosenTime,robot.GetActiveConfigurationSpecification())
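Conceptually, Sample evaluates the trajectory at the requested time by interpolating between stored waypoints. The idea can be illustrated with plain linear interpolation between two waypoints (not the OpenRAVE implementation, just the principle):

```python
def sample_linear(w0, w1, duration, t):
    # clamp the query time into [0, duration], then interpolate linearly
    t = max(0.0, min(duration, t))
    return w0 + (w1 - w0) * t / duration

# query at 1 s of a 4 s segment that moves a joint from 0.0 to 2.0 rad
value = sample_linear(0.0, 2.0, 4.0, 1.0)
```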

After manipulating the trajectory, it can be sent to the robot using:

robot.GetController().SetPath(traj)
robot.WaitForController(0)

In OpenRAVE, virtual sensors can be simulated inside the environment. Also, data from real sensors can be used to update the virtual environment.

Virtual sensors

First, let's create an XML file with the sensor definition (e.g. mycamera.sensor.xml).

<sensor name="camera" type="BaseCamera">
  <KK>640 640 320 240</KK>
  <color>0.5 0.5 1</color>
</sensor>

Then, let's attach the sensor to the robot by editing the environment file (e.g. mytestwamcamera.env.xml).

  <Robot file="robots/barrettwam.robot.xml" name="BarrettWAM">
    <AttachedSensor name="camera">
      <translation>0 -0.2 0</translation>
      <rotationaxis>0 1 0 90</rotationaxis>
      <sensor file="mycamera.sensor.xml"/>
    </AttachedSensor>
  </Robot>

Note that we attached the sensor to one link of the robot and we determined its position and orientation with the translation and rotationaxis tags. The sensor will move as that link moves.
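Composing the link pose with the local sensor offset is a single matrix product of homogeneous transforms. A standalone numpy sketch, using the translation from the XML above and a hypothetical link pose (the rotationaxis part is ignored for brevity):

```python
import numpy

def make_pose(t):
    # homogeneous transform with identity rotation and translation t
    T = numpy.eye(4)
    T[0:3, 3] = t
    return T

# local sensor offset taken from the XML above: translation (0, -0.2, 0)
T_link_sensor = make_pose([0.0, -0.2, 0.0])
# hypothetical link pose: the link has moved 1 m along x
T_world_link = make_pose([1.0, 0.0, 0.0])

# the sensor pose in world coordinates follows the link
T_world_sensor = T_world_link.dot(T_link_sensor)
```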

Python script:

1: env = Environment()
2: env.SetViewer('qtcoin')
3: env.Load('mytestwamcamera.env.xml')
4: robot = env.GetRobots()[0]
5: sensor = env.GetSensors()[0]
6: sensor.Configure(Sensor.ConfigureCommand.PowerOn)
7: sensor.Configure(Sensor.ConfigureCommand.RenderDataOn)
8: while True:
9:     try:
10:        sensordata = sensor.GetSensorData()
11:        scipy.misc.pilutil.imsave('test.png',sensordata.imagedata)
12:        break
13:    except openrave_exception:
14:        time.sleep(0.1)
15: sensor.Configure(Sensor.ConfigureCommand.RenderDataOff)
16: sensor.Configure(Sensor.ConfigureCommand.PowerOff)
17: env.Destroy()

1: Creates an environment
2: Set qtcoin as the viewer
3: Load the created environment
4: Get the first robot
5: Get the first sensor
6: Turn on the sensor
7: Start rendering data
8-14: Wait until the sensor is ready and save the image data
15: Stop rendering data
16: Turn off the sensor
17: Destroy the environment

Camera Sensor

Real sensors

First, let's create a file with an object definition (e.g. sphere.xml):

<KinBody name="mysphere">
  <Body type="dynamic">
    <translation>0.0 0.0 1.3</translation>
    <Geom type="sphere">
      <ambientColor>0.0 0.0 1.0</ambientColor>
      <diffuseColor>0.0 0.0 1.0</diffuseColor>
    </Geom>
  </Body>
</KinBody>

Then in the Python script, we can add the object using:

body = env.ReadKinBodyXMLFile(filename='sphere.xml')
env.AddKinBody(body)

Also, we can remove the object and place a new copy of it:

env.Remove(env.GetKinBody('mysphere'))
body = env.ReadKinBodyXMLFile(filename='sphere.xml')
env.AddKinBody(body)

If we want to move the object, we can use:

T = body.GetTransform()
T[0:3,3] = numpy.array([0,0,1.2])
body.SetTransform(T)

which will set the z coordinate to 1.2[m].
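Only the translation column of the 4x4 transform is changed; the rotation block stays intact. A plain numpy illustration:

```python
import numpy

T = numpy.eye(4)                        # stand-in for body.GetTransform()
T[0:3, 3] = numpy.array([0., 0., 1.2])  # overwrite only the translation column
```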

Using the data from the real sensors, the objects can be added/removed/updated dynamically.



Mailing list

Code examples

Quick examples


Installed examples

After installing OpenRAVE, the examples can be run from the command line with openrave.py, passing the name of the example, for example:

  $ openrave.py --example movehandstraight

It is also possible to see the source code of the examples in the installation directory, for example:


More code examples

Last-modified: 2013-02-26 (Tue) 22:30:00