Pepper API for Python

Jalapyno library

This is a wrapper around the qi framework by Aldebaran for controlling Pepper, the humanoid robot, with Python 2.7.

The package provides high-level commands to move the robot, capture camera input, or run the Google Speech Recognition API for speech recognition.

It also includes a virtual robot for testing purposes.

class robot.Pepper(ip_address, port=9559)

Real robot controller

Create an instance of the real robot controller by specifying the robot's IP address and port. The IP address can be:

  • a hostname (e.g. pepper.local)
  • an IP address (can be obtained by pressing the robot's chest button)

The default port, 9559, is usually used.

Example:
>>> pepper = Pepper("pepper.local")
>>> pepper = Pepper("192.169.0.1", 1234)
ask_wikipedia()

Ask the user a question; the robot will then say the first two sentences from Wikipedia.

Warning

Autonomous life has to be turned on to process audio.
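
Example (an illustrative sketch, assuming a connected robot):
>>> pepper.autonomous_life_on()
>>> pepper.ask_wikipedia()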

autonomous_life_off()

Switch autonomous life off

Note

After switching autonomous life off, the robot stays in a resting posture. After switching it back on, the default posture is invoked.

autonomous_life_on()

Switch autonomous life on

battery_status()

Say the current battery status

blink_eyes(rgb)

Blink the eyes with a defined color

Parameters:rgb (list of integers) – Color in RGB space
Example:
>>> pepper.blink_eyes([255, 0, 0])
clean_tablet()

Clean the tablet screen and show the default tablet animation on the robot

download_file(file_name)

Download a file from the robot into the ./tmp folder in the root of the project.

Warning

The folder ./tmp has to exist!

Parameters:file_name (string) – File name with extension (or path)
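Example (illustrative; speech.wav is also the default audio name used by speech_to_text()):
>>> pepper.download_file("speech.wav")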

exploration_mode(radius)

Start exploration mode, in which the robot performs SLAM within the specified radius. It then saves a map onto the robot in its default folder.

See also

If the robot does not move, it may only need smaller safety margins. Take a look at set_security_distance().

Note

Default folder for saving maps on the robot is: /home/nao/.local/share/Explorer/

Parameters:radius (integer) – Distance in meters
Returns:image
Return type:cv2 image
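Example (an illustrative sketch; the radius value is arbitrary):
>>> map_image = pepper.exploration_mode(2)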
get_camera_frame(show)

Get camera frame from subscribed camera link.

Warning

Please subscribe to a camera before getting a camera frame. Unsubscribe when you no longer need it.

Parameters:show (bool) – Show the image when received and wait for ESC
Returns:image
Return type:cv2 image
get_depth_frame(show)

Get depth frame from subscribed camera link.

Warning

Please subscribe to a camera before getting a camera frame. Unsubscribe when you no longer need it.

Parameters:show (bool) – Show the image when received and wait for ESC
Returns:image
Return type:cv2 image
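Example (illustrative; camera name and settings follow the subscribe_camera() parameters below):
>>> pepper.subscribe_camera("camera_depth", 1, 15)
>>> depth_image = pepper.get_depth_frame(True)
>>> pepper.unsubscribe_camera()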
get_face_properties()

Gets all face properties from the tracked face in front of the robot.

It tracks:

  • Emotions (neutral, happy, surprised, angry, and sad)
  • Age
  • Gender

Note

It also has a feature that subtracts 5 years from the estimated age when talking to a female.

Note

If it cannot decide on the user's gender, it just greets them with "Hello, human being".

Warning

To get this feature working, the ALAutonomousLife process is needed. In this method it is invoked by default.

get_robot_name()

Get the current name of the robot

Returns:Name of the robot
Return type:string
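Example (illustrative):
>>> name = pepper.get_robot_name()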
hand(hand, close)

Close or open a hand

Parameters:
  • hand (string) – Which hand: left or right
  • close (boolean) – True to close, False to open
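Example (illustrative; values follow the options above):
>>> pepper.hand("left", True)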
list_behavior()

Prints all installed behaviors on the robot

listen()

Wildcard speech recognition via the internal Pepper engine

Warning

To get this working properly, you need to disable or uninstall all applications that can modify the vocabulary on the Pepper.

Note

This version relies only on time, not on internal speech processing. This means that Pepper will beep at the beginning and the end of the human's speech, but the sound between the beeps is not captured. Search for the 'Robot is listening to you ... sentence in the log console.

Example:
>>> words = pepper.listen()
Returns:Speech to text
Return type:string
listen_to(vocabulary)

Listen and match the vocabulary which is passed as a parameter.

Example:
>>> words = pepper.listen_to(["what color is the sky", "yes", "no"])
Parameters:vocabulary (list of strings) – List of phrases or words to recognize
Returns:Recognized phrase or words
Return type:string
load_map(file_name, file_path='/home/nao/.local/share/Explorer/')

Load a stored map on the robot. It looks for the map in the default location; an alternative path can be specified via file_path.

Note

Default path of stored maps is /home/nao/.local/share/Explorer/

Warning

If a file path is specified, it has to end with a trailing /.

Parameters:
  • file_name (string) – Name of the map
  • file_path (string) – Path to the map
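Example (an illustrative sketch; the map name is hypothetical):
>>> pepper.load_map("office.explo")
>>> pepper.robot_localization()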
move_forward(speed)

Move forward at a given speed

Parameters:speed (float) – Positive values move forward, negative move backwards
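Example (an illustrative sketch; speed and duration are arbitrary, and it assumes stop_moving() also halts this motion):
>>> import time
>>> pepper.move_forward(0.5)
>>> time.sleep(2)
>>> pepper.stop_moving()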
move_head_default()

Put head into default position in ‘StandInit’ pose

move_head_down()

Look down

move_head_up()

Look up

move_to_circle(clockwise, t=10)

Move the robot in a circle for a specified time

Note

This method relies only on elapsed time, not on completed circles.

>>> pepper.move_to_circle(clockwise=True, t=5)
Parameters:
  • clockwise (bool) – Specifies a direction to turn around
  • t (float) – Time in seconds (default 10)
navigate_to(x, y)

Navigate the robot in a map built by exploration mode or loaded from a previously mapped environment.

Note

Before navigation you have to run localization of the robot.

Warning

Navigation to a 2D point works only up to 3 meters from the robot.

Example:
>>> pepper.robot_localization()
>>> pepper.navigate_to(1.0, 0.3)
Parameters:
  • x (float) – X axis in meters
  • y (float) – Y axis in meters
pick_a_volunteer()

Complex movement for choosing a random person.

If the robot does not see any person, after several seconds it automatically starts turning in one direction and looking for a human. When it detects a face, it says 'I found a volunteer', raises a hand toward them, and moves forward. Then it returns to the default 'StandInit' pose.

Example:
>>> pepper.pick_a_volunteer()
play_sound(sound)

Play an mp3 or wav sound stored on Pepper

Note

This works only for sounds stored on the robot.

Parameters:sound (string) – Absolute path to the sound
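Example (illustrative; the path is hypothetical):
>>> pepper.play_sound("/home/nao/song.mp3")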
point_at(x, y, z, effector_name, frame)

Point an end-effector at a point in Cartesian space

Example:
>>> pepper.point_at(1.0, 1.0, 0.0, "RArm", 0)
Parameters:
  • x (float) – X axis in meters
  • y (float) – Y axis in meters
  • z (float) – Z axis in meters
  • effector_name (string) – LArm, RArm or Arms
  • frame (integer) – 0 = Torso, 1 = World, 2 = Robot
rename_robot()

Change current name of the robot

rest()

Get the robot into the default resting position known as Crouch

restart_robot()

Restart robot (it takes several minutes)

robot_localization()

Localize a robot in a map

Note

After loading a map into the robot or after a new exploration, the robot always needs to run self-localization. Even some movements in Cartesian space demand localization.

say(text)

Text to speech (robot internal engine)

Parameters:text (string) – Text to speech
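Example (illustrative):
>>> pepper.say("Hello, I am Pepper")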
set_awareness(state)

Turn the basic awareness of the robot on or off, e.g. looking for humans, small self-movements, etc.

Parameters:state (bool) – If True set on, if False set off
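Example (illustrative):
>>> pepper.set_awareness(False)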
set_security_distance(distance=0.05)

Set the security distance. Lower the distance for passing through doors, etc.

Warning

It is not wise to turn the security distance off. The robot may fall down stairs or bump into fragile objects.

Example:
>>> pepper.set_security_distance(0.01)
Parameters:distance (float) – Distance from the objects in meters
set_volume(volume)

Set the robot volume as a percentage

Parameters:volume (integer) – From 0 to 100 %
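Example (illustrative):
>>> pepper.set_volume(50)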
static share_localhost(folder)

Shares a folder on localhost via HTTP so that Pepper is able to reach it via the IP address of this computer.

Example:
>>> pepper.share_localhost("/Users/pepper/Desktop/web/")
Parameters:folder (string) – Root folder to share
show_map(on_robot=False, remote_ip=None)

Shows a map from the robot, based on a previously loaded map or explicit exploration of the scene. It can be viewed on the robot or on the computer via OpenCV.

Parameters:
  • on_robot (bool) – If True, show the map on the robot
  • remote_ip (string) – IP address of the remote machine (default None)

Warning

Showing map on the robot is not fully supported at the moment.

show_tablet_camera(text)

Show the image from the camera, with a speech-to-text annotation, on the robot's tablet

Note

To show the image on the robot, you will need to share a location via HTTP and save the image to ./tmp.

Warning

A camera has to be subscribed, and the ./tmp folder has to exist in the root directory, for the image to show on the robot.

Example:
>>> pepper = Pepper("10.37.1.227")
>>> pepper.share_localhost("/Users/michael/Desktop/Pepper/tmp/")
>>> pepper.subscribe_camera("camera_top", 2, 30)
>>> while True:
>>>     pepper.show_tablet_camera("camera top")
>>>     pepper.tablet_show_web("http://10.37.2.241:8000/tmp/camera.png")
Parameters:text (string) – Question of the visual question answering
shutdown_robot()

Turn off the robot completely

speech_to_text(audio_file)

Translate speech to text via Google Speech API

Parameters:audio_file (string) – Name of the audio file (default speech.wav)
Returns:Text of the speech
Return type:string
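Example (illustrative; speech.wav is the documented default file name):
>>> text = pepper.speech_to_text("speech.wav")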
stand()

Get the robot into the default standing position known as StandInit or Stand

start_animation(animation)

Start an animation stored on the robot

See also

Take a look at the animation names in the robot documentation: http://doc.aldebaran.com/2-5/naoqi/motion/alanimationplayer.html#alanimationplayer

Parameters:animation (string) – Animation name
Returns:True when animation has finished
Return type:bool
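Example (illustrative; the animation name is hypothetical, see the linked documentation for valid names):
>>> pepper.start_animation("Hey_1")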
start_behavior(behavior)

Start a behavior stored on the robot

Parameters:behavior (string) – Behavior name
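Example (illustrative; the behavior name is hypothetical, use list_behavior() to see installed ones):
>>> pepper.list_behavior()
>>> pepper.start_behavior("my_app/behavior_1")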
start_dance()

Start a robotic dance

stop_localization()

Stop localization of the robot

stop_moving()

Stop the robot's movement started by the move_around and turn_around methods

stop_sound()

Stop sound

subscribe_camera(camera, resolution, fps)

Subscribe to a camera service. You need to subscribe to a camera before you can get images from it. If you choose the depth camera, only 320x240 resolution is enabled.

Warning

Each subscription has to have a unique name, otherwise it will conflict and you will not be able to get any images because the stream returns None.

Example:
>>> pepper.subscribe_camera("camera_top", 1, 15)
>>> image = pepper.get_camera_frame(False)
>>> pepper.unsubscribe_camera()
Parameters:
  • camera (string) – camera_depth, camera_top or camera_bottom
  • resolution (integer) –
    0 = 160x120
    1 = 320x240
    2 = 640x480
    3 = 1280x960
  • fps (integer) – Frames per sec (5, 10, 15 or 30)
tablet_show_image(image_url)

Display image on robot tablet

See also

For more take a look at tablet_show_web()

Example:
>>> pepper.tablet_show_image("https://goo.gl/4Xq6Bc")
Parameters:image_url (string) – Image URL (local or web)
tablet_show_settings()

Show robot settings on the tablet

tablet_show_web(url)

Display a web page on the robot's tablet. It also works for sharing locally stored images and websites by running:

>>> pepper.share_localhost("/Users/user/Desktop/web_host/")
>>> pepper.tablet_show_web("<remote_ip>:8000/web_host/index.html")

Or

>>> pepper.tablet_show_web("https://www.ciirc.cvut.cz")

Note

Alternatively, you can simply run python -m SimpleHTTPServer in the root of the web host and serve it yourself on a specified port. The default is 8000.

Parameters:url (string) – Web URL
track_object(object_name, effector_name, diameter=0.05)

Track an object of a given type and diameter. If Face is chosen, a default diameter of 15 cm per face is used. After tracking starts, it waits until the user presses Ctrl+C.

See also

For more info about tracking modes, object names and other: http://doc.aldebaran.com/2-5/naoqi/trackers/index.html#tracking-modes

Example:
>>> pepper.track_object("Face", "Arms")

Or

>>> pepper.track_object("RedBall", "LArm", diameter=0.1)
Parameters:
  • object_name (string) – RedBall, Face, LandMark, LandMarks, People or Sound
  • effector_name (string) – LArm, RArm or Arms
  • diameter (float) – Diameter of the object (default 0.05; for Face the default is 0.15)
turn_around(speed)

Turn the robot around its axis

Parameters:speed (float) – Positive values turn to the right, negative to the left
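Example (illustrative; stop_moving() is documented to stop this motion):
>>> pepper.turn_around(0.3)
>>> pepper.stop_moving()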
turn_off_leds()

Turn off the LEDs in robot’s eyes

unsubscribe_camera()

Unsubscribe from the camera when you no longer need it

unsubscribe_effector()

Unsubscribe an end-effector after tracking an object

Note

It has to be done right after each tracking performed with the arms.

upload_file(file_name)

Upload a file to the home directory of the robot

Parameters:file_name (string) – File name with extension (or path)
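Example (illustrative; the file name is hypothetical):
>>> pepper.upload_file("dance_song.mp3")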