psychopy.visual.Rift

Overview

Rift([fovType, trackingOriginType, ...])

Class provides a display and peripheral interface for the Oculus Rift (see: https://www.oculus.com/) head-mounted display.

Rift.close()

Close the window and cleanly shutdown the LibOVR session.

Rift.size

Size property to get the dimensions of the view buffer instead of the window.

Rift.setSize(value[, log])

Rift.perfHudMode([mode])

Set the performance HUD mode.

Rift.hidePerfHud()

Hide the performance HUD.

Rift.stereoDebugHudMode(mode)

Set the debug stereo HUD mode.

Rift.setStereoDebugHudOption(option, value)

Configure stereo debug HUD guides.

Rift.userHeight

Get user height in meters (float).

Rift.eyeHeight

Eye height in meters (float).

Rift.eyeToNoseDistance

Eye to nose distance in meters (float).

Rift.eyeOffset

Eye separation in centimeters (float).

Rift.hasPositionTracking

True if the HMD is capable of tracking position.

Rift.hasOrientationTracking

True if the HMD is capable of tracking orientation.

Rift.hasMagYawCorrection

True if this HMD supports yaw drift correction.

Rift.manufacturer

Get the connected HMD's manufacturer (str).

Rift.serialNumber

Get the connected HMD's unique serial number (str).

Rift.hid

USB human interface device (HID) identifiers (int, int).

Rift.displayResolution

Get the HMD's raster display size (int, int).

Rift.displayRefreshRate

Get the HMD's display refresh rate in Hz (float).

Rift.pixelsPerTanAngleAtCenter

Horizontal and vertical pixels per tangent angle (=1) at the center of the display.

Rift.tanAngleToNDC(horzTan, vertTan)

Convert tan angles to the normalized device coordinates for the current buffer.

Rift.trackerCount

Number of attached trackers.

Rift.getTrackerInfo(trackerIdx)

Get tracker information.

Rift.headLocked

True if head locking is enabled.

Rift.trackingOriginType

Current tracking origin type (str).

Rift.recenterTrackingOrigin()

Recenter the tracking origin using the current head position.

Rift.specifyTrackingOrigin(pose)

Specify a tracking origin.

Rift.specifyTrackingOriginPosOri([pos, ori])

Specify a tracking origin using a pose and orientation.

Rift.clearShouldRecenterFlag()

Clear the 'shouldRecenter' status flag at the API level.

Rift.testBoundary(deviceType[, boundaryType])

Test if tracked devices are colliding with the play area boundary.

Rift.sensorSampleTime

Sensor sample time (float).

Rift.getDevicePose(deviceName[, absTime, ...])

Get the pose of a tracked device.

Rift.getTrackingState([absTime, latencyMarker])

Get the tracking state of the head and hands.

Rift.calcEyePoses(headPose[, originPose])

Calculate eye poses for rendering.

Rift.eyeRenderPose

Computed eye pose for the current buffer.

Rift.shouldQuit

True if the user requested the application should quit through the headset's interface.

Rift.isVisible

True if the app has focus in the HMD and is visible to the viewer.

Rift.hmdMounted

True if the HMD is mounted on the user's head.

Rift.hmdPresent

True if the HMD is present.

Rift.shouldRecenter

True if the user requested the origin be re-centered through the headset's interface.

Rift.hasInputFocus

True if the application currently has input focus.

Rift.overlayPresent

Rift.setBuffer(buffer[, clear])

Set the active draw buffer.

Rift.getPredictedDisplayTime()

Get the predicted time the next frame will be displayed on the HMD.

Rift.getTimeInSeconds()

Absolute time in seconds.

Rift.viewMatrix

The view matrix for the current eye buffer.

Rift.nearClip

Distance to the near clipping plane in meters.

Rift.farClip

Distance to the far clipping plane in meters.

Rift.projectionMatrix

Get the projection matrix for the current eye buffer.

Rift.isBoundaryVisible

True if the VR boundary is visible.

Rift.getBoundaryDimensions([boundaryType])

Get boundary dimensions.

Rift.connectedControllers

Connected controller types (list of str)

Rift.updateInputState([controllers])

Update all connected controller states.

Rift.flip([clearBuffer, drawMirror])

Submit view buffer images to the HMD's compositor for display at next V-SYNC and draw the mirror texture to the on-screen window.

Rift.multiplyViewMatrixGL()

Multiply the local eye pose transformation matrix obtained from the SDK using glMultMatrixf.

Rift.multiplyProjectionMatrixGL()

Multiply the current projection matrix obtained from the SDK using glMultMatrixf.

Rift.setRiftView([clearDepth])

Set head-mounted display view.

Rift.setDefaultView([clearDepth])

Return to default projection.

Rift.getThumbstickValues([controller, deadzone])

Get controller thumbstick values.

Rift.getIndexTriggerValues([controller, ...])

Get the values of the index triggers.

Rift.getHandTriggerValues([controller, deadzone])

Get the values of the hand triggers.

Rift.getButtons(buttons[, controller, testState])

Get button states from a controller.

Rift.getTouches(touches[, controller, testState])

Get touch states from a controller.

Rift.startHaptics(controller[, frequency, ...])

Start haptic feedback (vibration).

Rift.stopHaptics(controller)

Stop haptic feedback.

Rift.createHapticsBuffer(samples)

Create a new haptics buffer.

Rift.submitControllerVibration(controller, ...)

Submit a haptics buffer to begin controller vibration.

Rift.createPose([pos, ori])

Create a new Rift pose object (LibOVRPose).

Rift.createBoundingBox([extents])

Create a new bounding box object (LibOVRBounds).

Rift.isPoseVisible(pose)

Check if a pose object is visible to the present eye.

Details

class psychopy.visual.rift.Rift(fovType='recommended', trackingOriginType='floor', texelsPerPixel=1.0, headLocked=False, highQuality=True, monoscopic=False, samples=1, mirrorMode='default', mirrorRes=None, warnAppFrameDropped=True, autoUpdateInput=True, legacyOpenGL=True, *args, **kwargs)[source]

Class provides a display and peripheral interface for the Oculus Rift (see: https://www.oculus.com/) head-mounted display. This is a lazy-imported class; import it using its full path (from psychopy.visual.rift import Rift) when inheriting from it.

Requires PsychXR 0.2.4 to be installed. Setting winType=’glfw’ is preferred for VR applications.

Parameters:
  • fovType (str) – Field-of-view (FOV) configuration type. Using ‘recommended’ auto-configures the FOV using the recommended parameters computed by the runtime. Using ‘symmetric’ forces a symmetric FOV using optimal parameters from the SDK, this mode is required for displaying 2D stimuli. Specifying ‘max’ will use the maximum FOVs supported by the HMD.

  • trackingOriginType (str) – Specify the HMD origin type. If ‘floor’, the height of the user is added to the head tracker by LibOVR.

  • texelsPerPixel (float) – Texture pixels per display pixel at FOV center. A value of 1.0 results in 1:1 mapping. A fractional value results in a lower resolution draw buffer which may increase performance.

  • headLocked (bool) – Lock the compositor render layer in-place, disabling Asynchronous Space Warp (ASW). Enable this if you plan on computing eye poses using custom or modified head poses.

  • highQuality (bool) – Configure the compositor to use anisotropic texture sampling (4x). This reduces aliasing artifacts resulting from high frequency details particularly in the periphery.

  • nearClip (float) – Location of the near clipping plane in GL units (meters by default) from the viewer. This value can be updated after initialization.

  • farClip (float) – Location of the far clipping plane in GL units (meters by default) from the viewer. This value can be updated after initialization.

  • monoscopic (bool) – Enable monoscopic rendering mode which presents the same image to both eyes. Eye poses used will be both centered at the HMD origin. Monoscopic mode uses a separate rendering pipeline which reduces VRAM usage. When in monoscopic mode, you do not need to call ‘setBuffer’ prior to rendering (doing so will have no effect).

  • samples (int or str) – Specify the number of samples for multi-sample anti-aliasing (MSAA). When >1, multi-sampling logic is enabled in the rendering pipeline. If ‘max’ is specified, the largest number of samples supported by the platform is used. If floating point textures are used, MSAA sampling is disabled. Must be a power-of-two value.

  • mirrorMode (str) – On-screen mirror mode. Values ‘left’ and ‘right’ show rectilinear images of a single eye. Value ‘distortion’ shows the post-distortion image after being processed by the compositor. Value ‘default’ displays rectilinear images of both eyes side-by-side.

  • mirrorRes (list of int) – Resolution of the mirror texture. If None, the resolution will match the window size. The value of mirrorRes is used to define the resolution of movie frames.

  • warnAppFrameDropped (bool) – Log a warning if the application drops a frame. This occurs when the application fails to submit a frame to the compositor on-time. Application frame drops can have many causes, such as running routines in your application loop that take too long to complete. However, frame drops can happen sporadically due to driver bugs and running background processes (such as Windows Update). Use the performance HUD to help diagnose the causes of frame drops.

  • autoUpdateInput (bool) – Automatically update controller input states at the start of each frame. If False, you must manually call updateInputState before getting input values from LibOVR managed input devices.

  • legacyOpenGL (bool) – Disable ‘immediate mode’ OpenGL calls in the rendering pipeline. Specifying False maintains compatibility with existing PsychoPy stimuli drawing routines. Use True when computing transformations using some other method and supplying shaders matrices directly.

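A minimal application loop, composed only from calls documented on this page, might look like the following (a sketch; it assumes a connected headset and a working Oculus/LibOVR runtime):

from psychopy.visual.rift import Rift

hmd = Rift(fovType='recommended', trackingOriginType='floor')

while not hmd.shouldQuit:
    # compute per-eye poses from the predicted head pose
    trackingState = hmd.getTrackingState()
    hmd.calcEyePoses(trackingState.headPose.thePose)

    # render the scene once per eye buffer
    for eye in ('left', 'right'):
        hmd.setBuffer(eye)
        hmd.setRiftView()
        # draw stimuli here ...

    hmd.flip()  # submit buffers to the compositor

hmd.close()
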
_assignFlipTime(obj, attrib, format=<class 'float'>)

Helper function to assign the time of last flip to the obj.attrib

Parameters:
  • obj (dict or object) – A mutable object (usually a dict or class instance).

  • attrib (str) – Key or attribute of obj to assign the flip time to.

  • format (str, class or None) – Format in which to return time, see clock.Timestamp.resolve() for more info. Defaults to float.

_checkMatchingSizes(requested, actual)

Checks whether the requested and actual screen sizes differ. If they do, a warning is output and the window size is set to the actual size.

_cleanEditables()

Make sure there are no dead refs in the editables list

_endOfFlip(clearBuffer)

Override end of flip with custom color channel masking if required.

_getFrame(rect=None, buffer='mirror')[source]

Return the current HMD view as an image.

Parameters:
  • rect (array_like) – Rectangle [x, y, w, h] defining a sub-region of the frame to capture. This should remain None for HMD frames.

  • buffer (str, optional) – Buffer to capture. For the HMD, only ‘mirror’ is available at this time.

Returns:

Buffer pixel contents as a PIL/Pillow image object.

Return type:

Image

_getPixels(rect=None, buffer='front', includeAlpha=True, makeLum=False)

Return an array of pixel values from the current window buffer or sub-region.

Parameters:
  • rect (tuple[int], optional) – The region of the window to capture in pixel coordinates (left, bottom, width, height). If None, the whole window is captured.

  • buffer (str, optional) – Buffer to capture.

  • includeAlpha (bool, optional) – Include the alpha channel in the returned array. Default is True.

  • makeLum (bool, optional) – Convert the RGB values to luminance values. Values are rounded to the nearest integer. Default is False.

Returns:

Pixel values as a 3D array of shape (height, width, channels). If includeAlpha is False, the array will have shape (height, width, 3). If makeLum is True, the array will have shape (height, width).

Return type:

ndarray

Examples

Get the pixel values of the whole window:

pix = win._getPixels()

Get pixel values and convert to luminance and get average:

pix = win._getPixels(makeLum=True)
average = pix.mean()

_getRegionOfFrame(rect=(-1, 1, 1, -1), buffer='front', power2=False, squarePower2=False)

Deprecated function, here for historical reasons. You may now use Window._getFrame() and specify a rect to get a sub-region, just as used here.

power2 can be useful with older OpenGL versions to avoid interpolation in PatchStim. If power2 or squarePower2, it will expand rect dimensions up to the next power of two. squarePower2 uses the max dimensions. You need to check what your hardware & OpenGL supports, and call _getRegionOfFrame() as appropriate.

_prepareMonoFrame(clear=True)[source]

Prepare a frame for monoscopic rendering. This is called automatically after _startHmdFrame() if monoscopic rendering is enabled.

_renderFBO()

Perform a warp operation.

(in this case a copy operation without any warping)

_resolveMSAA()[source]

Resolve multisample anti-aliasing (MSAA). If MSAA is enabled, drawing operations are diverted to a special multisample render buffer. Pixel data must be ‘resolved’ by blitting it to the swap chain texture. If not, the texture will be blank.

Notes

You cannot perform operations on the default FBO (at frameBuffer) when MSAA is enabled. Any changes will be over-written when ‘flip’ is called.

_setCurrent()

Make this window’s OpenGL context current.

If called on a window whose context is current, the function will return immediately. This reduces the number of redundant calls if no context switch is required. If useFBO=True, the framebuffer is bound after the context switch.

_setupFrameBuffer()[source]

Override the default framebuffer init code in window.Window to use the HMD swap chain. The HMD’s swap texture and render buffer are configured here.

If multisample anti-aliasing (MSAA) is enabled, a secondary render buffer is created. Rendering is diverted to the multi-sample buffer when drawing, which is then resolved into the HMD’s swap chain texture prior to committing it to the chain. Consequently, you cannot pass the texture attached to the FBO specified by frameBuffer until the MSAA buffer is resolved. Doing so will result in a blank texture.

_setupGL()

Setup OpenGL state for this window.

_setupGamma(gammaVal)

A private method to work out how to handle gamma for this Window given that the user might have specified an explicit value, or maybe gave a Monitor.

_startHmdFrame()[source]

Prepare to render an HMD frame. This must be called every frame before flipping or setting the view buffer.

This function will wait until the HMD is ready to begin rendering before continuing. The current frame texture from the swap chain is pulled from the SDK and made available for binding.

_startOfFlip()[source]

Custom _startOfFlip for HMD rendering. This finalizes the HMD texture before diverting drawing operations back to the on-screen window. This allows flip to swap the on-screen and HMD buffers when called. This function always returns True.

Return type:

True

_updatePerfStats()[source]

Update and process performance statistics obtained from LibOVR. This should be called at the beginning of each frame to get the stats of the last frame.

This is called automatically when _waitToBeginHmdFrame() is called at the beginning of each frame.

_updateProjectionMatrix()[source]

Update or re-calculate projection matrices based on the current render descriptor configuration.

_waitToBeginHmdFrame()[source]

Wait until the HMD surfaces are available for rendering.

addEditable(editable)

Adds an editable element to the screen (something to which characters can be sent with meaning from the keyboard).

The current editable object receiving chars is Window.currentEditable

Parameters:

editable

Returns:

property ambientLight

Ambient light color for the scene [r, g, b, a]. Values range from 0.0 to 1.0. Only applicable if useLights is True.

Examples

Setting the ambient light color:

win.ambientLight = [0.5, 0.5, 0.5]

# don't do this!!!
win.ambientLight[0] = 0.5
win.ambientLight[1] = 0.5
win.ambientLight[2] = 0.5

applyEyeTransform(clearDepth=True)

Apply the current view and projection matrices.

Matrices specified by attributes viewMatrix and projectionMatrix are applied using ‘immediate mode’ OpenGL functions. Subsequent drawing operations will be affected until flip() is called.

All transformations in GL_PROJECTION and GL_MODELVIEW matrix stacks will be cleared (set to identity) prior to applying.

Parameters:

clearDepth (bool) – Clear the depth buffer. This may be required prior to rendering 3D objects.

Examples

Using a custom view and projection matrix:

# Must be called every frame since these values are reset after
# `flip()` is called!
win.viewMatrix = viewtools.lookAt( ... )
win.projectionMatrix = viewtools.perspectiveProjectionMatrix( ... )
win.applyEyeTransform()
# draw 3D objects here ...

property aspect

Aspect ratio of the current viewport (width / height).

backgroundFit

How should the background image of this window fit? Options are:

None, “None”, “none”

No scaling is applied, image is present at its pixel size unaltered.

“cover”

Image is scaled such that it covers the whole screen without changing its aspect ratio. In other words, both dimensions are evenly scaled such that its SHORTEST dimension matches the window’s LONGEST dimension.

“contain”

Image is scaled such that it is contained within the screen without changing its aspect ratio. In other words, both dimensions are evenly scaled such that its LONGEST dimension matches the window’s SHORTEST dimension.

“scaleDown”, “scale-down”, “scaledown”

If image is bigger than the window along any dimension, it will behave as if backgroundFit were “contain”. Otherwise, it will behave as if backgroundFit were None.

backgroundImage

Background image for the window, can be either a visual.ImageStim object or anything which could be passed to visual.ImageStim.image to create one. Will be drawn each time win.flip() is called, meaning it is always below all other contents of the window.

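For example, filling the window with a scaled background image (a brief sketch; ‘background.png’ is a placeholder file name):

win.backgroundImage = 'background.png'  # anything ImageStim.image accepts
win.backgroundFit = 'cover'  # scale to fill the window, keeping aspect ratio
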
blendMode

Blend mode to use.

calcEyePoses(headPose, originPose=None)[source]

Calculate eye poses for rendering.

This function calculates the eye poses to define the viewpoint transformation for each eye buffer. Upon starting a new frame, the application loop is halted until this function is called and returns.

Once this function returns, setBuffer may be called and frame rendering can commence. The computed eye pose for the selected buffer is accessible through the eyeRenderPose attribute after calling setBuffer(). If monoscopic=True, the eye poses are set to the head pose.

The source data specified to headPose can originate from the tracking state retrieved by calling getTrackingState(), or from other sources. If a custom head pose is specified (for instance, from a motion tracker), you must ensure head-locking is enabled to prevent the ASW feature of the compositor from engaging. Furthermore, you must specify sensor sample time for motion-to-photon calculation derived from the sample time of the custom tracking source.

Parameters:
  • headPose (LibOVRPose) – Head pose to use.

  • originPose (LibOVRPose, optional) – Origin of tracking in the VR scene.

Examples

Get the tracking state and calculate the eye poses:

# get tracking state at predicted mid-frame time
trackingState = hmd.getTrackingState()

# get the head pose from the tracking state
headPose = trackingState.headPose.thePose
hmd.calcEyePoses(headPose)  # compute eye poses

# begin rendering to each eye
for eye in ('left', 'right'):
    hmd.setBuffer(eye)
    hmd.setRiftView()
    # draw stuff here ...

Using a custom head pose (make sure headLocked=True before doing this):

headPose = Rift.createPose((0., 1.75, 0.))
hmd.calcEyePoses(headPose)  # compute eye poses

callOnFlip(function, *args, **kwargs)

Call a function immediately after the next flip() command.

The first argument should be the function to call, the following args should be used exactly as you would for your normal call to the function (can use ordered arguments or keyword arguments as normal).

e.g. If you have a function that you would normally call like this:

pingMyDevice(portToPing, channel=2, level=0)

then you could call callOnFlip() to have the function call synchronized with the frame flip like this:

win.callOnFlip(pingMyDevice, portToPing, channel=2, level=0)

clearAutoDraw()

Remove all autoDraw components, meaning they get autoDraw set to False and are not added to any list (as in .stashAutoDraw)

clearBuffer(color=True, depth=False, stencil=False)

Clear the present buffer (to which you are currently drawing) without flipping the window.

Useful if you want to generate movie sequences from the back buffer without actually taking the time to flip the window.

Set the window color prior to clearing to specify the color that the color buffer is cleared to. By default, the depth buffer is cleared to a value of 1.0.

Parameters:
  • color (bool) – Clear the color buffer.

  • depth (bool) – Clear the depth buffer.

  • stencil (bool) – Clear the stencil buffer.

Examples

Clear the color buffer to a specified color:

win.color = (1, 0, 0)
win.clearBuffer(color=True)

Clear only the depth buffer; depthMask must be True or else this will have no effect. The depth mask is usually True by default, but may change:

win.depthMask = True
win.clearBuffer(color=False, depth=True, stencil=False)

clearShouldRecenterFlag()[source]

Clear the ‘shouldRecenter’ status flag at the API level.

close()[source]

Close the window and cleanly shutdown the LibOVR session.

property color

Set the color of the window.

This command sets the color that the blank screen will have on the next clear operation. As a result it effectively takes TWO flip() operations to become visible (the first uses the color to create the new screen, the second presents that screen to the viewer). For this reason, if you want to change the background color of the window “on the fly”, it might be a better idea to draw a Rect that fills the whole window with the desired Rect.fillColor attribute. That’ll show up on the first flip.

See other stimuli (e.g. GratingStim.color) for more info on the color attribute which essentially works the same on all PsychoPy stimuli.

See Color spaces for further information about the ways to specify colors and their various implications.

property colorSpace

The name of the color space currently being used

Value should be: a string or None

For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).

Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to specify colorSpace before setting the color. Example:

# A light green text
stim = visual.TextStim(win, 'Color me!',
                       color=(0, 1, 0), colorSpace='rgb')

# An almost-black text
stim.colorSpace = 'rgb255'

# Make it light green again
stim.color = (128, 255, 128)

property connectedControllers

Connected controller types (list of str)

property contentScaleFactor

Scaling factor (float) to use when drawing to the backbuffer to convert framebuffer to client coordinates.

property convergeOffset

Convergence offset from monitor in centimeters.

This value corresponds to the offset from the screen plane at which the convergence plane (or point, for toe-in projections) is set. Positive offsets move the plane farther from the viewer; negative offsets move it nearer. This value is used by setPerspectiveView and should be set before calling it to take effect.

Notes

  • This value is only applicable for setToeIn and setOffAxisView.

coordToRay(screenXY)

Convert a screen coordinate to a direction vector.

Takes a screen/window coordinate and computes a vector which projects a ray from the viewpoint through it (line-of-sight). Any 3D point touching the ray will appear at the screen coordinate.

Uses the current viewport and projectionMatrix to calculate the vector. The vector is in eye-space, where the origin of the scene is centered at the viewpoint and the forward direction aligned with the -Z axis. A ray of (0, 0, -1) results from a point at the very center of the screen assuming symmetric frustums.

Note that if you are using a flipped/mirrored view, you must invert your supplied screen coordinates (screenXY) prior to passing them to this function.

Parameters:

screenXY (array_like) – X, Y screen coordinate. Must be in units of the window.

Returns:

Normalized direction vector [x, y, z].

Return type:

ndarray

Examples

Getting the direction vector between the mouse cursor and the eye:

mx, my = mouse.getPos()
dir = win.coordToRay((mx, my))

Set the position of a 3D stimulus object using the mouse, constrained to a plane. The object origin will always be at the screen coordinate of the mouse cursor:

# the eye position in the scene is defined by a rigid body pose
win.viewMatrix = camera.getViewMatrix()
win.applyEyeTransform()

# get the mouse location and calculate the intercept
mx, my = mouse.getPos()
ray = win.coordToRay([mx, my])
result = intersectRayPlane(   # from mathtools
    orig=camera.pos,
    dir=camera.transformNormal(ray),
    planeOrig=(0, 0, -10),
    planeNormal=(0, 1, 0))

# if result is `None`, there is no intercept
if result is not None:
    pos, dist = result
    objModel.thePose.pos = pos
else:
    objModel.thePose.pos = (0, 0, -10)  # plane origin

If you don’t define the position of the viewer with a RigidBodyPose, you can obtain the appropriate eye position and rotate the ray by doing the following:

pos = numpy.linalg.inv(win.viewMatrix)[:3, 3]
ray = win.coordToRay([mx, my]).dot(win.viewMatrix[:3, :3])
# then ...
result = intersectRayPlane(
    orig=pos,
    dir=ray,
    planeOrig=(0, 0, -10),
    planeNormal=(0, 1, 0))

static createBoundingBox(extents=None)[source]

Create a new bounding box object (LibOVRBounds).

LibOVRBounds represents an axis-aligned bounding box with dimensions defined by extents. Bounding boxes are primarily used for visibility testing and culling by PsychXR. The dimensions of the bounding box can be specified explicitly, or fitted to meshes by passing vertices to the fit() method after initialization.

This function exposes the LibOVRBounds class so you don’t need to access it by importing psychxr.

Parameters:

extents (array_like or None) – Extents of the bounding box as (mins, maxs). Where mins (x, y, z) is the minimum and maxs (x, y, z) is the maximum extents of the bounding box in world units. If None is specified, the returned bounding box will be invalid. The bounding box can be later constructed using the fit() method or the extents attribute.

Returns:

Object representing a bounding box.

Return type:

psychxr.libovr.LibOVRBounds

Examples

Add a bounding box to a pose:

# create a 1 meter cube bounding box centered with the pose
bbox = Rift.createBoundingBox(((-.5, -.5, -.5), (.5, .5, .5)))

# create a pose and attach the bounding box
modelPose = Rift.createPose()
modelPose.boundingBox = bbox

Perform visibility culling on the pose using the bounding box by using the isPoseVisible() method:

if hmd.isPoseVisible(modelPose):
    modelPose.draw()

static createHapticsBuffer(samples)[source]

Create a new haptics buffer.

A haptics buffer is an object which stores vibration amplitude samples for playback through the Touch controllers. To play a haptics buffer, pass it to submitControllerVibration().

Parameters:

samples (array_like) – 1-D array of amplitude samples, ranging from 0 to 1. Values outside of this range will be clipped. The buffer must not exceed HAPTICS_BUFFER_SAMPLES_MAX samples; any additional samples will be dropped.

Returns:

Haptics buffer object.

Return type:

LibOVRHapticsBuffer

Notes

Methods startHaptics and stopHaptics cannot be used interchangeably with this function.

Examples

Create a haptics buffer where vibration amplitude ramps down over the course of playback:

samples = np.linspace(
    1.0, 0.0, num=HAPTICS_BUFFER_SAMPLES_MAX-1, dtype=np.float32)
hbuff = Rift.createHapticsBuffer(samples)

# vibrate right Touch controller
hmd.submitControllerVibration(CONTROLLER_TYPE_RTOUCH, hbuff)

static createPose(pos=(0.0, 0.0, 0.0), ori=(0.0, 0.0, 0.0, 1.0))[source]

Create a new Rift pose object (LibOVRPose).

LibOVRPose is used to represent a rigid body pose mainly for use with the PsychXR’s LibOVR module. There are several methods associated with the object to manipulate the pose.

This function exposes the LibOVRPose class so you don’t need to access it by importing psychxr.

Parameters:
  • pos (tuple, list, or ndarray of float) – Position vector/coordinate (x, y, z).

  • ori (tuple, list, or ndarray of float) – Orientation quaternion (x, y, z, w).

Returns:

Object representing a rigid body pose for use with LibOVR.

Return type:

LibOVRPose

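Examples

Create a pose two meters in front of the user at about eye height, and attach a bounding box for visibility culling (a minimal sketch):

modelPose = Rift.createPose(pos=(0., 1.5, -2.))
modelPose.boundingBox = Rift.createBoundingBox(
    ((-.5, -.5, -.5), (.5, .5, .5)))
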
property cullFace

True if face culling is enabled.

property cullFaceMode

Face culling mode, either back, front or both.

property currentEditable

The editable (Text?) object that currently has key focus

property depthFunc

Depth test comparison function for rendering.

property depthMask

True if depth masking is enabled. Writing to the depth buffer will be disabled.

property depthTest

True if depth testing is enabled.

classmethod dispatchAllWindowEvents()

Dispatches events for all pyglet windows. Used by iohub 2.0 psychopy kb event integration.

property displayRefreshRate

Get the HMD’s display refresh rate in Hz (float).

property displayResolution

Get the HMD’s raster display size (int, int).

property draw3d

True if 3D drawing is enabled on this window.

property eyeHeight

Eye height in meters (float).

property eyeOffset

Eye separation in centimeters (float).

property eyeRenderPose

Computed eye pose for the current buffer. Only valid after calling calcEyePoses().

property eyeToNoseDistance

Eye to nose distance in meters (float).

Examples

Generate your own eye poses. These are used when calcEyePoses() is called:

leftEyePose = Rift.createPose((-self.eyeToNoseDistance, 0., 0.))
rightEyePose = Rift.createPose((self.eyeToNoseDistance, 0., 0.))

Get the inter-axial separation (IAS) reported by LibOVR:

iad = self.eyeToNoseDistance * 2.0

property farClip

Distance to the far clipping plane in meters.

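Both clipping distances can be changed after the window has been created, for example (a brief sketch):

hmd.nearClip = 0.1  # meters
hmd.farClip = 100.0  # extend the far draw distance
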
property firmwareVersion

Get the firmware version of the active HMD (int, int).

flip(clearBuffer=True, drawMirror=True)[source]

Submit view buffer images to the HMD’s compositor for display at next V-SYNC and draw the mirror texture to the on-screen window. This must be called every frame.

Parameters:
  • clearBuffer (bool) – Clear the frame after flipping.

  • drawMirror (bool) – Draw the HMD mirror texture from the compositor to the window.

Returns:

Absolute time in seconds when control was given back to the application. The difference between the current and previous values should be very close to 1 / refreshRate of the HMD.

Return type:

float

Notes

  • The HMD compositor and application are asynchronous, therefore there is no guarantee that the timestamp returned by ‘flip’ corresponds to the exact vertical retrace time of the HMD.

fps()

Report the frames per second since the last call to this function (or since the window was created if this is first call)

property frameBufferSize

Size of the framebuffer in pixels (w, h).

property frontFace

Face winding order to define front, either ccw or cw.

property fullscr

Return whether the window is in fullscreen mode.

gamma

Set the monitor gamma for linearization.

Warning

Don’t use this if using a Bits++ or Bits#, as it overrides monitor settings.

gammaRamp

Sets the hardware CLUT using a specified 3xN array of floats ranging between 0.0 and 1.0.

Array must have a number of rows equal to 2 ^ max(bpc).

getActualFrameRate(nIdentical=10, nMaxFrames=100, nWarmUpFrames=10, threshold=1, infoMsg=None)

Measures the actual frames-per-second (FPS) for the screen.

This is done by waiting (for a max of nMaxFrames) until nIdentical frames in a row have identical frame times (std dev below threshold ms).

Parameters:
  • nIdentical (int, optional) – The number of consecutive frames that will be evaluated. Higher –> greater precision. Lower –> faster.

  • nMaxFrames (int, optional) – The maximum number of frames to wait for a matching set of nIdentical.

  • nWarmUpFrames (int, optional) – The number of frames to display before starting the test (this is in place to allow the system to settle after opening the Window for the first time).

  • threshold (int or float, optional) – The threshold for the std deviation (in ms) before the set are considered a match.

Returns:

Frame rate in frames per second (FPS). If there is no such sequence of identical frames, a warning is logged and None will be returned.

Return type:

float or None

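For example, estimating the refresh rate and deriving the frame duration from it (a brief sketch):

fps = win.getActualFrameRate(nIdentical=10, threshold=1)
if fps is not None:
    frameDur = 1.0 / fps  # seconds per frame
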
getBoundaryDimensions(boundaryType='PlayArea')[source]

Get boundary dimensions.

Parameters:

boundaryType (str) – Boundary type, can be ‘PlayArea’ or ‘Outer’.

Returns:

Dimensions of the boundary in meters [x, y, z].

Return type:

ndarray

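For example (a brief sketch; assumes the user has configured a play area boundary):

dims = hmd.getBoundaryDimensions('PlayArea')  # [x, y, z] in meters
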
getButtons(buttons, controller='Xbox', testState='continuous')[source]

Get button states from a controller.

Returns True if any names specified to buttons reflect testState since the last updateInputState or flip call. If multiple button names are specified as a list or tuple to buttons, multiple button states are tested, returning True if all the buttons presently satisfy the testState. Note that not all controllers available share the same buttons. If a button is not available, this function will always return False.

Parameters:
  • buttons (list of str or str) – Buttons to test. Valid button names are ‘A’, ‘B’, ‘RThumb’, ‘RShoulder’, ‘X’, ‘Y’, ‘LThumb’, ‘LShoulder’, ‘Up’, ‘Down’, ‘Left’, ‘Right’, ‘Enter’, ‘Back’, ‘VolUp’, ‘VolDown’, and ‘Home’. Names can be passed as a list to test multiple button states.

  • controller (str) – Controller name.

  • testState (str) –

    State to test. Valid values are:

    • continuous - Button is presently being held down.

    • rising or pressed - Button has been pressed since the last update.

    • falling or released - Button has been released since the last update.

Returns:

Button state, and the timestamp in seconds at which the controller was polled.

Return type:

tuple of bool, float

Examples

Check if the ‘Enter’ button on the Oculus remote was released:

isPressed, tsec = hmd.getButtons(['Enter'], 'Remote', 'falling')

Check if the ‘A’ button was pressed on the touch controller:

isPressed, tsec = hmd.getButtons(['A'], 'Touch', 'pressed')

getContentScaleFactor()

Get the scaling factor required for scaling correctly on high-DPI displays.

If the returned value is 1.0, no scaling needs to be applied to objects drawn on the backbuffer. A value >1.0 indicates that the backbuffer is larger than the reported client area, requiring points to be scaled to maintain constant size across similarly sized displays. In other words, the scaling required to convert framebuffer to client coordinates.

Returns:

Scaling factor to be applied along both horizontal and vertical dimensions.

Return type:

float

Examples

Get the size of the client area:

clientSize = win.frameBufferSize / win.getContentScaleFactor()

Get the framebuffer size from the client size:

frameBufferSize = win.clientSize * win.getContentScaleFactor()

Convert client (window) to framebuffer pixel coordinates (eg., a mouse coordinate, vertices, etc.):

# `mousePosXY` is an array ...
frameBufferXY = mousePosXY * win.getContentScaleFactor()
# you can also use the attribute ...
frameBufferXY = mousePosXY * win.contentScaleFactor

Notes

  • This value is only valid after the window has been fully realized.

getDevicePose(deviceName, absTime=None, latencyMarker=False)[source]

Get the pose of a tracked device. For head (HMD) and hand poses (Touch controllers) it is better to use getTrackingState() instead.

Parameters:
  • deviceName (str) – Name of the device. Valid device names are: ‘HMD’, ‘LTouch’, ‘RTouch’, ‘Touch’, ‘Object0’, ‘Object1’, ‘Object2’, and ‘Object3’.

  • absTime (float, optional) – Absolute time in seconds the device pose refers to. If not specified, the predicted time is used.

  • latencyMarker (bool) – Insert a marker for motion-to-photon latency calculation. Should only be True if the HMD pose is being used to compute eye poses.

Returns:

Pose state object. None if device tracking was lost.

Return type:

LibOVRPoseState or None

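For example, getting the pose of the left Touch controller while checking that tracking was not lost (a minimal sketch; posOri is the position/orientation pair of a LibOVRPose):

poseState = hmd.getDevicePose('LTouch')
if poseState is not None:  # None if tracking was lost
    pos, ori = poseState.thePose.posOri
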
getFutureFlipTime(targetTime=0, clock=None)

The expected time of the next screen refresh. This is currently calculated as win._lastFrameTime + refreshInterval

Parameters:
  • targetTime (float) – The delay from now for which you want the flip time. 0 will give the earliest flip time we can achieve; 0.15 will give the scheduled flip time that gets as close to 150 ms as possible.

  • clock (None, 'ptb', 'now' or any Clock object) – If ‘ptb’ then the time returned is compatible with ptb.GetSecs()

  • verbose (bool) – Set to True to view the calculations along the way

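For example, getting the next flip time on the psychtoolbox clock (a brief sketch):

nextFlip = win.getFutureFlipTime(clock='ptb')  # comparable to ptb.GetSecs()
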
getHandTriggerValues(controller='Touch', deadzone=False)[source]

Get the values of the hand triggers.

Parameters:
  • controller (str) – Name of the controller to get hand trigger values from. Possible values for controller are ‘Touch’, ‘RTouch’, ‘LTouch’, ‘Object0’, ‘Object1’, ‘Object2’, and ‘Object3’; these are the only devices with hand triggers that the SDK manages. For additional controllers, use PsychoPy’s built-in event or hardware support.

  • deadzone (bool) – Apply the deadzone to hand trigger values. This pre-filters stick input to apply a dead-zone where 0.0 will be returned if the trigger returns a displacement within 0.2746.

Returns:

Left and right hand trigger values. Displacements are represented as a tuple of two floats giving the left and right values, which range from 0.0 to 1.0. The returned values reflect the controller state since the last updateInputState or flip call.

Return type:

tuple

getIndexTriggerValues(controller='Xbox', deadzone=False)[source]

Get the values of the index triggers.

Parameters:
  • controller (str) – Name of the controller to get index trigger values from. Possible values for controller are ‘Xbox’, ‘Touch’, ‘RTouch’, ‘LTouch’, ‘Object0’, ‘Object1’, ‘Object2’, and ‘Object3’; these are the only devices with index triggers that the SDK manages. For additional controllers, use PsychoPy’s built-in event or hardware support.

  • deadzone (bool) – Apply the deadzone to index trigger values. This pre-filters stick input to apply a dead-zone where 0.0 will be returned if the trigger returns a displacement within 0.2746.

Returns:

Left and right index trigger values. Displacements are represented as a tuple of two floats giving the left and right values, which range from 0.0 to 1.0. The returned values reflect the controller state since the last updateInputState or flip call.

Return type:

tuple of float

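For example, treating a firm squeeze of the right index trigger as a response (a brief sketch; the 0.75 threshold is arbitrary):

leftVal, rightVal = hmd.getIndexTriggerValues('Touch')
if rightVal > 0.75:
    pass  # handle the response here
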
getMovieFrame(buffer='mirror')[source]

Capture the current HMD frame as an image.

Saves to stack for saveMovieFrames(). As of v1.81.00 this also returns the frame as a PIL image.

This can be done at any time (usually after a flip() command).

Frames are stored in memory until a saveMovieFrames() command is issued. You can issue getMovieFrame() as often as you like and then save them all in one go when finished.

For HMD frames, you should call getMovieFrame after calling flip to ensure that the mirror texture saved reflects what is presently being shown on the HMD. Note that this function is somewhat slow and may impact performance. Only call this function when you’re not collecting experimental data.

Parameters:

buffer (str, optional) – Buffer to capture. For the HMD, only ‘mirror’ is available at this time.

Returns:

Buffer pixel contents as a PIL/Pillow image object.

Return type:

Image

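For example, capturing the mirror texture right after a flip and writing it to disk (a brief sketch; ‘hmdView.png’ is a placeholder file name):

hmd.flip()
hmd.getMovieFrame()  # grabs the 'mirror' buffer by default
hmd.saveMovieFrames('hmdView.png')
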
getMsPerFrame(nFrames=60, showVisual=False, msg='', msDelay=0.0)

Assesses the monitor refresh rate (average, median, SD) under current conditions, over at least 60 frames.

Records time for each refresh (frame) for n frames (at least 60), while displaying an optional visual. The visual is just eye-candy to show that something is happening when assessing many frames. You can also give it text to display instead of a visual, e.g., msg='(testing refresh rate...)'; setting msg implies showVisual == False.

To simulate refresh rate under cpu load, you can specify a time to wait within the loop prior to doing the flip(). If 0 < msDelay < 100, wait for that long in ms.

Returns timing stats (in ms) of:

  • average time per frame, for all frames

  • standard deviation of all frames

  • median, as the average of 12 frame times around the median (~monitor refresh rate)

Author:
  • 2010 written by Jeremy Gray

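For example (a brief sketch; this assumes the three stats are returned as a tuple in the order listed above):

avgMs, sdMs, medianMs = win.getMsPerFrame(nFrames=60)
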
getPredictedDisplayTime()[source]

Get the predicted time the next frame will be displayed on the HMD. The returned time is referenced to the clock LibOVR is using.

Returns:

Absolute frame mid-point time for the given frame index in seconds.

Return type:

float

getThumbstickValues(controller='Xbox', deadzone=False)[source]

Get controller thumbstick values.

Parameters:
  • controller (str) – Name of the controller to get thumbstick values from. Possible values for controller are ‘Xbox’, ‘Touch’, ‘RTouch’, ‘LTouch’, ‘Object0’, ‘Object1’, ‘Object2’, and ‘Object3’; these are the only devices with thumbsticks that the SDK manages. For additional controllers, use PsychoPy’s built-in event or hardware support.

  • deadzone (bool) – Apply the deadzone to thumbstick values. This pre-filters stick input to apply a dead-zone where 0.0 will be returned if the sticks return a displacement within -0.2746 to 0.2746.

Returns:

Left and right, X and Y thumbstick values. Axis displacements are represented in each tuple by floats ranging from -1.0 (full left/down) to 1.0 (full right/up). The returned values reflect the controller state since the last updateInputState or flip call.

Return type:

tuple

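For example (a brief sketch; this assumes the left and right sticks are returned as two (x, y) tuples):

(leftX, leftY), (rightX, rightY) = hmd.getThumbstickValues(
    'Touch', deadzone=True)
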
getTimeInSeconds()[source]

Absolute time in seconds. The returned time is referenced to the clock LibOVR is using.

Returns:

Time in seconds.

Return type:

float

getTouches(touches, controller='Touch', testState='continuous')[source]

Get touch states from a controller.

Returns True if any names specified to touches reflect testState since the last updateInputState or flip call. If multiple button names are specified as a list or tuple to touches, multiple button states are tested, returning True if all the touches presently satisfy the testState. Note that not all controllers available support touches. If a touch is not supported or available, this function will always return False.

Special states can be used for basic gesture recognition, such as ‘LThumbUp’, ‘RThumbUp’, ‘LIndexPointing’, and ‘RIndexPointing’.

Parameters:
  • touches (list of str or str) – Touches to test. Valid touch names are ‘A’, ‘B’, ‘RThumb’, ‘RThumbRest’, ‘RThumbUp’, ‘RIndexPointing’, ‘LThumb’, ‘LThumbRest’, ‘LThumbUp’, ‘LIndexPointing’, ‘X’, and ‘Y’. Names can be passed as a list to test multiple touch states.

  • controller (str) – Controller name.

  • testState (str) –

    State to test. Valid values are:

    • continuous - User is touching something on the controller.

    • rising or pressed - User began touching something since the last call to updateInputState.

    • falling or released - User stopped touching something since the last call to updateInputState.

Returns:

Touch state, and the timestamp in seconds at which the controller was polled.

Return type:

tuple of bool, float

Examples

Check if the user is touching the ‘A’ button on the touch controller:

isTouched, tsec = hmd.getTouches(['A'], 'Touch', 'continuous')

Check if the user began pointing with the right index finger since the last update:

isPointing, tsec = hmd.getTouches(['RIndexPointing'], 'Touch', 'rising')

getTrackerInfo(trackerIdx)[source]

Get tracker information.

Parameters:

trackerIdx (int) – Tracker index, ranging from 0 to trackerCount - 1.

Returns:

Object containing tracker information.

Return type:

LibOVRTrackerInfo

Raises:

IndexError – Raised when trackerIdx is out of range.

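For example, iterating over all attached trackers (a brief sketch; see the PsychXR documentation for the attributes LibOVRTrackerInfo exposes):

for i in range(hmd.trackerCount):
    trackerInfo = hmd.getTrackerInfo(i)
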
getTrackingState(absTime=None, latencyMarker=True)[source]

Get the tracking state of the head and hands.

Calling this function retrieves the tracking state of the head (HMD) and hands at absTime from the LibOVR runtime. The returned object is a LibOVRTrackingState instance with poses, motion derivatives (i.e. linear and angular velocity/acceleration), and tracking status flags accessible through its attributes.

The pose states of the head and hands are available by accessing the headPose and handPoses attributes, respectively.

Parameters:
  • absTime (float, optional) – Absolute time the tracking state refers to. If not specified, the predicted display time is used.

  • latencyMarker (bool, optional) – Set a latency marker upon getting the tracking state. This is used for motion-to-photon calculations.

Returns:

Tracking state object.

Return type:

LibOVRTrackingState

See also

getPredictedDisplayTime

Time at mid-frame for the current frame index.

Examples

Get the tracked head pose and use it to calculate render eye poses:

# get tracking state at predicted mid-frame time
absTime = hmd.getPredictedDisplayTime()
trackingState = hmd.getTrackingState(absTime)

# get the head pose from the tracking state
headPose = trackingState.headPose.thePose
hmd.calcEyePoses(headPose)  # compute eye poses

Get linear/angular velocity and acceleration vectors of the right touch controller:

# right hand is the second value (index 1) at `handPoses`
rightHandState = trackingState.handPoses[1]  # is `LibOVRPoseState`

# access `LibOVRPoseState` fields to get the data
linearVel = rightHandState.linearVelocity  # m/s
angularVel = rightHandState.angularVelocity  # rad/s
linearAcc = rightHandState.linearAcceleration  # m/s^2
angularAcc = rightHandState.angularAcceleration  # rad/s^2

# extract components like this if desired
vx, vy, vz = linearVel
ax, ay, az = angularVel

The above is useful for physics simulations, where one can compute the magnitude and direction of a force applied to a virtual object.

It’s often the case that object tracking becomes unreliable for some reason, for instance, if it becomes occluded and is no longer visible to the sensors. In such cases, the reported pose state is invalid and may not be useful. You can check if the position and orientation of a tracked object is invalid using flags associated with the tracking state. This shows how to check if head position and orientation tracking was valid when sampled:

if trackingState.positionValid and trackingState.orientationValid:
    print('Tracking valid.')

It’s up to the programmer to determine what to do in such cases.

Get the calibrated origin used for tracking during the sample period of the tracking state:

calibratedOrigin = trackingState.calibratedOrigin
calibPos, calibOri = calibratedOrigin.posOri

Time integrate a tracking state. This extrapolates the pose over time given the present computed motion derivatives. The contrived example below shows how to implement head pose forward prediction:

# get current system time
absTime = hmd.getTimeInSeconds()

# get the elapsed time from `absTime` to predicted v-sync time,
# again this is an example, you would usually pass predicted time to
# `getTrackingState` directly.
dt = hmd.getPredictedDisplayTime() - absTime

# get the tracking state for the current time, poses will lag where
# they are expected at predicted time by `dt` seconds
trackingState = hmd.getTrackingState(absTime)

# time integrate a pose by `dt`
headPoseState = trackingState.headPose
headPosePredicted = headPoseState.timeIntegrate(dt)

# calc eye poses with the predicted head pose; this is a custom pose, so
# head-locking should be enabled!
hmd.calcEyePoses(headPosePredicted)

The resulting head pose is usually very close to what getTrackingState would return if the predicted time was used. Simple forward prediction with time integration becomes increasingly unstable as the prediction interval increases. Under normal circumstances, let the runtime handle forward prediction by using the pose states returned at the predicted display time. If you plan on doing your own forward prediction, you need to enable head-locking, clamp the prediction interval, and apply some sort of smoothing to keep the image as stable as possible.

property hasInputFocus

True if the application currently has input focus.

property hasMagYawCorrection

True if this HMD supports yaw drift correction.

property hasOrientationTracking

True if the HMD is capable of tracking orientation.

property hasPositionTracking

True if the HMD is capable of tracking position.

property headLocked

True if head locking is enabled.

property hid

USB human interface device (HID) identifiers (int, int).

hideMessage()

Remove any message that is currently being displayed.

hidePerfHud()[source]

Hide the performance HUD.

hidePilotingIndicator()

Hide the visual indicator which shows we are in piloting mode.

property hmdMounted

True if the HMD is mounted on the user’s head.

property hmdPresent

True if the HMD is present.

property isBoundaryVisible

True if the VR boundary is visible.

isPoseVisible(pose)[source]

Check if a pose object is visible to the present eye. This method can be used to perform visibility culling to avoid executing draw commands for objects that fall outside the FOV for the current eye buffer.

If the pose has a valid bounding box object attached, this function will return False if all the box points fall completely to one side of the view frustum. If boundingBox is None, the point at pos is checked, returning False if it falls outside of the frustum. If the present buffer is not ‘left’ or ‘right’, this function will always return False.

Parameters:

pose (LibOVRPose) – Pose to test for visibility.

Returns:

True if the pose’s bounding box or origin is within the view frustum.

Return type:

bool

property isVisible

True if the app has focus in the HMD and is visible to the viewer.

property lights

Scene lights.

This is specified as an array of psychopy.visual.LightSource objects. If a single value is given, it will be converted to a list before setting. Set useLights to True before rendering to enable lighting/shading on subsequent objects. If lights is None or an empty list, no lights will be enabled if useLights=True; however, the scene ambient light set with ambientLight will still be used.

Examples

Create a directional light source and add it to scene lights:

dirLight = gltools.LightSource((0., 1., 0.), lightType='directional')
win.lights = dirLight  # `win.lights` will be a list when accessed!

Multiple lights can be specified by passing values as a list:

myLights = [gltools.LightSource((0., 5., 0.)),
            gltools.LightSource((-2., -2., 0.))]
win.lights = myLights

logOnFlip(msg, level, obj=None)

Send a log message that should be time-stamped at the next flip() command.

Parameters:
  • msg (str) – The message to be logged.

  • level (int) – The level of importance for the message.

  • obj (object, optional) – The python object that might be associated with this message if desired.

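For example, time-stamping a stimulus-onset message at the next flip (a brief sketch):

from psychopy import logging

win.logOnFlip('stimulus onset', level=logging.EXP)
win.flip()
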
property manufacturer

Get the connected HMD’s manufacturer (str).

property mouseVisible

Returns the visibility of the mouse cursor.

multiFlip(flips=1, clearBuffer=True)

Flip multiple times while maintaining the display constant. Use this method for precise timing.

WARNING: This function should not be used.

Parameters:
  • flips (int, optional) – The number of monitor frames to flip. Floats will be rounded to integers, and a warning will be emitted. Window.multiFlip(flips=1) is equivalent to Window.flip(). Defaults to 1.

  • clearBuffer (bool, optional) – Whether to clear the screen after the last flip. Defaults to True.

Examples

Example of using multiFlip:

# Draws myStim1 to buffer
myStim1.draw()
# Show stimulus for 6 frames (100 ms at 60 Hz)
myWin.multiFlip(clearBuffer=False, flips=6)
# Draw myStim2 "on top of" myStim1
# (because buffer was not cleared above)
myStim2.draw()
# Show this for 2 frames (~33 ms at 60 Hz)
myWin.multiFlip(flips=2)
# Show blank screen for 3 frames (buffer was cleared above)
myWin.multiFlip(flips=3)

multiplyProjectionMatrixGL()[source]

Multiply the current projection matrix obtained from the SDK using glMultMatrixf. The projection matrix used depends on the current eye buffer set by setBuffer().

multiplyViewMatrixGL()[source]

Multiply the local eye pose transformation matrix obtained from the SDK using glMultMatrixf. The matrix used depends on the current eye buffer set by setBuffer().

Return type:

None

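A sketch of how the two calls might be used together under legacy (immediate mode) OpenGL, assuming legacyOpenGL=True and pyglet’s GL bindings:

import pyglet.gl as GL

GL.glMatrixMode(GL.GL_PROJECTION)
GL.glLoadIdentity()
hmd.multiplyProjectionMatrixGL()  # projection for the current eye buffer

GL.glMatrixMode(GL.GL_MODELVIEW)
GL.glLoadIdentity()
hmd.multiplyViewMatrixGL()  # view transform for the current eye buffer
# draw using the eye's view and projection ...
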
property nearClip

Distance to the near clipping plane in meters.

nextEditable()

Moves focus of the cursor to the next editable window

onResize(width, height)

A default resize event handler.

This default handler updates the GL viewport to cover the entire window and sets the GL_PROJECTION matrix to be orthogonal in window space. The bottom-left corner is (0, 0) and the top-right corner is the width and height of the Window in pixels.

Override this event handler with your own to create another projection, for example in perspective.

property overlayPresent

perfHudMode(mode='Off')[source]

Set the performance HUD mode.

Parameters:

mode (str) – HUD mode to use.

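For example (a brief sketch; ‘PerfSummary’ is one of the LibOVR performance HUD mode names and is assumed to be accepted here):

hmd.perfHudMode('PerfSummary')  # show the performance summary overlay
# ... later, hide it again
hmd.hidePerfHud()
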
property pixelsPerTanAngleAtCenter

Horizontal and vertical pixels per tangent angle (=1) at the center of the display.

This can be used to compute pixels-per-degree for the display.

property productName

Get the HMD’s product name (str).

property projectionMatrix

Get the projection matrix for the current eye buffer. Note that setting projectionMatrix manually will break visibility culling.

recenterTrackingOrigin()[source]

Recenter the tracking origin using the current head position.

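For example, honoring a recenter request made through the headset’s interface (a brief sketch):

if hmd.shouldRecenter:
    hmd.recenterTrackingOrigin()
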
recordFrameIntervals

Record time elapsed per frame.

Provides accurate measures of frame intervals to determine whether frames are being dropped. The intervals are the times between calls to flip(). Set to True only during the time-critical parts of the script. Set this to False while the screen is not being updated, i.e., during any slow, non-frame-time-critical sections of your code, including inter-trial intervals, event.waitKeys(), core.wait(), or image.setImage().

Examples

Enable frame interval recording, successive frame intervals will be stored:

win.recordFrameIntervals = True

Frame intervals can be saved by calling the saveFrameIntervals method:

win.saveFrameIntervals()

removeEditable(editable)

resetEyeTransform(clearDepth=True)

Restore the default projection and view settings to PsychoPy defaults. Call this prior to drawing 2D stimulus objects (e.g. GratingStim, ImageStim, Rect, etc.) if any eye transformations were applied for the stimuli to be drawn correctly.

Parameters:

clearDepth (bool) – Clear the depth buffer upon reset. This ensures successive draw commands are not affected by previous data written to the depth buffer. Default is True.

Notes

  • Calling flip() automatically resets the view and projection to defaults. So you don’t need to call this unless you are mixing 3D and 2D stimuli.

Examples

Going between 3D and 2D stimuli:

# 2D stimuli can be drawn before setting a perspective projection
win.setPerspectiveView()
# draw 3D stimuli here ...
win.resetEyeTransform()
# 2D stimuli can be drawn here again ...
win.flip()

resetViewport()

Reset the viewport to cover the whole framebuffer.

Set the viewport to match the dimensions of the back buffer or framebuffer (if useFBO=True). The scissor rectangle is also set to match the dimensions of the viewport.

retrieveAutoDraw()

Add all stimuli which are on ‘hold’ back into the autoDraw list, and clear the hold list.

property rgb

saveFrameIntervals(fileName=None, clear=True)

Save recorded screen frame intervals to disk, as comma-separated values.

Parameters:
  • fileName (None or str) – None or the filename (including path if necessary) in which to store the data. If None then ‘lastFrameIntervals.log’ will be used.

  • clear (bool) – Clear the stored frame intervals after saving. Default is True.

saveMovieFrames(fileName, codec='libx264', fps=30, clearFrames=True)

Writes any captured frames to disk.

Will write any format that is understood by PIL (tif, jpg, png, …)

Parameters:
  • filename (str) – Name of file, including path. The extension at the end of the file determines the type of file(s) created. If an image type (e.g. .png) is given, then multiple static frames are created. If it is .gif then an animated GIF image is created (although you will get higher quality GIF by saving PNG files and then combining them in dedicated image manipulation software, such as GIMP). On Windows and Linux .mpeg files can be created if pymedia is installed. On macOS .mov files can be created if the pyobjc-frameworks-QTKit is installed. Unfortunately the libs used for movie generation can be flaky and poor quality. As for animated GIFs, better results can be achieved by saving as individual .png frames and then combining them into a movie using software like ffmpeg.

  • codec (str, optional) – The codec to be used by moviepy for mp4/mpg/mov files. If None then the default will depend on file extension. Can be one of libx264, mpeg4 for mp4/mov files. Can be rawvideo, png for avi files (not recommended). Can be libvorbis for ogv files. Default is libx264.

  • fps (int, optional) – The frame rate to be used throughout the movie. Only for QuickTime (.mov) movies. Default is 30.

  • clearFrames (bool, optional) – Set this to False if you want the frames to be kept for additional calls to saveMovieFrames. Default is True.

Examples

Writes a series of static frames as frame001.tif, frame002.tif etc.:

myWin.saveMovieFrames('frame.tif')

As of PsychoPy 1.84.1 the following are written with moviepy:

myWin.saveMovieFrames('stimuli.mp4') # codec = 'libx264' or 'mpeg4'
myWin.saveMovieFrames('stimuli.mov')
myWin.saveMovieFrames('stimuli.gif')
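
Note that frames must first be captured with getMovieFrame() before anything can be written; a minimal capture-and-save sketch (stim is an assumption):

for frameN in range(30):
    stim.draw()
    win.flip()
    win.getMovieFrame()  # grab the frame just displayed
win.saveMovieFrames('stimuli.mp4')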
property scissor

Scissor rectangle (x, y, w, h) for the current draw buffer.

Values x and y define the origin, and w and h the size of the rectangle in pixels. The scissor operation is only active if scissorTest=True.

Usually, the scissor and viewport are set to the same rectangle to prevent drawing operations from spilling into other regions of the screen. For instance, calling clearBuffer will only clear within the scissor rectangle.

Setting the scissor rectangle but not the viewport will restrict drawing to the defined region (like a rectangular aperture), without changing the positions of stimuli.
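
For instance, restricting a clear operation to a sub-region (values are illustrative, in pixels):

win.scissorTest = True
win.scissor = [0, 0, 400, 300]  # lower-left 400 x 300 px region
win.clearBuffer()  # only the scissor rectangle is cleared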

property scissorTest

True if scissor testing is enabled.

property screenshot
property sensorSampleTime

Sensor sample time (float). This value corresponds to the time the head (HMD) position was sampled, which is required for computing motion-to-photon latency. This does not need to be specified if getTrackingState was called with latencyMarker=True.

property serialNumber

Get the connected HMD’s unique serial number (str).

Use this to identify a particular unit if you own many.

setBlendMode(blendMode, log=None)

Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.

setBuffer(buffer, clear=True)[source]

Set the active draw buffer.

Warning

The window.Window.size property will return the buffer’s dimensions in pixels instead of the window’s after setBuffer has been called with ‘left’ or ‘right’.

Parameters:
  • buffer (str) – View buffer to divert successive drawing operations to, can be either ‘left’ or ‘right’.

  • clear (bool) – Clear the color, stencil and depth buffers.
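
A typical per-eye render loop (a sketch, assuming hmd is a Rift instance and stim a visual stimulus; the trackingState.headPose.thePose attribute path follows the psychxr API and should be treated as an assumption):

while not hmd.shouldQuit:
    trackingState = hmd.getTrackingState()
    hmd.calcEyePoses(trackingState.headPose.thePose)
    for eye in ('left', 'right'):
        hmd.setBuffer(eye)  # divert drawing to this eye's buffer
        stim.draw()
    hmd.flip()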

setColor(color, colorSpace=None, operation='', log=None)

Usually you can use stim.attribute = value syntax instead, but use this method if you want to set color and colorSpace simultaneously.

See color for documentation on colors.

setDefaultView(clearDepth=True)[source]

Return to default projection. Call this before drawing PsychoPy’s 2D stimuli after a stereo projection change.

Note: This only has an effect if using Rift in legacy immediate mode OpenGL.

Parameters:

clearDepth (bool) – Clear the depth buffer before configuring the view parameters.

setGamma(gamma, log=None)

Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.

setMouseType(name='arrow')

Change the appearance of the cursor for this window. Cursor types provide contextual hints about how to interact with on-screen objects.

The graphics used are ‘standard cursors’ provided by the operating system. They may vary in appearance and hot-spot location across platforms. The following names are valid on most platforms:

  • arrow : Default pointer.

  • ibeam : Indicates text can be edited.

  • crosshair : Crosshair with hot-spot at center.

  • hand : A pointing hand.

  • hresize : Double arrows pointing horizontally.

  • vresize : Double arrows pointing vertically.

Parameters:

name (str) – Type of standard cursor to use (see above). Default is arrow.

Notes

  • On Windows the crosshair option is negated with the background color. It will not be visible when placed over 50% grey fields.
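
For example, hinting that an on-screen object is clickable:

win.setMouseType('hand')   # while hovering a clickable object
win.setMouseType('arrow')  # restore the default pointer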

setMouseVisible(visibility, log=None)

Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.

setOffAxisView(applyTransform=True, clearDepth=True)

Set an off-axis projection.

Create an off-axis projection for subsequent rendering calls. Sets the viewMatrix and projectionMatrix accordingly so the scene origin is on the screen plane. If eyeOffset is correct and the view distance and screen size are defined in the monitor configuration, the resulting view will approximate ortho-stereo viewing.

The convergence plane can be adjusted by setting convergeOffset. By default, the convergence plane is set to the screen plane. Any points on the screen plane will have zero disparity.

Parameters:
  • applyTransform (bool) – Apply transformations after computing them in immediate mode. Same as calling applyEyeTransform() afterwards.

  • clearDepth (bool, optional) – Clear the depth buffer.
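
A minimal sketch for a non-HMD stereo window, assuming eyeOffset here takes a signed per-eye offset in centimeters (with a Rift, eye transforms normally come from setBuffer/calcEyePoses instead):

win.eyeOffset = -3.25    # left eye: half of a 6.5 cm IPD
win.setOffAxisView()     # computes and applies view/projection matrices
# draw 3D stimuli for the left eye ...
win.resetEyeTransform()  # back to defaults before drawing 2D stimuli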

setPerspectiveView(applyTransform=True, clearDepth=True)

Set the projection and view matrix to render with perspective.

Matrices are computed using values specified in the monitor configuration with the scene origin on the screen plane. Calculations assume units are in meters. If eyeOffset != 0, the view will be transformed laterally, however the frustum shape will remain the same.

Note that the values of projectionMatrix and viewMatrix will be replaced when calling this function.

Parameters:
  • applyTransform (bool) – Apply transformations after computing them in immediate mode. If False, call applyEyeTransform() afterwards to apply them.

  • clearDepth (bool, optional) – Clear the depth buffer.

setRGB(newRGB)

Deprecated: As of v1.61.00 please use setColor() instead

setRecordFrameIntervals(value=True, log=None)

Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.

setRiftView(clearDepth=True)[source]

Set head-mounted display view. Gets the projection and view matrices from the HMD and applies them.

Note: This only has an effect if using Rift in legacy immediate mode OpenGL.

Parameters:

clearDepth (bool) – Clear the depth buffer before configuring the view parameters.
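
In legacy immediate-mode rendering the two calls bracket 3D and 2D drawing (a sketch):

hmd.setBuffer('left')
hmd.setRiftView()     # apply the HMD's projection and view matrices
# draw 3D stimuli here ...
hmd.setDefaultView()  # restore the default 2D projection
# draw 2D overlays (text, fixation point, etc.) here ...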

setScale(units, font='dummyFont', prevScale=(1.0, 1.0))

DEPRECATED: this method was used to switch between units for stimulus drawing, but that is now handled by the stimuli themselves; the window should always be left in units of ‘pix’.

setSize(value, log=True)[source]
setStereoDebugHudOption(option, value)[source]

Configure stereo debug HUD guides.

Parameters:
  • option (str) – Option to set. Valid options are InfoEnable, Size, Position, YawPitchRoll, and Color.

  • value (array_like or bool) –

    Value to set for a given option. Appropriate types for each option are:

    • InfoEnable - bool, True to show, False to hide.

    • Size - array_like, [w, h] in meters.

    • Position - array_like, [x, y, z] in meters.

    • YawPitchRoll - array_like, [pitch, yaw, roll] in degrees.

    • Color - array_like, [r, g, b] as floats ranging 0.0 to 1.0.

Returns:

True if the option was successfully set.

Return type:

bool

Examples

Configuring a stereo debug HUD guide:

# show a quad with a crosshair
hmd.stereoDebugHudMode('QuadWithCrosshair')
# enable displaying guide information
hmd.setStereoDebugHudOption('InfoEnable', True)
# set the position of the guide quad in the scene
hmd.setStereoDebugHudOption('Position', [0.0, 1.7, -2.0])
setToeInView(applyTransform=True, clearDepth=True)

Set toe-in projection.

Create a toe-in projection for subsequent rendering calls. Sets the viewMatrix and projectionMatrix accordingly so the scene origin is on the screen plane. The value of convergeOffset will define the convergence point of the view, which is offset perpendicular to the center of the screen plane. Points falling on a vertical line at the convergence point will have zero disparity.

Parameters:
  • applyTransform (bool) – Apply transformations after computing them in immediate mode. Same as calling applyEyeTransform() afterwards.

  • clearDepth (bool, optional) – Clear the depth buffer.

Notes

  • This projection mode is only ‘correct’ if the viewer’s eyes are converged at the convergence point. Due to perspective, this projection introduces vertical disparities which increase in magnitude with eccentricity. Use setOffAxisView if you want the viewer to be able to look around the screen comfortably.

setUnits(value, log=True)
setViewPos(value, log=True)
property shouldQuit

True if the user requested the application should quit through the headset’s interface.

property shouldRecenter

True if the user requested the origin be re-centered through the headset’s interface.

showMessage(msg)

Show a message in the window. This can be used to show information to the participant.

This creates a TextBox2 object that is displayed in the window. The text can be updated by calling this method again with a new message. The updated text will appear the next time draw() is called.

Parameters:

msg (str or None) – Message text to display. If None, then any existing message is removed.

showPilotingIndicator()

Show the visual indicator that the window is running in piloting mode.

property size

Size property to get the dimensions of the view buffer instead of the window. If there are no view buffers, the dimensions of the window are returned.

specifyTrackingOrigin(pose)[source]

Specify a tracking origin. If trackingOriginType=’floor’, this function sets the origin of the scene in the ground plane. If trackingOriginType=’eye’, the scene origin is set to the known eye height.

Parameters:

pose (LibOVRPose) – Tracking origin pose.

specifyTrackingOriginPosOri(pos=(0.0, 0.0, 0.0), ori=(0.0, 0.0, 0.0, 1.0))[source]

Specify a tracking origin using a pose and orientation. This is the same as specifyTrackingOrigin, but accepts a position vector [x, y, z] and orientation quaternion [x, y, z, w].

Parameters:
  • pos (tuple or list of float, or ndarray) – Position coordinate of origin (x, y, z).

  • ori (tuple or list of float, or ndarray) – Quaternion specifying orientation (x, y, z, w).
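
For instance, placing the origin 1.5 meters above the floor with no rotation (the identity quaternion):

hmd.specifyTrackingOriginPosOri(pos=(0.0, 1.5, 0.0), ori=(0.0, 0.0, 0.0, 1.0))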

startHaptics(controller, frequency='low', amplitude=1.0)[source]

Start haptic feedback (vibration).

Vibration is constant at fixed frequency and amplitude. Vibration lasts 2.5 seconds, so this function needs to be called more often than that for sustained vibration. Only controllers which support vibration can be used here.

Only two frequencies are permitted, ‘high’ and ‘low’; however, amplitude can vary from 0.0 to 1.0. Specifying frequency=’off’ stops vibration if it is in progress.

Parameters:
  • controller (str) – Name of the controller to vibrate.

  • frequency (str) – Vibration frequency. Valid values are: ‘off’, ‘low’, or ‘high’.

  • amplitude (float) – Vibration amplitude in the range [0.0, 1.0]. Values outside this range are clamped.
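
A brief vibration pulse (the controller name ‘RightTouch’ is an assumption; valid names depend on the connected devices):

from psychopy import core

hmd.startHaptics('RightTouch', frequency='low', amplitude=0.5)
core.wait(0.5)                 # vibrate for half a second
hmd.stopHaptics('RightTouch')  # stop before the 2.5 s auto timeout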

stashAutoDraw()

Put autoDraw components on ‘hold’, meaning they get autoDraw set to False but are added to an internal list to be ‘released’ when .retrieveAutoDraw is called.

property stencilTest

True if stencil testing is enabled.

stereoDebugHudMode(mode)[source]

Set the debug stereo HUD mode.

This makes the compositor add stereoscopic reference guides to the scene. The HUD can be configured using other methods (e.g., setStereoDebugHudOption).

Parameters:

mode (str) – Stereo debug mode to use. Valid options are Off, Quad, QuadWithCrosshair, and CrosshairAtInfinity.

Examples

Enable a stereo debugging guide:

hmd.stereoDebugHudMode('CrosshairAtInfinity')

Hide the debugging guide. Should be called before exiting the application since it’s persistent until the Oculus service is restarted:

hmd.stereoDebugHudMode('Off')
stopHaptics(controller)[source]

Stop haptic feedback.

Convenience function to stop controller vibration initiated by the last startHaptics call. This is the same as calling startHaptics(controller, frequency='off').

Parameters:

controller (str) – Name of the controller to stop vibrating.

submitControllerVibration(controller, hapticsBuffer)[source]

Submit a haptics buffer to begin controller vibration.

Parameters:
  • controller (str) – Name of controller to vibrate.

  • hapticsBuffer (LibOVRHapticsBuffer) – Haptics buffer to playback.

Notes

Methods startHaptics and stopHaptics cannot be used interchangeably with this function.

tanAngleToNDC(horzTan, vertTan)[source]

Convert tan angles to the normalized device coordinates for the current buffer.

Parameters:
  • horzTan (float) – Horizontal tan angle.

  • vertTan (float) – Vertical tan angle.

Returns:

Normalized device coordinates X, Y. Coordinates range between -1.0 and 1.0. Returns None if an invalid buffer is selected.

Return type:

tuple of float

testBoundary(deviceType, bounadryType='PlayArea')[source]

Test if tracked devices are colliding with the play area boundary.

This returns an object containing test result data.

Parameters:
  • deviceType (str, list or tuple) – The device to check for boundary collision. If a list of names is provided, they will be combined and all tested.

  • boundaryType (str) – Boundary type to test.
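
For example, checking whether the user’s head is near the play-area boundary (the device name and the isTriggering attribute follow the LibOVR/psychxr conventions and are assumptions here):

result = hmd.testBoundary('Head')
if result.isTriggering:
    print('Approaching the play area boundary!')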

timeOnFlip(obj, attrib, format=float)

Retrieves the time on the next flip and assigns it to the attrib for this obj.

Parameters:
  • obj (dict or object) – A mutable object (usually a dict or class instance).

  • attrib (str) – Key or attribute of obj to assign the flip time to.

  • format (str, class or None) – Format in which to return time, see clock.Timestamp.resolve() for more info. Defaults to float.

Examples

Assign time on flip to the tStartRefresh key of myTimingDict:

win.timeOnFlip(myTimingDict, 'tStartRefresh')
title
property trackerCount

Number of attached trackers.

property trackingOriginType

Current tracking origin type (str).

Valid tracking origin types are ‘floor’ and ‘eye’.

units

None, ‘height’ (of the window), ‘norm’, ‘deg’, ‘cm’, or ‘pix’. Defines the default units of stimuli initialized in the window; i.e., if you change units, already initialized stimuli won’t change their units.

Can be overridden by each stimulus, if units is specified on initialization.

See Units for the window and stimuli for explanation of options.

update()

Deprecated: use Window.flip() instead

updateInputState(controllers=None)[source]

Update all connected controller states. This updates controller input states for an input device managed by LibOVR.

The polling time for each device is accessible through the controllerPollTimes attribute. This attribute returns a dictionary where the polling time from the last updateInputState call for a given controller can be retrieved by using the name as a key.

Parameters:

controllers (tuple or list, optional) – List of controllers to poll. If None, all available controllers will be polled.

Examples

Poll the state of specific controllers by name:

controllers = ['XBox', 'Touch']
hmd.updateInputState(controllers)
updateLights(index=None)

Explicitly update scene lights if they were modified.

This is required if objects referenced in lights have been modified since assignment. If you removed or added items to lights, you must refresh all of them.

Parameters:

index (int, optional) – Index of light source in lights to update. If None, all lights will be refreshed.

Examples

Call updateLights if you modified lights directly like this:

win.lights[1].diffuseColor = [1., 0., 0.]
win.updateLights(1)
property useLights

Enable scene lighting.

Lights will be enabled only if using legacy OpenGL lighting. Stimuli that use shaders for lighting should check whether useLights is True, since this flag has no effect on them, and disable lighting or use a no-lighting shader instead. Lights will be transformed to the current view matrix upon setting this to True.

Lights are transformed by the present GL_MODELVIEW matrix. Setting useLights will result in their positions being transformed by it. If you want lights to appear at the specified positions in world space, make sure the current matrix defines the view/eye transformation when setting useLights=True.

This flag is reset to False at the beginning of each frame. Should be False if rendering 2D stimuli or else the colors will be incorrect.
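
A per-frame sketch, assuming win.lights has already been assigned LightSource objects and box is a 3D stimulus:

win.setPerspectiveView()  # the view matrix defines the eye transform
win.useLights = True      # light positions are transformed by that matrix
box.draw()
win.resetEyeTransform()   # back to 2D; useLights resets at the next frame
win.flip()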

property userHeight

Get user height in meters (float).

property viewMatrix

The view matrix for the current eye buffer. Only valid after a calcEyePoses() call. Note that setting viewMatrix manually will break visibility culling.

viewPos

The origin of the window onto which stimulus-objects are drawn.

The value should be given in the units defined for the window. NB: Never change a single component (x or y) of the origin; instead, replace the viewPos attribute in one shot, e.g.:

win.viewPos = [new_xval, new_yval]  # This is the way to do it
win.viewPos[0] = new_xval  # DO NOT DO THIS! Errors will result.
property viewport

Viewport rectangle (x, y, w, h) for the current draw buffer.

Values x and y define the origin, and w and h the size of the rectangle in pixels.

This is typically set to cover the whole buffer; however, it can be changed for applications like multi-view rendering. Stimuli will draw according to the new shape of the viewport; for instance, a stimulus with position (0, 0) will be drawn at the center of the viewport, not the window.

Examples

Constrain drawing to the left and right halves of the screen, where stimuli will be drawn centered on the new rectangle. Note that you need to set both the viewport and the scissor rectangle:

w, h = win.frameBufferSize  # size of the framebuffer (w, h)
x, y = 0, 0                 # viewport origin at the lower-left corner
win.viewport = win.scissor = [x, y, w / 2.0, h]
# draw left stimuli ...

win.viewport = win.scissor = [x + (w / 2.0), y, w / 2.0, h]
# draw right stimuli ...

# restore drawing to the whole screen
win.viewport = win.scissor = [x, y, w, h]
waitBlanking

Whether to wait for the vertical blank after a call to flip() before the script continues.

property windowedSize

Size of the window to use when not fullscreen (w, h).

