psychopy.visual.Rift
Class provides a display and peripheral interface for the Oculus Rift (see: https://www.oculus.com/) head-mounted display.

Summary of methods and attributes (documented in detail below):
- Close the window and cleanly shut down the LibOVR session.
- Size property to get the dimensions of the view buffer instead of the window.
- Set the performance HUD mode.
- Hide the performance HUD.
- Set the debug stereo HUD mode.
- Configure stereo debug HUD guides.
- Get user height in meters (float).
- Eye height in meters (float).
- Eye to nose distance in meters (float).
- Eye separation in centimeters (float).
- Get the connected HMD's manufacturer (str).
- Get the connected HMD's unique serial number (str).
- USB human interface device (HID) identifiers (int, int).
- Get the HMD's raster display size (int, int).
- Get the HMD's display refresh rate in Hz (float).
- Horizontal and vertical pixels per tangent angle (=1) at the center of the display.
- Convert tan angles to the normalized device coordinates for the current buffer.
- Number of attached trackers.
- Get tracker information.
- True if head locking is enabled.
- Current tracking origin type (str).
- Recenter the tracking origin using the current head position.
- Specify a tracking origin.
- Specify a tracking origin using a pose and orientation.
- Clear the 'shouldRecenter' status flag at the API level.
- Test if tracked devices are colliding with the play area boundary.
- Sensor sample time (float).
- Get the pose of a tracked device.
- Get the tracking state of the head and hands.
- Calculate eye poses for rendering.
- Computed eye pose for the current buffer.
- True if the user requested the application should quit through the headset's interface.
- True if the app has focus in the HMD and is visible to the viewer.
- True if the HMD is mounted on the user's head.
- True if the HMD is present.
- True if the user requested the origin be re-centered through the headset's interface.
- True if the application currently has input focus.
- Set the active draw buffer.
- Get the predicted time the next frame will be displayed on the HMD.
- Absolute time in seconds.
- The view matrix for the current eye buffer.
- Distance to the near clipping plane in meters.
- Distance to the far clipping plane in meters.
- Get the projection matrix for the current eye buffer.
- True if the VR boundary is visible.
- Get boundary dimensions.
- Connected controller types (list of str).
- Update all connected controller states.
- Submit view buffer images to the HMD's compositor for display at next V-SYNC and draw the mirror texture to the on-screen window.
- Multiply the local eye pose transformation matrix obtained from the SDK.
- Multiply the current projection matrix obtained from the SDK.
- Set head-mounted display view.
- Return to default projection.
- Get controller thumbstick values.
- Get the values of the index triggers.
- Get the values of the hand triggers.
- Get button states from a controller.
- Get touch states from a controller.
- Start haptic feedback (vibration).
- Stop haptic feedback.
- Create a new haptics buffer.
- Submit a haptics buffer to begin controller vibration.
- Create a new Rift pose object (LibOVRPose).
- Create a new bounding box object (LibOVRBounds).
- Check if a pose object is visible to the present eye.
Class provides a display and peripheral interface for the Oculus Rift (see: https://www.oculus.com/) head-mounted display. This is a lazy-imported class; when inheriting from it, import it using its full path: from psychopy.visual.rift import Rift.
Requires PsychXR 0.2.4 to be installed. Setting winType='glfw' is preferred for VR applications.
fovType (str) – Field-of-view (FOV) configuration type. Using ‘recommended’ auto-configures the FOV using the recommended parameters computed by the runtime. Using ‘symmetric’ forces a symmetric FOV using optimal parameters from the SDK, this mode is required for displaying 2D stimuli. Specifying ‘max’ will use the maximum FOVs supported by the HMD.
trackingOriginType (str) – Specify the HMD origin type. If ‘floor’, the height of the user is added to the head tracker by LibOVR.
texelsPerPixel (float) – Texture pixels per display pixel at FOV center. A value of 1.0 results in 1:1 mapping. A fractional value results in a lower resolution draw buffer which may increase performance.
headLocked (bool) – Lock the compositor render layer in-place, disabling Asynchronous Space Warp (ASW). Enable this if you plan on computing eye poses using custom or modified head poses.
highQuality (bool) – Configure the compositor to use anisotropic texture sampling (4x). This reduces aliasing artifacts resulting from high frequency details particularly in the periphery.
nearClip (float) – Distance to the near clipping plane in GL units (meters by default) from the viewer. This value can be updated after initialization.
farClip (float) – Distance to the far clipping plane in GL units (meters by default) from the viewer. This value can be updated after initialization.
monoscopic (bool) – Enable monoscopic rendering mode which presents the same image to both eyes. Eye poses used will be both centered at the HMD origin. Monoscopic mode uses a separate rendering pipeline which reduces VRAM usage. When in monoscopic mode, you do not need to call 'setBuffer' prior to rendering (doing so will have no effect).
samples (int or str) – Specify the number of samples for multi-sample anti-aliasing (MSAA). When >1, multi-sampling logic is enabled in the rendering pipeline. If 'max' is specified, the largest number of samples supported by the platform is used. If floating point textures are used, MSAA sampling is disabled. Must be a power-of-two value.
mirrorMode (str) – On-screen mirror mode. Values 'left' and 'right' show rectilinear images of a single eye. Value 'distortion' shows the post-distortion image after being processed by the compositor. Value 'default' displays rectilinear images of both eyes side-by-side.
mirrorRes (list of int) – Resolution of the mirror texture. If None, the resolution will match the window size. The value of mirrorRes is used to define the resolution of movie frames.
warnAppFrameDropped (bool) – Log a warning if the application drops a frame. This occurs when the application fails to submit a frame to the compositor on-time. Application frame drops can have many causes, such as running routines in your application loop that take too long to complete. However, frame drops can happen sporadically due to driver bugs and running background processes (such as Windows Update). Use the performance HUD to help diagnose the causes of frame drops.
autoUpdateInput (bool) – Automatically update controller input states at the start of each frame. If False, you must manually call updateInputState before getting input values from LibOVR managed input devices.
legacyOpenGL (bool) – Disable 'immediate mode' OpenGL calls in the rendering pipeline. Specifying False maintains compatibility with existing PsychoPy stimuli drawing routines. Use True when computing transformations using some other method and supplying matrices to shaders directly.
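For illustration, a window might be opened with a selection of these parameters as follows (a minimal sketch; the values shown are arbitrary choices, not defaults):
from psychopy import visual

hmd = visual.Rift(fovType='recommended',
                  trackingOriginType='floor',
                  samples='max',
                  mirrorMode='default',
                  warnAppFrameDropped=True,
                  winType='glfw')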
Helper function to assign the time of last flip to the obj.attrib
Checks whether the requested and actual screen sizes differ. If they do, a warning is output and the window size is set to the actual size.
Make sure there are no dead refs in the editables list
Override end of flip with custom color channel masking if required.
Return the current HMD view as an image.
rect (array_like) – Rectangle [x, y, w, h] defining a sub-region of the frame to capture. This should remain None for HMD frames.
buffer (str, optional) – Buffer to capture. For the HMD, only ‘mirror’ is available at this time.
Buffer pixel contents as a PIL/Pillow image object.
Image
Return an array of pixel values from the current window buffer or sub-region.
rect (tuple[int], optional) – The region of the window to capture in pixel coordinates (left, bottom, width, height). If None, the whole window is captured.
buffer (str, optional) – Buffer to capture.
includeAlpha (bool, optional) – Include the alpha channel in the returned array. Default is True.
makeLum (bool, optional) – Convert the RGB values to luminance values. Values are rounded to the nearest integer. Default is False.
Pixel values as a 3D array of shape (height, width, channels). If includeAlpha is False, the array will have shape (height, width, 3). If makeLum is True, the array will have shape (height, width).
ndarray
Examples
Get the pixel values of the whole window:
pix = win._getPixels()
Get pixel values and convert to luminance and get average:
pix = win._getPixels(makeLum=True)
average = pix.mean()
Deprecated function, here for historical reasons. You may now use _getFrame() and specify a rect to get a sub-region, just as used here.
power2 can be useful with older OpenGL versions to avoid interpolation in PatchStim. If power2 or squarePower2, it will expand rect dimensions up to the next power of two. squarePower2 uses the max dimensions. You need to check what your hardware & OpenGL supports, and call _getRegionOfFrame() as appropriate.
Prepare a frame for monoscopic rendering. This is called automatically after _startHmdFrame() if monoscopic rendering is enabled.
Perform a warp operation (in this case, a copy operation without any warping).
Resolve multisample anti-aliasing (MSAA). If MSAA is enabled, drawing operations are diverted to a special multisample render buffer. Pixel data must be ‘resolved’ by blitting it to the swap chain texture. If not, the texture will be blank.
Notes
You cannot perform operations on the default FBO (at frameBuffer) when MSAA is enabled. Any changes will be over-written when ‘flip’ is called.
Make this window’s OpenGL context current.
If called on a window whose context is current, the function will return immediately. This reduces the number of redundant calls if no context switch is required. If useFBO=True, the framebuffer is bound after the context switch.
Override the default framebuffer init code in window.Window to use the HMD swap chain. The HMD’s swap texture and render buffer are configured here.
If multisample anti-aliasing (MSAA) is enabled, a secondary render buffer is created. Rendering is diverted to the multi-sample buffer when drawing, which is then resolved into the HMD’s swap chain texture prior to committing it to the chain. Consequently, you cannot pass the texture attached to the FBO specified by frameBuffer until the MSAA buffer is resolved. Doing so will result in a blank texture.
Setup OpenGL state for this window.
A private method to work out how to handle gamma for this Window given that the user might have specified an explicit value, or maybe gave a Monitor.
Prepare to render an HMD frame. This must be called every frame before flipping or setting the view buffer.
This function will wait until the HMD is ready to begin rendering before continuing. The current frame textures from the swap chain are pulled from the SDK and made available for binding.
Custom _startOfFlip for HMD rendering. This finalizes the HMD texture before diverting drawing operations back to the on-screen window. This allows flip to swap the on-screen and HMD buffers when called. This function always returns True.
True
Update and process performance statistics obtained from LibOVR. This should be called at the beginning of each frame to get the stats of the last frame.
This is called automatically when _waitToBeginHmdFrame() is called at the beginning of each frame.
Update or re-calculate projection matrices based on the current render descriptor configuration.
Adds an editable element to the screen (something to which characters can be sent with meaning from the keyboard).
The current editable object receiving chars is Window.currentEditable
editable – The editable object to add to the window's editables list.
Ambient light color for the scene [r, g, b, a]. Values range from 0.0 to 1.0. Only applicable if useLights is True.
Examples
Setting the ambient light color:
win.ambientLight = [0.5, 0.5, 0.5]
# don't do this!!!
win.ambientLight[0] = 0.5
win.ambientLight[1] = 0.5
win.ambientLight[2] = 0.5
Apply the current view and projection matrices.
Matrices specified by attributes viewMatrix
and
projectionMatrix
are applied using ‘immediate mode’
OpenGL functions. Subsequent drawing operations will be affected until
flip()
is called.
All transformations in GL_PROJECTION
and GL_MODELVIEW
matrix
stacks will be cleared (set to identity) prior to applying.
clearDepth (bool) – Clear the depth buffer. This may be required prior to rendering 3D objects.
Examples
Using a custom view and projection matrix:
# Must be called every frame since these values are reset after
# `flip()` is called!
win.viewMatrix = viewtools.lookAt( ... )
win.projectionMatrix = viewtools.perspectiveProjectionMatrix( ... )
win.applyEyeTransform()
# draw 3D objects here ...
Aspect ratio of the current viewport (width / height).
How should the background image of this window fit? Options are:
- No scaling is applied; the image is presented at its pixel size, unaltered.
- The image is scaled such that it covers the whole screen without changing its aspect ratio. In other words, both dimensions are evenly scaled such that the image's SHORTEST dimension matches the window's LONGEST dimension.
- The image is scaled such that it is contained within the screen without changing its aspect ratio. In other words, both dimensions are evenly scaled such that the image's LONGEST dimension matches the window's SHORTEST dimension.
- If the image is bigger than the window along any dimension, it behaves like the "contain" option above; otherwise, it behaves like the no-scaling option.
Background image for the window, can be either a visual.ImageStim object or anything which could be passed to visual.ImageStim.image to create one. Will be drawn each time win.flip() is called, meaning it is always below all other contents of the window.
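A brief sketch of both attributes (the image path and the option string 'cover' are illustrative assumptions; the text above describes the fitting behaviors but not their exact names):
win.backgroundImage = 'forest.png'  # hypothetical image file
win.backgroundFit = 'cover'         # assumed name for the cover-style fit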
Blend mode to use.
Calculate eye poses for rendering.
This function calculates the eye poses to define the viewpoint transformation for each eye buffer. Upon starting a new frame, the application loop is halted until this function is called and returns.
Once this function returns, setBuffer may be called and frame rendering can commence. The computed eye pose for the selected buffer is accessible through the eyeRenderPose attribute after calling setBuffer(). If monoscopic=True, the eye poses are set to the head pose.
The source data specified to headPose can originate from the tracking state retrieved by calling getTrackingState(), or from other sources. If a custom head pose is specified (for instance, from a motion tracker), you must ensure head-locking is enabled to prevent the ASW feature of the compositor from engaging. Furthermore, you must specify the sensor sample time for motion-to-photon calculation, derived from the sample time of the custom tracking source.
headPose (LibOVRPose) – Head pose to use.
originPose (LibOVRPose, optional) – Origin of tracking in the VR scene.
Examples
Get the tracking state and calculate the eye poses:
# get tracking state at predicted mid-frame time
trackingState = hmd.getTrackingState()
# get the head pose from the tracking state
headPose = trackingState.headPose.thePose
hmd.calcEyePoses(headPose) # compute eye poses
# begin rendering to each eye
for eye in ('left', 'right'):
hmd.setBuffer(eye)
hmd.setRiftView()
# draw stuff here ...
Using a custom head pose (make sure headLocked=True before doing this):
headPose = createPose((0., 1.75, 0.))
hmd.calcEyePoses(headPose) # compute eye poses
Call a function immediately after the next flip() command.
The first argument should be the function to call, the following args should be used exactly as you would for your normal call to the function (can use ordered arguments or keyword arguments as normal).
e.g. If you have a function that you would normally call like this:
pingMyDevice(portToPing, channel=2, level=0)
then you could call callOnFlip() to have the function call synchronized with the frame flip like this:
win.callOnFlip(pingMyDevice, portToPing, channel=2, level=0)
Remove all autoDraw components, meaning they get autoDraw set to False and are not added to any list (as in .stashAutoDraw)
Clear the present buffer (to which you are currently drawing) without flipping the window.
Useful if you want to generate movie sequences from the back buffer without actually taking the time to flip the window.
Set the color attribute prior to clearing to define the color the color buffer is cleared to. By default, the depth buffer is cleared to a value of 1.0.
Examples
Clear the color buffer to a specified color:
win.color = (1, 0, 0)
win.clearBuffer(color=True)
Clear only the depth buffer, depthMask must be True or else this will have no effect. Depth mask is usually True by default, but may change:
win.depthMask = True
win.clearBuffer(color=False, depth=True, stencil=False)
Set the color of the window.
This command sets the color that the blank screen will have on the next clear operation. As a result it effectively takes TWO flip() operations to become visible (the first uses the color to create the new screen, the second presents that screen to the viewer). For this reason, if you want to change the background color of the window "on the fly", it might be a better idea to draw a Rect that fills the whole window with the desired Rect.fillColor attribute. That'll show up on the first flip.
See other stimuli (e.g. GratingStim.color) for more info on the color attribute, which essentially works the same on all PsychoPy stimuli.
See Color spaces for further information about the ways to specify colors and their various implications.
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to specify colorSpace before setting the color. Example:
# A light green text
stim = visual.TextStim(win, 'Color me!',
color=(0, 1, 0), colorSpace='rgb')
# An almost-black text
stim.colorSpace = 'rgb255'
# Make it light green again
stim.color = (128, 255, 128)
Connected controller types (list of str)
Scaling factor (float) to use when drawing to the backbuffer to convert framebuffer to client coordinates.
Convergence offset from monitor in centimeters.
This value corresponds to the offset from the screen plane used to set the convergence plane (or point for toe-in projections). Positive offsets move the plane farther from the viewer, while negative offsets move it nearer. This value is used by setPerspectiveView and should be set before calling it to take effect.
Notes
This value is only applicable for setToeIn and setOffAxisView.
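For example, using the attributes and methods documented here, the convergence plane can be pushed half a meter beyond the screen before configuring the view:
win.convergeOffset = 0.5  # positive values move the plane farther from the viewer
win.setOffAxisView()      # convergeOffset must be set before this call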
Convert a screen coordinate to a direction vector.
Takes a screen/window coordinate and computes a vector which projects a ray from the viewpoint through it (line-of-sight). Any 3D point touching the ray will appear at the screen coordinate.
Uses the current viewport and projectionMatrix to calculate the vector. The vector is in eye-space, where the origin of the scene is centered at the viewpoint and the forward direction aligned with the -Z axis. A ray of (0, 0, -1) results from a point at the very center of the screen assuming symmetric frustums.
Note that if you are using a flipped/mirrored view, you must invert your supplied screen coordinates (screenXY) prior to passing them to this function.
screenXY (array_like) – X, Y screen coordinate. Must be in units of the window.
Normalized direction vector [x, y, z].
ndarray
Examples
Getting the direction vector between the mouse cursor and the eye:
mx, my = mouse.getPos()
dir = win.coordToRay((mx, my))
Set the position of a 3D stimulus object using the mouse, constrained to a plane. The object origin will always be at the screen coordinate of the mouse cursor:
# the eye position in the scene is defined by a rigid body pose
win.viewMatrix = camera.getViewMatrix()
win.applyEyeTransform()
# get the mouse location and calculate the intercept
mx, my = mouse.getPos()
ray = win.coordToRay([mx, my])
result = intersectRayPlane(  # from mathtools
    orig=camera.pos,
    dir=camera.transformNormal(ray),
    planeOrig=(0, 0, -10),
    planeNormal=(0, 1, 0))
# if result is `None`, there is no intercept
if result is not None:
    pos, dist = result
    objModel.thePose.pos = pos
else:
    objModel.thePose.pos = (0, 0, -10)  # plane origin
If you don’t define the position of the viewer with a RigidBodyPose, you can obtain the appropriate eye position and rotate the ray by doing the following:
pos = numpy.linalg.inv(win.viewMatrix)[:3, 3]
ray = win.coordToRay([mx, my]).dot(win.viewMatrix[:3, :3])
# then ...
result = intersectRayPlane(
    orig=pos,
    dir=ray,
    planeOrig=(0, 0, -10),
    planeNormal=(0, 1, 0))
Create a new bounding box object (LibOVRBounds).
LibOVRBounds represents an axis-aligned bounding box with dimensions defined by extents. Bounding boxes are primarily used for visibility testing and culling by PsychXR. The dimensions of the bounding box can be specified explicitly, or fitted to meshes by passing vertices to the fit() method after initialization.
This function exposes the LibOVRBounds class so you don't need to access it by importing psychxr.
extents (array_like or None) – Extents of the bounding box as (mins, maxs), where mins (x, y, z) is the minimum and maxs (x, y, z) is the maximum extents of the bounding box in world units. If None is specified, the returned bounding box will be invalid. The bounding box can be later constructed using the fit() method or the extents attribute.
Object representing a bounding box.
~psychxr.libovr.LibOVRBounds
Examples
Add a bounding box to a pose:
# create a 1 meter cube bounding box centered with the pose
bbox = Rift.createBoundingBox(((-.5, -.5, -.5), (.5, .5, .5)))
# create a pose and attach the bounding box
modelPose = Rift.createPose()
modelPose.boundingBox = bbox
Perform visibility culling on the pose using the bounding box by using the isPoseVisible() method:
if hmd.isPoseVisible(modelPose):
    modelPose.draw()
Create a new haptics buffer.
A haptics buffer is an object which stores vibration amplitude samples for playback through the Touch controllers. To play a haptics buffer, pass it to submitHapticsBuffer().
samples (array_like) – 1-D array of amplitude samples, ranging from 0 to 1. Values outside of this range will be clipped. The buffer must not exceed HAPTICS_BUFFER_SAMPLES_MAX samples, any additional samples will be dropped.
Haptics buffer object.
LibOVRHapticsBuffer
Notes
Methods startHaptics and stopHaptics cannot be used interchangeably with this function.
Examples
Create a haptics buffer where vibration amplitude ramps down over the course of playback:
samples = np.linspace(
    1.0, 0.0, num=HAPTICS_BUFFER_SAMPLES_MAX-1, dtype=np.float32)
hbuff = Rift.createHapticsBuffer(samples)
# vibrate right Touch controller
hmd.submitControllerVibration(CONTROLLER_TYPE_RTOUCH, hbuff)
Create a new Rift pose object (LibOVRPose).
LibOVRPose is used to represent a rigid body pose, mainly for use with PsychXR's LibOVR module. There are several methods associated with the object to manipulate the pose.
This function exposes the LibOVRPose class so you don't need to access it by importing psychxr.
True if face culling is enabled.
Face culling mode, either back, front or both.
The editable (Text?) object that currently has key focus
Depth test comparison function for rendering.
True if depth masking is enabled. When False, writing to the depth buffer is disabled.
True if depth testing is enabled.
Dispatches events for all pyglet windows. Used by iohub 2.0 psychopy kb event integration.
Get the HMD’s display refresh rate in Hz (float).
Get the HMD’s raster display size (int, int).
True if 3D drawing is enabled on this window.
Eye height in meters (float).
Eye separation in centimeters (float).
Computed eye pose for the current buffer. Only valid after calling calcEyePoses().
Eye to nose distance in meters (float).
Examples
Generate your own eye poses. These are used when calcEyePoses() is called:
leftEyePose = Rift.createPose((-self.eyeToNoseDistance, 0., 0.))
rightEyePose = Rift.createPose((self.eyeToNoseDistance, 0., 0.))
Get the inter-axial separation (IAS) reported by LibOVR:
iad = self.eyeToNoseDistance * 2.0
Distance to the far clipping plane in meters.
Get the firmware version of the active HMD (int, int).
Submit view buffer images to the HMD’s compositor for display at next V-SYNC and draw the mirror texture to the on-screen window. This must be called every frame.
Absolute time in seconds when control was given back to the application. The difference between the current and previous values should be very close to 1 / refreshRate of the HMD.
Notes
The HMD compositor and application are asynchronous, therefore there is no guarantee that the timestamp returned by ‘flip’ corresponds to the exact vertical retrace time of the HMD.
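Putting the per-frame methods documented in this section together, a render loop might look like the following sketch (this assumes the headset's quit request is exposed via the shouldQuit attribute described above):
while not hmd.shouldQuit:
    # compute eye poses from the current head pose
    trackingState = hmd.getTrackingState()
    hmd.calcEyePoses(trackingState.headPose.thePose)
    for eye in ('left', 'right'):
        hmd.setBuffer(eye)  # divert drawing to this eye's buffer
        hmd.setRiftView()   # apply the HMD's view and projection
        # draw stimuli here ...
    hmd.flip()  # submit buffers to the compositor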
Report the frames per second since the last call to this function (or since the window was created if this is the first call).
Size of the framebuffer in pixels (w, h).
Face winding order to define front, either ccw or cw.
Return whether the window is in fullscreen mode.
Set the monitor gamma for linearization.
Warning
Don’t use this if using a Bits++ or Bits#, as it overrides monitor settings.
Sets the hardware CLUT using a specified 3xN array of floats ranging between 0.0 and 1.0.
Array must have a number of rows equal to 2 ^ max(bpc).
Measures the actual frames-per-second (FPS) for the screen.
This is done by waiting (for a max of nMaxFrames) until nIdentical frames in a row have identical frame times (std dev below threshold ms).
nIdentical (int, optional) – The number of consecutive frames that will be evaluated. Higher –> greater precision. Lower –> faster.
nMaxFrames (int, optional) – The maximum number of frames to wait for a matching set of nIdentical.
nWarmUpFrames (int, optional) – The number of frames to display before starting the test (this is in place to allow the system to settle after opening the Window for the first time).
threshold (int or float, optional) – The threshold for the std deviation (in ms) before the set are considered a match.
Frame rate in frames per second (FPS). If there is no such sequence of identical frames, a warning is logged and None will be returned.
float or None
Get boundary dimensions.
boundaryType (str) – Boundary type, can be ‘PlayArea’ or ‘Outer’.
Dimensions of the boundary in meters [x, y, z].
ndarray
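For example:
dims = hmd.getBoundaryDimensions('PlayArea')  # [x, y, z] extents in meters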
Get button states from a controller.
Returns True if any names specified to buttons reflect testState since the last updateInputState or flip call. If multiple button names are specified as a list or tuple to buttons, multiple button states are tested, returning True if all the buttons presently satisfy the testState.
Note that not all controllers available share the same buttons. If a button is not available, this function will always return False.
buttons (list of str or str) – Buttons to test. Valid button names are 'A', 'B', 'RThumb', 'RShoulder', 'X', 'Y', 'LThumb', 'LShoulder', 'Up', 'Down', 'Left', 'Right', 'Enter', 'Back', 'VolUp', 'VolDown', and 'Home'. Names can be passed as a list to test multiple button states.
controller (str) – Controller name.
testState (str) –
State to test. Valid values are:
continuous - Button is presently being held down.
rising or pressed - Button has been pressed since the last update.
falling or released - Button has been released since the last update.
Button state and timestamp in seconds the controller was polled.
Examples
Check if the ‘Enter’ button on the Oculus remote was released:
isPressed, tsec = hmd.getButtons(['Enter'], 'Remote', 'falling')
Check if the ‘A’ button was pressed on the touch controller:
isPressed, tsec = hmd.getButtons(['A'], 'Touch', 'pressed')
Get the scaling factor required for scaling correctly on high-DPI displays.
If the returned value is 1.0, no scaling needs to be applied to objects drawn on the backbuffer. A value >1.0 indicates that the backbuffer is larger than the reported client area, requiring points to be scaled to maintain constant size across similarly sized displays. In other words, the scaling required to convert framebuffer to client coordinates.
Scaling factor to be applied along both horizontal and vertical dimensions.
Examples
Get the size of the client area:
clientSize = win.frameBufferSize / win.getContentScaleFactor()
Get the framebuffer size from the client size:
frameBufferSize = win.clientSize * win.getContentScaleFactor()
Convert client (window) to framebuffer pixel coordinates (eg., a mouse coordinate, vertices, etc.):
# `mousePosXY` is an array ...
frameBufferXY = mousePosXY * win.getContentScaleFactor()
# you can also use the attribute ...
frameBufferXY = mousePosXY * win.contentScaleFactor
Notes
This value is only valid after the window has been fully realized.
Get the pose of a tracked device. For head (HMD) and hand poses (Touch controllers) it is better to use getTrackingState() instead.
deviceName (str) – Name of the device. Valid device names are: ‘HMD’, ‘LTouch’, ‘RTouch’, ‘Touch’, ‘Object0’, ‘Object1’, ‘Object2’, and ‘Object3’.
absTime (float, optional) – Absolute time in seconds the device pose refers to. If not specified, the predicted time is used.
latencyMarker (bool) – Insert a marker for motion-to-photon latency calculation. Should only be True if the HMD pose is being used to compute eye poses.
Pose state object. None if device tracking was lost.
LibOVRPoseState or None
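A short usage sketch, following the attribute-access patterns shown in the getTrackingState examples below:
poseState = hmd.getDevicePose('Object0')
if poseState is not None:  # None indicates tracking was lost
    pos, ori = poseState.thePose.posOri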
The expected time of the next screen refresh. This is currently calculated as win._lastFrameTime + refreshInterval
targetTime (float) – The delay from now for which you want the flip time. 0 will give the next flip time, because that is the earliest we can achieve; 0.15 will give the scheduled flip time that is as close to 150 ms from now as possible.
clock (None, 'ptb', 'now' or any Clock object) – If 'ptb' then the time returned is compatible with ptb.GetSecs().
verbose (bool) – Set to True to view the calculations along the way
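For example:
# earliest achievable flip time, on PsychoPy's default clock
nextFlip = win.getFutureFlipTime()
# flip closest to 150 ms from now, compatible with ptb.GetSecs()
soundStartTime = win.getFutureFlipTime(targetTime=0.15, clock='ptb')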
Get the values of the hand triggers.
controller (str) – Name of the controller to get hand trigger values. Possible values for controller are 'Touch', 'RTouch', 'LTouch', 'Object0', 'Object1', 'Object2', and 'Object3'; the only devices with hand triggers the SDK manages. For additional controllers, use PsychoPy's built-in event or hardware support.
deadzone (bool) – Apply the deadzone to hand trigger values. This pre-filters stick input to apply a dead-zone where 0.0 will be returned if the trigger returns a displacement within 0.2746.
Left and right hand trigger values. Displacements are represented as a tuple of two floats for the left and right displacement values, which range from 0.0 to 1.0. The returned values reflect the controller state since the last updateInputState or flip call.
Get the values of the index triggers.
controller (str) – Name of the controller to get index trigger values. Possible values for controller are 'Xbox', 'Touch', 'RTouch', 'LTouch', 'Object0', 'Object1', 'Object2', and 'Object3'; the only devices with index triggers the SDK manages. For additional controllers, use PsychoPy's built-in event or hardware support.
deadzone (bool) – Apply the deadzone to index trigger values. This pre-filters stick input to apply a dead-zone where 0.0 will be returned if the trigger returns a displacement within 0.2746.
Left and right index trigger values. Displacements are represented as a tuple of two floats for the left and right displacement values, which range from 0.0 to 1.0. The returned values reflect the controller state since the last updateInputState or flip call.
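For example:
# left and right trigger displacements, each from 0.0 to 1.0
leftVal, rightVal = hmd.getIndexTriggerValues('Touch')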
Capture the current HMD frame as an image.
Saves to the stack for saveMovieFrames(). As of v1.81.00 this also returns the frame as a PIL image.
This can be done at any time (usually after a flip() command).
Frames are stored in memory until a saveMovieFrames() command is issued. You can issue getMovieFrame() as often as you like and then save them all in one go when finished.
For HMD frames, you should call getMovieFrame after calling flip to ensure that the mirror texture saved reflects what is presently being shown on the HMD. Note, that this function is somewhat slow and may impact performance. Only call this function when you’re not collecting experimental data.
buffer (str, optional) – Buffer to capture. For the HMD, only ‘mirror’ is available at this time.
Buffer pixel contents as a PIL/Pillow image object.
Image
Assesses the monitor refresh rate (average, median, SD) under current conditions, over at least 60 frames.
Records the time for each refresh (frame) for n frames (at least 60), while displaying an optional visual. The visual is just eye-candy to show that something is happening when assessing many frames. You can also give it text to display instead of a visual, e.g., msg='(testing refresh rate...)'; setting msg implies showVisual == False.
To simulate refresh rate under CPU load, you can specify a time to wait within the loop prior to doing the flip(). If 0 < msDelay < 100, wait for that long in ms.
Returns timing stats (in ms) of:
- average time per frame, for all frames
- standard deviation of all frames
- median, as the average of 12 frame times around the median (~monitor refresh rate)
2010 written by Jeremy Gray
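A hedged usage sketch (the parameter names shown are assumptions based on the description above):
avg, sd, median = win.getMsPerFrame(nFrames=120, msg='(testing refresh rate...)')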
Get the predicted time the next frame will be displayed on the HMD. The returned time is referenced to the clock LibOVR is using.
Absolute frame mid-point time for the given frame index in seconds.
Get controller thumbstick values.
controller (str) – Name of the controller to get thumbstick values. Possible values for controller are 'Xbox', 'Touch', 'RTouch', 'LTouch', 'Object0', 'Object1', 'Object2', and 'Object3'; the only devices with thumbsticks the SDK manages. For additional controllers, use PsychoPy's built-in event or hardware support.
deadzone (bool) – Apply the deadzone to thumbstick values. This pre-filters stick input to apply a dead-zone where 0.0 will be returned if the sticks return a displacement within -0.2746 to 0.2746.
Left and right, X and Y thumbstick values. Axis displacements are represented in each tuple by floats ranging from -1.0 (full left/down) to 1.0 (full right/up). The returned values reflect the controller state since the last updateInputState or flip call.
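A usage sketch (assuming the return value unpacks into left and right (x, y) tuples, per the description above):
leftStick, rightStick = hmd.getThumbstickValues('Touch', deadzone=True)
lx, ly = leftStick  # each axis ranges from -1.0 to 1.0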
Absolute time in seconds. The returned time is referenced to the clock LibOVR is using.
Time in seconds.
Get touch states from a controller.
Returns True if any names specified to touches reflect testState since the last updateInputState or flip call. If multiple touch names are specified as a list or tuple to touches, multiple touch states are tested, returning True if all the touches presently satisfy the testState.
Note that not all controllers available support touches. If a touch is not supported or available, this function will always return False.
Special states can be used for basic gesture recognition, such as 'LThumbUp', 'RThumbUp', 'LIndexPointing', and 'RIndexPointing'.
touches (list of str or str) – Touches to test. Valid touch names are 'A', 'B', 'RThumb', 'RThumbRest', 'RThumbUp', 'RIndexPointing', 'LThumb', 'LThumbRest', 'LThumbUp', 'LIndexPointing', 'X', and 'Y'. Names can be passed as a list to test multiple touch states.
controller (str) – Controller name.
testState (str) –
State to test. Valid values are:
continuous - User is touching something on the controller.
rising or pressed - User began touching something since the last call to updateInputState.
falling or released - User stopped touching something since the last call to updateInputState.
Touch state and timestamp in seconds the controller was polled.
Examples
Check if the user is touching the 'A' button on the touch controller:
isTouched, tsec = hmd.getTouches(['A'], 'Touch', 'continuous')
Check if the user began touching the 'A' button since the last update:
isTouched, tsec = hmd.getTouches(['A'], 'Touch', 'pressed')
Get tracker information.
trackerIdx (int) – Tracker index, ranging from 0 to trackerCount.
Object containing tracker information.
LibOVRTrackerInfo
IndexError – Raised when trackerIdx out of range.
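For example, iterating over all attached trackers with the trackerCount attribute:
for i in range(hmd.trackerCount):
    trackerInfo = hmd.getTrackerInfo(i)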
Get the tracking state of the head and hands.
Calling this function retrieves the tracking state of the head (HMD) and hands at absTime from the LibOVR runtime. The returned object is a LibOVRTrackingState instance with poses, motion derivatives (i.e. linear and angular velocity/acceleration), and tracking status flags accessible through its attributes.
The pose states of the head and hands are available by accessing the headPose and handPoses attributes, respectively.
Tracking state object. For more information about this type see: LibOVRTrackingState
See also: getPredictedDisplayTime – Time at mid-frame for the current frame index.
Examples
Get the tracked head pose and use it to calculate render eye poses:
# get tracking state at predicted mid-frame time
absTime = getPredictedDisplayTime()
trackingState = hmd.getTrackingState(absTime)
# get the head pose from the tracking state
headPose = trackingState.headPose.thePose
hmd.calcEyePoses(headPose) # compute eye poses
Get linear/angular velocity and acceleration vectors of the right touch controller:
# right hand is the second value (index 1) at `handPoses`
rightHandState = trackingState.handPoses[1] # is `LibOVRPoseState`
# access `LibOVRPoseState` fields to get the data
linearVel = rightHandState.linearVelocity # m/s
angularVel = rightHandState.angularVelocity # rad/s
linearAcc = rightHandState.linearAcceleration # m/s^2
angularAcc = rightHandState.angularAcceleration # rad/s^2
# extract components like this if desired
vx, vy, vz = linearVel
ax, ay, az = angularVel
Above is useful for physics simulations, where one can compute the magnitude and direction of a force applied to a virtual object.
It’s often the case that object tracking becomes unreliable for some reason, for instance, if it becomes occluded and is no longer visible to the sensors. In such cases, the reported pose state is invalid and may not be useful. You can check if the position and orientation of a tracked object is invalid using flags associated with the tracking state. This shows how to check if head position and orientation tracking was valid when sampled:
if trackingState.positionValid and trackingState.orientationValid:
    print('Tracking valid.')
It’s up to the programmer to determine what to do in such cases. Note that tracking may still be valid even if
Get the calibrated origin used for tracking during the sample period of the tracking state:
calibratedOrigin = trackingState.calibratedOrigin
calibPos, calibOri = calibratedOrigin.posOri
Time integrate a tracking state. This extrapolates the pose over time given the present computed motion derivatives. The contrived example below shows how to implement head pose forward prediction:
# get current system time
absTime = getTimeInSeconds()
# get the elapsed time from `absTime` to predicted v-sync time,
# again this is an example, you would usually pass predicted time to
# `getTrackingState` directly.
dt = getPredictedDisplayTime() - absTime
# get the tracking state for the current time, poses will lag where
# they are expected at predicted time by `dt` seconds
trackingState = hmd.getTrackingState(absTime)
# time integrate a pose by `dt`
headPoseState = trackingState.headPose
headPosePredicted = headPoseState.timeIntegrate(dt)
# calc eye poses with predicted head pose; this is a custom pose so
# head-locking should be enabled!
hmd.calcEyePoses(headPosePredicted)
The resulting head pose is usually very close to what getTrackingState would return if the predicted time was used. Simple forward prediction with time integration becomes increasingly unstable as the prediction interval increases. Under normal circumstances, let the runtime handle forward prediction by using the pose states returned at the predicted display time. If you plan on doing your own forward prediction, you need to enable head-locking, clamp the prediction interval, and apply some sort of smoothing to keep the image as stable as possible.
True if the application currently has input focus.
True if this HMD supports yaw drift correction.
True if the HMD is capable of tracking orientation.
True if the HMD is capable of tracking position.
True if head locking is enabled.
USB human interface device (HID) identifiers (int, int).
Remove any message that is currently being displayed.
Hide the visual indicator which shows we are in piloting mode.
True if the HMD is mounted on the user’s head.
True if the HMD is present.
True if the VR boundary is visible.
Check if a pose object is visible to the present eye. This method can be used to perform visibility culling to avoid executing draw commands for objects that fall outside the FOV for the current eye buffer.
If boundingBox has a valid bounding box object, this function will return False if all the box points fall completely to one side of the view frustum. If boundingBox is None, the point at pos is checked, returning False if it falls outside of the frustum. If the present buffer is not 'left' or 'right', this function will always return False.
pose (LibOVRPose) – Pose to test for visibility.
True if the pose's bounding box or origin falls within the view frustum of the present eye.
True if the app has focus in the HMD and is visible to the viewer.
Scene lights.
This is specified as an array of ~psychopy.visual.LightSource objects. If a single value is given, it will be converted to a list before setting. Set useLights to True before rendering to enable lighting/shading on subsequent objects. If lights is None or an empty list, no lights will be enabled if useLights=True; however, the scene ambient light set with ambientLight will still be used.
Examples
Create a directional light source and add it to scene lights:
dirLight = gltools.LightSource((0., 1., 0.), lightType='directional')
win.lights = dirLight # `win.lights` will be a list when accessed!
Multiple lights can be specified by passing values as a list:
myLights = [gltools.LightSource((0., 5., 0.)),
            gltools.LightSource((-2., -2., 0.))]
win.lights = myLights
Send a log message that should be time-stamped at the next flip() command.
Get the connected HMD’s manufacturer (str).
Returns the visibility of the mouse cursor.
Flip multiple times while maintaining the display constant. Use this method for precise timing.
WARNING: This function should not be used. See the Notes section for details.
Notes
This function can behave unpredictably, and the PsychoPy authors recommend against using it. See https://github.com/psychopy/psychopy/issues/867 for more information.
Examples
Example of using multiFlip:
# Draws myStim1 to buffer
myStim1.draw()
# Show stimulus for 6 frames (100 ms at 60Hz)
myWin.multiFlip(clearBuffer=False, flips=6)
# Draw myStim2 "on top of" myStim1
# (because buffer was not cleared above)
myStim2.draw()
# Show this for 2 frames (30 ms at 60Hz)
myWin.multiFlip(flips=2)
# Show blank screen for 3 frames (buffer was cleared above)
myWin.multiFlip(flips=3)
Multiply the current projection matrix obtained from the SDK using glMultMatrixf(). The projection matrix used depends on the current eye buffer set by setBuffer().
Multiply the local eye pose transformation matrix obtained from the SDK using glMultMatrixf(). The matrix used depends on the current eye buffer set by setBuffer().
None
Distance to the near clipping plane in meters.
Moves focus of the cursor to the next editable window
A default resize event handler.
This default handler updates the GL viewport to cover the entire window and sets the GL_PROJECTION matrix to be orthogonal in window space. The bottom-left corner is (0, 0) and the top-right corner is the width and height of the Window in pixels.
Override this event handler with your own to create another projection, for example in perspective.
Set the performance HUD mode.
mode (str) – HUD mode to use.
Horizontal and vertical pixels per tangent angle (=1) at the center of the display.
This can be used to compute pixels-per-degree for the display.
Get the HMD’s product name (str).
Get the projection matrix for the current eye buffer. Note that setting projectionMatrix manually will break visibility culling.
Record time elapsed per frame.
Provides accurate measures of frame intervals to determine whether frames are being dropped. The intervals are the times between calls to flip(). Set to True only during the time-critical parts of the script. Set this to False while the screen is not being updated, i.e., during any slow, non-frame-time-critical sections of your code, including inter-trial intervals, event.waitKeys(), core.wait(), or image.setImage().
Examples
Enable frame interval recording, successive frame intervals will be stored:
win.recordFrameIntervals = True
Frame intervals can be saved by calling the saveFrameIntervals method:
win.saveFrameIntervals()
Restore the default projection and view settings to PsychoPy defaults. Call this prior to drawing 2D stimuli objects (i.e. GratingStim, ImageStim, Rect, etc.) if any eye transformations were applied for the stimuli to be drawn correctly.
clearDepth (bool) – Clear the depth buffer upon reset. This ensures successive draw commands are not affected by previous data written to the depth buffer. Default is True.
Notes
Calling flip() automatically resets the view and projection to defaults, so you don't need to call this unless you are mixing 3D and 2D stimuli.
Examples
Going between 3D and 2D stimuli:
# 2D stimuli can be drawn before setting a perspective projection
win.setPerspectiveView()
# draw 3D stimuli here ...
win.resetEyeTransform()
# 2D stimuli can be drawn here again ...
win.flip()
Reset the viewport to cover the whole framebuffer.
Set the viewport to match the dimensions of the back buffer or framebuffer (if useFBO=True). The scissor rectangle is also set to match the dimensions of the viewport.
Add all stimuli which are on ‘hold’ back into the autoDraw list, and clear the hold list.
Save recorded screen frame intervals to disk, as comma-separated values.
fileName (None or str) – None or the filename (including path if necessary) in which to store the data. If None then ‘lastFrameIntervals.log’ will be used.
clear (bool) – Clear the buffer in which frame intervals were stored after saving. Default is True.
Writes any captured frames to disk.
Will write any format that is understood by PIL (tif, jpg, png, …)
filename (str) – Name of file, including path. The extension at the end of the file determines the type of file(s) created. If an image type (e.g. .png) is given, then multiple static frames are created. If it is .gif then an animated GIF image is created (although you will get higher quality GIF by saving PNG files and then combining them in dedicated image manipulation software, such as GIMP). On Windows and Linux .mpeg files can be created if pymedia is installed. On macOS .mov files can be created if the pyobjc-frameworks-QTKit is installed. Unfortunately the libs used for movie generation can be flaky and poor quality. As for animated GIFs, better results can be achieved by saving as individual .png frames and then combining them into a movie using software like ffmpeg.
codec (str, optional) – The codec to be used by moviepy for mp4/mpg/mov files. If None then the default will depend on file extension. Can be one of libx264 or mpeg4 for mp4/mov files, rawvideo or png for avi files (not recommended), or libvorbis for ogv files. Default is libx264.
fps (int, optional) – The frame rate to be used throughout the movie. Only for quicktime (.mov) movies. Default is 30.
clearFrames (bool, optional) – Set this to False if you want the frames to be kept for additional calls to saveMovieFrames. Default is True.
Examples
Writes a series of static frames as frame001.tif, frame002.tif etc.:
myWin.saveMovieFrames('frame.tif')
As of PsychoPy 1.84.1 the following are written with moviepy:
myWin.saveMovieFrames('stimuli.mp4') # codec = 'libx264' or 'mpeg4'
myWin.saveMovieFrames('stimuli.mov')
myWin.saveMovieFrames('stimuli.gif')
Scissor rectangle (x, y, w, h) for the current draw buffer.
Values x and y define the origin, and w and h the size of the rectangle in pixels. The scissor operation is only active if scissorTest=True.
Usually, the scissor and viewport are set to the same rectangle to prevent drawing operations from spilling into other regions of the screen. For instance, calling clearBuffer will only clear within the scissor rectangle.
Setting the scissor rectangle but not the viewport will restrict drawing within the defined region (like a rectangular aperture), not changing the positions of stimuli.
True if scissor testing is enabled.
Sensor sample time (float). This value corresponds to the time the head (HMD) position was sampled, which is required for computing motion-to-photon latency. This does not need to be specified if getTrackingState was called with latencyMarker=True.
Get the connected HMD’s unique serial number (str).
Use this to identify a particular unit if you own many.
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Set the active draw buffer.
Warning
The window.Window.size property will return the buffer’s dimensions in pixels instead of the window’s when setBuffer is set to ‘left’ or ‘right’.
buffer (str) – View buffer to divert successive drawing operations to, can be either ‘left’ or ‘right’.
clear (boolean) – Clear the color, stencil and depth buffer.
Usually you can use stim.attribute = value syntax instead, but use this method if you want to set color and colorSpace simultaneously.
See color for documentation on colors.
Return to default projection. Call this before drawing PsychoPy’s 2D stimuli after a stereo projection change.
Note: This only has an effect if using Rift in legacy immediate mode OpenGL.
clearDepth (bool) – Clear the depth buffer after configuring the view parameters.
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Change the appearance of the cursor for this window. Cursor types provide contextual hints about how to interact with on-screen objects.
The graphics use 'standard cursors' provided by the operating system. They may vary in appearance and hot spot location across platforms. The following names are valid on most platforms:
- arrow: Default pointer.
- ibeam: Indicates text can be edited.
- crosshair: Crosshair with hot-spot at center.
- hand: A pointing hand.
- hresize: Double arrows pointing horizontally.
- vresize: Double arrows pointing vertically.
name (str) – Type of standard cursor to use (see above). Default is arrow.
Notes
On Windows the crosshair option is negated with the background color. It will not be visible when placed over 50% grey fields.
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Set an off-axis projection.
Create an off-axis projection for subsequent rendering calls. Sets the viewMatrix and projectionMatrix accordingly so the scene origin is on the screen plane. If eyeOffset is correct and the view distance and screen size is defined in the monitor configuration, the resulting view will approximate ortho-stereo viewing.
The convergence plane can be adjusted by setting convergeOffset. By default, the convergence plane is set to the screen plane. Any points on the screen plane will have zero disparity.
Set the projection and view matrix to render with perspective.
Matrices are computed using values specified in the monitor configuration with the scene origin on the screen plane. Calculations assume units are in meters. If eyeOffset != 0, the view will be transformed laterally, however the frustum shape will remain the same.
Note that the values of projectionMatrix and viewMatrix will be replaced when calling this function.
Deprecated: As of v1.61.00 please use setColor() instead
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Set head-mounted display view. Gets the projection and view matrices from the HMD and applies them.
Note: This only has an effect if using Rift in legacy immediate mode OpenGL.
clearDepth (bool) – Clear the depth buffer after configuring the view parameters.
DEPRECATED: this method used to be used to switch between units for stimulus drawing but this is now handled by the stimuli themselves and the window should always be left in units of ‘pix’
Configure stereo debug HUD guides.
option (str) – Option to set. Valid options are InfoEnable, Size, Position, YawPitchRoll, and Color.
value (array_like or bool) –
Value to set for a given option. Appropriate types for each option are:
InfoEnable - bool, True to show, False to hide.
Size - array_like, [w, h] in meters.
Position - array_like, [x, y, z] in meters.
YawPitchRoll - array_like, [pitch, yaw, roll] in degrees.
Color - array_like, [r, g, b] as floats ranging 0.0 to 1.0.
True if the option was successfully set.
Examples
Configuring a stereo debug HUD guide:
# show a quad with a crosshair
hmd.stereoDebugHudMode('QuadWithCrosshair')
# enable displaying guide information
hmd.setStereoDebugHudOption('InfoEnable', True)
# set the position of the guide quad in the scene
hmd.setStereoDebugHudOption('Position', [0.0, 1.7, -2.0])
Set toe-in projection.
Create a toe-in projection for subsequent rendering calls. Sets the viewMatrix and projectionMatrix accordingly so the scene origin is on the screen plane. The value of convergeOffset will define the convergence point of the view, which is offset perpendicular to the center of the screen plane. Points falling on a vertical line at the convergence point will have zero disparity.
Notes
This projection mode is only ‘correct’ if the viewer’s eyes are converged at the convergence point. Due to perspective, this projection introduces vertical disparities which increase in magnitude with eccentricity. Use setOffAxisView if you want to display something the viewer can look around the screen comfortably.
True if the user requested the application should quit through the headset’s interface.
True if the user requested the origin be re-centered through the headset’s interface.
Show a message in the window. This can be used to show information to the participant.
This creates a TextBox2 object that is displayed in the window. The text can be updated by calling this method again with a new message. The updated text will appear the next time draw() is called.
msg (str or None) – Message text to display. If None, then any existing message is removed.
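For example:
win.showMessage('Please put on the headset to begin')
# ... later, remove the message
win.showMessage(None)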
Show the visual indicator which shows we are in piloting mode.
Size property to get the dimensions of the view buffer instead of the window. If there are no view buffers, always return the dims of the window.
Specify a tracking origin. If trackingOriginType=’floor’, this function sets the origin of the scene in the ground plane. If trackingOriginType=’eye’, the scene origin is set to the known eye height.
pose (LibOVRPose) – Tracking origin pose.
Specify a tracking origin using a pose and orientation. This is the same as specifyTrackingOrigin, but accepts a position vector [x, y, z] and orientation quaternion [x, y, z, w].
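For example, setting an origin using a pose created with createPose (a sketch based on the methods documented here):
originPose = Rift.createPose((0., 0., 0.))
hmd.specifyTrackingOrigin(originPose)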
Start haptic feedback (vibration).
Vibration is constant at fixed frequency and amplitude. Vibration lasts 2.5 seconds, so this function needs to be called more often than that for sustained vibration. Only controllers which support vibration can be used here.
There are only two frequencies permitted, 'high' and 'low'; however, amplitude can vary from 0.0 to 1.0. Specifying frequency='off' stops vibration if in progress.
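An illustrative sketch (the argument names and order here are assumptions, not documented above):
hmd.startHaptics('RTouch', frequency='low', amplitude=0.5)  # assumed signature
# re-issue within 2.5 seconds for sustained vibration, or stop it:
hmd.stopHaptics('RTouch')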
Put autoDraw components on ‘hold’, meaning they get autoDraw set to False but are added to an internal list to be ‘released’ when .releaseAutoDraw is called.
True if stencil testing is enabled.
Set the debug stereo HUD mode.
This makes the compositor add stereoscopic reference guides to the scene. The HUD can be configured using other methods.
mode (str) – Stereo debug mode to use. Valid options are Off, Quad, QuadWithCrosshair, and CrosshairAtInfinity.
Examples
Enable a stereo debugging guide:
hmd.stereoDebugHudMode('CrosshairAtInfinity')
Hide the debugging guide. Should be called before exiting the application since it’s persistent until the Oculus service is restarted:
hmd.stereoDebugHudMode('Off')
Stop haptic feedback.
Convenience function to stop controller vibration initiated by the last vibrateController call. This is the same as calling vibrateController(controller, frequency='off').
controller (str) – Name of the controller to stop vibrating.
Submit a haptics buffer to begin controller vibration.
controller (str) – Name of controller to vibrate.
hapticsBuffer (LibOVRHapticsBuffer) – Haptics buffer to playback.
Notes
Methods startHaptics and stopHaptics cannot be used interchangeably with this function.
Convert tan angles to the normalized device coordinates for the current buffer.
Test if tracked devices are colliding with the play area boundary.
This returns an object containing test result data.
Retrieves the time on the next flip and assigns it to the attrib for this obj.
Examples
Assign time on flip to the tStartRefresh key of myTimingDict:
win.getTimeOnFlip(myTimingDict, 'tStartRefresh')
Number of attached trackers.
Current tracking origin type (str).
Valid tracking origin types are ‘floor’ and ‘eye’.
None, 'height' (of the window), 'norm', 'deg', 'cm', or 'pix'. Defines the default units of stimuli initialized in the window, i.e. if you change units, already initialized stimuli won't change their units.
Can be overridden by each stimulus, if units is specified on initialization.
See Units for the window and stimuli for explanation of options.
Deprecated: use Window.flip() instead
Update all connected controller states. This updates controller input states for an input device managed by LibOVR.
The polling time for each device is accessible through the controllerPollTimes attribute. This attribute returns a dictionary where the polling time from the last updateInputState call for a given controller can be retrieved by using the name as a key.
controllers (tuple or list, optional) – List of controllers to poll. If None, all available controllers will be polled.
Examples
Poll the state of specific controllers by name:
controllers = ['XBox', 'Touch']
updateInputState(controllers)
Explicitly update scene lights if they were modified.
This is required if objects referenced in lights have been modified since assignment. If you removed or added items of lights you must refresh all of them.
index (int, optional) – Index of light source in lights to update. If None, all lights will be refreshed.
Examples
Call updateLights if you modified lights directly like this:
win.lights[1].diffuseColor = [1., 0., 0.]
win.updateLights(1)
Enable scene lighting.
Lights will be enabled if using legacy OpenGL lighting. Stimuli using shaders for lighting should check if useLights is True since this will have no effect on them, and disable or use a no lighting shader instead. Lights will be transformed to the current view matrix upon setting to True.
Lights are transformed by the present GL_MODELVIEW matrix. Setting useLights will result in their positions being transformed by it. If you want lights to appear at the specified positions in world space, make sure the current matrix defines the view/eye transformation when setting useLights=True.
This flag is reset to False at the beginning of each frame. Should be False if rendering 2D stimuli or else the colors will be incorrect.
Get user height in meters (float).
The view matrix for the current eye buffer. Only valid after a calcEyePoses() call. Note that setting viewMatrix manually will break visibility culling.
The origin of the window onto which stimulus-objects are drawn.
The value should be given in the units defined for the window. NB: Never change a single component (x or y) of the origin, instead replace the viewPos-attribute in one shot, e.g.:
win.viewPos = [new_xval, new_yval] # This is the way to do it
win.viewPos[0] = new_xval # DO NOT DO THIS! Errors will result.
Viewport rectangle (x, y, w, h) for the current draw buffer.
Values x and y define the origin, and w and h the size of the rectangle in pixels.
This is typically set to cover the whole buffer; however, it can be changed for applications like multi-view rendering. Stimuli will draw according to the new shape of the viewport; for instance, a stimulus with position (0, 0) will be drawn at the center of the viewport, not the window.
Examples
Constrain drawing to the left and right halves of the screen, where stimuli will be drawn centered on the new rectangle. Note that you need to set both the viewport and the scissor rectangle:
x, y, w, h = win.frameBufferSize # size of the framebuffer
win.viewport = win.scissor = [x, y, w / 2.0, h]
# draw left stimuli ...
win.viewport = win.scissor = [x + (w / 2.0), y, w / 2.0, h]
# draw right stimuli ...
# restore drawing to the whole screen
win.viewport = win.scissor = [x, y, w, h]
After a call to flip(), should we wait for the blank before the script continues?
Size of the window to use when not fullscreen (w, h).