psychopy.visual.VisualSystemHD
Classes for using NordicNeuralLab’s VisualSystemHD in-scanner display for presenting visual stimuli. Support is preliminary, so users must empirically verify whether the default settings for barrel distortion and FOV are correct. Support may be good enough at this point for studies that do not require precise stereoscopy or stimulus sizes.
Summary of the class and its main attributes and methods:
VisualSystemHD – Class provides support for NordicNeuralLab's VisualSystemHD(tm) fMRI display hardware.
monoscopic – True if using monoscopic mode.
lensCorrection – True if using lens correction.
distCoef – Distortion coefficient (float).
diopters – Diopters value of the current eye buffer.
setDiopters – Set the diopters for a given eye.
eyeOffset – Eye offset for the current buffer in centimeters used for stereoscopic rendering.
setEyeOffset – Set the eye offset in centimeters.
setBuffer – Set the eye buffer to draw to.
setPerspectiveView – Set the projection and view matrix to render with perspective.
Class provides support for NordicNeuralLab’s VisualSystemHD(tm) fMRI display hardware. This is a lazy-imported class; to inherit from it, import it using the full path: from psychopy.visual.nnlvs import VisualSystemHD.
Use this class in place of the Window class when using the VSHD hardware. Ensure that the VSHD headset display output is configured in extended desktop mode (e.g., NVIDIA Surround). Extended desktops are only supported on Windows and Linux systems.
The VSHD is capable of both 2D and stereoscopic 3D rendering. You can select which eye to draw to by calling setBuffer, much like how stereoscopic rendering is implemented in the base Window class.
Notes
This class handles drawing differently than the default Window class; as a result, stimulus autoDraw is not supported.
Edges of the warped image may appear jagged. To correct this, create a window using multiSample=True and numSamples > 1 to smooth out these artifacts.
Examples
Here is a basic example of 2D rendering using the VisualSystemHD(tm). This is the binocular version of the dynamic ‘plaid.py’ demo:
from psychopy import visual, core, event

# Create a visual window
win = visual.VisualSystemHD(fullscr=True, screen=1)

# Initialize some stimuli, note contrast, opacity, ori
grating1 = visual.GratingStim(win, mask="circle", color='white',
    contrast=0.5, size=(1.0, 1.0), sf=(4, 0), ori=45, autoLog=False)
grating2 = visual.GratingStim(win, mask="circle", color='white',
    opacity=0.5, size=(1.0, 1.0), sf=(4, 0), ori=-45, autoLog=False,
    pos=(0.1, 0.1))

trialClock = core.Clock()
t = 0
while not event.getKeys() and t < 20:
    t = trialClock.getTime()

    for eye in ('left', 'right'):
        win.setBuffer(eye)  # change the buffer
        grating1.phase = 1 * t  # drift at 1Hz
        grating1.draw()  # redraw it
        grating2.phase = 2 * t  # drift at 2Hz
        grating2.draw()  # redraw it

    win.flip()

win.close()
core.quit()
As you can see above, few changes are needed to convert an existing 2D experiment to run on the VSHD. For 3D rendering with perspective, you need to set eyeOffset and apply the projection by calling setPerspectiveView (other projection modes are not implemented or supported right now):
from psychopy import visual, core, event

# Create a visual window
win = visual.VisualSystemHD(fullscr=True, screen=1,
    multiSample=True, numSamples=8)

# text to display
instr = visual.TextStim(win, text="Any key to quit", pos=(0, -.7))

# create scene light at the pivot point
win.lights = [
    visual.LightSource(win, pos=(0.4, 4.0, -2.0), lightType='point',
                       diffuseColor=(0, 0, 0), specularColor=(1, 1, 1))
]
win.ambientLight = (0.2, 0.2, 0.2)

# Initialize some stimuli, note contrast, opacity, ori
ball = visual.SphereStim(win, radius=0.1, pos=(0, 0, -2), color='green',
    useShaders=False)

iod = 6.2  # interocular separation in CM
win.setEyeOffset(-iod / 2.0, 'left')
win.setEyeOffset(iod / 2.0, 'right')

trialClock = core.Clock()
t = 0
while not event.getKeys() and t < 20:
    t = trialClock.getTime()

    for eye in ('left', 'right'):
        win.setBuffer(eye)  # change the buffer

        # setup drawing with perspective
        win.setPerspectiveView()
        win.useLights = True  # switch on lights
        ball.draw()  # draw the ball
        # shut off the lights, needed to prevent light color from
        # affecting 2D stim
        win.useLights = False

        # reset transform to draw text correctly
        win.resetEyeTransform()
        instr.draw()

    win.flip()

win.close()
core.quit()
monoscopic (bool) – Use monoscopic rendering. If True, the same image will be drawn to both eye buffers. You will not need to call setBuffer. It is not possible to set monoscopic mode after the window is created. It is recommended that you use monoscopic mode if you intend to display only 2D stimuli about the center of the display as it uses a less memory intensive rendering pipeline.
diopters (tuple or list) – Initial diopter values for the left and right eye. Default is (-1, -1), values must be integers.
lensCorrection (bool) – Apply lens correction (barrel distortion) to the output. The amount of distortion applied can be specified using distCoef. If False, no distortion will be applied to the output and the entire display will be used. Not applying correction will result in pincushion distortion which produces a non-rectilinear output.
distCoef (float) – Distortion coefficient for barrel distortion. If None, the recommended value will be used for the model of display. You can adjust the value to fine-tune the barrel distortion.
directDraw (bool) – Direct drawing mode. Stimuli are drawn directly to the back buffer instead of creating separate buffer. This saves video memory but does not permit barrel distortion or monoscopic rendering. If False, drawing is done with two FBOs containing each eye’s image.
hwModel (str) – Model of the VisualSystemHD in use. Used to set viewing parameters accordingly. Default is ‘vshd’. Cannot be changed after starting the application.
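As an illustration of these constructor parameters, here is a minimal sketch of opening the display in monoscopic mode with lens correction (the values shown are illustrative, not recommendations):
from psychopy import visual

win = visual.VisualSystemHD(
    fullscr=True, screen=1,
    monoscopic=True,      # same image drawn to both eye buffers, no setBuffer needed
    lensCorrection=True,  # apply barrel distortion to counteract pincushion distortion
    distCoef=None,        # None uses the recommended coefficient for the display model
    hwModel='vshd')       # hardware model, cannot be changed after creation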
Helper function to assign the time of last flip to the obj.attrib
Warp and blit to the appropriate eye buffer.
eye (str) – Eye buffer being used.
Checks whether the requested and actual screen sizes differ. If they do, a warning is output and the window size is set to the actual size.
Make sure there are no dead refs in the editables list
Override end of flip with custom color channel masking if required.
Return the current Window as an image.
Return an array of pixel values from the current window buffer or sub-region.
rect (tuple[int], optional) – The region of the window to capture in pixel coordinates (left, bottom, width, height). If None, the whole window is captured.
buffer (str, optional) – Buffer to capture.
includeAlpha (bool, optional) – Include the alpha channel in the returned array. Default is True.
makeLum (bool, optional) – Convert the RGB values to luminance values. Values are rounded to the nearest integer. Default is False.
Pixel values as a 3D array of shape (height, width, channels). If includeAlpha is False, the array will have shape (height, width, 3). If makeLum is True, the array will have shape (height, width).
ndarray
Examples
Get the pixel values of the whole window:
pix = win._getPixels()
Get pixel values and convert to luminance and get average:
pix = win._getPixels(makeLum=True)
average = pix.mean()
Deprecated function, here for historical reasons. You may now use Window._getFrame() and specify a rect to get a sub-region, just as used here.
power2 can be useful with older OpenGL versions to avoid interpolation in PatchStim. If power2 or squarePower2, it will expand rect dimensions up to the next power of two. squarePower2 uses the max dimensions. You need to check what your hardware & OpenGL supports, and call _getRegionOfFrame() as appropriate.
Get the horizontal and vertical extents of the barrel distortion in normalized device coordinates. This is used to determine the FOV along each axis after barrel distortion.
eye (str) – Eye to compute the extents for.
2d array of coordinates [+X, -X, +Y, -Y] of the extents of the barrel distortion.
ndarray
Perform a warp operation (in this case a copy operation without any warping).
Make this window’s OpenGL context current.
If called on a window whose context is current, the function will return immediately. This reduces the number of redundant calls if no context switch is required. If useFBO=True, the framebuffer is bound after the context switch.
Setup OpenGL state for this window.
A private method to work out how to handle gamma for this Window given that the user might have specified an explicit value, or maybe gave a Monitor.
Custom _startOfFlip for HMD rendering. This finalizes the HMD texture before diverting drawing operations back to the on-screen window. This allows flip to swap the on-screen and HMD buffers when called. This function always returns True.
True
Adds an editable element to the screen (something to which characters can be sent with meaning from the keyboard).
The current editable object receiving chars is Window.currentEditable.
editable – The editable object to add.
Ambient light color for the scene [r, g, b, a]. Values range from 0.0 to 1.0. Only applicable if useLights is True.
Examples
Setting the ambient light color:
win.ambientLight = [0.5, 0.5, 0.5]
# don't do this!!!
win.ambientLight[0] = 0.5
win.ambientLight[1] = 0.5
win.ambientLight[2] = 0.5
Apply the current view and projection matrices.
Matrices specified by attributes viewMatrix and projectionMatrix are applied using ‘immediate mode’ OpenGL functions. Subsequent drawing operations will be affected until flip() is called.
All transformations in the GL_PROJECTION and GL_MODELVIEW matrix stacks will be cleared (set to identity) prior to applying.
clearDepth (bool) – Clear the depth buffer. This may be required prior to rendering 3D objects.
Examples
Using a custom view and projection matrix:
# Must be called every frame since these values are reset after
# `flip()` is called!
win.viewMatrix = viewtools.lookAt( ... )
win.projectionMatrix = viewtools.perspectiveProjectionMatrix( ... )
win.applyEyeTransform()
# draw 3D objects here ...
Aspect ratio of the current viewport (width / height).
How should the background image of this window fit? Options are:
No scaling is applied; the image is presented at its pixel size unaltered.
Image is scaled such that it covers the whole screen without changing its aspect ratio. In other words, both dimensions are evenly scaled such that its SHORTEST dimension matches the window’s LONGEST dimension.
Image is scaled such that it is contained within the screen without changing its aspect ratio. In other words, both dimensions are evenly scaled such that its LONGEST dimension matches the window’s SHORTEST dimension.
If image is bigger than the window along any dimension, it will behave as if backgroundFit were “contain”. Otherwise, it will behave as if backgroundFit were None.
Background image for the window, can be either a visual.ImageStim object or anything which could be passed to visual.ImageStim.image to create one. Will be drawn each time win.flip() is called, meaning it is always below all other contents of the window.
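A minimal sketch of combining backgroundImage and backgroundFit; the image path and the 'cover' fit value are illustrative assumptions rather than values documented above:
win.backgroundImage = 'background.png'  # anything ImageStim.image accepts (assumed path)
win.backgroundFit = 'cover'             # assumed option name for the "cover the screen" behaviour
win.flip()  # the background is drawn beneath all other stimuli on every flip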
Blend mode to use.
Call a function immediately after the next flip() command.
The first argument should be the function to call, the following args should be used exactly as you would for your normal call to the function (can use ordered arguments or keyword arguments as normal).
e.g. If you have a function that you would normally call like this:
pingMyDevice(portToPing, channel=2, level=0)
then you could call callOnFlip() to have the function call synchronized with the frame flip like this:
win.callOnFlip(pingMyDevice, portToPing, channel=2, level=0)
Remove all autoDraw components, meaning they get autoDraw set to False and are not added to any list (as they are in .stashAutoDraw).
Clear the present buffer (to which you are currently drawing) without flipping the window.
Useful if you want to generate movie sequences from the back buffer without actually taking the time to flip the window.
Set the window color prior to clearing to control the color the color buffer is cleared to. By default, the depth buffer is cleared to a value of 1.0.
Examples
Clear the color buffer to a specified color:
win.color = (1, 0, 0)
win.clearBuffer(color=True)
Clear only the depth buffer; depthMask must be True or else this will have no effect. Depth mask is usually True by default, but may change:
win.depthMask = True
win.clearBuffer(color=False, depth=True, stencil=False)
Close the window (and reset the Bits++ if necessary).
Set the color of the window.
This command sets the color that the blank screen will have on the next clear operation. As a result it effectively takes TWO flip() operations to become visible (the first uses the color to create the new screen, the second presents that screen to the viewer). For this reason, if you want to change the background color of the window “on the fly”, it might be a better idea to draw a Rect that fills the whole window with the desired Rect.fillColor attribute. That’ll show up on the first flip.
See other stimuli (e.g. GratingStim.color) for more info on the color attribute, which essentially works the same on all PsychoPy stimuli.
See Color spaces for further information about the ways to specify colors and their various implications.
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to specify colorSpace before setting the color. Example:
# A light green text
stim = visual.TextStim(win, 'Color me!',
color=(0, 1, 0), colorSpace='rgb')
# An almost-black text
stim.colorSpace = 'rgb255'
# Make it light green again
stim.color = (128, 255, 128)
Scaling factor (float) to use when drawing to the backbuffer to convert framebuffer to client coordinates.
Convergence offset from monitor in centimeters.
This value corresponds to the offset from the screen plane used to set the convergence plane (or point for toe-in projections). Positive offsets move the plane farther from the viewer, while negative offsets move it nearer. This value is used by setPerspectiveView and should be set before calling it to take effect.
Notes
This value is only applicable for setToeIn and setOffAxisView.
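A minimal sketch of adjusting the convergence plane before setting an off-axis view; the 5 cm offset is an arbitrary illustrative value:
win.convergeOffset = 5.0  # move the convergence plane 5 cm behind the screen plane
for eye in ('left', 'right'):
    win.setBuffer(eye)
    win.setOffAxisView()  # the offset takes effect when the view is set
    # draw 3D stimuli here ...
win.flip()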
Convert a screen coordinate to a direction vector.
Takes a screen/window coordinate and computes a vector which projects a ray from the viewpoint through it (line-of-sight). Any 3D point touching the ray will appear at the screen coordinate.
Uses the current viewport and projectionMatrix to calculate the vector. The vector is in eye-space, where the origin of the scene is centered at the viewpoint and the forward direction aligned with the -Z axis. A ray of (0, 0, -1) results from a point at the very center of the screen assuming symmetric frustums.
Note that if you are using a flipped/mirrored view, you must invert your supplied screen coordinates (screenXY) prior to passing them to this function.
screenXY (array_like) – X, Y screen coordinate. Must be in units of the window.
Normalized direction vector [x, y, z].
ndarray
Examples
Getting the direction vector between the mouse cursor and the eye:
mx, my = mouse.getPos()
dir = win.coordToRay((mx, my))
Set the position of a 3D stimulus object using the mouse, constrained to a plane. The object origin will always be at the screen coordinate of the mouse cursor:
# the eye position in the scene is defined by a rigid body pose
win.viewMatrix = camera.getViewMatrix()
win.applyEyeTransform()
# get the mouse location and calculate the intercept
mx, my = mouse.getPos()
ray = win.coordToRay([mx, my])
result = intersectRayPlane(  # from mathtools
    orig=camera.pos,
    dir=camera.transformNormal(ray),
    planeOrig=(0, 0, -10),
    planeNormal=(0, 1, 0))

# if result is `None`, there is no intercept
if result is not None:
    pos, dist = result
    objModel.thePose.pos = pos
else:
    objModel.thePose.pos = (0, 0, -10)  # plane origin
If you don’t define the position of the viewer with a RigidBodyPose, you can obtain the appropriate eye position and rotate the ray by doing the following:
pos = numpy.linalg.inv(win.viewMatrix)[:3, 3]
ray = win.coordToRay([mx, my]).dot(win.viewMatrix[:3, :3])
# then ...
result = intersectRayPlane(
    orig=pos,
    dir=ray,
    planeOrig=(0, 0, -10),
    planeNormal=(0, 1, 0))
True if face culling is enabled.
Face culling mode, either back, front or both.
The editable (Text?) object that currently has key focus
Depth test comparison function for rendering.
True if depth masking is enabled. Writing to the depth buffer will be disabled.
True if depth testing is enabled.
Diopters value of the current eye buffer.
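A hedged illustration of setting the diopter correction per eye; the argument order of setDiopters (value first, then eye) is assumed by analogy with setEyeOffset in the example above, and -1.5 is an arbitrary value:
win.setDiopters(-1.5, 'left')   # assumed signature: diopters value, then eye
win.setDiopters(-1.5, 'right')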
Dispatches events for all pyglet windows. Used by iohub 2.0 psychopy kb event integration.
Distortion coefficient (float).
True if 3D drawing is enabled on this window.
Eye offset for the current buffer in centimeters used for stereoscopic rendering. This works differently than the main window class as it sets the offset for the current buffer. The offset is saved and automatically restored when the buffer is selected.
Distance to the far clipping plane in meters.
Flip the front and back buffers after drawing everything for your frame. (This replaces the update() method, better reflecting what is happening underneath).
clearBuffer (bool, optional) – Clear the draw buffer after flipping. Default is True.
Wall-clock time in seconds the flip completed. Returns None if waitBlanking is False.
float or None
Notes
The time returned when waitBlanking is True corresponds to when the graphics driver releases the draw buffer to accept draw commands again. This time is usually close to the vertical sync signal of the display.
Examples
Results in a clear screen after flipping:
win.flip(clearBuffer=True)
The screen is not cleared (so it shows the previous screen):
win.flip(clearBuffer=False)
Report the frames per second since the last call to this function (or since the window was created if this is the first call).
Size of the framebuffer in pixels (w, h).
Face winding order to define front, either ccw or cw.
Return whether the window is in fullscreen mode.
Set the monitor gamma for linearization.
Warning
Don’t use this if using a Bits++ or Bits#, as it overrides monitor settings.
Sets the hardware CLUT using a specified 3xN array of floats ranging between 0.0 and 1.0.
Array must have a number of rows equal to 2 ^ max(bpc).
Measures the actual frames-per-second (FPS) for the screen.
This is done by waiting (for a max of nMaxFrames) until nIdentical frames in a row have identical frame times (std dev below threshold ms).
nIdentical (int, optional) – The number of consecutive frames that will be evaluated. Higher –> greater precision. Lower –> faster.
nMaxFrames (int, optional) – The maximum number of frames to wait for a matching set of nIdentical.
nWarmUpFrames (int, optional) – The number of frames to display before starting the test (this is in place to allow the system to settle after opening the Window for the first time).
threshold (int or float, optional) – The threshold for the std deviation (in ms) before the set are considered a match.
Frame rate in frames per second (FPS). If there is no such sequence of identical frames, a warning is logged and None will be returned.
float or None
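A minimal sketch of measuring the refresh rate using the parameters described above (the specific values are arbitrary):
fps = win.getActualFrameRate(nIdentical=10, nMaxFrames=100,
                             nWarmUpFrames=10, threshold=1)
if fps is not None:
    framePeriod = 1.0 / fps  # seconds per frame
else:
    print("Could not measure a stable frame rate")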
Get the scaling factor required for scaling correctly on high-DPI displays.
If the returned value is 1.0, no scaling needs to be applied to objects drawn on the backbuffer. A value >1.0 indicates that the backbuffer is larger than the reported client area, requiring points to be scaled to maintain constant size across similarly sized displays. In other words, the scaling required to convert framebuffer to client coordinates.
Scaling factor to be applied along both horizontal and vertical dimensions.
Examples
Get the size of the client area:
clientSize = win.frameBufferSize / win.getContentScaleFactor()
Get the framebuffer size from the client size:
frameBufferSize = win.clientSize * win.getContentScaleFactor()
Convert client (window) to framebuffer pixel coordinates (e.g., a mouse coordinate, vertices, etc.):
# `mousePosXY` is an array ...
frameBufferXY = mousePosXY * win.getContentScaleFactor()
# you can also use the attribute ...
frameBufferXY = mousePosXY * win.contentScaleFactor
Notes
This value is only valid after the window has been fully realized.
The expected time of the next screen refresh. This is currently calculated as win._lastFrameTime + refreshInterval
targetTime (float) – The delay from now for which you want the flip time. 0 will give the earliest flip time we can achieve; 0.15 will give the scheduled flip time that gets as close to 150 ms from now as possible.
clock (None, 'ptb', 'now' or any Clock object) – If 'ptb' then the time returned is compatible with ptb.GetSecs()
verbose (bool) – Set to True to view the calculations along the way
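A minimal sketch of querying predicted flip times with the parameters described above:
# time of the next expected flip, compatible with ptb.GetSecs()
nextFlip = win.getFutureFlipTime(clock='ptb')
# the achievable flip time closest to 150 ms from now
laterFlip = win.getFutureFlipTime(targetTime=0.15, clock='ptb')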
Capture the current Window as an image.
Saves to stack for saveMovieFrames(). As of v1.81.00 this also returns the frame as a PIL image.
This can be done at any time (usually after a flip() command). Frames are stored in memory until a saveMovieFrames() command is issued. You can issue getMovieFrame() as often as you like and then save them all in one go when finished.
The back buffer will return the frame that hasn’t yet been ‘flipped’ to be visible on screen but has the advantage that the mouse and any other overlapping windows won’t get in the way. The default front buffer is to be called immediately after a flip() and gives a complete copy of the screen at the window’s coordinates.
buffer (str, optional) – Buffer to capture.
Buffer pixel contents as a PIL/Pillow image object.
Image
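A minimal sketch of capturing a single frame right after a flip and saving it with PIL (the file name is arbitrary):
win.flip()
frame = win.getMovieFrame(buffer='front')  # returns a PIL image
frame.save('screenshot.png')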
Assesses the monitor refresh rate (average, median, SD) under current conditions, over at least 60 frames.
Records time for each refresh (frame) for n frames (at least 60), while displaying an optional visual. The visual is just eye-candy to show that something is happening when assessing many frames. You can also give it text to display instead of a visual, e.g., msg='(testing refresh rate...)'; setting msg implies showVisual == False.
To simulate refresh rate under CPU load, you can specify a time to wait within the loop prior to doing the flip(). If 0 < msDelay < 100, wait for that long in ms.
Returns timing stats (in ms) of:
average time per frame, for all frames
standard deviation of all frames
median, as the average of 12 frame times around the median (~monitor refresh rate)
2010 written by Jeremy Gray
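A minimal sketch, assuming keyword names (nFrames, msg) and an (average, SD, median) return order implied by the description above:
avg, sd, median = win.getMsPerFrame(nFrames=60,
                                    msg='(testing refresh rate...)')
print('average frame time: %.2f ms (SD %.2f), median %.2f ms' % (avg, sd, median))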
Remove any message that is currently being displayed.
Hide the visual indicator which shows we are in piloting mode.
True if using lens correction.
Scene lights.
This is specified as an array of psychopy.visual.LightSource objects. If a single value is given, it will be converted to a list before setting. Set useLights to True before rendering to enable lighting/shading on subsequent objects. If lights is None or an empty list, no lights will be enabled if useLights=True; however, the scene ambient light set with ambientLight will still be used.
Examples
Create a directional light source and add it to scene lights:
dirLight = gltools.LightSource((0., 1., 0.), lightType='directional')
win.lights = dirLight # `win.lights` will be a list when accessed!
Multiple lights can be specified by passing values as a list:
myLights = [gltools.LightSource((0., 5., 0.)),
            gltools.LightSource((-2., -2., 0.))]
win.lights = myLights
Send a log message that should be time-stamped at the next flip() command.
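A minimal sketch; the message-then-level argument order and the logging.EXP level are assumptions for illustration:
from psychopy import logging

win.logOnFlip('stimulus onset', level=logging.EXP)  # assumed argument order
win.flip()  # the message is time-stamped at this flip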
True if using monoscopic mode.
Returns the visibility of the mouse cursor.
Flip multiple times while maintaining the display constant. Use this method for precise timing.
WARNING: This function should not be used. See the Notes section for details.
Notes
This function can behave unpredictably, and the PsychoPy authors recommend against using it. See https://github.com/psychopy/psychopy/issues/867 for more information.
Examples
Example of using multiFlip:
# Draws myStim1 to buffer
myStim1.draw()
# Show stimulus for 6 frames (100 ms at 60Hz)
myWin.multiFlip(clearBuffer=False, flips=6)
# Draw myStim2 "on top of" myStim1
# (because buffer was not cleared above)
myStim2.draw()
# Show this for 2 frames (30 ms at 60Hz)
myWin.multiFlip(flips=2)
# Show blank screen for 3 frames (buffer was cleared above)
myWin.multiFlip(flips=3)
Distance to the near clipping plane in meters.
Moves focus of the cursor to the next editable window
A default resize event handler.
This default handler updates the GL viewport to cover the entire window and sets the GL_PROJECTION matrix to be orthogonal in window space. The bottom-left corner is (0, 0) and the top-right corner is the width and height of the Window in pixels.
Override this event handler with your own to create another projection, for example in perspective.
Projection matrix defined as a 4x4 numpy array.
Record time elapsed per frame.
Provides accurate measures of frame intervals to determine whether frames are being dropped. The intervals are the times between calls to flip(). Set to True only during the time-critical parts of the script. Set this to False while the screen is not being updated, i.e., during any slow, non-frame-time-critical sections of your code, including inter-trial intervals, event.waitKeys(), core.wait(), or image.setImage().
Examples
Enable frame interval recording, successive frame intervals will be stored:
win.recordFrameIntervals = True
Frame intervals can be saved by calling the saveFrameIntervals method:
win.saveFrameIntervals()
Restore the default projection and view settings to PsychoPy defaults. Call this prior to drawing 2D stimuli objects (i.e. GratingStim, ImageStim, Rect, etc.) if any eye transformations were applied for the stimuli to be drawn correctly.
clearDepth (bool) – Clear the depth buffer upon reset. This ensures successive draw commands are not affected by previous data written to the depth buffer. Default is True.
Notes
Calling flip() automatically resets the view and projection to defaults. So you don’t need to call this unless you are mixing 3D and 2D stimuli.
Examples
Going between 3D and 2D stimuli:
# 2D stimuli can be drawn before setting a perspective projection
win.setPerspectiveView()
# draw 3D stimuli here ...
win.resetEyeTransform()
# 2D stimuli can be drawn here again ...
win.flip()
Reset the viewport to cover the whole framebuffer.
Set the viewport to match the dimensions of the back buffer or framebuffer (if useFBO=True). The scissor rectangle is also set to match the dimensions of the viewport.
Add all stimuli which are on ‘hold’ back into the autoDraw list, and clear the hold list.
Save recorded screen frame intervals to disk, as comma-separated values.
fileName (None or str) – None or the filename (including path if necessary) in which to store the data. If None then ‘lastFrameIntervals.log’ will be used.
clear (bool) – Clear the buffer in which frame intervals were stored after saving. Default is True.
Writes any captured frames to disk.
Will write any format that is understood by PIL (tif, jpg, png, …)
filename (str) – Name of file, including path. The extension at the end of the file determines the type of file(s) created. If an image type (e.g. .png) is given, then multiple static frames are created. If it is .gif then an animated GIF image is created (although you will get higher quality GIF by saving PNG files and then combining them in dedicated image manipulation software, such as GIMP). On Windows and Linux .mpeg files can be created if pymedia is installed. On macOS .mov files can be created if the pyobjc-frameworks-QTKit is installed. Unfortunately the libs used for movie generation can be flaky and poor quality. As for animated GIFs, better results can be achieved by saving as individual .png frames and then combining them into a movie using software like ffmpeg.
codec (str, optional) – The codec to be used by moviepy for mp4/mpg/mov files. If None then the default will depend on file extension. Can be one of libx264, mpeg4 for mp4/mov files. Can be rawvideo, png for avi files (not recommended). Can be libvorbis for ogv files. Default is libx264.
fps (int, optional) – The frame rate to be used throughout the movie. Only for quicktime (.mov) movies. Default is 30.
clearFrames (bool, optional) – Set this to False if you want the frames to be kept for additional calls to saveMovieFrames. Default is True.
Examples
Writes a series of static frames as frame001.tif, frame002.tif etc.:
myWin.saveMovieFrames('frame.tif')
As of PsychoPy 1.84.1 the following are written with moviepy:
myWin.saveMovieFrames('stimuli.mp4') # codec = 'libx264' or 'mpeg4'
myWin.saveMovieFrames('stimuli.mov')
myWin.saveMovieFrames('stimuli.gif')
Scissor rectangle (x, y, w, h) for the current draw buffer.
Values x and y define the origin, and w and h the size of the rectangle in pixels. The scissor operation is only active if scissorTest=True.
Usually, the scissor and viewport are set to the same rectangle to prevent drawing operations from spilling into other regions of the screen. For instance, calling clearBuffer will only clear within the scissor rectangle.
Setting the scissor rectangle but not the viewport will restrict drawing within the defined region (like a rectangular aperture), not changing the positions of stimuli.
True if scissor testing is enabled.
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Set the eye buffer to draw to. Subsequent draw calls will be diverted to the specified eye.
Usually you can use stim.attribute = value syntax instead, but use this method if you want to set color and colorSpace simultaneously.
See color for documentation on colors.
Set the eye offset in centimeters.
When set, successive rendering operations will use the new offset.
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Change the appearance of the cursor for this window. Cursor types provide contextual hints about how to interact with on-screen objects.
The graphics used are ‘standard cursors’ provided by the operating system. They may vary in appearance and hot spot location across platforms. The following names are valid on most platforms:
arrow : Default pointer.
ibeam : Indicates text can be edited.
crosshair : Crosshair with hot-spot at center.
hand : A pointing hand.
hresize : Double arrows pointing horizontally.
vresize : Double arrows pointing vertically.
name (str) – Type of standard cursor to use (see above). Default is arrow.
Notes
On Windows the crosshair option is negated with the background color. It will not be visible when placed over 50% grey fields.
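A minimal sketch of switching between the standard cursors listed above:
win.setMouseType('crosshair')  # show a crosshair cursor
# ... later, restore the default pointer
win.setMouseType('arrow')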
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Set an off-axis projection.
Create an off-axis projection for subsequent rendering calls. Sets the viewMatrix and projectionMatrix accordingly so the scene origin is on the screen plane. If eyeOffset is correct and the view distance and screen size is defined in the monitor configuration, the resulting view will approximate ortho-stereo viewing.
The convergence plane can be adjusted by setting convergeOffset. By default, the convergence plane is set to the screen plane. Any points on the screen plane will have zero disparity.
Set the projection and view matrix to render with perspective.
Matrices are computed using values specified in the monitor configuration with the scene origin on the screen plane. Calculations assume units are in meters. If eyeOffset != 0, the view will be transformed laterally, however the frustum shape will remain the same.
Note that the values of projectionMatrix and viewMatrix will be replaced when calling this function.
Deprecated: As of v1.61.00 please use setColor() instead
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
DEPRECATED: this method used to be used to switch between units for stimulus drawing but this is now handled by the stimuli themselves and the window should always be left in units of ‘pix’
Set toe-in projection.
Create a toe-in projection for subsequent rendering calls. Sets the viewMatrix and projectionMatrix accordingly so the scene origin is on the screen plane. The value of convergeOffset will define the convergence point of the view, which is offset perpendicular to the center of the screen plane. Points falling on a vertical line at the convergence point will have zero disparity.
Notes
This projection mode is only ‘correct’ if the viewer’s eyes are converged at the convergence point. Due to perspective, this projection introduces vertical disparities which increase in magnitude with eccentricity. Use setOffAxisView if you want to display something the viewer can look around the screen comfortably.
Show a message in the window. This can be used to show information to the participant.
This creates a TextBox2 object that is displayed in the window. The text can be updated by calling this method again with a new message. The updated text will appear the next time draw() is called.
msg (str or None) – Message text to display. If None, then any existing message is removed.
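A minimal sketch of showing and later clearing a participant-facing message:
win.showMessage('Please wait...')
win.flip()  # the updated text appears the next time the window is drawn
# ... later, remove the message
win.showMessage(None)
win.flip()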
Show the visual indicator which shows we are in piloting mode.
Size of the drawable area in pixels (w, h).
Put autoDraw components on ‘hold’, meaning they get autoDraw set to False but are added to an internal list to be ‘released’ when .releaseAutoDraw is called.
True if stencil testing is enabled.
Retrieves the time on the next flip and assigns it to the attrib for this obj.
Examples
Assign time on flip to the tStartRefresh key of myTimingDict:
win.getTimeOnFlip(myTimingDict, 'tStartRefresh')
None, ‘height’ (of the window), ‘norm’, ‘deg’, ‘cm’, or ‘pix’. Defines the default units of stimuli initialized in the window, i.e. if you change units, already initialized stimuli won’t change their units.
Can be overridden by each stimulus, if units is specified on initialization.
See Units for the window and stimuli for explanation of options.
Deprecated: use Window.flip() instead
Explicitly update scene lights if they were modified.
This is required if modifications to objects referenced in lights have been changed since assignment. If you removed or added items of lights you must refresh all of them.
index (int, optional) – Index of light source in lights to update. If None, all lights will be refreshed.
Examples
Call updateLights if you modified lights directly like this:
win.lights[1].diffuseColor = [1., 0., 0.]
win.updateLights(1)
Enable scene lighting.
Lights will be enabled if using legacy OpenGL lighting. Stimuli using shaders for lighting should check if useLights is True since this will have no effect on them, and disable or use a no lighting shader instead. Lights will be transformed to the current view matrix upon setting to True.
Lights are transformed by the present GL_MODELVIEW matrix. Setting useLights will result in their positions being transformed by it. If you want lights to appear at the specified positions in world space, make sure the current matrix defines the view/eye transformation when setting useLights=True.
This flag is reset to False at the beginning of each frame. Should be False if rendering 2D stimuli or else the colors will be incorrect.
View matrix defined as a 4x4 numpy array.
The origin of the window onto which stimulus-objects are drawn.
The value should be given in the units defined for the window. NB: Never change a single component (x or y) of the origin, instead replace the viewPos-attribute in one shot, e.g.:
win.viewPos = [new_xval, new_yval] # This is the way to do it
win.viewPos[0] = new_xval # DO NOT DO THIS! Errors will result.
Viewport rectangle (x, y, w, h) for the current draw buffer.
Values x and y define the origin, and w and h the size of the rectangle in pixels.
This is typically set to cover the whole buffer, however it can be changed for applications like multi-view rendering. Stimuli will draw according to the new shape of the viewport; for instance, a stimulus with position (0, 0) will be drawn at the center of the viewport, not the window.
Examples
Constrain drawing to the left and right halves of the screen, where stimuli will be drawn centered on the new rectangle. Note that you need to set both the viewport and the scissor rectangle:
x, y, w, h = win.frameBufferSize # size of the framebuffer
win.viewport = win.scissor = [x, y, w / 2.0, h]
# draw left stimuli ...
win.viewport = win.scissor = [x + (w / 2.0), y, w / 2.0, h]
# draw right stimuli ...
# restore drawing to the whole screen
win.viewport = win.scissor = [x, y, w, h]
After a call to flip() should we wait for the blank before the script continues.
Size of the window to use when not fullscreen (w, h).