psychopy.data
- functions for storing/saving/analysing data
Contents:
ExperimentHandler
- to combine multiple loops in one study
TrialHandler
- basic predefined trial matrix
TrialHandler2
- similar to TrialHandler but with ability to update mid-run
TrialHandlerExt
- similar to TrialHandler but with ability to run oddball designs
StairHandler
- for basic up-down (fixed step) staircases
QuestHandler
- for traditional QUEST algorithm
QuestPlusHandler
- for the updated QUEST+ algorithm (Watson, 2017)
PsiHandler
- the Psi staircase of Kontsevich & Tyler (1999)
MultiStairHandler
- a wrapper to combine interleaved staircases of any sort
Utility functions:
importConditions()
- to load a list of dicts from a csv/excel file
functionFromStaircase()
- to convert a staircase into its psychometric function
bootStraps()
- generate a set of bootstrap resamples from a dataset
getDateStr()
- provide a date string (in format suitable for filenames)
Curve Fitting:
ExperimentHandler
A container class for keeping track of multiple loops/handlers
Useful for generating a single data file from an experiment with many different loops (e.g. interleaved staircases or loops within loops).
exp = data.ExperimentHandler(name='Face Preference', version='0.1.0')
As a useful identifier later
To keep track of which version of the experiment was run
Containing useful information about this run (e.g. {'participant': 'jwp', 'gender': 'm', 'orientation': 90})
psychopy.info.RunTimeInfo
Containing information about the system as detected at runtime
The path and filename of the originating script/experiment. If not provided, this will be determined as the path of the calling script.
This is defined in advance and the file will be saved at any point that the handler is removed or discarded (unless .abort() had been called in advance). The handler will attempt to populate the file even in the event of a (not too serious) crash!
savePickle : True (default) or False
saveWideText : True (default) or False
How (if at all) to sort columns in the data file, if none is given to saveAsWideText. Can be:
- "alphabetical", "alpha", "a" or True: sort alphabetically by header name
- "priority", "pr" or "p": sort according to priority
- other: do not sort; columns remain in the order they were added
autoLog : True (default) or False
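For example, a minimal sketch pulling several of these parameters together (the data file path here is illustrative; the keyword is assumed to be dataFileName, matching the description above):
from psychopy import data
exp = data.ExperimentHandler(name='Face Preference',
                             version='0.1.0',
                             extraInfo={'participant': 'jwp', 'gender': 'm', 'orientation': 90},
                             dataFileName='data/jwp_facePref',  # extensions are added when saving
                             savePickle=True,
                             saveWideText=True)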
Returns the attribute names of loop parameters (trialN etc) that the current set of loops contain, ready to build a wide-format data file.
Returns the attribute names and values for the current trial of a particular loop. Does not return data inputs from the subject, only info relating to the trial execution.
Get a best guess at the priority of a column based on its name
name (str) – Name of the column
One of the following:
- HIGH (19): important columns which are near the front of the data file
- MEDIUM (9): possibly important columns which are around the middle of the data file
- LOW (-1): columns unlikely to be important which are at the end of the data file
NOTE: Values returned from this function are 1 less than the values in constants.priority, so columns whose priority was guessed sort behind equivalently prioritised columns whose priority was specified explicitly.
Inform the ExperimentHandler that the run was aborted.
The ExperimentHandler will attempt to save data automatically (even in the event of a crash, if possible). So if you quit your script early you may want to tell the handler not to save out the data files for this run; this is the method that lets you do that.
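For example, a sketch of an early exit that skips saving (assumes win and exp already exist and that psychopy.event is being used for keyboard input):
from psychopy import core, event
if 'escape' in event.getKeys():
    exp.abort()   # no data files will be written for this run
    win.close()
    core.quit()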
Add an annotation at the current point in the experiment
value (str) – Value of the annotation
Add the data with a given name to the current experiment.
Typically the user does not need to use this function; if you added your data to the loop and had already added the loop to the experiment then the loop will automatically inform the experiment that it has received data.
Multiple data name/value pairs can be added to any given entry of the data file and are considered part of the same entry until nextEntry() is called.
e.g.:
# add some data for this trial
exp.addData('resp.rt', 0.8)
exp.addData('resp.key', 'k')
# end of trial - move to next line in data output
exp.nextEntry()
name (str) – Name of the column to add data as.
value (any) – Value to add
row (int or None) – Row in which to add this data. Leave as None to add to the current entry.
priority (int) – Priority value to set the column to; higher priority columns appear nearer to the start of the data file. Use values from constants.priority as landmark values:
- CRITICAL: always at the start of the data file, generally reserved for Routine start times
- HIGH: important columns which are near the front of the data file
- MEDIUM: possibly important columns which are around the middle of the data file
- LOW: columns unlikely to be important which are at the end of the data file
- EXCLUDE: always at the end of the data file, actively marked as unimportant
Add a loop such as a TrialHandler or StairHandler. Data from this loop will be included in the resulting data files.
Return the loop we are currently in. This will either be a handle to a loop, such as a TrialHandler or StairHandler, or the handle of the ExperimentHandler itself if we are not in a loop.
Fetches a copy of all the entries including a final (orphan) entry if that exists. This allows entries to be saved even if nextEntry() is not yet called.
A copy of (not a pointer to) the entries.
Returns all trials (elapsed, current and upcoming) with an index indicating which trial is the current trial.
list[Trial] – List of trials, in order (oldest to newest)
int – Index of the current trial in this list
Returns the current trial (.thisTrial)
The current trial
Trial
Returns the condition for n trials into the future, without advancing the trials. Returns ‘None’ if attempting to go beyond the last trial in the current loop, or if there is no current loop.
Returns Trial objects for a given range in the future. Will start looking at start trials in the future and will return n trials from then, so e.g. to get all trials from 2 in the future to 5 in the future you would use start=2 and n=3.
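For example, a sketch of peeking at upcoming conditions without advancing the loop (the method names getFutureTrial()/getFutureTrials() and the keyword names start and n are assumptions based on the descriptions above):
nextTrial = exp.getFutureTrial(1)              # the condition one step ahead
upcoming = exp.getFutureTrials(start=2, n=3)   # the range described above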
Get the experiment data as a JSON string.
priorityThreshold (int) – Output will only include columns whose priority is greater than or equal to this value. Use values in psychopy.constants.priority as a guideline for priority levels. Default is -9 (constants.priority.EXCLUDE + 1)
JSON string with the following fields:
- 'type': indicates that this is data from an ExperimentHandler (will always be "trials_data")
- 'trials': list of dicts representing the requested trials data
- 'priority': dict of column names and their priority values
Get the priority value for a given column. If no priority value is stored, returns best guess based on column name.
name (str) – Column name
The priority value stored/guessed for this column, most likely a value from constants.priority, one of:
- CRITICAL (30): always at the start of the data file, generally reserved for Routine start times
- HIGH (20): important columns which are near the front of the data file
- MEDIUM (10): possibly important columns which are around the middle of the data file
- LOW (0): columns unlikely to be important which are at the end of the data file
- EXCLUDE (-10): always at the end of the data file, actively marked as unimportant
Informs the experiment handler that the loop is finished and not to include its values in further entries of the experiment.
This method is called by the loop itself if it ends its iterations, so is not typically needed by the user.
Calling nextEntry indicates to the ExperimentHandler that the current trial has ended and so further addData() calls correspond to the next trial.
Skip ahead n trials; the trials in between will be marked as "skipped". If you try to skip past the last trial, this will log a warning and skip to the last trial.
n (int) – Number of trials to skip ahead
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Saves a long, wide-format text file, with one line representing the attributes and data for a single trial. Suitable for analysis in R and SPSS.
If appendFile=True then the data will be added to the bottom of an existing file. Otherwise, if the file exists already it will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass 'overwrite' to fileCollisionMethod.
If matrixOnly=True then the file will not contain a header row, which can be handy if you want to append data to an existing file of the same format.
if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.tsv’ will be appended. Can include path info.
allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)
outputs the data with no header row.
will add this output to the end of the specified file if it already exists.
The encoding to use when saving the file. Defaults to utf-8-sig.
Collision method passed to handleFileCollision().
How (if at all) to sort columns in the data file. Can be:
- "alphabetical", "alpha", "a" or True: sort alphabetically by header name
- "priority", "pr" or "p": sort according to priority
- other: do not sort; columns remain in the order they were added
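For example, a sketch of a typical call (the fileName, delim and sortColumns names are assumed to match the parameter descriptions above):
exp.saveAsWideText('data/sub-01.csv', delim=',', sortColumns='priority')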
Set the priority of a column in the data file.
name (str) – Name of the column, e.g. text.started
value (int) – Priority value to set the column to; higher priority columns appear nearer to the start of the data file. Use values from constants.priority as landmark values:
- CRITICAL (30): always at the start of the data file, generally reserved for Routine start times
- HIGH (20): important columns which are near the front of the data file
- MEDIUM (10): possibly important columns which are around the middle of the data file
- LOW (0): columns unlikely to be important which are at the end of the data file
- EXCLUDE (-10): always at the end of the data file, actively marked as unimportant
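For example, a sketch demoting one column and promoting another (constants here is psychopy.constants, as referenced above; setPriority is assumed to be the method name):
from psychopy import constants
exp.setPriority('mouse.x', constants.priority.EXCLUDE)  # push to the very end of the file
exp.setPriority('resp.rt', constants.priority.HIGH)     # keep near the front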
Skip ahead n trials; the trials in between will be marked as "skipped". If you try to skip past the last trial, this will log a warning and skip to the last trial.
n (int) – Number of trials to skip ahead
Add a timestamp (in the future) to the current row
win (psychopy.visual.Window) – The window object that we’ll base the timestamp flip on
name (str) – The name of the column in the datafile being written, such as ‘myStim.stopped’
format (str, class or None) – Format in which to return time, see clock.Timestamp.resolve() for more info. Defaults to float.
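For example, a sketch of stamping a stimulus offset time on the next screen flip (assumes win is an open psychopy.visual.Window and that the method is exposed as timestampOnFlip()):
exp.timestampOnFlip(win, 'myStim.stopped')
win.flip()   # the timestamp is written into the current row when this flip occurs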
TrialHandler
Class to handle trial sequencing and data storage.
Calls to .next() will fetch the next trial object given to this handler, according to the method specified (random, sequential, fullRandom). Calls will raise a StopIteration error if trials have finished.
See demo_trialHandler.py
The psydat file format is literally just a pickled copy of the TrialHandler object that saved it. You can open it with:
from psychopy.tools.filetools import fromFile
dat = fromFile(path)
Then you'll find that dat has the attributes described below.
trialList: a list of dictionaries specifying conditions. This can be imported from an excel/csv file using importConditions().
nReps: number of repeats for all conditions
‘sequential’ obviously presents the conditions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’ fully randomises the trials across repeats as well, which means you could potentially run all trials of one condition before any trial of another.
dataTypes: e.g. ['corr', 'rt', 'resp']. If not provided then these will be created as needed during calls to addData().
This will be stored alongside the data and usually describes the experiment and subject ID, date etc.
If provided then this fixes the random number generator to use the same pattern of trials, by seeding its startpoint
originPath: script/experiment file path. The psydat file format will store a copy of the experiment if possible. If originPath==None is provided here then the TrialHandler will still store a copy of the script where it was created. If originPath==-1 then nothing will be stored.
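For example, a sketch of a typical constructor call using conditions loaded from a spreadsheet (the file name is illustrative):
from psychopy import data
conditions = data.importConditions('conditions.xlsx')   # one dict per row of the spreadsheet
trials = data.TrialHandler(trialList=conditions, nReps=5, method='random',
                           extraInfo={'participant': 'jwp'})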
.data - a dictionary (strictly, a DataHandler sub-class of a dictionary) of numpy arrays, one for each data type stored
.trialList - the original list of dicts, specifying the conditions
.thisIndex - the index of the current trial in the original conditions list
.nTotal - the total number of trials that will be run
.nRemaining - the total number of trials remaining
.thisN - total trials completed so far
.thisRepN - which repeat you are currently on
.thisTrialN - which trial number within that repeat
.thisTrial - a dictionary giving the parameters of the current trial
.finished - True/False for have we finished yet
.extraInfo - the dictionary of extra info as given at beginning
.origin - the contents of the script or builder experiment that created the handler
Does the leg-work for saveAsText and saveAsExcel. Combines stimOut with ._parseDataOutput()
This just creates the dataOut part of the output matrix. It is called by _createOutputArray() which creates the header line and adds the stimOut columns
Pre-generates the sequence of trial presentations (for non-adaptive methods). This is called automatically when the TrialHandler is initialised so doesn’t need an explicit call from the user.
The returned sequence has the form indices[stimN][repN]. Example: sequential with 6 trial types (rows) and 5 reps (cols) returns:
[[0 0 0 0 0]
[1 1 1 1 1]
[2 2 2 2 2]
[3 3 3 3 3]
[4 4 4 4 4]
[5 5 5 5 5]]
These trials will then be returned by .next() in the order: 0, 1, 2, 3, 4, 5, 0, 1, 2, … 3, 4, 5
To add a new type of sequence (as of v1.65.02):
- add the sequence generation code here
- adjust "if self.method in [ …]:" in both __init__ and .next()
- adjust allowedVals in experiment.py -> shows up in DlgLoopProperties
Note that users can make any sequence whatsoever outside of PsychoPy and specify sequential order; any order is possible this way.
Creates an array of tuples the same shape as the input array where each tuple contains the indices to itself in the array.
Useful for shuffling and then using as a reference.
Remove references to this handler in experiments and terminate the loop.
Returns the condition for the current trial, without advancing the trials.
Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns ‘None’ if trying to access a trial prior to the first.
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Returns the condition for n trials into the future, without advancing the trials. A negative n returns a previous (past) trial. Returns ‘None’ if attempting to go beyond the last trial.
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN and thisIndex If the trials have ended this method will raise a StopIteration error. This can be handled with code such as:
trials = data.TrialHandler(.......)
for eachTrial in trials: # automatically stops when done
# do stuff
or:
trials = data.TrialHandler(.......)
while True: # ie forever
try:
thisTrial = trials.next()
except StopIteration: # we got a StopIteration error
break #break out of the forever loop
# do stuff here for the trial
Exactly like saveAsText() except that the output goes to the screen instead of a file
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored in multiple named sheets within the file.
So you could have a single file named after your experiment and
then have one worksheet for each participant. Or you could have
one file for each participant and then multiple sheets for
repeated sessions etc.
The file extension .xlsx will be added if not given already.
the name of the file to create or append. Can include relative or absolute path
the name of the worksheet within the file
the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries to the trialList parameter of the TrialHandler, and give here the names (strings) of entries in those dictionaries
a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'. e.g. rt_max will give a column of max reaction times across the trials, assuming that rt values have been stored. The default values will output the raw, mean and std of all data types found.
If False any existing file with this name will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass 'overwrite' to fileCollisionMethod.
If True then a new worksheet will be appended.
If a worksheet already exists with that name a number will
be added to make it unique.
Collision method (rename, overwrite, fail) passed to handleFileCollision(). This is ignored if append is True.
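For example, a sketch writing one worksheet per participant into a shared workbook (the sheetName, stimOut and dataOut keyword names are assumed to match the parameter descriptions above):
trials.saveAsExcel('facePref.xlsx', sheetName='jwp',
                   stimOut=['ori'],
                   dataOut=['rt_mean', 'rt_std', 'corr_raw'])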
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
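For example, a sketch that keeps the JSON in memory rather than writing a file (assuming the method is exposed as saveAsJson()):
jsonString = trials.saveAsJson(fileName=None)   # per the description above, no file is written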
Basically just saves a copy of the handler (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data and various chosen stimulus attributes
will have .tsv appended and can include path info.
the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here the names of dictionary keys that you want as strings
a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'… The default values will output the raw, mean and std of all data types found
allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)
outputs the data with no header row or extraInfo attached
will add this output to the end of the specified file if it already exists
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Write a text file with the session, stimulus, and data values from each trial in chronological order. Also, return a pandas DataFrame containing same information as the file.
each row comprises information from only a single trial.
no summarizing is done (such as collapsing to produce mean and standard deviation values across trials).
This ‘wide’ format, as expected by R for creating dataframes, and various other analysis programs, means that some information must be repeated on every row.
In particular, if the trialHandler’s ‘extraInfo’ exists, then each entry in there occurs in every row. In builder, this will include any entries in the ‘Experiment info’ field of the ‘Experiment settings’ dialog. In Coder, this information can be set using something like:
myTrialHandler.extraInfo = {'SubjID': 'Joan Smith',
'Group': 'Control'}
if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.tsv’ will be appended. Can include path info.
allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)
outputs the data with no header row.
will add this output to the end of the specified file if it already exists.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
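For example, a sketch that writes a comma-delimited wide file and keeps the returned DataFrame for a quick check:
df = trials.saveAsWideText('facePref_wide.csv', delim=',')
print(df.head())   # the same rows that were written to the file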
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
TrialHandler2
Class to handle trial sequencing and data storage.
Calls to .next() will fetch the next trial object given to this handler, according to the method specified (random, sequential, fullRandom). Calls will raise a StopIteration error if trials have finished.
See demo_trialHandler.py
The psydat file format is literally just a pickled copy of the TrialHandler object that saved it. You can open it with:
from psychopy.tools.filetools import fromFile
dat = fromFile(path)
Then you'll find that dat has the attributes described below.
trialList: a list of dictionaries specifying conditions.
nReps: number of repeats for all conditions
‘sequential’ obviously presents the conditions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’ fully randomises the trials across repeats as well, which means you could potentially run all trials of one condition before any trial of another.
dataTypes: e.g. ['corr', 'rt', 'resp']. If not provided then these will be created as needed during calls to addData().
This will be stored alongside the data and usually describes the experiment and subject ID, date etc.
If provided then this fixes the random number generator to use the same pattern of trials, by seeding its startpoint.
originPath: experiment file path. The psydat file format will store a copy of the experiment if possible. If originPath==None is provided here then the TrialHandler will still store a copy of the script where it was created. If originPath==-1 then nothing will be stored.
.data - a dictionary of numpy arrays, one for each data type stored
.trialList - the original list of dicts, specifying the conditions
.thisIndex - the index of the current trial in the original conditions list
.nTotal - the total number of trials that will be run
.nRemaining - the total number of trials remaining
.thisN - total trials completed so far
.thisRepN - which repeat you are currently on
.thisTrialN - which trial number within that repeat
.thisTrial - a dictionary giving the parameters of the current trial
.finished - True/False for have we finished yet
.extraInfo - the dictionary of extra info as given at beginning
.origin - the contents of the script or builder experiment that created the handler
Does the leg-work for saveAsText and saveAsExcel. Combines stimOut with ._parseDataOutput()
This just creates the dataOut part of the output matrix. It is called by _createOutputArray() which creates the header line and adds the stimOut columns
Remove references to this handler in experiments and terminate the loop.
Abort the current trial.
Calling this during an experiment replaces this trial: the condition related to the aborted trial will be re-presented elsewhere in the session, depending on the method in use for sampling conditions.
action (str) – Action to take with the aborted trial. Can be either of ‘random’, or ‘append’. The default action is ‘random’.
Notes
When using action=’random’, the RNG state for the trial handler is not used.
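For example, a sketch of re-queuing a trial after an invalid response (responseWasInvalid is a hypothetical flag; the method is assumed to be exposed as abortCurrentTrial(), taking the action parameter described above):
if responseWasInvalid:
    trials.abortCurrentTrial(action='random')   # the condition is re-inserted at a random later point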
Rebuild the sequence of trial/state info as if running the trials
fromIndex (int, optional) – the point in the sequence from where to rebuild. Defaults to -1.
Returns a pandas DataFrame of the trial data so far Read only attribute - you can’t directly modify TrialHandler.data
Note that data are stored internally as a list of dictionaries, one per trial. These are converted to a DataFrame on access.
Whether this loop has finished or not. Will be True if there are no upcoming trials and False if there are any. Set .finished = True to skip all remaining trials (equivalent to calling .skipTrials() with a value larger than the number of trials remaining)
True if there are no upcoming trials, False otherwise.
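For example, a sketch that ends the loop early once a criterion is met (criterionReached is a hypothetical flag):
if criterionReached:
    trials.finished = True   # all remaining trials are skipped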
Returns all trials (elapsed, current and upcoming) with an index indicating which trial is the current trial.
list[Trial] – List of trials, in order (oldest to newest)
int – Index of the current trial in this list
Returns the current trial (.thisTrial)
The current trial
Trial
Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns ‘None’ if trying to access a trial prior to the first.
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Returns the condition for n trials into the future, without advancing the trials. Returns ‘None’ if attempting to go beyond the last trial.
Trial object for n trials into the future.
Trial or None
Returns Trial objects for a given range in the future. Will start looking at start trials in the future and will return n trials from then, so e.g. to get all trials from 2 in the future to 5 in the future you would use start=2 and n=3.
List of Trial objects n long. Any trials beyond the last trial are None.
list[Trial or None]
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN and thisIndex If the trials have ended this method will raise a StopIteration error. This can be handled with code such as:
trials = data.TrialHandler(.......)
for eachTrial in trials: # automatically stops when done
# do stuff
or:
trials = data.TrialHandler(.......)
while True: # ie forever
try:
thisTrial = trials.next()
except StopIteration: # we got a StopIteration error
break # break out of the forever loop
# do stuff here for the trial
Exactly like saveAsText() except that the output goes to the screen instead of a file
Rewind back n trials - previously elapsed trials will return to being upcoming. If you try to rewind before the first trial, will log a warning and rewind to the first trial.
n (int) – Number of trials to rewind back
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored in multiple named sheets within the file.
So you could have a single file named after your experiment and
then have one worksheet for each participant. Or you could have
one file for each participant and then multiple sheets for
repeated sessions etc.
The file extension .xlsx will be added if not given already.
the name of the file to create or append. Can include relative or absolute path
the name of the worksheet within the file
the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries to the trialList parameter of the TrialHandler, and give here the names (strings) of entries in those dictionaries
a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'. e.g. rt_max will give a column of max reaction times across the trials, assuming that rt values have been stored. The default values will output the raw, mean and std of all data types found.
If False any existing file with this name will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass 'overwrite' to fileCollisionMethod.
If True then a new worksheet will be appended.
If a worksheet already exists with that name a number will
be added to make it unique.
Collision method (rename, overwrite, fail) passed to handleFileCollision(). This is ignored if append is True.
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
The RNG self._rng cannot be serialized as-is, so we store its state in self._rng_state so we can restore it when loading.
Basically just saves a copy of the handler (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data and various chosen stimulus attributes
will have .tsv appended and can include path info.
the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here the names of dictionary keys that you want as strings
a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'… The default values will output the raw, mean and std of all data types found
allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)
outputs the data with no header row or extraInfo attached
will add this output to the end of the specified file if it already exists
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Write a text file with the session, stimulus, and data values from each trial in chronological order. Also, return a pandas DataFrame containing same information as the file.
each row comprises information from only a single trial.
no summarising is done (such as collapsing to produce mean and standard deviation values across trials).
This ‘wide’ format, as expected by R for creating dataframes, and various other analysis programs, means that some information must be repeated on every row.
In particular, if the trialHandler’s ‘extraInfo’ exists, then each entry in there occurs in every row. In builder, this will include any entries in the ‘Experiment info’ field of the ‘Experiment settings’ dialog. In Coder, this information can be set using something like:
myTrialHandler.extraInfo = {'SubjID': 'Joan Smith',
'Group': 'Control'}
if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.tsv’ will be appended. Can include path info.
allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)
outputs the data with no header row.
will add this output to the end of the specified file if it already exists.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
Skip ahead n trials; the trials in between will be marked as "skipped". If you try to skip past the last trial, this will log a warning and skip to the last trial.
n (int) – Number of trials to skip ahead
TrialHandlerExt
A class for handling trial sequences in a non-counterbalanced design (i.e. oddball paradigms). Its functions are a superset of the class TrialHandler, and as such, can also be used for normal trial handling.
TrialHandlerExt has the same function names for data storage facilities.
To use non-counterbalanced designs, all TrialType dict entries in the trial list must have a key called “weight”. For example, if you want trial types A, B, C, and D to have 10, 5, 3, and 2 repetitions per block, then the trialList can look like:
[{Name:'A', …, weight:10}, {Name:'B', …, weight:5}, {Name:'C', …, weight:3}, {Name:'D', …, weight:2}]
For experimenters using an excel or csv file for trial list, a column called weight is appropriate for this purpose.
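For example, a sketch of an oddball-style design defined directly in code rather than a spreadsheet (the condition names and weights are illustrative):
from psychopy import data
conditions = [
    {'name': 'standard', 'weight': 9},
    {'name': 'oddball', 'weight': 1},
]
oddball = data.TrialHandlerExt(trialList=conditions, nReps=5, method='random')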
Calls to .next() will fetch the next trial object given to this handler, according to the method specified (random, sequential, fullRandom). Calls will raise a StopIteration error when all trials are exhausted.
Authored by Suddha Sourav at BPN, Uni Hamburg - heavily borrowing from the TrialHandler class
trialList: a list of dictionaries specifying conditions. This can be imported from an excel/csv file using importConditions().
For non-counterbalanced designs, each dict entry in trialList must have a key called weight!
nReps: number of repeats for all conditions. For a non-counterbalanced design, nReps is analogous to the number of blocks.
When the weights are not specified: ‘sequential’ presents the conditions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’ fully randomises the trials across repeats as well, which means you could potentially run all trials of one condition before any trial of another.
In the presence of weights: 'sequential' presents each trial type the number of times specified by its weight, before moving on to the next type. 'random' randomizes the presentation order within each block. 'fullRandom' shuffles trial order across weights and nReps, that is, a full shuffling.
dataTypes: e.g. ['corr', 'rt', 'resp']. If not provided then these will be created as needed during calls to addData().
This will be stored alongside the data and usually describes the experiment and subject ID, date etc.
If provided then this fixes the random number generator to use the same pattern of trials, by seeding its startpoint
originPath: experiment file path. The psydat file format will store a copy of the experiment if possible. If originPath==None is provided here then the TrialHandler will still store a copy of the script where it was created. If originPath==-1 then nothing will be stored.
.data - a dictionary of numpy arrays, one for each data type stored
.trialList - the original list of dicts, specifying the conditions
.thisIndex - the index of the current trial in the original conditions list
.nTotal - the total number of trials that will be run
.nRemaining - the total number of trials remaining
.thisN - total trials completed so far
.thisRepN - which repeat you are currently on
.thisTrialN - which trial number within that repeat
.thisTrial - a dictionary giving the parameters of the current trial
.finished - True/False for have we finished yet
.extraInfo - the dictionary of extra info as given at beginning
.origin - the contents of the script or builder experiment that created the handler
.trialWeights - None if weights are not specified; otherwise, a list containing the weights of the trial types
Does the leg-work for saveAsText and saveAsExcel. Combines stimOut with ._parseDataOutput()
This just creates the dataOut part of the output matrix. It is called by _createOutputArray() which creates the header line and adds the stimOut columns
Pre-generates the sequence of trial presentations (for non-adaptive methods). This is called automatically when the TrialHandler is initialised so doesn’t need an explicit call from the user.
The returned sequence has the form indices[stimN][repN]. Example: sequential with 6 trial types (rows) and 5 reps (cols) returns:
[[0 0 0 0 0]
[1 1 1 1 1]
[2 2 2 2 2]
[3 3 3 3 3]
[4 4 4 4 4]
[5 5 5 5 5]]
These trials will then be returned by .next() in the order: 0, 1, 2, 3, 4, 5, 0, 1, 2, … 3, 4, 5
Example: random, with 3 trial types, where the weights of conditions 0, 1, and 2 are 3, 2, and 1 respectively, and a rep value of 5, might return:
[[0 1 2 0 1]
[1 0 1 1 1]
[0 2 0 0 0]
[0 0 0 1 0]
[2 0 1 0 2]
[1 1 0 2 0]]
These trials will then be returned by .next() in the order: 0, 1, 0, 0, 2, 1, 1, 0, 2, 0, 0, 1, … 0, 2, 0 (StopIteration is raised once all trials are exhausted)
To add a new type of sequence (as of v1.65.02):
- add the sequence generation code here
- adjust "if self.method in [ …]:" in both __init__ and .next()
- adjust allowedVals in experiment.py -> shows up in DlgLoopProperties
Note that users can make any sequence whatsoever outside of PsychoPy and specify sequential order; any order is possible this way.
Creates an array of tuples the same shape as the input array where each tuple contains the indices to itself in the array.
Useful for shuffling and then using as a reference.
Remove references to this handler in experiments and terminate the loop.
Returns the condition for the current trial, without advancing the trials.
Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns ‘None’ if trying to access a trial prior to the first.
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Returns the condition for n trials into the future, without advancing the trials. A negative n returns a previous (past) trial. Returns ‘None’ if attempting to go beyond the last trial.
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN and thisIndex If the trials have ended this method will raise a StopIteration error. This can be handled with code such as:
trials = data.TrialHandler(.......)
for eachTrial in trials: # automatically stops when done
# do stuff
or:
trials = data.TrialHandler(.......)
while True: # ie forever
try:
thisTrial = trials.next()
except StopIteration: # we got a StopIteration error
break # break out of the forever loop
# do stuff here for the trial
Exactly like saveAsText() except that the output goes to the screen instead of a file
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored in multiple named sheets within the file.
So you could have a single file named after your experiment and
then have one worksheet for each participant. Or you could have
one file for each participant and then multiple sheets for
repeated sessions etc.
The file extension .xlsx will be added if not given already.
the name of the file to create or append. Can include relative or absolute path
the name of the worksheet within the file
the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries to the trialList parameter of the TrialHandler, and give here the names (strings) of entries in those dictionaries
a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'. e.g. rt_max will give a column of max reaction times across the trials, assuming that rt values have been stored. The default values will output the raw, mean and std of all data types found.
If False any existing file with this name will be kept and a new file will be created with a slightly different name. If you want to overwrite the old file, pass 'overwrite' to fileCollisionMethod.
If True then a new worksheet will be appended.
If a worksheet already exists with that name a number will
be added to make it unique.
Collision method (rename, overwrite, fail) passed to handleFileCollision(). This is ignored if append is True.
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
Basically just saves a copy of the handler (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data and various chosen stimulus attributes
will have .tsv appended and can include path info.
the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here the names of dictionary keys that you want as strings
a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'… The default values will output the raw, mean and std of all data types found
allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)
outputs the data with no header row or extraInfo attached
will add this output to the end of the specified file if it already exists
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Write a text file with the session, stimulus, and data values from each trial in chronological order.
each row comprises information from only a single trial.
no summarizing is done (such as collapsing to produce mean and standard deviation values across trials).
This ‘wide’ format, as expected by R for creating dataframes, and various other analysis programs, means that some information must be repeated on every row.
In particular, if the trialHandler’s ‘extraInfo’ exists, then each entry in there occurs in every row. In builder, this will include any entries in the ‘Experiment info’ field of the ‘Experiment settings’ dialog. In Coder, this information can be set using something like:
myTrialHandler.extraInfo = {'SubjID':'Joan Smith',
'Group':'Control'}
if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else ‘.txt’ will be appended. Can include path info.
allows the user to use a delimiter other than the default tab (“,” is popular with file extension “.csv”)
outputs the data with no header row.
will add this output to the end of the specified file if it already exists.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
StairHandler
Class to smoothly handle the selection of the next trial and report current values etc. Calls to next() will fetch the next object given to this handler, according to the method specified.
See Demos >> ExperimentalControl >> JND_staircase_exp.py
The staircase will terminate when nTrials AND nReversals have been exceeded. If stepSizes was an array and has been exceeded before nTrials is exceeded then the staircase will continue to reverse.
nUp and nDown are always considered as 1 until the first reversal is reached. The values entered as arguments are then used.
The initial value for the staircase.
The minimum number of reversals permitted. If stepSizes is a list, but the minimum number of reversals to perform, nReversals, is less than the length of this list, PsychoPy will automatically increase the minimum number of reversals and emit a warning. This minimum number of reversals is always set to be greater than 0.
The size of steps as a single value or a list (or array). For a single value the step size is fixed. For an array or list the step size will progress to the next entry at each reversal.
The minimum number of trials to be conducted. If the staircase has not reached the required number of reversals then it will continue.
The number of ‘incorrect’ (or 0) responses before the staircase level increases.
The number of ‘correct’ (or 1) responses before the staircase level decreases.
Whether to apply a 1-up/1-down rule until the first reversal point (if True), before switching to the specified up/down rule.
A dictionary (typically) that will be stored along with collected data using saveAsPickle() or saveAsText() methods.
Not used and may be deprecated in future releases.
The type of steps that should be taken each time. ‘lin’ will simply add or subtract that amount each step, ‘db’ and ‘log’ will step by a certain number of decibels or log units (note that this will prevent your value ever reaching zero or less)
The smallest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.
The largest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.
Additional keyword arguments will be ignored.
The additional keyword arguments **kwargs might for example be passed by the MultiStairHandler, which expects a label keyword for each staircase. These parameters are to be ignored by the StairHandler.
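For example, a sketch of a 1-up/3-down staircase whose step size shrinks at each reversal (the keyword names are assumed to match the parameter descriptions above):
from psychopy import data
staircase = data.StairHandler(startVal=0.5,
                              stepType='lin',
                              stepSizes=[0.1, 0.05, 0.025],   # advances to the next entry at each reversal
                              nUp=1, nDown=3,                 # 3 correct responses before the level decreases
                              nTrials=40, nReversals=6,
                              minVal=0, maxVal=1)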
Remove references to this handler in experiments and terminate the loop.
Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:
.addResponse(result, intensity)
.addOtherData('dataName', value)
Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial.
This is essential to advance the staircase to a new intensity level!
Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.
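For example, a sketch of the per-trial loop (runOneTrial is a hypothetical helper that presents the stimulus and returns 1 for correct/detected, 0 otherwise):
for thisIntensity in staircase:
    result = runOneTrial(thisIntensity)
    staircase.addResponse(result)     # required for the staircase to advance to a new level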
Based on current intensity, counter of correct responses, and current direction.
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
The intensity (level) of the current staircase
Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN and thisIndex.
If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:
staircase = data.StairHandler(.......)
for eachTrial in staircase: # automatically stops when done
# do stuff
or:
staircase = data.StairHandler(.......)
while True: # ie forever
try:
thisTrial = staircase.next()
except StopIteration: # we got a StopIteration error
break # break out of the forever loop
# do stuff here for the trial
Exactly like saveAsText() except that the output goes to the screen instead of a file
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored
in multiple named sheets within the file. So you could have a
single file named after your experiment and then have one worksheet
for each participant. Or you could have one file for each participant
and then multiple sheets for repeated sessions etc.
The file extension .xlsx will be added if not given already.
The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.
the name of the file to create or append. Can include relative or absolute path.
the name of the worksheet within the file
If set to True then only the data itself will be output (no additional info)
If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.
Collision method passed to handleFileCollision(). This is ignored if appendFile is True.
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data
The name of the file, including path if needed. The extension .tsv will be added if not included.
the delimiter to be used (e.g. a tab character for tab-delimited files, ',' for csv files)
If True, prevents the output of the extraInfo provided at initialisation.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
PsiHandler
Handler to implement the “Psi” adaptive psychophysical method (Kontsevich & Tyler, 1999).
This implementation assumes the form of the psychometric function to be a cumulative Gaussian. Psi estimates the two free parameters of the psychometric function, the location (alpha) and slope (beta), using Bayes’ rule and grid approximation of the posterior distribution. It chooses stimuli to present by minimizing the entropy of this grid. Because this grid is represented internally as a 4-D array, one must choose the intensity, alpha, and beta ranges carefully so as to avoid a Memory Error. Maximum likelihood is used to estimate Lambda, the most likely location/slope pair. Because Psi estimates the entire psychometric function, any threshold defined on the function may be estimated once Lambda is determined.
It is advised that Lambda estimates are examined after completion of the Psi procedure. If the estimated alpha or beta values equal your specified search bounds, then the search range most likely did not contain the true value. In this situation the procedure should be repeated with appropriately adjusted bounds.
Because Psi is a Bayesian method, it can be initialized with a prior from existing research. A function to save the posterior over Lambda as a Numpy binary file is included.
Kontsevich & Tyler (1999) specify their psychometric function in terms of d’. PsiHandler avoids this and treats all parameters with respect to stimulus intensity. Specifically, the forms of the psychometric function assumed for Yes/No and Two Alternative Forced Choice (2AFC) are, respectively:
_normCdf = norm.cdf(x, mean=alpha, sd=beta)
Y(x) = .5 * delta + (1 - delta) * _normCdf
Y(x) = .5 * delta + (1 - delta) * (.5 + .5 * _normCdf)
Initializes the handler and creates an internal Psi Object for grid approximation.
The number of trials to run.
Two element list containing the (inclusive) endpoints of the stimuli intensity range.
Two element list containing the (inclusive) endpoints of the alpha (location parameter) range.
Two element list containing the (inclusive) endpoints of the beta (slope parameter) range.
If stepType == ‘lin’, this specifies the step size of the stimuli intensity range. If stepType == ‘log’, this specifies the number of steps in the stimuli intensity range.
The step size of the alpha (location parameter) range.
The step size of the beta (slope parameter) range.
The guess rate.
The type of steps to be used when constructing the stimuli intensity range. If ‘lin’ then evenly spaced steps are used. If ‘log’ then logarithmically spaced steps are used. Defaults to ‘lin’.
The expected lower asymptote of the psychometric function (PMF).
For a Yes/No task, the PMF usually extends across the interval [0, 1]; here, expectedMin should be set to 0.
For a 2-AFC task, the PMF spreads out across [0.5, 1.0]. Therefore, expectedMin should be set to 0.5 in this case, and the 2-AFC psychometric function described above is used.
Currently, only Yes/No and 2-AFC designs are supported.
Defaults to 0.5, or a 2-AFC task.
Optional prior distribution with which to initialize the Psi Object. This can either be a numpy ndarray object or the path to a numpy binary file (.npy) containing the ndarray.
Flag specifying whether prior is a file pathname or not.
Optional dictionary object used in PsychoPy’s built-in logging system.
Optional name for the PsiHandler used in PsychoPy’s built-in logging system.
If the supplied minVal parameter implies an experimental design other than Yes/No or 2-AFC.
decrement the current intensity and reset counter
increment the current intensity and reset counter
Remove references to this handler in experiments and terminate the loop.
Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:
.addResponse(result, intensity)
.addOtherData('dataName', value)
Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial. Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.
Based on current intensity, counter of correct responses, and current direction.
Returns an intensity estimate for the provided probability.
The optional argument ‘lamb’ allows thresholds to be estimated without having to recompute the maximum likelihood lambda.
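For example, a sketch of querying thresholds after the run, reusing one lambda estimate (assumes the handler is called psi and that the methods are exposed as estimateLambda() and estimateThreshold(), matching the descriptions above):
lamb = psi.estimateLambda()                    # maximum-likelihood (location, slope) pair
t50 = psi.estimateThreshold(0.50, lamb=lamb)   # intensity at 50% performance
t75 = psi.estimateThreshold(0.75, lamb=lamb)   # intensity at 75% performance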
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
The intensity (level) of the current staircase
Advances to next trial and returns it.
Exactly like saveAsText() except that the output goes to the screen instead of a file
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored
in multiple named sheets within the file. So you could have a
single file named after your experiment and then have one worksheet
for each participant. Or you could have one file for each participant
and then multiple sheets for repeated sessions etc.
The file extension .xlsx will be added if not given already.
The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.
the name of the file to create or append. Can include relative or absolute path.
the name of the worksheet within the file
If set to True then only the data itself will be output (no additional info)
If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.
Collision method passed to handleFileCollision(). This is ignored if appendFile is True.
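Continuing the sketch above, a brief (hypothetical) example of appending one worksheet per participant to a single workbook:

staircase.saveAsExcel('facePreference',          # .xlsx is added automatically
                      sheetName='participant01',
                      appendFile=True)           # add a sheet if the file already exists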
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
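For instance, continuing the sketch above (the file name is hypothetical):

jsonString = staircase.saveAsJson(fileName=None)    # no file written; JSON returned in memory
staircase.saveAsJson('psiRun.json',
                     fileCollisionMethod='rename')  # rename rather than overwrite on collision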
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data
The name of the file, including path if needed. The extension .tsv will be added if not included.
the delimiter to be used (e.g. '\t' for tab-delimited files, ',' for csv files)
If True, prevents the output of the extraInfo provided at initialisation.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Saves the posterior array over probLambda as a pickle file with the specified name.
fileCollisionMethod (string) – Collision method passed to handleFileCollision()
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
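In other words, attach the handler with setExp(), which stores the reference safely. A minimal sketch, reusing the staircase from the examples above and a hypothetical ExperimentHandler:

exp = data.ExperimentHandler(name='psiDemo')
staircase.setExp(exp)             # correct: stored internally as a weak reference
print(staircase.getExp() is exp)  # True while the experiment handler is alive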
QuestHandler
¶Class that implements the Quest algorithm for quick measurement of psychophysical thresholds.
Uses Andrew Straw’s QUEST, which is a Python port of Denis Pelli’s Matlab code.
Measures threshold using a Weibull psychometric function. Currently, it is not possible to use a different psychometric function.
The Weibull psychometric function is given by the formula
\(\Psi(x) = \delta \gamma + (1 - \delta) [1 - (1 - \gamma)\, \exp(-10^{\beta (x - T + \epsilon)})]\)
Here, \(x\) is an intensity or a contrast (in log10 units), and \(T\) is the estimated threshold.
Quest internally shifts the psychometric function such that the intensity at the user-specified threshold performance level pThreshold (e.g., 50% in a yes-no or 75% in a 2-AFC task) is equal to 0. The parameter \(\epsilon\) is responsible for this shift, and is determined automatically based on the specified pThreshold value. It is the parameter Watson & Pelli (1983) introduced to perform measurements at the "optimal sweat factor". Assuming your QuestHandler instance is called q, you can retrieve this value via q.epsilon.
Example:
# setup display/window
...
# create stimulus
stimulus = visual.RadialStim(win=win, tex='sinXsin', size=1,
pos=[0,0], units='deg')
...
# create staircase object
# trying to find out the contrast where subject gets 63% correct
# if wanted to do a 2AFC then the defaults for pThreshold and gamma
# are good. As start value, we'll use 50% contrast, with SD = 20%
staircase = data.QuestHandler(0.5, 0.2,
pThreshold=0.63, gamma=0.01,
nTrials=20, minVal=0, maxVal=1)
...
for thisContrast in staircase:
    # setup stimulus
    stimulus.setContrast(thisContrast)
    stimulus.draw()
    win.flip()
    core.wait(0.5)
    # get response
    ...
    # inform QUEST of the response, needed to calculate next level
    staircase.addResponse(thisResp)
...
# can now access 1 of 3 suggested threshold levels
staircase.mean()
staircase.mode()
staircase.quantile(0.5) # gets the median
(A pThreshold of 0.82 is equivalent to a 3-up-1-down standard staircase, in which case you might also want gamma=0.01.)
The variable(s) nTrials and/or stopSd must be specified.
beta, delta, and gamma are the parameters of the Weibull psychometric function.
Prior threshold estimate or your initial guess threshold.
Standard deviation of your starting guess threshold. Be generous with the sd as QUEST will have trouble finding the true threshold if it’s more than one sd from your initial guess.
Your threshold criterion expressed as probability of response==1. An intensity offset is introduced into the psychometric function so that the threshold (i.e., the midpoint of the table) yields pThreshold.
The maximum number of trials to be conducted.
The minimum 5-95% confidence interval required in the threshold estimate before stopping. If both this and nTrials are specified, whichever happens first will determine when Quest will stop.
The method used to determine the next threshold to test. If you want to get a specific threshold level at the end of your staircasing, please use the quantile, mean, and mode methods directly.
Controls the steepness of the psychometric function.
The fraction of trials on which the observer presses blindly.
The fraction of trials that will generate response 1 when intensity=-Inf.
The quantization of the internal table.
The intensity difference between the largest and smallest intensity that the internal table can store. This interval will be centered on the initial guess tGuess. QUEST assumes that intensities outside of this range have zero prior probability (i.e., they are impossible).
A dictionary (typically) that will be stored along with collected data using saveAsPickle() or saveAsText() methods.
The smallest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.
The largest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.
Can supply a staircase object with intensities and results. Might be useful to give the quest algorithm more information if you have it. You can also call the importData function directly.
Additional keyword arguments will be ignored.
The additional keyword arguments **kwargs might for example be passed by the MultiStairHandler, which expects a label keyword for each staircase. These parameters are to be ignored by the StairHandler.
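For example, a sketch of seeding QUEST with previously collected data (the pilot levels and responses are hypothetical, and importData is assumed to take the intensities first and the 0/1 results second):

from psychopy import data

quest = data.QuestHandler(0.5, 0.2, pThreshold=0.63, gamma=0.01,
                          nTrials=20, minVal=0, maxVal=1)
pilotLevels = [0.6, 0.4, 0.3, 0.45]  # levels shown in an earlier (hypothetical) run
pilotResponses = [1, 1, 0, 1]        # the corresponding 0/1 responses
quest.importData(pilotLevels, pilotResponses)  # assumed argument order: intensities, results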
decrement the current intensity and reset counter
increment the current intensity and reset counter
Remove references to ourself in experiments and terminate the loop
Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:
.addResponse(result, intensity)
.addOtherData('dataName', value)
Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial
Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.
Return estimate for the 5%–95% confidence interval (CI).
If True, return the width of the confidence interval (95% - 5% percentiles). If False, return a NumPy array with estimates for the 5% and 95% boundaries.
Returns a scalar or an array of length 2.
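Continuing the quest staircase from the sketch above, and passing the boolean described here as the only argument:

ciWidth = quest.confInterval(True)    # scalar: width of the 5-95% confidence interval
ciBounds = quest.confInterval(False)  # array: the 5% and 95% boundaries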
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
import some data which wasn’t previously given to the quest algorithm
The intensity (level) of the current staircase
Advances to next trial and returns it. Updates attributes: thisTrial, thisTrialN, thisIndex, finished, intensities.
If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:
staircase = data.QuestHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff

or:

staircase = data.QuestHandler(.......)
while True:  # i.e. forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
Exactly like saveAsText() except that the output goes to the screen instead of a file
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files
(see TrialHandler.saveAsText()
) that data can be stored
in multiple named sheets within the file. So you could have a
single file named after your experiment and then have one worksheet
for each participant. Or you could have one file for each participant
and then multiple sheets for repeated sessions etc.
The file extension .xlsx will be added if not given already.
The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.
the name of the file to create or append. Can include relative or absolute path.
the name of the worksheet within the file
If set to True then only the data itself will be output (no additional info)
If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.
Collision method passed to handleFileCollision(). This is ignored if appendFile is True.
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data
The name of the file, including path if needed. The extension .tsv will be added if not included.
the delimiter to be used (e.g. '\t' for tab-delimited files, ',' for csv files)
If True, prevents the output of the extraInfo provided at initialisation.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
QuestPlusHandler
¶QUEST+ implementation. Currently only supports parameter estimation of a Weibull-shaped psychometric function.
The parameter estimates can be retrieved via the .paramEstimate attribute, which returns a dictionary whose keys correspond to the names of the estimated parameters (i.e., QuestPlusHandler.paramEstimate[‘threshold’] will provide the threshold estimate). Retrieval of the marginal posterior distributions works similarly: they can be accessed via the .posterior dictionary.
nTrials (int) – Number of trials to run.
intensityVals (collection of floats) – The complete set of possible stimulus levels. Note that the stimulus levels are not necessarily limited to intensities (as the name of this parameter implies), but they could also be contrasts, durations, weights, etc.
thresholdVals (float or collection of floats) – The complete set of possible threshold values.
slopeVals (float or collection of floats) – The complete set of possible slope values.
lowerAsymptoteVals (float or collection of floats) – The complete set of possible values of the lower asymptote. This corresponds to false-alarm rates in yes-no tasks, and to the guessing rate in n-AFC tasks. Therefore, when performing an n-AFC experiment, the collection should consist of a single value only (e.g., [0.5] for 2-AFC, [0.33] for 3-AFC, [0.25] for 4-AFC, etc.).
lapseRateVals (float or collection of floats) – The complete set of possible lapse rate values. The lapse rate defines the upper asymptote of the psychometric function, which will be at 1 - lapse rate.
responseVals (collection) – The complete set of possible response outcomes. Currently, only two outcomes are supported: the first element must correspond to a successful response / stimulus detection, and the second one to an unsuccessful or incorrect response. For example, in a yes-no task, one would use [‘Yes’, ‘No’], and in an n-AFC task, [‘Correct’, ‘Incorrect’]; or, alternatively, the less verbose [1, 0] in both cases.
prior (dict of floats) – The prior probabilities to assign to the parameter values. The dictionary keys correspond to the respective parameters: threshold, slope, lowerAsymptote, lapseRate.
startIntensity (float) – The very first intensity (or stimulus level) to present.
psychometricFunc ({'weibull'}) – The psychometric function to fit. Currently, only the Weibull function is supported.
stimScale ({'log10', 'dB', 'linear'}) – The scale on which the stimulus intensities (or stimulus levels) are provided. Currently supported are the decadic logarithm, log10; decibels, dB; and a linear scale, linear.
stimSelectionMethod ({'minEntropy', 'minNEntropy'}) – How to select the next stimulus. minEntropy will select the stimulus that will minimize the expected entropy. minNEntropy will randomly pick a stimulus from the set of stimuli that will produce the smallest, 2nd-smallest, …, N-smallest entropy. This can be used to ensure some variation in the stimulus selection (and subsequent presentation) procedure. The number N will then have to be specified via the stimSelectionOptions parameter.
stimSelectionOptions (dict) – This parameter further controls how to select the next stimulus in case stimSelectionMethod=minNEntropy. The dictionary supports two keys: N and maxConsecutiveReps. N defines the number of “best” stimuli (i.e., those which produce the smallest N expected entropies) from which to randomly select a stimulus for presentation in the next trial. maxConsecutiveReps defines how many times the exact same stimulus can be presented on consecutive trials. For example, to randomly pick a stimulus from those which will produce the 4 smallest expected entropies, and to allow the same stimulus to be presented on two consecutive trials max, use stimSelectionOptions=dict(N=4, maxConsecutiveReps=2). To achieve reproducible results, you may pass a seed to the random number generator via the randomSeed key.
paramEstimationMethod ({'mean', 'mode'}) – How to calculate the final parameter estimate. mean returns the mean of each parameter, weighted by their respective posterior probabilities. mode returns the parameters at the peak of the posterior distribution.
extraInfo (dict) – Additional information to store along the actual QUEST+ staircase data.
name (str) – The name of the QUEST+ staircase object. This will appear in the PsychoPy logs.
label (str) – Only used by MultiStairHandler, and otherwise ignored.
kwargs (dict) – Additional keyword arguments. These might be passed, for example, through a MultiStairHandler, and will be ignored. A warning will be emitted whenever additional keyword arguments have been passed.
RuntimeWarning – If an unknown keyword argument was passed.
Notes
The QUEST+ algorithm was first described by [1].
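As a minimal sketch using the parameters described above (the grids and values below are illustrative, not recommendations):

import numpy as np
from psychopy import data

qp = data.QuestPlusHandler(nTrials=20,
                           intensityVals=np.linspace(-3.5, -0.5, 31),  # log10 contrast
                           thresholdVals=np.linspace(-3.5, -0.5, 31),
                           slopeVals=np.linspace(0.5, 15, 30),
                           lowerAsymptoteVals=[0.5],                   # 2-AFC guessing rate
                           lapseRateVals=np.linspace(0, 0.05, 6),
                           responseVals=[1, 0],
                           stimScale='log10')

for intensity in qp:
    response = 1           # illustrative only: pretend every trial was correct
    qp.addResponse(response)

print(qp.paramEstimate['threshold'])  # point estimate of the threshold
print(qp.posterior['threshold'])      # marginal posterior over threshold values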
decrement the current intensity and reset counter
increment the current intensity and reset counter
Remove references to ourself in experiments and terminate the loop
Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:
.addResponse(result, intensity)
.addOtherData('dataName', value)
Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the staircase
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial.
This is essential to advance the staircase to a new intensity level!
Supplying an intensity value here indicates that you did not use the recommended intensity in your last trial and the staircase will replace its recorded value with the one you supplied here.
Based on current intensity, counter of correct responses, and current direction.
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
The intensity (level) of the current staircase
Advances to next trial and returns it. Updates attributes: thisTrial, thisTrialN and thisIndex.
If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:
staircase = data.StairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff

or:

staircase = data.StairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
The estimated parameters of the psychometric function.
A dictionary whose keys correspond to the names of the estimated parameters.
dict of floats
The marginal posterior distributions.
A dictionary whose keys correspond to the names of the estimated parameters.
dict of np.ndarrays
Exactly like saveAsText() except that the output goes to the screen instead of a file
The marginal prior distributions.
A dictionary whose keys correspond to the names of the parameters.
dict of np.ndarrays
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files
(see TrialHandler.saveAsText()
) that data can be stored
in multiple named sheets within the file. So you could have a
single file named after your experiment and then have one worksheet
for each participant. Or you could have one file for each participant
and then multiple sheets for repeated sessions etc.
The file extension .xlsx will be added if not given already.
The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase / intensity level on every trial and the corresponding responses of the participant on every trial.
the name of the file to create or append. Can include relative or absolute path.
the name of the worksheet within the file
If set to True then only the data itself will be output (no additional info)
If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.
Collision method passed to handleFileCollision(). This is ignored if appendFile is True.
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write a text file with the data
The name of the file, including path if needed. The extension .tsv will be added if not included.
the delimiter to be used (e.g. '\t' for tab-delimited files, ',' for csv files)
If True, prevents the output of the extraInfo provided at initialisation.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
MultiStairHandler
¶A Handler to allow easy interleaved staircase procedures (simple or QUEST).
Parameters for the staircases, as used by the relevant StairHandler or QuestHandler (e.g. the startVal, minVal, maxVal…), should be specified in the conditions list and may vary between each staircase. In particular, the conditions must include a startVal (because this is a required argument to the above handlers), a label to tag the staircase and a startValSd (only for QUEST staircases). Any parameters not specified in the conditions file will revert to the default for that individual handler.
If you need to customize the behaviour further you may want to look at the recipe on Coder - interleave staircases.
Use a StairHandler, a QuestHandler, or a QuestPlusHandler, depending on the stairType requested.
If random, stairs are shuffled in each repeat but not randomized more than that (so you can't have 3 repeats of the same staircase in a row unless it's the only one still running). If fullRandom, the staircase order is "fully" randomized, meaning that, theoretically, a large number of subsequent trials could invoke the same staircase repeatedly. If sequential, don't perform any randomization.
Can be used to control parameters for the different staircases. Can be imported from an Excel file using psychopy.data.importConditions. MUST include the keys 'startVal', 'label' and (for QUEST only) 'startValSd'. The 'label' will be used in data file saving so should be unique. See Example Usage below.
Minimum trials to run (but may take more if the staircase hasn't also met its minimal reversals). See StairHandler.
The seed with which to initialize the random number generator (RNG). If None (default), do not initialize the RNG with a specific value.
Example usage:
conditions = [
    {'label': 'low', 'startVal': 0.1, 'ori': 45},
    {'label': 'high', 'startVal': 0.8, 'ori': 45},
    {'label': 'low', 'startVal': 0.1, 'ori': 90},
    {'label': 'high', 'startVal': 0.8, 'ori': 90},
]
stairs = data.MultiStairHandler(conditions=conditions, nTrials=50)
for thisIntensity, thisCondition in stairs:
    thisOri = thisCondition['ori']
    # do something with thisIntensity and thisOri
    stairs.addResponse(correctIncorrect)  # this is ESSENTIAL
# save data as multiple formats
stairs.saveAsExcel(fileName)   # easy to browse
stairs.saveAsPickle(fileName)  # contains more info
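The conditions list can equally be read from a spreadsheet, as noted above; a short sketch (the file name is hypothetical, and the file must provide at least 'label' and 'startVal' columns):

from psychopy import data

conditions = data.importConditions('staircaseConditions.xlsx')
stairs = data.MultiStairHandler(conditions=conditions, nTrials=50)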
ValueError – If an unknown randomization option was passed via the method keyword argument.
Create a new iteration of the running staircases for this pass.
This is not normally needed by the user - it gets called at __init__ and every time that next() runs out of trials for this pass.
Remove references to ourself in experiments and terminate the loop
Abort the current trial (staircase).
Calling this during an experiment aborts the staircase used on the current trial. That staircase will be reshuffled into the available staircases depending on the action parameter.
action (str) – Action to take with the aborted trial. Can be either of ‘random’, or ‘append’. The default action is ‘random’.
Notes
When using action=’random’, the RNG state for the trial handler is not used.
Deprecated 1.79.00: It was ambiguous whether you were adding the response (0 or 1) or some other data concerning the trial, so there is now a pair of explicit methods:
.addResponse(result, intensity) – to add the trial value that controls the staircase
.addOtherData('dataName', value) – to add data that does not control the staircase
Add some data about the current trial that will not be used to control the staircase(s) such as reaction time data
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial
This is essential to advance the staircase to a new intensity level!
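Continuing the example above, the intended per-trial pattern looks like this (the reaction-time value is hypothetical):

for intensity, condition in stairs:
    # ... present the stimulus and collect the response ...
    stairs.addOtherData('RT', 0.523)  # stored with the data, does not drive the staircase
    stairs.addResponse(1)             # ESSENTIAL: 1 = correct/detected, 0 = incorrect/missed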
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
Attempts to determine the path of the script that created this data file and returns both the path to that script and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath (fine from a standard python script).
The intensity (level) of the current staircase
Advances to next trial and returns it.
If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:
staircase = data.MultiStairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff here for the trial

or:

staircase = data.MultiStairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
Write the data to the standard output stream
the delimiter to be used (e.g. '\t' for tab-delimited files, ',' for csv files)
If True, prevents the output of the extraInfo provided at initialisation.
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that the data from each staircase will be saved in the same file, with the sheet name coming from the 'label' given in the dictionary of conditions during initialisation of the Handler.
The file extension .xlsx will be added if not given already.
The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of reversal indices (trial numbers), the raw staircase/intensity level on every trial and the corresponding responses of the participant on every trial.
the name of the file to create or append. Can include relative or absolute path
If set to True then only the data itself will be output (no additional info)
If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.
Collision method passed to handleFileCollision(). This is ignored if append is True.
Serialize the object to the JSON format.
fileName (string, or None) – the name of the file to create or append. Can include a relative or absolute path. If None, will not write to a file, but return an in-memory JSON object.
encoding (string, optional) – The encoding to use when writing the file.
fileCollisionMethod (string) – Collision method passed to handleFileCollision(). Can be either of 'rename', 'overwrite', or 'fail'.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before serializing because loading the created JSON file would sometimes fail otherwise.
Saves a copy of self (with data) to a pickle file.
This can be reloaded later and further analyses carried out.
fileCollisionMethod: Collision method passed to handleFileCollision().
Write out text files with the data.
For MultiStairHandler this will output one file for each staircase that was run, with _label added to the fileName that you specify above (label comes from the condition dictionary you specified when you created the Handler).
The name of the file, including path if needed. The extension .tsv will be added if not included.
the delimiter to be used (e.g. '\t' for tab-delimited files, ',' for csv files)
If True, prevents the output of the extraInfo provided at initialisation.
Collision method passed to handleFileCollision().
The encoding to use when saving the file. Defaults to utf-8-sig.
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
because it needs to be performed using the weakref module.
FitWeibull
¶Fit a Weibull function (either 2AFC or YN) of the form:
y = chance + (1.0-chance)*(1-exp( -(xx/alpha)**(beta) ))
and with inverse:
x = alpha * (-log((1.0-y)/(1-chance)))**(1.0/beta)
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [alpha, beta]).
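For example, a sketch of fitting hypothetical 2-AFC data (the data values are made up, and expectedMin=0.5 is assumed to be the keyword for the chance level, as for the other fits that take an expected minimum):

import numpy as np
from psychopy import data

levels = np.array([0.05, 0.1, 0.2, 0.4, 0.8])        # hypothetical stimulus levels
pCorrect = np.array([0.52, 0.61, 0.75, 0.91, 0.98])  # hypothetical proportion correct

fit = data.FitWeibull(levels, pCorrect, expectedMin=0.5)  # assumed keyword for 'chance'
alpha, beta = fit.params
print(fit.eval(0.3))      # predicted performance at a new level
print(fit.inverse(0.75))  # the level expected to give 75% correct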
The Fit class that derives this needs to specify its _evalFunction
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
FitLogistic
¶Fit a Logistic function (either 2AFC or YN) of the form:
y = chance + (1-chance)/(1+exp((PSE-xx)*JND))
and with inverse:
x = PSE - log((1-chance)/(yy-chance) - 1)/JND
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [PSE, JND]).
The Fit class that derives this needs to specify its _evalFunction
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
FitNakaRushton
¶Fit a Naka-Rushton function of the form:
yy = rMin + (rMax-rMin) * xx**n/(xx**n+c50**n)
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [rMin, rMax, c50, n]).
Note that this differs from most of the other functions in not using a value for the expected minimum. Rather, it fits this as one of the parameters of the model.
The Fit class that derives this needs to specify its _evalFunction
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
FitCumNormal
¶Fit a Cumulative Normal function (aka error function or erf) of the form:
y = chance + (1-chance)*((special.erf((xx-xShift)/(sqrt(2)*sd))+1)*0.5)
and with inverse:
x = xShift+sqrt(2)*sd*(erfinv(((yy-chance)/(1-chance)-.5)*2))
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [centre, sd] for the Gaussian distribution forming the cumulative)
NB: Prior to version 1.74 the parameters had a different meaning, relating to the xShift and slope of the function (similar to 1/sd). Although that is more in keeping with the parameters for the Weibull fit, for instance, it is less in keeping with standard expectations of normal (Gaussian) distributions, so in version 1.74.00 the parameters became the [centre, sd] of the normal distribution.
The Fit class that derives this needs to specify its _evalFunction
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
importConditions()
¶Imports a list of conditions from an .xlsx, .csv, or .pkl file
The output is suitable as an input to TrialHandler trialList or to MultiStairHandler as a conditions list.
If fileName ends with:
.csv: import as a comma-separated-value file (header + row x col)
.xlsx: import as an Excel (xlsx) file. No support for the older .xls format is planned.
.pkl: import from a pickle file as a list of lists (header + row x col)
The file should contain one row per type of trial needed and one column for each parameter that defines the trial type. The first row should give parameter names, which should:
be unique
begin with a letter (upper or lower case)
contain no spaces or other punctuation (underscores are permitted)
selection is used to select a subset of condition indices to be used. It can be a list/array of indices, a python slice object or a string to be parsed as either option. e.g.:
“1,2,4” or [1,2,4] or (1,2,4) are the same
“2:5” # 2, 3, 4 (doesn’t include last whole value)
“-10:2:” # tenth from last to the last in steps of 2
slice(-10, 2, None) # the same as above
random(5) * 8 # five random vals 0-7
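Putting that together, a brief sketch (the file name is hypothetical):

from psychopy import data

allConds = data.importConditions('conditions.xlsx')                     # every row
someConds = data.importConditions('conditions.xlsx', selection='0:4')   # rows 0-3 only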
functionFromStaircase()
¶Create a psychometric function by binning data from a staircase procedure. Although the default is 10 bins, Jon now always uses 'unique' bins (fewer bins look pretty but lead to errors in slope estimation).
usage:
intensity, meanCorrect, n = functionFromStaircase(intensities,
responses, bins)
intensities – a list (or array) of intensities to be binned
responses – a list of 0,1 values, each corresponding to the equivalent intensity value
bins – can be an integer (giving that number of bins) or 'unique' (each bin is made from all data for exactly one intensity value)
The function returns:
intensity – a numpy array of intensity values (where each is the center of an intensity bin)
meanCorrect – a numpy array of mean % correct in each bin
n – a numpy array of the number of responses contributing to each mean
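For example, a self-contained sketch with made-up staircase output:

from psychopy import data

# hypothetical staircase output: the level shown and the 0/1 response on each trial
intensities = [0.8, 0.6, 0.4, 0.4, 0.3, 0.3, 0.4, 0.3, 0.2, 0.3]
responses = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]

levels, pCorrect, n = data.functionFromStaircase(intensities, responses, 'unique')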
bootStraps()
¶Create a list of n bootstrapped resamples of the data
SLOW IMPLEMENTATION (Python for-loop)
out = bootStraps(dat, n=1)
dat – an NxM or 1xN array (each row is a different condition, each column is a different trial)
n – the number of bootstrapped resamples to create
out – an array with dim[0]=conditions, dim[1]=trials, dim[2]=resamples
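A short sketch (the data array is random, purely to show the shapes involved):

import numpy as np
from psychopy import data

dat = np.random.rand(1, 20)             # 1 condition x 20 trials (hypothetical data)
resamples = data.bootStraps(dat, n=100)
print(resamples.shape)                  # expected (1, 20, 100): conditions x trials x resamples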