LabBench, Rev. 5 is a major release that is incompatible with LabBench 4, which means that protocols written for LabBench 4 cannot be run unmodified by LabBench 5.
The reason for this decision was inconsistencies in the LabBench Language in which protocols are written. These have now been corrected, resulting in a consistent language that makes it easier to write protocols for scientific studies. The changes to the language are partly stylistic and conceptual, and partly practical. Certain tests with confusing names have been renamed to names that better explain the experimental procedures they implement, and constructs have been added to the language that make it faster and less verbose to write protocols. The template system for the generation of tests has also been greatly expanded and made more consistent. This template system makes it possible to write complex multi-session protocols with significantly less code. LabBench 4 protocols can, with minor modifications, be made compatible with LabBench 5.
A second significant change is to the license system. Previously, LabBench was divided into modules that had to be licensed individually. In LabBench 5, there are no modules; when you have licensed LabBench, you have access to the entire program and all of its capabilities.
However, the change in the license system means that license codes for LabBench 4 will not work with LabBench 5. If you have a LabBench 4 license, then please write to help@labbench.io to have your license code converted. For LabBench 4 licensees this is a free upgrade.
LabBench, Rev. 4.7 is a minor release that is compatible with LabBench 4.6.
LabBench, Rev. 4.6 is a minor release that is compatible with LabBench 4.5.
It is now possible to place time constraints on tests, in the form of a <time-constraint> element:
<time-constraint
test-id="TestID"
min="60"
max="120"
notification="true"
time-reference="end"/>
This time constraint enforces that the test can only be started between min and max seconds after the test with test ID test-id has either been started or completed. If only min is specified, the test can only be started after min seconds have passed, but after that it can be started at any time. If only max is specified, the test must be started within max seconds, and after that it cannot be started. Whether the constraint refers to the start or the completion of the test identified by test-id is determined by the time-reference attribute. The notification attribute enables a beep that sounds when it becomes possible to start the time-constrained test.
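As a sketch, the gating logic might look like this in Python (illustrative only; the function name and structure are hypothetical, not LabBench's implementation):

```python
def can_start(elapsed, minimum=None, maximum=None):
    """Decide whether a time-constrained test may be started.

    elapsed: seconds since the referenced test started or completed
             (selected by the time-reference attribute).
    minimum/maximum: the min/max attributes of <time-constraint>,
                     either of which may be omitted (None).
    """
    if minimum is not None and elapsed < minimum:
        return False  # too early: min seconds have not yet passed
    if maximum is not None and elapsed > maximum:
        return False  # too late: the max window has closed
    return True

# With min=60 and max=120 the test may only start in the 60-120 s window.
assert not can_start(30, minimum=60, maximum=120)
assert can_start(90, minimum=60, maximum=120)
assert not can_start(150, minimum=60, maximum=120)
# With only min specified, the test can start at any time after 60 s.
assert can_start(10_000, minimum=60)
```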
Previously, answers to survey questions had to be accessed with an index notation in the form of TestID['QuestionID']. With LabBench 4.6, answers can now be accessed as TestID.QuestionID.
A Random Toolkit has been added to the Test Context (tc), as tc.Random. Currently, two functions are available:
tc.Random.Permutate(length): returns a permuted array with indexes from 0 to length - 1.
tc.Random.LatinShuffle(blockNo, length): returns the blockNo row from a Latin square, where length is the length of the rows.
LabBench, Rev. 4.5 is a minor release that is compatible with LabBench 4.4.
Previously, if the same test was needed multiple times in a protocol, it would have to be duplicated each time. This resulted in very long and verbose protocols that, in some cases, could reach thousands of lines of code.
A system for creating tests from test templates has been implemented. Below is the definition of a protocol that contains four sessions (SCREENING, SES01, SES02, SES03), each containing the same number of tests. Each session starts with a configuration, followed by the application of a pruritogen and three assessments of the evoked itch.
<tests>
<foreach variable="session" in="sessions">
<meta-survey-constructor
ID="var: '{id}'.format(id = SessionID)"
name="var: '{name}: Configuration'.format(name = SessionName)"
session="var: '{id}'.format(id = SessionID)"
template="configuration">
<variables>
<string value="var: session.ID" name="SessionID" />
<string value="var: session.Name" name="SessionName" />
<string value="var: session.Dependency" name="Dependency" />
</variables>
</meta-survey-constructor>
<meta-survey-constructor
ID="var: '{id}APPLICATION'.format(id = SessionID)"
name="var: '{name}: Application'.format(name = SessionName)"
session="var: '{id}'.format(id = SessionID)"
template="application">
<variables>
<string value="var: session.ID" name="SessionID" />
<string value="var: session.Name" name="SessionName" />
</variables>
</meta-survey-constructor>
<sequence type="random" offset="SubjectNumber">
<foreach variable="m" in="measurements">
<meta-survey-constructor
ID="var: '{sid}{id}'.format(sid = SessionID, id = m.ID)"
name="var: '{sid}: {name}'.format(sid = SessionName, name = m.Name)"
session="var: '{id}'.format(id = SessionID)"
template="nrsRating">
<variables>
<string value="var: session.ID" name="SessionID" />
<string value="var: session.Name" name="SessionName" />
<string value="var: m.Instruction" name="Instruction" />
</variables>
</meta-survey-constructor>
</foreach>
</sequence>
</foreach>
</tests>
With the new test template system, this can be done with 41 lines of code, whereas previously the same protocol took ~1500 lines of code to define. As shown in the example, this test templating system also allows for the generation of a series of tests with <foreach> loop elements and the randomization of tests with <sequence> elements.
It is now possible to structure protocols into sessions:
<sessions>
<session ID="SCREENING" name="Screening" />
<session ID="SES01" name="Session 1" />
<session ID="SES02" name="Session 2" />
</sessions>
It is also possible to specify, for each test in the protocol, which session it belongs to. When sessions are defined, the startup wizard will ask which session is currently being performed, and LabBench Runner will then only show the tests belonging to that session in the protocol view.
The session definition makes long protocols with multiple sessions much simpler for the operator to perform, as they will only see the tests relevant to the current session instead of all the tests in the protocol.
Additional response tasks have been implemented for the threshold estimation test:
It is now possible to add instructions for subjects to any test in a protocol. This is done by defining a test property:
<properties>
<subject-instructions
experimental-setup-id="image"
default="AlloknesisInstructionVAS" />
</properties>
This <subject-instructions> test property specifies an image to display to the subject. It requires that an instrument named SubjectInstructions of type ImageDisplay is assigned to the test in the device-mapping section of the experimental setup.
<device-assignment
device-id="display.image"
test-type="meta-survey"
instrument-name="SubjectInstructions" />
LabBench, Rev. 4.4 is a minor release that is compatible with LabBench 4.3.
It is now possible to invalidate the results of dependent tests, which means that if a test is rerun, the results of its dependent tests can be discarded. This is relevant, for example, in cuff pressure algometry protocols for tests such as Temporal Summation or Conditioned Pain Modulation, which depend on the Pain Tolerance Threshold determined by Stimulus Response tests. If these tests are rerun and a new Pain Tolerance Threshold is determined, that will invalidate the results of the Temporal Summation and Conditioned Pain Modulation tests.
An example of such a dependency is provided below:
<dependencies>
<dependency ID="SR2" virtual="false" />
</dependencies>
Whether or not the result of a dependent test is discarded is controlled by the virtual attribute on the dependency element. If this attribute is set to false, the result of the test will be discarded if the SR2 test is rerun. If this is not the intended behavior, the dependency can be declared as virtual by setting the virtual attribute to true. The default value of the virtual attribute, if not specified, is false.
LabBench, Rev. 4.3 is a minor release that is compatible with LabBench 4.2.
The following results of the response recording test are now written to the session log:
User access permissions have been reworked to allow Operators access to the device page. This allows them to resolve a serial port name change without having to request help from a Principal Investigator or Administrator.
A new foreach construct has been added to the PDF export session actions, which allows for iterating over a collection of items. This significantly reduces the coding effort and code size of PDF export actions.
<foreach variable="result" in="Results">
<cell><text
style="tblcell"
value="dynamic: result.ID"/>
</cell>
<cell><text
style="tblcell"
value="dynamic: result.RecordingTime.ToString('yyyy-MM-dd') if result.Completed else 'MISSING'"/>
</cell>
<cell><text
style="tblcell"
value="dynamic: result.RecordingTime.ToString('HH:mm') if result.Completed else 'MISSING'"/>
</cell>
<cell><text
style="tblcell"
value="dynamic: result.RecordingEndTime.ToString('HH:mm') if result.Completed else 'MISSING'"/>
</cell>
<cell><text style="tblcell" value="dynamic: 'YES' if result.Completed else 'NO'"/></cell>
<cell><text
style="tblcell"
value="dynamic: ('YES' if result.Iteration > 1 else 'NO') if result.Completed else 'MISSING'"/>
</cell>
</foreach>
Currently, this construct is available for tables.
LabBench, Rev. 4.2 is a minor release that is compatible with LabBench 4.1.
Log messages that occur during sessions are now saved to a session log.
It is now possible to require the operator to write a log message before rerunning tests is allowed. The message displayed to the operator is configurable in the protocol.
A copy post-session action has been implemented. This post-session action allows files generated by other post-session actions to be copied to other folders for replication.
An export session log post-session action has been implemented. This post-session action will export the session log as a PDF file.
LabBench, Rev. 4.1 is a minor release that is compatible with LabBench 4.0.
Toolkits are a new concept in LabBench intended to provide easy access to LabBench functionality from Python scripts embedded in protocols.
The psychophysics toolkit provides access to Psychometric Functions and Adaptive Methods for estimating these functions. It is accessed as a subcomponent of the Test Context passed into all callable functions as the 'tc' parameter.
Here is an example of how this toolkit is used to create a Psi Method algorithm for estimating the stop-signal delay in a stop-signal task:
self.method = tc.Create(tc.Psychophysics.PsiMethod()
.NumberOfTrials(tc.Trials)
.Function(tc.Psychophysics.Functions.Quick(Beta=1, Lambda=0.02, Gamma=0))
.Alpha(X0=tc.AlphaX0,X1=1.0,N = tc.AlphaN)
.Beta(X0=tc.BetaX0,X1=tc.BetaX1,N = tc.BetaN)
.Intensity(X0 = tc.IntensityX0,X1 = 1.0,N = tc.IntensityN))
The Waveforms Toolkit provides the ability to create waveforms from the LabBench Waveforms library. It is accessed as tc.Waveforms.
The PDF Export Post-Session Action allows you to create PDF files from the data recorded in a session.
The Script Export Post-Session Action allows you to run a single script. These actions allow you to create figures from the data recorded in a session and save them to disk.
The ScottPlot library is now preloaded into the Python scripting environment, so it is possible to create ScottPlot plots from Python scripts.
Previously, the Image Display could only display static images indefinitely or for a specified period. This limitation made it challenging to implement the sequences of images that are required, for example, in a Stop Signal Task.
The Image Display has been updated to display sequences of images, where each step in the sequence can either be a static image or a function that is called. By calling a function, it is possible to perform additional steps at that point in the sequence, such as collecting responses and, based on the response, displaying different images.
Below is an example of the implementation of Go and Stop signals in a stop signal task:
def Stimulate(tc, x):
    display = tc.Devices.ImageDisplay

    if tc.StimulusName == "STOP":
        display.Run(display.Sequence(tc.StopTask)
                    .Display(tc.Images.FixationCross, tc.FixationDelay)
                    .Run(Go)
                    .Run(Stop)
                    .Display(tc.Images.FixationCross, tc.FeedbackDelay)
                    .Run(Feedback))
    elif tc.StimulusName == "GO":
        display.Run(display.Sequence(tc.GoTask)
                    .Display(tc.Images.FixationCross, tc.FixationDelay)
                    .Run(Go)
                    .Display(tc.Images.FixationCross, tc.FeedbackDelay)
                    .Run(Feedback))
    else:
        Log.Error("Unknown stimulus: {name}".format(name = tc.StimulusName))

    return True
The evoked potentials test was initially intended to provide the capability to generate stimuli for electrically evoked, auditory evoked, and pressure evoked potentials. It allows for the presentation of stimulation patterns and stimuli for which the stimulation pattern can be determined when the test is started.
However, it has proved versatile and has been used to implement Psychophysiological Research Paradigms effectively, for which it was not initially intended. Examples are Flanker Tasks, Stroop Tasks, Go/NoGo Tasks, Stop Signal Tasks, etc.
However, as the stimulation pattern was calculated when the test started, it was impossible to insert the pauses that are often required in these tasks.
For this purpose, a pause attribute has been implemented on stimulation slots in the stimulation patterns. If this attribute is set to true, then the timing engine of the evoked potentials test will be paused until the operator presses a continue button that will be displayed in the UI of the test if this attribute is present in a stimulation pattern. Consequently, with this new attribute, it is possible to insert pauses into stimulation patterns.
LabBench, Rev. 4.0 is a major release that is incompatible with LabBench 3.x. A major release was required because the focus of the release was to remove inconsistencies in the LabBench Language, which resulted in changes to the format of the Protocol Definition Files that are not backwards compatible.
To simplify protocol development, the Protocol Definition File (*.prtx) is now part of the Experimental Definition File (*.expx), meaning that a protocol can be specified in a single file instead of requiring two separate files.
The new format of the Experimental Definition File (*.expx):
<?xml version="1.0" encoding="utf-8" ?>
<experiment xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://labbench.io ..\experiment.xsd">
<description />
<experimental-setup>
<description />
<devices>
</devices>
<device-mapping>
</device-mapping>
</experimental-setup>
<protocol>
<!-- Content of the prtx file -->
</protocol>
<post-actions>
</post-actions>
</experiment>
Where the content of the <protocol></protocol> element is the content of the old Protocol Definition File (*.prtx).
LabBench Designer will now check if protocols are compatible with the current version of LabBench, and will require that the repository index file (repository.xml) specifies, with the new labbench-version attribute, the version of LabBench for which a protocol was originally written:
<protocol id="buttonTestPad"
name="LabBench PAD: Test of functionality"
labbench-version="4.0.0"
category="LabBench PAD;Test Protocol" />
If the protocol is written for a version of LabBench that is not compatible with the current version of LabBench then the protocol will not be shown in LabBench Designer.
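A plausible sketch of such a compatibility check, assuming the convention described in these release notes that minor releases are backwards compatible while major releases are not (the function is hypothetical, not LabBench's actual check):

```python
def is_compatible(labbench_version, protocol_version):
    """Check whether a protocol written for protocol_version can run on
    labbench_version. Assumption: same major version is required, and
    the protocol must not target a newer release than the current one."""
    cur = tuple(int(p) for p in labbench_version.split("."))
    req = tuple(int(p) for p in protocol_version.split("."))
    return cur[0] == req[0] and cur >= req

assert is_compatible("4.6.0", "4.0.0")       # minor releases are compatible
assert not is_compatible("5.0.0", "4.0.0")   # major releases are not
assert not is_compatible("4.0.0", "4.6.0")   # protocol needs a newer LabBench
```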
Previously, the definition of stimuli and trigger sequences was implemented by each test individually. This resulted in stimulus and trigger sequence specifications being inconsistent from one test to another. In LabBench 4, this has been completely rewritten so that stimulus and trigger sequences are now implemented by a common Stimulation Engine that is used by all tests. With this change, stimuli and trigger sequences are specified in the same format regardless of which test they are specified for.
Below is an example of the new format for the specification of stimuli:
<stimulation>
<stimulus>
<repeated Tperiod="10"
Tdelay="1"
N="4">
<sine Frequency="1000"
Is="x" Ts="4"/>
</repeated>
</stimulus>
</stimulation>
and of the new format for specification of trigger sequences:
<triggers start-triggger="internal">
<combined-triggers>
<trigger duration="1">
<code output="Code"
value="64" />
</trigger>
<repeated-trigger Tperiod="50"
Tdelay="20"
N="4">
<repeated-trigger Tperiod="5"
N="5">
<trigger duration="1">
<code output="Digital"
value="1" />
<code output="Stimulus"
value="1" />
</trigger>
</repeated-trigger>
</repeated-trigger>
</combined-triggers>
</triggers>
In the example above, the output attribute controls the output connector that the trigger is generated on. For the LabBench I/O:
Code: The trigger will be generated on the INTERFACE port on the back of the LabBench I/O.
Digital: The trigger will be generated on the TRIG OUT port on the back of the LabBench I/O.
Stimulus: The trigger will be generated on the STIMULATOR T port on the front of the LabBench I/O.
The LabBench Language allows stimuli to be specified in XML without the need for programming. However, this approach only works when a single-modality stimulus is required, such as electrical, thermal, or similar, and when the test directly supports the interface implemented by your device.
For example, the Threshold Estimation test can be used with any stimulator that implements the StimulusGenerator interface. Devices that implement this interface include the LabBench I/O, NI DAQmx cards, and sound cards. However, if you wanted to use the Threshold Estimation test to, for example, estimate pressure pain thresholds with the LabBench CPAR device, this was previously impossible, as this device does not implement the StimulusGenerator interface.
In LabBench 4, this is now possible with the use of custom stimulations. With custom stimulations, stimuli are generated by calling a function in a Python script instead of being specified directly in XML in the Protocol Definition File. As an example, say you want to estimate the pressure required to evoke a VAS 3 rating for rectangular pressure stimuli of 1 s in duration. To do this, you can use a custom stimulus in a Threshold Estimation test:
<stimulation-scripts initialize="True"
stimulate="func: Script.Stimulate(tc,x)"
stimulus-description="Pressure"
stimulus-unit="kPa">
<instrument name="Algometer"
interface="pressure-algometer"
required="true"/>
</stimulation-scripts>
with the following Python function:
def Stimulate(tc, x):
    # The device is available in tc.Devices as Algometer because of the <instrument> element.
    algometer = tc.Devices.Algometer
    chan = algometer.Channels[0]

    # Create a 1 s rectangular pressure stimulus with intensity x
    waveform = (chan
                .CreateWaveform()  # Create an empty waveform
                .Step(x, 1))       # Step pressure to x for 1 s
    chan.SetStimulus(1, waveform)

    # Connect waveform channel 0 to Pressure Outlet 1
    algometer.ConfigurePressureOutput(0, ChannelID.CH01)
    algometer.StartStimulation(AlgometerStopCriterion.STOP_CRITERION_ON_BUTTON_PRESSED, True)

    return True
Other potential uses for custom stimulations are multi-modal stimuli, where, for example, electrical and pressure stimuli are combined in simultaneous stimulations; something that is not supported by default by tests such as the Threshold Estimation or Evoked Potentials tests, but which becomes possible with custom stimulations.
Test events are a mechanism, similar to custom stimulations, that has been implemented to extend tests with functionality that lies outside their originally intended scope. Test events are scripts that are executed when a test is started, completed, or aborted, and they can be defined by:
<test-events start="func: Functions.Condition(tc)"
abort="func: Functions.Stop(tc)"
complete="func: Functions.Stop(tc)">
<instrument interface="pressure-algometer"
name="Algometer"
required="true"/>
</test-events>
The example above is taken from a protocol where Somatosensory Evoked Potentials (SEPs) are conditioned by a constant pressure stimulus delivered by a LabBench CPAR device. The Evoked Potentials test enables a set of stimuli to be generated according to a stimulation pattern and is thus suited for SEPs. However, the built-in functionality of this test does not allow a conditioning stimulus to be generated while the test is running. To enable this, test events are used, with the following scripts for the start, abort, and complete test events above:
def Condition(tc):
    algometer = tc.Devices.Algometer
    chan = algometer.Channels[0]
    chan.SetStimulus(1, chan.CreateWaveform()
                         .Step(tc.SR.PTT, 9.9 * 60))
    algometer.ConfigurePressureOutput(0, ChannelID.CH01)
    algometer.StartStimulation(AlgometerStopCriterion.STOP_CRITERION_ON_BUTTON_PRESSED, True)
    Log.Information("Starting conditioning: {intensity}", tc.SR.PTT)
    return True

def Stop(tc):
    algometer = tc.Devices.Algometer
    algometer.StopStimulation()
    return True
The Condition() function starts a 9.9 min long pressure stimulus that conditions the SEPs. This pressure stimulation is stopped in the Stop() function when the test is either completed or aborted.
In this case, the test events are used to deliver stimuli; however, they have also been used for other applications, such as:
Support has been added for questionnaires to be shown on an external monitor, and for a joystick to be used by the subject to answer the following types of questions:
This is a new test that allows a set of stimuli to be presented to a subject according to a stimulation pattern. Stimulation patterns can be created as a composition of sequences, as long as their timing can be determined when the test is started. This means that irregular stimulus patterns, for example with bursts of stimuli, can be specified; however, it is not possible to create patterns that are non-deterministic, such as a pattern that includes a wait period depending on subject or operator input.
Below, is an example of a stimulus set for a classical Oddball paradigm:
<stimulation-pattern time-base="seconds">
<uniformly-distributed-sequence iterations="NumberOfSets * NumberOfStimuli"
minTperiod="1.5"
maxTperiod="2.5"/>
</stimulation-pattern>
In which the stimulus set, which consists of NumberOfStimuli stimuli, is presented with a uniformly distributed inter-stimulus interval of 1.5 s to 2.5 s. The stimulus set is presented NumberOfSets times. NumberOfStimuli is a variable that is automatically added by the test to make it easier to define stimulation patterns, and NumberOfSets is a variable that is defined in the <defines> section of the protocol.
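The timing produced by such a pattern can be sketched as follows (illustrative; this is not LabBench's timing engine):

```python
import random

def uniformly_distributed_sequence(iterations, min_t, max_t, seed=None):
    """Generate stimulus onset times (in seconds) with inter-stimulus
    intervals drawn uniformly from [min_t, max_t]."""
    rng = random.Random(seed)
    onsets, t = [], 0.0
    for _ in range(iterations):
        t += rng.uniform(min_t, max_t)
        onsets.append(t)
    return onsets

onsets = uniformly_distributed_sequence(10, 1.5, 2.5, seed=1)
intervals = [b - a for a, b in zip([0.0] + onsets, onsets)]
assert all(1.5 <= isi <= 2.5 for isi in intervals)
```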
The stimulus set for this Oddball paradigm is as follows:
<stimuli order="block-random">
<stimulus name="Normal"
count="4"
intensity="T01.Intensity">
<triggers>
<trigger duration="10">
<code output="Code" value="1" />
</trigger>
</triggers>
<stimulus>
<sine Is="x" Frequency="500" Ts="300"/>
</stimulus>
</stimulus>
<stimulus name="Oddball"
count="1"
intensity="T02.Intensity">
<triggers>
<trigger duration="10">
<code output="Code" value="2" />
</trigger>
</triggers>
<stimulus>
<sine Is="x" Frequency="1000" Ts="300"/>
</stimulus>
</stimulus>
</stimuli>
Here, the T01 and T02 tests are Stimulus Presentation tests (please see below) that have previously determined the intensities for the Normal and Oddball stimuli, respectively. Not shown in this example is that triggers for the EEG amplifier are generated with a LabBench I/O, and the stimuli are auditory stimuli generated with the built-in sound card of the computer. The triggers are synchronized with the auditory stimuli by a LabBench ATRIG device, which is inserted between the sound card and the headphones. The LabBench ATRIG is an accessory to the LabBench I/O that can be connected to one of its response ports and which will generate a trigger each time a sound is played. In this case, this trigger starts the <triggers> sequence specified above. Please note that, because the sound card implements the StimulusGenerator interface, which is supported by the Evoked Potentials test, no Python code is required to implement this experimental paradigm.
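The order="block-random" stimulus ordering can be sketched like this (an illustrative model using the counts from the example above; the function is hypothetical):

```python
import random

def block_random_order(counts, blocks, seed=None):
    """Build a stimulus order where each block contains every stimulus
    repeated `count` times, shuffled within the block."""
    rng = random.Random(seed)
    order = []
    for _ in range(blocks):
        block = [name for name, count in counts.items() for _ in range(count)]
        rng.shuffle(block)
        order.extend(block)
    return order

order = block_random_order({"Normal": 4, "Oddball": 1}, blocks=3, seed=7)
assert len(order) == 15
# Every block of five stimuli contains exactly one Oddball.
assert all(order[i:i + 5].count("Oddball") == 1 for i in range(0, 15, 5))
```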
This is a new test that allows a stimulus to be manually presented to a subject. It can be used, for example, to familiarize a subject with a specific kind of stimulus, or to instruct them in how to rate these stimuli with a psychophysical rating scale.
This test can, for example, be used to manually present auditory stimuli to a subject in order to determine the sound intensities that will be perceived as not uncomfortably loud for an Oddball paradigm (see above). Below is an example of the definition of such a Stimulus Presentation test:
<psychophysics-stimulus-presentation ID="T01"
name="Normal stimulus"
stimulus-update-rate="44100"
trigger-update-rate="20000">
<properties>
<instructions default-instructions="T01"
override-results="false"/>
</properties>
<intensity type="array"
value="[Stimulator.Range * v/100 + Stimulator.Min for v in range(0, 101, 5)]" />
<responses response-collection="yes-no" />
<triggers start-triggger="response-port01">
<trigger duration="10">
<code output="Code" value="1" />
</trigger>
</triggers>
<stimulation>
<stimulus>
<sine Is="x" Frequency="500" Ts="300"/>
</stimulus>
</stimulation>
</psychophysics-stimulus-presentation>
It is now possible to use an external monitor to display rating scales to the subject. These scales can either be controlled by a LabBench SCALE device or by a 3rd party joystick.
Support has been added in the LabBench I/O driver for the LabBench PAD. The LabBench PAD is a response device consisting of up to 8 push buttons that can be used by subjects to provide responses in psychophysical research paradigms, such as Stroop, Flanker, and Stop-Signal tasks.
Support has been added for trigger response devices in the LabBench I/O for the following devices:
The purpose of these trigger devices is to enable the generation of up to 16-bit contextual triggers to EEG amplifiers and similar.
The LabBench I/O driver now supports specification of the logic convention for the INTERFACE port (positive/negative logic), the default analog output, and the expected voltage levels on the INTERFACE port.
LabBench, Rev. 3.3.0 is a minor release fully backward compatible with revision 3.2.0.
Previously, the Psychophysics Threshold Estimation test (<psychophysics-threshold-estimation>
), used to estimate psychometric functions with adaptive methods, would only plot the responses to the stimuli and the estimated threshold. This visualization meant that when the Psi Method was used, it was not possible to assess whether its estimation of the psychometric function was converging. The visualization of the Psi Method has been improved, so it displays the confidence interval of the alpha parameter of the psychometric functions in the plot of responses to stimuli. Furthermore, a plot has been added to display the estimated psychometric functions.
With this new visualization, an experimenter can assess if the estimation converges. The algorithm converges if confidence intervals progressively narrow as more stimuli are presented. A confidence-level parameter for each stimulus channel can specify the confidence interval for the alpha parameter. A default value of 0.95 will be used if no confidence-level parameter is specified.
The confidence intervals for the alpha and beta parameters of the psychometric function are now also stored and exported as part of the data set for an experiment.
Catch trials have been implemented in the Psychophysics Threshold Estimation test (<psychophysics-threshold-estimation>
), which can be specified per channel with the following channel configuration:
<channel ID="C01"
channel-type="single-sample"
trigger="1"
channel="3"
name="Sine (1000Hz)"
Imax="Imult * TA1['C01'] if Imult * TA1['C01'] < 40.0 else 40.0">
<catch-trials order="block-randomized"
interval="5" />
...
</channel>
Catch trials are inserted into the estimation based on the order parameter: with deterministic, a catch trial is inserted after every interval stimuli; with block-randomized, a catch trial is inserted in each block of interval stimuli, where its position within the block is random; and with randomized, catch trials are inserted randomly with a probability of 1/interval.
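The three strategies can be sketched as follows (an illustrative model; the function and its return convention are hypothetical):

```python
import random

def insert_catch_trials(n_stimuli, interval, order, seed=None):
    """Return the 0-based positions, in a stream of n_stimuli regular
    stimuli, after which a catch trial is inserted."""
    rng = random.Random(seed)
    if order == "deterministic":
        # one catch trial after every `interval` stimuli
        return list(range(interval - 1, n_stimuli, interval))
    if order == "block-randomized":
        # one catch trial at a random position within each block
        return [rng.randrange(b, b + interval)
                for b in range(0, n_stimuli, interval)]
    if order == "randomized":
        # each stimulus is followed by a catch trial with probability 1/interval
        return [i for i in range(n_stimuli) if rng.random() < 1 / interval]
    raise ValueError(order)

assert insert_catch_trials(15, 5, "deterministic") == [4, 9, 14]
positions = insert_catch_trials(15, 5, "block-randomized", seed=3)
assert len(positions) == 3
assert all(b <= p < b + 5 for b, p in zip((0, 5, 10), positions))
```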
Calibration of auditory stimuli has been implemented. In the LabBench data directory, there is now a directory, calibration, in which calibration data can be supplied to LabBench.
To calibrate auditory stimuli/sound cards, a file soundcard.xml
must be placed in the calibration directory. This file must adhere to the following format:
<?xml version="1.0" encoding="utf-8" ?>
<sound-calibration format="dBFS-table">
<left>
dBFS,500,630,800,1000,1250,1500,2000
0,110.4,112.3,115,112.8,117.4,115.3,116.3
-5,105.4,107.2,109.9,107.7,112.5,110.4,111.5
...
-95,13.5,16.3,20.1,17.5,22.9,20.6,22.1
-100,14.6,16.3,20.1,17.6,22.9,20.9,22.3
</left>
<right>
dBFS,500,630,800,1000,1250,1500,2000
0,112.5,113.8,116.3,114.5,118.8,117,117.9
-5,107.1,108.4,111.1,109.2,113.7,111.7,112.7
...
-95,16.8,17.6,21.3,19.1,24.4,21.7,23.7
-100,16.6,17.6,21.1,19,24.4,22.2,23.3
</right>
</sound-calibration>
This file provides the calibration data for the left and right channels of the sound card, and it consists of lookup tables that can provide the dBFS that will result in a given sound pressure. The first column in the lookup table consists of the dBFS values for which sound pressures have been measured, and the first row consists of the frequencies for which these sound pressures have been measured.
When LabBench needs to generate a pure tone with a given sound pressure, it will first find the column corresponding to the pure tone's frequency. If this frequency is absent from the calibration data, it will linearly interpolate the column from the two nearest frequency columns. When the frequency column is found or interpolated, it will look up the two sound pressures in the column nearest to the requested sound pressure and use linear interpolation to find the dBFS that will generate the requested sound pressure.
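The two-step lookup can be sketched as follows (illustrative; a real calibration table has more rows and columns, and LabBench's implementation may differ):

```python
def interp(x, x0, x1, y0, y1):
    """Linear interpolation of y at x between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def dbfs_for_spl(table, freqs, frequency, target_spl):
    """table: list of (dBFS, [SPL per frequency column]) rows, dBFS descending.
    Returns the dBFS that produces target_spl at the given frequency."""
    # Step 1: find (or interpolate) the SPL column for the frequency.
    if frequency in freqs:
        col = freqs.index(frequency)
        spls = [(dbfs, row[col]) for dbfs, row in table]
    else:
        hi = next(i for i, f in enumerate(freqs) if f > frequency)
        lo = hi - 1
        spls = [(dbfs, interp(frequency, freqs[lo], freqs[hi], row[lo], row[hi]))
                for dbfs, row in table]
    # Step 2: bracket target_spl between two rows and interpolate the dBFS.
    for (d1, s1), (d0, s0) in zip(spls, spls[1:]):
        if s0 <= target_spl <= s1:
            return interp(target_spl, s0, s1, d0, d1)
    raise ValueError("target SPL outside calibrated range")

freqs = [500, 1000]
table = [(0, [110.0, 112.0]), (-5, [105.0, 107.0]), (-10, [100.0, 102.0])]
assert dbfs_for_spl(table, freqs, 500, 105.0) == -5.0
assert dbfs_for_spl(table, freqs, 750, 103.5) == -7.5  # interpolated column
```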
dBFS stands for "decibels relative to full scale" and is a unit of measurement used in digital audio to describe the level of a signal. It measures the amplitude of an audio signal relative to the maximum possible amplitude that can be represented in the digital system. In digital audio, the maximum amplitude that can be represented is typically represented by a full-scale value of 0 dBFS. Any values above this level will result in clipping, which is the distortion of the audio waveform. Negative values represent amplitudes below the maximum possible amplitude.
When measuring the level of an audio signal using dBFS, the reference point is always the maximum possible amplitude. This reference means that a signal with an amplitude of half the maximum possible level will be represented as -6 dBFS, since it is 6 dB below the maximum level. The dBFS unit is used in digital audio processing to ensure signals are not distorted or clipped.
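The underlying relationship is dBFS = 20·log10(amplitude / full scale), against which the -6 dBFS figure can be checked:

```python
import math

def dbfs(amplitude, full_scale=1.0):
    """Amplitude relative to full scale, expressed in dBFS."""
    return 20 * math.log10(amplitude / full_scale)

# Half of full scale is about -6 dBFS (more precisely, -6.02 dBFS).
assert round(dbfs(0.5), 2) == -6.02
assert dbfs(1.0) == 0.0
```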
The LabBench directory is where LabBench stores all internal data, which consists of the configuration of devices, protocol repositories, experimental data, etc. Data in this directory is not meant to be edited by the user but only through the LabBench Designer and Runner. The default location for this directory is C:\LabBench27
.
However, in this new version, the location can be changed by providing a -p [path to LabBench directory] command line argument to the LabBench Designer and Runner programs. Changing the location of the LabBench directory has two use cases:
LabBench, Rev. 3.2.0 is a minor release fully backward compatible with revision 3.1.0.
Previously, if you defined a Survey consisting of several questions with identical content, such as a Pain Catastrophizing Scale, which consists of 13 questions rated on a Likert scale, you would need to define and repeat the Likert scale for each question, meaning each question would look as below:
<content>
<likert id="I01" title="dynamic: Text['QUESTION']" instruction="dynamic: Text['I01']">
<choice value="0" label="dynamic: Text['L0']"/>
<choice value="1" label="dynamic: Text['L1']"/>
<choice value="2" label="dynamic: Text['L2']"/>
<choice value="3" label="dynamic: Text['L3']"/>
<choice value="4" label="dynamic: Text['L4']"/>
</likert>
...
</content>
This made the Survey definition very verbose. Furthermore, if you needed to change the content of the Likert scale, you would need to change its definition in each question in the Survey. In the present version of LabBench, a template mechanism has been implemented that makes it less tedious and hence easier and less error-prone to define Likert scales and similar repeated content.
With this template mechanism, the definition above can instead be written as a template and a question derived from this template:
<templates>
<likert id="pcs-question" title="dynamic: Text['QUESTION']">
<choice value="0" label="dynamic: Text['L0']"/>
<choice value="1" label="dynamic: Text['L1']"/>
<choice value="2" label="dynamic: Text['L2']"/>
<choice value="3" label="dynamic: Text['L3']"/>
<choice value="4" label="dynamic: Text['L4']"/>
</likert>
...
</templates>
<content>
<likert id="I01" template="pcs-question" instruction="dynamic: Text['I01']" />
...
</content>
Now the question definition consists of a single line that references the template in the <templates>
element.
File assets can now be localized instead of needing to implement localization in a backing script for a definition. This means that scripts that generate text for the UI can now be simpler and easier to develop.
With this new file assets localization, a file asset can now be defined as:
<file-asset id="TEXT" file="TEXT_EN.py">
<language code="DA" file="TEXT_DA.py"/>
</file-asset>
This is an example of a backing script that creates text for the UI. In this case, the protocol defines two languages that can be chosen in the start-up wizard (EN: English, and DA: Danish). If EN (or any other language) is selected, then the file TEXT_EN.py will be loaded and used for the file asset; if DA is selected, then TEXT_DA.py will be loaded instead. You can have as many <language> elements as needed in a file asset.
It is now possible to store experiments remotely when installing them from a protocol repository. When experiments are stored remotely, they are not copied from the repository into the LabBench local storage. Instead, each time the experiment is started, the files for the experiment are downloaded from the repository.
This is intended as a convenience when developing new protocols: when an experiment is stored remotely, you do not need to uninstall and reinstall it each time you change its protocol.
However, remote storage is not intended for actual experiments.
A new device has been implemented, termed the LabBench SERVER. The LabBench Server is a web server embedded within LabBench, which can host web apps. With this server, it is possible to turn any Ethernet-connected device on the local network into a psychophysical rating device, such as a rating scale, a response button, or the kind of response interface required for questionnaires or psychophysical research paradigms.
The LabBench Server can be included in an experimental setup as:
<devices>
<server id="server">
<visual-analog-scale id="server.pain" name="Pain" length="10">
<modality value="Pain">
<localized-text language="DA" value="Smerte"/>
</modality>
<lower-anchor value="No Pain">
<localized-text language="DA" value="Ingen Smerte"/>
</lower-anchor>
<upper-anchor value="Maximal Pain">
<localized-text language="DA" value="Maksimal Smerte"/>
</upper-anchor>
</visual-analog-scale>
<visual-analog-scale id="server.itch" name="Itch" length="10">
<modality value="Itch">
<localized-text language="DA" value="Kløe"/>
</modality>
<lower-anchor value="No Itch">
<localized-text language="DA" value="Ingen Kløe"/>
</lower-anchor>
<upper-anchor value="Maximal Itch">
<localized-text language="DA" value="Maksimal Kløe"/>
</upper-anchor>
</visual-analog-scale>
</server>
</devices>
In this case, the LabBench SERVER defines two VAS scales: one for pain and one for itch. The scales are localized to English and Danish, but can be localized to as many languages as required.
A new device has been implemented that makes it possible to use generic joysticks/game controllers as response buttons.
A generic joystick can be included in an experimental setup as:
<joystick id="joystick" />
A new device has been implemented that makes it possible to use generic sound cards for hearing tests and auditory evoked potentials.
A generic sound card can be included in an experimental setup as:
<sound id="sound" calibration-data=""/>
Note: calibration data is currently not implemented, but an upcoming release is planned to make it possible to calibrate the sound card according to the ANSI S3.6 and IEC 60645-1 standards.
Below is an example of how the sound card defined in the experimental setup above can be used to determine the hearing threshold for a 1000 Hz tone of 200 ms duration:
<psychophysics-threshold-estimation ID="T2"
name="Psi Method">
<dependencies>
<dependency ID="T1"/>
</dependencies>
<update-rate-deterministic value="2000" />
<yes-no-task stimulus-update-rate="44100" />
<channels>
<channel ID="C01"
channel-type="single-sample"
trigger="1"
channel="0"
name="Sine (1000Hz)"
Imax="Imult * T1['C01'] if Imult * T1['C01'] < 1.0 else 1.0">
<psi-method number-of-trials="Trials">
<quick alpha="0.5"
beta="1"
lambda="0.02"
gamma="0.0" />
<beta type="linspace"
base="10"
x0="-1.2041"
x1="1.2041"
n="20"/>
<alpha type="linspace"
x0="alphaX0"
x1="1"
n="alphaN" />
<intensity type="linspace"
x0="alphaX0"
x1="1"
n="intensityN" />
</psi-method>
<sine Is="x"
Ts="200"
Frequency="1000"
Tdelay="0" />
</channel>
</channels>
</psychophysics-threshold-estimation>
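One plausible reading of the grid definitions in the example above, sketched in plain Python. This is an assumption about the semantics, not the LabBench implementation: `type="linspace"` is read as an evenly spaced grid from `x0` to `x1` with `n` points, and `base="10"` on the `<beta>` grid is read as raising the evenly spaced exponents to base 10 (a logarithmic grid of slopes). The variable values are placeholders.

```python
def linspace(x0, x1, n):
    """Evenly spaced grid of n points from x0 to x1, inclusive."""
    step = (x1 - x0) / (n - 1)
    return [x0 + i * step for i in range(n)]

# Placeholder values for the protocol variables used in the example.
alphaX0, alphaN, intensityN = 0.1, 40, 40

beta = [10.0 ** x for x in linspace(-1.2041, 1.2041, 20)]  # base="10": log-spaced
alpha = linspace(alphaX0, 1.0, alphaN)                     # threshold grid
intensity = linspace(alphaX0, 1.0, intensityN)             # stimulus grid
```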
A new test termed <psychophysics-manual-threshold-estimation>
has been implemented, which manually guides an experimenter through determining tactile sensitivity with tactile stimulation devices such as von Frey hairs or two-point discriminators. The test provides adaptive estimation of psychometric thresholds and functions with the Up/Down and Psi methods, respectively.
Below is an example of how this manual threshold determination test can be used to determine the two-point discrimination threshold for a subject.
<psychophysics-manual-threshold-estimation ID="TPD_PSI_FC1I2A"
name="2PD (Psi, Forced Choice (1I2A))">
<psi-algorithm number-of-trials="30"
intensities="[2,3,4,5,6,7,8,9,10,11,12,13,14,15,20,25]">
<quick alpha="0.5"
beta="1"
gamma="0.5"
lambda="0.02" />
<beta type="linspace"
base="10"
x0="-1.2041"
x1="1.2041"
n="20"/>
<!-- Change the 2.0 and 25.0 to the min/max from the intensities. Be sure to include a .0 to make it a floating point number. -->
<alpha type="linspace"
x0="2.0/25.0"
x1="1"
n="100" />
</psi-algorithm>
<one-interval-forced-choice-task alternative-a-image="TwoProngsAlong"
alternative-a="Along"
alternative-b-image="TwoProngsAcross"
alternative-b="Across"
question="What is the orientation of the two points (Along or Across the finger)?"/>
</psychophysics-manual-threshold-estimation>
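As the comment in the example notes, x0 for the alpha grid should be the minimum intensity divided by the maximum intensity, written as a floating point number. A small hypothetical helper (illustrative only, not part of LabBench) can compute this value:

```python
def alpha_x0(intensities):
    """Normalized lower bound for the alpha grid: min intensity over
    max intensity, as floats so the division is not integer division.
    Hypothetical helper, not part of LabBench."""
    return float(min(intensities)) / float(max(intensities))

# For the intensities in the example above:
x0 = alpha_x0([2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 25])
# x0 == 0.08, i.e. 2.0/25.0
```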
In this case, the test determines the threshold with a Psi method and a one-interval two-alternatives forced-choice response task.
A new test termed <psychophysics-response-recording>
has been implemented. This test makes it possible to record psychophysical ratings for a pre-specified duration and to automatically obtain statistics such as area under the curve, maximal rating, and time of maximal rating.
Below is an example of how this test can be used to record pain and itch ratings for 10 min:
<psychophysics-response-recording ID="T01"
name="Pain and Itch Recording"
duration="10 * 60"
sample-rate="5" />
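The statistics the test reports can be sketched as follows for a list of ratings. This is an illustrative sketch, not the LabBench code: `recording_statistics` is a hypothetical helper, and the trapezoidal rule is assumed for the area under the curve.

```python
def recording_statistics(samples, sample_rate):
    """Statistics for a rating recording sampled at sample_rate Hz.

    Illustrative sketch (not the LabBench implementation) using the
    trapezoidal rule for the area under the curve.
    """
    dt = 1.0 / sample_rate
    auc = sum((a + b) / 2.0 * dt for a, b in zip(samples, samples[1:]))
    peak = max(samples)
    t_peak = samples.index(peak) * dt   # time of the first maximal rating
    return auc, peak, t_peak

# At 5 Hz, a 10 * 60 s recording yields roughly 3000 samples.
```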
The logging system has been fully migrated to Serilog, which provides full semantic logging of events within LabBench:
The logging system provides three sinks of logging data:
For all sinks, it is possible to define the minimal log level that will be sent to the sink.
Support for the Datalust Seq Log Server has been implemented. The Datalust Seq Log Server is a centralized repository that collects and stores log messages generated by multiple LabBench installations. The log messages can be used to track and monitor all experiments that are ongoing in a research center and can also be used to diagnose and debug issues.
Advantages of using the Datalust Seq Log Server:
LabBench, Rev. 3.1.0 is a minor release that is fully backwards compatible with revision 3.0.0. The focus of the release has been to implement functionality related to post-session actions, and bugfixes to revision 3.0.0.
Functionality for adding, configuring and rerunning post-session actions has been added to the LabBench Designer. In the Post Session Actions section in the Experiment tab it is now possible to:
To implement this functionality, a new file format has been defined, termed Action Definition Files. Currently, two types of actions exist:
Below is an example of a definition file for the <export-to-csv>
action:
<?xml version="1.0" encoding="UTF-8"?>
<export-to-csv xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://labbench.io http://labbench.io/xsd/3.1.0/csvaction.xsd"
name="Exporting session to CSV"
location="C:\CPAR"
header="true"
seperator=";"
filename="dynamic: '{session}-{time}.csv'.format(session = SESSION_NAME, time = SESSION_TIME)">
<item name="PDT"
value="PDT.PDT"
default="NA"/>
<item name="Operator"
value="PDT.Operator"
default="NA"/>
<item name="RecordingTime"
value="PDT.RecordingTime"
default="NA"/>
</export-to-csv>
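The filename attribute above holds a Python expression (marked by the dynamic: prefix). Assuming SESSION_NAME and SESSION_TIME are strings supplied by LabBench (the values below are purely illustrative), it evaluates as in this sketch:

```python
# Hypothetical session values; LabBench supplies the real
# SESSION_NAME and SESSION_TIME at run time.
SESSION_NAME = "S001"
SESSION_TIME = "20240115-1030"

filename = '{session}-{time}.csv'.format(session=SESSION_NAME,
                                         time=SESSION_TIME)
# filename == "S001-20240115-1030.csv"
```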
A post-session action has been implemented for exporting to either JSON or MATLAB format:
<?xml version="1.0" encoding="UTF-8"?>
<export-data xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://labbench.io http://labbench.io/xsd/3.1.0/export_action.xsd"
name="Exporting session to MATLAB"
location="C:\CPAR"
filename="dynamic: '{session}-{time}.mat'.format(session = SESSION_NAME, time = SESSION_TIME)"
format="matlab"/>
LabBench, Rev. 3.0.0 is a major release that introduces a new UI and a dedicated program for configuring LabBench that replaces the Command Line Interface (CLI).
The Command Line Interface has been replaced by the LabBench Designer program, which provides a GUI for all the configuration tasks that were previously performed by commands on the command line.
The user interface has been updated to Windows Presentation Foundation.
The license system has been completely reworked. With the old license system, the license was tied to specific hardware devices, meaning that you required a license for each hardware device you needed to use with LabBench. Consequently, if you needed to replace, for example, an NI DAQmx card, you could not do so unless a license had been made for the new card.
This has now been changed so the licenses are tied to tests in LabBench. Currently, there are three sets of tests:
This means that licenses are no longer tied to hardware, which can therefore be changed arbitrarily. A license can be used on one computer at a time; however, it can be moved to a new computer as many times as required.
The DAQmx driver has been completely rewritten. It no longer uses the National Instruments .NET drivers, but instead the low-level C API. Tests show that this greatly reduces the problems with DLLs not being the correct version.
With the old implementation, you initially needed to install exactly the same version of the NI DAQmx drivers as was used to build LabBench. If you used a different version, LabBench would complain that the DAQmx driver DLLs were not the correct version.
With the new implementation using the low-level C API, it is possible to use a newer version of the NI DAQmx drivers than was used for building LabBench, as the NI DAQmx C API is backwards compatible.
It is now possible to include what has been termed test annotations. Test annotations are additional data that can help in analysing the results of a study, and they can be specified in the properties of a test:
<properties>
<annotations>
<bool name="boolean" value="true"/>
<number name="number" value="223.2"/>
<string name="string" value="Hello, World!"/>
<numbers name="list">
<number value="1"/>
<number value="2"/>
<number value="3"/>
</numbers>
</annotations>
</properties>
Test annotations can for example be used to specify the stimulus durations used in a Strength-Duration test.
Protocol repositories have been refactored such that each protocol also contains a template for an experiment definition file (*.expx). With this template, it is possible to create an experiment from the LabBench Designer by:
After the experiment has been created, Subject ID validation can be set up from its configuration page.
LabBench has been migrated from the venerable .NET Framework platform to the modern .NET platform. In time, this will enable LabBench to run on Mac and Linux computers.
LabBench, Rev. 2.7.9 is a legacy release that is maintained for compatibility with the CPAR toolbox for the Nocitech CPAR device.
Our goal is to create open and novel research devices for neuroscience.
Inventors' Way ApS
Niels Jernes Vej 10
DK9220 Aalborg, Denmark
CVR.NR. 37596108
Copyright© Inventors' Way ApS