Downloads

Revision 5.0

LabBench, Rev. 5 is a major release that is incompatible with LabBench 4, which means that protocols written for LabBench 4 cannot be run unmodified by LabBench 5.

The reason for this decision was a set of inconsistencies in the LabBench Language in which protocols are written. These have now been corrected, resulting in a consistent language with which it is easier to write protocols for scientific studies. The changes to the language are partly stylistic and conceptual, and partly practical. Certain tests with confusing names have been renamed to names that better explain which experimental procedures they implement, and constructs have been added to the language that make it faster and less verbose to write protocols. The template system for generation of tests has also been greatly expanded and made more consistent. This template system makes it possible to write complex multi-session protocols with significantly less code. LabBench 4 protocols can, with minor modifications, be made compatible with LabBench 5.

A second significant change is to the license model. Previously, LabBench was divided into modules that had to be licensed individually. In LabBench 5, there are no modules; once you have licensed LabBench, you have access to the entire program and all of its capabilities.

However, the change in the license system means that license codes for LabBench 4 will not work with LabBench 5. If you have a LabBench 4 license, please write to help@labbench.io to have your license code converted. For LabBench 4 licensees, this is a free upgrade.

List of changes

Major Updates

  • Categorical structuring of protocols in repositories is now possible
  • A 2D graphics engine (based on the Skia graphics engine) and an accompanying toolkit have been included in LabBench and can be used to programmatically generate visual stimuli, test instructions, and subject instructions. Programmatic generation of test instructions can be used to implement double-blinded RCT studies.
  • Fiducial markers on images displayed by the ImageDisplay instrument can now be generated automatically by passing true as the fiducial parameter of the Display(image, time, fiducial)/Display(image, fiducial) functions (see the sketch after this list).
  • The <survey> test has been renamed to <questionnaire>
  • The legacy question template generation system for <questionnaire> tests has been removed. Template generation for <questionnaire> tests now works identically to template generation for all other tests.
  • It is now possible to display progress in the title of questions in the <questionnaire> test.
  • It is now possible to use Assets embedded in zip files for test instructions.
  • The cue image attribute has been moved from the alternatives to the task in the <alternative-forced-choice-task> response task
  • Script variable with the ID of the experimental setup
  • Possibility of using labels instead of intensities in the <manual-threshold-estimation> test
  • Possibility of showing a custom image for catch trials in the <manual-threshold-estimation> test
  • Experimental setup variants
  • Possibility of configuring LabBench Display devices in the LabBench Designer
  • Possibility of configuring Joystick devices in the LabBench Designer
  • Body Map questions in <questionnaire> tests
  • Changed functionality of Ordinal, Interval, and Ratio scale response tasks
  • Generation of stimulus order with Python functions in <stimulus-sequence> tests
  • Caching of online repositories
  • Improved error information in python scripts
  • Shared LabBench data no longer requires admin access
  • Randomization parameters
  • Removed legacy naming of test instructions
  • Updated license model
  • New location for the LabBench Protocol repository
  • New Forced Yes/No response task added to the threshold estimation test.
  • Data directory for LabBench is now named LabBench5, meaning LabBench 4 and 5 can be used on the same computer without conflicts
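
Below is a minimal sketch of the new fiducial parameter, modeled on the image sequence API shown under Revision 4.1; the task and image names are hypothetical:

def Stimulate(tc, x):
    display = tc.Devices.ImageDisplay

    # Passing true as the fiducial parameter asks the ImageDisplay to
    # auto-generate a fiducial marker on the displayed image.
    display.Run(display.Sequence(tc.Task)
                .Display(tc.Images.Target, tc.StimulusTime, True))

    return True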

Minor Updates

  • Enhancement: Protocol loading times have been significantly improved by caching resources and only initializing tests that are used for the current session in multi-session experiments, leading to a 20x improvement in loading time for the largest LabBench protocols.
  • Enhancement: Obsolete and unused tactile trigger device removed
  • Enhancement: It is now possible to access threshold estimates from the <threshold-estimation> tests with index notation, meaning the threshold of the channel with ID = CH01 in the test with ID = T01 can be accessed as T01.CH01 in Python scripts
  • Enhancement: Improved initialization time of questions in Questionnaires
  • Enhancement: Log system available from the (tc) parameter given to Python functions
  • Enhancement: Display of loading information in the startup wizard of LabBench Runner
  • Enhancement: Removed obsolete gain and sensitivity for trigger devices.
  • Enhancement: LabBench Designer will now show information on incompatible protocols.
  • Enhancement: LabBench Designer will now display information about REMOTE vs LOCAL experiments
  • Enhancement: LabBench Designer will now display the ID of created experiments
  • Enhancement: Obsolete ResponseIndicator instrument removed.
  • Enhancement: Naming of enum values in stop-mode for pressure algometry tests has been changed to follow naming conventions for the LabBench Language
  • Enhancement: Cleanup of script variables to follow naming conventions for the LabBench Language
  • Enhancement: Made it possible to access instruments as tc.Instruments from Python scripts
  • Enhancement: Improved display of errors in LabBench Designer and Startup Wizards.
  • Enhancement: Better error information on missing Device to Instrument assignment for the <device-mapping> in experimental setups
  • Enhancement: Consistent naming of instruments
  • Enhancement: Wildcard assignment of devices to instruments, meaning a device can be assigned to all instruments of a given name in the <device-mapping> in experimental setups.
  • Enhancement: For Python functions and single-line statements, the result of the currently running test has been renamed from C to Current
  • Enhancement: Reduced internal data size, making it possible to include larger protocol assets such as images and videos.
  • Enhancement: Only show transformed intensity in manual threshold estimation tests to reduce risk of using an incorrect intensity.
  • Bugfix: Exceptions in custom stimulations no longer crash the program.
  • Bugfix: Exceptions in test events will now abort the test
  • Bugfix: A wrong asset ID in test instructions will no longer crash the program
  • Bugfix: When using LabBench I/O to generate triggers in combination with an external Stimulus instrument such as a sound card, the trigger will now be generated when the Stimulator is started.
  • Bugfix: It is now possible to configure the question of the <yes-no-task> in templates for <psychophysics-manual-threshold-estimation> tests.

Revision 4.7

LabBench, Rev. 4.7 is a minor release that is compatible with LabBench 4.6.

List of changes

Major Updates

  • Trigger codes are now calculated parameters
  • Dot notation for accessing protocol assets
  • Randomization of foreach loops
  • If constructs in template generation

Minor Updates

  • Enhancement: Subject instructions are now calculated parameters
  • Enhancement: Operator instructions are now calculated parameters
  • Enhancement: Operator instructions can now be images

Revision 4.6

LabBench, Rev. 4.6 is a minor release that is compatible with LabBench 4.5.

List of changes

Major Updates

  • Time constraints
  • Improved scripting access to survey results
  • Random Toolkit

Minor Updates

  • Bugfix: Increased timeout for LabBench I/O communication.

Major Changes

Time constraints

It is now possible to place time constraints on tests, in the form of a <time-constraint> element:

<time-constraint 
    test-id="TestID" 
    min="60"
    max="120"
    notification="true"
    time-reference="end"/>

This time constraint enforces that the test can only be started between min and max seconds after the test with test ID test-id has either been started or completed. If only min is specified, the test can only be started after min seconds have passed, but after that it can be started at any time. If only max is specified, the test must be started within max seconds, and after that it cannot be started. Whether the constraint is referenced to the start or the completion of the test identified by test-id is determined by the time-reference attribute. The notification attribute enables a beep that will occur when it becomes possible to start the time-constrained test.

Improved scripting access to survey results

Previously, answers to survey questions had to be accessed with an index notation in the form of TestID['QuestionID']. With LabBench 4.6, answers can now be accessed as: TestID.QuestionID.
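
As a minimal sketch, assuming a survey test with ID Q1 containing a question with ID PAIN (both IDs hypothetical):

answer = Q1['PAIN']  # index notation, as in LabBench 4.5 and earlier
answer = Q1.PAIN     # LabBench 4.6 dot notation, returning the same answer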

Random Toolkit

A Random Toolkit has been added to the Test Context (tc), as tc.Random. Currently, two functions are available:

  1. tc.Random.Permutate(length): returns a permuted array with indexes from 0 to length - 1.
  2. tc.Random.LatinShuffle(blockNo, length): returns row blockNo from a Latin square, where length is the length of the rows.
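
A minimal usage sketch from within a Python function that receives the Test Context; the lengths and block number are hypothetical:

def StimulusOrder(tc):
    # Present 8 stimuli in random order: a permutation of the indexes 0..7.
    order = tc.Random.Permutate(8)

    # Counterbalance conditions across blocks with row 3 of an 8x8 Latin square.
    balanced = tc.Random.LatinShuffle(3, 8)

    return order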

Revision 4.5

LabBench, Rev. 4.5 is a minor release that is compatible with LabBench 4.4.

List of changes

Major Updates

  • Test Templates
  • Protocols can be structured into sessions
  • Response Tasks for the Threshold Estimation Test
  • Subject Instructions

Minor Updates

  • Enhancement: Faster startup time for LabBench Designer and Runner
  • Enhancement: Improved validation of LabBench protocols
  • Enhancement: Improved Add Device dialog
  • Enhancement: Improved Configure Experimental Setup Dialog
  • Enhancement: Simplified configuration of LabBench Display scales

Patches

4.5.1

  • Bugfix: Choices in Likert Questions were not generated when their source was a LikertQuestionTemplate.

Major Changes

Test Templates

Previously, if the same test was needed multiple times in a protocol, it would have to be duplicated each time. This resulted in very long and verbose protocols that, in some cases, could reach thousands of lines of code.

A system for creating tests from test templates has been implemented. Below is the definition of a protocol that contains four sessions (SCREENING, SES01, SES02, SES03), each containing the same number of tests. Each session starts with a configuration, followed by the application of a Pruritogen and three assessments of the evoked Itch.

<tests>
    <foreach variable="session" in="sessions">
        <meta-survey-constructor 
            ID="var: '{id}'.format(id = SessionID)"
            name="var: '{name}: Configuration'.format(name = SessionName)"
            session="var: '{id}'.format(id = SessionID)"
            template="configuration">
            <variables>
                <string value="var: session.ID" name="SessionID" />
                <string value="var: session.Name" name="SessionName" />
                <string value="var: session.Dependency" name="Dependency" />                                                
            </variables>
        </meta-survey-constructor>

        <meta-survey-constructor 
            ID="var: '{id}APPLICATION'.format(id = SessionID)" 
            name="var: '{name}: Application'.format(name = SessionName)"
            session="var: '{id}'.format(id = SessionID)"
            template="application">
            <variables>
                <string value="var: session.ID" name="SessionID" />
                <string value="var: session.Name" name="SessionName" />                        
            </variables>
        </meta-survey-constructor>

        <sequence type="random" offset="SubjectNumber">
            <foreach variable="m" in="measurements">
                <meta-survey-constructor 
                    ID="var: '{sid}{id}'.format(sid = SessionID, id = m.ID)" 
                    name="var: '{sid}: {name}'.format(sid = SessionName, name = m.Name)" 
                    session="var: '{id}'.format(id = SessionID)"
                    template="nrsRating">
                    <variables>
                        <string value="var: session.ID" name="SessionID" />
                        <string value="var: session.Name" name="SessionName" />                        
                        <string value="var: m.Instruction" name="Instruction" />
                    </variables>
                </meta-survey-constructor>
            </foreach>
        </sequence>
    </foreach>
</tests>

With the new test template system, this can be done with 41 lines of code, whereas, previously, the same protocol took ~1500 lines of code to define. As shown in the example, this test templating system also allows for the generation of a series of tests with <foreach> loop elements and randomizations of tests with <sequence> elements.

Protocols can be structured into sessions

It is now possible to structure protocols into sessions:

<sessions>
    <session ID="SCREENING" name="Screening" />
    <session ID="SES01" name="Session 1" />
    <session ID="SES02" name="Session 2" />
</sessions>

For each test in the protocol, you can then specify which session it belongs to. When sessions are defined, the startup wizard will ask which session is currently being performed, and LabBench Runner will then only show tests belonging to that session in the protocol view.

The session definition makes long protocols with multiple sessions much simpler for the operator to perform, as they will only see the tests relevant to the current session instead of all the tests in the protocol.

Response Tasks for the Threshold Estimation Test

Additional response tasks have been implemented for the threshold estimation test:

  1. N-Alternative Forced Choice Response Task: In this response task, one out of N alternative stimuli is presented, and the subject is then asked which of the alternatives was presented. If they can discriminate between the alternatives, they will answer correctly with a probability equal to one minus the lapse rate; if they cannot, they will have a 1/N probability of answering correctly (see the sketch after this list). This type of response task could, for example, be used to implement a Just Noticeable Difference (JND) test of sound stimuli in which three (3) tones are presented, and the subject has to indicate which of the tones is louder than the others (first, middle, or last). The estimation algorithm will then find the Psychometric Function for the sound intensity at which they can discriminate between the tones.
  2. N-Interval Forced Choice Response Task: In this response task, a series of visual cues (1, 2, ..., N) is presented, and for one of them the stimulus is simultaneously given to the subject. Once all the cues have been presented, the subject is asked at which of the cues they felt the stimulus. If they can feel the stimulus, they will answer correctly with a probability equal to one minus the lapse rate. If they cannot feel the stimulus, they will have a 1/N probability of answering correctly.
  3. Categorical Response Task: In this response task, the subject rates the stimuli on a categorical rating scale, and the algorithm returns false as long as the rating is lower than the target category plus one. Once the rating has been higher than the target category plus one, it returns true until the rating is equal to or lower than the target category minus one. This response task can be used to find supra-maximal thresholds and is designed to be used with the Up/Down estimation algorithm only.
  4. Manual Categorical Response Task: The manual categorical response task is the same as the categorical response task, with the difference that the operator asks the subject for the rating verbally and then manually enters it into the algorithm. Consequently, this response task is purely verbal and does not require a physical categorical rating scale.
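
The guessing behavior described in the two forced-choice tasks above can be summarized in a few lines of Python (a sketch of the underlying logic only, not LabBench code):

def p_correct(perceived, n, lapse_rate):
    # A subject who perceives the stimulus answers correctly unless they lapse;
    # otherwise they guess among the N alternatives or intervals.
    if perceived:
        return 1.0 - lapse_rate
    return 1.0 / n

print(p_correct(True, 3, 0.02))   # 0.98 for a subject who can discriminate
print(p_correct(False, 3, 0.02))  # 0.33... (chance level with N = 3)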

Subject Instructions

It is now possible to add subject instructions to any test in a protocol. This is done by defining a test property:

<properties>
    <subject-instructions 
        experimental-setup-id="image"
        default="AlloknesisInstructionVAS" />
</properties>

This <subject-instructions> test property specifies an image to display to the subject. It requires that an instrument named SubjectInstructions of type ImageDisplay is assigned to the test in the device-mapping section of the experimental setup.

<device-assignment 
    device-id="display.image" 
    test-type="meta-survey" 
    instrument-name="SubjectInstructions" />

Revision 4.4

LabBench, Rev. 4.4 is a minor release that is compatible with LabBench 4.3.

List of changes

Major Updates

  • Invalidation of dependent data

Minor Updates

  • Enhancement: End times for tests are now added to Results
  • Bugfix: Override results for test instructions did not work.
  • Bugfix: RecordingTime not set when Surveys are restarted.

Major Changes

Invalidation of dependent data

It is now possible to invalidate the results of dependent tests, meaning that if a test is rerun, the results of tests that depend on it can be discarded. This is relevant, for example, in cuff pressure algometry protocols for tests such as Temporal Summation or Conditioned Pain Modulation, which depend on the Pain Tolerance Threshold determined by Stimulus Response tests. If these tests are rerun and a new Pain Tolerance Threshold is determined, that will invalidate the results of the Temporal Summation and Conditioned Pain Modulation tests.

An example of such a dependency is provided below:

<dependencies>
    <dependency ID="SR2" virtual="false" />
</dependencies>

Whether or not the result of a dependent test is discarded is controlled by the virtual attribute on the dependency element. If this attribute is set to false, the result of the test will be discarded if the SR2 test is rerun. If this is not the intended behavior, the dependency can be declared as virtual by setting the virtual attribute to true. If not specified, the virtual attribute defaults to false.

Revision 4.3

LabBench, Rev. 4.3 is a minor release that is compatible with LabBench 4.2.

List of changes

Major Updates

  • Results of the response recording test are written to the session log
  • Reworked user access permissions
  • Foreach construct for PDF export actions

Minor Updates

  • Enhancement: LabBench I/O Communication Library upgraded to Rev. 2.0.2
  • Enhancement: Running time of CSV export action has been improved.
  • Enhancement: Test IDs are now written to session logs in addition to test names.
  • Bugfix: Restarting a test no longer causes the iteration property to be increased by one.
  • Bugfix: The Save button state is fixed; previously, if an experiment was selected too quickly after opening LabBench Designer, the button would be disabled.

Patches

4.3.1

  • Bugfix: PDF export could crash the program if an invalid color was used.

Major Changes

Results of the response recording test are written to the session log

The following results of the response recording test are now written to the session log:

  • Peak Response
  • Time of Peak Response
  • Area Under the Curve (AUC)
  • Response Duration

Reworked user access permissions

User access permissions have been reworked to allow Operators access to the device page. This allows them to resolve a serial port name change without having to request help from a Principal Investigator or Administrator.

Foreach construct for PDF export actions

A new foreach construct has been added to the PDF export session actions, which allows for iterating over a collection of items. This significantly reduces the coding effort and code size of PDF export actions.

<foreach variable="result" in="Results">
    <cell><text 
        style="tblcell" 
        value="dynamic: result.ID"/>
    </cell>
    <cell><text 
        style="tblcell" 
        value="dynamic: result.RecordingTime.ToString('yyyy-MM-dd') if result.Completed else 'MISSING'"/>
    </cell>
    <cell><text 
        style="tblcell" 
        value="dynamic: result.RecordingTime.ToString('HH:mm') if result.Completed else 'MISSING'"/>
    </cell>
    <cell><text 
        style="tblcell" 
        value="dynamic: result.RecordingEndTime.ToString('HH:mm') if result.Completed else 'MISSING'"/>
    </cell>
    <cell><text style="tblcell" value="dynamic: 'YES' if result.Completed else 'NO'"/></cell>
    <cell><text 
        style="tblcell" 
        value="dynamic: ('YES' if result.Iteration > 1 else 'NO') if result.Completed else 'MISSING'"/>
    </cell>
</foreach>

Currently, this construct is available for tables.

Revision 4.2

LabBench, Rev. 4.2 is a minor release that is compatible with LabBench 4.1.

List of changes

Major Updates

  • Sessions Log
  • Conditional rerunning of tests
  • Copy Post Session Action
  • Export Session Log Post Session Action

Minor Updates

  • Enhancement: Updated build system for XSD schema with embedded documentation annotations
  • Enhancement: Removed the required attribute from instrument specifications
  • Bugfix: Fixed background color for the Image Display sub-instrument in LabBench DISPLAY
  • Bugfix: Fixed double key presses in the LabBench PAD instrument driver.

Major Changes

Sessions Log

Log messages that occur during sessions are now saved to a session log.

Conditional rerunning of tests

It is now possible to require the operator to write a log message before rerunning a test is allowed. The message displayed to the operator is configurable in the protocol.

Copy Post Session Action

A copy post-session action has been implemented. This post-session action allows files generated by other post-session actions to be copied to other folders for replication.

Export Session Log Post Session Action

An export session log post-session action has been implemented. This post-session action will export the session log as a PDF file.

Revision 4.1

LabBench, Rev. 4.1 is a minor release that is compatible with LabBench 4.0.

List of changes

Major Updates

  • Psychophysics Toolkit
  • Waveforms Toolkit
  • PDF Export Post-Session Action
  • Script Export Post-Session Action
  • Use of ScottPlot from scripts
  • Display of sequences of images
  • Pauses in stimulation patterns in the Evoked Potentials Test

Minor Updates

  • Enhancement Use of ScottPlot from scripts
  • Bugfix The Completed test event was run after the data was saved to disk. Now it is run before.

Patches

4.1.1

  • Enhancement Reaction times measured by internal stopwatch for Joysticks.
  • Enhancement Threshold and gain are only set if they are specified in the protocol file.
  • Bugfix Updated Python interface for Ratio and Interval Scales so they can be used from Python backing scripts.

Major Changes

Psychophysics Toolkit

Toolkits are a new concept in LabBench intended to provide easy access to LabBench functionality from Python scripts embedded in protocols.

The psychophysics toolkit provides access to Psychometric Functions and Adaptive Methods for estimating these functions. It is accessed as a subcomponent of the Test Context passed into all callable functions as the 'tc' parameter.

Here is an example of how this toolkit is used to create a Psi Method algorithm for estimating the stop-signal delay in a stop-signal task:

self.method = tc.Create(tc.Psychophysics.PsiMethod()
                        .NumberOfTrials(tc.Trials)
                        .Function(tc.Psychophysics.Functions.Quick(Beta=1, Lambda=0.02, Gamma=0))
                        .Alpha(X0=tc.AlphaX0,X1=1.0,N = tc.AlphaN)
                        .Beta(X0=tc.BetaX0,X1=tc.BetaX1,N = tc.BetaN)
                        .Intensity(X0 = tc.IntensityX0,X1 = 1.0,N = tc.IntensityN))

Waveforms Toolkit

The Waveforms Toolkit provides access for creating waveforms from the LabBench Waveforms library. It is accessed as tc.Waveforms.

PDF Export Post-Session Action

The PDF Export Post-Session Action allows you to create PDF files from the data recorded in a session.

Script Export Post-Session Action

The Script Export Post-Session Action allows you to run a single script. These actions allow you to create figures from the data recorded in a session and save them to disk.

Use of ScottPlot from scripts

The ScottPlot library is now preloaded into the Python scripting environment, so it is possible to create ScottPlot plots from Python scripts.

Display of sequences of images

Previously, the Image Display could only display static images indefinitely or for a specified period. This limitation made it challenging to implement the sequences of images that are required, for example, in a Stop Signal Task.

The Image Display has been updated to display sequences of images, where each step in the sequence can either be a static image or a function that is called. By calling a function, it is possible to perform additional steps at that point in the sequence, such as collecting responses and, based on the response, displaying different images.

Below is an example of the implementation of Go and Stop signals in a stop signal task:

def Stimulate(tc, x):   
    display = tc.Devices.ImageDisplay
    
    if tc.StimulusName == "STOP":
        display.Run(display.Sequence(tc.StopTask)
                    .Display(tc.Images.FixationCross, tc.FixationDelay)
                    .Run(Go)
                    .Run(Stop)
                    .Display(tc.Images.FixationCross, tc.FeedbackDelay)
                    .Run(Feedback))
        
    elif tc.StimulusName == "GO":
        display.Run(display.Sequence(tc.GoTask)
                    .Display(tc.Images.FixationCross, tc.FixationDelay)
                    .Run(Go)
                    .Display(tc.Images.FixationCross, tc.FeedbackDelay)
                    .Run(Feedback))
    else:
        Log.Error("Unknown stimulus: {name}".format(name = tc.StimulusName))

    return True

Pauses in stimulation patterns in the Evoked Potentials Test

The evoked potentials test was initially intended to provide the capability to generate stimuli for electrical, auditory, and pressure-evoked potentials. It allows for the presentation of stimulation patterns and stimuli for which the stimulation pattern can be determined when the test is started.

However, it has proved versatile and has been used to implement Psychophysiological Research Paradigms effectively, for which it was not initially intended. Examples are Flanker Tasks, Stroop Tasks, Go/NoGo Tasks, Stop Signal Tasks, etc.

However, as the stimulation pattern was calculated when the test started, it was impossible to insert the pauses that are often required in these tasks.

For this purpose, a pause attribute has been implemented on stimulation slots in stimulation patterns. If this attribute is set to true, the timing engine of the evoked potentials test will pause until the operator presses a continue button, which is displayed in the UI of the test whenever the attribute is present in a stimulation pattern.

Revision 4.0

LabBench, Rev. 4.0 is a major release that is incompatible with LabBench 3.x. A major release was required as the focus of the release was to remove inconsistencies in the LabBench Language, which resulted in changes to the format of the Protocol Definition Files that are not backward compatible.

List of changes

Major Updates

  • Merging of experiment and protocol definition files
  • Version check of protocols
  • Universal and consistent specification of stimuli and trigger sequences
  • Custom stimulations
  • Test events
  • Questionnaires
  • Evoked Potentials Tests
  • Stimulus Presentation Tests
  • Psychophysical rating scales on external monitors
  • Support for LabBench PAD response devices
  • Support for LabBench ATRIG, VTRIG, and TTRIG response devices
  • Support for negative logic, default analog output, and expected INTERFACE cable

Minor Updates

  • Enhancement Simplification of specification of adaptive algorithms in the Threshold Estimation test
  • Enhancement Support for question templates in the Survey test.
  • Enhancement Stimuli can now be decorated with a window that is applied to the output of nested stimuli.
  • Enhancement The Nocitech CPAR device now implements the RatioScale and CompositeScale interfaces.
  • Enhancement For the Up/Down adaptive algorithm, intensities are now specified in absolute units.
  • Enhancement External triggering of National Instruments DAQmx cards.
  • Enhancement External monitors can now be used to display visual stimuli, psychophysical ratings scales, and questionnaires.
  • Enhancement Export of whole data sets as defined by a post-session action
  • Enhancement Timing source can now be configured for LabBench RESPONSE devices
  • Enhancement Startup time for the LabBench Designer has been improved by deferring initialization of protocols and experiments until they are requested.
  • Bugfix Clicking on protocol labels in the Protocol Repository view in LabBench Designer will now select the protocol.
  • Bugfix License system now works for multiple domain users.

Patches

4.0.1

  • Bugfix Bug that prevented a protocol from starting if a RESPONSE PORT was left unconnected.

4.0.2

  • Bugfix Bug that caused the LabBench Runner to display an error screen if a LabBench I/O was not connected at startup.
  • Bugfix Bug that caused the LabBench Runner to incorrectly display a test as Completed if it was aborted programmatically by the test itself.

4.0.3

  • Bugfix Bug that caused default button maps to be used instead of unidentified button maps.

4.0.4

  • Enhancement Initialization of ID and Name from experiment repository record when installing experiments
  • Bugfix Correct initialization of rating scales in the LabBench DISPLAY device

Major Changes

Merging of experiment and protocol definition files

To simplify protocol development, the Protocol Definition File (*.prtx) is now part of the Experimental Definition File (*.expx), meaning that a protocol can be specified in a single file instead of requiring two separate files.

The new format of the Experimental Definition File (*.expx):

<?xml version="1.0" encoding="utf-8" ?>
<experiment xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://labbench.io ..\experiment.xsd">
    <description />
    <experimental-setup>
        <description />
        <devices>
        </devices>
        <device-mapping>
        </device-mapping>        
    </experimental-setup>
    <protocol>
        <!-- Content of the prtx file -->
    </protocol>
    <post-actions>
        
    </post-actions>
</experiment>

where the content of the <protocol> element is the content of the old Protocol Definition File (*.prtx).

Version check of protocols

LabBench Designer will now check whether protocols are compatible with the current version of LabBench, and will require that the repository index file (repository.xml) specifies the version of LabBench for which a protocol was originally written (with the new labbench-version attribute):

<protocol id="buttonTestPad"
          name="LabBench PAD: Test of functionality"
          labbench-version="4.0.0"
          category="LabBench PAD;Test Protocol" />

If a protocol was written for a version of LabBench that is not compatible with the current version, the protocol will not be shown in LabBench Designer.

Universal and consistent specification of stimuli and trigger sequences

Previously, the definition of stimuli and trigger sequences was implemented by each test individually. This resulted in stimulus and trigger sequence specifications being inconsistent from one test to another. In LabBench 4, this has been completely rewritten, so stimulus and trigger sequences are now implemented by a common Stimulation Engine that is used by all tests. With this change, stimuli and trigger sequences are specified in the same format regardless of which test they are specified for.

Below is an example of the new format for the specification of stimuli:

<stimulation>
    <stimulus>
        <repeated Tperiod="10"
                  Tdelay="1"
                  N="4">
            <sine Frequency="1000"
                    Is="x" Ts="4"/>
        </repeated>
    </stimulus>
</stimulation>

and of the new format for specification of trigger sequences:

<triggers start-triggger="internal">
    <combined-triggers>
        <trigger duration="1">
            <code output="Code"
                    value="64" />
        </trigger>
        <repeated-trigger Tperiod="50"
                            Tdelay="20"
                            N="4">
            <repeated-trigger Tperiod="5"
                                N="5">
                <trigger duration="1">
                    <code output="Digital"
                            value="1" />
                    <code output="Stimulus"
                            value="1" />
                </trigger>
            </repeated-trigger>
        </repeated-trigger>
    </combined-triggers>
</triggers>

In the example above, the output controls the output connector that the trigger is generated on. For the LabBench I/O:

  • Code: The trigger will be generated on the INTERFACE port on the back of the LabBench I/O
  • Digital: The trigger will be generated on the TRIG OUT port on the back of the LabBench I/O
  • Stimulus: The trigger will be generated on the STIMULATOR T port on the front of the LabBench I/O

Custom stimulations

The LabBench Language allows stimuli to be specified in XML without the need for programming. However, this approach only works when a single-modality stimulus is required, such as electrical, thermal, or similar, and when the test directly supports the interface implemented by your device.

For example, the Threshold Estimation test can be used with any stimulator that implements the StimulusGenerator interface. Devices that implement this interface include the LabBench I/O, NI DAQmx cards, and sound cards. However, if you wanted to use the Threshold Estimation test to, for example, estimate pressure pain thresholds with the LabBench CPAR device, this was previously impossible, as this device does not implement the StimulusGenerator interface.

In LabBench 4, this is now possible with the use of custom stimulations. With custom stimulations, stimuli are generated by calling a function in a Python script instead of being specified directly in XML in the Protocol Definition File. As an example, say you want to estimate the pressure required to evoke a VAS 3 rating for rectangular pressure stimuli of 1 s in duration. To do this, you can use a custom stimulus in a Threshold Estimation test:

<stimulation-scripts initialize="True"
                     stimulate="func: Script.Stimulate(tc,x)"
                     stimulus-description="Pressure"
                     stimulus-unit="kPa">
    <instrument name="Algometer"
                interface="pressure-algometer"
                required="true"/>
</stimulation-scripts>

with the following Python function:

def Stimulate(tc, x):
    # The algometer is available in tc.Devices as Algometer
    # because of the <instrument> element above.
    algometer = tc.Devices.Algometer
    chan = algometer.Channels[0]

    # Create a 1 s rectangular pressure stimulus with intensity x
    waveform = (chan
        .CreateWaveform()  # Create an empty waveform
        .Step(x, 1))       # Step pressure to x for 1 s

    chan.SetStimulus(1, waveform)
    # Connect waveform channel 0 to Pressure Outlet 1
    algometer.ConfigurePressureOutput(0, ChannelID.CH01)
    algometer.StartStimulation(AlgometerStopCriterion.STOP_CRITERION_ON_BUTTON_PRESSED, True)

    return True

Other potential uses for custom stimulations are multi-modal stimuli, where, for example, electrical and pressure stimuli are combined into simultaneous stimulations. This is not supported by default by tests such as the Threshold Estimation or Evoked Potentials tests, but becomes possible with custom stimulations.

Test events

Test events are a mechanism similar to custom stimulations that has been implemented to extend tests with functionality that lies outside their originally intended scope. Test events are scripts that are executed when a test is started, completed, or aborted, and can be defined by:

<test-events start="func: Functions.Condition(tc)"
             abort="func: Functions.Stop(tc)"
             complete="func: Functions.Stop(tc)">
    <instrument interface="pressure-algometer" 
                name="Algometer" 
                required="true"/>
</test-events>

The example above is taken from a protocol where Somatosensory Evoked Potentials (SEPs) are conditioned by a constant pressure stimulus delivered by a LabBench CPAR device. The Evoked Potentials test enables a set of stimuli to be generated according to a stimulation pattern, and is thus suited for SEPs. However, the built-in functionality of this test does not allow for a conditioning stimulus to be generated while the test is running. To enable this, test events are used, with the following scripts for the start, abort, and complete test events above:

def Condition(tc):
    algometer = tc.Devices.Algometer
    chan = algometer.Channels[0]

    chan.SetStimulus(1, chan.CreateWaveform()
                     .Step(tc.SR.PTT, 9.9 * 60))
    algometer.ConfigurePressureOutput(0, ChannelID.CH01)
    algometer.StartStimulation(AlgometerStopCriterion.STOP_CRITERION_ON_BUTTON_PRESSED, True)

    Log.Information("Starting conditioning: {intensity}", tc.SR.PTT)

    return True

def Stop(tc):
    algometer = tc.Devices.Algometer
    algometer.StopStimulation()

    return True

The Condition() function starts a 9.9 min long pressure stimulus that conditions the SEPs. This pressure stimulation is stopped by the Stop() function when the test is either completed or aborted.

In this case, the test events are used to deliver stimuli; however, they have also been used for other applications, such as:

  • Displaying instructions to the subject on an external monitor
  • Saving custom data from a test

Questionnaires

Support has been added for questionnaires to be shown on an external monitor, and for a joystick to be used by the subject to answer the following types of questions:

  • Boolean Questions
  • Boolean List Questions
  • Likert Questions
  • Multiple Choice Questions
  • Visual Analog Scale Questions
  • Numerical Rating Scale Questions
  • Categorical Rating Scale Questions

Evoked Potentials Tests

The Evoked Potentials test is a new test that allows a set of stimuli to be presented to a subject according to a stimulation pattern. Stimulation patterns can be created as a composition of sequences, as long as their timing can be determined when the test is started. This means that irregular stimulus patterns, for example with bursts of stimuli, can be specified; however, it is not possible to create non-deterministic patterns, such as a pattern that includes a wait period depending on subject or operator input.

Below is an example of a stimulation pattern for a classical Oddball paradigm:

<stimulation-pattern time-base="seconds">
    <uniformly-distributed-sequence iterations="NumberOfSets * NumberOfStimuli"
                                    minTperiod="1.5"
                                    maxTperiod="2.5"/>
</stimulation-pattern>

Here, the stimulus set, which consists of NumberOfStimuli stimuli, is presented with a uniformly distributed inter-stimulus interval of 1.5 s to 2.5 s, and the set is presented NumberOfSets times. NumberOfStimuli is a variable that is automatically added by the test to make it easier to define stimulation patterns, and NumberOfSets is a variable that is defined in the <defines> section of the protocol.

The stimulus set for this Oddball paradigm is as follows:

<stimuli order="block-random">
    <stimulus name="Normal"
              count="4"
              intensity="T01.Intensity">
        <triggers>
            <trigger duration="10">
                <code output="Code" value="1" />
            </trigger>
        </triggers>

        <stimulus>
            <sine Is="x" Frequency="500" Ts="300"/>
        </stimulus>
    </stimulus>

    <stimulus name="Oddball"
             count="1"
             intensity="T02.Intensity">
        <triggers>
            <trigger duration="10">
                <code output="Code" value="2" />
            </trigger>
        </triggers>
    
        <stimulus>
            <sine Is="x" Frequency="1000" Ts="300"/>
        </stimulus>
    </stimulus>
</stimuli>

The T01 and T02 tests are Stimulus Presentation tests (please see below) that have previously determined the intensities for the Normal and Oddball stimuli, respectively. Not shown in this example is that the triggers for the EEG amplifier are generated with a LabBench I/O, and the stimuli are auditory stimuli generated with the built-in sound card of the computer. The triggers are synchronized with the auditory stimuli by a LabBench ATRIG device inserted between the sound card and the headphones. The LabBench ATRIG is an accessory to the LabBench I/O that can be connected to one of its response ports and will generate a trigger each time a sound is played. In this case, this trigger starts the <triggers> sequence specified above. Please note that because the sound card implements the StimulusGenerator interface, which is supported by the Evoked Potentials test, no Python code is required to implement this experimental paradigm.

Stimulus Presentation Tests

The Stimulus Presentation test is a new test that allows a stimulus to be manually presented to a subject. This can be used, for example, to familiarize a subject with a specific kind of stimulus, or to instruct them in how to rate these stimuli with a psychophysical rating scale.

This test can, for example, be used to manually present auditory stimuli to a subject in order to determine the sound intensities that will be perceived as not uncomfortably loud for an Oddball paradigm (see above). Below is an example of the definition of such a Stimulus Presentation test:

<psychophysics-stimulus-presentation ID="T01"
                                     name="Normal stimulus"
                                     stimulus-update-rate="44100"
                                     trigger-update-rate="20000">
    <properties>
        <instructions default-instructions="T01"
                      override-results="false"/>
    </properties>
    <intensity type="array"
               value="[Stimulator.Range * v/100 + Stimulator.Min for v in  range(0, 101, 5)]" />
    <responses response-collection="yes-no" />
    <triggers start-triggger="response-port01">
        <trigger duration="10">
            <code output="Code" value="1" />
        </trigger>    
    </triggers>
    
    <stimulation>
        <stimulus>
            <sine Is="x" Frequency="500" Ts="300"/>
        </stimulus>
    </stimulation>
</psychophysics-stimulus-presentation>

Psychophysical rating scales on external monitor

It is now possible to use an external monitor to display rating scales to the subject. These scales can either be controlled by a LabBench SCALE device or by a 3rd party joystick.

Support for LabBench PAD response devices

Support has been added in the LabBench I/O driver for the LabBench PAD. The LabBench PAD is a response device consisting of up to 8 push buttons that can be used by subjects to provide responses in psychophysical research paradigms, such as Stroop, Flanker, and Stop-Signal tasks.

Support for LabBench ATRIG, VTRIG, and TTRIG response devices

Support has been added for trigger response devices in the LabBench I/O for the following devices:

  • LabBench ATRIG: Can generate triggers from any audio signal.
  • LabBench VTRIG: Can generate triggers from fiducial markers on displays.
  • LabBench TTRIG: Can generate triggers from the drive signal to vibrotactile stimulators.

The purpose of these trigger devices is to enable the generation of up to 16-bit contextual triggers to EEG amplifiers and similar.

Support for negative logic, default analog output, and expected INTERFACE cable

The LabBench I/O driver now supports specification of the logic convention for the INTERFACE port (positive/negative logic), the default analog output, and the expected voltage levels on the INTERFACE port.

Revision 3.3

LabBench, Rev. 3.3.0 is a minor release fully backward compatible with revision 3.2.0.

List of changes

Major Updates

  • Improved data visualization and saving in the threshold estimation test
  • Catch trials in the threshold estimation test
  • Calibration of auditory stimuli
  • Configuration of the LabBench Directory

Minor Updates

  • Enhancement: Error checking is now performed on all pressure algometry parameters.
  • Enhancement: Configuration of the Seq Log Server has been made easier. If no configuration is provided when it is enabled, it will assume that the Seq Log Server is installed locally and set up a configuration with the default parameters for a local Seq Log Server.
  • Bugfix: Instructions given to the experimenter during a cold pressor test did not update. This bug has been fixed.

Major Changes

Improved data visualization and saving in the threshold estimation test

Previously, the Psychophysics Threshold Estimation test (<psychophysics-threshold-estimation>), used to estimate psychometric functions with adaptive methods, would only plot the responses to the stimuli and the estimated threshold. This visualization meant that when the Psi Method was used, it was not possible to assess whether its estimation of the psychometric function was converging. The visualization of the Psi Method has been improved, so it displays the confidence interval of the alpha parameter of the psychometric functions in the plot of responses to stimuli. Furthermore, a plot has been added to display the estimated psychometric functions.

New Psi Method Plotting

With this new visualization, an experimenter can assess if the estimation converges. The algorithm converges if confidence intervals progressively narrow as more stimuli are presented. A confidence-level parameter for each stimulus channel can specify the confidence interval for the alpha parameter. A default value of 0.95 will be used if no confidence-level parameter is specified.

The confidence intervals for the alpha and beta parameters of the psychometric function are now also stored and exported as part of the data set for an experiment.

Catch trials in the threshold estimation test

Catch trials have been implemented in the Psychophysics Threshold Estimation test (<psychophysics-threshold-estimation>), which can be specified per channel with the following channel configuration:

<channel ID="C01"
            channel-type="single-sample"
            trigger="1"
            channel="3"
            name="Sine (1000Hz)"
            Imax="Imult * TA1['C01'] if Imult * TA1['C01'] &lt; 40.0 else 40.0">
    <catch-trials order="block-randomized"
                  interval="5" />

    ...
</channel>

Catch trials are inserted into the estimation based on the order parameter:

  • deterministic: a catch trial is inserted for every interval stimuli.
  • block-randomized: a catch trial is inserted into each block of interval stimuli, at a random position within the block.
  • randomized: catch trials are inserted randomly with a probability of 1/interval.
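
A minimal Python sketch of the three insertion strategies (the helper below is hypothetical, not LabBench code):

import random

def insert_catch_trials(n_stimuli, interval, order):
    """Return a list of trials where True marks an inserted catch trial."""
    trials = []
    if order == "deterministic":
        for i in range(n_stimuli):
            trials.append(False)
            if (i + 1) % interval == 0:
                trials.append(True)   # one catch trial per interval stimuli
    elif order == "block-randomized":
        for start in range(0, n_stimuli, interval):
            block = [False] * min(interval, n_stimuli - start)
            block.insert(random.randrange(len(block) + 1), True)  # random slot in block
            trials.extend(block)
    elif order == "randomized":
        for _ in range(n_stimuli):
            trials.append(False)
            if random.random() < 1.0 / interval:
                trials.append(True)   # catch trial with probability 1/interval
    return trials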

Calibration of auditory stimuli

Calibration of auditory stimuli has been implemented. In the LabBench data directory, there is now a directory, calibration, in which calibration data can be supplied to LabBench.

To calibrate auditory stimuli/sound cards, a file soundcard.xml must be placed in the calibration directory. This file must adhere to the following format:

<?xml version="1.0" encoding="utf-8" ?>
<sound-calibration format="dBFS-table">
    <left>
        dBFS,500,630,800,1000,1250,1500,2000
        0,110.4,112.3,115,112.8,117.4,115.3,116.3
        -5,105.4,107.2,109.9,107.7,112.5,110.4,111.5
        ...
        -95,13.5,16.3,20.1,17.5,22.9,20.6,22.1
        -100,14.6,16.3,20.1,17.6,22.9,20.9,22.3    
    </left>
    <right>
        dBFS,500,630,800,1000,1250,1500,2000
        0,112.5,113.8,116.3,114.5,118.8,117,117.9
        -5,107.1,108.4,111.1,109.2,113.7,111.7,112.7
        ...
        -95,16.8,17.6,21.3,19.1,24.4,21.7,23.7
        -100,16.6,17.6,21.1,19,24.4,22.2,23.3
    </right>
</sound-calibration>

This file provides the calibration data for the left and right channels of the sound card, and it consists of lookup tables that provide the dBFS that will result in a given sound pressure. The first column in the lookup table consists of the dBFS values for which sound pressures have been measured, and the first row consists of the frequencies for which these sound pressures have been measured.

When LabBench needs to generate a pure tone with a given sound pressure, it will first find the column corresponding to the pure tone's frequency. If this frequency is absent from the calibration data, it will linearly interpolate the column from the two nearest frequency columns. When the frequency column has been found or interpolated, it will look up the two sound pressures in the column nearest to the requested sound pressure and use linear interpolation to find the dBFS that will generate the requested sound pressure.
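
A minimal sketch of this lookup procedure, assuming the calibration file has been parsed into a dict mapping each frequency column to its (dBFS, sound pressure) pairs; the data layout and helper name are hypothetical:

def dbfs_for_sound_pressure(table, frequency, target_spl):
    """table: {frequency: [(dbfs, spl), ...]}, rows in the same dBFS order."""
    freqs = sorted(table)
    if frequency in table:
        column = table[frequency]
    else:
        # Linearly interpolate a column from the two nearest frequency columns.
        lo = max(f for f in freqs if f < frequency)
        hi = min(f for f in freqs if f > frequency)
        w = (frequency - lo) / (hi - lo)
        column = [(d, s1 + w * (s2 - s1))
                  for (d, s1), (_, s2) in zip(table[lo], table[hi])]
    # Find the two rows bracketing the requested sound pressure and interpolate.
    for (d1, s1), (d2, s2) in zip(column, column[1:]):
        if min(s1, s2) <= target_spl <= max(s1, s2):
            return d1 + (target_spl - s1) * (d2 - d1) / (s2 - s1)
    raise ValueError("Requested sound pressure is outside the calibrated range")

For example, dbfs_for_sound_pressure(calibration, 1000, 60.0) would return the dBFS setting that produces 60 dB SPL at 1000 Hz.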

dBFS stands for "decibels relative to full scale" and is a unit of measurement used in digital audio to describe the level of a signal. It measures the amplitude of an audio signal relative to the maximum possible amplitude that can be represented in the digital system. In digital audio, the maximum amplitude that can be represented is typically represented by a full-scale value of 0 dBFS. Any values above this level will result in clipping, which is the distortion of the audio waveform. Negative values represent amplitudes below the maximum possible amplitude.

When measuring the level of an audio signal using dBFS, the reference point is always the maximum possible amplitude. This reference means that a signal with an amplitude of half the maximum possible level will be represented as -6 dBFS, since it is 6 dB below the maximum level. The dBFS unit is used in digital audio processing to ensure signals are not distorted or clipped.
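
The amplitude-to-dBFS relationship described above can be written as dBFS = 20 * log10(amplitude / full scale), illustrated here in Python:

import math

def dbfs(amplitude, full_scale=1.0):
    # 0 dBFS corresponds to the maximum possible amplitude.
    return 20 * math.log10(amplitude / full_scale)

print(dbfs(1.0))  # 0.0 dBFS at full scale
print(dbfs(0.5))  # about -6.0 dBFS at half the maximum amplitude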

Configuration of the LabBench Directory

The LabBench directory is where LabBench stores all internal data, which consists of the configuration of devices, protocol repositories, experimental data, etc. Data in this directory is not meant to be edited by the user directly, but only through the LabBench Designer and Runner. The default location for this directory is C:\LabBench.

However, with this new version, this location can be changed by providing a -p [path to LabBench directory] command-line argument to the LabBench Designer and Runner programs. Changing the location of the LabBench directory has two use cases:

  1. You can run multiple versions of LabBench on the same computer, which means you can run experiments requiring a newer version of LabBench without risking changes to experiments using older versions of LabBench.
  2. The LabBench directory can now be placed on a network share, meaning no data is lost if your computer malfunctions or is stolen. If this happens, an experimental setup can be restored by installing LabBench on a new computer and configuring it to use the old LabBench directory on the network share.

Revision 3.2

LabBench, Rev. 3.2.0 is a minor release fully backward compatible with revision 3.1.0.

List of changes

Major Updates

  • Templating for Survey questions
  • Localization of file assets
  • Remotely installed experiments
  • LabBench SERVER
  • Support for generic joysticks
  • Support for generic sound cards
  • Test for manual threshold detection
  • Test for the recording of psychophysical ratings
  • Serilog and improved logging
  • Datalust Seq Log Server

Minor Updates

  • Enhancement: LabBench Designer will now warn you and allow you to cancel the operation before it releases a license.
  • Enhancement: The instruction text in Survey questions has been enlarged to make it easier for the operator to read.
  • Enhancement: If an error occurs in the Startup Wizard of LabBench Runner it will now be displayed in an error screen.
  • Enhancement: Python code is now checked for syntactic correctness as part of protocol validation.
  • Enhancement: If no devices are used by a protocol, the Start Wizard of LabBench Runner will not show the devices screen.
  • Bugfix: An error in the script of a Test condition would crash LabBench.
  • Bugfix: File assets are now verified to be present as part of protocol validation.

Major Changes

Templating for Survey questions

Previously, if you defined a Survey consisting of several questions with identical content, such as a Pain Catastrophizing Scale that consists of 13 questions rated on a Likert scale, you would need to define and repeat the Likert scale for each question, meaning each question would look as below:

<content>
   <likert id="I01" title="dynamic: Text['QUESTION']" instruction="dynamic: Text['I01']">
      <choice value="0" label="dynamic: Text['L0']"/>
      <choice value="1" label="dynamic: Text['L1']"/>
      <choice value="2" label="dynamic: Text['L2']"/>
      <choice value="3" label="dynamic: Text['L3']"/>
      <choice value="4" label="dynamic: Text['L4']"/>
   </likert>
   ...
</content>

This made the Survey definition very verbose. Furthermore, if you needed to change the content of the Likert scale, you would need to change its definition in each question in the Survey. In the present version of LabBench, a template mechanism has been implemented that makes it less tedious and hence easier and less error-prone to define Likert scales and similar repeated content.

With this template mechanism, the definition above can instead be expressed as a template and a question that is derived from this template:

<templates>
   <likert id="pcs-question" title="dynamic: Text['QUESTION']">
      <choice value="0" label="dynamic: Text['L0']"/>
      <choice value="1" label="dynamic: Text['L1']"/>
      <choice value="2" label="dynamic: Text['L2']"/>
      <choice value="3" label="dynamic: Text['L3']"/>
      <choice value="4" label="dynamic: Text['L4']"/>
   </likert>
   ...
</templates>
<content>
   <likert id="I01" template="pcs-question" instruction="dynamic: Text['I01']" />
   ...
</content>

Now the question definition consists of a single line that references the template in the <templates> element.

Localization of file assets

File assets can now be localized instead of needing to implement localization in a backing script for a definition. This means that scripts that generate text for the UI can now be simpler and easier to develop.

With this new file assets localization, a file asset can now be defined as:

<file-asset id="TEXT" file="TEXT_EN.py">
    <language code="DA" file="TEXT_DA.py"/>
</file-asset>

This is an example of a backing script that creates text for the UI. In this case, the protocol defines two languages that can be chosen in the start-up wizard (EN: English, and DA: Danish). If EN is selected (or any other language), the file TEXT_EN.py will be loaded and used for the file asset, and if DA is selected, TEXT_DA.py will be loaded. You can have as many <language> elements as needed in a file asset.

Remotely installed experiments

It is now possible to store experiments remotely when installing them from a protocol repository. When experiments are stored remotely, they are not copied from the repository into the LabBench local storage. Instead, each time the experiment is started, the files for the experiment are downloaded from the repository.

This is intended as a convenience when developing new protocols. When an experiment is stored remotely, you do not need to uninstall/reinstall an experiment each time you change its protocol.

However, it is not intended for actual experiments.

LabBench SERVER

A new device has been implemented, termed the LabBench SERVER. The LabBench SERVER is a web server embedded within LabBench that can host web apps. With this server, it is possible to turn any Ethernet-connected device on the local network into a psychophysical rating device, such as a rating scale, response button, or response interface, as required for questionnaires or psychophysical research paradigms.

The LabBench Server can be included in an experimental setup as:

<devices>
    <server id="server">
        <visual-analog-scale id="server.pain" name="Pain" length="10">
            <modality value="Pain">
                <localized-text language="DA" value="Smerte"/>
            </modality>
            <lower-anchor value="No Pain">
                <localized-text language="DA" value="Ingen Smerte"/>
            </lower-anchor>
            <upper-anchor value="Maximal Pain">
                <localized-text language="DA" value="Maksimal Smerte"/>
            </upper-anchor>
        </visual-analog-scale>
        <visual-analog-scale id="server.itch" name="Itch" length="10">
            <modality value="Itch">
                <localized-text language="DA" value="Kløe"/>
            </modality>
            <lower-anchor value="No Itch">
                <localized-text language="DA" value="Ingen Kløe"/>
            </lower-anchor>
            <upper-anchor value="Maximal Itch">
                <localized-text language="DA" value="Maksimal Kløe"/>
            </upper-anchor>
        </visual-analog-scale>
    </server>
</devices>

In this case, the LabBench SERVER defines two VAS scales: one for pain and one for itch. The scales are localized to English and Danish, but they can be localized to as many languages as required.

Support for generic joysticks

A new device has been implemented that makes it possible to use generic joysticks/game controllers as response buttons.

A generic joystick can be included in an experimental setup as:

<joystick id="joystick" />

Support for generic sound cards

A new device has been implemented that makes it possible to use generic sound cards for hearing tests and auditory evoked potentials.

A generic sound card can be included in an experimental setup as:

<sound id="sound" calidation-data=""/>

Note: calibration data is not yet implemented; support for calibrating the sound card according to the ANSI S3.6 and IEC 60645-1 standards is planned for an upcoming release.

Below is an example of how the sound card defined in the experimental setup above can be used to determine the hearing threshold for a 1000 Hz tone with a duration of 200 ms:

<psychophysics-threshold-estimation ID="T2"
                                    name="Psi Method">
    <dependencies>
        <dependency ID="T1"/>
    </dependencies>
    <update-rate-deterministic value="2000" />
    <yes-no-task stimulus-update-rate="44100" />
    <channels>
        <channel ID="C01"
                 channel-type="single-sample"
                 trigger="1"
                 channel="0"
                 name="Sine (1000Hz)"
                 Imax="Imult * T1['C01'] if Imult * T1['C01'] &lt; 1.0 else 1.0">
            <psi-method number-of-trials="Trials">
                <quick alpha="0.5"
                       beta="1"
                       lambda="0.02"
                       gamma="0.0" />
                <beta type="linspace"
                      base="10"
                      x0="-1.2041"
                      x1="1.2041"
                      n="20"/>
                <alpha type="linspace"
                       x0="alphaX0"
                       x1="1"
                       n="alphaN" />
                <intensity type="linspace"
                           x0="alphaX0"
                           x1="1"
                           n="intensityN" />
            </psi-method>
            <sine Is="x"
                  Ts="200"
                  Frequency="1000"
                  Tdelay="0" />
        </channel>
    </channels>
</psychophysics-threshold-estimation>
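
For intuition, the <sine> element in the example describes a 1000 Hz tone, 200 ms in duration, scaled by the normalized intensity x chosen by the Psi method and sampled at the 44100 Hz stimulus update rate. Below is a rough Python sketch of that waveform (illustrative only, not LabBench code; the parameter names mirror the XML attributes):

import numpy as np

def sine_stimulus(x, Ts=200.0, frequency=1000.0, Tdelay=0.0, rate=44100):
    """Sampled sine stimulus; x is the normalized intensity (0..1)."""
    delay = np.zeros(int(Tdelay / 1000.0 * rate))    # silent onset delay
    t = np.arange(int(Ts / 1000.0 * rate)) / rate    # time axis in seconds
    return np.concatenate([delay, x * np.sin(2.0 * np.pi * frequency * t)])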

Test for manual threshold estimation

A new test termed <psychophysics-manual-threshold-estimation> has been implemented, which guides an experimenter through manually determining tactile sensitivity with tactile stimulation devices such as von Frey hairs or two-point discriminators. The test provides adaptive estimation of psychometric thresholds and psychometric functions with the Up/Down and Psi methods, respectively.

Below is an example of how this manual threshold determination test can be used to determine the two-point discrimination threshold for a subject.

<psychophysics-manual-threshold-estimation ID="TPD_PSI_FC1I2A"
                                           name="2PD (Psi, Forced Choice (1I2A)">
    <psi-algorithm number-of-trials="30"
                   intensities="[2,3,4,5,6,7,8,9,10,11,12,13,14,15,20,25]">
        <quick alpha="0.5"
               beta="1"
               gamma="0.5"
               lambda="0.02" />
        <beta type="linspace"
              base="10"
              x0="-1.2041"
              x1="1.2041"
              n="20"/>
        <!-- Change the 2.0 and 25.0 to the min/max from the intensities. Be sure to include a .0 to make it a floating point number. -->
        <alpha type="linspace"
               x0="2.0/25.0"
               x1="1"
               n="100" />
    </psi-algorithm>
    <one-interval-forced-choice-task alternative-a-image="TwoProngsAlong"
                                     alternative-a="Along"
                                     alternative-b-image="TwoProngsAcross"
                                     alternative-b="Across"
                                     question="What is the orientation of the two points (Along or Across the finger)?"/>
</psychophysics-manual-threshold-estimation>

In this case, the test determines the threshold with a Psi method and a one-interval two-alternative forced-choice response task.
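
The parameter grids in the <psi-algorithm> element can be read as follows. This is a sketch, assuming that type="linspace" denotes linearly spaced values and that a base="10" attribute maps them through 10**x; the XSD is authoritative:

import numpy as np

beta = 10.0 ** np.linspace(-1.2041, 1.2041, 20)  # slope grid, roughly 0.0625 to 16
alpha = np.linspace(2.0 / 25.0, 1.0, 100)        # threshold grid, normalized
# 2.0/25.0 normalizes the smallest intensity (2) by the largest (25),
# which is what the comment in the protocol above refers to.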

Test for the recording of psychophysical ratings

A new test termed <psychophysics-response-recording> has been implemented. This test makes it possible to record psychophysical ratings for a pre-specified duration and to automatically obtain statistics such as area under the curve, maximal rating, and time of maximal rating.

Below is an example of how this test can be used to record pain and itch ratings for 10 min:

<psychophysics-response-recording ID="T01"
                                  name="Pain and Itch Recording"
                                  duration="10 * 60"
                                  sample-rate="5" />

Serilog and improved logging

The logging system has been fully migrated to Serilog, which provides full semantic logging of events within LabBench.

The logging system provides three sinks of logging data:

  1. The log window within LabBench Runner (always enabled)
  2. Persisting log events to rolling log files in C:\LabBench27\logs (always enabled)
  3. The Datalust Seq Log server (optional)

For each sink, it is possible to define the minimum log level that will be sent to it.

Datalust Seq Log Server

Support for the Datalust Seq Log Server has been implemented. The Datalust Seq Log Server is a centralized repository that collects and stores log messages generated by multiple LabBench installations. The log messages can be used to track and monitor all experiments that are ongoing in a research center and can also be used to diagnose and debug issues.

Advantages of using the Datalust Seq Log Server:

  1. Centralized management: All log messages from multiple systems are consolidated in one place, making it easier to manage and analyze the logs.
  2. Improved visibility: By having all log messages in a single location, it becomes easier to identify patterns and correlations across the system that might not be obvious from isolated logs.
  3. Easier troubleshooting: When issues arise, a central log server allows for faster and more efficient debugging by providing a complete view of all related log messages.
  4. Compliance with GDPR: The Datalust Seq Log Server is fully self-hosted and all data is stored locally, making it possible to comply with the GDPR.
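
To illustrate what semantic logging means in practice, below is a sketch of a structured event posted to a Seq server over HTTP. The port, endpoint, and CLEF media type shown are Seq defaults and may differ for a given installation; this is not LabBench code:

import json
import urllib.request

# A CLEF (Compact Log Event Format) event: the message template and its
# properties are stored separately, which is what makes logs searchable.
event = {
    '@t': '2024-01-01T12:00:00Z',            # timestamp
    '@l': 'Information',                     # log level
    '@mt': 'Session {SessionId} completed',  # message template
    'SessionId': 'S014',                     # structured property
}

request = urllib.request.Request(
    'http://localhost:5341/api/events/raw',
    data=json.dumps(event).encode('utf-8'),
    headers={'Content-Type': 'application/vnd.serilog.clef'})
urllib.request.urlopen(request)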

Revision 3.1

LabBench, Rev. 3.1.0 is a minor release that is fully backwards compatible with revision 3.0.0. The focus of the release has been functionality related to post-session actions, together with bugfixes for revision 3.0.0.

List of changes

Major Updates

  • Adding, configuring, and rerunning post-session actions.
  • Post-session action for exporting data to JSON and MATLAB files.

Minor Updates

  • Bugfix: The UI of Survey tests would not be enabled if an instruction screen was added for the test.
  • Bugfix: In the Startup Wizard, it was possible to start an experiment with an invalid Session ID.
  • Bugfix: Conditions on tests caused a stack overflow that crashed the program.

Changes

Adding, configuring, and rerunning post-session actions.

Functionality for adding, configuring, and rerunning post-session actions has been added to the LabBench Designer. In the Post Session Actions section of the Experiment tab, it is now possible to:

  1. Adding an action by selecting an Action Definition File (*.adx), in which the action is implemented in an XML format.
  2. Changing the output directory (Location) of an action in an experiment.
  3. Deleting an action from an experiment.
  4. Rerunning the actions for all sessions in an experiment, thereby recreating all the files that are created by the LabBench Runner when a session is completed.

To implement this functionality, a new file format termed Action Definition Files has been defined. Currently, two types of actions exist:

  1. <export-to-csv>: XSD: http://labbench.io/xsd/3.1.0/csvaction.xsd
  2. <export-data>: XSD: http://labbench.io/xsd/3.1.0/export_action.xsd

Below is an example of a definition file for the <export-to-csv> action:

<?xml version="1.0" encoding="UTF-8"?>
<export-to-csv xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://labbench.io http://labbench.io/xsd/3.1.0/csvaction.xsd"
               name="Exporting session to CSV"
               location="C:\CPAR"
               header="true"
               seperator=";"
               filename="dynamic: '{session}-{time}.csv'.format(session = SESSION_NAME, time = SESSION_TIME)">
    <item name="PDT"
          value="PDT.PDT"
          default="NA"/>
    <item name="Operator"
          value="PDT.Operator"
          default="NA"/>
    <item name="RecordingTime"
          value="PDT.RecordingTime"
          default="NA"/>

</export-to-csv>
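
The filename attribute above is an ordinary Python format expression (marked by the dynamic: prefix). With illustrative values for SESSION_NAME and SESSION_TIME, it evaluates as follows:

# Illustrative values; the actual values are supplied by LabBench.
SESSION_NAME = 'S014'
SESSION_TIME = '2024-01-01T10-30'
filename = '{session}-{time}.csv'.format(session=SESSION_NAME,
                                         time=SESSION_TIME)
# -> 'S014-2024-01-01T10-30.csv'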

Post-session action for exporting data to JSON and MATLAB files.

A post-session action has been implemented for exporting data to either JSON or MATLAB format:

<?xml version="1.0" encoding="UTF-8"?>
<export-data xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://labbench.io http://labbench.io/xsd/3.1.0/export_action.xsd"
               name="Exporting session to MATLAB"
               location="C:\CPAR"
               filename="dynamic: '{session}-{time}.mat'.format(session = SESSION_NAME, time = SESSION_TIME)"
               format="matlab"/>

Revision 3.0

LabBench, Rev. 3.0.0 is a major release that introduces a new UI and a dedicated program for configuring LabBench that replaces the Command Line Interface (CLI).

List of changes

Major Updates

  • LabBench Designer
  • Updated User Interface
  • New License System
  • Rewritten DAQmx Driver
  • Test Annotations
  • Simplified setup of experiments from protocol repositories
  • Migrated to the .NET platform

Minor Updates

  • Algometry: Saving of target pressures

Changes

LabBench Designer

The Command Line Interface has been replaced by the LabBench Designer program, which provides a GUI for all the configuration tasks that were previously performed by commands on the command line.

Updated User Interface

The user interface has been updated to Windows Presentation Foundation.

New License System

The license system has been completely reworked. With the old license system, the license was tied to specific hardware devices, meaning that you required a license for each hardware device you needed to use in LabBench. Consequently, if you needed to change, for example, an NI DAQmx card, you could not do so unless a license had been issued for the new card.

This has now been changed so the licenses are tied to tests in LabBench. Currently, there are three sets of tests:

  • Core: Tests such as Surveys. These are automatically included if you have a license for Algometry or Psychophysics.
  • Algometry: Tests for running Cuff Pressure Algometry experiments.
  • Psychophysics: Tests for running Nerve Excitability Testing, Cold Pressor, and Tactile Sensitivity Testing.

This means that licenses are no longer tied to hardware, which can therefore be changed arbitrarily. A license can be used on one computer at a time; however, it can be moved to a new computer as many times as required.

Rewritten DAQmx Driver

The DAQmx driver has been completely rewritten. It no longer uses the National Instruments .NET drivers, but instead the low-level C API. Tests show that this greatly reduces problems with DLLs not being the correct version.

With the old implementation, you initially needed to install exactly the same version of the NI DAQmx drivers as was used to build LabBench. If you installed a different version, LabBench would complain that the DAQmx driver DLLs were not the correct version.

With the new implementation using the low-level C API, it is possible to use a newer version of the NI DAQmx drivers than was used to build LabBench, as the NI DAQmx C API is backwards compatible.

Test Annotations

It is now possible to include what are termed test annotations. Test annotations are additional data that can help in analyzing the results of a study, and they can be specified in the properties of a test:

<properties>
    <annotations>
        <bool name="boolean" value="true"/>
        <number name="number" value="223.2"/>
        <string name="string" value="Hello, World!"/>
        <numbers name="list">
            <number value="1"/>
            <number value="2"/>
            <number value="3"/>
        </numbers>
    </annotations>
</properties>

Test annotations can for example be used to specify the stimulus durations used in a Strength-Duration test.
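
Conceptually, the annotations above travel with the test results as plain data. In an exported session they would correspond to something like the following (illustrative only; the exact layout depends on the chosen export action):

annotations = {
    'boolean': True,
    'number': 223.2,
    'string': 'Hello, World!',
    'list': [1, 2, 3],
}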

Simplified setup of experiments from protocol repositories

Protocol repositories have been refactored such that each protocol also contains a template for an experiment definition file (*.expx). With this template, it is possible to create an experiment from the LabBench Designer by:

  1. Selecting the protocol in the Protocol Repository.
  2. Clicking the Add button and specifying an ID and Name for the new experiment.
  3. LabBench Designer creates a new experiment based on the experiment template and the ID and name provided.

After the experiment has been created, Subject ID validation can be set up from its configuration page.

Migrated to the .NET platform

LabBench has been migrated from the venerable .NET Framework platform to the modern .NET platform. In time, this will enable LabBench to run on Mac and Linux computers.

Revision 2.7

LabBench, Rev. 2.7.9 is a legacy release that is maintained for compatibility with the CPAR toolbox for the Nocitech CPAR device.
