
Tutorials

David Brainard edited this page Mar 16, 2021 · 18 revisions

## Introduction

The tutorials live in the tutorials folder of the repository. They are designed to provide an introduction to the ideas and code.

Each tutorial is implemented as a function that takes one or more parameter structures as input. When run with no explicit input arguments, the tutorials set reasonable defaults so that the calculation runs fairly quickly for some toy cases.
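As a sketch of this calling convention (the parameter-struct field name below is hypothetical, chosen for illustration; each tutorial documents its own fields):

```matlab
% Run with defaults: a quick toy-case calculation.
t_colorGabor;

% Or override selected defaults via a parameter structure.
% The field name here is an assumption for illustration only.
p = struct();
p.spatialFrequencyCyclesPerDeg = 2;
t_colorGabor(p);
```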

The tutorials are ordered and build up in complexity, so that calculations illustrated in early tutorials are then called as functions by later tutorials. Some of the functions are more elaborate than what is illustrated in the tutorials, to allow them to run faster or to handle additional options. The set of tutorials goes all the way from creating a Gabor stimulus to computing computational observer thresholds for a number of directions in the L-M cone contrast plane that may be compared with experimental results.

This repository currently assumes that you have some familiarity with human color vision and threshold psychophysics, but may provide a way for you to learn more even if you do not wish to do calculations of this sort yourself.

## Description of individual tutorials

basic/t_colorGabor - Shows how to make an isetbio scene representing a colored Gabor pattern presented on a calibrated CRT monitor and take it through to cone mosaic responses. This is one of the basic stimuli whose detection threshold we are modeling in this project. The stimulus creation work in this tutorial is done inside function colorSceneCreate.

basic/t_coneIsomerizationsMovie - Shows how to take a temporally windowed color Gabor stimulus (Gaussian window) and compute a movie of the cone mosaic isomerizations at each time sampling point. This relies on function colorGaborSceneCreate. It may also be called with parameters that produce output for our AO spot stimulus modeling.

basic/t_coneCurrentEyeMovementsMovie - This goes further and adds modeling of eye movements as well as the transformation from absorptions to the outer segment cone current. The functionality shown in this tutorial is encapsulated in the function colorDetectResponseInstanceFastConstruct, with the function containing some optimizations to make it run faster.

basic/t_coneCurrentEyeMovementsResponseInstances - For building classifiers, we need multiple noisy instances of the responses to stimuli. This shows how to get such instances, by calling function colorDetectResponseInstanceFastConstruct. It does so for multiple contrasts and color directions, and saves the instances for use by the t_colorDetectFindPerformance tutorial immediately below. The output can be rather large. This can also be called with AO spot parameters.

basic/t_colorDetectFindPerformance - Uses a computational observer to find thresholds. This reads the output of t_coneCurrentEyeMovementsResponseInstances, and saves its output for later plotting. A number of different observers are implemented.

basic/t_fitPsychometricFunctions - Fits psychometric functions to the data produced by t_colorDetectFindPerformance and extracts thresholds. Called by t_plotColorGaborDetectThresholdsOnLMPlane.

basic/t_plotColorGaborDetectThresholdsOnLMPlane - Fits psychometric functions to the output of t_colorDetectFindPerformance and uses these to plot an isodetection threshold contour in the LM plane, with an ellipse fit to it. This could easily be generalized to other planes or to full ellipsoids.

ellipsoids/t_colorThresholdEllipsoids - This tutorial implements the model developed in Poirson & Wandell (1996) and allows one to visualize color threshold ellipsoids for various choices of spatial frequency, according to that model. These threshold ellipsoids summarize one of the data sets we would like to model with the code in this repository.

ellipsoids/t_colorThresholdEllipsoidFit - Illustrates how to fit an ellipsoid to color threshold data for a variety of directions in color space.
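The idea behind such a fit can be sketched as follows: each threshold contrast vector c should satisfy c'Qc = 1 for a symmetric positive-definite matrix Q, so each measured point gives one linear equation in the six unique entries of Q. This is an illustrative least-squares sketch with synthetic data, not the repository's own fitting routine:

```matlab
% Sketch: fit an ellipsoid c'*Q*c = 1 to threshold contrast vectors.
% Synthetic data for illustration; the tutorial's actual fit may differ.
nDirs = 20;
dirs = randn(3, nDirs); dirs = dirs ./ vecnorm(dirs);  % unit color directions
trueQ = diag([40 25 10]);                              % assumed ground-truth shape
r = 1 ./ sqrt(sum(dirs .* (trueQ * dirs), 1));         % threshold radius per direction
c = dirs .* r;                                         % threshold points on the ellipsoid

% Each point gives one linear equation in the 6 unique entries of symmetric Q.
A = [c(1,:).^2; c(2,:).^2; c(3,:).^2; ...
     2*c(1,:).*c(2,:); 2*c(1,:).*c(3,:); 2*c(2,:).*c(3,:)]';
q = A \ ones(nDirs, 1);                                % least-squares solve
Q = [q(1) q(4) q(5); q(4) q(2) q(6); q(5) q(6) q(3)];  % recovered ellipsoid matrix
```

With noiseless synthetic data the recovered Q matches trueQ; with real threshold data the least-squares solve gives the best-fitting ellipsoid in this sense.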

spot/t_coneIsomerizationsMovieSpot - Calls into basic/t_coneIsomerizationsMovie with parameters that model AO spots.

spot/t_coneCurrentEyeMovementsResponseInstancesSpot - Calls into basic/t_coneCurrentEyeMovementsResponseInstances with parameters that model AO spots.

spot/t_colorDetectFindPerformanceSpot - Calls into basic/t_colorDetectFindPerformance with parameters that model AO spots.

## Using the tutorials

If you want to see a complete calculation illustrated for just a few conditions, run these tutorials in order:

1. basic/t_coneCurrentEyeMovementsResponseInstances
2. basic/t_colorDetectFindPerformance
3. basic/t_plotColorGaborDetectThresholdsOnLMPlane

This will generate response instances for a few modulations, find the performance as a function of contrast for each, fit the psychometric functions, and make a plot in the LM plane of the thresholds and an ellipse fit through them. The contrasts are very coarsely sampled and the classifier training is done with just a few instances, so the actual numbers are not accurate. But this sequence illustrates the pieces.

Note that you have to run these in order because each saves out data needed by the next.
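The sequence above can be run as a short script; this is a sketch that assumes each tutorial, run with default arguments, saves the data file the next one reads (tutorial names as described earlier on this page):

```matlab
% Run the end-to-end pipeline with defaults.
% Each step saves output that the next step loads, so order matters.
t_coneCurrentEyeMovementsResponseInstances;   % generate noisy response instances
t_colorDetectFindPerformance;                 % classify; performance vs. contrast
t_plotColorGaborDetectThresholdsOnLMPlane;    % fit psychometrics, plot LM ellipse
```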
