
Visual Memory Task Analysis

Analysis code and anonymised behavioural data for a visual recognition memory experiment run on Prolific via JATOS.

Reference

Pettini, L., Bogler, C., Doeller, C., & Haynes, J.-D. (2025). Synthesis and perceptual scaling of high-resolution naturalistic images using Stable Diffusion. Behavior Research Methods, 58(1), 24. https://doi.org/10.3758/s13428-025-02889-8

Study Background

This project sits within a three-stage workflow:

  1. Stage 1: synthesis and model-based perceptual scaling of the stimulus set
  2. Stage 2: psychophysical validation via similarity judgments
  3. Stage 3: behavioural validation in a delayed match-to-sample memory task

This repository covers Stage 3: analysis of behavioural responses from the visual memory experiment (N = 240 participants).

Participants completed a six-block recognition memory task with naturalistic object-scenes. On each trial, they memorised a target object-scene over an 8-second delay, then identified it from a perceptually similar foil. Difficulty varied across trials along two dimensions: target-foil similarity (easy / medium / hard) and target repetition (repeated vs. non_repeated). The primary dependent variable is trial-level recognition accuracy.

The main analysis is a Bayesian hierarchical logistic regression modelling accuracy as a function of block, difficulty, and target condition, with participant-level random intercepts and block slopes.
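Concretely, the structure described above corresponds to a logistic mixed model of roughly the following form (a sketch reconstructed from the description only; the exact predictor coding, interactions, and priors are defined in analysis/r/fit_model.R):

$$
\operatorname{logit}\Pr(\text{correct}_{ij} = 1) = \beta_0 + \beta_1\,\text{block}_{ij} + \beta_2\,\text{difficulty}_{ij} + \beta_3\,\text{condition}_{ij} + u_{0j} + u_{1j}\,\text{block}_{ij}, \qquad (u_{0j}, u_{1j}) \sim \mathcal{N}(\mathbf{0}, \Sigma)
$$

where $j$ indexes participants and $i$ trials; $u_{0j}$ and $u_{1j}$ are the participant-level random intercepts and block slopes, and difficulty and condition enter as categorical predictors.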

Repository Structure

.
├── analysis/
│   ├── python/
│   │   └── preprocess_data.ipynb  # Transparency-only preprocessing notebook
│   └── r/
│       ├── fit_model.R            # Fits the primary brms model
│       ├── modelling_report.Rmd   # Full analysis report
│       ├── modelling_report.html  # Pre-rendered report
│       └── renv.lock              # R dependency lockfile
├── assets/
│   └── fonts/
│       └── cmunrm.otf             # Bundled font for reproducible figures
├── data/
│   └── responses.csv              # Anonymised trial-level dataset
├── figures/                       # Rendered figures used in the report
├── utils/
│   └── anonymise_data.py          # Local anonymisation helper
├── pyproject.toml                 # Python package metadata
├── requirements.txt               # Lightweight Python dependency list
├── CITATION.cff                   # Citation metadata
└── README.md

Data

data/responses.csv contains one row per trial. Useful columns include:

Column            Description
----------------  -----------------------------------------------
participant_id    Anonymous integer participant identifier
block             Block number (0-5)
difficulty        Trial difficulty (easy, medium, hard)
target_condition  Repetition status (repeated, non_repeated)
correct           Trial accuracy (1.0 = correct, 0.0 = incorrect)
rt                Response time in milliseconds

The public dataset excludes Prolific IDs, JATOS worker IDs, and session IDs. The anonymisation logic is documented in utils/anonymise_data.py.
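As a quick orientation to the schema, the dataset can be summarised with pandas. This sketch uses a small synthetic in-memory stand-in for data/responses.csv (same columns as the table above, but illustrative values), so it runs without the repository checked out:

```python
import io
import pandas as pd

# Synthetic stand-in mirroring the documented schema of data/responses.csv.
# Column names follow the table above; the rows themselves are made up.
csv = io.StringIO(
    "participant_id,block,difficulty,target_condition,correct,rt\n"
    "1,0,easy,repeated,1.0,812\n"
    "1,0,hard,non_repeated,0.0,1490\n"
    "2,1,medium,repeated,1.0,905\n"
    "2,1,hard,non_repeated,1.0,1333\n"
)
df = pd.read_csv(csv)

# Trial-level accuracy by difficulty: mean of the 0/1 `correct` column.
accuracy = df.groupby("difficulty")["correct"].mean()
print(accuracy)
```

Pointing `pd.read_csv` at data/responses.csv instead of the synthetic buffer gives the real per-condition accuracies.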

Reproducing the Analysis

The Python and R parts of the repository are intentionally separate.

Python preprocessing

analysis/python/preprocess_data.ipynb documents the original cleaning pipeline from raw JATOS exports to data/responses.csv.

Raw JATOS exports are not included in this repository for data protection reasons, so the notebook is provided for transparency rather than turnkey re-execution. The anonymised output is already included in data/responses.csv.

R modelling

Requirements:

  • R >= 4.3
  • A working C++ toolchain for brms / Stan

Restore the recorded package versions from analysis/r/renv.lock:

setwd("analysis/r")
install.packages("renv")
renv::restore()

If you prefer not to use renv, install the packages loaded near the top of analysis/r/modelling_report.Rmd and analysis/r/fit_model.R.

Fitted model artifact

The rendered report expects analysis/r/bayesian_model.rds, which is not tracked in Git because of its size. Download it from OSF and place it in analysis/r/:

https://osf.io/pf4tv/overview?view_only=ceb4e1b23d58465692cfd872117e28ca

To refit the model from scratch instead:

cd analysis/r
Rscript fit_model.R

To render the report:

setwd("analysis/r")
rmarkdown::render("modelling_report.Rmd")

A pre-rendered version is already included at analysis/r/modelling_report.html.

Reproducibility Notes

  • The public dataset is anonymised.
  • The preprocessing notebook is transparency-only because raw source exports are not distributed here.
  • Large binary model artifacts (*.rds) are intentionally excluded from version control.
  • The report and figure outputs included in the repository were generated from the anonymised dataset and the cached fitted model.

Citation

If you use this repository, please cite the associated paper above. GitHub citation metadata is provided in CITATION.cff.

License

No license file is included yet. Add one before publishing if you want others to have explicit permission to reuse the code and/or dataset.
