Neural Programmer-Interpreter implementation (Reed & de Freitas: https://arxiv.org/abs/1511.06279), in TensorFlow.
A word-level Transformer layer based on PyTorch and 🤗 Transformers.
Visual analytics approach presented in the paper "Visual Analytics Tool for the Interpretation of Hidden States in Recurrent Neural Networks" (VCIBA, 2021).
R package for Statistical Modeling of Animal Movements
This repository contains NLP transfer-learning projects with deployment and UI integration.
Designing and training probabilistic graphical models (MATLAB).
IJRR 2026 | Situationally-Aware Dynamics Learning | Online and Unsupervised Latent Factor Representation Learning for Robot Dynamics Learning.
Investigating Layer-Specific Performance in Speaker Recognition with XLS-R Architecture
Measurement-first audit repo for hidden-state verifiers in structured reasoning: outcome readout vs process verification via counterfactual local validity.
Geometric phase structure in Transformer hidden states. LayerNorm placement predicts manifold geometry — 6x difference in PCA concentration. 9 models, 13 experiments.