diff --git a/content/blog/tidymodels-cheatsheets/index.md b/content/blog/tidymodels-cheatsheets/index.md
new file mode 100644
index 000000000..c1c6dfe66
--- /dev/null
+++ b/content/blog/tidymodels-cheatsheets/index.md
@@ -0,0 +1,161 @@
+---
+title: "tidymodels Cheatsheets"
+date: 2026-04-29
+people:
+ - Edgar Ruiz
+description: >
+ Two new cheatsheets for tidymodels are now available: one for creating models
+ with parsnip, and one for preprocessing data with recipes.
+image: "tidymodels-cheatsheets.png"
+image-alt: "The tidymodels hex logo centered above side-by-side previews of the two new cheatsheets: Preprocessing data with recipes on the left and Create models with parsnip on the right, against a dark background."
+topics:
+ - Machine Learning
+software:
+ - tidymodels
+languages:
+ - R
+resources:
+ - cheatsheets
+tags:
+ -
+source: tidyverse
+nohero: false
+hidesubscription: false
+---
+
+
+
+After almost 8 years, tidymodels finally has its first cheatsheets, and not just one, but two! The [first one](/resources/cheatsheets/ml-preprocessing-data/), covering data preprocessing with `recipes`, was released a couple of months ago. Today, we are delighted to announce [a second cheatsheet](/resources/cheatsheets/ml-create-models/), this time focusing on modeling with `parsnip`.
+
+Both cheatsheets have a dedicated HTML version on the Posit Open Source site, so you can browse and search them without opening a PDF. In this post we'll walk through what each one covers, starting with the newest.
+
+## Create Models with **parsnip**
+
+
+
+The cheatsheet is organized into three main parts: an introduction to parsnip's basics, a catalog of all models available through the package, and a hands-on operations reference for fitting and inspecting models. The basics section introduces how parsnip provides a single, unified interface for defining and fitting models, regardless of the underlying package powering them.
+
+### Model catalog
+
+The largest section of the cheatsheet catalogs all models available through parsnip, grouped by use case:
+
+- **Classification only:** models for binary and multiclass prediction. It also includes probability-based classification using Bayes' theorem and models for ordinal responses.
+- **Regression only:** models for predicting continuous numeric outcomes, from standard linear regression to generalized linear models for count data.
+- **General use:** a versatile mix of model types that work for both classification and regression, including decision trees, nearest neighbors, neural networks, and spline-based approaches.
+- **Discriminant analysis:** models that estimate the distribution of predictors separately for each class and use Bayes' theorem to assign probabilities, available in linear, quadratic, flexible, and regularized variants.
+- **Ensemble methods:** models that combine many individual learners into a stronger prediction, including random forests, gradient boosting, bagged trees, and Bayesian additive regression trees.
+- **Support Vector Machines:** models that find an optimal boundary between classes, or fit a robust regression, using linear, polynomial, or radial kernel functions.
+- **Feature rules:** models that extract simple, human-readable rules from tree ensembles and use them as the basis for prediction.
+- **Survival models:** models for time-to-event data, covering both proportional hazards and fully parametric approaches.
+
+{{< columns split="3,2" >}}
+One design choice in particular makes this section much easier to navigate: **pills**. Each model's compatible engines and supported modes are shown as small, visually distinct tags, so you can see at a glance which mode a given engine supports, without having to read through the description text. Each mode is encoded in the pill with a number: Classification (1), Regression (2), Censored Regression (3), and Quantile Regression (4). A legend mapping each number to its mode is available at the top of page one.
+
+---
+
+
+{{< /columns >}}
+
+And true to the R cheatsheet tradition, individual models or groups of related models are paired with **small illustrations**, thoughtfully designed to aid recall. Each one aims to accurately represent the function or functions it accompanies, making them a genuine navigation aid rather than decoration, especially when you have a vague memory of "that tree-based ensemble that used Bayesian analysis" and need to scan quickly.
+
+
+
+### Operations
+
+The last section covers the practical workflow of fitting and using a model. Each function is paired with a **quick runnable example**, and the examples build on each other starting from the two lines of code right below the section title, making it easy to follow the full workflow from model specification to results.
+
+
+{{< button url="/resources/cheatsheets/ml-create-models/" text="Explore the parsnip cheatsheet" icon-right="boxicons--arrow-right" >}}
+
+## Preprocessing Data with **recipes**
+
+
+
+After a quick Basics section covering the core workflow, the vast majority of the cheatsheet is dedicated to `step_*()` functions, the building blocks of any recipe, before finishing with role and type management.
+
+### Step catalog
+
+The steps are organized into groups based on what they do, each listed with its arguments and a short description:
+
+- **Filters:** steps for removing variables that are sparse, zero-variance, linearly dependent, highly correlated, or missing too many values
+- **In-place Transformations:** basis functions (splines, polynomials), discretization, and normalization steps
+- **Imputation:** steps for filling in missing values, ranging from simple statistical substitution to model-based approaches
+- **Encodings:** type converters (e.g. factor to string, numeric to factor), value converters, and other factor-handling steps
+- **Dummy Variables:** one-hot and binary encoding, text pattern matching, and conversion helpers
+- **Multivariate Transformations:** signal extraction (PCA, ICA, PLS, and friends) and centroid-based distance measures
+- **Date & Time:** steps for converting date and datetime columns into usable numeric or factor features
+- **Row operations:** sampling, shuffling, slicing, and removing rows with missing values
+- **Other:** interaction terms, renaming, rolling window statistics, geographic distances, and ratios
+
+As with the parsnip cheatsheet, each group of steps is paired with **small, thoughtfully designed illustrations** to help you visually locate a step family when scanning.
+
+### Role & type
+
+{{< columns split="3,2" >}}
+The last section focuses on the selection and management of variable roles and types within the recipe. The selection side covers ways to target variables by their role (outcome, predictor, or any custom role) as well as by their type (numeric, factor, logical, and so on), including a handy set of convenience selectors for the most common combinations. The management side shows how to add, update, and remove roles, giving you fine-grained control over how each variable participates in the recipe.
+
+---
+
+
+{{< /columns >}}
+
+
+
+{{< button url="/resources/cheatsheets/ml-preprocessing-data/" text="Explore the recipes cheatsheet" icon-right="boxicons--arrow-right" >}}
+
+## Need them on the go? Print them!
+
+A lot of care went into ensuring both cheatsheets hold up when printed, particularly in black and white. We know that many folks print cheatsheets to keep at their desk for quick reference, and we wanted to make sure they remain fully usable in that medium. That meant making sure font sizes and weights stay legible on paper, that the illustrations remain perceptible without color, and that contrast levels are strong enough that no text ends up too pale to read or too heavy to parse. Accessibility in print mattered to us just as much as clarity on screen.
+
+
+
+
+
diff --git a/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-bw.png b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-bw.png
new file mode 100644
index 000000000..37bb75938
Binary files /dev/null and b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-bw.png differ
diff --git a/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-parsnip.png b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-parsnip.png
new file mode 100644
index 000000000..1fd89db9c
Binary files /dev/null and b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-parsnip.png differ
diff --git a/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-pills.png b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-pills.png
new file mode 100644
index 000000000..5adc049a7
Binary files /dev/null and b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-pills.png differ
diff --git a/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-recipes.png b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-recipes.png
new file mode 100644
index 000000000..bf4d3982a
Binary files /dev/null and b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-recipes.png differ
diff --git a/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-selectors.png b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-selectors.png
new file mode 100644
index 000000000..e81055737
Binary files /dev/null and b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets-selectors.png differ
diff --git a/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets.png b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets.png
new file mode 100644
index 000000000..c82b9e920
Binary files /dev/null and b/content/blog/tidymodels-cheatsheets/tidymodels-cheatsheets.png differ
diff --git a/content/resources/cheatsheets/ml-create-models/_index.md b/content/resources/cheatsheets/ml-create-models/_index.md
new file mode 100644
index 000000000..b05815a76
--- /dev/null
+++ b/content/resources/cheatsheets/ml-create-models/_index.md
@@ -0,0 +1,305 @@
+---
+title: Create models with parsnip
+image: page-1.png
+resource_type: cheatsheet
+date: '2026-04-09'
+description: Quick reference guide for creating models with parsnip.
+download_url: ml-create-models.pdf
+software:
+- parsnip
+languages:
+- R
+people:
+- Edgar Ruiz
+thumbnails:
+- page-1.png
+- page-2.png
+---
+
+## Basics
+
+`parsnip` provides a tidy, unified interface to a range of models from other packages. It helps avoid having to remember how to properly call the modeling functions of those external packages.
+
+A `parsnip` specification is made up of 3 main components:
+
+1. The type of model to be used, such as Random Forest (`rand_forest()`) or linear regression (`linear_reg()`)
+
+2. The mode, or how the model will be used. The two most common modes are "regression" and "classification".
+
+3. The computational engine, or the program that will actually run the training. It could be an external R package, such as ranger, or even an engine outside of R, such as Stan or Apache Spark.
+
+```r
+library(tidymodels)
+
+rand_forest(mtry = 10, trees = 2000) |> # Define type of model
+ set_engine("ranger", importance = "impurity") |> # Select an engine
+ set_mode("regression") # Set the mode
+```
+
+- `set_engine(object, engine, ...)` - Specifies which package or system will be used to fit the model, along with any arguments specific to that software.
+
+- `set_args(object, ...)` - Modifies the arguments of a model specification.
+
+- `set_mode(object, mode, ...)` - Changes the model's mode.
+
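+For instance, these helpers can modify an existing specification without rebuilding it. A minimal sketch using only parsnip defaults:
+
+```r
+library(parsnip)
+
+# Start with a random forest specification, then adjust it piece by piece
+spec <- rand_forest(trees = 500)
+
+spec |>
+  set_args(trees = 1000, min_n = 5) |> # Override or add model arguments
+  set_mode("classification")           # Declare how the model will be used
+```
+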
+- `show_engines(x)` - The possible engines for a model can depend on what packages are loaded. Some parsnip extensions add engines to existing models.
+
+```r
+show_engines("linear_reg")
+```
+
+## Legends
+
+### Mode Support Numbers
+
+- **1** - Classification
+- **2** - Regression
+- **3** - Censored Regression
+- **4** - Quantile Regression
+
+### Engine Tags
+
+Engine tags show the engine name and mode support numbers. For example, h2o 1, 2 means engine `h2o` supports classification (1) and regression (2).
+
+## Classification Only
+
+- `logistic_reg(mode = "classification", engine = "glm", penalty, mixture)` - Generalized linear model for binary outcomes. A linear combination of the predictors is used to model the log odds of an event.
+
+ brulee 1 gee 1 glm 1 glmer 1 glmnet 1 h2o 1 keras 1 LiblineaR 1 spark 1 stan 1 stan_glmer 1
+
+- `multinom_reg(mode = "classification", engine = "nnet", penalty, mixture)` - Uses linear predictors to predict multiclass data using the multinomial distribution.
+
+ brulee 1 glmnet 1 h2o 1 keras 1 nnet 1 spark 1
+
+- `naive_Bayes(mode = "classification", smoothness, Laplace, engine = "klaR")` - Uses Bayes' theorem to compute the probability of each class, given the predictor values.
+
+ h2o 1 klaR 1 naivebayes 1
+
+- `null_model(mode = "classification", engine = "parsnip")` - Fits a single mean or largest-class model, useful as a non-informative baseline.
+
+ parsnip 1
+
+- `ordinal_reg(mode = "classification", ordinal_link, odds_link, penalty, mixture, engine = "polr")` - Defines a generalized linear model that predicts an ordinal outcome.
+
+ rpartScore 1 polr 1 vgam 1 vglm 1
+
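+Any of these drops into the same `fit()` workflow. A minimal sketch using `multinom_reg()` with its default nnet engine and the built-in `iris` data:
+
+```r
+library(tidymodels)
+
+# Multiclass classification; the mode is already "classification"
+multinom_spec <- multinom_reg()
+
+multinom_fit <- fit(multinom_spec, Species ~ ., data = iris)
+predict(multinom_fit, head(iris)) # A tibble with a .pred_class column
+```
+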
+## Regression Only
+
+- `linear_reg(mode = "regression", engine = "lm", penalty, mixture)` - Defines a model that can predict numeric values from predictors using a linear function.
+
+ brulee 2 gee 2 glm 2 glmer 2 glmnet 2 gls 2 h2o 2 keras 2 lm 2 lme 2 quantreg 2 spark 2 stan 2 stan_glmer 2
+
+- `poisson_reg(mode = "regression", penalty, mixture, engine = "glm")` - Defines a generalized linear model for count data that follow a Poisson distribution.
+
+ gee 2 glm 2 glmer 2 glmnet 2 h2o 2 hurdle 2 stan 2 stan_glmer 2 zeroinfl 2
+
+## General Use
+
+- `decision_tree(mode, engine = "rpart", cost_complexity, tree_depth, min_n)` - A set of if/then statements creates a tree-based structure.
+
+ partykit 1, 2, 3 rpart 1, 2, 3 spark 1, 2 C5.0 1
+
+- `mars(mode, engine = "earth", num_terms, prod_degree, prune_method)` - Uses artificial features for some predictors. These features resemble hinge functions and the result is a model that is a segmented regression in small dimensions.
+
+ earth 1, 2
+
+- `mlp(mode, engine = "nnet", hidden_units, penalty, dropout, epochs, activation, learn_rate)` - Defines a multilayer perceptron model (a.k.a. a single layer, feed-forward neural network).
+
+ nnet 1, 2 brulee 1, 2 brulee_two_layer 1, 2 keras 1, 2 grnn 1, 2
+
+- `gen_additive_mod(mode, select_features, adjust_deg_free, engine = "mgcv")` - Uses smoothed functions of numeric predictors in a generalized linear model.
+
+ mgcv 1, 2
+
+- `nearest_neighbor(mode, engine = "kknn", neighbors, weight_func, dist_power)` - Uses the K most similar data points from the training set to predict new samples.
+
+ kknn 1, 2
+
+- `pls(mode, predictor_prop, num_comp, engine = "mixOmics")` - Uses latent variables to model the data. Similar to a supervised version of PCA.
+
+ mixOmics 1, 2
+
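+For general-use models the mode must be set explicitly. A minimal sketch with `decision_tree()` and its default rpart engine:
+
+```r
+library(tidymodels)
+
+# The same specification could instead use set_mode("regression")
+tree_spec <- decision_tree(tree_depth = 5) |>
+  set_mode("classification")
+
+fit(tree_spec, Species ~ ., data = iris)
+```
+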
+## Discriminant
+
+- `discrim_flexible(mode = "classification", num_terms, prod_degree, prune_method, engine = "earth")` - Fits a discriminant analysis model that uses nonlinear features created using MARS.
+
+ earth 1
+
+- `discrim_regularized(mode = "classification", frac_common_cov, frac_identity, engine = "klaR")` - Estimates a multivariate distribution for the predictors separately for the data in each class. The model's structure can be LDA, QDA, or a combination. Each class' probability is computed using Bayes' theorem, given the predictor values.
+
+ klaR 1
+
+Estimates a multivariate distribution for the predictors separately for the data in each class using a method described below. Each class' probability is computed using Bayes' theorem, given the predictor values.
+
+- `discrim_linear(mode = "classification", regularization_method, engine = "MASS", penalty)` - Uses a Gaussian distribution with a common covariance matrix to perform the estimate.
+
+ MASS 1 mda 1 sda 1 sparsediscrim 1
+
+- `discrim_quad(mode = "classification", regularization_method, engine = "MASS")` - Uses a Gaussian distribution with separate covariance matrices to perform the estimate.
+
+ MASS 1 sparsediscrim 1
+
+## Support Vector Machine
+
+**Classification:** Maximizes the width of the margin between classes using a method described below.
+
+**Regression:** Optimizes a robust loss function only affected by very large model residuals and uses an additional method described below.
+
+- `svm_linear(mode, cost, engine = "LiblineaR", margin)` - Classification: A linear class boundary. Regression: Uses a linear fit.
+
+ kernlab 1, 2 LiblineaR 1, 2
+
+- `svm_poly(mode, cost, engine = "kernlab", degree, scale_factor)` - Classification: A polynomial class boundary. Regression: Uses polynomial functions of the predictors.
+
+ kernlab 1, 2
+
+- `svm_rbf(mode, cost, engine = "kernlab", rbf_sigma)` - Classification: A nonlinear class boundary. Regression: Uses nonlinear functions of the predictors.
+
+ kernlab 1, 2
+
+## Feature Rules
+
+- `rule_fit(mode, mtry, trees, min_n, tree_depth, learn_rate, loss_reduction, sample_size, stop_iter, penalty, engine = "xrf")` - Derives simple feature rules from a tree ensemble and uses them as features in a regularized model.
+
+ xrf 1, 2 h2o 1
+
+- `C5_rules(mode = "classification", trees, min_n, engine = "C5.0")` - Derives feature rules from a tree for prediction. A single tree or boosted ensemble can be used.
+
+ C5.0 1
+
+- `cubist_rules(mode = "regression", committees, neighbors, max_rules, engine = "Cubist")` - Derives simple feature rules from a tree ensemble and creates regression models within each rule.
+
+ Cubist 2
+
+## Ensemble
+
+*"E Pluribus Unum"*
+
+- `bag_mars(mode, num_terms, prod_degree, prune_method, engine = "earth")` - Ensemble of generalized linear models that use artificial features for some predictors. These features resemble hinge functions and the result is a model that is a segmented regression in small dimensions.
+
+ earth 1, 2
+
+- `bag_mlp(mode, hidden_units, penalty, epochs, engine = "nnet")` - An ensemble of single layer, feed-forward neural networks.
+
+ nnet 1, 2
+
+- `bag_tree(mode, cost_complexity = 0, tree_depth, min_n = 2, class_cost, engine = "rpart")` - Ensemble of decision trees.
+
+ C5.0 1 rpart 1, 2, 3
+
+- `bart(mode, engine = "dbarts", trees, prior_terminal_node_coef, prior_terminal_node_expo, prior_outcome_range)` - Tree ensemble model that uses Bayesian analysis to assemble the ensemble.
+
+ dbarts 1, 2
+
+- `boost_tree(mode, engine = "xgboost", mtry, trees, min_n, tree_depth, learn_rate, loss_reduction, sample_size, stop_iter)` - Creates a series of decision trees forming an ensemble. Each tree depends on the results of previous trees. All trees in the ensemble are combined to produce a final prediction.
+
+ C5.0 1 catboost 1, 2 h2o 1, 2 lightgbm 1, 2 mboost 3 spark 1, 2 xgboost 1, 2, 4
+
+- `rand_forest(mode, engine = "ranger", mtry, trees, min_n)` - Creates a large number of decision trees, each independent of the others. The final prediction uses all predictions from the individual trees and combines them.
+
+ aorsf 1, 2, 3 grf 1, 2, 4 h2o 1, 2 partykit 1, 2, 3 randomForest 1, 2 ranger 1, 2 spark 1, 2
+
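+A minimal random forest sketch, assuming the ranger package is installed:
+
+```r
+library(tidymodels)
+
+rf_spec <- rand_forest(trees = 500, mtry = 3) |>
+  set_engine("ranger", importance = "impurity") |>
+  set_mode("regression")
+
+rf_fit <- fit(rf_spec, mpg ~ ., data = mtcars)
+```
+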
+## Survival
+
+- `proportional_hazards(mode = "censored regression", engine = "survival", penalty, mixture)` - Defines a model for the hazard function as a multiplicative function of covariates times a baseline hazard.
+
+ glmnet 3 survival 3
+
+- `survival_reg(mode = "censored regression", engine = "survival", dist)` - Defines a parametric survival model.
+
+ flexsurv 3 flexsurvspline 3 survival 3
+
+## Operations
+
+```r
+library(tidymodels)
+
+lm_spec <- linear_reg() |>
+ set_engine("lm")
+
+lm_spec
+```
+
+### Methods
+
+- `fit(object, ...)` - Estimates parameters for a given model from a set of data.
+
+ ```r
+ lm_fit <- fit(lm_spec, mpg ~ ., data = mtcars)
+
+ lm_fit
+ ```
+
+- `predict(object, ...)` - Generates predictions from a fitted model, returned as a tibble.
+
+ ```r
+ predict(lm_fit, mtcars)
+ ```
+
+- `autoplot(object, ...)` - Uses ggplot2 to draw a particular plot for an object of a particular class.
+
+- `update(object, ...)` - Updates and (by default) re-fits a model. It does this by extracting the call stored in the object, updating the call and evaluating that call.
+
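+For parsnip model specifications in particular, `update()` can also modify the main arguments in place, similar to `set_args()`. A minimal sketch:
+
+```r
+library(parsnip)
+
+lasso_spec <- linear_reg(penalty = 1, mixture = 1)
+
+# Replace the penalty, keeping the other arguments
+update(lasso_spec, penalty = 0.01)
+```
+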
+### Tidiers
+
+- `augment(x, ...)` - Augment data with model results.
+
+ ```r
+ augment(lm_fit, mtcars)
+ ```
+
+- `glance(x, ...)` - Construct a single row summary "glance" of a model fit.
+
+ ```r
+ glance(lm_fit)
+ ```
+
+- `tidy(x, ...)` - Turn an object into a tidy tibble.
+
+ ```r
+ tidy(lm_fit)
+ ```
+
+### General
+
+- `repair_call(x, data)` - When the user passes a formula to fit() and the underlying model function uses a formula, the call object produced by fit() may not be usable by other functions. repair_call() fixes the call so that such functions can use it.
+
+- `control_parsnip(verbosity = 1L, catch = FALSE)` - Pass options to the fit.model_spec() function to control its output and computations.
+
+ ```r
+ control_parsnip(verbosity = 2)
+ ```
+
+- `show_engines(x)` - The possible engines for a model can depend on what packages are loaded. Some parsnip extensions add engines to existing models.
+
+ ```r
+ show_engines("linear_reg")
+ ```
+
+- `translate(x, ...)` - Translates a model specification into a code object that is specific to a particular engine (e.g. R package). It translates generic parameters to their counterparts.
+
+ ```r
+ translate(lm_spec)
+ ```
+
+- `multi_predict(object, ...)` - For some models, predictions can be made on sub-models in the model object.
+
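+A sketch of `multi_predict()` with a glmnet fit, which can predict at several penalty values from one fitted model (assumes the glmnet package is installed):
+
+```r
+library(tidymodels)
+
+glmnet_fit <- linear_reg(penalty = 0.1) |>
+  set_engine("glmnet") |>
+  fit(mpg ~ ., data = mtcars)
+
+# One row per observation, with a nested tibble of predictions per penalty
+multi_predict(glmnet_fit, mtcars, penalty = c(0.01, 0.1))
+```
+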
+### Extract
+
+- `extract_spec_parsnip(x, ...)` - Returns a parsnip model specification.
+
+ ```r
+ extract_spec_parsnip(lm_fit)
+ ```
+
+- `extract_fit_engine(x, ...)` - Returns the engine specific fit embedded within a parsnip model fit. For example, when using linear_reg() with the "lm" engine, this returns the underlying lm object.
+
+ ```r
+ extract_fit_engine(lm_fit)
+ ```
+
+- `extract_parameter_dials(x, parameter, ...)` - Returns a single dials parameter object.
+
+- `extract_parameter_set_dials(x, ...)` - Returns a set of dials parameter objects.
+
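+A sketch showing how arguments flagged with `tune()` surface as dials parameter objects:
+
+```r
+library(tidymodels)
+
+tune_spec <- rand_forest(mtry = tune(), trees = tune())
+
+extract_parameter_set_dials(tune_spec)     # The full set: mtry and trees
+extract_parameter_dials(tune_spec, "trees") # A single dials parameter
+```
+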
+- `extract_fit_time(x, summarize = TRUE, ...)` - Returns a tibble with fit times. The fit times correspond to the time for the parsnip engine to fit and do not include other portions of the elapsed time in fit.model_spec().
diff --git a/content/resources/cheatsheets/ml-create-models/ml-create-models.pdf b/content/resources/cheatsheets/ml-create-models/ml-create-models.pdf
new file mode 100644
index 000000000..9452bc08c
Binary files /dev/null and b/content/resources/cheatsheets/ml-create-models/ml-create-models.pdf differ
diff --git a/content/resources/cheatsheets/ml-create-models/page-1.png b/content/resources/cheatsheets/ml-create-models/page-1.png
new file mode 100644
index 000000000..fdd493ebd
Binary files /dev/null and b/content/resources/cheatsheets/ml-create-models/page-1.png differ
diff --git a/content/resources/cheatsheets/ml-create-models/page-2.png b/content/resources/cheatsheets/ml-create-models/page-2.png
new file mode 100644
index 000000000..acc175c57
Binary files /dev/null and b/content/resources/cheatsheets/ml-create-models/page-2.png differ
diff --git a/scripts/import-cheatsheets.py b/scripts/import-cheatsheets.py
index f58fdaa4b..ba4795471 100755
--- a/scripts/import-cheatsheets.py
+++ b/scripts/import-cheatsheets.py
@@ -38,6 +38,7 @@
"gt",
"keras",
"lubridate",
+ "ml-create-models",
"nlp-with-llms",
"package-development",
"plotnine",
@@ -66,6 +67,7 @@
"gt": ["gt"],
"keras": ["keras"],
"lubridate": ["lubridate"],
+ "ml-create-models": ["parsnip"],
"nlp-with-llms": [],
"package-development": ["devtools", "usethis"],
"plotnine": ["plotnine"],
@@ -94,6 +96,7 @@
"gt": ["R"],
"keras": ["R", "Python"],
"lubridate": ["R"],
+ "ml-create-models": ["R"],
"nlp-with-llms": ["Python"],
"package-development": ["R"],
"plotnine": ["Python"],