This template demonstrates a reproducible analysis flow that feeds multiple deliverables (manuscripts, summaries, presentations) while keeping heavy lifting in one place.
Reusable functions live in analysis/pipeline.R.
The analysis document is cached and frozen so re-renders only recompute when code changes.
Artifacts (tables, figures, workspace snapshots) land in artifacts/ for other .qmd files to consume.
Metadata lists artifact paths so downstream documents can reference them without guessing filenames.
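The division of labor above can be sketched as follows. This is a minimal, illustrative pattern, not the template's actual API: `write_artifact()` and `summarise_mpg()` are hypothetical names standing in for the helpers in analysis/pipeline.R.

```r
# Hypothetical sketch of the pipeline.R pattern: reusable helpers do the
# heavy computation once and write results under artifacts/ for other
# documents to consume. Names here are illustrative only.
write_artifact <- function(object, name, dir = file.path(tempdir(), "artifacts/tables")) {
  dir.create(dir, recursive = TRUE, showWarnings = FALSE)
  path <- file.path(dir, paste0(name, ".rds"))
  saveRDS(object, path)
  path  # return the path so the caller can record it in document metadata
}

summarise_mpg <- function(data) {
  # expensive computation lives here, not in the rendering document
  aggregate(mpg ~ cyl, data = data, FUN = mean)
}

p <- write_artifact(summarise_mpg(mtcars), "mpg-by-cyl")
```

Returning the path from the writer is what lets the metadata stay the single source of truth for filenames.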
```
# A tibble: 6 × 11
  manufacturer model displ  year   cyl trans      drv     cty   hwy fl    class 
  <fct>        <fct> <dbl> <int> <int> <fct>      <fct> <int> <int> <fct> <fct> 
1 audi         a4      1.8  1999     4 auto(l5)   f        18    29 p     compa…
2 audi         a4      1.8  1999     4 manual(m5) f        21    29 p     compa…
3 audi         a4      2    2008     4 manual(m6) f        20    31 p     compa…
4 audi         a4      2    2008     4 auto(av)   f        21    30 p     compa…
5 audi         a4      2.8  1999     6 auto(l5)   f        16    26 p     compa…
6 audi         a4      2.8  1999     6 manual(m5) f        18    26 p     compa…
```
Downstream documents can load("../artifacts/workspaces/sample-analysis.RData") to reuse objects without re-running chunks.
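The save/load round trip can be sketched as below. The path and object name are stand-ins (the real template writes ../artifacts/workspaces/sample-analysis.RData from the analysis document's final chunk):

```r
# Sketch of the workspace-snapshot round trip; fit_summary is illustrative.
ws_path <- file.path(tempdir(), "sample-analysis.RData")  # stand-in for the
# template's ../artifacts/workspaces/sample-analysis.RData

fit_summary <- data.frame(term     = c("(Intercept)", "displ"),
                          estimate = c(35.7, -3.5))
save(fit_summary, file = ws_path)   # in analysis/sample.qmd

rm(fit_summary)                     # simulate a fresh downstream session
load(ws_path)                       # objects reappear under their original names
```

Note that load() restores objects into the calling environment under their saved names, so downstream chunks can use them exactly as the analysis document did.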
Next Deliverable Steps
Render this analysis (quarto render analysis/sample.qmd) to refresh artifacts.
In a manuscript or summary .qmd, source analysis/pipeline.R or load the saved workspace, then pull in artifacts with read_artifact() or language-specific loaders.
Reference the metadata entries in this document (see YAML metadata.rsrc) to keep paths consistent across deliverables.
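One way to keep those paths consistent is to mirror the YAML metadata entries in a single named list that every downstream chunk references by key. The keys and filenames below are hypothetical, for illustration only:

```r
# Illustrative: centralize artifact paths in one list (mirroring the
# metadata.rsrc entries in the YAML header) so downstream documents
# never hard-code filenames. Keys and paths are hypothetical.
artifact_paths <- list(
  model_summary = "artifacts/tables/model-summary.rds",
  mpg_preview   = "artifacts/tables/mpg-preview.rds"
)

# A downstream chunk then looks paths up by key:
path_for <- function(key) artifact_paths[[key]]
```

If a filename changes, only the one list (or the YAML it mirrors) needs updating.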
Tip
Consider a task pipeline (targets, drake, or quarto render --profile) if analyses branch into multiple parameter sets or data refreshes.
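A minimal _targets.R sketch, assuming the {targets} package is installed; the target names, input file, and output path are illustrative, not part of this template:

```r
# _targets.R — pipeline definition sketch (names are hypothetical).
library(targets)

list(
  # each tar_target(name, command) is re-run only when its code or
  # upstream dependencies change
  tar_target(raw_data, read.csv("data/mpg.csv")),
  tar_target(summary_tbl, aggregate(hwy ~ class, data = raw_data, FUN = mean)),
  tar_target(artifact, saveRDS(summary_tbl, "artifacts/tables/summary.rds"))
)
```

Running tar_make() then rebuilds only the out-of-date targets, which scales better than whole-document caching once there are multiple parameter sets or data refreshes.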
References
As you cite literature (e.g., @wickham2019tidyverse for tidyverse workflows or @quarto2024guide for authoring guidance), the bibliography below is populated automatically.