AnnoGram treats annotation as a core part of visualization specification, not a separate afterthought. We introduce a declarative annotation grammar that makes annotations easier to define, reuse, and adapt across charts and datasets.

Problem: Most visualization tools treat annotations as add-ons, so authors must manually construct and position them. This makes annotations harder to write, maintain, and transfer when the chart, layout, or data changes.
Approach: We extend the visualization grammar with an annotations primitive built around a target-and-effect model. This lets authors specify what is being annotated, which annotation elements are applied, and how they are positioned, while separating annotation intent from low-level graphical implementation.
Result: In the paper's comparison, our proof-of-concept Vega-Lite extension required the fewest annotation-specific lines of code among the evaluated programmable tools and received Easy/Low/Easy ratings for intuitiveness, error-proneness, and portability.
Implication: Our results show that making annotations a first-class grammatical construct can reduce authoring effort and support annotation workflows that are more portable, data-aware, semantically integrated, and maintainable as visualizations evolve.

How it works:

annotation-spec.json
Example: annotating a COVID-19 peak
{
  "$schema": "vega-lite-annotation",
  "mark": "line", "encoding": { /* … */ },
  "annotations": [{
    "target": { "type": "data-space", "x": "2021-01-11" },
    "enclosure": { "shape": "rect", "style": { "fill": "rgba(240,165,0,.15)" } },
    "text": { "text": "Jan. 2021 peak", "position": "upperMiddle" }
  }]
}

Contributions

Grammar extension

A declarative extension to the Grammar of Graphics that introduces a top-level annotations primitive with explicit support for targets, annotation types, and placement.

Proof-of-concept implementation

A Vega-Lite extension that parses annotation specifications, resolves targets against chart structure and scales, and computes placement for supported annotation types.

Comparative evaluation

A heuristic comparison across eight tools using annotation-specific LOC and Cognitive Dimensions ratings for bar, line, and scatterplot examples.

The AnnoGram Grammar

A top-level annotations primitive sits alongside scales and geometries. Each annotation specifies what to annotate, which effect to apply, and where to place it.

Non-terminals appear in ⟨angle brackets⟩; "_opt" marks optional fields. Alternatives that could not be recovered are shown as "…".

Root ::= annotations[]
⟨Target⟩ := ⟨DataPoint⟩ | ⟨Axis⟩ | ⟨ChartPart⟩ | … | None
text_opt := ⟨Text⟩
enclosure_opt := ⟨Enclosure⟩
connector_opt := ⟨Connector⟩
indicator_opt := ⟨Indicator⟩
Annotation Targets
⟨DataPoint⟩ := Expression | index[]
⟨ChartPart⟩ := title | legend | subtitle | …
⟨Axis⟩ := ( axis := x | y,
  parts := label | tick | grid,
  range_opt := [] | Expression )
Annotation Types
⟨X⟩ := ( id, ⟨Target⟩, X )
⟨Text⟩ := ( string, ⟨Position⟩ | None )
⟨Enclosure⟩ := ( Rect | Ellipse | SVGPath | …, … )
⟨Connector⟩ := ( Markers, SVGPath, linear | catmull-rom | … )
⟨Indicator⟩ := ( line | area | arrow | …, Expression )
Placement
⟨Position⟩ := ⟨Anchor1D⟩ | ⟨Anchor2D⟩ | ⟨Fixed⟩
⟨Anchor1D⟩ := auto | start | mid | end
⟨Anchor2D⟩ := auto | upLeft | midRight | …
⟨Fixed⟩ := { type := data | pixel,
  x := ⟨Value⟩, y := ⟨Value⟩ }
⟨Value⟩ := string | number

Reading the grammar

The left-hand side of each rule defines a production; the right-hand side composes its alternatives and fields.

The problem today

GoG tools can represent annotations, but often through verbose and indirect encodings. In Vega-Lite, a labeled connector typically requires separate data sources, explicit coordinates, and multiple layered marks.
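For illustration, a hand-built labeled annotation in plain Vega-Lite might look like the sketch below (the data values and field names are hypothetical, but the structure is representative): the annotation's coordinates are duplicated in inline data sources, and the rule and label are separate layered marks.

```json
{
  "layer": [
    { "mark": "line",
      "encoding": { "x": {"field": "date", "type": "temporal"},
                    "y": {"field": "cases", "type": "quantitative"} } },
    { "data": { "values": [{"date": "2021-01-11", "cases": 250000}] },
      "mark": { "type": "rule", "strokeDash": [4, 2] },
      "encoding": { "x": {"field": "date", "type": "temporal"} } },
    { "data": { "values": [{"date": "2021-01-11", "cases": 250000,
                            "label": "Jan. 2021 peak"}] },
      "mark": { "type": "text", "dy": -10 },
      "encoding": { "x": {"field": "date", "type": "temporal"},
                    "y": {"field": "cases", "type": "quantitative"},
                    "text": {"field": "label"} } }
  ]
}
```

If the underlying data or chart type changes, every hard-coded coordinate above must be updated by hand, which is the maintenance burden the annotations primitive is meant to remove.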

Target-and-effect model

Each annotation specifies what to annotate and which effect to use. This separates target selection from annotation effects and placement.

Reference and Composite

Reference links annotations via id — e.g. a connector to an enclosure. Composite groups repeated structures like seasonal time spans.
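As a sketch of how this might read in a spec (the exact field names beyond id are illustrative, not taken from the paper), a connector can point at an enclosure through its id:

```json
"annotations": [
  { "id": "peak-box",
    "target": { "type": "data-space", "x": "2021-01-11" },
    "enclosure": { "shape": "rect" } },
  { "target": { "ref": "peak-box" },
    "connector": { "curve": "linear" },
    "text": { "text": "Holiday surge" } }
]
```

Because the second annotation resolves its referent by id rather than by coordinates, it follows the enclosure wherever placement puts it.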

Automatic placement

The Position Resolver searches for available space using a backtracking procedure to reduce overlap. Explicit positioning and offsets are also supported.

One Annotation, Three Decisions

Annotation authoring is organized around three related decisions: choosing a target, choosing annotation effects, and specifying placement.

Target
Identify the chart element, axis region, data subset, or standalone note anchor.
Effect
Choose the annotation expression: text, enclosure, connector, indicator, or a combination.
Placement
Accept automatic placement or override it with anchoring, data-space, pixel-space, or offsets.
Target
Choose the referent

Identifies what to annotate — coupled to data semantics, not pixels.

  • DataPoint — select by index or expression; updates with data.
  • Axis — annotate values, ranges, ticks, or gridlines.
  • ChartPart / None — legends, titles, or standalone notes.

Semantic targets are the foundation of portability.
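A rough sketch of non-data targets, with field names assumed for illustration: an axis-range target and a chart-part target, neither of which mentions pixel coordinates.

```json
"annotations": [
  { "target": { "type": "axis", "axis": "x",
                "range": ["2020-12-01", "2021-02-01"] },
    "enclosure": { "shape": "rect" } },
  { "target": { "type": "chart-part", "part": "title" },
    "text": { "text": "Data through March 2021" } }
]
```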

Effect
Choose the expression

The visual annotation type attached to the target.

  • Text — explanation, interpretation, or narrative.
  • Enclosure / Connector — group elements or link to referents.
  • Indicator — lines, areas, or arrows for values and trends.

Multiple effects can share one target.
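For example, one data-expression target can carry a text, an enclosure, and a connector at once (a sketch; the data-expr target type mirrors the portability example later on this page):

```json
"annotations": [{
  "target": { "type": "data-expr", "expr": "datum.cases === max(cases)" },
  "text": { "text": "Jan. 2021 peak" },
  "enclosure": { "shape": "ellipse" },
  "connector": { "curve": "catmull-rom" }
}]
```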

Placement
Automatic, with full override

Annotations receive default placement from context. Override when needed.

  • Auto — finds unoccupied space via backtracking.
  • Anchoring — 1D/2D anchors align to target edges or centers.
  • Fixed — data-space, pixel-space, or dx/dy offsets.

Separation lets the same annotation adapt to different layouts.
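A sketch of the override spectrum, following the grammar's Fixed production (field names are illustrative): the first annotation accepts automatic placement, the second pins its text at a fixed pixel position.

```json
"annotations": [
  { "target": { "type": "data-space", "x": "2021-01-11" },
    "text": { "text": "Peak", "position": "auto" } },
  { "target": { "type": "data-space", "x": "2021-01-11" },
    "text": { "text": "Pixel-anchored note",
              "position": { "type": "pixel", "x": 40, "y": 20 } } }
]
```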

Compiler Pipeline

Five stages extend Vega-Lite's compilation while preserving Vega compatibility.

Parser
Validate input and extract annotation blocks
Position Resolver
Compute positions and minimize occlusion
Assembler
Link related marks and resolve references
Transpiler
Convert annotations to Vega mark encodings
Post-Adder
Inject complex renders into the scene graph

Annotation Portability

Annotations stay valid as the data or chart type changes. The same target and effect remain in the spec while the runtime resolves coordinates and placement.

Fixed in the spec

Semantic target plus effect intent stay authored once inside the annotation block.

Resolved at runtime

Current marks and coordinates are recomputed from active encodings, scales, and scene-graph state.

Placed per layout

Automatic placement, anchors, and offsets adapt to available space while avoiding overlap.

Three passes: the same annotation spec recompiled with point, bar, or line marks.

portable-annotation.json
Cars dataset — best fuel economy
{
  "data": { "url": "cars.json" },
  "mark": "point", // point | bar | line
  "annotations": [{
    "target": { "type": "data-expr",
                "expr": "datum.mpg === max(mpg)" },
    "text": { "text": "Best fuel economy", "position": "auto" },
    "connector": { "curve": "natural" }
  }]
}

Evaluation


Eight tools compared on annotation-specific LOC and four Cognitive Dimensions: first-class support, intuitiveness, error-proneness, and portability.

Tool                    Bar LOC (total / ann.)   Line LOC (total / ann.)   Scatter LOC (total / ann.)   Intuitive   Error-prone   Portable

High programmability
D3                      151 / 81                 175 / 76                  170 / 104                    Hard        High          Hard
d3-annotate             161 / 81                 174 / 77                  177 / 99                     Med.        Med.          Med.
ggplot2                 120 / 22                 128 / 43                  208 / 101                    Med.        High          Med.
ggplot2-annotate         73 / 20                 117 / 42                  178 / 101                    Med.        High          Med.

Low programmability
HighCharts               82 / 32                 231 / 82                  169 / 79                     Med.        Low           Hard
Vega                    515 / 161                491 / 219                 387 / 186                    Hard        High          Hard
Vega-Lite               259 / 23                 253 / 29                  177 / 34                     Hard        High          Hard
VL Annotation (ours)    101 / 19                  95 / 25                   79 / 31                     Easy        Low           Easy

Visual editors (PowerPoint, Figma, etc.)
Visual Editors          n/a                      n/a                       n/a                          Easy        Low           Hard
Key finding: In this comparison, systems with built-in annotation support generally require less annotation-specific code than lower-level alternatives. The prototype VL Annotation system received favorable ratings for intuitiveness, error-proneness, and portability, while visual editors remained easy to author in but weak in portability because annotations are detached from the underlying data specification.

Cite This Work

@inproceedings{rahman2025annogram,
  title     = {{AnnoGram}: An Annotative Grammar of
               Graphics Extension},
  author    = {Rahman, Md Dilshadur and
               Zaman, Md Rahat-uz and
               McNutt, Andrew and
               Rosen, Paul},
  booktitle = {IEEE Visualization and Visual Analytics
               (VIS)},
  year      = {2025},
  doi       = {10.1109/VIS60296.2025.00053}
}