

What 527 sealed field review reports reveal about Ontario fieldwork

We read 527 sealed field review reports from two Ontario structural and building-science practices and classified every observation. Concrete findings about how sealed fieldwork actually gets written, what photo evidence looks like across the corpus, and where the revision chains are longest.

What this is, and why it matters

A field review report sealed by a Professional Engineer is a legal document. It records what an engineer saw on site, how it compared to the approved drawings and the Ontario Building Code, and what the contractor is expected to do next. When a sealed report is later read by a code official, a condo board, a tribunal, or another engineer reviewing the same work, the words in that report carry the force of the engineer's professional opinion.

Writing these reports is not a fringe activity. The two Ontario structural and building-science practices whose corpus we analysed produced more than 550 sealed reports between them, across the fourteen building-science and structural categories that dominate Ontario restoration practice - from balcony repairs and garage decks to curtain wall, roofing, and hydro vault rehabilitation.

What this article reports is not opinion. Every number below traces back to the classified corpus. The article exists because sealed fieldwork is undermetered: the industry produces tens of thousands of these documents a year in Ontario alone, but almost no structured data has ever been published about how they are written, what they contain, or where the drafting patterns break down.

We are publishing this because Fermito is building a drafting tool for sealed engineering work, and we want our own product decisions, and the category's broader conversation, to start from data instead of anecdote.

The corpus

The corpus contains 527 sealed field review reports. 523 of them contain at least one numbered observation; the remaining 4 are schedules, interim progress memos, or template files that were included in the source folders but do not follow the observation-report format. Across those 523 reports, there are 4,569 classified observations in total.

The reports come from two Ontario structural and building-science practices. The firm names are withheld. What matters for this analysis is the combined corpus: a broad cross-section of Ontario restoration fieldwork written by licensed engineers for real projects, spanning multiple years and the fourteen categories listed above. No single firm's style dominates; the observations hold up as an industry sample.

What sealed fieldwork actually documents

Every observation in the corpus was classified by a large language model into one of 26 category labels covering concrete and structure, waterproofing and roofing, cladding and openings, fire and safety, and process work. We checked the model's output on a sample for accuracy. The labels are specific enough to be useful and general enough to hold up across different firms' drafting voices.

The top ten categories, and what share of the corpus each one accounts for, are:

  1. Progress observation - 869 observations (19.0%)
  2. Windows and doors - 474 observations (10.4%)
  3. Waterproofing membrane installation - 468 observations (10.2%)
  4. Balcony guardrail - 416 observations (9.1%)
  5. Concrete placement - 380 observations (8.3%)
  6. Roofing assembly - 189 observations (4.1%)
  7. Drainage and water management - 183 observations (4.0%)
  8. Sealant degradation - 164 observations (3.6%)
  9. Painting and coating - 151 observations (3.3%)
  10. Concrete deterioration - 151 observations (3.3%)

The top five categories account for 57% of all observations; the top ten account for 75%. Restoration fieldwork in Ontario is dominated by a relatively narrow set of recurring conditions. Concrete deterioration, waterproofing failures, and the envelope repairs that follow from them make up the bulk of what engineers are writing about week after week.

Pillar-group rollup

Rolling the 26 labels up into five engineering pillars makes the pattern clearer. The concrete-and-structure and envelope-and-water pillars together account for 2,169 observations - nearly half the corpus - and that is not a coincidence. Ontario restoration work is predominantly water management and the consequences of water management failure. When water gets into a reinforced concrete assembly, the resulting corrosion, spalling, and membrane reinstatement work shows up in field review reports for years afterward.
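As a concrete illustration, the rollup is just a label-to-pillar lookup followed by a tally. The mapping below is a minimal sketch - only a few of the 26 labels are shown, and the pillar assignments are assumptions inferred from the label names quoted in this article:

```ts
// Sketch of the 26-label -> 5-pillar rollup. Only four assignments are
// shown; the full mapping is an assumption based on labels quoted above.
const PILLAR: Record<string, string> = {
  "Concrete placement": "concrete-and-structure",
  "Concrete deterioration": "concrete-and-structure",
  "Waterproofing membrane installation": "envelope-and-water",
  "Sealant degradation": "envelope-and-water",
};

function pillarCounts(labels: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const label of labels) {
    const pillar = PILLAR[label] ?? "other";
    counts.set(pillar, (counts.get(pillar) ?? 0) + 1);
  }
  return counts;
}
```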

Report length and observation density

Reports in the corpus are shorter than many engineers expect. The median report is 553 words - shorter than the article you are reading - and the middle 50% of reports fall between 440 and 685 words. The longest 10% run past 872 words and tend to be final-review summaries covering multiple phases. The shortest 10% are progress notes of 371 words or fewer, usually produced during the middle of a multi-month repair when the engineer is visiting weekly.

Observation density matches this pattern. The median report contains 7 observations, the 75th-percentile report contains 10, and the 90th-percentile report contains 16. Five to ten substantive observations per visit is the central tendency; the corpus rarely shows the kind of 30-item walkthrough list that some firm templates encourage.

This matters for drafting tooling. A generator that outputs 20 observations by default will feel wrong to most engineers. A generator that supports five to ten detailed observations per report - with the option to expand on the rare deeper review - matches what the corpus shows.
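For reference, every length and density figure in this section reduces to a percentile computation over per-report counts. A minimal nearest-rank helper, assuming the analysis used something equivalent (the exact interpolation method is not stated):

```ts
// Nearest-rank percentile over per-report values such as word counts or
// observation counts. The analysis's exact method is an assumption.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(Math.max(rank, 1), sorted.length) - 1];
}

// e.g. percentile(wordCounts, 50) -> 553, percentile(obsCounts, 90) -> 16
```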

Photo evidence is near-universal, and photo density is high

90.3% of reports cite at least one photograph. The median report contains 5 photo references; the 75th-percentile report contains 7; the 90th-percentile report contains 9 or more.

51.6% of observations directly reference a specific photograph by number, using language like "Refer to Photos #3 and #4" or "(Photo 1)". Photo evidence is not optional garnish in Ontario sealed fieldwork; it is woven into the observation itself. An observation without a photo reference is the exception, not the rule.

This has an uncomfortable implication for any drafting workflow. If the engineer cannot capture, caption, and attach photos to observations in the same session that produces the report text, the report is harder to write and the photo-text linkage weakens. The corpus is effectively proof that site-visit-to-sealed-draft workflows need photo-first tooling, not text-first tooling.
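To make the measurement concrete: detecting an observation-level photo reference is essentially a pattern match on the phrasings quoted above. The pattern below is an illustrative assumption, not the one the analysis actually used:

```ts
// Illustrative pattern for the photo-reference phrasings quoted above,
// e.g. "Refer to Photos #3 and #4" or "(Photo 1)". The production
// patterns are an assumption and may have been broader.
const PHOTO_REF = /\bphotos?\s*#?\d+/i;

function citesPhoto(observation: string): boolean {
  return PHOTO_REF.test(observation);
}

// Share of observations that reference a specific photo by number
// (51.6% in the corpus).
const photoCitationRate = (obs: string[]): number =>
  obs.filter(citesPhoto).length / Math.max(obs.length, 1);
```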

Regulatory citations are sparse at the observation level

2.6% of observations in the corpus cite a specific code section, CSA standard number, PEO regulation, or named drawing reference. The remaining 97.4% of observations carry the engineer's professional opinion about conformance with the drawings and specifications without anchoring that opinion to a named clause inside the observation text itself.

This is not a gap in the engineering; it is a pattern in the drafting. Individual observations in the corpus are short. The regulatory context typically lives in the report's opening boilerplate, the referenced specifications, and the shared project drawings. What the corpus does not show is engineers repeating the citation inside every single observation, the way a compliance-checklist template would.

Within the observations that do name a regulatory reference, the citations concentrate on a handful of families: CSA material standards, OBC clauses, PEO regulation references, and named drawings.

Interpretation. 2.6% is not a quality judgement. It is a structural fact about Ontario sealed fieldwork that matters for AI-assisted drafting. A tool that tries to force a code citation into every observation will produce reports that look wrong to experienced engineers, because they do not match the corpus baseline. A tool that surfaces the right citation only when the observation type calls for it - CSA material standards for concrete placement, OBC Part 9 clauses for low-rise residential envelope, PEO Regulation 941 for sealing practice - is closer to what the corpus shows.
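A sketch of what observation-level citation detection can look like, with assumed patterns for the families named above (the analysis's real patterns are not published here):

```ts
// Assumed citation patterns, one per family named above: OBC clauses
// ("OBC 9.27.3"), CSA standards ("CSA A23.1"), PEO Regulation 941, and
// named drawing references ("Drawing S-101"). Illustrative only.
const CITATION_PATTERNS: RegExp[] = [
  /\bOBC\b\s*(?:part\s*\d+|[\d.]+)?/i, // Ontario Building Code clause
  /\bCSA\s+[A-Z]\d+(?:\.\d+)*/i,       // CSA standard number
  /\bregulation\s+941\b/i,             // PEO sealing regulation
  /\bdrawing\s+[A-Z]{1,3}-?\d+/i,      // named drawing reference
];

const citesRegulation = (obs: string): boolean =>
  CITATION_PATTERNS.some((pattern) => pattern.test(obs));
```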

Revisions and multi-visit projects

1.3% of reports carry an explicit revision indicator in the filename or document header - "Rev 1", "Revision 2", "Re-issued", or an R-number suffix. This is a floor on the true revision rate because not every revised report is tagged that way; firms often replace the prior document without marking it as a revision.

A more reliable signal of multi-visit drafting is the project chain. Grouping reports by their project-root filename, 10.2% of projects in the corpus produced more than one sealed report. Most projects show up in the corpus once, usually because they were one-visit reviews or because only the final report was archived. The longest chain in the corpus reaches 52 reports; the 90th-percentile chain length is 1.
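A minimal sketch of both signals - the explicit revision marker and the project-chain grouping - under the assumption that revisions are flagged in the filename the way the examples above suggest:

```ts
// Revision markers of the kinds quoted above: "Rev 1", "Revision 2",
// "Re-issued", or an R-number suffix. The stem convention (everything
// before the marker) is an assumption about how the filenames look.
const REVISION_MARKER = /\b(?:rev(?:ision)?\s*\d+|re-?issued|R\d+)\b/i;

function projectRoot(filename: string): string {
  return filename
    .toLowerCase()
    .replace(/\.(docx|pdf)$/, "")
    .replace(REVISION_MARKER, "")
    .trim();
}

// Group reports into chains by project-root stem; a chain length > 1
// marks a multi-visit project.
function chainLengths(filenames: string[]): Map<string, number> {
  const chains = new Map<string, number>();
  for (const f of filenames) {
    const root = projectRoot(f);
    chains.set(root, (chains.get(root) ?? 0) + 1);
  }
  return chains;
}
```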

Among categories with at least 10 reports, a handful are markedly more revision-heavy than the rest.

The practical implication is that sealed drafting is not a one-shot activity. A firm producing 50 field review reports a month is re-issuing 10 to 15 of them as construction progresses. Drafting tooling that treats every report as independent misses the single biggest source of productivity leak in the workflow: the time spent re-reading, re-referencing, and reconstructing context from the prior report when the next one is drafted.

Observation-to-recommendation ratio

A complete field review finding has three parts: what was observed, how it compares to the standard, and what should happen next. The corpus tells us the first two are nearly always present. The third is not.

34.5% of classified observations in the corpus contain an actionable recommendation. The remaining 65.5% report a condition without explicitly telling the contractor what to do about it. This is not a failure of the engineers; many observations are pure status notes ("progress is on schedule") where no recommendation is appropriate.

The observations that do carry a recommendation are unevenly distributed across categories. Some observation types almost always close the loop with an explicit next step for the contractor; others - pure status and progress notes in particular - tend to report a condition and stop.

What the data does not say

It is worth being specific about what this analysis cannot tell you. It covers two firms, so it cannot establish how representative their drafting habits are of the wider industry; it is limited to Ontario practice and the fourteen restoration categories the corpus spans; and it measures what the reports say, not whether the underlying engineering judgements were correct.

How Fermito uses these findings

A short note, since this is not a product pitch. Fermito is building a drafting assistant for sealed engineering work, and the corpus findings above directly shape its defaults: five to ten observations per report rather than twenty, photo-first capture rather than text-first, citations surfaced only where the observation type calls for them, and revision-aware drafting that carries context forward from the prior report.

Methodology

The corpus was loaded in full from two on-disk source directories. Every .docx file was extracted via the mammoth library; every .pdf file via pdf-parse. Where the same report existed in both formats, the .docx version was preferred. Duplicates across formats were deduplicated by normalized filename stem.
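A condensed sketch of that loading step, using the two libraries named above; the directory layout and stem normalization are assumptions:

```ts
import * as fs from "node:fs/promises";
import * as path from "node:path";
import mammoth from "mammoth";
import pdfParse from "pdf-parse";

// Sketch of the extraction pass: .docx via mammoth, .pdf via pdf-parse,
// with the .docx version preferred when both formats share a normalized
// filename stem. Details beyond that are assumptions.
async function loadCorpus(dir: string): Promise<Map<string, string>> {
  const texts = new Map<string, string>(); // stem -> extracted text
  for (const file of await fs.readdir(dir)) {
    const ext = path.extname(file).toLowerCase();
    const stem = path.basename(file, ext).toLowerCase().trim();
    const full = path.join(dir, file);
    if (ext === ".docx") {
      const { value } = await mammoth.extractRawText({ path: full });
      texts.set(stem, value); // .docx always wins over a .pdf twin
    } else if (ext === ".pdf" && !texts.has(stem)) {
      texts.set(stem, (await pdfParse(await fs.readFile(full))).text);
    }
  }
  return texts;
}
```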

Firm names, engineer names, contractor names, project numbers, addresses, and municipal specifics were stripped from every observation before any downstream analysis. The anonymization ran on the raw extracted text using a combination of known-identifier regex and case-insensitive stem matching; the resulting text contains generic tokens like [FIRM], [PERSON], [PROJECT] in place of identifying content. A residual-leakage scan across the entire anonymized corpus returned zero hits against the sentinel list.
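In sketch form, with placeholder identifiers standing in for the real known-identifier lists:

```ts
// Minimal sketch of the anonymization pass: known identifiers are replaced
// with generic tokens, then a sentinel scan confirms nothing leaked. The
// identifier patterns here are placeholders, not the real lists.
const REPLACEMENTS: Array<[RegExp, string]> = [
  [/\bAcme Engineering\b/gi, "[FIRM]"],   // placeholder firm name
  [/\bJane Smith\b/gi, "[PERSON]"],       // placeholder engineer name
  [/\b2023-\d{4}\b/g, "[PROJECT]"],       // placeholder project number
];

function anonymize(text: string): string {
  return REPLACEMENTS.reduce((t, [pat, token]) => t.replace(pat, token), text);
}

// Residual-leakage scan: every sentinel must return zero hits.
function leaks(corpus: string[], sentinels: RegExp[]): boolean {
  return corpus.some((doc) => sentinels.some((s) => s.test(doc)));
}
```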

Observation classification used Anthropic's Haiku 4.5 model. Each observation was passed with the full anonymized context and assigned a primary category, an optional secondary category, a recommendation type, and a boolean recommendation flag. Classifications were cached on disk by content hash so the analysis is deterministic and re-running after a successful first pass is near-free.
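A minimal sketch of the content-hash caching pattern described above. The prompt, model id, and cache layout are assumptions; the real pipeline also assigned secondary categories and recommendation flags:

```ts
import { createHash } from "node:crypto";
import * as fs from "node:fs/promises";
import Anthropic from "@anthropic-ai/sdk";

// Content-addressed cache around the classification call: identical
// observation text always maps to the same cache key, so re-runs skip
// the API entirely and the pass stays deterministic.
const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the env

async function classify(observation: string, cacheDir: string): Promise<string> {
  const key = createHash("sha256").update(observation).digest("hex");
  const cachePath = `${cacheDir}/${key}.json`;
  try {
    return await fs.readFile(cachePath, "utf8"); // cache hit: no API call
  } catch {
    // cache miss: fall through to the model
  }
  const msg = await client.messages.create({
    model: "claude-haiku-4-5", // assumed model id
    max_tokens: 256,
    messages: [
      { role: "user", content: `Classify this field observation:\n${observation}` },
    ],
  });
  const label = msg.content[0].type === "text" ? msg.content[0].text : "";
  await fs.writeFile(cachePath, label);
  return label;
}
```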

Statistics were computed deterministically from the classified output. No claim in this article required the model to perform the aggregation step; the model's job was category assignment, not statistical reasoning.


Every number in this article traces back to a field in a machine-generated stats file. If you run sealed work in Ontario and want the full categorical breakdown for your own team's training, contact Fermito.
