

Drafting assistant vs. generator - the distinction that matters in sealed work

The engineering profession is adopting AI drafting tools without a shared vocabulary for what they do. The difference between "assistant" and "generator" is not marketing - it determines whether the tool fits within existing professional liability frameworks or creates a new category of risk.

Two words, different liability

Call a tool a "generator" and you imply autonomy - the AI produces output that claims to be complete. Call it an "assistant" and you imply supervision - the AI produces a draft that claims to need review.

This is not a semantic debate. The distinction determines how the tool fits within existing regulatory frameworks for licensed professional practice. A generator that produces a finished sealed report sits in a regulatory grey zone - no licensing body has defined what it means for AI to "author" a professional document. An assistant that produces a draft for review sits squarely within the existing framework - licensed professionals have always reviewed drafts produced by others before signing them.

The professions that adopted AI drafting first - medicine and law - did not stumble into this framing. They chose it deliberately, and the choice preceded regulatory acceptance.

What medicine settled on

When Abridge launched its AI clinical documentation tool, it did not call itself an "AI note generator." It called itself an "ambient AI scribe." The word scribe was deliberate - it implies transcription, not diagnosis. The physician dictates or the conversation happens naturally; the scribe records and structures. The physician reviews and signs.

Ambience Healthcare followed the same pattern across its many clinical specialties. Nuance DAX Copilot, the product of Microsoft's Nuance acquisition, uses "copilot" - another word that implies human-in-the-loop, not autonomy.

The naming was not an afterthought. It was a regulatory strategy. When the American Medical Association published its augmented intelligence governance framework, the policy materials addressed "AI scribes" and "AI assistants" - the vocabulary that the tools themselves had established. The framing shaped the regulatory conversation, not the other way around.

Every one of these tools produces a complete clinical note. The AI output is often indistinguishable from a human-drafted note. But the tools call themselves assistants, and the workflow enforces the framing: the physician must review and sign before the note enters the medical record. The quality of the draft is high. The claim of autonomy is zero.

What law learned

The legal profession's framing lesson came with a cost. In 2023, Mata v. Avianca produced sanctions against attorneys who filed a ChatGPT-generated brief containing fabricated case citations. The attorneys did not claim that AI wrote their brief - they claimed they had not known the citations were fabricated. The court found that they had a professional obligation to verify.

Every major legal AI tool launched after Mata chose its words carefully. Harvey positions itself as an "intelligent legal coworker." Spellbook focuses on "AI-powered contract drafting." CoCounsel is an "AI-powered legal assistant." None of these tools claims to "write" briefs or "generate" legal documents. They assist. They draft. They keep the professional in the loop.

The distinction is not cosmetic. Harvey's workflow requires the attorney to review every output before it leaves the platform. CoCounsel integrates Westlaw-backed citation verification directly into the drafting workflow. The tools are designed so that the professional cannot claim ignorance of the output's content - the review step is structural, not suggested.

The legal profession learned the framing lesson through sanctions. Medicine learned it through careful product design. Engineering has the opportunity to learn it from both.

Why the distinction is structural, not marketing

A tool that calls itself an assistant but ships output without a review step is engaged in marketing. A tool that calls itself an assistant and requires attestation before export is making a structural claim about how the output enters the professional record.

The test is simple: can the output reach a client, a regulator, or a project file without the licensed professional explicitly acknowledging ownership?

If the answer is yes - if the AI-generated report can be exported, emailed, or filed without a review step - the tool is a generator regardless of what it calls itself. The professional may review, but the tool does not require it. The workflow does not record it. If a dispute arises, there is no auditable evidence that the professional exercised judgment over the output.

If the answer is no - if the tool blocks export until the professional affirms that they have reviewed the content and are taking ownership - the tool is an assistant in the structural sense. The professional's judgment is recorded in the workflow, not assumed.

This is where the UX and the framing become inseparable. Fermito's attestation modal is not a feature - it is the mechanism that makes the word "assistant" true. The modal records who reviewed, when they reviewed, and what they acknowledged. The DOCX does not download until that record exists. If Professional Engineers Ontario (PEO) or a tribunal ever asks how the firm assures quality over AI-drafted reports, the attestation log is the answer.
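The structural test above reduces to a small amount of logic: export is blocked until an attestation record exists, and the record itself is the audit trail. A minimal sketch, in Python - the class and field names (`AttestationRecord`, `ExportGate`) are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AttestationRecord:
    reviewer: str          # licensed professional taking ownership
    reviewed_at: datetime  # when the review was affirmed
    acknowledgement: str   # what the reviewer attested to

class ExportGate:
    """Blocks document export until an attestation record exists."""

    def __init__(self) -> None:
        self._log: list[AttestationRecord] = []

    def attest(self, reviewer: str, acknowledgement: str) -> None:
        # The review step is recorded, not assumed.
        self._log.append(AttestationRecord(
            reviewer=reviewer,
            reviewed_at=datetime.now(timezone.utc),
            acknowledgement=acknowledgement,
        ))

    def export(self, document: bytes) -> bytes:
        # Structural, not suggested: no attestation record, no export.
        if not self._log:
            raise PermissionError("export blocked: no attestation on file")
        return document

    def audit_trail(self) -> list[AttestationRecord]:
        # Auditable evidence that professional judgment was exercised.
        return list(self._log)
```

The design choice the sketch makes explicit: the gate does not judge the quality of the review, it only guarantees that a named professional affirmed one at a recorded time. That is precisely the assistant/generator boundary - enforcement plus a record, rather than a checkbox the workflow forgets.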

The engineering profession's window

Engineering is in the early adoption window for AI drafting tools. The major AEC platforms - Procore, Autodesk Build, Bluebeam - have not shipped AI drafting features for sealed documents. The frontier AI models - ChatGPT, Claude, Gemini - can produce passable field review prose for $20/month, but they have no attestation step, no template fidelity, no revision lifecycle, and no regulatory citation awareness.

The profession has a narrow window to establish the right vocabulary before regulators define it for them. If PE principals adopt tools that frame themselves as generators - tools that produce complete reports with no structural review step - regulators will write rules that restrict AI's role in sealed work. If the profession adopts tools that frame themselves as assistants - tools that produce drafts within a supervised workflow - regulators will write rules that accommodate them.

The medical profession demonstrated that the framing the tools establish becomes the framing the regulators adopt. The legal profession demonstrated what happens when the framing is absent.

Three questions to ask any AI drafting vendor

If you are evaluating AI tools for sealed engineering work, ask these three questions. The answers will tell you whether the tool is an assistant or a generator, regardless of what the marketing says.

Does the tool require review before output leaves the platform? If you can export a report without reviewing it, the tool has no attestation boundary. It is a generator. The professional may review, but the tool does not enforce it, and there is no record that it happened.

Does the tool record the review? If the tool requires a review step but does not record who reviewed, when, and what they acknowledged, the attestation is theatre. A defensible review practice needs an auditable trail - the same standard that medical AI tools meet with their EHR integration.

Does the tool claim to replace professional judgment or to assist it? Read the marketing copy, but also read the terms of service. If the tool claims that its output is "ready to seal" or "compliant with OBC," it is making a professional judgment claim that no AI tool is qualified to make. If the tool claims that its output is a draft requiring professional review, it is making an accurate statement about its role in the workflow.

The distinction between assistant and generator is not about the quality of the draft. A generator can produce excellent prose. An assistant can produce mediocre prose. The distinction is about where the professional's responsibility begins in the workflow - and whether the tool enforces that boundary or leaves it to chance.
