What Dental Narratives Actually Require

Dental narratives are written documentation attached to insurance claims that explain and justify the medical necessity of specific procedures. They're not optional add-ons for many treatments.

Each dental plan has its own policies about which CDT codes require written narratives, according to the American Association of Endodontists. You can't apply one template across all carriers.

Aetna documentation requirements show exactly how specific these narratives must be:

  • Soft tissue grafts need millimeter measurements for recession, attached gingiva, and keratinized tissue, plus tooth numbers and pre-operative photos

  • Periodontal scaling and root planing must document local anesthetic administration, treatment details, and appointment length

  • Bridge procedures require documentation specifying whether it's initial placement or replacement, extraction dates for the pontic, and identification of other missing teeth in the arch

This level of detail creates challenges for any AI system, which needs access to specific clinical data (including quantified measurements), knowledge of each carrier's requirements for different CDT codes, and the ability to generate objective, measurement-based clinical descriptions.
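
To make that data requirement concrete, here is a minimal sketch in Python of what carrier-specific narrative requirements look like as structured data. D4273 and D4341 are real CDT codes, but the field names and the mapping below are illustrative assumptions, not an actual Aetna specification.

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeRequirement:
    """Fields a carrier requires before a narrative can be submitted.

    Illustrative only: real requirements come from each carrier's
    published documentation policies, not from this sketch.
    """
    cdt_code: str
    description: str
    required_fields: list[str] = field(default_factory=list)

# Hypothetical requirements table keyed by (carrier, CDT code).
REQUIREMENTS = {
    ("aetna", "D4273"): NarrativeRequirement(
        cdt_code="D4273",
        description="Autogenous connective tissue graft",
        required_fields=[
            "recession_mm", "attached_gingiva_mm",
            "keratinized_tissue_mm", "tooth_numbers", "preop_photos",
        ],
    ),
    ("aetna", "D4341"): NarrativeRequirement(
        cdt_code="D4341",
        description="Periodontal scaling and root planing, 4+ teeth",
        required_fields=[
            "local_anesthetic", "treatment_details", "appointment_length",
        ],
    ),
}
```

A general-purpose chatbot has no equivalent of this table for every carrier you bill, which is one reason carrier-specific detail is hard for it to get right.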


The ChatGPT Compliance Problem

Here's what most dental practices don't realize: ChatGPT is not legally compliant for use with patient information.

HIPAA Rules guidance from HHS states that covered entities using cloud service providers to maintain electronic protected health information must enter into Business Associate Agreements. Without a signed BAA, the covered entity violates HIPAA Rules.

OpenAI does not sign BAAs with healthcare providers, according to HIPAA compliance analysis, which makes ChatGPT legally non-compliant for creating patient-related clinical narratives.

Removing patient names doesn't solve this problem, because Protected Health Information includes any data that could identify a patient. Treatment dates, tooth numbers, clinical measurements, and your practice's information can, in combination, identify individuals.

Using non-compliant AI tools with patient data constitutes a HIPAA violation, and the consequences include substantial fines and legal liability. Stripping names from a narrative does not change that analysis.
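
To see why, consider a minimal sketch of the quasi-identifiers that survive a naive name-removal pass. The regex patterns below are deliberately simplified illustrations, not a de-identification tool; HIPAA's Safe Harbor method actually covers 18 identifier categories.

```python
import re

# Simplified patterns for quasi-identifiers that commonly remain after
# patient names are stripped. Illustrative only.
QUASI_IDENTIFIER_PATTERNS = {
    "treatment_date": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "tooth_number": r"\btooth\s*#?\d{1,2}\b",
    "practice_name": r"\b[A-Z][a-z]+ Dental (?:Group|Associates|Care)\b",
}

def remaining_quasi_identifiers(narrative: str) -> dict[str, list[str]]:
    """Return quasi-identifiers still present in a 'de-named' narrative."""
    return {
        label: matches
        for label, pattern in QUASI_IDENTIFIER_PATTERNS.items()
        if (matches := re.findall(pattern, narrative))
    }

# Hypothetical narrative: no patient name, yet the remaining details
# can still identify someone in combination.
narrative = "SRP completed 3/14/2025 on tooth #30 at Lakeside Dental Group."
print(remaining_quasi_identifiers(narrative))
# {'treatment_date': ['3/14/2025'], 'tooth_number': ['tooth #30'],
#  'practice_name': ['Lakeside Dental Group']}
```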


What AI Can and Cannot Do for Narratives

Healthcare has widely adopted AI technology, with federal health data showing that 71% of U.S. non-federal acute-care hospitals reported using predictive AI applications integrated with their EHRs in 2024.

Documentation applications show promise, with research from The Permanente Medical Group finding that generative AI scribes saved physicians an estimated 15,791 hours of documentation time, equivalent to 1,794 eight-hour workdays.

There's a critical gap, though: peer-reviewed research hasn't validated AI specifically for dental narrative generation. A 2025 publication in Nature states that real-world evaluation through deployment studies in dentistry is scarce, with most studies focused on validation rather than operational implementation.

The research identifies this as a fundamental limitation: "AI is emerging as a promising technology in dentistry, but its adoption in clinical practice is limited due to challenges such as the need for large training datasets, validation, data privacy, and deployment of AI-based applications."

Current AI language models cannot reliably make autonomous clinical decisions, cannot guarantee factual accuracy without human verification, do not match clinician-level expertise across all scenarios, and risk producing convincing but medically incorrect information.

Research published in Nature Medicine found that no current large language model consistently reaches the diagnostic accuracy of clinicians across all pathologies. OpenAI's GPT-4 System Card warns about the model's limitation of "producing convincing text that is subtly false."


The Accuracy and Liability Concerns

Medical "hallucinations" are documented patient safety risks. Research in MDPI's healthcare review identifies hallucination mitigation (models generating erroneous information) as a critical challenge that must be resolved before AI can be responsibly integrated into healthcare.

A clinician survey published on medRxiv found that 64.6% of respondents identified false or misleading information as a top concern, over 60% emphasized the importance of human supervision for safe clinical use, and quality concerns centered specifically on accuracy and reliability.

Systematic benchmarking published in NEJM AI found that large language models are poor medical coders, a weakness that directly impacts insurance reimbursement accuracy and quality metrics tracking.

If AI-generated narratives contain errors that harm a patient or result in insurance fraud allegations, you are legally responsible. The AI vendor is not. This liability asymmetry is rarely emphasized in vendor marketing, yet it represents a fundamental legal reality: you cannot delegate professional responsibility to an AI system, and the AI cannot testify in malpractice or compliance proceedings. 


If You Decide to Explore AI: Implementation Requirements

Healthcare quality organizations have established specific requirements for practices implementing AI tools, with The Joint Commission requiring fitness-for-purpose assessment, bias detection and mitigation, local data validation, and continuous monitoring before deployment.

The American Dental Association has published ANSI-accredited standards (ANSI/ADA Standard No. 1110-1:2025 and ADA Technical Report No. 1109:2025) establishing technical requirements for validation datasets and evaluation of dental AI systems.

Before implementing any AI tool for clinical documentation, you must (a sketch of a pre-billing review gate follows this list):

  • Verify the vendor signs comprehensive BAAs with HIPAA Security Rule safeguards

  • Check FDA clearance status if the tool diagnoses or recommends treatment

  • Establish practitioner review protocols for all AI-generated documentation before billing submission

  • Test algorithm performance on your practice's specific patient demographics

  • Validate against your local clinical workflow patterns

  • Document validation methodology and results comprehensively

  • Compare actual performance to published vendor claims using independent datasets
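
As a concrete version of the review-protocol item above, here is a minimal sketch of a pre-billing gate that blocks submission until every carrier-required field is present and a practitioner has signed off. All names and structures in it are hypothetical illustrations, not a product API.

```python
from dataclasses import dataclass

@dataclass
class DraftNarrative:
    """An AI-generated narrative awaiting practitioner sign-off (hypothetical)."""
    cdt_code: str
    carrier: str
    fields: dict[str, str]          # clinical data the narrative asserts
    reviewed_by: str | None = None  # licensed practitioner who verified it

def ready_to_bill(draft: DraftNarrative, required_fields: list[str]) -> list[str]:
    """Return a list of blocking problems; an empty list means OK to submit."""
    problems = [
        f"missing required field: {name}"
        for name in required_fields
        if not draft.fields.get(name)
    ]
    if draft.reviewed_by is None:
        problems.append("no practitioner has reviewed this narrative")
    return problems

# Usage: refuse to submit until the list of problems is empty.
draft = DraftNarrative(
    cdt_code="D4341",
    carrier="aetna",
    fields={"local_anesthetic": "2% lidocaine 1:100k epi", "treatment_details": "..."},
)
print(ready_to_bill(draft, ["local_anesthetic", "treatment_details", "appointment_length"]))
# ['missing required field: appointment_length',
#  'no practitioner has reviewed this narrative']
```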

Implementation requires ongoing governance, not a one-time setup. According to The Joint Commission's framework for responsible AI use in healthcare, continuous monitoring is mandatory, with monthly performance audits appropriate for the first year before transitioning to quarterly reviews.

Research from BMC Oral Health found that dental hygienists allocate 12.15% of professional time to documentation tasks (nearly one hour of an eight-hour workday). An AHIMA systematic review found variable results: some studies showed 19.0% to 92.0% decreases in documentation time, while others showed 13.4% to 50.0% increases.

Even if AI reduces documentation time by 50%, you must factor in validation time for reviewing AI-generated content for clinical accuracy, since peer-reviewed research documents that large language models can produce convincing but medically incorrect information.
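
A back-of-envelope calculation makes the point, using the 12.15% figure above; the 50% reduction and the review overhead are assumed values for illustration only.

```python
# Net daily savings under assumed values, not measured ones.
workday_min = 8 * 60                    # 480-minute workday
doc_min = workday_min * 0.1215          # ~58 min/day on documentation
ai_reduction = 0.50                     # assumed headline AI time savings
review_overhead = 0.30                  # assumed fraction of doc time spent
                                        # verifying AI output for accuracy

saved = doc_min * ai_reduction          # ~29 min/day gross savings
review = doc_min * review_overhead      # ~17 min/day spent checking the AI
net = saved - review                    # ~12 min/day actually recovered

print(f"gross savings: {saved:.0f} min/day, net after review: {net:.0f} min/day")
```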


Making Informed Decisions About AI in Your Practice

AI tools for dental narratives require substantial investment in compliance verification, local validation, ongoing monitoring, and professional oversight. General-purpose tools like ChatGPT lack HIPAA compliance for patient information, and the technology requires comprehensive governance structures and clinical review before deployment.

Implementing AI correctly demands months of assessment, testing, and validation. During that process, your documentation workload continues.

AI narrative tools require your team to verify vendor compliance, test algorithm performance on your patient demographics, establish review protocols, and conduct monthly audits for the first year. You validate the technology, review every output, and remain legally responsible for errors.

Remote billing services hand off the entire documentation and claims process to specialists who already maintain compliance infrastructure, know carrier-specific requirements, and handle narratives daily. Teero's remote billing service combines AI automation for routine tasks with U.S.-based experts who handle exceptions, integrating with your existing practice management software while providing real-time claim visibility.

If you're exploring ways to reduce documentation burden without months of technical validation, get started with Teero's remote billing.
