Certified Intelligence

Introduction

Circular Protocol provides a blockchain-based certification framework that ensures the integrity, traceability, and compliance of artificial intelligence systems across regulated sectors. From initial dataset creation to inference outputs, every element of the AI lifecycle is recorded, cryptographically secured, and made verifiable.

This allows regulators, auditors, and enterprise partners to trust not just the performance of your models, but the provenance and quality of everything that shaped them.

What is Certified AI?

As AI becomes embedded in critical systems such as medical diagnostics, clinical trial design, insurance claims, and pharmaceutical R&D, the stakes have changed. It’s no longer acceptable to use black-box models trained on unverified or synthetic data.

Circular solves this problem by turning every dataset, model, and inference step into a certified, auditable digital asset. This gives enterprises and regulators the tools they need to deploy AI safely, explain decisions clearly, and meet the growing demands of frameworks such as the EU AI Act, the FDA’s SaMD guidance, and ISO/IEC 42001.

Whether you’re a pharmaceutical firm, an insurer, or a hospital deploying clinical AI, certified AI turns trust into a feature.

The Risks of Uncertified AI Training Data

Uncertified data introduces invisible but critical risks into your models:

  1. Hidden biases in the data can skew outputs. For example, if a training dataset underrepresents certain ethnic groups, an AI model may produce inaccurate diagnoses or unfair policy decisions.

  2. Fake or synthetic data may inflate model performance in the lab but fail in real-world environments. Without verifiable origins, you don’t know what’s real, and neither does your regulator.

  3. "Patched” datasets, where small datasets are augmented or duplicated to simulate volume, can mislead teams into trusting unreliable models. These shortcuts leave no trail and are nearly impossible to audit after deployment.

Circular eliminates these risks by certifying data at the moment of ingestion and preserving an unbroken chain of trust through every model iteration.

The Benefits of Certified AI Data

Provenance and Lifecycle Tracking

Every dataset you certify with Circular is hashed, versioned, and tagged with metadata such as its source, license, tags, and collection method. This allows every stakeholder (auditors, partners, downstream users) to know exactly where the data came from and how it has changed over time. No more guessing or combing through spreadsheets during regulatory reviews.
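
For illustration, the sketch below shows how such a provenance record could be assembled in Python: the file is hashed and the metadata fields described above are attached. The function name and record schema are assumptions for this example, not Circular’s official SDK or certificate format.

import hashlib
import json
from datetime import datetime, timezone

def build_dataset_record(path, source, license_id, tags, collection_method, version="1.0.0"):
    # Hash the raw file in chunks and attach the provenance metadata
    # described above; all field names here are illustrative assumptions.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "dataset_hash": digest.hexdigest(),
        "version": version,
        "source": source,
        "license": license_id,
        "tags": tags,
        "collection_method": collection_method,
        "recorded_on": datetime.now(timezone.utc).isoformat(),
    }

# Example call using the same file and tags as the CLI example further down.
record = build_dataset_record(
    "patient_data.csv",
    source="site_eu_munich_01",           # hypothetical data source
    license_id="internal-research-only",  # hypothetical license label
    tags=["phase-3", "EU-AI-act", "GCP"],
    collection_method="EDC export",
)
print(json.dumps(record, indent=2))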

Bias Detection and Audit Readiness

Because datasets are certified at source, Circular makes it possible to audit models after training by tracing back to the original inputs. This allows researchers, compliance teams, or regulators to analyze whether data bias may have led to model drift, discriminatory outcomes, or unsafe decisions, without starting from scratch.
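
As a hedged example, the sketch below shows how an auditor might trace a trained model back to its certified inputs using the provenance endpoint documented under "Querying provenance" later on this page. The base URL, bearer-token authentication, and helper name are placeholders, not confirmed details of the Enterprise APIs.

import requests

BASE_URL = "https://api.example.invalid"  # placeholder gateway URL, not a real endpoint
API_TOKEN = "YOUR_API_TOKEN"              # placeholder credential

def fetch_model_provenance(model_id):
    # Calls the GET /certified-ai/model/{model_id}/provenance route shown
    # under "Querying provenance"; the auth scheme here is an assumption.
    response = requests.get(
        f"{BASE_URL}/certified-ai/model/{model_id}/provenance",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

provenance = fetch_model_provenance("model_873A6D")
print("Trained on dataset:", provenance["dataset_id"])
print("Transformations:", ", ".join(provenance["transformations"]))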

Improved Model Outcomes

Clean data doesn’t just help with compliance; it leads to better AI. Certified datasets reduce the risk of label leakage, misaligned features, and poorly structured training processes. This leads to models that generalize better, require less post-processing correction, and are easier to maintain.

Trusted, Reusable Data Assets

When data is certified, it becomes reusable across teams, departments, and even companies. It can be exported, reused in future models, or licensed to third parties, because the certification process makes it safe to share. This is critical for CROs, research institutes, and pharma companies that need to monetize or reuse data across collaborations.

Core Certification Capabilities

Model Integrity, Guaranteed

Circular allows developers to certify model artifacts at each training step. Every version is cryptographically linked to the dataset it was trained on, the environment it was trained in (e.g. framework, version, GPU type), and the team that ran the training. This creates a model history that is immutable, verifiable, and compliant with audit standards like the FDA’s 21 CFR Part 11 and the EU AI Act’s risk classification framework.

This system supports everything from reproducibility in academic research to version control for regulated medical AI.
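
As an illustration of this linkage, the sketch below assembles a record that ties a PyTorch checkpoint to the dataset certificate it was trained on and to the training environment (framework, Python version, GPU). The record layout and field names are assumptions for this example, not Circular’s certification schema.

import hashlib
import platform

import torch  # the artifact in this sketch is assumed to be a PyTorch checkpoint

def build_model_record(artifact_path, dataset_cert_id, trained_by):
    # Tie the model artifact hash to the dataset certificate and the
    # training environment; the record layout is an illustrative assumption.
    with open(artifact_path, "rb") as f:
        artifact_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "artifact": artifact_path,
        "artifact_hash": artifact_hash,
        "dataset_cert": dataset_cert_id,   # e.g. "cert_87319" from the CLI example
        "trained_by": trained_by,
        "environment": {
            "framework": f"pytorch-{torch.__version__}",
            "python": platform.python_version(),
            "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu",
        },
    }

record = build_model_record("trial_predictor_v2.pt", "cert_87319", "ai_unit_b1")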

Certified Inputs, Auditable Outputs

Training pipelines can ingest certified datasets and produce outputs that are also certified, whether that means model performance logs, inferences, or API decisions. Every transformation step is recorded and hashed (see the sketch after this list), allowing organizations to prove not just what a model did, but how it made its decisions. This is critical for real-world scenarios like:

  • Auditing a diagnostic AI that flagged a false positive

  • Explaining how an AI system denied an insurance claim

  • Validating that a pharmacovigilance model used only approved clinical datasets
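
A minimal sketch of this step-by-step hashing is shown below: each transformation is chained to the previous digest so the final hash commits to the entire pipeline. The chaining scheme and step parameters are illustrative assumptions; the step names are taken from the provenance response shown later on this page.

import hashlib
import json

def hash_step(previous_hash, step_name, params):
    # Chain a transformation to its predecessor by hashing the previous
    # digest together with the step name and its parameters.
    material = json.dumps(
        {"prev": previous_hash, "step": step_name, "params": params},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(material).hexdigest()

# Start the chain from the certified dataset's hash, then record each
# transformation named in the provenance response further down the page.
lineage = ["<dataset_hash_from_certificate>"]  # placeholder for the real digest
for step, params in [
    ("labeling:v2.1", {"schema": "adverse-events"}),   # hypothetical parameters
    ("scaling:z-score", {"columns": "numeric"}),
    ("tokenization:bert-format", {"max_length": 512}),
]:
    lineage.append(hash_step(lineage[-1], step, params))

print(lineage[-1])  # the final digest commits to the whole pipeline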

Resalable and Redeemable AI Datasets

Circular enables organizations to treat certified datasets as monetizable, policy-bound assets. Once certified, a dataset can be:

  • Traded or licensed across borders

  • Embedded with usage limits (e.g. one-time access, expiration after 90 days)

  • Shared with regulators, partners, or external auditors

This allows for the creation of data markets and research consortiums that can rely on shared governance and cryptographic trust instead of complex legal contracts.
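
For illustration, the sketch below shows what a policy-bound sharing configuration for a certified dataset might look like, expressed as a plain Python structure. The field names are assumptions for this example and do not reflect Circular’s actual policy schema; only the certificate ID follows the format used elsewhere on this page.

from datetime import datetime, timedelta, timezone

# Illustrative usage policy attached to a certified dataset; the field
# names are assumptions, not Circular's actual policy schema.
usage_policy = {
    "dataset_cert": "cert_87319",
    "recipients": ["regulator_eu_ema", "partner_cro_alpha"],  # hypothetical parties
    "max_accesses": 1,                                        # one-time access
    "expires_on": (datetime.now(timezone.utc) + timedelta(days=90)).isoformat(),
    "allowed_purposes": ["audit", "model-validation"],
    "resale_allowed": False,
}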

Example Use Cases

  • AI in Clinical Trials: Certify patient-level datasets used for predictive trial models; trace outcomes to source data

  • Insurance Fraud Detection AI: Ensure that the claims used to train models are traceable and regulator-safe

  • Drug Discovery with Generative AI: Certify the datasets (e.g. molecule structures, assay data) used to fine-tune diffusion or transformer models

  • Pharmacy Network Monitoring: Certify dispensing and refill data that supports AI used in adherence tracking

  • Government AI Tenders: Provide compliant audit trails for public sector AI grants and defense-funded models

Developer Integration Examples

Certifying a dataset

circular certify dataset --input patient_data.csv --tags "phase-3, EU-AI-act, GCP" --owner cro_team_alpha

Registering a model artifact

circular certify model --input trial_predictor_v2.pt --dataset cert_87319 --trained_by ai_unit_b1 --framework pytorch

Querying provenance

GET /certified-ai/model/{model_id}/provenance

Response:

{
  "model_id": "model_873A6D",
  "dataset_id": "cert_87319",
  "trained_by": "ai_unit_b1",
  "certified_on": "2025-04-27T10:43:22Z",
  "transformations": [
    "labeling:v2.1",
    "scaling:z-score",
    "tokenization:bert-format"
  ]
}

Regulatory Alignment

Certified AI is fully aligned with evolving global standards:

  • EU AI Act: Traceability, high-risk classification, post-market monitoring

  • FDA SaMD & AI/ML Action Plan: Change control logs, training traceability, performance reproducibility

  • ICH GCP E6(R3): Clinical data integrity in AI-assisted trials

  • ISO/IEC 42001: AI Management Systems

  • HIPAA / GDPR: Data minimization and auditability for personal data used in AI systems
