
Intro

At Hyperscience, we believe that meaningful and responsible AI starts with openness. Our newly released AI Transparency Report provides an in-depth look at how we design, govern, and secure the machine learning models that power our intelligent automation platform. It’s more than a document: it reflects our ongoing commitment to fairness, privacy, accountability, and human oversight.

Why We Published the Transparency Report

As AI becomes central to critical business processes, organizations must demonstrate how these systems work and how risks are managed. Our report highlights:

  • Our Model Ecosystem: A wide array of machine learning and deep learning models, both pre-trained and user-provided, designed for precise tasks like rotation detection, text segmentation, and field location.
  • Human-in-the-Loop Oversight: Human validation is integrated at key points, especially when model confidence is low, ensuring accuracy and accountability.
  • Privacy and Data Handling: Rigorous PII redaction and strict data usage policies mean only de-identified, consented data is used for model training.
  • Security and Risk Management: We apply secure development practices across both our AI and supporting infrastructure. Our platform is backed by SOC 2 Type II, Cyber Essentials Plus, and FedRAMP certifications. We’re also advancing our commitments to AI security and exploring future alignment with industry-recognized frameworks like the OWASP Top 10 for LLMs, MITRE ATLAS, and an AI Bill of Materials (AIBOM) to further enhance transparency.
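The human-in-the-loop pattern described above — routing model output to human review when confidence is low — can be sketched as a simple threshold check. This is a hypothetical illustration under assumed names and an assumed cutoff, not the Hyperscience platform's actual API:

```python
# Hypothetical sketch of confidence-threshold routing to human review.
# Function names, field names, and the threshold are illustrative assumptions,
# not the actual platform implementation.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; real systems tune this per task


def route_extraction(field_name: str, value: str, confidence: float) -> dict:
    """Auto-accept high-confidence model output; flag the rest for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        status = "auto_accepted"
    else:
        status = "needs_human_review"
    return {"field": field_name, "value": value, "status": status}


# Example: a confident extraction passes through; an uncertain one is flagged.
results = [
    route_extraction("invoice_total", "$1,204.50", 0.97),
    route_extraction("vendor_name", "Acme Corp", 0.62),
]
```

The design choice this illustrates is that automation and oversight are complementary: the system handles the confident majority of cases, while ambiguous ones are escalated to a person, preserving accuracy and accountability.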

The report highlights our proactive approach: our models are designed for specific, granular tasks, backed by rigorous human oversight to minimize unintended consequences. We’re not chasing general-purpose AI; we’re delivering AI that supports and enhances human decision-making.

Beyond the Report: Our Broader AI Ethics Work

The Transparency Report is part of a wider effort driven by our AI Ethics Committee. A key output is the revised Hyperscience AI Ethics Code of Conduct, which guides every stage of our AI development and deployment:

  • Transparency & Explainability: We work to make models and outputs understandable and decisions traceable, so users and stakeholders can trust how AI reaches its conclusions.
  • Fairness & Inclusivity: We actively test for and work to mitigate bias, using diverse datasets and inclusive design practices to promote equitable outcomes.
  • Privacy & Data Protection: We prioritize user consent, confidentiality, and compliance with data protection laws, ensuring data is handled responsibly at every stage.
  • Safety & Robustness: We build safeguards into our AI, with ongoing testing and monitoring to ensure systems are resilient, secure, and reliable in real-world use.
  • Accountability & Human Oversight: We ensure that AI complements rather than replaces human judgment by embedding human review and clear accountability into our workflows.
  • Environmental & Social Responsibility: We consider the societal impact and environmental footprint of our AI, striving for positive contributions and sustainable practices.
  • Continuous Improvement: We regularly review our practices, engage external stakeholders, and refine our approaches to meet evolving ethical standards and expectations.

Ethics in Action: Connecting with Our Community

Our ethics work extends beyond policies. Last year, we hosted students from Eastern Middle School at our One World Trade Center office. These future innovators asked insightful questions about AI bias, data sourcing, and human oversight. We discussed:

  • How our AI helps computers “learn to read” by transcribing and locating data fields
  • How anonymized, consented data and human review ensure accuracy
  • The inevitability of bias and the importance of diverse inputs and checks
  • How AI shows up in daily life, from tools like Grammarly to schoolwork, and why fair, inclusive policies matter

It was inspiring to see young minds engage deeply with the ethical dimensions of AI, and a powerful reminder of why transparency matters.

Looking Ahead

Our AI Ethics Committee continues advancing transparency, fairness, and safety through:

  • Expanding human oversight and quality assurance
  • Enhancing data governance and privacy protections
  • Exploring greater AI system transparency, including progress toward an AI Bill of Materials (AIBOM)

By sharing our progress openly, we aim to foster trust not only with our customers but with the broader community impacted by AI.