SECURING AI CORRECTNESS AT SCALE

Uniting AI and Formal Methods to Attain
Alignment, Verification, Explainability, Accountability, and Assurance
for Safe, Robust, Trustworthy Solutions

Thank you for visiting our website and reading about our technology and R&D program. We are building active controls for generative AI using formal methods and proof assistants, and we believe these methods and tools will become an important part of AI safety. We hope you will join us for a live R&D experience as we explore and improve the future of platform AI safety!

Formal Foundry AI
Safe AI
OUR MISSION

Correctness at Scale

Safe AI Through Rigorous Methods

Our goal for this program is to bring a hive mind of participants and industry partners to bear on correctness at scale for generative AI models. We have identified a set of techniques and tools that could change the picture of generative AI safety, and we need you, at either of the two R&D participation levels, to join us on this journey and interact with our safety platform tests. In return, we promise knowledge, code, know-how, and our very best work for you.

Learn a little more

Future Vision: We are developing safety layer architectures (SLAs) that plug into generative AI systems. These SLAs employ formal methods and proof assistants to ensure correctness and safety at AI input/output layers. In a sense, we use the power of genAI models to guide AI systems through their critical decision points and to prove correctness along their pathways. In our SLAs, LLMs generate formal proofs in dependently typed languages for verification, and they employ heuristic techniques for iterative proof repair and refinement.
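The loop below is a minimal sketch of that generate-verify-repair cycle, not our production implementation: the `generate` callback (standing in for an LLM call), the attempt bound, the `.agda` file suffix, and the proof-checker command line are all illustrative assumptions.

```python
# Sketch of a generate-verify-repair loop between an LLM and a proof assistant.
# `generate`, MAX_REPAIR_ATTEMPTS, and the checker command are placeholders.
import subprocess
import tempfile
from typing import Callable, List, Optional, Tuple

MAX_REPAIR_ATTEMPTS = 5  # fail closed after this many repair rounds

def check_proof(source: str, checker_cmd: List[str],
                suffix: str = ".agda") -> Tuple[bool, str]:
    """Write the candidate proof to disk and run the proof assistant on it."""
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(checker_cmd + [path], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def prove_with_repair(spec: str,
                      generate: Callable[[str, str], str],
                      checker_cmd: List[str]) -> Optional[str]:
    """Ask the model for a proof of `spec`; feed checker errors back until it verifies."""
    feedback = ""
    for _ in range(MAX_REPAIR_ATTEMPTS):
        candidate = generate(spec, feedback)        # LLM drafts or repairs a proof term
        ok, output = check_proof(candidate, checker_cmd)
        if ok:
            return candidate                        # machine-checked: safe to pass downstream
        feedback = output                           # heuristic repair: retry with checker errors
    return None                                     # no verified proof: the safety layer rejects
```

The design point this sketch illustrates is that the proof checker, not the model, is the final arbiter: a convincing but wrong proof is rejected just like an obviously wrong one.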

Program Overview: Our R&D program is designed to test critical use cases in important verticals with safety-minded genAI industry participants and partners. Our process first abstracts the tool layers into a unified proof of concept (PoC) for the underlying tool set and configuration. Later, we produce a PoC for each of the critical use cases we test with participants at the highest level, the Industry Partner level. As part of the program, these partners can gauge and customize ways of assuring output safety for their specific use case(s), and they also receive white-glove installation. This way, in addition to tech transfer, each highest-level participant receives a PoC demonstrating the efficacy of this technology for enhancing the safety of their specific systems.

R&D Live: Our program is organized around a series of live, interactive virtual events coinciding with code releases and test-tooling updates. A steering committee drawn from the Industry Partner level will meet regularly, and input from sponsors will also be invited. Sponsorship doubles our live community and enhances the value of our R&D.

Our technology may be complex, but our explanations aren't.

Whether you’re a novice or an expert, we break down our approach in a way that makes sense for you. No matter your role, you’ll find an explanation of our technology that fits your level of experience.

Explanation for everyone

Listen (3:54) or Read

You will learn:

  1. The distinction between heuristics and provably correct solutions in AI.
  2. How our startup merges AI with proof assistants for reliable, efficient outcomes.
  3. The future potential of AI and proof assistant collaborations in various domains.

Explanation for developers

Listen (7:05) or Read

You will learn:

  1. The role of type systems in language safety and the balance between safety and flexibility.
  2. The potential of dependently typed languages for enhancing AI safety, and their challenges (see the brief sketch after this list).
  3. How AI advancements can transform proof assistants and contribute to safer AI systems.
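As a taste of item 2, here is a minimal Lean 4 sketch (our own illustration for this page, not code from the developer explanation) of the kind of guarantee a dependently typed language can express: a lookup that cannot go out of bounds because the index type carries a proof of validity. `Vec`, `Vec.get`, and `lenEq` are names we made up for the example; `Fin` and `List.get` come from Lean's standard library.

```lean
-- A length-indexed list: the type tracks how many elements it holds.
structure Vec (α : Type) (n : Nat) where
  data  : List α
  lenEq : data.length = n

-- `Fin n` is a natural number bundled with a proof that it is < n, so this
-- lookup is total: the type checker rejects any call that could go out of bounds.
def Vec.get {α : Type} {n : Nat} (v : Vec α n) (i : Fin n) : α :=
  v.data.get ⟨i.val, by rw [v.lenEq]; exact i.isLt⟩
```

The trade-off hinted at in item 2 is visible even here: the guarantee is absolute, but the programmer (or, in our architecture, the LLM) must supply the proof obligations the type demands.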

Explanation for experts

Listen (3:42) or Read

You will learn:

  1. How we efficiently progress by leveraging existing tools without reinventing the wheel.
  2. Our approach for integrating proof assistants with AI.
  3. How our demonstrations cohesively fit together, potentially forming a functioning system with further refinement.