Simulation of embodied minds is a responsibility, not just a capability
We're building something that matters. Which means we can get it wrong in ways that hurt people.
When you simulate what it's like to be someone, you're shaping how people understand each other. How they build empathy, or fail to. Get it wrong, and you perpetuate harm at scale.
We're scientists who've studied brains and behavior for years. That doesn't make us experts in every culture, every disability, every lived experience we might simulate. We know what we don't know.
So we've built accountability into the foundation. Authentic voices lead authentic stories. Communities review before release. Our deterministic systems can be audited when concerns arise. Not promises for later—commitments now.
Gothic Grandma treats simulation of embodied minds as a responsibility, not just a capability. These commitments guide our development today and will govern how we operate at launch and beyond.
Every Living World representing a culture or community will be led by a Creative Director whose lived experience makes them the authentic voice: not a consultant, but a compensated partner who will receive ongoing royalties. We provide the platform and resources; they provide the vision and cultural truth.
Before release, we will conduct interviews and product testing with diverse stakeholders: academics, policy experts, foundation leaders, community organizations, and everyday people with ties to the stories being told. Review will happen at design and pre-release stages to catch issues early.
Our core simulation systems are deterministic mathematical models, not black-box machine learning. Every behavior can be traced back through the exact computations that produced it. While our models are proprietary, our team maintains full internal auditability to investigate feedback and refine systems with precision.
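The models themselves are proprietary, but the property we rely on is easy to illustrate. Here is a minimal sketch in Python of what "deterministic and traceable" means in practice; the one-variable update, `update_arousal`, and `AuditTrace` are invented for the example and are not MUSE internals. Identical inputs always reproduce identical outputs, and an identical audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class AuditTrace:
    """Records every (tick, input, output) so a reported behavior can be replayed exactly."""
    steps: list = field(default_factory=list)

def update_arousal(arousal: float, stimulus: float, decay: float = 0.9) -> float:
    # One deterministic update: the same inputs always yield the same output.
    return max(0.0, min(1.0, arousal * decay + stimulus))

def run(initial: float, stimuli: list, trace: AuditTrace) -> float:
    state = initial
    for tick, s in enumerate(stimuli):
        new_state = update_arousal(state, s)
        trace.steps.append((tick, state, s, new_state))  # full provenance for audit
        state = new_state
    return state

# Re-running the same inputs reproduces the identical trace; that property is
# what turns investigating a piece of feedback into replay rather than guesswork.
a, b = AuditTrace(), AuditTrace()
assert run(0.2, [0.1, 0.0, 0.3], a) == run(0.2, [0.1, 0.0, 0.3], b)
assert a.steps == b.steps
```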
Our biological simulation is the source of truth. AI plays one role: translating structured simulation data into natural language, the way a thermometer translates heat into a number you can read. Characters don't hallucinate; they report actual simulated states. All AI processing happens locally on the user's device using small (3-7 billion parameter) models. No cloud APIs. No data collection. Complete privacy.
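As a concrete illustration of this division of labor, here is a minimal sketch; the `SimState` fields and the prompt wording are invented for the example, not MUSE's actual schema. The deterministic simulation produces structured values, and the only thing handed to the on-device model is a prompt that pins every fact the character may state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimState:
    """Structured output of the deterministic simulation: the source of truth."""
    heart_rate_bpm: int
    core_temp_c: float
    hunger: float   # 0.0 (sated) .. 1.0 (starving)
    fatigue: float  # 0.0 (rested) .. 1.0 (exhausted)

def verbalization_prompt(state: SimState) -> str:
    # The on-device model is asked only to restate these values in prose.
    # Every fact it may mention is pinned in the prompt, so the character
    # reports simulated state rather than inventing one.
    return (
        "In first person, describe ONLY the following measurements, "
        "one sentence each. Add nothing that is not listed.\n"
        f"- heart rate: {state.heart_rate_bpm} bpm\n"
        f"- core temperature: {state.core_temp_c:.1f} C\n"
        f"- hunger: {state.hunger:.2f} of 1.0\n"
        f"- fatigue: {state.fatigue:.2f} of 1.0\n"
    )

# The prompt would be handed to whatever small (3-7B) model runs locally;
# the model's role is translation, never invention.
prompt = verbalization_prompt(SimState(heart_rate_bpm=112, core_temp_c=37.8,
                                       hunger=0.7, fatigue=0.4))
print(prompt)
```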
We will never sell user data. EVER. Basic performance data will help us improve the platform. Behavioral data, meaning your in-world decisions and choices, will be strictly opt-in. Users who want to contribute to large-scale research, answering questions that only population-level simulation can address, may choose to participate. All research data will be deidentified, and we are establishing external IRB oversight to ensure our consent processes meet the highest ethical standards.
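A sketch of what these defaults could look like in configuration terms; names like `TelemetrySettings` are illustrative, not our actual code. Behavioral and research collection stay off unless the user explicitly turns them on.

```python
from dataclasses import dataclass

@dataclass
class TelemetrySettings:
    """Defaults mirror the stated policy: performance data on, behavior off."""
    performance_metrics: bool = True       # crashes, frame timing, load errors
    behavioral_data: bool = False          # in-world decisions; strictly opt-in
    research_participation: bool = False   # deidentified, IRB-overseen studies

def may_collect(settings: TelemetrySettings, kind: str) -> bool:
    allowed = {
        "performance": settings.performance_metrics,
        "behavior": settings.behavioral_data,
        # Research data requires both consents, since it is behavioral data too.
        "research": settings.behavioral_data and settings.research_participation,
    }
    return allowed.get(kind, False)  # unknown kinds are never collected

assert may_collect(TelemetrySettings(), "behavior") is False  # off unless opted in
```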
We don't avoid difficult topics—we make them customizable. Through a guided setup process, users will set their comfort levels for sensitive content before diving in. Age-appropriate defaults will adjust automatically, with additional controls for parents and educators.
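One way such comfort settings might be structured; the topic names and age bands here are illustrative placeholders, not our shipped defaults. Each sensitive topic carries its own level, age sets the starting point, and overrides dial a topic down.

```python
from enum import IntEnum

class Comfort(IntEnum):
    OFF = 0       # topic never appears
    SOFTENED = 1  # referenced indirectly, never depicted
    FULL = 2      # depicted within the world's fiction

def default_comfort(age: int) -> dict:
    # Age-banded defaults; guided setup then lets the user, or a parent or
    # educator, tighten any topic individually.
    if age < 13:
        return {"grief": Comfort.SOFTENED, "violence": Comfort.OFF}
    if age < 18:
        return {"grief": Comfort.FULL, "violence": Comfort.SOFTENED}
    return {"grief": Comfort.FULL, "violence": Comfort.FULL}

settings = default_comfort(age=15)
settings["violence"] = Comfort.OFF  # a parental override dials the topic down further
```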
Users will be able to report concerns directly in-app, through community forums, or via public channels. Feedback will be tracked against system behavior, allowing us to identify issues and implement corrections. Our systems are designed to be challengeable and correctable—not static.
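Because the simulation core is deterministic, a report can be pinned to a replayable moment. A sketch of what such a report record might carry; the field names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConcernReport:
    """An in-app report pinned to the exact simulated moment it describes."""
    world_id: str
    seed: int    # with a deterministic core, seed + tick reproduce the behavior
    tick: int
    description: str

report = ConcernReport(
    world_id="living-world-042",
    seed=918273,
    tick=15204,
    description="The character's reaction to the funeral scene felt dismissive.",
)
# Engineers can replay (seed, tick) to see exactly what the reporter saw,
# then verify a correction against that same replay.
```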
Gothic Grandma Laboratories conducts research exclusively in pursuit of advancing human understanding, empathy, and well-being. We actively collaborate with academic institutions, policy research organizations, healthcare institutions, and educational partners whose work shares these objectives.
We apply consistent ethical standards to all use of MUSE for research purposes. We do not and will not, whether for ourselves or any collaborator, partner with organizations whose research is aimed at causing harm.
We distinguish between research that advances harm and research that seeks to prevent or heal it. MUSE may be used to study trauma, improve therapeutic outcomes, support conflict resolution, or train caregivers—but never to develop strategic military capabilities, enhance interrogation techniques, or create tools of coercion.
Simulation technology capable of fostering deep understanding carries inherent dual-use risks. We acknowledge this responsibility directly: the same systems that enable empathy can, if misapplied, enable exploitation. Our commitment is to ensure MUSE remains an instrument of connection, never coercion.
This is not merely policy; it is foundational to who we are. The core MUSE platform remains accessible only to the Gothic Grandma Laboratories team, ensuring direct oversight of every system and every partnership. Transparency and accountability are not optional.
All research partnerships will undergo ethics review by our internal team, with input from external advisors as needed. Decisions will be made transparently, with clear reasoning. If we decline a collaboration, we will explain why and suggest alternative approaches when possible.
Learn about research opportunities and applications enabled by the MUSE platform.
We welcome dialogue about our approach. If you have questions, concerns, or want to discuss a potential collaboration, we'd love to hear from you.
Get in Touch