Colectivo Ciudadano
Published 17/11/25 17:47

AI & Emotional Safety Architect (Foundational Volunteer Role → Paid Position After Funding)

Remote; the volunteer can be located in any country in the world


  • Description

    Available Schedule: Weekdays (daytime)
    Flexibility: Flexible
    Frequency: Recurring
    Impact Areas: Community Development, Education, Mental Health, Research & Social Sciences, Science & Technology
    Required Age: 18+

    Description


    TEG-Blue™ is building a new kind of emotional technology — a system that helps humans and AI understand emotional patterns, belonging, harm, and repair. We are looking for someone who can help us design the AI layer of the project: a person who understands technology deeply and humanity gently.

    This role is for someone who wants to build ethically, slowly, and responsibly — a person who values emotional safety as much as innovation.

    About TEG-Blue™

    TEG-Blue is a large-scale emotional technology system created to map how emotions move through individuals, communities, institutions, and AI.

    We work at the intersection of psychology, neuroscience, design, and machine learning — building tools for emotional clarity, healing, and repair.

    We are now forming the AI & Emotional Safety Department, which will design how emotional logic is translated into machine-readable structures (TEG-Code, EMLU), and how AI systems can recognize trauma patterns, manipulation, belonging, and repair.

    We are looking for someone who can help us build this foundation with care, rigor, and integrity.

    Responsibilities

    This role combines architecture, research, and prototyping.

    You will:

    • Help design the high-level architecture for the AI layer of TEG-Blue (TEG-Code, emotional logic schemas, embeddings, datasets)
    • Work with Anna (Founder) to understand the emotional and relational logic behind the maps
    • Translate emotional patterns into technical structures (JSON schemas, embeddings, model attributes); see the sketch after this list
    • Build early prototypes that show how AI can detect emotional modes (Belonging, Defense, Manipulation, Tyranny)
    • Explore how machine learning models could be trained to identify:
      • trauma patterns
      • emotional safety cues
      • belonging signals
      • relational harm
    • Work with the research team to connect empirical findings to AI structures
    • Design ethical and safety guardrails for all AI usage and outputs
    • Build small tools that help the platform understand user states ethically and safely
    • Support future development of the EMLU (Emotional Multitask Language Understanding) layer

    You don’t need to do everything alone — but you will help design the path.
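
    As a concrete illustration of the kind of work above, here is a minimal, hypothetical sketch in Python. The actual TEG-Code and EMLU formats are internal to TEG-Blue and are not shown here; the mode names, cue phrases, and scoring logic below are illustrative assumptions only, standing in for real schemas, embeddings, or trained models.

    # Hypothetical sketch only: not the real TEG-Code or EMLU format.
    # It shows how an "emotional mode" might be expressed as a machine-readable
    # schema, plus a toy scorer that checks a text against each mode's cues.
    # A real prototype would replace the cue matching with embeddings or a
    # fine-tuned model, developed under trauma-informed and ethical review.

    EMOTIONAL_MODES = {
        "belonging": {
            "definition": "Signals of safety, inclusion, and mutual care.",
            "cues": ["thank you", "we can", "i hear you"],
        },
        "defense": {
            "definition": "Signals of perceived threat and self-protection.",
            "cues": ["leave me alone", "that's not fair", "stop blaming me"],
        },
        "manipulation": {
            "definition": "Signals of covert control or guilt-shifting.",
            "cues": ["after all i've done", "you made me", "if you loved me"],
        },
    }

    def score_modes(text: str) -> dict[str, float]:
        """Return, per mode, the fraction of that mode's cue phrases found in the text."""
        lowered = text.lower()
        return {
            mode: sum(cue in lowered for cue in spec["cues"]) / len(spec["cues"])
            for mode, spec in EMOTIONAL_MODES.items()
        }

    if __name__ == "__main__":
        print(score_modes("After all I've done for you, you made me do this."))
        # belonging and defense score 0.0; manipulation scores about 0.67

    Any real version of this would go through trauma-informed review of the schema and careful evaluation with the research team before touching user data.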

    Requirements (Realistic + Values-aligned)

    The ideal person has experience with:

    Technical Skills

    • AI system design / ML engineering
    • Python (PyTorch or TensorFlow)
    • NLP / language models
    • Embeddings, dataset design, vector stores
    • Building prototypes or research tools
    • Data ethics & responsible AI practices
    • JSON schemas, APIs, data modeling

    Knowledge / Orientation

    • Interest in emotional patterns, relational systems, or human behavior
    • Understanding of trauma-informed principles (or willingness to learn)
    • Experience working with complex, abstract systems
    • Ability to translate human concepts into structured logic

    Personal Capacities

    • Emotionally mature and self-aware
    • Comfortable working in ambiguity
    • Collaborative, grounded, low-ego
    • Curious, patient, and precise
    • Enjoys working at the frontier of disciplines

    Nice to Have

    • Experience with computational psychology
    • Experience with safety alignment research
    • Anthropic / OpenAI / HuggingFace experience
    • Experience building data labeling pipelines
    • Graph database understanding (Neo4j)
    • Experience designing ontology-level systems
    • Familiarity with ethics committees or sensitive data protection
    • Knowledge of neuroscientific or psychological models

    What Kind of Person Belongs Here

    This role is for someone who:

    • wants to build technology that protects people
    • cares about emotional safety
    • enjoys bridging science and human experience
    • thinks slowly and deeply
    • doesn’t rush for the sake of rushing
    • values clarity, honesty, and responsibility
    • believes AI should be built with integrity

    You don’t need to be “perfect.”

    Just grounded, curious, and aligned with the mission of making intelligence — human or artificial — safer and more connected.

    TEG-Blue™ is a place for people who care — about dignity, about repair, and about the emotional wellbeing of communities and future AI. We are building slowly and consciously. If this mission speaks to you, we would love to hear from you.

    Location

    Remote
    The volunteer can be located anywhere in the world
    Associated Location
    Barcelona, Spain

    Please fill out this form

    Instructions:

    This is a foundational volunteer role during the early building phase.

    Once funding arrives, the position transitions to a paid role with fair and transparent compensation.

