



Interhuman AI is building the next generation of social intelligence infrastructure: multimodal AI systems that understand not just what humans say, but how they say it. We're developing models that interpret behavioral signals like hesitation, engagement, confusion, and interest across voice, facial expressions, body language, and natural language, all in real time.

We are looking for a Student Researcher to join our AI engineering team. This is not a typical "support" role; it is an invitation to apply the latest research in multimodal evaluation and data synthesis to production-scale infrastructure. You will work on the practical foundations that allow our models to move from experimental to state-of-the-art every single week.

What You'll Do

* Own a scoped project that upgrades how we collect, validate, or evaluate complex behavioral-signal data, directly impacting model performance.
* Create high-signal benchmarks to track improvements on the most challenging long-tail machine learning cases in social interaction.
* Improve consistency and auditability across our pipeline so that our research results are trustworthy, repeatable, and scalable.
* Contribute to the internal ecosystem of scripts and utilities that accelerate our experiment cycles, result comparison, and artifact organization.