Context
As part of our vision, we are creating a cross-functional team dedicated to establishing the building blocks and core expertise that underpin our foundation model development efforts.
This team will collaborate closely with all squads, starting with the Brain Foundation Model team, to create scalable, reusable methodologies, frameworks, and solutions.
Role Overview:
As a Foundation Model Research Scientist, you will join a growing team (currently consisting of Mike Gartrell) that bridges applied research and service delivery, developing the essential tools and methods required for efficient and impactful foundation model development.
The team’s work will span fundamental research and the creation of domain-agnostic frameworks, with an emphasis on multimodal data integration, explainability, and alternative architectures.
Your work will contribute to:
Core Research: Advancing transformer architectures and exploring novel AI methods.
Applied Innovation: Developing scalable solutions for domain-specific models.
Service Delivery: Providing expertise and tools that enhance the work of other squads.
This role is a blend of applied research, collaborative problem-solving, and contribution to the broader AI research community through publications at leading conferences (e.g., NeurIPS, ICML, IEEE).
Responsibilities:
Contribute to the development of multimodal frameworks (e.g., integrating EEG and MRI data).
Explore and implement methods for uncertainty estimation and explainability in foundation models.
Investigate and prototype alternative architectures beyond traditional transformer models.
Create reusable methodologies for foundation model training, including tokenization, data manipulation, and scaling.
Collaborate with domain-specific teams (e.g., Brain Foundation Model team) to ensure solutions are applicable across projects.
Engage with the AI research community, publishing findings and contributing to open problems.
Propose strategies to determine what elements of foundation model development should be abstracted, “platformed,” or customized.
Qualifications:
PhD in Machine Learning, AI, or a related field, with postdoctoral experience or a combination of academic and industry exposure.
Expertise in developing and scaling large vision-language models (LVLMs) or domain-specific foundation models.
Hands-on experience with foundation model training, architecture design, and multimodal data integration.
Strong publication record in top-tier conferences/journals (e.g., NeurIPS, ICML, IEEE).
Deep knowledge of the state-of-the-art in transformer architectures and awareness of unpublished advancements from leading labs.
Excellent problem-solving, collaboration, and communication skills.
Interview Process:
Introduction call with Head of TA (Paul) - 30 min
AI research deep dive with Lead AI Scientist (Mike Gartrell) - 45 min
Behavioural interview with Paul - 45 min
Research talk presentation / pair coding / architectural design - half day (onsite)