AI Researcher - Foundation Models team

Permanent contract
Paris
Salary: Not specified
Remote work: a few days at home
Experience: > 3 years
Education: PhD or more

Sigma Nova


Job description

Context

As part of our vision, we are creating a cross-functional team dedicated to establishing the building blocks and core expertise that underpin our foundation model development efforts.

This team will collaborate closely with all squads, starting with the Brain Foundation Model team, to create scalable, reusable methodologies, frameworks, and solutions.

Role Overview:

As a Foundation Model Research Scientist, you will join a growing team (currently consisting of Mike Gartrell) that bridges applied research and service delivery, developing the essential tools and methods required for efficient and impactful foundation model development.

The team’s work will span fundamental research and the creation of domain-agnostic frameworks, with an emphasis on multimodal data integration, explainability, and alternative architectures.

Your work will contribute to:

  1. Core Research: Advancing transformer architectures and exploring novel AI methods.

  2. Applied Innovation: Developing scalable solutions for domain-specific models.

  3. Service Delivery: Providing expertise and tools that enhance the work of other squads.

This role is a blend of applied research, collaborative problem-solving, and contribution to the broader AI research community through publications at leading venues (e.g., NeurIPS, ICML, IEEE).

Responsibilities:

  • Contribute to the development of multimodal frameworks (e.g., integrating EEG and MRI data).

  • Explore and implement methods for uncertainty estimation and explainability in foundation models.

  • Investigate and prototype alternative architectures beyond traditional transformer models.

  • Create reusable methodologies for foundation model training, including tokenization, data manipulation, and scaling.

  • Collaborate with domain-specific teams (e.g., Brain Foundation Model team) to ensure solutions are applicable across projects.

  • Engage with the AI research community, publishing findings and contributing to open problems.

  • Propose strategies to determine what elements of foundation model development should be abstracted, “platformed,” or customized.


Preferred experience

  • PhD in Machine Learning, AI, or a related field, with postdoctoral experience or a combination of academic and industry exposure.

  • Expertise in developing and scaling large vision-language models (LVLMs) or domain-specific foundation models.

  • Hands-on experience with foundation model training, architecture design, and multimodal data integration.

  • Strong publication record in top-tier conferences/journals (e.g., NeurIPS, ICML, IEEE).

  • Deep knowledge of the state-of-the-art in transformer architectures and awareness of unpublished advancements from leading labs.

  • Excellent problem-solving, collaboration, and communication skills.


Recruitment process

  • Introduction call with Head of TA (Paul) - 30 min

  • AI research deep dive with Lead AI Scientist (Mike Gartrell) - 45 min

  • Behavioural interview with Paul - 45 min

  • Research talk / pair coding / architecture design - half day (onsite)
