Giskard

Artificial Intelligence/Machine Learning, Software

Paris

Organization and methodologies

Our team is organized in autonomous, cross-disciplinary product squads combining complementary profiles: ML Researchers, Software Engineers, Product Managers & Designers. Each squad owns specific product areas and customer problems, with strong autonomy in how to design the technical solutions to these problems.

In terms of work methodology, we follow the original principles of the agile manifesto mixed with lean software development. We also foster habits of open & async communication, with dedicated time to focus & learn.

Currently, we have 2 squads: one working on LLM Quality & Safety, and another working on AI Compliance. You can read more about these 2 projects in the section below. 👇

Additionally, we've formed a new Customer ML Engineering team in charge of deploying Giskard at our enterprise customers.

Product, project or technical challenge

AI Compliance

As the AI Act was adopted in May 2024 (cf. timeline of developments), the EU will require businesses to better control the risks raised by AI models. Specifically, 4 high-risk industries will be regulated: finance, healthcare, public service and infrastructure; as well as foundation model providers deemed to pose "systemic risk".

This landmark regulation will require AI teams to change the way they develop AI projects, adding new requirements for conformity assessments, quality management and documentation. In parallel, standard organizations including ISO, CEN-CENELEC and NIST are working on technical AI standards.

We're working on a brand-new product to help enterprise teams automate compliance with the EU AI Act and upcoming AI standards.

LLM Quality & Security

Over the last 2 years, the biggest AI breakthrough has been Large Language Models (LLMs) such as ChatGPT and Mistral. Despite their impressive performance, these models raise many risks in terms of errors (hallucinations), ethical biases and security vulnerabilities when applied to critical applications.

As part of an R&D consortium with Mistral, Artefact, INA and BnF, Giskard got funding from the French government (France 2030 strategic plan) to develop novel methods for evaluating LLM-based agents, as well as mitigating problems found during the evaluation.

We have a 2-year R&D roadmap to work on this exciting frontier problem, with the end goal of shipping open-source LLM evaluation & mitigation tools. This new field of ML research, sometimes called "red-teaming" (evaluation) and "blue-teaming" (mitigation), will play a critical role in helping AI builders create better Generative AI systems.

Recruitment process

You can get an offer in 4 weeks 🚀

  • HR fit interview: 15 minutes
  • Manager fit interview: 30 minutes
  • Technical exercise: 10 days to complete
  • Technical interview: 45 minutes
  • Reference calls (excluding interns): 2 references
  • Final interview with founders: 45 minutes

Latest job posts

No openings for now, please check back in a few days!