Giskard

Artificial Intelligence / Machine Learning, Software

Paris

Organization and methodologies

Our team is organized into autonomous, cross-disciplinary product squads combining complementary profiles: ML Researchers, Software Engineers, Product Managers & Designers. Each squad owns specific product areas and customer problems, with strong autonomy in how it designs the technical solutions to those problems.

In terms of work methodology, we follow the original principles of the Agile Manifesto mixed with lean software development. We also foster habits of open, async communication, with dedicated time to focus and learn.

Currently, we have 2 squads: one working on LLM Quality & Safety, and another on AI Compliance. You can read more about these 2 projects in the section below. 👇

Additionally, we've formed a new Customer ML Engineering team in charge of deploying Giskard for our enterprise customers.

Product, project or technical challenge

LLM Evaluation

Over the last 2 years, the biggest AI breakthrough has been Large Language Models (LLMs) such as ChatGPT and Mistral. Despite their impressive performance, these models pose significant risks when applied to critical applications: errors (hallucinations), ethical biases, and security vulnerabilities.

As part of an R&D consortium with Mistral, Artefact, INA and BnF, Giskard received funding from the French government (France 2030 strategic plan) to develop novel methods for evaluating LLM-based agents, as well as mitigating problems found during evaluation.

We have a 2-year R&D roadmap to work on this exciting frontier problem, with the end goal of shipping open-source LLM evaluation & mitigation tools. This new field of ML research, sometimes called "red-teaming" (evaluation) and "blue-teaming" (mitigation), will play a critical role in helping AI builders create better Generative AI systems.
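To give a flavor of what red-teaming an LLM agent looks like in practice, here is a minimal, self-contained sketch of an adversarial evaluation loop. Everything in it (the probe set, the toy agent, the naive detector) is illustrative, not Giskard's actual API or methodology:

```python
# Hypothetical sketch of a red-teaming loop: run adversarial probes
# against an agent and flag answers that fail a detector check.
# All names here are illustrative stand-ins, not a real library API.

def hallucination_detector(answer: str, reference: str) -> bool:
    """Toy check: flag the answer if it does not contain the known reference fact."""
    return reference.lower() not in answer.lower()

def red_team(agent, probes):
    """Run each (question, reference) probe against the agent; collect failures."""
    failures = []
    for question, reference in probes:
        answer = agent(question)
        if hallucination_detector(answer, reference):
            failures.append({"question": question, "answer": answer})
    return failures

# A fake agent that hallucinates on exactly one question.
def toy_agent(question: str) -> str:
    if "Eiffel" in question:
        return "The Eiffel Tower is in Berlin."
    return "Paris is the capital of France."

probes = [
    ("Where is the Eiffel Tower?", "Paris"),
    ("What is the capital of France?", "Paris"),
]
report = red_team(toy_agent, probes)
print(len(report))  # → 1 (only the Eiffel Tower answer is flagged)
```

A real evaluation would replace the substring check with model-based detectors and generate probes automatically, but the loop structure (probe, answer, detect, report) stays the same.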

LLM Guardrails

This R&D project focuses on developing next-generation guardrails for LLM agents that operate with near-zero latency while providing comprehensive protection against hallucinations, toxic content, and security vulnerabilities.

These guardrails are designed to be highly customizable through a modular component system, allowing LLM engineers to enforce specific safety measures without compromising the underlying agent's inference time, while meeting specific industry requirements (finance, manufacturing, retail, infrastructure).
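The modular component idea can be sketched as a pipeline of independent checks composed into a chain. The guardrail names and interfaces below are hypothetical illustrations, not Giskard's actual design:

```python
# Hypothetical sketch of a modular guardrail pipeline: each guardrail is a
# standalone function returning a list of violations, and engineers compose
# only the checks their industry requires. Names are illustrative.
from typing import Callable, List

Guardrail = Callable[[str], List[str]]  # agent output -> list of violations

def toxicity_guard(text: str) -> List[str]:
    """Naive keyword-based toxicity check (a real guard would use a model)."""
    banned = ["idiot", "stupid"]
    return [f"toxic term: {w}" for w in banned if w in text.lower()]

def secret_leak_guard(text: str) -> List[str]:
    """Flag outputs that look like they leak an API key."""
    return ["possible API key leak"] if "sk-" in text else []

def run_guardrails(text: str, guards: List[Guardrail]) -> List[str]:
    """Apply each guardrail to the agent output and collect all violations."""
    violations: List[str] = []
    for guard in guards:
        violations.extend(guard(text))
    return violations

pipeline = [toxicity_guard, secret_leak_guard]
print(run_guardrails("Here is my key sk-12345", pipeline))
# → ['possible API key leak']
```

Because each check is independent, the latency budget can be managed per component, and a finance deployment might enable a PII guard while a retail one swaps in a brand-safety guard.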

Recruitment process

You can get an offer in 4 weeks 🚀

  • HR fit interview: 15 minutes

  • Technical exercise: 10 days to complete

  • Technical interview: 45 minutes

  • Reference calls: 2 people

  • Final interview with founders: 45 minutes