2023 Agenda

Opening session

Keynote

  • Jensen Huang, Founder and CEO, NVIDIA
  • Xavier Niel, Founder, iliad Group
  • Eric Schmidt, Former CEO, Google

Afternoon sessions

Tracks: Master Stage, Central Room, Creativity Room

Sessions run from 14:00 to 18:30.
Building Datasets: Tech Constraints, IP Challenges, and Quality Maintenance

In this talk, you'll gain insights into overcoming technology constraints specific to various data types and explore the legal intricacies of intellectual property. Moderator: Jérôme Rastit, Head of Ethical Hacking, Free Pro

From Transformers to Benchmarks: Revolutionizing ML Infrastructure Choices

Discover the key role of benchmarks in guiding your ML infrastructure choices and how they continually drive innovation to overcome challenges.

Reinforcement Learning to Turn LLMs into Useful Tools

Uncover real-world applications, technical challenges, and exciting announcements! Moderator: Adrienne Jan, CPO, Scaleway

  • Iacopo Poli, CTO, LightOn
  • Alexandre Laterre, Head of research, InstaDeep

Mistral AI's Open Source Initiative: Ambitions, Approaches, and Roadmap Ahead

Understand the practical impact of Mistral AI's architecture choices for its groundbreaking Mistral 7B model.

Artificial Intelligence Development: How to Navigate Regulation

Unravel the regulatory frameworks governing artificial intelligence. Stick around for a 15-minute Q&A. Moderator: Daphné Leprince-Ringuet, French tech reporter

Next-Gen AI Hardware: Meeting Tomorrow's Compute Demands While Balancing Accessibility and Environmental Impact

Uncover how next-generation technology could address the surging demand for AI compute while prioritizing accessibility and environmental impact. Join us for a glimpse into the AI landscape of tomorrow and beyond. Moderator: Albane Bruyas, COO, Scaleway

Invitation-Only. Ampere: Improve AI Performance and Cut Costs with AMP2 Instances

Closed Session: Personal Invitations Required! The power and cost inefficiency of large-scale AI deployments on the most frequently chosen hardware platforms heavily impacts end users' ability to achieve a desirable ROI. In response, Ampere created a new class of GPU-free AI inference that delivers the best price/performance compared with both GPUs and legacy x86 processors. Ampere Cloud Native processors are optimized to run AI inference, meeting all performance needs while saving both energy and space in the data center. Join Ampere's experts to discover how, and get a glimpse of the performance these new chips can deliver!

  • Victor Jakubiuk, Head of AI, Ampere
  • Kornel Krysa, PMM AI, Ampere

Invitation-Only. NVIDIA: Efficient Deployment and Inference of GPU-Accelerated LLMs

Closed Session: Personal Invitations Required! NVIDIA TensorRT-LLM, which will be part of NVIDIA AI Enterprise, is open-source software that delivers state-of-the-art performance for LLM serving on NVIDIA GPUs. It consists of the TensorRT deep learning compiler and includes optimized kernels, pre- and post-processing steps, and multi-GPU/multi-node communication primitives. During this session, we will present TensorRT-LLM's features and capabilities and walk you through the steps needed to build and run your model in TensorRT-LLM on a single GPU and on multiple GPUs. We will also use the TensorRT-LLM backend and Triton Inference Server for deployment.

Invitation-Only. NVIDIA: Let’s take advantage of the H100 PCIe GPU for your application

Closed Session: Personal Invitations Required! The NVIDIA H100 Tensor Core GPU features fourth-generation Tensor Cores and a new Transformer Engine with FP8 precision, which provides a significant boost for LLM training and inference over the prior generation. During this session, we will review the latest features and capabilities of the NVIDIA H100 PCIe GPU and explore techniques to take best advantage of its performance.

Networking area

Closing Cocktail