Meetup

AI Innovators Munich with Weights & Biases × JetBrains

Event Overview
Join Weights & Biases and JetBrains in Munich for an evening dedicated to practical, production-grade AI development, from structured experimentation to deploying specialized AI systems that deliver real business impact.

AI is evolving fast, but building modern AI applications requires more than prompting a frontier model. It demands rigorous experimentation, scalable infrastructure, and developer workflows that make iteration seamless. This meetup brings together ML engineers, software developers, and AI practitioners who are building and shipping the next generation of intelligent systems.
What to expect

Expect technical depth, live demos, and practical lessons learned from real-world teams.

What you’ll learn
  • Experiment tracking & evaluation at scale
  • Building, debugging, and evaluating LLM applications
  • AI developer tools that improve velocity and reliability
  • Collaboration between ML and software engineering teams
  • From research notebooks to production systems
Who should attend
  • Machine Learning Engineers
  • AI / LLM Application Developers
  • MLOps & Platform Engineers
  • Software Engineers building AI features
  • Developer Tooling & Platform Teams
  • Anyone experimenting with LLMs in production
Featured talk from Weights & Biases

Are general-purpose LLMs falling short of your company’s highly specialized practical requirements? While Supervised Fine-Tuning (SFT) is an option, what happens when you simply don’t have enough data?

“On-the-job” Reinforcement Learning (RL) is emerging as the key to filling this gap, enabling models to acquire advanced reasoning and align with highly specific business intents. However, the barrier to entry is notoriously high. Manually comparing GPU providers, building deployment scripts, and configuring infrastructure can delay RL training jobs by hours or even days.

Join us for a deep dive into Serverless RL, a new backend powered by Weights & Biases and CoreWeave designed to abstract away infrastructure headaches.

In this talk, we will cover:

  • The Evolution of RL in LLMs: Why “On-the-job RL” is the next frontier for practical AI development.
  • Bypassing the Infra Bottleneck: How a serverless backend drops cold start latency from 114 seconds to just 3 seconds while reducing overall training costs and wall-clock time compared to local H100 setups.
  • Automating Rewards with RULER: Learn how to implement Relative Universal LLM-Elicited Rewards to bypass the need for manual labels and human feedback entirely.
  • Real-World Successes: Discover how companies in food delivery and high-stakes finance are using these pipelines to replace slow frontier models with highly specialized, low-latency 8B parameter agents.

Whether you’re looking to build hyper-fast voice agents or specialized internal experts, this session will show you how to empower your software teams to train specialized open-source models on demand, without forcing them to become hardware managers.

Featured speakers

Hans Ramsl, AI Solutions Engineer, Weights & Biases
Maria Tigina, Software Developer, AI Agents Platform, JetBrains