Navigating Strategies For Debugging WASM

In the world of WebAssembly (WASM), debugging can often feel like navigating a complex maze. Current tooling falls short, and WASM poses unique challenges of its own, so developers need effective strategies and solutions. Shivay will begin by discussing the current state of WASM debugging, drawing parallels with technologies like LLVM. He will cover approaches to enabling WASM debugging with open-source tools like Modsurfer, and will also propose a compile-time debugging approach. A key part of the presentation will be the role of LLVM Intermediate Representation (IR) in the debugging process: understanding how source code is converted into LLVM IR, and what happens in the subsequent stages, can provide valuable insights when debugging WASM applications. The goal of this talk is to equip developers with the knowledge and strategies they need to debug their WASM applications effectively.

Fine-Tuning Large Language Models With Declarative ML Orchestration

Large language models like GPT-3 and BERT have revolutionized natural language processing by achieving state-of-the-art performance. However, these models are typically trained by tech giants with massive resources. Smaller organizations struggle to fine-tune these models for their specific needs due to infrastructure challenges.

This talk will demonstrate how open-source ML orchestration tools like Flyte can help overcome these challenges by providing a declarative way to specify the infrastructure required for ML workloads. Flyte's capabilities can streamline ML pipelines, reduce costs, and make fine-tuning of large language models accessible to a wider audience.

Specifically, attendees will learn:

- How large language models work and their potential applications

- The infrastructure requirements and challenges for fine-tuning these models

- How Flyte's declarative specification and abstractions can automate and simplify infrastructure setup

- How to leverage Flyte to specify ML workflows for fine-tuning large language models

- How Flyte can reduce infrastructure costs and optimize resource usage

By the end of the talk, attendees will understand how open-source ML orchestration tooling can unlock the full potential of large language models by making their fine-tuning easier and more accessible, even with limited resources. This will enable a larger community of researchers and practitioners to leverage and train large language models for their specific use cases.

Unchain your mind at LambdaConf 2024