Advanced Generative AI Engineering Pathway (Beta) - Databricks Learning
Databricks Fine-Tuning: MLflow Sweep Comparison & Fast Model Serving Demo (Llama/Unsloth)
Join Ryan Cicak, Solutions Engineer at Databricks, as he explores the art of fine-tuning models using serverless GPU compute. Discover how to pull models from Hugging Face, fine-tune them with ease, and serve them via API.
Create custom model serving endpoints | Databricks Documentation
Learn how to create and configure model serving endpoints that serve custom models.
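To make the endpoint-creation docs above concrete, here is a minimal sketch of calling the Databricks REST API to create a custom model serving endpoint. The workspace host, token, and Unity Catalog model name are placeholders, and the exact config fields may vary by release — treat this as an illustration of the documented `POST /api/2.0/serving-endpoints` call, not a definitive implementation.

```python
# Sketch: create a custom model serving endpoint via the Databricks REST API.
# Host, token, and the Unity Catalog model name are placeholders (assumptions).
import json
import urllib.request


def endpoint_payload(name, uc_model, version):
    """Build the request body for POST /api/2.0/serving-endpoints."""
    return {
        "name": name,
        "config": {
            "served_entities": [
                {
                    "entity_name": uc_model,       # e.g. "main.default.my_model"
                    "entity_version": version,
                    "workload_size": "Small",
                    "scale_to_zero_enabled": True,
                }
            ]
        },
    }


def create_endpoint(host, token, payload):
    """Send the create request; returns the parsed JSON response."""
    req = urllib.request.Request(
        f"{host}/api/2.0/serving-endpoints",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating payload construction from the HTTP call keeps the request body easy to inspect before anything hits the workspace.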
From Zero to GenAI Hero: Building Your GenAI App with HuggingFace and Databricks | Databricks Blog
A comprehensive guide to building a GenAI app using a HuggingFace model, MLflow, Unity Catalog and Databricks Apps, covering setup, development, and deployment.
What is Databricks Feature Serving? - Azure Databricks
Feature Serving provides structured data for RAG applications and makes data in the Databricks platform available to applications deployed outside of Databricks.
With Databricks Feature Serving, you can serve structured data for retrieval augmented generation (RAG) applications, as well as features that are required for other applications, such as models served outside of Databricks or any other application that requires features based on data in Unity Catalog.
Databricks Foundation Model APIs - Azure Databricks
This article provides an overview of the Foundation Model APIs in Databricks. It includes requirements for use, supported models, and limitations.
Using the Foundation Model APIs you can:
Query a generalized LLM to verify a project’s validity before investing more resources.
Query a generalized LLM to create a quick proof-of-concept for an LLM-based application before investing in training and deploying a custom model.
Use a foundation model, along with a vector database, to build a chatbot using retrieval augmented generation (RAG).
Replace proprietary models with open alternatives to optimize for cost and performance.
Efficiently compare LLMs to see which is the best candidate for your use case, or swap a production model with a better performing one.
Build an LLM application for development or production on top of a scalable, SLA-backed LLM serving solution that can support your production traffic spikes.
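The use cases above all start with querying a serving endpoint. A minimal sketch of that call, using only the standard library: the host, token, and endpoint name are placeholders (pay-per-token endpoint names vary by workspace and region), and the request body follows the OpenAI-style chat format the Foundation Model APIs accept.

```python
# Sketch: query a Foundation Model APIs chat endpoint.
# Host, token, and endpoint name are placeholder assumptions.
import json
import urllib.request


def chat_payload(prompt, max_tokens=128):
    """Build an OpenAI-style chat completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query_endpoint(host, token, endpoint, payload):
    """POST to /serving-endpoints/{endpoint}/invocations and
    return the first choice's message content."""
    req = urllib.request.Request(
        f"{host}/serving-endpoints/{endpoint}/invocations",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload is plain OpenAI-style JSON, the same shape works whether you later swap the endpoint name for a different model — which is what makes the quick comparisons described above cheap to run.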
Databricks Certified Generative AI Engineer Associate exam guide
Text classification - Hugging Face
Batch inference using Foundation Model APIs - Azure Databricks
Learn how to do batch inference using a provisioned throughput endpoint.
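As a rough illustration of the batch-inference pattern the page above covers: chunk the inputs client-side and score each chunk against the endpoint. This is a minimal sketch only — for large batches the Databricks docs steer you toward `ai_query()` in SQL against a provisioned throughput endpoint, and `call_endpoint` here stands in for whatever scoring function you wire up.

```python
# Sketch: naive client-side batch inference by chunking prompts.
# call_endpoint is a placeholder for a function that scores one chunk
# against a provisioned throughput endpoint.
def batched(items, size):
    """Split a list of inputs into fixed-size chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def run_batch(prompts, call_endpoint, size=8):
    """Score all prompts chunk by chunk and flatten the results."""
    results = []
    for chunk in batched(prompts, size):
        results.extend(call_endpoint(chunk))
    return results
```

Chunking keeps each request under the endpoint's payload limits while still letting the provisioned throughput capacity stay saturated.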
Tutorial: Deploy and query a custom model
Learn the overview and basic steps for performing model serving on Databricks.
Fine Tune your LLMs with Mosaic AI Model Training
dbdemos - Databricks Lakehouse demos
LLM Chatbot With Retrieval Augmented Generation (RAG) and DBRX
dbdemos - Databricks Lakehouse demos
Tutorials | Databricks
Discover the power of Lakehouse. Install demos in your workspace to quickly access best practices for data ingestion, governance, security, data science and data warehousing.
Databricks Generative AI Cookbook
databricks/genai-cookbook
My Journey towards “Databricks Certified Generative AI Engineer Associate”
My experiences from preparing for and successfully passing the (beta) exam