Data Science and Analytics

🔴 Building, Deploying and Monitoring Large Language Models with Jinen Setpal

In this live episode, I speak with Jinen Setpal, ML Engineer at DagsHub, about building, deploying, and monitoring large language model applications. We dive into evaluation methods, ways to reduce hallucinations, and much more, and answer great questions from the audience.

How to Build CI/CD Pipeline for Continuous Deployment with SageMaker

Learn how to create a simple CI/CD deployment pipeline for your Machine Learning project using AWS SageMaker and DagsHub. Most Machine Learning models are dynamic: they continuously learn and improve their performance as new data arrives, which requires us to constantly update the deployed model. As we know, deployment is a repetitive process, and …

LLMOps: Experiment Tracking with Weights & Biases for Large Language Models

We will look at how Weights & Biases logs prompts, documents the model architecture, and records versioned artifacts. In the previous article, we explored MLflow's support for experiment tracking of LLM applications through logging of prompts and their outputs. Another widely used experiment-tracking tool is Weights & Biases (W&B or WandB). …

Google’s “Gemini” is 5 Times Stronger than “GPT-4”

Learn about Google’s latest flagship model, “Gemini,” which emerges as a direct competitor to GPT-4, with five times the computing resources used for training and multimodal capabilities. Google’s latest flagship model, codenamed “Gemini,” was reportedly trained with five times the compute of GPT-4 and is able to produce text …

Tutorial: How to Set Up SageMaker for Machine Learning CI/CD Pipelines

Learn how to set up an AWS SageMaker environment to create and run custom CI/CD pipelines for Machine Learning. With the advent of “bigger and badder” machine learning models and their usage in production, it has become necessary to orchestrate the entire MLOps process. This process is often time-consuming, repetitive, and resource-intensive. The issue is …

LLMOps: Experiment Tracking with MLflow for Large Language Models

Learn how the new version of MLflow supports logging experiments of Large Language Models. The rapid development of Large Language Models (LLMs) like ChatGPT, LLaMA 2, and Falcon has revolutionized the Data Science world and introduced a new concept: “Prompt Engineering.” Prompting involves using input text or questions to guide LLMs in …

Tutorial: Build an Active Learning Pipeline using Data Engine

With the release of Data Engine, DagsHub has made it easier to create an Active Learning Pipeline. This tutorial shows how to create one for an image segmentation model using COCO 1K. An end-to-end active learning pipeline is something many struggle with. Even large companies with experienced data science teams run into issues. …

⛹️‍♂️ Large Scale Video ML at WSC Sports with Yuval Gabay

In this episode, Dean speaks with Yuval Gabay, MLOps Engineer at WSC Sports. They talk about MLOps methodologies, standardizing deployment in the organization, and closing the loop back from production into training. Yuval builds better infrastructure …
