Machine Learning Model Optimization

A catalogue of Heartbeat posts exploring techniques, tips, and best practices for fine-tuning and optimizing machine learning models

Whether it’s handling and preparing datasets for model training, pruning model weights, tuning parameters, or any number of other approaches and techniques, optimizing machine learning models is a labor of love.

It’s important to note that there’s no one-size-fits-all approach: different use cases require different techniques, and various stages of the model building lifecycle determine possible and preferred optimization strategies.

To help you explore the myriad methods and techniques for ML model optimization across the lifecycle, we’ve collected a number of helpful Heartbeat posts that cover topics including:

  • Handling Data
  • Architecture Selection
  • Model Debugging
  • ML Model Visualization
  • ML Model Evaluation and Selection
  • Hyperparameter Tuning, Optimization Algorithms, and More

Handling Data

H2O AutoML + Big Data Processing with Apache Spark

Leverage the speed and scalability of H2O AutoML with the big data processing capabilities of Apache Spark

— by Jamshed Khan

Data Pre-processing and Visualization for Machine Learning Models

Learn the fundamentals of data pre-processing and visualization, including why it matters and practical steps for these crucial processes.

— by Natasha Sharma

Introduction to Text Wrangling Techniques for Natural Language Processing

An introduction to a few core techniques for pre-processing text data: sentence splitting, tokenization, stemming, lemmatization, and stop word removal.

— by Sai Kambampati
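
As a quick taste of those steps, here is a minimal sketch using NLTK (the library choice and example sentence are ours for illustration, not necessarily the post's), covering sentence splitting, tokenization, stop word removal, stemming, and lemmatization:

```python
# A minimal sketch of the pre-processing steps named above, using NLTK.
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.corpus import stopwords

# Download the required resources once (newer NLTK releases may also need "punkt_tab").
nltk.download("punkt")
nltk.download("wordnet")
nltk.download("stopwords")

text = "Cats are running faster than dogs. Stemming and lemmatization normalize words."

sentences = sent_tokenize(text)                             # sentence splitting
tokens = [t for s in sentences for t in word_tokenize(s)]   # tokenization

stop_words = set(stopwords.words("english"))
tokens = [t for t in tokens if t.isalpha() and t.lower() not in stop_words]

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
print([stemmer.stem(t) for t in tokens])         # e.g. "running" -> "run"
print([lemmatizer.lemmatize(t) for t in tokens])
```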

Without well-prepared data, no algorithm or machine learning architecture will work effectively. Here’s a quick look at some data-related debugging methods for ML models.

— by Dhiraj Kumar

Research Guide: Data Augmentation for Deep Learning

Exploring popular and recent data augmentation techniques for a variety of deep learning tasks.

— by Derrick Mwiti
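
For a flavor of what augmentation looks like in practice, here is a minimal sketch using tf.keras's ImageDataGenerator; the transformation ranges and the dummy batch are illustrative, not recommendations from the guide:

```python
# A minimal sketch of on-the-fly image augmentation with tf.keras.
import numpy as np
import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15,       # random rotations of up to 15 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    zoom_range=0.1,          # random zoom
    horizontal_flip=True,    # random left-right flips
)

images = np.random.rand(8, 32, 32, 3).astype("float32")  # dummy image batch
labels = np.arange(8)

# Each batch drawn from the generator is a randomly transformed view of the data.
x_aug, y_aug = next(datagen.flow(images, labels, batch_size=8))
print(x_aug.shape)  # (8, 32, 32, 3)
```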

Data Handling Scenarios: Reading CSV files, converting categories, and finding empty strings in a dataframe

Learn techniques for handling common dataset challenges: reading CSV files, converting categories, and finding empty strings in a DataFrame.

— by Okoh Anita
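
Here is a minimal pandas sketch of those three scenarios; the column names are placeholders, and an in-memory CSV stands in for a real file on disk:

```python
# A minimal sketch: read a CSV, convert a column to categories, find empty strings.
import io
import pandas as pd

csv_text = "city,comment\nNairobi,great dataset\nLagos,\nAccra,   \n"
df = pd.read_csv(io.StringIO(csv_text), keep_default_na=False)  # read the CSV

# Convert a string column to the memory-efficient 'category' dtype,
# then to integer codes if a model needs numeric input.
df["city"] = df["city"].astype("category")
df["city_code"] = df["city"].cat.codes

# Find empty (or whitespace-only) strings in a column.
empty_mask = df["comment"].str.strip() == ""
print(df[empty_mask])
```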

Is your data good enough?

This post introduces a number of key questions all machine learning engineers should ask themselves about their data before building models.

— by Salma Ghoneim (Machine’s Creativity)

Architecture Selection

Selecting the Best Architecture for Artificial Neural Networks

A question commonly asked by beginners to ANNs is whether it's possible to select an optimal architecture. This post seeks to answer that essential question.

— by Ahmed Gad

Reviewing popular and current approaches to Neural Architecture Search, which seeks to automate the process of designing neural networks.

— by Derrick Mwiti

Model Debugging

Deep Learning Best Practices: Regularization Techniques for Better Neural Network Performance

This tutorial covers the basics of bias-variance trade-off in neural networks and presents regularization techniques to improve model performance.

— by Niranjan Kumar

5 TensorFlow techniques to eliminate overfitting in DNNs

Early stopping, L1 and L2 regularization, dropout, max-norm regularization, and data augmentation with TensorFlow.

— by Salma Ghoneim
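
As an illustration, here is a minimal Keras sketch combining three of those techniques: L2 regularization, dropout, and early stopping. The layer sizes, penalty, dropout rate, and random data are illustrative, not taken from the post:

```python
# A minimal Keras sketch: L2 regularization, dropout, and early stopping.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight penalty
    layers.Dropout(0.5),                                     # dropout
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(               # early stopping
    monitor="val_loss", patience=5, restore_best_weights=True)

x = np.random.rand(200, 20).astype("float32")                # dummy data
y = np.random.randint(0, 2, size=(200,))
model.fit(x, y, validation_split=0.2, epochs=20, callbacks=[early_stop], verbose=0)
```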

Debugging machine learning models: Solving high bias and high variance

Learn how to identify and solve problems involving high bias and high variance in machine learning models.

— by Dhiraj Kumar

Bias-Variance Tradeoff to Avoid Under/Overfitting

An overview of two fundamental concepts in evaluating machine learning model performance: bias and variance.

— by Maher

Machine Learning Model Visualization

Modeling & Deploying Object Detection Based Recommenders to the Web

How to create and deploy your own object detection-based recommender to the web.

— by Pranjal Saxena

Heatmaps and CNNs Using Fast.ai

Learn all about CNNs and a visual representation technique to intuitively understand their outputs.

— by Dipam Vasani

Model Interpretability and Visualization: Looking Inside the Neural Network Black Box

Neural networks can feel a bit mysterious — like a black box, you might say. This article looks at how model interpretability and visualization can help us see what’s happening under the hood.

— by Avinash

Analyzing Machine Learning Models with Yellowbrick

Learn how to analyze and visualize ML models with the Yellowbrick diagnostic library.

— by Parul Pandey
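
As a taste of the library, here is a minimal sketch that renders a visual classification report (per-class precision, recall, and F1) with Yellowbrick; the dataset and estimator are placeholder choices:

```python
# A minimal Yellowbrick sketch: a visual classification report.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

viz = ClassificationReport(LogisticRegression(max_iter=1000), support=True)
viz.fit(X_train, y_train)   # fit the wrapped estimator
viz.score(X_test, y_test)   # compute metrics on the held-out split
viz.show()                  # render the heatmap
```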

Machine Learning Model Evaluation and Selection

Pre-Trained Machine Learning Models vs Models Trained from Scratch

A re-examination of the growing assumption that working with pre-trained models results in higher model accuracy.

— by Avinash

Machine Learning Model Evaluation & Selection

Learn best practices for validating your machine learning model.

— by Shikhar Gupta

Classification Model Evaluation

A detailed look at how machine learning models are evaluated, including evaluation procedures, metrics, and where to apply them.

— by Lahiru Liyanapathirana

Introduction to Machine Learning Model Evaluation

In this article, we explain the techniques used in evaluating how well a machine learning model generalizes to new, previously unseen data. We’ll also illustrate how common model evaluation metrics are implemented for classification and regression problems using Python.

— by Steve Mutuvi
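
As a quick reference alongside that article, here is a minimal scikit-learn sketch of common metrics for both problem types; the label arrays are toy examples, not taken from the post:

```python
# A minimal sketch of common evaluation metrics for classification and regression.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, mean_absolute_error,
                             mean_squared_error, r2_score)

# Classification
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(accuracy_score(y_true, y_pred), precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))

# Regression
y_true_r = [2.5, 0.0, 2.1, 7.8]
y_pred_r = [3.0, -0.1, 2.0, 8.0]
print(mean_absolute_error(y_true_r, y_pred_r),
      mean_squared_error(y_true_r, y_pred_r),
      r2_score(y_true_r, y_pred_r))
```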

Hyperparameter Tuning, Optimization Algorithms, and More

Exploring Optimizers in Machine Learning

A guide to widely used optimizer functions and a breakdown of their benefits and limitations.

— by Nikita Sharma
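
For context, here is a minimal Keras sketch that instantiates two of the most widely used optimizers; the learning rates shown are common defaults, not recommendations from the guide:

```python
# A minimal sketch: two widely used optimizers in Keras.
import tensorflow as tf

sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)  # SGD with momentum
adam = tf.keras.optimizers.Adam(learning_rate=0.001)             # adaptive per-parameter rates

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer=adam, loss="mse")  # swap in `sgd` to compare behavior
```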

Building Production Machine Learning Systems on Google Cloud Platform (Part 1)

Scaling out to a cloud platform for fast model training, evaluation, inferencing, logging, and monitoring.

— by Bamigbade Opeyemi

The 2 Types of Hardware Architectures for Efficient Training and Inference of Deep Neural Networks

Exploring temporal and spatial hardware architectures for optimizing neural network training and inference.

— by James Le

Deep learning has a size problem

Questioning deep learning’s focus on accuracy at all costs, and exploring methods to build more efficient models.

— by Jameson Toole

Tuning Machine Learning Hyperparameters

Demystifying the meaning of hyperparameters, understanding their importance, and optimizing and fine-tuning them.

— by Gilbert Adjei
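
As an illustration of what tuning looks like in code, here is a minimal scikit-learn sketch of cross-validated grid search; the estimator and parameter grid are placeholders, not choices from the post:

```python
# A minimal sketch of hyperparameter tuning via cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)                               # evaluates every combination with 5-fold CV
print(search.best_params_, search.best_score_)
```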

Introduction to Learning Rates in Machine Learning

A machine learning model’s learning rate controls how quickly the model updates what it has learned in response to new information. Here’s a quick introductory overview.

— by Sahil Chaudhary
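
For a concrete example, here is a minimal Keras sketch that sets the learning rate through an exponential-decay schedule; the initial rate and decay settings are illustrative only:

```python
# A minimal sketch: the learning rate decays exponentially over training steps.
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9)

optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=optimizer, loss="mse")
# A smaller rate learns more slowly but more stably; a larger one risks divergence.
```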

Anatomy of a High-Performance Convolution

Exploring ways to optimize neural network operations.

— by Manas Sahni

Boosting your Machine Learning Models Using XGBoost

Learn what gradient boosting is and how to leverage it to optimize your machine learning models with XGBoost.

— by Derrick Mwiti
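
As a starting point, here is a minimal sketch using XGBoost's scikit-learn wrapper; the dataset and hyperparameter values are illustrative, not taken from the post:

```python
# A minimal XGBoost sketch via its scikit-learn wrapper.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)  # boosted trees are added one at a time, each
                             # correcting the errors of the ensemble so far
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```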

The 5 Algorithms for Efficient Deep Learning Inference on Small Devices

Examining 5 common and popular algorithms for more efficient deep learning inference, which is especially important for working with edge devices.

— by James Le

Ensemble Learning Techniques Demystified

Learn about ensembles in machine learning, with a detailed explanation of bagging, boosting, and stacking techniques for data science.

— by Rising Odegua
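
To make the three styles concrete, here is a minimal scikit-learn sketch; the base estimators and settings are illustrative choices, not the post's:

```python
# A minimal sketch of bagging, boosting, and stacking with scikit-learn.
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)  # bagging
boosting = GradientBoostingClassifier(n_estimators=100)                 # boosting
stacking = StackingClassifier(                                          # stacking
    estimators=[("tree", DecisionTreeClassifier()), ("svm", SVC())],
    final_estimator=LogisticRegression(),
)
# Each behaves like any other estimator: .fit(X_train, y_train), .predict(X_test)
```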

Research Guide: Advanced Loss Functions for Machine Learning Models

A deep dive into popular and recent advanced loss functions designed to improve a variety of ML models.

— by Derrick Mwiti

Create ML for iOS — Increasing model accuracy

Learn methods for increasing model accuracy metrics when building iOS-ready machine learning models with Apple’s Create ML.

— by Navdeep Singh

8-Bit Quantization and TensorFlow Lite: Speeding up mobile inference with low precision

Deploying efficient neural nets on mobile devices is becoming increasingly important. This post explores the concept of quantized inference and how it works in TensorFlow Lite.

— by Manas Sahni
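
As a rough illustration, here is a minimal sketch of post-training dynamic-range quantization with the TensorFlow Lite converter (full 8-bit integer quantization additionally needs a representative dataset); the Keras model here is a placeholder:

```python
# A minimal sketch of post-training quantization with the TFLite converter.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights to 8 bits

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # the converted model is typically ~4x smaller
```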

How to Make Your Machine Learning Models Robust to Outliers

Learn common methods for detecting outliers, their impacts, and how to treat them in your machine learning models.

— by Alvira Swalin
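
One common detection method such posts cover is the interquartile range (IQR) rule; here is a minimal pandas sketch (the toy series and the conventional 1.5 * IQR threshold are our illustration, not necessarily the article's):

```python
# A minimal sketch of the IQR rule for flagging and treating outliers.
import pandas as pd

s = pd.Series([10, 12, 11, 13, 12, 95, 11, 10, 12, -40])

q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = s[(s < lower) | (s > upper)]
print(outliers)                  # the extreme values 95 and -40 are flagged
clipped = s.clip(lower, upper)   # one simple treatment: clip (winsorize) them
```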

Some Essential Hacks and Tricks for Machine Learning with Python

An overview of some essential hacks and tricks for practicing machine learning with Python. Packages for general ML workflows, visualization, and more.

— by Tirthajyoti Sarkar

How to Fit Large Neural Networks on the Edge

Learn techniques to help fit neural networks onto edge devices, both for training and inference.

— by Bharath Raj

Fritz

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years, and we're using our collective intelligence to help others learn, understand, and grow using these new technologies in ethical and sustainable ways.
