Tag Archives | classification

Stochastic Gradient Descent (SGD) with Python

In last week’s blog post, we discussed gradient descent, a first-order optimization algorithm that can be used to learn a set of classifier coefficients for parameterized learning. However, the “vanilla” implementation of gradient descent can be prohibitively slow to run on large datasets — in fact, it can even be considered computationally wasteful. Instead, we should apply Stochastic […]
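
To give a flavor of the idea before reading the full post: rather than evaluating the gradient over the entire dataset on every iteration, SGD updates the parameters from small mini-batches. A minimal sketch, assuming a toy linear model with squared loss and made-up data, batch size, and learning rate:

import numpy as np

# hypothetical toy data: 100 examples, 2 features, linear target plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)

W = np.zeros(2)     # coefficients to learn
alpha = 0.01        # learning rate
batch_size = 16

for epoch in range(50):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        # gradient of the mean squared error on the mini-batch only,
        # instead of over the full dataset as in vanilla gradient descent
        grad = (2.0 / len(batch)) * Xb.T @ (Xb @ W - yb)
        W -= alpha * grad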

Understanding regularization for image classification and machine learning

In previous tutorials, I’ve discussed two important loss functions: Multi-class SVM loss and cross-entropy loss (which we usually refer to in conjunction with Softmax classifiers). In order to keep our discussions of these loss functions straightforward, I purposely left out an important component: regularization. While our loss function allows us to determine how well (or poorly) our […]
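
As a preview of where the series is heading: the most common form of regularization simply adds a penalty on the weights to the data loss. A minimal sketch, assuming L2 (weight decay) regularization and a hypothetical weight matrix and data loss value:

import numpy as np

# hypothetical weight matrix for a linear classifier (3 classes, 4 features)
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))
data_loss = 1.25   # stand-in for the average hinge or cross-entropy loss
lam = 0.01         # regularization strength, itself a hyperparameter

# the L2 penalty discourages large weights, preferring smaller, more
# diffuse weight values that tend to generalize better
reg_loss = lam * np.sum(W ** 2)
total_loss = data_loss + reg_loss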

Softmax Classifiers Explained

Last week, we discussed Multi-class SVM loss; specifically, the hinge loss and squared hinge loss functions. A loss function, in the context of Machine Learning and Deep Learning, allows us to quantify how “good” or “bad” a given classification function (also called a “scoring function”) is at correctly classifying data points in our dataset. However, […]
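
In short, a Softmax classifier exponentiates and normalizes the raw class scores into probabilities, then takes the negative log probability of the correct class as the loss. A minimal sketch with made-up scores:

import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # hypothetical class scores for one example
true_class = 0                       # index of the correct label

# shift by the max score for numerical stability before exponentiating
exp_scores = np.exp(scores - np.max(scores))
probs = exp_scores / np.sum(exp_scores)  # normalized class probabilities

# cross-entropy loss: negative log probability of the correct class
loss = -np.log(probs[true_class])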

Multi-class SVM Loss

A couple of weeks ago, we discussed the concepts of both linear classification and parameterized learning. This type of learning allows us to take a set of input data and class labels, and actually learn a function that maps the input to the output predictions, simply by defining a set of parameters and optimizing over them. Our linear classification tutorial focused […]
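
For a single example, the multi-class hinge loss sums the margins by which each incorrect class score comes within a fixed margin of the correct class score. A minimal sketch with made-up scores:

import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # hypothetical class scores for one example
y = 0                                # index of the correct class
delta = 1.0                          # the margin

# penalize each incorrect class whose score is not at least delta
# below the correct class score
margins = np.maximum(0, scores - scores[y] + delta)
margins[y] = 0                       # the correct class contributes no loss
loss = np.sum(margins)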

An intro to linear classification with Python

Over the past few weeks, we’ve started to learn more and more about machine learning and the role it plays in computer vision, image classification, and deep learning. We’ve seen how Convolutional Neural Networks (CNNs) such as LeNet can be used to classify handwritten digits from the MNIST dataset. We’ve applied the k-NN algorithm to classify whether or […]
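
At its core, a linear classifier is nothing more than a scoring function f(x_i, W, b) = W x_i + b: one score per class, with the highest score taken as the prediction. A minimal sketch, assuming 3 classes and a flattened 32x32x3 input image:

import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(3, 3072))   # weight matrix: one row of weights per class
b = rng.normal(size=3)           # bias vector
x = rng.normal(size=3072)        # a flattened 32x32x3 image

scores = W @ x + b               # one score per class
prediction = np.argmax(scores)   # predicted label: the class with the top score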

How to tune hyperparameters with Python and scikit-learn

In last week’s post, I introduced the k-NN machine learning algorithm, which we then applied to the task of image classification. Using the k-NN algorithm, we obtained 57.58% classification accuracy on the Kaggle Dogs vs. Cats dataset challenge. The question is: “Can we do better?” Of course we can! Obtaining higher accuracy for nearly any machine learning algorithm […]
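
In scikit-learn terms, tuning usually means an exhaustive, cross-validated search over candidate hyperparameter values. A minimal sketch using GridSearchCV with k-NN on a built-in dataset (the digits data, standing in for Dogs vs. Cats):

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

# candidate hyperparameter values for k-NN
params = {"n_neighbors": [1, 3, 5, 7, 9], "metric": ["euclidean", "manhattan"]}

# try every combination with 5-fold cross-validation and keep the best
grid = GridSearchCV(KNeighborsClassifier(), params, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)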
