Tag Archives | classification

Stochastic Gradient Descent (SGD) with Python

In last week’s blog post, we discussed gradient descent, a first-order optimization algorithm that can be used to learn a set of classifier coefficients for parameterized learning. However, the “vanilla” implementation of gradient descent can be prohibitively slow to run on large datasets — in fact, it can even be considered computationally wasteful. Instead, we should apply Stochastic […]
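The core idea — updating the weights once per small random batch rather than once per full pass over the data — can be sketched roughly like this (a toy logistic-style classifier in NumPy; the variable names and batch size are illustrative, not the post's actual code):

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation used as the scoring function."""
    return 1.0 / (1.0 + np.exp(-x))

def sgd_epoch(X, y, W, alpha=0.01, batch_size=2, rng=None):
    """One epoch of mini-batch SGD.

    Instead of computing the gradient over the entire dataset (vanilla
    gradient descent), we update W once per small random batch.
    """
    rng = rng or np.random.default_rng(42)
    idxs = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idxs[start:start + batch_size]
        preds = sigmoid(X[batch].dot(W))
        error = preds - y[batch]          # prediction error on this batch
        gradient = X[batch].T.dot(error)  # gradient w.r.t. the weights
        W = W - alpha * gradient          # step against the gradient
    return W

# Tiny toy problem: 4 points, 2 features plus a bias column.
X = np.array([[0.1, 0.2, 1.0], [0.2, 0.1, 1.0],
              [0.9, 0.8, 1.0], [0.8, 0.9, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
W = np.zeros(3)
for _ in range(100):
    W = sgd_epoch(X, y, W)
```

Because each update sees only a batch, the path to the minimum is noisier than full-batch gradient descent, but each step is far cheaper on large datasets.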

Every relationship has its building blocks. Love. Trust. Mutual respect. Yesterday, I asked my girlfriend of 7.5 years to marry me. She said yes. It was quite literally the happiest day of my life. I feel like the luckiest guy in the world, not only because I have her, but also because this incredible PyImageSearch […]

A simple neural network with Python and Keras

If you’ve been following along with this series of blog posts, then you already know what a huge fan I am of Keras. Keras is a super powerful, easy-to-use Python library for building neural networks and deep learning models. In the remainder of this blog post, I’ll demonstrate how to build a simple neural […]
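To give a rough sense of what such a network computes, here is a hypothetical two-layer feedforward pass written in plain NumPy (this is an illustration of the underlying math, not the Keras code from the post; the layer sizes are made up):

```python
import numpy as np

def relu(x):
    """Rectified linear activation."""
    return np.maximum(0, x)

def softmax(x):
    """Row-wise softmax, shifted for numerical stability."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical layer sizes: 4 inputs -> 8 hidden units -> 3 classes.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((8, 3)) * 0.1, np.zeros(3)

def forward(X):
    """Forward pass: dense layer + ReLU, then dense layer + softmax."""
    hidden = relu(X.dot(W1) + b1)
    return softmax(hidden.dot(W2) + b2)

probs = forward(rng.standard_normal((5, 4)))  # class probabilities per row
```

A library like Keras wraps exactly this kind of layer stacking (plus training) behind a much friendlier API.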

Understanding regularization for image classification and machine learning

In previous tutorials, I’ve discussed two important loss functions: Multi-class SVM loss and cross-entropy loss (which we usually refer to in conjunction with Softmax classifiers). In order to keep our discussions of these loss functions straightforward, I purposely left out an important component: regularization. While our loss function allows us to determine how well (or poorly) our […]
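The most common form, L2 regularization, simply adds a penalty on the squared magnitude of the weights to whatever data loss we are already computing. A minimal sketch (the λ value and weight matrix are made-up examples):

```python
import numpy as np

def regularized_loss(data_loss, W, lam=0.01):
    """Total loss = data loss + L2 weight penalty.

    The L2 term penalizes large weights, discouraging the classifier
    from fitting the training data too closely.
    """
    return data_loss + lam * np.sum(W * W)

W = np.array([[0.5, -1.2],
              [2.0,  0.1]])
total = regularized_loss(1.25, W, lam=0.01)
```

The hyperparameter `lam` controls the trade-off: larger values prefer smaller, more diffuse weights at the cost of a worse fit to the training data.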

Softmax Classifiers Explained

Last week, we discussed Multi-class SVM loss; specifically, the hinge loss and squared hinge loss functions. A loss function, in the context of Machine Learning and Deep Learning, allows us to quantify how “good” or “bad” a given classification function (also called a “scoring function”) is at correctly classifying data points in our dataset. However, […]
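A Softmax classifier maps raw class scores to probabilities and then takes the negative log of the probability assigned to the correct class. A small sketch of that computation (the score values here are hypothetical):

```python
import numpy as np

def softmax_loss(scores, correct_class):
    """Cross-entropy loss for one example given raw class scores."""
    scores = scores - scores.max()  # shift for numerical stability
    probs = np.exp(scores) / np.sum(np.exp(scores))
    return -np.log(probs[correct_class]), probs

scores = np.array([3.2, 5.1, -1.7])  # hypothetical scores for 3 classes
loss, probs = softmax_loss(scores, correct_class=0)
```

The loss is near zero when the correct class gets probability close to 1, and grows without bound as that probability shrinks.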

Multi-class SVM Loss

A couple of weeks ago, we discussed the concepts of both linear classification and parameterized learning. This type of learning allows us to take a set of input data and class labels, and actually learn a function that maps the input to the output predictions, simply by defining a set of parameters and optimizing over them. Our linear classification tutorial focused […]
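The hinge loss sums, over every incorrect class, the margin by which that class's score comes within some fixed delta of the correct class's score. As a sketch (the scores are a made-up example):

```python
import numpy as np

def multiclass_svm_loss(scores, correct_class, delta=1.0):
    """Multi-class SVM (hinge) loss for a single example.

    Each incorrect class contributes loss when its score comes within
    `delta` of the correct class's score.
    """
    margins = np.maximum(0, scores - scores[correct_class] + delta)
    margins[correct_class] = 0  # the correct class contributes no loss
    return margins.sum()

# Hypothetical scores for 3 classes, with class 2 being correct:
scores = np.array([13.0, -7.0, 11.0])
loss = multiclass_svm_loss(scores, correct_class=2)
```

Here class 0 scores 13, which beats the correct class's 11 by more than the margin, so it contributes 13 − 11 + 1 = 3 to the loss, while class 1 is far enough below and contributes nothing.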

An intro to linear classification with Python

Over the past few weeks, we’ve started to learn more and more about machine learning and the role it plays in computer vision, image classification, and deep learning. We’ve seen how Convolutional Neural Networks (CNNs) such as LeNet can be used to classify handwritten digits from the MNIST dataset. We’ve applied the k-NN algorithm to classify whether or […]
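At its core, a linear classifier is just a scoring function f(x) = Wx + b: one score per class, computed as a weighted sum of the input pixels plus a bias. A tiny sketch with made-up parameters (3 classes, a 4-pixel "image"):

```python
import numpy as np

# Hypothetical learned parameters: 3 classes, 4-dimensional input.
W = np.array([[0.2, -0.5, 0.1,  2.0],
              [1.5,  1.3, 2.1,  0.0],
              [0.0,  0.25, 0.2, -0.3]])
b = np.array([1.1, 3.2, -1.2])

def score(x):
    """Linear scoring function f(x) = Wx + b: one score per class."""
    return W.dot(x) + b

x = np.array([56.0, 231.0, 24.0, 2.0])  # a flattened "image" of 4 pixel values
scores = score(x)                        # predicted class = index of max score
```

Learning then amounts to finding W and b that give the correct class the highest score, which is exactly what the loss functions and optimizers in the later posts are for.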

How to tune hyperparameters with Python and scikit-learn

In last week’s post, I introduced the k-NN machine learning algorithm which we then applied to the task of image classification. Using the k-NN algorithm, we obtained 57.58% classification accuracy on the Kaggle Dogs vs. Cats dataset challenge: The question is: “Can we do better?” Of course we can! Obtaining higher accuracy for nearly any machine learning algorithm […]
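The simplest tuning strategy, a grid search, just tries every combination of candidate hyperparameter values and keeps the one with the best validation score. A bare-bones sketch in plain Python (the parameter names and the `fake_evaluate` scoring function are made up purely for illustration; in practice you would plug in real validation accuracy, or use scikit-learn's built-in grid search):

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Exhaustively try every hyperparameter combination and keep the best.

    `param_grid` maps parameter names to lists of candidate values;
    `evaluate` returns a validation score for one combination (higher is better).
    """
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical example: pretend validation accuracy peaks at k=5 with "l2".
def fake_evaluate(params):
    return -abs(params["n_neighbors"] - 5) - (0.1 if params["metric"] == "l1" else 0.0)

best, score = grid_search(
    {"n_neighbors": [1, 3, 5, 7], "metric": ["l1", "l2"]}, fake_evaluate)
```

The cost grows multiplicatively with the number of parameters and candidates, which is why randomized search is often preferred when the grid gets large.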

k-NN classifier for image classification

Now that we’ve had a taste of Deep Learning and Convolutional Neural Networks in last week’s blog post on LeNet, we’re going to take a step back and start to study machine learning in the context of image classification in more depth. To start, we’ll review the k-Nearest Neighbor (k-NN) classifier, arguably the most simple, easy […]
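The whole algorithm fits in a few lines: classify a point by majority vote among its k nearest training points. A minimal sketch on a toy 2-D dataset (the data and labels are invented for illustration):

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each point
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy dataset: "cat" points near the origin, "dog" points near (1, 1).
X_train = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = np.array(["cat", "cat", "cat", "dog", "dog", "dog"])
label = knn_classify(X_train, y_train, np.array([0.1, 0.1]))
```

Note that there is no training step at all: the classifier simply memorizes the dataset, which is why prediction gets slow as the training set grows.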

Detecting cats in images with OpenCV

Did you know that OpenCV can detect cat faces in images…right out-of-the-box with no extras? I didn’t either. But after Kendrick Tan broke the story, I had to check it out for myself…and do a little investigative work to see how this cat detector seemed to sneak its way into the OpenCV repository without me noticing (much […]