
AutoML’s Rise To Prominence

By Ben Avner, co-founder & CTO, Matchly.

The concept of machine learning first arose when Alan Turing published a 1950 paper asking whether machines could achieve artificial intelligence. In 1957, Frank Rosenblatt designed the first neural network, the perceptron algorithm. Such models are called neural networks because their design loosely mimics the way the brain processes information. Though there were some early real-world applications of machine learning, such as the MADALINE network, which could eliminate background echo on phone lines, the field wouldn’t return to prominence until computer vision applications emerged in 2012.

In 2012, AlexNet, a deep neural network designed by Alex Krizhevsky, achieved 84% accuracy in ImageNet’s image classification contest; the previous best result was 74%. Thus began the wide adoption of machine learning for computer vision problems. Deep learning quickly became the standard and outperformed humans on many tasks; Google’s diabetic retinopathy and breast cancer detection projects are two examples.

ML works by feeding a neural network large amounts of data and having it learn patterns by tuning the weights of the connections between its neurons. It can solve a wide variety of problems across many different data types.
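To make that concrete, here is a minimal sketch of the idea in PyTorch (one of the frameworks discussed later in this piece); the network and the data are invented purely for illustration:

```python
# A minimal sketch, assuming PyTorch: feed data through a small network
# and repeatedly adjust its weights to reduce prediction error.
# The data set here is synthetic, purely for illustration.
import torch
import torch.nn as nn

# Synthetic data: 100 samples, 4 features each, with binary labels
X = torch.randn(100, 4)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# A tiny feed-forward neural network
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # how far predictions are from the labels
    loss.backward()              # compute gradients with respect to the weights
    optimizer.step()             # nudge the weights to reduce the loss
```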

What Types Of ML Exist?

There are many techniques for producing ML models. Some of these techniques include the following (a brief code sketch of the first three appears after the list):

• Embeddings: A technique for taking data sets and converting them from a high-dimensional to a low-dimensional space. This enables us to take a highly complex data set and make it easier to work with.

• Linear regression: A technique that enables quick and efficient modeling of the relationship between a scalar response and one or more explanatory variables.

• Trees: A technique that uses a decision tree to represent how different input variables can be used to predict a target value.

• Neural architecture search: A technique for automating the design of a model’s underlying architecture.
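As a rough illustration of the first three techniques, here is a short sketch using scikit-learn (the library choice is mine, and the data is synthetic): PCA serves as a simple embedding, followed by a linear regression and a decision tree.

```python
# Illustrative sketch of embeddings, linear regression and trees,
# using scikit-learn on synthetic data (both are my assumptions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 10))  # 200 samples in a 10-dimensional space
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)

# Embedding: project the 10-dimensional data down to 2 dimensions
X_2d = PCA(n_components=2).fit_transform(X)

# Linear regression: model the relationship between features and target
linear = LinearRegression().fit(X, y)

# Decision tree: predict the same target with a series of if/else splits
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

print(X_2d.shape)                            # (200, 2)
print(linear.score(X, y), tree.score(X, y))  # R^2 on the training data
```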

What Is AutoML?

AutoML is what its name implies: an automated, or rather semi-automated, method for building ML models. How much is automated varies by which AutoML technology or platform you use. Several exist, such as Google’s Vertex AI and AdaNet and AWS’s AutoGluon. AutoML aggregates several techniques you could otherwise leverage in a custom model.

To produce a custom model, you would need to choose a framework, choose an architecture, bring your own data, and transform and clean that data (no simple task). All of these seemingly simple steps actually require considerable computational resources and technical know-how, such as accessing virtual machines, installing GPU drivers and running distributed code.

At its core, AutoML alleviates the need to do the steps mentioned above manually. It performs many of them for you, from a relatively small data set, at the click of a button.

Above all, AutoML offers competitive performance at a reasonable price and in an exceedingly short time span. But there are some drawbacks. Most AutoML technologies operate as a sort of black box: you are limited in the number of knobs you can configure and can’t really inspect or modify the underlying process, even though that kind of access can sometimes improve model performance. A custom loss function is one example of a tweak most platforms won’t allow.
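For context, in a hand-built model a custom loss function is just a few lines (a PyTorch sketch; the penalty term is hypothetical and purely illustrative):

```python
import torch

# A hypothetical custom loss: mean squared error plus a small penalty
# on large predictions (purely illustrative, my own invention).
def custom_loss(pred, target):
    mse = torch.mean((pred - target) ** 2)
    magnitude_penalty = 0.01 * torch.mean(pred ** 2)
    return mse + magnitude_penalty
```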

Without a lot of work, such as exploring and cleaning the data, AutoML can achieve a top-five rank on Kaggle, an online platform that hosts machine learning competitions where data scientists from around the world share ideas and compete for prizes across many problem sets.

Most importantly, it reduces the process of producing a custom model to the following steps, performed with a few clicks (sketched in code below):

• Acquiring a minimal data set

• Labeling it

• Uploading it to the relevant platform

• Generating a predictive model
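As a minimal sketch of what that workflow can look like in code, here is how AWS’s AutoGluon handles tabular data (the file names and the "target" column are placeholders, and the library choice is mine; hosted platforms expose the same steps through a UI):

```python
# A minimal sketch, assuming AWS's AutoGluon for tabular data.
# "train.csv", "test.csv" and the "target" column are placeholders.
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")  # the labeled data you uploaded

# AutoGluon selects the models, tunes them and ensembles the best ones
predictor = TabularPredictor(label="target").fit(train_data)

test_data = TabularDataset("test.csv")
predictions = predictor.predict(test_data)  # the generated predictive model
```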

Trend 1: ML Is Expanding

Up until a couple of years ago, you had to have a master’s degree or a Ph.D. to implement ML. Over the years, I’ve noticed the entry barrier shifting. Nowadays, you’ll likely find fewer Ph.D.s and more capable software engineers, analysts and even semi-tech-savvy product managers.

This shift is in part due to the maturity and prevalence of capable frameworks such as PyTorch and TensorFlow, giving business leaders access to better talent at a lower cost.

Trend 2: Back To Simplicity

Custom ML is indeed very powerful, and some problems simply cannot be solved without it, or at least not as accurately. But in the past couple of years, I’ve noticed an ongoing trend: even though deep learning has a lot of benefits, there’s a growing understanding in the industry that older, simpler methods often make more sense.

These methods require less data, are faster to train, cost less, offer explainable AI capabilities and have lower prediction latency:

• Embeddings

• Linear regressions

• Trees

• Statistical methods

• AutoML

So what’s changed? In the past couple of years, we’ve seen it become easier and easier to utilize ML, thanks to:

1. Easily accessible ML frameworks such as Google’s TensorFlow and Facebook’s PyTorch

2. Public pre-made architectures such as ResNet

3. Publicly available data sets such as ImageNet

4. Techniques such as transfer learning, which enable you to retrain only part of a neural network (see the sketch after this list)

5. Neural architecture search, a brute-force method for finding a specific architecture to suit your use case

6. AutoML
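To illustrate point No. 4, here is a hedged transfer-learning sketch using PyTorch and torchvision: load a ResNet pretrained on ImageNet, freeze its layers and retrain only a new final layer (the 10-class setup is hypothetical):

```python
# A minimal transfer-learning sketch, assuming PyTorch and torchvision.
# The 10-class problem and the hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained layers so only the new head will be trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh, trainable one for our 10 classes
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer's parameters are handed to the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```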

Where in the past people mostly developed large, complex models in-house, I believe that in the future new practitioners will opt for simpler models, in the form of AutoML, that offer strong performance with far less effort.
