The Ultimate Guide to the RANSAC Algorithm

A few years ago, Elon Musk put out a video online called "How Not to Land an Orbital Rocket Booster". The video showed SpaceX's ups and downs in trying to land rockets back on Earth, every attempt ending in failure: the rocket could blow up before even taking off, right at landing, while in the air, or sometimes even after having landed.

But failing dozens of times has its benefits, because after each attempt, the SpaceX team learned from experience, refined their model, and made small changes until the rocket was able to land. In particular, they made a breakthrough in perfecting the grid fins and engine throttling for precision control during descent.

This process is a lot like the RANSAC algorithm. Just like SpaceX kept testing and fixing their rockets to get the best design, RANSAC picks random data points, builds a model, and keeps refining it to find the best fit. Both involve trying things out, learning from mistakes, and getting better with each go.

And this is what I'd like to talk about in this article: what the RANSAC algorithm is, how it works, and how you can use it, mainly in autonomous tech applications like autonomous robots or self-driving cars.

On this note, let's begin:

What is the RANSAC algorithm?

RANSAC (RANdom SAmple Consensus) is an iterative outlier detection algorithm. Its goal is to find the best fit for data that's full of noise or errors. For this, it picks random samples, makes a mathematical model, and checks how many points fit that model. The model with the most good fits is chosen as the best solution.

So you can already see a methodology from this simple definition:

  1. Pick random samples
  2. Fit a mathematical model
  3. Check how many points fit the model and keep the model with the most matches

We'll come back to the "how" in a second; for now I'd like to show you a visual example.

Say you are a Computer Vision Engineer, and you are tasked with finding lane lines on the road for self-driving cars. It's a fun task to do, but it's also highly critical for self-driving cars. If you get the parameters wrong and estimate the lines badly, you're in a bad spot. Imagine you're using a Computer Vision algorithm that finds some interest points, and then fits a line on these points.

The algorithm might have a bit of noise, or "outliers", and the RANSAC algorithm is precisely here to find the data points that don't belong to the "model", and highlight them:

The RANSAC algorithm is a great fit for filtering out noise, removing outliers, and building models of planes or structures from the data

See how the line fitting would have been negatively impacted if we had considered the red points as part of our equation?

So this is the point of RANSAC: to find outliers, and more importantly: inliers.

Now let's see...

How does RANSAC Work?

As I said earlier, the algorithm follows a 3-step process. Let me detail it a bit more:

  1. Start with the input data
  2. Pick random samples
  3. Fit a mathematical model (line, curve, 3d shape, ...)
  4. Compute a cost by checking how many points fit the model
  5. Repeat until you find the model with the lowest cost

So it's an iterative process (we repeat it until we "find" a solution) that involves fitting mathematical models along the way. Hmm... Let's take a look.

1) Start with the input data

In most examples, RANSAC is shown as a 2D line fitting problem. However, in practice, RANSAC is not limited to 2D; it's commonly applied to 3D problems and higher-dimensional spaces in real-world applications. We could call it "model-agnostic". You can use it to fit planes, circles, ellipses, point clouds, or even complex transformations like homographies or 3D object poses.

What's going to be important is how you define your model, so let's see...

2) Pick Random Samples and 3) Fit a Mathematical Model

Let's go back to the lane detection example, and focus on the left part only. The first step involves picking a random subset from all the points in the observed data. In the case of a 2D line like ours, we can pick 2 points, and then fit a model. In our case, the model is a straight line, so a linear model: y = ax + b.
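
To make steps 2 and 3 concrete, here is a minimal NumPy sketch of this sampling-and-fitting step (the point data and variable names are made up for illustration):

```python
import numpy as np

# Hypothetical data: N interest points as (x, y) rows.
rng = np.random.default_rng(42)
points = rng.uniform(0, 100, size=(50, 2))

# Step 2: pick 2 random points (the minimum needed to define a line).
i, j = rng.choice(len(points), size=2, replace=False)
(x1, y1), (x2, y2) = points[i], points[j]

# Step 3: fit the linear model y = a*x + b through those two points.
a = (y2 - y1) / (x2 - x1)   # slope (assumes x1 != x2)
b = y1 - a * x1             # intercept
print(f"Candidate model: y = {a:.2f}x + {b:.2f}")
```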

The first 3 steps are pretty random, but also critical, because you define the problem you want to solve

See how it begins? It takes 2 points, and fits a linear regression model. Now this is really just the beginning, but if we focus on the "model" aspect for a second, realize that it does NOT have to be just 2D shapes. What we'll want to do is to find the outliers. I like to refer to these as the points that "don't belong".

What other applications could have things that don't belong? For example, if you're transcribing an interview, RANSAC could find all the "um"s and "uh"s in the text. Or it could remove noise in audio. Or estimate 3D transformations. I'll come back to these a bit later; for now, let's naively continue the algorithm.

4) Compute a Cost by checking how many points fit the model

Let's see how this works

The cost can be any loss function that works for your problem: L1, L2, perpendicular distance. Depending on which one you use, penalties will be applied differently.

How do we calculate the cost? In this specific example, the cost is going to be the L1 distance from all the other points to our model. You know how to compute the distance between a point and a line, right? Great. Now, if you wanted to get fancy, you could also try an L2 distance (squared), and thus penalize more heavily the points that are really far from your line.
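
As a hedged sketch of this step, here is one way to score a candidate line against all points (the threshold and data are made up; I use the vertical residual as the L1 distance, with the perpendicular version noted in a comment):

```python
import numpy as np

def line_cost(points, a, b, threshold=2.0):
    """Score a candidate line y = a*x + b against all points."""
    x, y = points[:, 0], points[:, 1]
    # L1 residual: vertical distance from each point to the line.
    # (For the true perpendicular distance, divide by sqrt(a**2 + 1).)
    residuals = np.abs(y - (a * x + b))
    cost = residuals.sum()            # total L1 cost of this candidate model
    inliers = residuals < threshold   # points close enough to count as inliers
    return cost, inliers

# Tiny example: 4 points roughly on y = 2x + 1, plus one outlier.
pts = np.array([[0, 1.1], [1, 2.9], [2, 5.2], [3, 7.0], [4, 30.0]])
cost, inliers = line_cost(pts, a=2.0, b=1.0)
print(cost, inliers.sum(), "inliers")
```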

5) Repeat and find the best line

As we said, it's an iterative algorithm. So we start over! Select 2 points at random, fit a simple linear regression again, calculate the cumulative cost, and do that for a selected number of iterations. Let me show you what it looks like when the algorithm finds the best "2 points".
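
Putting it all together, a minimal RANSAC line-fitting loop could look like the sketch below (all the data, thresholds, and iteration counts are illustrative, not values from the figures above):

```python
import numpy as np

def ransac_line(points, n_iterations=100, threshold=2.0, rng=None):
    """Minimal RANSAC line fitting: returns the (a, b) with the most inliers."""
    rng = rng or np.random.default_rng()
    best_model, best_inliers = None, 0

    for _ in range(n_iterations):
        # 2) Pick 2 random points and 3) fit y = a*x + b through them.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):          # skip degenerate (vertical) samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1

        # 4) Compute how many points fit this candidate model.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        n_inliers = int((residuals < threshold).sum())

        # 5) Keep the best model seen so far.
        if n_inliers > best_inliers:
            best_model, best_inliers = (a, b), n_inliers

    return best_model, best_inliers

# Illustrative data: points on y = 2x + 1 with noise, plus some outliers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.5, 100)
y[:15] += rng.uniform(20, 40, 15)      # 15 outliers
model, n_inliers = ransac_line(np.column_stack([x, y]), n_iterations=200)
print(model, n_inliers)
```

The model with the largest consensus set (the most inliers) wins, which is exactly the "best 2 points" idea above.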

What a Good Model looks like

So how do we know we have the best line? When should we continue? Stop? It's a bit up to you, and how you set the RANSAC parameters in the iterative method. If you look at the model definition from Scikit-Learn, you're going to see it contains many parameters...

Function call definition from Scikit-Learn. Can you intuitively guess what the highlighted variables do?

Some of these parameters:

  • estimator: The actual model you want to use. It can be LinearRegression() for example.
  • min_samples: The minimum number of points to sample at each iteration (for 3D plane fitting, you can't do that with just 2 points, right?)
  • max_trials: The maximum number of iterations the model will run. If you select 2, it might never find the perfect line. If you select 5000, it is going to be much slower, and a bit inefficient, since it may need only 50 iterations to find the perfect line.
  • stop_score: A threshold score above which the model is considered "good enough". Sometimes you want the perfect output, but in many cases, good enough is good enough.
  • loss: The loss function you're going to use to calculate the total cost.

See? It's pretty customizable. In this example, it's almost intuitive that after 2 or 3 attempts, the model is already going to find the right random points. But there are many cases where it really needs to iterate hundreds of times until it finds the right model.
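
Here is a short sketch of how these parameters come together with Scikit-Learn's RANSACRegressor (the data is synthetic, and parameter names vary slightly across versions; older releases call estimator base_estimator and use different loss strings):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

# Illustrative noisy line data with outliers.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X.ravel() + 1 + rng.normal(0, 0.3, 100)
y[:15] += 25                             # inject 15 outliers

ransac = RANSACRegressor(
    estimator=LinearRegression(),  # the model to fit on each random sample
    min_samples=2,                 # 2 points are enough for a line
    residual_threshold=1.0,        # inlier/outlier cutoff on the residuals
    max_trials=100,                # maximum number of random iterations
    loss="absolute_error",         # L1 loss on the residuals
    random_state=0,
)
ransac.fit(X, y)

print("slope:", ransac.estimator_.coef_[0], "intercept:", ransac.estimator_.intercept_)
print("inliers:", ransac.inlier_mask_.sum(), "/", len(y))
```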

Now, something else:

If you pay attention to the definition above, there is also a value called the residual_threshold (second line, middle); and it's actually the most important one. The residual threshold is the cutoff used to classify points as either inliers or outliers. In some projects, like 3D segmentation, this is basically what gives you the output.

The residual_threshold value determines the range of the inliers. It has to be set very accurately, and since RANSAC is most of the time used in an unsupervised manner (without labels), you don't always know if you chose the right one

Now, I can see some of you coming to ask: "CAN'T WE AUTOMATE THESE PARAMETERS?! DOES IT HAVE TO BE MANUAL?!" And I'd like to say: you can pretty much use Grid Search and test hundreds of parameter combinations, or you can use Deep Learning and analyze a whole dataset of 500 million lane lines and thresholds, or you can use adaptive thresholds, or even more advanced RANSAC versions.
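
For instance, a very naive "grid search" could simply sweep a few residual_threshold values and inspect how the inlier count and the fitted slope change. This is only an exploration sketch (reusing the X, y, and imports from the previous snippet), not an automatic selection rule:

```python
# A naive sweep over residual_threshold (reusing X, y, and the imports above).
for threshold in [0.1, 0.5, 1.0, 2.0, 5.0]:
    model = RANSACRegressor(
        estimator=LinearRegression(),
        min_samples=2,
        residual_threshold=threshold,
        max_trials=200,
        random_state=0,
    )
    model.fit(X, y)
    print(f"threshold={threshold:>4}: "
          f"{model.inlier_mask_.sum():>3} inliers, "
          f"slope={model.estimator_.coef_[0]:.2f}")
```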

Personally? I think there is a certain charm in manually tweaking the parameters. It makes you think about your data, start with a basic assumption, think about model parameters, try out multiple models, benchmark them against your measurement errors, etc... And even though it's not the "best" way to do it, I'd recommend starting manually.

Now that we saw how RANSAC works, let's see some applications of RANSAC.

Applications of RANSAC

Let's begin with the simple 2D models, and then try to explore some more advanced ones...

#1: Simple 2D Fitting

Some simple examples of how to apply the RANSAC algorithm to 2D data

Here, you can see that RANSAC can also fit an ellipse or a curve. It can also be applied to more "real-life" examples like lane lines or LiDAR point clouds.
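
For curves, one common trick is to wrap RANSAC around a polynomial model; here is a hedged sketch using Scikit-Learn's PolynomialFeatures in a pipeline (the degree, thresholds, and data are all illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Illustrative quadratic data with a handful of outliers.
rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 - X.ravel() + 2 + rng.normal(0, 0.2, 100)
y[::10] += 15                            # every 10th point is an outlier

# RANSAC wrapped around a degree-2 polynomial regression.
curve_ransac = RANSACRegressor(
    estimator=make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    min_samples=3,              # need at least 3 points to define a parabola
    residual_threshold=1.0,
    max_trials=200,
    random_state=0,
)
curve_ransac.fit(X, y)
print("inliers found:", curve_ransac.inlier_mask_.sum(), "/", len(y))
```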

#2: 3D Planes

The first time I learned about random sample consensus (RANSAC), it was actually in the context of 3D point clouds, where you had to tell apart the "ground" (inliers) from everything else (outliers). I spent a lot of time using RANSAC for 3D data, to the point where it became part of my course POINT CLOUDS CONQUEROR.

Let's look at an example where the algorithm helps highlight objects and drivable surfaces.

From my course POINT CLOUDS CONQUEROR, where you'll learn how to use RANSAC with 3D Data
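
To make the 3D case concrete, here is a minimal NumPy sketch of RANSAC plane segmentation: sample 3 points, derive the plane they define, and count the points within a distance threshold (all the values are illustrative; point cloud libraries such as Open3D ship a built-in version of this):

```python
import numpy as np

def ransac_plane(points, n_iterations=200, threshold=0.05, rng=None):
    """Minimal RANSAC plane fit on an (N, 3) point cloud.
    Returns (normal, d) for the plane n.x + d = 0 and the inlier mask."""
    rng = rng or np.random.default_rng()
    best_plane, best_mask, best_count = None, None, 0

    for _ in range(n_iterations):
        # Sample 3 points: the minimum needed to define a plane.
        p1, p2, p3 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                  # degenerate (collinear) sample, retry
            continue
        normal /= norm
        d = -normal @ p1

        # Distance of every point to the candidate plane.
        distances = np.abs(points @ normal + d)
        mask = distances < threshold
        if mask.sum() > best_count:
            best_plane, best_mask, best_count = (normal, d), mask, mask.sum()

    return best_plane, best_mask

# Illustrative cloud: a flat "ground" at z=0 plus scattered "objects" above it.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, 500), rng.uniform(0, 10, 500),
                          rng.normal(0, 0.02, 500)])
objects = rng.uniform([0, 0, 0.5], [10, 10, 3], size=(150, 3))
plane, inlier_mask = ransac_plane(np.vstack([ground, objects]))
print("ground points found:", inlier_mask.sum())
```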

#3: Transformations

In many applications of SLAM, or Point Cloud Registration, or Transformation Estimation, there is a need to align a source point cloud to a target point cloud. Like this:

Alignment is a very common task in robotics, and RANSAC can be very useful there too

You may be wondering: how on earth could RANSAC be of any use here? Well, if you think of the process, it can be:

  1. Input Data: A source point cloud and a target point cloud
  2. Pick Samples: We could pick n points from the source point cloud, and their corresponding points from the target point cloud (assuming an overlap)
  3. Model Fitting: From these point pairs, we could estimate the rigid-body transformation needed to go from the source to the target
  4. Compute a loss function: We could apply this transformation to every other point and see how "aligned" the final point cloud is.
  5. Repeat/Iterative Approach: We can try other point pairs that make the model do better than previous iterations. The goal here is to find the transformation that best aligns the point clouds (see the sketch after this list).
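
Here is a rough NumPy sketch of that loop, assuming we already have candidate correspondences between the source and target clouds (in practice these usually come from feature matching); the rigid transform for each 3-point sample is estimated with the classic SVD (Kabsch) method:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch: best-fit rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def ransac_registration(src, dst, n_iterations=500, threshold=0.05, rng=None):
    """RANSAC over correspondences: src[i] is assumed to match dst[i]."""
    rng = rng or np.random.default_rng()
    best, best_count = None, 0
    for _ in range(n_iterations):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        R, t = estimate_rigid_transform(src[idx], dst[idx])
        # Cost: how well does this transform align *all* correspondences?
        errors = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        count = int((errors < threshold).sum())
        if count > best_count:
            best, best_count = (R, t), count
    return best, best_count

# Tiny usage example: a rotated and shifted copy of a random cloud.
rng = np.random.default_rng(0)
src = rng.uniform(0, 1, size=(200, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
(R, t), n_inliers = ransac_registration(src, dst)
print(n_inliers, "aligned correspondences")
```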

In reality, the applications of RANSAC are endless; from point clouds, to financial data, to GPS data filtering, to statistics, to sensor fusion, to point cloud registration, to audio data, to healthcare data, and more...

Now, at this point, it's likely that you have a few questions, let's see...

Is RANSAC a guaranteed thing? Will it ever "converge"?

No. As we showed in the function call, it's not a guaranteed thing (is anything in life really ever guaranteed?). If you set your number of iterations to 20, it may need 200 until it finds the right one. But what if you set it to 1,000,000? Well, you'll surely increase your chances of finding the right model, but there will still be some uncertainty, coming from your model, your loss function, or simply the complexity of the data.

Think of it like finding a needle in a haystack: your odds improve with more attempts, but you may still miss it after many iterations.
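
There is actually a classic back-of-the-envelope estimate for this: if w is the inlier ratio, n the number of points per sample, and p the success probability you want, you need roughly log(1 - p) / log(1 - w^n) iterations to draw at least one all-inlier sample. A quick sketch:

```python
import math

def ransac_iterations(inlier_ratio, sample_size, success_prob=0.99):
    """Standard estimate of how many RANSAC iterations are needed so that,
    with probability success_prob, at least one sample is all inliers."""
    return math.ceil(math.log(1 - success_prob)
                     / math.log(1 - inlier_ratio ** sample_size))

# Example: 50% inliers, 2-point samples (a 2D line), 99% success probability.
print(ransac_iterations(0.5, 2))   # ~17 iterations
print(ransac_iterations(0.2, 3))   # ~574 iterations for a plane with 20% inliers
```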

Is RANSAC evolving? Are there better techniques?

Yes! Tons! In fact, some entire papers have been written about RANSAC variants like MSAC (M-estimator SAmple Consensus), which, instead of just counting inliers, minimizes a cost function based on the residuals (outliers contribute higher penalties in the cost function).

Another one is PROSAC (Progressive Sample Consensus), which samples points progressively based on a ranking of their quality or confidence score. Basically, you need fewer iterations to find the right points.

I'll name a few others here, and they all start from RANSAC but improve one thing:

  • LO-RANSAC (Locally Optimized RANSAC): Adds a local optimization step to refine the model after identifying a promising set of inliers.
  • USAC (Universal RANSAC): Combines multiple strategies, such as PROSAC, LO-RANSAC, and pre-verification techniques.
  • R-RANSAC (Randomized RANSAC): Adds a pre-check step before full consensus evaluation to quickly reject bad models.
  • Multi-RANSAC: Extends RANSAC to simultaneously fit multiple models (e.g., multiple planes in a point cloud).
  • and more...

Okay, we've seen many examples and we now have a good idea of how RANSAC works. Let's do a summary...

Summary & Next Steps: Random Sample Consensus

Summary

  • The RANSAC algorithm, or Random Sample Consensus, is an iterative outlier detection algorithm used to find the best fit for data with noise or errors by picking random samples, fitting a mathematical model, and evaluating how many data points align with that model.
  • RANSAC is highly versatile, often demonstrated in 2D data fitting scenarios but applicable to 3D problems and higher-dimensional spaces, such as computer vision, point clouds, and transformation estimation.
  • The algorithm follows a structured process: starting with input data, selecting random samples, fitting a model, computing a cost function, and iterating until the best model is found.
  • RANSAC's applications are diverse, ranging from simple 2D fitting tasks to complex 3D plane detection and transformations in fields like SLAM, point cloud registration, and sensor fusion.
  • RANSAC is NOT a guaranteed-convergence algorithm, meaning it's not guaranteed to find the best model to fit the data. But a high number of iterations greatly increases the probability that it will.
  • Numerous RANSAC variants exist, such as MSAC, PROSAC, and LO-RANSAC, each enhancing the original algorithm by refining model selection, improving efficiency, or minimizing cost functions based on residuals.

Next Steps

If you want to learn more about Machine Learning algorithms and how they're used in autonomous robots, I send daily emails about these technologies, and they're read by over 10,000 Engineers. You can join the daily emails here.