Detecting Out-of-Distribution Datapoints via Embeddings or Predictions

What’s the Achilles’ heel of modern ML?

Model predictions are often wrong for out-of-distribution (OOD) inputs that do not resemble the training data. To address this, researchers have proposed many complex OOD detection algorithms, each applicable only to specific data types.

In contrast, our latest research demonstrates straightforward methods that work just as well and apply directly to any type of data for which either a feature embedding or a trained classifier is available.
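To illustrate the two settings, here is a minimal sketch of standard baseline scores in this spirit (not necessarily our exact methods): a k-nearest-neighbor distance score computed from feature embeddings, and a maximum-softmax-probability score computed from a trained classifier's predictions. The function names and the choice of k are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_ood_scores(train_embeddings, test_embeddings, k=10):
    """Embedding-based baseline: score each test point by its mean distance
    to its k nearest training embeddings. Higher score = more likely OOD."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_embeddings)
    dists, _ = nn.kneighbors(test_embeddings)
    return dists.mean(axis=1)

def msp_ood_scores(pred_probs):
    """Classifier-based baseline: score each example by 1 minus the maximum
    predicted class probability. Low-confidence predictions score as more OOD."""
    pred_probs = np.asarray(pred_probs)
    return 1.0 - pred_probs.max(axis=1)

# Illustrative usage with synthetic data: points drawn far from the
# training distribution should receive higher OOD scores.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 4))
test_in = rng.normal(0.0, 1.0, size=(5, 4))    # in-distribution
test_out = rng.normal(10.0, 1.0, size=(5, 4))  # clearly out-of-distribution
scores = knn_ood_scores(train, np.vstack([test_in, test_out]), k=5)
```

Either score can then be thresholded (e.g. at a high percentile of scores on held-out in-distribution data) to flag likely OOD inputs.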

With these findings, we’ve:

  • Open-sourced our methods
  • Published new research and easy tutorials
  • Benchmarked their performance at detecting OOD images

Hope this helps make your ML more trustworthy!

Blog: (+ many more resources linked within!)