Show HN: TabPFN v2 – A SOTA foundation model for small tabular data

nature.com

149 points by onasta 5 days ago


I am excited to announce the release of TabPFN v2, a tabular foundation model that delivers state-of-the-art predictions on small datasets in just 2.8 seconds for classification and 4.8 seconds for regression, compared to strong baselines tuned for 4 hours. Published in Nature, the model outperforms traditional methods on datasets with up to 10,000 samples and 500 features.

The model is available under an open license: a derivative of the Apache 2 license with a single modification, adding an enhanced attribution requirement inspired by the Llama 3 license: https://github.com/PriorLabs/tabpfn. You can also try it via API: https://github.com/PriorLabs/tabpfn-client

TabPFN v2 is trained on 130 million synthetic tabular prediction datasets to perform in-context learning and output a predictive distribution for the test data points. Each dataset acts as one meta-datapoint to train the TabPFN weights with SGD. As a foundation model, TabPFN allows for fine-tuning, density estimation and data generation.
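
To give a feel for what "each dataset acts as one meta-datapoint" means, here is a heavily simplified, hypothetical toy (nothing like the actual TabPFN architecture): an in-context kernel-regression predictor with a single learnable bandwidth, meta-trained by SGD across freshly sampled synthetic datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dataset(n=32, d=4):
    # one synthetic "prior" dataset: a random linear function plus noise
    w = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w + 0.1 * rng.normal(size=n)
    return X, y

def predict(Xc, yc, Xq, log_bw):
    # in-context prediction: soft attention from query rows to context rows
    d2 = ((Xq[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
    s = -d2 / np.exp(log_bw)
    a = np.exp(s - s.max(axis=1, keepdims=True))  # stable softmax
    a /= a.sum(axis=1, keepdims=True)
    return a @ yc

log_bw = 0.0  # the toy model's only weight
for step in range(200):
    X, y = sample_dataset()            # each dataset = one meta-datapoint
    Xc, yc, Xq, yq = X[:24], y[:24], X[24:], y[24:]
    loss = lambda p: ((predict(Xc, yc, Xq, p) - yq) ** 2).mean()
    eps = 1e-4                         # numerical gradient for one scalar
    g = (loss(log_bw + eps) - loss(log_bw - eps)) / (2 * eps)
    log_bw = float(np.clip(log_bw - 0.1 * g, -5.0, 5.0))  # SGD step
```

The real model replaces the kernel with a transformer and the scalar bandwidth with millions of weights, but the training loop has the same shape: sample a synthetic task, predict the held-out part in context, backprop through the prediction.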

Compared to TabPFN v1, v2 now natively supports categorical features and missing values. TabPFN v2 performs just as well on datasets with or without these. It also handles outliers and uninformative features naturally, problems that often throw off standard neural nets.

TabPFN v2 performs as well with half the data as the next best baseline (CatBoost) with all the data.

We also compared TabPFN to the SOTA AutoML system AutoGluon 1.0. Standard TabPFN already outperforms AutoGluon on classification and ties on regression, but ensembling multiple TabPFNs in TabPFN v2 (PHE) is even better.

There are some limitations: TabPFN v2 is very fast to train and does not require hyperparameter tuning, but inference is slow. The model is also only designed for datasets up to 10k data points and 500 features. While it may perform well on larger datasets, it hasn't been our focus.

We're actively working on removing these limitations and intend to release new versions of TabPFN that can handle larger datasets, have faster inference and perform in additional predictive settings such as time-series and recommender systems.

We would love for you to try out TabPFN v2 and give us your feedback!

gcr - 4 days ago

Thanks for such a cool project! It's immediately apparent how to use it and I appreciate the brief examples.

Quick question: in the breast cancer example from the README, a simple support vector machine from sklearn (the first thing I tried as a baseline comparison, incidentally) seems to outperform TabPFN. Is this expected? I know the example is meant to demonstrate ease of use rather than SOTA performance, but I am curious.

    # TabPFN (from the README example)
    from sklearn.metrics import roc_auc_score
    print("ROC AUC:", roc_auc_score(y_test, prediction_probabilities[:, 1]))
    # ROC AUC: 0.996299494264216

    # LinearSVC baseline on the same split
    from sklearn.svm import LinearSVC
    clf = LinearSVC(C=0.01).fit(X_train, y_train)
    print("ROC AUC:", roc_auc_score(y_test, clf.decision_function(X_test)))
    # ROC AUC: 0.997532996176144

instanceofme - 5 days ago

Related: CARTE-AI, which can also deal with multiple tables.

https://soda-inria.github.io/carte/ https://arxiv.org/pdf/2402.16785

The paper includes a comparison to TabPFN v1 (among others), noting v1's lack of categorical-feature and missing-value handling, which v2 now seems to have. Would be curious to see an updated comparison.

nickpsecurity - 4 days ago

A while back, I was looking for a project amateurs could do to experiment with Transformer alternatives and optimization algorithms. My concept was grabbing objective (test) functions from the literature, making custom ones based on realistic data, and layering them together based on real-world depth. Then, training various approaches on them using consumer GPUs or spot instances of high-end GPUs.

What I read in this paper blew that idea out of the water! I mean, it's still doable, but you've far exceeded it.

I love that you covered many types of structures, used 8x consumer GPUs more like OSS folks do (widely accessible pretraining), claim no copyright infringement for pretraining, and use enough techniques in ML that people can enjoy Googling stuff for days.

I do have some questions about what I might have overlooked in the paper.

1. Is the training data and code available to reproduce the model? And iteratively improve its architectural decisions?

2. Most authors claiming their data was legal or open were actually committing copyright infringement. Your method might dodge that if users generate their own synthetic data using methods they can verify aren’t themselves encumbered. Is that code available under open licensing? If not, would you offer it for a fee for companies or free for researchers?

3. What specific, common uses could amateurs try that would display the model’s ability in a business setting? (Both to drive more research or build products on the model.)

I thank you for your time.

fuenal - 2 days ago

Great work you guys! I have been following discussions on DL vs ML for tabular data for some time now and am very excited to see TabPFN perform so well. I would like to play around with it a bit and am wondering if there is a way to use TabPFN with larger sample sizes, say, 1000000 rows? Can I disable the 10000 sample limitation? I would appreciate a code example if so. Great work again!
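
For reference, the obvious workaround I can think of, subsampling a context set that fits within the limit (TabPFN itself may offer a better option; I haven't checked the docs), would look something like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_total, limit = 1_000_000, 10_000

# stand-in for a large real table: 1M rows, 5 features
X = rng.normal(size=(n_total, 5))
y = (X[:, 0] > 0).astype(int)

# draw a random context set within the pretraining limit
idx = rng.choice(n_total, size=limit, replace=False)
X_ctx, y_ctx = X[idx], y[idx]
# X_ctx, y_ctx would then go to TabPFN's fit(); repeating this with
# different subsamples and averaging predictions is a cheap ensemble
```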

patcon - 4 days ago

Neat! Might this even be useful to impute missing data for a sparse network of votes, for a system like this (pol.is) whose goal is to do dimensionality reduction and visualise the opinion space of divisive social topics: https://gwern.net/doc/sociology/2021-small.pdf

200 voters on 50 statements would fall within the 10,000 sample threshold. This is well within the bounds of some existing conversations with open data, so it could be tested... Potential values on each statement are agree/disagree/pass (+1/-1/0)

https://github.com/compdemocracy/openData/blob/master/brexit...

https://github.com/compdemocracy/openData/blob/master/brexit...
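
As a sketch of the setup (with made-up votes, and a plain sklearn classifier standing in for TabPFN), imputing one statement column from the others could look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voters, n_statements = 200, 50

# made-up votes in {-1, 0, +1}, with ~30% of cells missing
V = rng.choice([-1.0, 0.0, 1.0], size=(n_voters, n_statements))
V[rng.random(V.shape) < 0.3] = np.nan

# impute one statement column from all the others
col = 0
known = ~np.isnan(V[:, col])
X = np.nan_to_num(np.delete(V, col, axis=1))  # crude 0-fill for predictors
clf = LogisticRegression(max_iter=1000).fit(X[known], V[known, col])
V[~known, col] = clf.predict(X[~known])
```

Looping `col` over all statements would fill the whole matrix; a TabPFN classifier could be dropped in for the LogisticRegression per column.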

OutOfHere - 5 days ago

Related repo: https://github.com/liam-sbhoo/tabpfn-time-series

mlepath - 4 days ago

Great work!

Do you see any artifacts from having trained on synthetic data? Is there a natural benchmark dataset (real tables in the wild)?

In my experience synthetic data can only take you so far: it has all the quirks the dataset creator can think of, but the real value is usually in the patterns they can't. Vision took a huge leap forward with the release of ImageNet.

lcrmorin - 3 days ago

Thanks for sharing this. I will of course watch it closely, since claiming to beat GBDTs might be a bit early.

- It is not entirely clear how the dataset splits are done. Do you make sure that the model is evaluated on unseen data? More generally, how does one know whether a dataset was part of the training or not?

- You mention some serious limitations (10k rows, 500 cols). It seems a bit odd to have fixed numbers. Can these numbers be roughly traded off against each other (e.g. 1M rows, 5 columns)? Do these numbers scale with memory? (What memory was used for the 10k rows / 500 cols figure?)

enigmaa99 - 5 days ago

I tried this on a few CARTE datasets and it works surprisingly better!! Woahhh

tmostak - 4 days ago

This looks amazing!

Just looking through the code a bit, it seems that the model supports a (custom) attention mechanism both between features and between rows (the code uses the term items)? If so, does the attention between rows significantly improve accuracy?

Generally, for standard regression and classification use cases, rows (observations) are seen to be independent, but I'm guessing cross-row attention might help the model see the gestalt of the data in some way that improves accuracy even when the independence assumption holds?
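
For concreteness, here's my mental model of the two attention axes as a numpy toy (not the actual TabPFN code): each table cell gets an embedding, and attention runs once along the feature axis and once along the row axis.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, feats, d = 6, 4, 8
T = rng.normal(size=(rows, feats, d))  # one embedding per table cell

def attn(X):
    # plain single-head dot-product self-attention over X's first axis
    s = X @ X.T / np.sqrt(X.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ X

# attention between features: cells within one row attend to each other
T_feat = np.stack([attn(T[i]) for i in range(rows)])
# attention between rows ("items"): a feature's cells attend across rows
T_rows = np.stack([attn(T[:, j]) for j in range(feats)], axis=1)
```

The row-axis pass is presumably what lets a query row "see" the training rows in context, so even with independent observations it carries the labeled data into the prediction.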

ggnore7452 - 5 days ago

anyone tried this? is this actually overall better than xgboost/catboost?

pplonski86 - 4 days ago

Amazing results! Beating AutoML with a single model is not easy :)

Could you please explain like I'm five what does the trick? You have a model pre-trained on a large set of small datasets and you leverage it to boost performance?

Training is fast, a few seconds, but how much time is needed to compute predictions?

How large is the model?

Dowwie - 4 days ago

Congrats on your release. What is the best way to share feedback? I would like to share with you what I believe to be a challenging problem that this may help with.

jacob019 - 2 days ago

Found the web interface: https://ux.priorlabs.ai/ Really cool!

Just playing around with regression mode...

    A very simple dataset, powers of two: 
    1:2, 2:4, 3:8, 5:32, 6:64, 7:128 (missing the #4 value)
    Predictions (1-10): 
    1.582 5.236 13.150 22.943 37.584 67.475 109.945 155.322 218.001 10,300.425
    Error (1-10): 
    -26.4% 23.6% 39.2% 30.3% 14.9% 5.2% -16.4% -64.8% -134.9% -240.9% 
... well, it has a positive slope

Let's see what happens if we copy the exact same values in the dataset 10 times first.

    Predictions (1-10): 
    1.993 3.967 7.986 18.138 31.965 64.140 128.125 126.607 130.667 161.756 
    Error (1-10): 
    -0.3% -0.8% -0.2% 11.8% -0.1% 0.2% 0.1% -102.2% -291.8% -533.1%
Interesting: repeated values give the model a lot more confidence in the known values. The interpolated #4 value is still off by 12%. It does not extrapolate well at all.
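
For comparison, tree ensembles can't extrapolate beyond the training range either; a quick sklearn check on the same toy series:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# the same toy series: y = 2**x, with x=4 held out
X = np.array([[1.0], [2.0], [3.0], [5.0], [6.0], [7.0]])
y = 2.0 ** X.ravel()

gbr = GradientBoostingRegressor(random_state=0).fit(X, y)
pred4, pred10 = gbr.predict(np.array([[4.0], [10.0]]))
# pred4 snaps to a neighboring leaf value; pred10 stays near
# max(y) == 128 rather than 1024: trees predict a constant out of range
```

So the "doesn't extrapolate" behavior seems to be the norm for tabular models, not something specific to TabPFN.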

Looking forward to trying it on real world data with more features.

storyweaver2 - 4 days ago

Did you compare the performance with o1 or Claude 3.5 Sonnet?

hooloovoo_zoo - 4 days ago

Were your benchmark methods tuned per dataset or across datasets?

bbstats - 5 days ago

looks amazing - finally, DL that beats a tuned catboost?

peepeepoopoo99 - 4 days ago

How can you train a tabular foundation model when the tabular features themselves are inherently domain-specific? Is there some kind of preprocessing step beforehand to match the inference-time features with their closest analogues in the training set?

_giorgio_ - 5 days ago

It's probably the same model with the same limitations, released nearly two years ago?

https://arxiv.org/abs/2207.01848