Papers
arxiv:2605.04916

A Foundation Model for Zero-Shot Logical Rule Induction

Published on May 6 · Submitted by Yin Jun Phua on May 7
Abstract

Neural Rule Inducer (NRI) enables zero-shot rule induction by representing literals through domain-agnostic statistical properties and using parallel decoding to maintain permutation invariance in logical disjunctions.

AI-generated summary

Inductive Logic Programming (ILP) learns interpretable logical rules from data. Existing methods are transductive: their learned parameters are bound to specific predicates and require retraining for each new task. We introduce Neural Rule Inducer (NRI), a pretrained model for zero-shot rule induction. Rather than encoding literal identities, NRI represents literals using domain-agnostic statistical properties such as class-conditional rates, entropy, and co-occurrence, which generalize across variable identities and counts without retraining. The model consists of a statistical encoder and a parallel slot-based decoder. Parallel decoding preserves the permutation invariance of logical disjunction; an autoregressive decoder would instead impose an arbitrary clause order. Product T-norm relaxation makes rule execution differentiable, allowing end-to-end training on prediction accuracy alone. We evaluate NRI on rule recovery, robustness to label noise and spurious correlations, and zero-shot transfer to real-world benchmarks, and we believe this work opens up the possibility of foundation models for symbolic reasoning. Code and the reference checkpoint are available at https://github.com/phuayj/neural-rule-inducer.
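The product T-norm relaxation the abstract mentions has a standard form: conjunction becomes a product of soft truth values and disjunction becomes its dual T-conorm. The sketch below is a minimal, generic illustration of that relaxation (the variable names and the soft clause-mask parameterization are assumptions, not NRI's actual implementation):

```python
import numpy as np

def soft_dnf(literal_values, clause_masks):
    """Differentiable DNF evaluation under the product T-norm.

    literal_values: (n_literals,) soft truth values in [0, 1].
    clause_masks:   (n_clauses, n_literals) soft selection weights in [0, 1];
                    a weight near 1 means the literal participates in the clause.
    Returns a soft truth value for the whole DNF rule.
    """
    # Conjunction (product T-norm): a selected literal contributes its value,
    # an unselected literal contributes 1, the neutral element of the product.
    per_literal = 1.0 - clause_masks * (1.0 - literal_values)
    clause_vals = per_literal.prod(axis=1)      # AND within each clause
    # Disjunction via the dual T-conorm: OR(a, b) = 1 - (1 - a)(1 - b).
    return 1.0 - np.prod(1.0 - clause_vals)
```

Because every operation is smooth in both the literal values and the clause masks, gradients of a prediction loss flow back to the rule parameters, which is what lets a model of this kind train end-to-end on accuracy alone.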

Community

Paper author · Paper submitter

Trained once on synthetic Boolean formulas, NRI induces interpretable DNF rules zero-shot on any tabular task. No retraining, no fine-tuning, no per-task weights.
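The zero-shot transfer rests on the idea that literals are described only by predicate-agnostic statistics. A minimal sketch of such features is below; the exact feature set NRI uses is not specified here, so the choices (class-conditional rates, marginal entropy, pairwise co-occurrence) are illustrative assumptions drawn from the abstract's wording:

```python
import numpy as np

def literal_statistics(X, y):
    """Compute domain-agnostic statistics for each literal.

    X: (n_samples, n_literals) boolean matrix of literal truth values.
    y: (n_samples,) boolean class labels.
    Returns per-literal class-conditional rates, marginal entropy,
    and the (n_literals, n_literals) co-occurrence matrix.
    """
    X = X.astype(float)
    pos_rate = X[y == 1].mean(axis=0)   # P(literal true | y = 1)
    neg_rate = X[y == 0].mean(axis=0)   # P(literal true | y = 0)
    p = X.mean(axis=0)                  # marginal rate P(literal true)
    eps = 1e-12                         # guard against log(0)
    entropy = -(p * np.log2(p + eps) + (1.0 - p) * np.log2(1.0 - p + eps))
    cooc = (X.T @ X) / len(X)           # pairwise co-occurrence rates
    return pos_rate, neg_rate, entropy, cooc
```

None of these features mention a predicate name or a column identity, so the same encoder input can be computed for any Boolean-encoded tabular task without retraining.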


Get this paper in your agent:

hf papers read 2605.04916
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 1

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 0
