SAVANT-IcosaGNN-IRM
Repository: antonypamo/SAVANT-IcosaGNN-IRM
Model type: IcosahedralGNNReasoner (graph neural network over an icosahedron)
This repository contains a symbiotic icosahedral GNN trained over an icosahedral graph to reason about roles (physics, geometry, ethics, information, etc.) from the outputs of a micro-AGI T5 (t5-small) and RRFSAVANTMADE embeddings (antonypamo/RRFSAVANTMADE).
In other words, this repository provides a small icosahedral graph neural network (GNN) that performs structured reasoning over 12 high-level cognitive/semantic roles, using:
- a micro-AGI language backbone: t5-small
- a resonant embedder: antonypamo/RRFSAVANTMADE
The model is designed as a symbiotic reasoning core within the broader Savant/RRF framework.
1. Model Details
1.1. Config summary
{
  "model_type": "IcosahedralGNNReasoner",
  "graph": "icosahedron",
  "num_nodes": 12,
  "roles": [
    "física",
    "geometría",
    "información",
    "ética",
    "epistemología",
    "creatividad",
    "simbolismo",
    "entropía",
    "coherencia",
    "musicalidad",
    "cómputo",
    "metacognición"
  ],
  "micro_agi_repo": "t5-small",
  "embedder_repo": "antonypamo/RRFSAVANTMADE",
  "in_dim": 32,
  "hidden_dim": 64
}
Key points:
- Model type: IcosahedralGNNReasoner
- Graph topology: icosahedron (graph: "icosahedron", num_nodes: 12)
- Nodes: 12 labeled cognitive/semantic roles: physics, geometry, information, ethics, epistemology, creativity, symbolism, entropy, coherence, musicality, computation, metacognition
- Backends:
  - micro_agi_repo = "t5-small" → micro-AGI language model
  - embedder_repo = "antonypamo/RRFSAVANTMADE" → RRF-inspired embedding model
- Dimensions:
  - input dimension to the GNN: in_dim = 32
  - hidden dimension in the GNN: hidden_dim = 64
This is not a text generator by itself. It is a graph-based reasoning layer that operates on embeddings derived from text.
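The icosahedral topology itself is easy to reconstruct from first principles. The following is a minimal sketch (the vertex ordering is an assumption; the repository may label and order the 12 role nodes differently):

```python
import itertools
import math

# The 12 icosahedron vertices are the cyclic permutations of (0, ±1, ±φ).
phi = (1 + math.sqrt(5)) / 2
verts = []
for a, b in itertools.product((-1.0, 1.0), (-phi, phi)):
    verts += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]

def dist2(p, q):
    """Squared Euclidean distance between two vertices."""
    return sum((x - y) ** 2 for x, y in zip(p, q))

# Edges connect vertex pairs at the minimal distance (edge length 2 here,
# so squared distance 4).
edges = [
    (i, j)
    for i, j in itertools.combinations(range(12), 2)
    if abs(dist2(verts[i], verts[j]) - 4.0) < 1e-9
]

degree = [sum(i in e for e in edges) for i in range(12)]
print(len(edges), set(degree))  # 30 edges; every node has exactly 5 neighbors
```

This recovers the fixed geometry the GNN's message passing runs over: 12 nodes, 30 edges, 5 neighbors per role.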
2. Relation to the RRF / Savant framework
This model is architecturally much closer to the RRF (Resonant Reasoning Framework) vision than a plain Transformer:
- It uses an icosahedral graph with 12 nodes, each node explicitly mapped to a cognitive/semantic role (e.g., ethics, coherence, metacognition).
- The GNN implements message passing over this fixed geometry, encouraging structured interactions between domains (e.g., physics ↔ geometry ↔ information; ethics ↔ coherence ↔ metacognition).
- The language backbone (t5-small) and the resonant embedder (RRFSAVANTMADE) act as input organs; the GNN is the reasoning core that aggregates and organizes these representations.
In short:
The icosahedral GNN is intended to act as a symbiotic reasoning nucleus inside a larger Savant/RRF system, rather than as a standalone large language model.
3. Intended Use
3.1. Primary use cases
The model is suitable as a secondary/auxiliary model that receives embeddings from text (via t5-small + RRFSAVANTMADE) and outputs structured signals such as:
- Role activations: how much a given input engages each of the 12 roles (ethics, creativity, metacognition, etc.).
- Control / scoring signals for:
- ranking or scoring candidate text generations,
- evaluating coherence, entropy, or ethical alignment,
- guiding selection of actions in an agent loop.
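As a concrete sketch of the critic/re-ranking use case, per-role scores for candidate generations can drive selection. Everything here is illustrative: the score values, the weighting scheme, and the helper name are assumptions, not the repository's API (the role keys follow the config's Spanish labels):

```python
def rank_candidates(candidates, weights):
    """Rank candidate texts by a weighted sum of selected role activations.

    candidates: {name: {role: activation}}, weights: {role: weight}.
    Missing roles count as 0.0.
    """
    def weighted_score(scores):
        return sum(weights[r] * scores.get(r, 0.0) for r in weights)
    return sorted(candidates, key=lambda c: weighted_score(candidates[c]), reverse=True)

# Hypothetical role activations for two candidate generations
candidates = {
    "answer_a": {"coherencia": 0.9, "ética": 0.4},
    "answer_b": {"coherencia": 0.7, "ética": 0.9},
}
weights = {"coherencia": 0.5, "ética": 0.5}

# weighted scores: answer_a -> 0.65, answer_b -> 0.80
print(rank_candidates(candidates, weights))  # ['answer_b', 'answer_a']
```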
Typical applications:
- Meta-evaluation of language outputs (critic/judge model).
- Educational or curricular analysis:
- Mapping texts, course descriptions, or student work into the icosahedral role space.
- Research on resonant / geometric cognition:
- Studying how different domains (physics, ethics, information) interact in a structured graph.
3.2. Non-intended use
This model should not be used as:
- A standalone text generator (it does not generate text).
- A single source of truth for:
- medical, legal, financial, or high-stakes decisions.
- A guarantee of ethical or value-aligned behavior:
- the “ethics” node is a learned representation, not a normative authority.
Human oversight and domain expertise are required in any critical application.
4. Architecture
Conceptual pipeline (high-level):
- Text input (e.g., prompt, document, conversation snippet).
- Embedding stage:
  - The text is encoded by t5-small (micro-AGI repo) and/or RRFSAVANTMADE (resonant embedder).
  - Result: an embedding of dimension 32 (in_dim), or projected to that size.
- Icosahedral GNN:
- The embedding is distributed/initialized across the 12 nodes.
- A graph neural network runs over the icosahedron:
- message passing between neighboring roles,
- hidden states of size hidden_dim = 64.
- Role-level outputs:
  - Final node states can be:
    - read individually (per-role activation),
    - pooled (global representation),
    - further mapped to scores, probabilities, or control signals.
Because the 12 nodes are labeled, the model offers a structured, interpretable intermediate representation of the reasoning process.
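The pipeline's core step can be illustrated in a few lines. This is a minimal NumPy sketch of one message-passing round, not the repository's actual layer: the adjacency used here is a stand-in 5-regular graph (the real model uses the icosahedron's 30 edges), and all weights are random:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_NODES, IN_DIM, HIDDEN_DIM = 12, 32, 64

# Stand-in 5-regular adjacency on 12 nodes (circulant offsets ±1, ±4, 6);
# the actual model runs over the icosahedron's edge set.
A = np.zeros((NUM_NODES, NUM_NODES))
for i in range(NUM_NODES):
    for off in (1, -1, 4, -4, 6):
        A[i, (i + off) % NUM_NODES] = 1.0

x = rng.standard_normal(IN_DIM)      # pooled text embedding (in_dim = 32)
H0 = np.tile(x, (NUM_NODES, 1))      # distribute it across the 12 role nodes

# One message-passing step: combine each node's own state with the mean of
# its neighbors' states, then apply a nonlinearity (hidden_dim = 64).
W_self = 0.1 * rng.standard_normal((IN_DIM, HIDDEN_DIM))
W_neigh = 0.1 * rng.standard_normal((IN_DIM, HIDDEN_DIM))
neigh_mean = (A @ H0) / A.sum(axis=1, keepdims=True)
H1 = np.maximum(0.0, H0 @ W_self + neigh_mean @ W_neigh)

print(H1.shape)  # (12, 64): one hidden state per role node
```

Reading out H1 row by row gives the per-role activations described above; pooling over rows gives the global representation.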
5. Example Usage (conceptual)
Note: This is illustrative pseudo-code. Actual usage depends on the code released in the repository.
import torch
from transformers import AutoTokenizer, T5EncoderModel

from savant_icosagnn_irm import IcosahedralGNNReasoner  # hypothetical import

# 1. Load micro-AGI encoder (t5-small)
text_encoder_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(text_encoder_name)
text_encoder = T5EncoderModel.from_pretrained(text_encoder_name)

# 2. Load Icosahedral GNN reasoner
gnn = IcosahedralGNNReasoner.from_pretrained("antonypamo/SAVANT-IcosaGNN-IRM")

text = "Explain how energy, entropy, and information are related in thermodynamics."

# Encode text
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    enc_outputs = text_encoder(**inputs).last_hidden_state  # [batch, seq, hidden]

# (Simplified) Pool to a single embedding, then project to in_dim (32)
pooled = enc_outputs.mean(dim=1)                # [batch, hidden]
embedded = some_projection(pooled, out_dim=32)  # user-defined or from repo

# Run through the Icosahedral GNN
role_states, global_state = gnn(embedded)       # e.g. role_states: [batch, 12, 64]

# role_states can be mapped to scores per role (physics, entropy, ethics, etc.)
role_scores = heads_to_scores(role_states)      # application-dependent
You can then use role_scores to:
- Diagnose which roles are strongly engaged by the input.
- Guide downstream decisions (e.g., if “ethics” is low for a sensitive question, request human review).
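The two undefined helpers in the snippet above (some_projection, heads_to_scores) are placeholders; the repository may provide its own. A minimal PyTorch sketch of what they could look like, assuming t5-small's 512-dimensional encoder output and one scalar score per role:

```python
import torch
import torch.nn as nn

T5_HIDDEN = 512  # t5-small encoder hidden size
_proj = nn.Linear(T5_HIDDEN, 32)

def some_projection(pooled, out_dim=32):
    """Hypothetical: project the pooled T5 embedding to the GNN's in_dim."""
    assert _proj.out_features == out_dim
    return _proj(pooled)

_score_head = nn.Linear(64, 1)

def heads_to_scores(role_states):
    """Hypothetical: map node states [batch, 12, 64] to scores [batch, 12]."""
    return _score_head(role_states).squeeze(-1)

# Shape check with dummy tensors (no model download needed)
pooled = torch.randn(2, T5_HIDDEN)
embedded = some_projection(pooled)        # [2, 32]
role_states = torch.randn(2, 12, 64)      # stand-in for the GNN's output
role_scores = heads_to_scores(role_states)  # [2, 12]
print(embedded.shape, role_scores.shape)
```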
6. Bias, Risks & Limitations
Data and training unknown:
Without full training details (loss functions, datasets, IRM setup, etc.), one should assume:
- possible biases inherited from both t5-small and RRFSAVANTMADE,
- no guarantees of fairness or robustness across domains.
Interpretation risk:
The labeled roles (ethics, metacognition, coherence, etc.) are learned representations, not grounded philosophical or moral categories. Misinterpreting them as "absolute measures" of ethics or truth can be misleading.
Small dimensionality:
With in_dim = 32 and hidden_dim = 64, the model is designed to be lightweight, not a large-scale general reasoner. It is best suited for:
- exploratory research,
- adding structured signals on top of other models,
rather than serving as a single, universal decision-maker.
No real-time knowledge:
The model does not have access to current events or dynamic world updates. Any “knowledge” is static from the training phase.
Always combine this model with:
- Domain-specific checks.
- Human-in-the-loop review for sensitive tasks.
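One simple way to operationalize human-in-the-loop review is a gate on the learned "ethics" activation. The key name and threshold below are illustrative assumptions, not calibrated values from the repository:

```python
def needs_human_review(role_scores, ethics_key="ética", threshold=0.5):
    """Flag an output for manual review when its learned 'ethics' activation
    is low or missing. The 0.5 threshold is illustrative, not calibrated."""
    return role_scores.get(ethics_key, 0.0) < threshold

print(needs_human_review({"ética": 0.3, "coherencia": 0.9}))  # True -> escalate
print(needs_human_review({"ética": 0.8}))                     # False
```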
7. How to Cite
If you use SAVANT-IcosaGNN-IRM in academic or technical work, you can cite it along these lines (adapt as needed):
@misc{savant_icosagnn_irm,
title = {SAVANT-IcosaGNN-IRM: Icosahedral Graph Neural Network Reasoner},
author = {Antonypamo},
howpublished = {\url{https://huggingface.co/antonypamo/SAVANT-IcosaGNN-IRM}},
note = {Icosahedral GNN reasoner over 12 cognitive/semantic roles, driven by t5-small and RRFSAVANTMADE embeddings},
year = {2025}
}
8. License
The precise license for this model should be checked on the Hugging Face model page.
This README does not define or override the official license.
9. Summary
What it is:
A lightweight icosahedral GNN reasoner operating over 12 explicit roles, fed by t5-small and RRFSAVANTMADE embeddings.
Why it matters:
It introduces geometric, role-based structure in line with the Savant/RRF framework, enabling:
- interpretable role activations,
- structured reasoning signals on top of language models.
How to use it:
As a symbiotic reasoning module—a critic, controller, or analyzer—rather than a standalone text generator.
Model tree for antonypamo/SAVANT-IcosaGNN-IRM
- Base model: sentence-transformers/all-MiniLM-L6-v2