AbstractPhil posted an update 3 days ago

The Long: this is a proof of concept. The ensemble-compilation vmap prototype is functional and can be used to increase throughput via wider batches on FFN, MLP, LLM, and other models, not just ensembles. The system traces your model and breaks it into stages of functional activation; based on each stage, it combines or prunes stage combinations, assigning your layers to batched functional calls that load your GPU with fewer loops, with directly curated CUDA graph compliance where applicable. Identical weights yield identical results, at the cost of hardware and VRAM.
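
For reference, the core pattern this builds on is PyTorch's own model-ensembling recipe: stack the per-model state with torch.func.stack_module_state, then vmap a functional_call over the stacked weights. A minimal sketch (my own illustration of that pattern, not the repo's code):

```python
import torch
from torch import nn
from torch.func import stack_module_state, functional_call

# N small MLPs with identical architecture, treated as one wide ensemble
N, batch, dim = 100, 32, 256
def make_mlp():
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

models = [make_mlp() for _ in range(N)]

# Stack every parameter/buffer across the N copies into leading-dim-N tensors
params, buffers = stack_module_state(models)

# A stateless "skeleton" on the meta device supplies the module structure
base = make_mlp().to("meta")

def fmodel(p, b, x):
    # Run the skeleton with one slice of the stacked state
    return functional_call(base, (p, b), (x,))

x = torch.randn(N, batch, dim)
# One vmapped call replaces a Python loop over N forward passes
out = torch.vmap(fmodel)(params, buffers, x)  # shape: (N, batch, dim)
```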

TLDR:
This is an ensemble optimization adapted to standard models. It yields large speed improvements through increased throughput, for inference and training alike, using carefully traced, staged vmap structures.

https://github.com/AbstractEyes/pytorch-parallel-compiler
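
For a feel of the intended workflow, here's a hypothetical usage sketch; the import path and the parallel_compile name are my assumptions, not the confirmed API, so check the repo README for the real interface:

```python
# Hypothetical interface - names are assumptions, not the repo's actual API
from parallel_compiler import parallel_compile  # assumed entry point

wide_model = parallel_compile(models)  # trace, stage, and batch the layers
out = wide_model(x)                    # one widened forward pass
```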

The list of supported layers isn't fully represented yet, so this is a preliminary look at what this structure can do once fully fleshed out.

MLP (N=100, batch=32, CUDA):
Eager:    2-3x speedup
Compiled: 35-40x speedup


ResBlock (N=20, batch=8, CUDA):
Eager:    ~5x speedup  
Compiled: ~10x speedup
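
For context, a rough sketch of how speedups like these could be measured, reusing models, fmodel, params, buffers, and x from the sketch above (moved to CUDA for GPU numbers); the timing harness is my assumption, not the repo's benchmark code:

```python
import time
import torch

def bench(fn, *args, iters=50, warmup=5):
    # Warm up first (the first compiled call pays compilation cost)
    for _ in range(warmup):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # keep GPU timings honest
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

# Baseline: a Python loop over N separate forward passes
def sequential(x):
    return [m(xi) for m, xi in zip(models, x)]

# Widened paths: the vmapped ensemble, eager and under torch.compile
vmapped = torch.vmap(fmodel)
compiled = torch.compile(vmapped)

t_seq = bench(sequential, x)
t_eager = bench(vmapped, params, buffers, x)
t_comp = bench(compiled, params, buffers, x)
print(f"eager: {t_seq / t_eager:.1f}x, compiled: {t_seq / t_comp:.1f}x")
```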


This is early testing, and so far the results indicate that WIDENING your model - running uniformly staged layers through adjacent shared batched vmaps - yields considerably higher inference throughput at the cost of additional hardware utilization.

This is akin to lining up all your systems and passing their inputs uniformly through a shared frozen representation gate.

Training is not tested or supported yet; use at your own risk.

This preliminary version will be expanded primarily for ease of use, coupled with secondary intermediate wrappers for usable micro-management, in case you want to verify the assembler formats your system correctly or bugs occur - there are always bugs.

Early tests will target models such as standard conv systems, ResNets, T5s, Llama, Qwen, and more as time progresses. The tests and benchmarks will be listed alongside a multitude of easy-access capacity utilizers, though many will be omitted simply because precompilation doesn't improve their performance over standard sequential execution.

A full benchmark battery will be available with the full structure as the system is fleshed out. For now, look forward to a potentially massive expansion in running your models on scaled structures with minimal work from developers.

I learned from my mistakes with the geofractal router system - it's too complicated to simply integrate someone's models into - so I'm taking a page directly from the ease-of-use book and ensuring this system is not only easy to use but WORKS.
