
Transformers Encode Formal Languages More Compactly

Hacker News

Researchers have introduced succinctness as a measure of a transformer’s expressive power when encoding concepts. By proving that transformers can capture certain formal languages with far fewer symbols than finite automata or Linear Temporal Logic formulas require, the study argues that these models achieve a degree of compactness previously unseen in symbolic representations. In principle, that compactness could translate into lower memory use in large language models.
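To make the succinctness gap concrete, a standard textbook example (not drawn from the paper itself) is the language L_k of strings over {a, b} whose k-th symbol from the end is 'a': any DFA must remember the last k symbols and therefore needs 2^k states, while an attention-style rule that simply inspects one position has a description whose size does not grow with that blowup. The Python sketch below is purely illustrative; the function names are hypothetical.

```python
# A minimal sketch (not from the paper) of the classic succinctness gap for
# L_k = { w in {a,b}* : the k-th symbol from the end is 'a' }.
from itertools import product

def dfa_states(k):
    # States of the minimal DFA: one per possible window of the last k symbols,
    # i.e. 2**k states in total.
    return [''.join(w) for w in product('ab', repeat=k)]

def attention_style_check(w, k):
    # An attention-style rule just indexes position len(w) - k directly;
    # the rule's description stays tiny no matter how large 2**k gets.
    return len(w) >= k and w[-k] == 'a'

k = 10
print(f"minimal DFA for L_{k}: {len(dfa_states(k))} states")  # 1024 states
print(attention_style_check('abbbabbbbb', k))                 # True: 10th-from-end is 'a'
```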

Beyond compactness, the authors show that checking properties of such succinct transformer representations is provably hard: the problem is EXPSPACE-complete. In the worst case, automated verification tools therefore face exponential space requirements when reasoning about transformer-generated specifications, a level of complexity that effectively rules out real-time analysis in embedded systems. The paper was submitted by Pascal Bergsträßer on 22 October 2025 and revised the following day.
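One intuition for why reasoning about succinct specifications gets expensive: a brute-force check that two language specifications agree must enumerate every string up to some length bound, so the work grows exponentially in that bound. The Python sketch below is a toy illustration of this blowup, not the paper's EXPSPACE construction; the helper and the two predicates are hypothetical.

```python
# A hedged toy (not the paper's algorithm) showing exponential blowup:
# comparing two language predicates on all strings of length <= n touches
# |Sigma|**0 + ... + |Sigma|**n strings, exponential in the bound n.
from itertools import product

def equivalent_up_to(spec_a, spec_b, alphabet, n):
    # Compare the two membership predicates on every string of length <= n.
    for length in range(n + 1):
        for w in map(''.join, product(alphabet, repeat=length)):
            if spec_a(w) != spec_b(w):
                return False, w   # counterexample found
    return True, None             # agreement on 2**(n+1) - 1 strings for |Sigma| = 2

# Two syntactically different but semantically equal specs over {a, b}:
# "even number of a's" vs. "length minus number of b's is even".
same, witness = equivalent_up_to(
    lambda w: w.count('a') % 2 == 0,
    lambda w: (len(w) - w.count('b')) % 2 == 0,
    'ab', 12,
)
print(same, witness)  # True None, after checking 8191 strings
```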

These insights reshape how engineers view transformer models in formal methods, suggesting that while they can compress complex specifications, integrating them into safety‑critical pipelines may require new abstraction techniques. Consequently, tool designers must balance brevity against tractability. Researchers now have a theoretical foundation to explore trade‑offs between model size and verification feasibility, grounding future toolchains in provable limits.