- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
  Paper • 2402.14083 • Published • 47
- Linear Transformers are Versatile In-Context Learners
  Paper • 2402.14180 • Published • 7
- Training-Free Long-Context Scaling of Large Language Models
  Paper • 2402.17463 • Published • 23
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 627
Yang Lee (innovation64)
AI & ML interests: AGI
Recent Activity
- upvoted a paper about 1 month ago: WorldGen: From Text to Traversable and Interactive 3D Worlds
- upvoted a paper about 1 month ago: SAM 3: Segment Anything with Concepts
- upvoted a paper about 1 month ago: Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning