- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
  Paper • 2402.14083 • Published • 47
- Linear Transformers are Versatile In-Context Learners
  Paper • 2402.14180 • Published • 7
- Training-Free Long-Context Scaling of Large Language Models
  Paper • 2402.17463 • Published • 24
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 626
Yang Lee (innovation64)
AI & ML interests: AGI