Ahmed Wez (ahmed-wez)
2 followers · 2 following
AI & ML interests: None yet
Recent Activity

liked a dataset 11 days ago: uzaymacar/math-rollouts (Viewer • Updated Aug 25, 2025 • 21k • 2.5k • 8)
liked a model 28 days ago: NousResearch/nomos-1 (Text Generation • 31B • Updated 19 days ago • 1.45k • 132)
reacted to Kseniase's post with ❤️ 30 days ago:
15 Outstanding Research Papers from NeurIPS 2025

NeurIPS 2025, as a premier annual event in machine learning and computational neuroscience, tackles major topics like the future of AI, current research, and the most difficult challenges. While we’re not attending this year, we’re closely following the updates, and today we pull together a quick, easy-to-digest roundup of a few standout papers so you can jump in without getting overwhelmed.

Here is a list of 15 papers from NeurIPS 2025, including 8 top research papers that received awards, along with 7 others that caught our attention:

1. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks → https://neurips.cc/virtual/2025/loc/san-diego/test-of-time/128328
Test of Time Award winner. Introduces the RPN, a small convnet that predicts objectness and boxes on shared features, enabling Faster R-CNN to share computation and run at around 5 fps on a GPU (see the RPN sketch after this post).

2. Artificial Hivemind: The Open-Ended Homogeneity of LMs (and Beyond) → https://neurips.cc/virtual/2025/loc/san-diego/poster/121421
Releases a huge open-ended prompt dataset, showing that LLMs often fall into an “artificial hivemind”, generating surprisingly similar answers, and measuring diversity collapse (an illustrative homogeneity metric is sketched after this post).

3. Optimal Mistake Bounds for Transductive Online Learning → https://neurips.cc/virtual/2025/loc/san-diego/poster/119098
Settles a 30-year-old question by showing how much unlabeled data helps in online learning: it gives a precise quadratic advantage with tight matching bounds (one way to formalize this appears after this post).

4. Gated Attention for LLMs: Non-linearity, Sparsity, and Attention-Sink-Free → https://neurips.cc/virtual/2025/loc/san-diego/poster/120216
Demonstrates how gating actually affects attention: a simple sigmoid gate after Scaled Dot-Product Attention (SDPA) boosts performance, stability, and long-context behavior by adding useful nonlinearity and sparse modulation (see the gated-attention sketch after this post).

Read further below ⬇️ Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
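For paper 1, here is a minimal PyTorch sketch of an RPN head as the summary describes it: a small 3×3 conv slid over the shared backbone features, followed by two 1×1 convs that emit per-anchor objectness scores and box deltas. The 512-channel width and 9 anchors follow the original VGG-16 setup; the class and variable names are ours, not the paper's code.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Minimal RPN head: 3x3 conv over shared features, then 1x1 convs
    for per-anchor objectness logits and box-regression deltas."""

    def __init__(self, in_channels: int = 512, num_anchors: int = 9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.cls_logits = nn.Conv2d(512, num_anchors * 2, kernel_size=1)   # object / not object
        self.bbox_deltas = nn.Conv2d(512, num_anchors * 4, kernel_size=1)  # dx, dy, dw, dh

    def forward(self, features: torch.Tensor):
        h = torch.relu(self.conv(features))
        return self.cls_logits(h), self.bbox_deltas(h)

# A VGG-style feature map for a roughly 600x800 input image.
feats = torch.randn(1, 512, 38, 50)
scores, deltas = RPNHead()(feats)
print(scores.shape, deltas.shape)  # (1, 18, 38, 50) and (1, 36, 38, 50)
```

Sliding this tiny head over every position of the detector's own feature map is what makes proposals nearly free, instead of coming from a separate method like Selective Search.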
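For paper 2, a purely illustrative way to quantify "hivemind" homogeneity (not the paper's actual metric): average pairwise Jaccard similarity over the token sets of different models' answers to the same open-ended prompt. Higher scores mean more interchangeable answers.

```python
from itertools import combinations

def mean_pairwise_jaccard(responses: list[str]) -> float:
    """Hypothetical homogeneity score: mean Jaccard similarity between
    token sets of responses to one prompt. 1.0 means identical token sets."""
    token_sets = [set(r.lower().split()) for r in responses]
    sims = [len(a & b) / len(a | b)
            for a, b in combinations(token_sets, 2) if a | b]
    return sum(sims) / len(sims) if sims else 0.0

# Three hypothetical models answering the same open-ended prompt.
answers = [
    "happiness comes from meaningful relationships and purpose",
    "true happiness comes from purpose and meaningful relationships",
    "happiness is found in relationships and a sense of purpose",
]
print(round(mean_pairwise_jaccard(answers), 2))  # ~0.49: heavy overlap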
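For paper 3, one way to read the "quadratic advantage", under the assumption that the relevant complexity measure is the Littlestone dimension d of the hypothesis class; consult the paper for the exact theorem statement.

```latex
% Assumption: d is the Littlestone dimension of \mathcal{H}.
% Standard online learning suffers \Theta(d) mistakes in the worst case;
% seeing the unlabeled instance sequence up front (the transductive
% setting) improves this quadratically, with matching upper/lower bounds.
\[
  M_{\mathrm{transductive}}(\mathcal{H}) \;=\; \Theta\!\left(\sqrt{d}\right)
  \qquad \text{vs.} \qquad
  M_{\mathrm{online}}(\mathcal{H}) \;=\; \Theta(d).
\]
```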
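For paper 4, a minimal sketch of a sigmoid gate applied right after SDPA, per the summary. The per-channel gate computed from the block input is an assumption; the paper's exact gate granularity and placement may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    """Multi-head attention with an elementwise sigmoid gate after SDPA.
    Gate placement follows the post's description; names are ours."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.gate = nn.Linear(dim, dim)  # per-channel gate logits from the input
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        H = self.num_heads
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (B, T, D) -> (B, H, T, D // H) for scaled dot-product attention
        q, k, v = (t.view(B, T, H, D // H).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(B, T, D)
        # Sigmoid gate after SDPA: input-dependent values in (0, 1) that can
        # softly suppress channels (nonlinearity + sparse modulation).
        out = out * torch.sigmoid(self.gate(x))
        return self.proj(out)

x = torch.randn(2, 16, 64)
print(GatedAttention(64)(x).shape)  # torch.Size([2, 16, 64])
```

Because the gate values live in (0, 1), they can push individual outputs toward zero, which is one reading of how such gating mitigates attention-sink behavior.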