RWKV has open-sourced the RWKV7-G1 1.5B reasoning model. The model is trained on the World v3.5 dataset, which adds more novel, web, math, code, and reasoning data, for a total of 5.16T tokens. It offers reasoning and task capabilities that other models of the same size lack, and supports 100+ real-world languages. In practical tests, the RWKV7-G1 1.5B model shows strong reasoning logic and can complete challenging multilingual, math, and code tasks. The model is now available on the 始智AI (wisemodel) open-source community.

RWKV (pronounced RwaKuv) is an RNN that delivers great LLM performance and is parallelizable like a Transformer. We are now at RWKV7-G1 "GooseOne", a reasoning model.
It combines the best of RNNs and Transformers: great performance, linear time, constant space (no KV-cache), fast training, infinite context length, and free text embeddings. It is 100% attention-free, and a Linux Foundation AI project.
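The "linear time, constant space" claim can be sketched with a toy decaying linear recurrence. This is an illustrative stand-in, not RWKV's actual time-mix formula: it shows how the same outputs can be produced either sequentially with a fixed-size state (inference, no KV-cache) or from a closed form where each position depends only on the inputs (so training can parallelize over the sequence).

```python
def recurrent(xs, w=0.5):
    """Sequential form: O(1) state carried between tokens (inference mode)."""
    state, ys = 0.0, []
    for x in xs:
        state = w * state + x  # fixed-size state, no growing KV-cache
        ys.append(state)
    return ys

def parallel(xs, w=0.5):
    """Closed form: y_t = sum_{s<=t} w^(t-s) * x_s.
    Each y_t depends only on the raw inputs, so all positions
    can be computed in parallel during training."""
    return [sum(w ** (t - s) * x for s, x in enumerate(xs[: t + 1]))
            for t in range(len(xs))]

xs = [1.0, 2.0, 3.0, 4.0]
# Both forms produce identical outputs.
assert all(abs(a - b) < 1e-9 for a, b in zip(recurrent(xs), parallel(xs)))
```

The sequential form uses memory independent of sequence length, which is what enables effectively unbounded context at inference time.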

RWKV Projects
- RWKV-LM — training RWKV (and latest developments)
- RWKV-Runner — RWKV GUI with one-click install and API
- RWKV pip package — official RWKV pip package
- RWKV-PEFT — finetuning RWKV (9 GB VRAM is enough to finetune a 7B model)
- RWKV-server — fast WebGPU inference (NVIDIA/AMD/Intel), nf4/int8/fp16
- More... (400+ RWKV projects)

Misc
- RWKV raw weights — all latest RWKV weights
- RWKV weights — HuggingFace-compatible RWKV weights
- RWKV-related papers
- RWKV wiki — community wiki