This AI Paper Introduces GRPO-based Open-RS: A Low-Cost Reinforcement Learning Framework to Enhance Reasoning in Small Language Models

One particular focus in large language model research has been improving logical thinking and problem-solving skills. Reinforcement learning (RL) is increasingly used for this purpose, both in massive models and in compact versions that must perform well in restricted computing environments. A major challenge in this field is improving a model’s reasoning capability without relying on extremely large infrastructure or excessive training time. Leading models require expensive hardware and proprietary data pipelines, putting them out of reach for smaller labs or companies. This raises the question of whether smaller models can be enhanced with cost-efficient approaches and still achieve performance comparable to their larger counterparts on challenging tasks such as math reasoning.
Several methods have been explored to address this. Chain-of-thought prompting helps guide models through problem steps. Search algorithms such as Beam Search and Monte Carlo Tree Search are also used to improve the logical flow of answers. Reinforcement learning itself has been tested in multiple settings. However, many of these approaches are still bound by the same issues: they depend on massive datasets or lead to unstable performance in small-scale setups. Furthermore, the results often fail to match those of proprietary models like OpenAI’s o1-preview.
A team from Knovel Engineering Lab in Singapore and VNU University of Science in Vietnam introduced research aimed at overcoming these problems. The researchers used a 1.5-billion-parameter model, DeepSeek-R1-Distill-Qwen-1.5B, and adopted the Group Relative Policy Optimization (GRPO) algorithm, training on four NVIDIA A40 GPUs with 48 GB VRAM each within a strict 24-hour limit. Their key objective was to enhance the model’s reasoning without large financial or computational investment. The training consumed only $42 in computing costs, a drastic reduction compared to baselines that require thousands of dollars.
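To make the group-relative idea concrete, here is a minimal sketch of how GRPO-style advantages can be computed from a group of completions sampled for the same prompt. The function name, group size, and normalization constant are illustrative assumptions, not the paper’s actual code; the key point is that each completion is scored relative to its own group, so no separate critic network is required.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages for one prompt.

    rewards: shape (group_size,), one scalar reward per sampled completion.
    Each completion is compared against the group mean, so completions that
    beat the group average receive a positive advantage.
    """
    mean = rewards.mean()
    std = rewards.std()
    return (rewards - mean) / (std + eps)

# Example: four sampled answers to the same math question,
# two correct (reward 1.0), one partially formatted (0.5), one wrong (0.0).
rewards = torch.tensor([1.0, 0.0, 0.5, 1.0])
print(grpo_advantages(rewards))
```

Because the baseline comes from the group statistics rather than a learned value model, memory and compute requirements stay low, which is central to the paper’s low-cost setup.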
To achieve this, the team assembled a dataset of 39,659 mathematics-specific questions by refining two existing datasets, open-s1 and open-deepscaler. The filtering process removed trivial or noisy questions using models such as Qwen2.5-7B-Instruct and DeepSeek-R1-Distill-Qwen-1.5B. The reward system was rule-based and focused on three components: correctness of answers (using boxed notation), structural formatting (enforced with tags), and output length (shaped with a cosine function to promote concise reasoning). The GRPO algorithm was used to sample group responses and apply score-based optimization, avoiding the need for a critic model and thus further reducing computational demands.
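The three reward components described above can be sketched roughly as follows. The exact tags, regular expressions, weights, and the precise shape of the cosine schedule are assumptions made for illustration, not the authors’ implementation.

```python
import math
import re

def correctness_reward(completion: str, answer: str) -> float:
    """1.0 if the final \\boxed{...} value matches the reference answer, else 0.0."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if match and match.group(1).strip() == answer.strip() else 0.0

def format_reward(completion: str) -> float:
    """Small bonus when the response follows an expected tag structure
    (tag names here are hypothetical)."""
    has_structure = "<think>" in completion and "</think>" in completion
    return 0.1 if has_structure else 0.0

def cosine_length_reward(num_tokens: int, is_correct: bool,
                         min_len: int = 1000, max_len: int = 3500) -> float:
    """Cosine-shaped weight over response length so that correct answers
    are rewarded more when they stay concise."""
    t = min(max(num_tokens - min_len, 0) / (max_len - min_len), 1.0)
    weight = 0.5 * (1.0 + math.cos(math.pi * t))  # 1.0 at min_len, 0.0 at max_len
    return weight if is_correct else 0.0

def total_reward(completion: str, answer: str, num_tokens: int) -> float:
    correct = correctness_reward(completion, answer)
    return (correct
            + format_reward(completion)
            + cosine_length_reward(num_tokens, bool(correct)))
```

In a GRPO loop, a scalar reward like this would be computed for every sampled completion in a group and then converted into group-relative advantages, as in the earlier sketch.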
The performance of this approach was tested across five benchmark datasets: AMC23, AIME24, MATH-500, OlympiadBench, and Minerva. In one experiment, using just the open-s1 dataset, the model’s AMC23 accuracy improved from 63% to 70% within the first 100 global steps but later declined. In another trial that combined 7,000 samples of mixed difficulty, the accuracy on AMC23 rose to 80%, and AIME24 reached 46.7%. The model named Open-RS2, trained in that setup, also showed competitive scores on OlympiadBench (52.4%) and MATH-500 (85%). In the final experiment, the cosine reward helped regulate output length to a range of 1000–3500 tokens, and the model maintained 72.5% accuracy on AMC23 and 84.4% on MATH-500.
This research showed that effective reasoning in small language models is achievable even with limited resources. The problem of training small models without significant hardware investment was addressed with a low-cost and efficient training strategy. The proposed method used reinforcement learning and curated data to deliver surprisingly strong results. With continued improvements in reward design and optimization stability, small models may soon rival their larger counterparts in practical reasoning tasks.
Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.