Tuesday, February 24, 2026

Google DeepMind Researchers Apply Semantic Evolution to Create Non-Intuitive VAD-CFR and SHOR-PSRO Variants for Superior Algorithmic Convergence


In the competitive arena of Multi-Agent Reinforcement Learning (MARL), progress has long been bottlenecked by human intuition. For years, researchers have manually refined algorithms like Counterfactual Regret Minimization (CFR) and Policy Space Response Oracles (PSRO), navigating a vast combinatorial space of update rules via trial-and-error.

A Google DeepMind research team has now shifted this paradigm with AlphaEvolve, an evolutionary coding agent powered by Large Language Models (LLMs) that automatically discovers new multi-agent learning algorithms. By treating source code as a genome, AlphaEvolve does not just tune parameters; it invents entirely new symbolic logic.

Semantic Evolution: Beyond Hyperparameter Tuning

Unlike traditional AutoML, which typically optimizes numeric constants, AlphaEvolve performs semantic evolution. It uses Gemini 2.5 Pro as an intelligent genetic operator to rewrite logic, introduce novel control flows, and inject symbolic operations into the algorithm's source code.

The framework follows a rigorous evolutionary loop:

  • Initialization: The population begins with standard baseline implementations, such as vanilla CFR.
  • LLM-Driven Mutation: A parent algorithm is selected based on fitness, and the LLM is prompted to modify the code to reduce exploitability.
  • Automated Evaluation: Candidates are executed on proxy games (e.g., Kuhn Poker) to compute negative exploitability scores.
  • Selection: Valid, high-performing candidates are added back to the population, allowing the search to discover non-intuitive optimizations.
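The loop above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not the paper's implementation: `llm_mutate` represents the Gemini-driven code rewrite, and `evaluate` represents running a candidate on proxy games (e.g., Kuhn Poker) and returning negative exploitability as fitness.

```python
import random

def evolve(baseline_code, evaluate, llm_mutate, generations=100, pop_size=20):
    """Minimal sketch of an AlphaEvolve-style semantic-evolution loop.

    `llm_mutate` stands in for the LLM call that rewrites source code;
    `evaluate` stands in for automated evaluation on proxy games,
    returning negative exploitability as the fitness score.
    """
    population = [(baseline_code, evaluate(baseline_code))]
    for _ in range(generations):
        # Tournament-style parent selection: prefer high-fitness candidates.
        contenders = random.sample(population, min(3, len(population)))
        parent = max(contenders, key=lambda c: c[1])[0]
        child = llm_mutate(parent)          # prompt: "modify to reduce exploitability"
        try:
            fitness = evaluate(child)       # automated evaluation on proxy games
        except Exception:
            continue                        # invalid programs are discarded
        population.append((child, fitness))
        # Selection: keep only the strongest candidates in the population.
        population.sort(key=lambda c: c[1], reverse=True)
        population = population[:pop_size]
    return population[0]
```

In a real system the evaluation step dominates the cost, so candidates are scored on small proxy games before any expensive benchmarking.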

VAD-CFR: Mastering Game Volatility

The first major discovery is Volatility-Adaptive Discounted (VAD-) CFR. In Extensive-Form Games (EFGs) with imperfect information, agents must minimize regret across a sequence of histories. While traditional variants use static discounting, VAD-CFR introduces three mechanisms that often elude human designers:

  1. Volatility-Adaptive Discounting: Using an Exponential Weighted Moving Average (EWMA) of the instantaneous regret magnitude, the algorithm tracks the “shake” of the learning process. When volatility is high, it increases discounting to forget unstable history faster; when it drops, it retains more history for fine-tuning.
  2. Asymmetric Instantaneous Boosting: VAD-CFR boosts positive instantaneous regrets by a factor of 1.1. This allows the agent to immediately exploit beneficial deviations without the lag associated with standard accumulation.
  3. Hard Warm-Start & Regret-Magnitude Weighting: The algorithm enforces a ‘hard warm-start,’ postponing policy averaging until iteration 500. Interestingly, the LLM generated this threshold without knowing the 1000-iteration evaluation horizon. Once accumulation begins, policies are weighted by the magnitude of instantaneous regret to filter out noise.
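A single information-set update combining the three mechanisms might look like the sketch below. Only the three mechanisms themselves (EWMA-driven volatility-adaptive discounting, the 1.1 boost on positive instantaneous regrets, and the hard warm-start at iteration 500) come from the article; the exact discount formula, decay rate, and parameter names are illustrative assumptions.

```python
import numpy as np

def vad_cfr_update(state, inst_regret, t,
                   boost=1.1, warm_start=500, ewma_decay=0.9,
                   base_discount=0.95, vol_scale=0.5):
    """Sketch of one VAD-CFR information-set update (assumed formulas).

    `state` holds the cumulative regrets, the average-policy accumulator,
    and an EWMA of instantaneous-regret magnitude.
    """
    # 1. Volatility-adaptive discounting: track the "shake" of learning
    #    via an EWMA of the instantaneous regret magnitude.
    vol = np.mean(np.abs(inst_regret))
    state["ewma"] = ewma_decay * state["ewma"] + (1 - ewma_decay) * vol
    # Higher volatility -> stronger discounting (forget unstable history faster).
    discount = base_discount / (1.0 + vol_scale * state["ewma"])
    # 2. Asymmetric instantaneous boosting: amplify positive regrets by 1.1.
    boosted = np.where(inst_regret > 0, boost * inst_regret, inst_regret)
    state["cum_regret"] = discount * state["cum_regret"] + boosted
    # Regret matching: current policy proportional to positive cumulative regret.
    pos = np.maximum(state["cum_regret"], 0.0)
    if pos.sum() > 0:
        policy = pos / pos.sum()
    else:
        policy = np.full_like(pos, 1.0 / len(pos))
    # 3. Hard warm-start + regret-magnitude weighting: only accumulate the
    #    average policy after iteration 500, weighted by regret magnitude.
    if t >= warm_start:
        state["avg_policy"] += np.mean(np.abs(inst_regret)) * policy
    return policy
```

The magnitude-weighted averaging acts as a noise filter: iterations where the regret signal is weak contribute little to the final average policy.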

In empirical tests, VAD-CFR matched or surpassed state-of-the-art performance in 10 out of 11 games, including Leduc Poker and Liar’s Dice, with 4-player Kuhn Poker being the only exception.

SHOR-PSRO: The Hybrid Meta-Solver

The second breakthrough is Smoothed Hybrid Optimistic Regret (SHOR-) PSRO. PSRO operates on a higher abstraction called the Meta-Game, where a population of policies is iteratively expanded. SHOR-PSRO evolves the Meta-Strategy Solver (MSS), the component that determines how opponents are pitted against each other.

The core of SHOR-PSRO is a Hybrid Blending Mechanism that constructs a meta-strategy σ by linearly blending two distinct components:

σ_hybrid = (1 − λ) · σ_ORM + λ · σ_Softmax

  • σ_ORM: Provides the stability of Optimistic Regret Matching.
  • σ_Softmax: A Boltzmann distribution over pure strategies that aggressively biases the solver toward high-reward modes.

SHOR-PSRO employs a dynamic Annealing Schedule. The blending factor λ anneals from 0.3 to 0.05, gradually shifting the focus from greedy exploration to robust equilibrium finding. Furthermore, it discovered a Training vs. Evaluation Asymmetry: the training solver uses the annealing schedule for stability, while the evaluation solver uses a fixed, low blending factor (λ = 0.01) for reactive exploitability estimates.
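The blending and annealing described above can be sketched as follows. The λ schedule (0.3 → 0.05) and the fixed evaluation-time λ = 0.01 come from the article; the softmax temperature, the linear annealing form, and the simplified regret-matching step are assumptions made for illustration.

```python
import numpy as np

def shor_meta_strategy(regrets, payoffs, t, total_iters,
                       lam_start=0.3, lam_end=0.05, temperature=1.0,
                       training=True, eval_lam=0.01):
    """Sketch of the SHOR-PSRO hybrid meta-strategy solver (assumed details).

    Blends a regret-matching distribution with a Boltzmann (softmax)
    distribution over pure-strategy payoffs in the meta-game.
    """
    # Regret-matching component: proportional to positive cumulative regret.
    # (The "optimistic" variant would additionally double-count the latest
    # regret; omitted here for brevity.)
    pos = np.maximum(regrets, 0.0)
    if pos.sum() > 0:
        sigma_orm = pos / pos.sum()
    else:
        sigma_orm = np.full_like(pos, 1.0 / len(pos))
    # Boltzmann distribution biasing toward high-reward pure strategies.
    logits = payoffs / temperature
    logits = logits - logits.max()              # numerical stability
    sigma_softmax = np.exp(logits) / np.exp(logits).sum()
    # Annealed blend during training; fixed low lambda at evaluation time.
    if training:
        frac = min(t / max(total_iters, 1), 1.0)
        lam = lam_start + frac * (lam_end - lam_start)
    else:
        lam = eval_lam
    return (1 - lam) * sigma_orm + lam * sigma_softmax
```

Passing `training=False` reproduces the evaluation-time asymmetry: the solver stays almost entirely on the regret-matching component, giving a more reactive exploitability estimate.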

Key Takeaways

  • AlphaEvolve Framework: DeepMind researchers introduced AlphaEvolve, an evolutionary system that uses Large Language Models (LLMs) to perform ‘semantic evolution’ by treating an algorithm’s source code as its genome. This allows the system to discover entirely new symbolic logic and control flows rather than just tuning hyperparameters.
  • Discovery of VAD-CFR: The system evolved a new regret minimization algorithm called Volatility-Adaptive Discounted (VAD-) CFR. It outperforms state-of-the-art baselines like Discounted Predictive CFR+ by using non-intuitive mechanisms to manage regret accumulation and policy derivation.
  • VAD-CFR’s Adaptive Mechanisms: VAD-CFR utilizes a volatility-sensitive discounting schedule that tracks learning instability via an Exponential Weighted Moving Average (EWMA). It also features an ‘Asymmetric Instantaneous Boosting’ factor of 1.1 for positive regrets and a hard warm-start that delays policy averaging until iteration 500 to filter out early-stage noise.
  • Discovery of SHOR-PSRO: For population-based training, AlphaEvolve discovered Smoothed Hybrid Optimistic Regret (SHOR-) PSRO. This variant utilizes a hybrid meta-solver that blends Optimistic Regret Matching with a smoothed, temperature-controlled distribution over best pure strategies to improve convergence speed and stability.
  • Dynamic Annealing and Asymmetry: SHOR-PSRO automates the transition from exploration to exploitation by annealing its blending factor and diversity bonuses during training. The search also discovered a performance-boosting asymmetry where the training-time solver uses time-averaging for stability while the evaluation-time solver uses a reactive last-iterate strategy.

Check out the Paper.



