Friday, March 13, 2026

How to Build an Autonomous Machine Learning Research Loop in Google Colab Using Andrej Karpathy’s AutoResearch Framework for Hyperparameter Discovery and Experiment Tracking


In this tutorial, we implement a Colab-ready version of the AutoResearch framework originally proposed by Andrej Karpathy. We build an automated experimentation pipeline that clones the AutoResearch repository, prepares a lightweight training environment, and runs a baseline experiment to establish initial performance metrics. We then create an automated research loop that programmatically edits the hyperparameters in train.py, runs new training iterations, evaluates the resulting model using the validation bits-per-byte metric, and logs every experiment in a structured results table. By running this workflow in Google Colab, we demonstrate how we can reproduce the core idea of autonomous machine learning research: iteratively modifying training configurations, evaluating performance, and preserving the best configurations, without requiring specialized hardware or complex infrastructure.
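Before diving into the code, the loop we are about to build can be sketched in a few lines. The names below are illustrative, not from the repository: a candidate configuration is proposed, trained and scored, and kept only if it beats the best validation bits-per-byte seen so far.

```python
def research_loop(propose, train_and_eval, n_trials, baseline_bpb):
    # Greedy acceptance: a candidate survives only if it beats the best
    # validation bits-per-byte seen so far; otherwise it is discarded.
    best = baseline_bpb
    history = []
    for _ in range(n_trials):
        candidate = propose()                  # e.g. sample hyperparameters
        bpb = train_and_eval(candidate)        # e.g. patch train.py and run it
        kept = bpb is not None and bpb < best  # lower val_bpb is better
        if kept:
            best = bpb
        history.append((candidate, bpb, "keep" if kept else "discard"))
    return best, history


# Toy run with stub callables (no training involved):
best, hist = research_loop(lambda: {"DEPTH": 4}, lambda c: 0.9,
                           n_trials=2, baseline_bpb=1.0)
print(best, [h[2] for h in hist])
```

The rest of the tutorial fills in the real versions of `propose` (random sampling from a search space) and `train_and_eval` (patching `train.py` and parsing its log).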

import os, sys, subprocess, json, re, random, shutil, time
from pathlib import Path


def pip_install(pkg):
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", pkg])


for pkg in [
    "numpy","pandas","pyarrow","requests",
    "rustbpe","tiktoken","openai"
]:
    try:
        __import__(pkg)
    except ImportError:
        pip_install(pkg)


import pandas as pd


if not Path("autoresearch").exists():
    subprocess.run(["git","clone","https://github.com/karpathy/autoresearch.git"],check=True)


os.chdir("autoresearch")


OPENAI_API_KEY=None
try:
    from google.colab import userdata
    OPENAI_API_KEY=userdata.get("OPENAI_API_KEY")
except Exception:
    OPENAI_API_KEY=os.environ.get("OPENAI_API_KEY")


if OPENAI_API_KEY:
    os.environ["OPENAI_API_KEY"]=OPENAI_API_KEY

We begin by importing the core Python libraries required for the automated research workflow. We install all necessary dependencies and clone the autoresearch repository directly from GitHub, ensuring the environment includes the original training framework. We also configure access to the OpenAI API key, if available, allowing the system to optionally support LLM-assisted experimentation later in the pipeline.

prepare_path=Path("prepare.py")
train_path=Path("train.py")
program_path=Path("program.md")


prepare_text=prepare_path.read_text()
train_text=train_path.read_text()


prepare_text=re.sub(r"MAX_SEQ_LEN = \d+","MAX_SEQ_LEN = 512",prepare_text)
prepare_text=re.sub(r"TIME_BUDGET = \d+","TIME_BUDGET = 120",prepare_text)
prepare_text=re.sub(r"EVAL_TOKENS = .*","EVAL_TOKENS = 4 * 65536",prepare_text)


train_text=re.sub(r"DEPTH = \d+","DEPTH = 4",train_text)
train_text=re.sub(r"DEVICE_BATCH_SIZE = \d+","DEVICE_BATCH_SIZE = 16",train_text)
train_text=re.sub(r"TOTAL_BATCH_SIZE = .*","TOTAL_BATCH_SIZE = 2**17",train_text)
train_text=re.sub(r'WINDOW_PATTERN = "SSSL"','WINDOW_PATTERN = "L"',train_text)


prepare_path.write_text(prepare_text)
train_path.write_text(train_text)


program_path.write_text("""
Goal:
Run autonomous research loop on Google Colab.


Rules:
Only modify train.py hyperparameters.


Metric:
Lower val_bpb is better.
""")


subprocess.run(["python","prepare.py","--num-shards","4","--download-workers","2"])

We modify key configuration parameters inside the repository to make the training workflow compatible with Google Colab hardware. We reduce the context length, training time budget, and evaluation token counts so the experiments run within limited GPU resources. After applying these patches, we prepare the dataset shards required for training so that the model can immediately begin experiments.
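One caveat about this patching style: `re.sub` silently does nothing when a pattern fails to match, for example if a constant is renamed upstream, and the run would then train with the old value. A slightly safer variant (a hypothetical helper, not part of the repository) uses `re.subn` and checks the substitution count:

```python
import re

def patch_constant(text, name, value):
    # re.subn returns (new_text, number_of_substitutions), letting us
    # fail loudly instead of silently training with the old value.
    new_text, n = re.subn(rf"^{name}\s*=.*$", f"{name} = {value}",
                          text, flags=re.MULTILINE)
    if n == 0:
        raise ValueError(f"constant {name} not found")
    return new_text

print(patch_constant("DEPTH = 12\n", "DEPTH", "4"))
```

The same anchored `^NAME\s*=` pattern is used throughout the tutorial, so the check applies to every patch site.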

subprocess.run("python train.py > baseline.log 2>&1",shell=True)


def parse_run_log(log_path):
    text=Path(log_path).read_text(errors="ignore")
    def find(p):
        m=re.search(p,text,re.MULTILINE)
        return float(m.group(1)) if m else None
    return {
        "val_bpb":find(r"^val_bpb:\s*([0-9.]+)"),
        "training_seconds":find(r"^training_seconds:\s*([0-9.]+)"),
        "peak_vram_mb":find(r"^peak_vram_mb:\s*([0-9.]+)"),
        "num_steps":find(r"^num_steps:\s*([0-9.]+)")
    }


baseline=parse_run_log("baseline.log")


results_path=Path("results.tsv")


rows=[{
    "commit":"baseline",
    "val_bpb":baseline["val_bpb"] if baseline["val_bpb"] else 0,
    "memory_gb":round((baseline["peak_vram_mb"] or 0)/1024,1),
    "status":"keep",
    "description":"baseline"
}]


pd.DataFrame(rows).to_csv(results_path,sep="\t",index=False)


print("Baseline:",baseline)

We execute the baseline training run to establish an initial performance reference for the model. We implement a log-parsing function that extracts key training metrics, including validation bits-per-byte, training time, GPU memory usage, and optimization steps. We then store these baseline results in a structured experiment table so that all future experiments can be compared against this starting configuration.
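To see the parsing step in isolation, here is a self-contained mini-version operating on a synthetic log string (the metric values below are made up). It uses the same anchored-regex idea as `parse_run_log` above, but needs no training run:

```python
import re

# Synthetic log text standing in for baseline.log (values are made up).
SAMPLE_LOG = """step 100 loss 1.23
val_bpb: 1.0452
training_seconds: 118.7
"""

def parse_metrics(text):
    # Anchor each metric name at the start of a line and capture the number.
    def find(p):
        m = re.search(p, text, re.MULTILINE)
        return float(m.group(1)) if m else None
    return {"val_bpb": find(r"^val_bpb:\s*([0-9.]+)"),
            "training_seconds": find(r"^training_seconds:\s*([0-9.]+)")}

print(parse_metrics(SAMPLE_LOG))
```

Missing metrics simply come back as `None`, which the experiment loop later treats as a failed run.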

TRAIN_FILE=Path("train.py")
BACKUP_FILE=Path("train.base.py")


if not BACKUP_FILE.exists():
    shutil.copy2(TRAIN_FILE,BACKUP_FILE)


HP_KEYS=[
    "WINDOW_PATTERN",
    "TOTAL_BATCH_SIZE",
    "EMBEDDING_LR",
    "UNEMBEDDING_LR",
    "MATRIX_LR",
    "SCALAR_LR",
    "WEIGHT_DECAY",
    "ADAM_BETAS",
    "WARMUP_RATIO",
    "WARMDOWN_RATIO",
    "FINAL_LR_FRAC",
    "DEPTH",
    "DEVICE_BATCH_SIZE"
]


def read_text(path):
    return Path(path).read_text()


def write_text(path,text):
    Path(path).write_text(text)


def extract_hparams(text):
    vals={}
    for k in HP_KEYS:
        m=re.search(rf"^{k}\s*=\s*(.+?)$",text,re.MULTILINE)
        if m:
            vals[k]=m.group(1).strip()
    return vals


def set_hparam(text,key,value):
    return re.sub(rf"^{key}\s*=.*$",f"{key} = {value}",text,flags=re.MULTILINE)


base_text=read_text(BACKUP_FILE)
base_hparams=extract_hparams(base_text)


SEARCH_SPACE={
    "WINDOW_PATTERN":['"L"','"SSSL"'],
    "TOTAL_BATCH_SIZE":["2**16","2**17","2**18"],
    "EMBEDDING_LR":["0.2","0.4","0.6"],
    "MATRIX_LR":["0.01","0.02","0.04"],
    "SCALAR_LR":["0.3","0.5","0.7"],
    "WEIGHT_DECAY":["0.05","0.1","0.2"],
    "ADAM_BETAS":["(0.8,0.95)","(0.9,0.95)"],
    "WARMUP_RATIO":["0.0","0.05","0.1"],
    "WARMDOWN_RATIO":["0.3","0.5","0.7"],
    "FINAL_LR_FRAC":["0.0","0.05"],
    "DEPTH":["3","4","5","6"],
    "DEVICE_BATCH_SIZE":["8","12","16","24"]
}


def sample_candidate():
    keys=random.sample(list(SEARCH_SPACE.keys()),random.choice([2,3,4]))
    cand=dict(base_hparams)
    changes={}
    for k in keys:
        cand[k]=random.choice(SEARCH_SPACE[k])
        changes[k]=cand[k]
    return cand,changes


def apply_hparams(candidate):
    text=read_text(BACKUP_FILE)
    for k,v in candidate.items():
        text=set_hparam(text,k,v)
    write_text(TRAIN_FILE,text)


def run_experiment(tag):
    log=f"{tag}.log"
    subprocess.run(f"python train.py > {log} 2>&1",shell=True)
    metrics=parse_run_log(log)
    metrics["log"]=log
    return metrics

We build the core utilities that enable automated hyperparameter experimentation. We extract the hyperparameters from train.py, define the searchable parameter space, and implement functions that can programmatically edit these values. We also create mechanisms to generate candidate configurations, apply them to the training script, and run experiments while recording their outputs.
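The extract/set helpers are easiest to understand as a round trip over an in-memory string. The toy text below uses two hypothetical constants and mirrors the regex logic of `extract_hparams` and `set_hparam` above, with no file I/O involved:

```python
import re

# Toy module text with two hypothetical constants; no file I/O needed.
toy = "DEPTH = 4\nMATRIX_LR = 0.02\n"

def extract(text, keys):
    # Mirror of extract_hparams: capture the right-hand side of each KEY = ...
    vals = {}
    for k in keys:
        m = re.search(rf"^{k}\s*=\s*(.+?)$", text, re.MULTILINE)
        if m:
            vals[k] = m.group(1).strip()
    return vals

def set_val(text, key, value):
    # Mirror of set_hparam: rewrite the whole KEY = ... line in place.
    return re.sub(rf"^{key}\s*=.*$", f"{key} = {value}", text, flags=re.MULTILINE)

print(extract(toy, ["DEPTH", "MATRIX_LR"]))
print(set_val(toy, "DEPTH", "6").splitlines()[0])
```

Because values are kept as verbatim source strings (e.g. `"2**17"` or `"(0.9,0.95)"`), candidates can be written straight back into `train.py` without any type conversion.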

N_EXPERIMENTS=3


df=pd.read_csv(results_path,sep="\t")
best=df["val_bpb"].replace(0,999).min()


for i in range(N_EXPERIMENTS):
    tag=f"exp_{i+1}"
    candidate,changes=sample_candidate()
    apply_hparams(candidate)
    metrics=run_experiment(tag)

    if metrics["val_bpb"] and metrics["val_bpb"]<best:
        status="keep"
        best=metrics["val_bpb"]
        shutil.copy2(TRAIN_FILE,BACKUP_FILE)
    else:
        status="discard"
        shutil.copy2(BACKUP_FILE,TRAIN_FILE)

    row={
        "commit":tag,
        "val_bpb":metrics["val_bpb"] or 0,
        "memory_gb":round((metrics["peak_vram_mb"] or 0)/1024,1),
        "status":status,
        "description":str(changes)
    }

    df=pd.concat([df,pd.DataFrame([row])],ignore_index=True)
    df.to_csv(results_path,sep="\t",index=False)

    print("Experiment",tag)
    print("Changes:",changes)
    print("Metrics:",metrics)
    print("Status:",status)
    print()


print("Final Results")
print(df.sort_values("val_bpb"))


try:
    from google.colab import files
    files.download("train.py")
    files.download("results.tsv")
except Exception:
    pass

We run the automated research loop that repeatedly proposes new hyperparameter configurations and evaluates their performance. For each experiment, we modify the training script, run the training process, and compare the resulting validation score with the best configuration discovered so far. We log all experiment results, preserve improved configurations, and export the best training script along with the experiment history for further analysis.
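The keep/discard rule is a randomized greedy hill climb: revert `train.py` from the backup unless a candidate strictly improves `val_bpb`. The stub below isolates that policy with a fake evaluator and a fixed seed (nothing here touches training; scores are invented):

```python
import random

random.seed(0)

def evaluate(candidate):
    # Stand-in for run_experiment(): returns a fake val_bpb score.
    return 1.0 + random.random()

best_cfg = {"commit": "baseline"}
best_score = evaluate(best_cfg)
for i in range(5):
    cand = {"trial": i}
    score = evaluate(cand)
    if score < best_score:  # lower bits-per-byte wins
        best_score, best_cfg = score, cand
    # else: in the real loop this is where train.py is restored from backup

print(best_cfg, round(best_score, 4))
```

Because only strict improvements are accepted, the best score is monotonically non-increasing across trials, which is exactly the invariant the backup-and-restore dance in the real loop maintains.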

In conclusion, we constructed a complete automated research workflow that demonstrates how machines can iteratively explore model configurations and improve training performance with minimal manual intervention. Throughout the tutorial, we prepared the dataset, established a baseline experiment, and implemented a search loop that proposes new hyperparameter configurations, runs experiments, and tracks results across multiple trials. By maintaining experiment logs and automatically preserving improved configurations, we created a reproducible and extensible research process that mirrors the workflow used in modern machine learning experimentation. This approach illustrates how we can combine automation, experimentation tracking, and lightweight infrastructure to accelerate model development and enable scalable research directly from a cloud notebook environment.

