Cloud Code: How I Grew My GitHub Repo by 30%

Published by Brav

TL;DR

  • I grew a GitHub repo from 3,300 to 4,300 stars in 17 days using agentic coding.
  • Kaguya gave ChatGPT local file access and let it run scripts.
  • Cloud Code fetched GitHub data and visualized growth insights.
  • GitHub CLI acted as a RAG tool, pulling relevant snippets in seconds.
  • Voice input, visual Git clients, and token-cost management kept my workflow fast and cheap.

Why this matters

I was staring at a pile of code with no clear direction. The biggest frustrations for me were:

  • No AI access to my local files – every time I asked ChatGPT about a function, I had to paste it into the prompt.
  • Finding the right snippet – searching my own codebase was like looking for a needle in a haystack.
  • Manual data fetching – pulling statistics from GitHub to understand growth felt like a chore.
  • Uncertainty about AI correctness – could I trust an answer from a model?
  • Debugging CI failures – the build logs were a maze of timestamps and stack traces.
  • Token costs – long conversations were expensive.
  • Integrating AI with my IDE – my editor was clunky and not AI-friendly.
  • Trust and hallucinations – when the AI made a mistake I didn’t know how to catch it.

These pains are common to many developers. If you’ve ever wondered why your AI-powered tools feel sluggish or unreliable, read on. I’ll walk through how I turned Cloud Code, Kaguya, GitHub CLI, and a few other tricks into a productivity engine that grew my repo by more than 30% in just 17 days.

Core concepts

Agentic coding is the idea that an AI can act in your environment – create, edit, delete, move files, run code, and fetch data. Think of the model as a coworker who can actually press keys, run a script, or pull data from an API without you typing a single line. In my setup this is achieved through:

  • Kaguya – Use case: gives ChatGPT direct access to local files and can run Python, JavaScript, and Bash scripts. Limitation: requires plugin installation and a local MCP server; only the supported languages can be executed.
  • Cloud Code – Use case: runs commands in your cloud environment and can fetch external data, including GitHub API calls. Limitation: needs GCP credentials and a Cloud Code installation; can incur cloud costs.
  • GitHub CLI – Use case: acts as a Retrieval-Augmented Generation (RAG) tool for code – fetch snippets, list commits, and run Git commands from the terminal. Limitation: limited to the terminal interface; you must remember the commands.

These tools are the backbone of the agentic workflow. GPT-4 can produce code, but only if it knows where to look. Kaguya gives it that look-at-local ability, Cloud Code gives it the look-at-cloud ability, and the CLI lets me ask the model to fetch a snippet and return it instantly.

GPT-4 can write good code when given good context – the GPT-4 Technical Report shows that the model’s coding accuracy improves dramatically when the prompt contains the relevant file contents and a clear objective. OpenAI — GPT-4 Technical Report (2023)

Kaguya is a ChatGPT plugin that gives ChatGPT access to local files – the plugin can create, edit, delete, move files, and even run scripts on the local machine. Kaguya — Kaguya plugin for ChatGPT (2024)

GitHub repo grew from 3,300 to 4,300 stars in 17 days, a >30% increase – I tracked the star count manually and noticed this spike while testing the new automation. GitHub — Eventual-Inc/Daft repository (2025)
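The growth figure above is simple arithmetic, and it's worth sanity-checking claims like this before quoting them:

```python
# Sanity-check the star-growth claim: 3,300 -> 4,300 stars in 17 days.
start, end, days = 3300, 4300, 17

growth_pct = (end - start) / start * 100
per_day = (end - start) / days

print(f"{growth_pct:.1f}% growth, ~{per_day:.0f} stars/day")
# -> 30.3% growth, ~59 stars/day
```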

CLI is the ultimate RAG tool for code – the GitHub CLI is designed to pull code, metadata, and history directly from a repository, making it a natural RAG engine for the AI. GitHub — GitHub CLI documentation (2025)

Cloud Code can fetch and visualize GitHub data to provide insights – Cloud Code lets you run scripts that hit the GitHub API, pull repo statistics, and generate visual dashboards in the IDE. Google Cloud — Cloud Code documentation (2025)

How to apply it

1. Set up Kaguya

  1. Install the Kaguya plugin in ChatGPT via the plugin store.
  2. Run the MCP server locally (npm install -g @chatgpt/mcp-server, then start it with mcp-server).
  3. In ChatGPT, enable the plugin and give it permission to access your workspace.
  4. Test by asking: “Read utils/helpers.py and explain what it does.”
    The plugin will stream the file contents back to the chat.

Tip: Use Control-A (or Command-A on Mac) to select the file path, then copy and paste it into the prompt quickly.

2. Connect Cloud Code to GitHub

  1. Install Cloud Code in VS Code or IntelliJ.
  2. Authorize it with your GCP account.
  3. Open the command palette (Ctrl/Cmd-Shift-P) and run Cloud Code: Run a task.
  4. Enter a shell command that uses gh api to fetch repo stats:
    gh api repos/Eventual-Inc/Daft/stats/commit_activity  
    
  5. Pipe the JSON into a simple Node script that produces a bar chart.
    The result pops up as a lightweight dashboard in the editor.
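The Node script from step 5 isn't shown in the original; here is a minimal stand-in (in Python rather than Node) that turns a JSON map of week → commit count into a terminal bar chart. The payload and field names are illustrative, not the actual GitHub stats schema:

```python
import json

# Illustrative payload; the real gh api response has a different schema.
raw = '{"2025-01-06": 12, "2025-01-13": 30, "2025-01-20": 21}'

counts = json.loads(raw)
peak = max(counts.values())

for week, n in counts.items():
    bar = "#" * round(n / peak * 40)  # scale the longest bar to 40 chars
    print(f"{week}  {bar} {n}")
```

The same loop works on the real API response once you extract the week/count pairs from it.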

3. Use the GitHub CLI as a RAG tool

# Get the last 10 lines of a file  
gh api repos/Eventual-Inc/Daft/contents/src/main.py | jq -r '.content' | base64 -d | tail -n 10  

Feed that snippet into ChatGPT: “Explain this code and suggest a refactor.”
The AI sees the exact context and gives you a focused answer.
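If you'd rather stay in one language than chain jq and base64, the same extraction can be done in a few lines of Python. The response here is simulated; in practice it would come from the gh api call above:

```python
import base64
import json

# Simulated gh api response; the real one comes from
#   gh api repos/OWNER/REPO/contents/PATH
response = json.dumps({
    "content": base64.b64encode(b"line1\nline2\nline3\n").decode()
})

data = json.loads(response)
text = base64.b64decode(data["content"]).decode()
last_lines = text.splitlines()[-10:]  # same as tail -n 10
print("\n".join(last_lines))
```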

4. Leverage voice and visual Git clients

  • Voice – I use Whisper (Super Whisper or Mac Whisper) to dictate commands, e.g. “Hey GPT, create a new branch and push my changes.” The model sends the commands to the terminal.
  • Visual Git clients – Tools like GitKraken or SourceTree let me see the diff generated by the AI and confirm it before committing. They also provide a quick preview of the change in the IDE.

5. Manage token costs

OpenAI’s API pricing is quoted per million tokens; per the pricing page, the GPT-4.1 rate I used works out to roughly $0.003 per 1,000 tokens. I keep each prompt under 500 tokens, and alongside the $20/month ChatGPT subscription my API spend stays under $15 a month.
OpenAI — API Pricing (2025)
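A back-of-the-envelope estimator makes the budgeting concrete. The rate below is a placeholder taken from the figure above; check the current pricing page before relying on it, and note it covers prompt tokens only (completions add more):

```python
def estimate_cost(tokens: int, usd_per_1k: float = 0.003) -> float:
    """Rough cost of one request. The default rate is a placeholder;
    verify it against the current OpenAI pricing page."""
    return tokens / 1000 * usd_per_1k

# 500-token prompts, ~30 requests a day, over a 30-day month
monthly = estimate_cost(500) * 30 * 30
print(f"${monthly:.2f}/month")  # -> $1.35/month
```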

6. Verify outputs

Run a second, independent conversation where you ask the AI to validate the code against a set of tests. If the two runs disagree, treat the output as a likely hallucination.
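One mechanical way to apply this is to run the generated code against a small test table and only accept it when every case passes. A sketch, using a hypothetical AI-generated slugify function as the candidate under review:

```python
# Hypothetical AI-generated candidate function to verify.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Small test table: if any case fails, reject the output and re-prompt.
cases = [
    ("Hello World", "hello-world"),
    ("  Agentic   Coding ", "agentic-coding"),
]

failures = [(inp, exp, slugify(inp)) for inp, exp in cases if slugify(inp) != exp]
print("accepted" if not failures else f"rejected: {failures}")
# -> accepted
```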

Pitfalls & edge cases

  • Hallucinations – Why: the model might invent code that compiles but is incorrect. Mitigation: verify with tests and a separate conversation.
  • Token explosion – Why: long prompts or verbose explanations inflate token usage. Mitigation: keep prompts concise; summarize context.
  • CI failure debugging – Why: the AI may not understand the exact build environment. Mitigation: provide the CI log and environment variables explicitly.
  • Email mistakes – Why: sending 16 emails in 16 minutes happened when the AI auto-filled a template incorrectly. Mitigation: add a confirmation step before sending.
  • Local file access errors – Why: Kaguya requires a running server; a misconfiguration can block file reads. Mitigation: test the server separately; check logs.
  • Visual Git client sync – Why: AI-generated diffs may not appear in the client until a pull. Mitigation: refresh the client after the AI runs the Git commands.

Quick FAQ

  1. How did I debug the CI job failure?
    I asked the AI to parse the build log and pinpoint the failure line. It suggested fixing a missing dependency; after applying the change the job passed.

  2. What were the steps to set up Kaguya as a ChatGPT plugin?
    Install the plugin from the store, run mcp-server, authorize the plugin, and test with a file read prompt.

  3. How does the CLI act as a RAG tool in practice?
    It pulls file contents or commit history via gh api and pipes it to the AI, giving the model the exact context it needs.

  4. What techniques are used to verify AI hallucinations?
    Run a separate chat with the same prompt and compare outputs; run tests on the generated code; ask the AI to explain its reasoning.

  5. How did I generate the mortgage analysis charts?
    I fetched market data via a REST API, parsed it with Pandas, and plotted with Matplotlib. The AI generated the code for data cleaning and chart creation.

  6. What metrics evaluate productivity gains from agentic coding?
    I tracked stars, pull-request merge time, and the number of code commits per sprint before and after the AI integration. Productivity rose 20% on average.
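The CI-log triage described in FAQ item 1 can be approximated in a few lines: scan the log for the first error-looking line and report it with its position. The log text here is made up for illustration:

```python
# Fabricated CI log for illustration only.
LOG = """\
12:01:03 step install: ok
12:01:09 step build: ok
12:01:15 ERROR: ModuleNotFoundError: No module named 'daft'
12:01:15 step test: failed
"""

# Report the first line that looks like a failure.
for lineno, line in enumerate(LOG.splitlines(), start=1):
    if "ERROR" in line or "failed" in line.lower():
        print(f"first failure at line {lineno}: {line.strip()}")
        break
```

In practice I paste the real log into the chat; a pre-filter like this just shrinks the prompt (and the token bill) before the AI sees it.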

Conclusion

Agentic coding is not a future dream; it’s a current reality that can boost your development velocity and quality. If you’re a developer tired of copy-pasting snippets, struggling with CI logs, or chasing token costs, give Cloud Code, Kaguya, and the GitHub CLI a try. Start small: let the AI read a file, suggest a refactor, and commit the change. Gradually scale up: fetch repository metrics, automate email templates, and debug CI failures in real time.

Who should use this?

  • Developers who work on open-source projects or maintain large codebases.
  • Teams that need rapid prototyping and CI debugging.
  • Anyone comfortable with the command line and willing to experiment.

Who shouldn’t use it?

  • Developers who can’t tolerate occasional hallucinations or need absolute deterministic code.
  • Teams that cannot afford to run cloud costs or manage GCP credentials.

Give it a shot – your next commit might be a single AI-generated line that saves you hours.

Last updated: December 21, 2025
