<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>code.charliegleason.com</title>
    <description>Code, resources, and thoughts on design and front-end development</description>
    <link>https://code.charliegleason.com</link>
    <atom:link href="https://code.charliegleason.com/resources/rss" rel="self" type="application/rss+xml"/>
    <language>en-us</language>
    <lastBuildDate>Sat, 11 Apr 2026 21:10:28 GMT</lastBuildDate>
    
    <item>
      <title>Dotfiles: One command to set up any Mac</title>
      <description>How I went from hours of manual setup to a single command that configures a brand new Mac with all my tools, settings, and preferences.</description>
      <link>https://code.charliegleason.com/dotfiles-one-command-setup</link>
      <guid isPermaLink="true">https://code.charliegleason.com/dotfiles-one-command-setup</guid>
      <pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>Setting up a new Mac used to take me days. I'd install Homebrew, configure my shell, set up Git with SSH keys, install Node, download my editor extensions, and inevitably realize at some point that I'd forgotten to copy over some config file.</p>
<p>I came across <a href="https://github.com/elithrar/dotfiles/">Matt Silverlock's dotfiles</a> and realized this could be automated. The idea is simple - version control your configuration so one command reproduces your entire environment on a new machine.</p>
<p>My dotfiles now handle everything from Xcode Command Line Tools to VS Code extensions. What used to take hours now happens while I make a crisp beverage.</p>
<h2>What it does</h2>
<ol>
<li><strong>Checks prerequisites</strong> - Verifies macOS and internet connection</li>
<li><strong>Installs Xcode CLT</strong> - The foundation for everything else</li>
<li><strong>Installs Homebrew</strong> - Package manager</li>
<li><strong>Installs packages</strong> - From a Brewfile with CLI tools and apps</li>
<li><strong>Sets up mise</strong> - Manages Node, Bun, pnpm, Python versions</li>
<li><strong>Configures zsh</strong> - oh-my-zsh with plugins and theme</li>
<li><strong>Generates SSH keys</strong> - Ed25519 keys with commit signing</li>
<li><strong>Symlinks dotfiles</strong> - GNU Stow links configs into place</li>
<li><strong>Sets up secrets</strong> - API token configuration</li>
<li><strong>Installs editor extensions</strong> - VS Code and Cursor</li>
<li><strong>Imports Raycast settings</strong> - Window management and shortcuts</li>
</ol>
<p>The script is idempotent - safe to run multiple times, skipping what's already installed.</p>
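<p>Idempotency mostly comes down to a guard in front of each step - check for the result before doing any work. A minimal sketch of the pattern (the path and file contents here are illustrative, not from my actual script):</p>
<pre><code class="language-bash"># Only write the config if it isn't already there
target=/tmp/demo-zshrc
if [ -f "$target" ]; then
  echo "already configured, skipping"
else
  echo 'export EDITOR=vim' > "$target"
  echo "configured"
fi
</code></pre>
<p>The same shape works for packages (<code>command -v brew</code>) and symlinks (<code>[ -L "$HOME/.zshrc" ]</code>).</p>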
<h2>Work mode</h2>
<p>The <code>--work</code> flag separates personal and work setups: I don't need certain tools or apps on a work laptop, but I still want a consistent environment for non-sensitive configuration.</p>
<p>Running <code>sh install.sh --work</code> swaps the Brewfile:</p>
<ul>
<li><code>Brewfile</code> - Core tools for everyone</li>
<li><code>Brewfile.personal</code> - Personal stuff</li>
<li><code>Brewfile.work</code> - Work-specific stuff</li>
</ul>
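<p>Inside <code>install.sh</code>, the flag handling can be as simple as picking a suffix - a sketch of the idea, not my exact script (the <code>brew bundle</code> calls are shown as comments):</p>
<pre><code class="language-bash"># Default to the personal Brewfile, switch on --work
mode="personal"
for arg in "$@"; do
  if [ "$arg" = "--work" ]; then mode="work"; fi
done
# brew bundle --file=Brewfile
# brew bundle --file="Brewfile.$mode"
echo "Selected Brewfile.$mode"
</code></pre>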
<h2>CLI tools that make a difference</h2>
<p>These tools fundamentally change how I work in the terminal:</p>
<ul>
<li><strong><a href="https://github.com/sharkdp/bat">bat</a></strong> - Syntax-highlighting <code>cat</code></li>
<li><strong><a href="https://github.com/dandavison/delta">delta</a></strong> - Beautiful git diffs</li>
<li><strong><a href="https://github.com/sharkdp/fd">fd</a></strong> - Intuitive <code>find</code> replacement</li>
<li><strong><a href="https://github.com/junegunn/fzf">fzf</a></strong> - Fuzzy finder bound to <code>Ctrl+T</code> and <code>Ctrl+R</code></li>
<li><strong><a href="https://github.com/BurntSushi/ripgrep">ripgrep</a></strong> - Fast search respecting <code>.gitignore</code></li>
<li><strong><a href="https://github.com/ajeetdsouza/zoxide">zoxide</a></strong> - Smarter <code>cd</code> that learns your directories</li>
</ul>
<h2>Managing configuration with Stow</h2>
<p><a href="https://www.gnu.org/software/stow/">GNU Stow</a> manages the dotfiles by creating symlinks from <code>~</code> back to the repo. Edit <code>~/.zshrc</code> and the change is immediately in version control.</p>
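<p>Under the hood it's just symlinks. A toy demonstration of what <code>stow zsh</code> effectively does, using throwaway paths rather than my real layout:</p>
<pre><code class="language-bash"># A fake "repo" with a zsh package, and a fake home directory
mkdir -p /tmp/dotfiles-demo/zsh /tmp/home-demo
echo 'export EDITOR=vim' > /tmp/dotfiles-demo/zsh/.zshrc
# Stow would create this link for you, pointing home back at the repo
ln -sf /tmp/dotfiles-demo/zsh/.zshrc /tmp/home-demo/.zshrc
readlink /tmp/home-demo/.zshrc
</code></pre>
<p>The real invocation is along the lines of <code>stow --target="$HOME" zsh</code>, run from the repo root.</p>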
<p>Stow has a <code>.stow-local-ignore</code> file that prevents certain files from being linked. I use it to skip the README, install scripts, and anything else that shouldn't live in <code>~</code>:</p>
<pre><code>\.git
\.gitignore
README.md
install.sh
Brewfile
</code></pre>
<p>My <code>.zshrc</code> configures fzf with custom previews, integrates zoxide, and activates mise. Everything is configured exactly how I like it, on every machine.</p>
<h2>Should you do this?</h2>
<p>If you set up more than one Mac every couple of years, yes. The investment pays for itself quickly. It's also very comforting.</p>
<p>Start small - fork my repo or <a href="https://github.com/elithrar/dotfiles/">Matt's</a>, and edit it to your liking. You could just use Homebrew, your shell, and your editor, and add more as you find repetitive tasks.</p>]]></content:encoded>
    </item>
    <item>
      <title>Rebuilding charliegleason.com</title>
      <description>How I rebuilt my personal site&apos;s authentication and infrastructure as a pnpm monorepo with Astro, Cloudflare Workers, Durable Objects, and real-time features.</description>
      <link>https://code.charliegleason.com/rebuilding-charliegleason-com</link>
      <guid isPermaLink="true">https://code.charliegleason.com/rebuilding-charliegleason-com</guid>
      <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>Back in June 2024, I wrote about <a href="/public-private-remix-routes">a hack I'd cobbled together</a> to open-source my personal site while keeping some routes behind a password. The gist: <code>npm link</code> and <code>sync-directory</code> would watch a private repo and pipe protected routes into the public Remix app's <code>node_modules</code>. It worked. It also felt like it could fall apart at any moment, which didn't feel great.</p>
<p>The site has been completely rebuilt. The new <a href="https://charliegleason.com">charliegleason.com</a> is a proper <code>pnpm</code> monorepo running <a href="https://astro.build/">Astro</a> on <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a>, with <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a> for real-time features and a real session-based authentication system. No more symlinks, either.</p>
<h2>The architecture</h2>
<p>The monorepo has four packages:</p>
<ul>
<li><strong><code>@charliegleason/web</code></strong> - The Astro SSR site, deployed as a Cloudflare Worker.</li>
<li><strong><code>@charliegleason/visitor-counter</code></strong> - A Durable Object that tracks real-time visitors via WebSocket.</li>
<li><strong><code>@charliegleason/lastfm-tracker</code></strong> - A Durable Object that broadcasts what I'm listening to on Last.fm via WebSocket.</li>
<li><strong><code>@charliegleason/private</code></strong> - Protected content that never leaves the private repo.</li>
</ul>
<p>Each package has its own <code>wrangler.jsonc</code> and its own deploy step. You can find them all on GitHub: <a href="https://github.com/charliegleason/charliegleason.com">charliegleason/charliegleason.com</a>.</p>
<h2>Authentication flow</h2>
<p>The auth system is deliberately simple. Password-based, session-stored, no OAuth dance.</p>
<p>The flow: visit a protected route, middleware redirects you to <code>/login</code>, and you enter the password. A session gets created in Cloudflare KV with a 7-day TTL, a cookie gets set, and you're redirected back to where you were trying to go. KV handles expiration automatically.</p>
<p>I'm using <a href="https://effect.website/">Effect</a> for the session management, mostly because I wanted typed error handling without a bunch of try/catch nesting. The <code>createSession</code> function is a good example of what that looks like:</p>
<pre><code class="language-ts">export const createSession = (
  kv: KVNamespace,
  userId: string,
): Effect.Effect&#x3C;string, KVError> =>
  Effect.gen(function* () {
    const sessionId = crypto.randomUUID();
    const now = Date.now();

    const session: Session = {
      userId,
      createdAt: now,
      expiresAt: now + SESSION_DURATION_MS,
    };

    yield* Effect.tryPromise({
      try: () =>
        kv.put(`session:${sessionId}`, JSON.stringify(session), {
          expirationTtl: SESSION_DURATION_SECONDS,
        }),
      catch: (error) =>
        new KVError({
          operation: "put",
          key: `session:${sessionId}`,
          message: "Failed to create session",
          cause: error,
        }),
    });

    yield* Effect.logDebug(`Created session: ${sessionId}`);
    return sessionId;
  });
</code></pre>
<p>A UUID session ID, a JSON blob in KV, and a TTL that means I never have to clean up stale sessions. If you're thinking "that's a key-value store with extra steps," you're right, but the extra steps have types.</p>
<h2>Protected routes injection</h2>
<p>This is the part I'm most pleased with. Instead of the old symlink-and-sync approach, I (read: the robots) wrote an Astro integration that scans the private content directory at build time and uses <code>injectRoute</code> to register protected pages:</p>
<pre><code class="language-ts">export default function protectedRoutes(
  options: ProtectedRoutesOptions = {},
): AstroIntegration {
  return {
    name: "protected-routes",
    hooks: {
      "astro:config:setup": ({ injectRoute, config, logger }) => {
        const rootDir = fileURLToPath(config.root);
        const protectedDir = options.protectedDir || "../private/content";
        const contentDir = join(rootDir, protectedDir);

        if (!existsSync(contentDir)) {
          logger.info("No protected content directory found");
          logger.info("This is expected in public mirror builds");
          return;
        }

        const routes = findAstroFiles(contentDir, contentDir);

        for (const route of routes) {
          logger.info(`Injecting protected route: ${route.pattern}`);
          injectRoute({
            pattern: route.pattern,
            entrypoint: route.entrypoint,
          });
        }
      },
    },
  };
}
</code></pre>
<p>The key detail: when the private package isn't there - like in public mirror builds - it logs a friendly message and moves on. No crash, no build failure. The public site builds and deploys perfectly fine without the protected content. The private monorepo builds with everything.</p>
<h2>Durable Objects for real-time features</h2>
<p>I wanted a live visitor counter and a now-playing widget. Both felt like natural fits for Durable Objects - they're stateful, long-lived, and need to broadcast to multiple clients.</p>
<h3>Visitor counter</h3>
<p>The visitor counter is ridiculously simple. The count <em>is</em> the number of open WebSocket connections. No database, no persistence needed. Someone connects, the count goes up. Someone disconnects, the count goes down. Everyone gets a broadcast.</p>
<pre><code class="language-ts">export class VisitorCounter extends DurableObject&#x3C;Env> {
  async fetch(request: Request): Promise&#x3C;Response> {
    const upgradeHeader = request.headers.get("Upgrade");
    if (upgradeHeader !== "websocket") {
      return new Response("Expected WebSocket", { status: 426 });
    }

    const webSocketPair = new WebSocketPair();
    const [client, server] = Object.values(webSocketPair);

    this.ctx.acceptWebSocket(server);
    this.broadcast();

    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  }

  async webSocketClose(ws: WebSocket, code: number, reason: string): Promise&#x3C;void> {
    ws.close(code, reason);
    this.broadcast();
  }

  private getCount(): number {
    return this.ctx.getWebSockets().length;
  }

  private broadcast(): void {
    const count = this.getCount();
    const message = JSON.stringify({ count });
    for (const ws of this.ctx.getWebSockets()) {
      try { ws.send(message); } catch {}
    }
  }
}
</code></pre>
<p>WebSocket hibernation means Cloudflare isn't charging me for idle connections, and the global singleton pattern means everyone sees the same count. It's the kind of feature that's disproportionately fun relative to the effort involved.</p>
<h3>Last.fm tracker</h3>
<p>The Last.fm tracker is slightly more involved. The frontend connects directly to the tracker Worker via WebSocket - it doesn't go through the main Astro app at all. When a connection comes in, the Durable Object immediately sends the current track (if it has one) and starts polling. Every 30 seconds, an Alarm fires, hits the Last.fm API, and checks if the track has changed. If it has, it broadcasts to all connected clients. If not, it does nothing.</p>
<p>The current track gets stored in Durable Object storage so it survives cold starts - when the DO spins back up, it restores from storage in the constructor before accepting any connections. The Last.fm API response includes a <code>nowplaying</code> attribute, which gets passed through as an <code>isNowPlaying</code> flag. The frontend uses that to show "Listening to" with animated equalizer bars when music is playing, or "Last played" with static bars when it's not.</p>
<p>One nice detail: the Alarm only reschedules itself if there are active WebSocket connections. No listeners, no polling. It starts up again when someone connects.</p>
<h2>Hosting on Cloudflare</h2>
<p>Everything runs on Cloudflare's edge. The main Astro site is a Worker. Sessions live in KV. The Durable Objects are deployed as independent Workers with service bindings connecting them to the main app.</p>
<p>The <code>wrangler.jsonc</code> for the web app ties it all together:</p>
<pre><code class="language-json">{
  "name": "astro-charliegleason-com",
  "kv_namespaces": [{ "binding": "SESSION", "id": "..." }],
  "durable_objects": {
    "bindings": [
      { "name": "VISITOR_COUNTER", "class_name": "VisitorCounter", "script_name": "visitor-counter" },
      { "name": "LASTFM_TRACKER", "class_name": "LastFmTracker", "script_name": "lastfm-tracker" }
    ]
  }
}
</code></pre>
<p>One thing that tripped me up: deployment order matters. The Durable Object Workers need to exist before the web app can bind to them. GitHub Actions handles the sequencing - Durable Objects deploy first, then the web app. I learned this the hard way, which is how I learn most things.</p>
<h2>The public mirror</h2>
<p>The mental model has completely flipped from the old approach. Previously, the public repo was the source of truth - the private repo consumed it via <code>npm link</code> and <code>sync-directory</code>. Now the private monorepo is the source of truth, and <code>git subtree push</code> mirrors <code>apps/web/</code> to the <a href="https://github.com/superhighfives/charliegleason.com">public repo</a>. Protected content never leaves the private repo. It never gets committed to the public mirror. It's a much cleaner separation.</p>
<h2>Wrapping up</h2>
<p>The old system worked, but the new setup works better. It's a real monorepo with real auth, real-time features, and a deployment pipeline that doesn't make me nervous. It's the kind of rebuild where the end result looks simple, which I think means it was worth doing.</p>]]></content:encoded>
    </item>
    <item>
      <title>Running Ollama on your desktop GPU from anywhere with Cloudflare Tunnel</title>
      <description>A complete guide to exposing Ollama running on a Windows PC with a 4090 GPU so you can use it remotely with opencode via a secure Cloudflare Tunnel.</description>
      <link>https://code.charliegleason.com/cloudflare-tunnel-ollama-opencode</link>
      <guid isPermaLink="true">https://code.charliegleason.com/cloudflare-tunnel-ollama-opencode</guid>
      <pubDate>Mon, 12 Jan 2026 00:00:00 GMT</pubDate>
      
<content:encoded><![CDATA[<p>At some point in the last few years I thought it was monumentally important that I build a gaming PC. I don't use it as often as I'd hoped, but it turns out the 4090 is great for running local LLMs. I wanted to use it from my laptop when I'm traveling, because tokens are expensive and little projects are great. The solution turned out to be <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/">Cloudflare Tunnel</a> - a secure way to expose Ollama without opening ports or dealing with VPNs.</p>
<p>This is how I set it up to work with <a href="https://opencode.ai">opencode</a>, the open source AI coding assistant that makes it easy to use various LLM providers, including your own.</p>
<h2>The architecture</h2>
<p>The setup looks like this:</p>
<pre><code class="language-bash">[Laptop] → opencode → HTTPS → [Cloudflare Access] → [Cloudflare Tunnel] → [Windows PC:11434] → Ollama → 4090 GPU
</code></pre>
<p>Your laptop talks to Cloudflare over HTTPS, Cloudflare authenticates the request, then forwards it through an encrypted tunnel to your Windows PC (or whatever machine is running the service you want to expose). From opencode's perspective, it's just hitting an HTTPS endpoint. From your PC's perspective, it's just receiving local requests.</p>
<p>No port forwarding, no dynamic DNS, no exposing your home network to the internet. Easy as.</p>
<h2>Why this is useful</h2>
<p>Running LLMs locally is nice. You get full control over the model, no API costs, and reasonable speed if you have decent hardware. But "local" usually means you're sitting at that specific machine.</p>
<p>Cloudflare Tunnel lets you access your local Ollama instance from anywhere - at a coffee shop, your phone, or shared with others. All without the usual networking hassles.</p>
<h2>Part 1: Windows PC setup</h2>
<p>Let's start by getting Ollama running properly on the Windows machine, but these instructions should work for any other service you want to expose.</p>
<h3>Installing Ollama</h3>
<p>Download the Windows installer from <a href="https://ollama.com/download/windows">ollama.com</a> and run it. It'll install natively and automatically use CUDA if you have an NVIDIA GPU.</p>
<h3>Choosing a model</h3>
<p>For coding with tool calling support, Qwen3 works well. The 30B MoE version fits comfortably in 24GB VRAM:</p>
<pre><code class="language-bash">ollama pull qwen3-coder:30b
</code></pre>
<blockquote>
<p><strong>A note on model choice:</strong> I initially tried Qwen 2.5 Coder but ran into issues with tool calling in Ollama. Qwen3 handles it much better. Also avoid the VL (vision-language) variants if you're tight on memory.</p>
</blockquote>
<h3>Allowing network access</h3>
<p>By default, Ollama only listens on localhost. To accept connections from the tunnel, set an environment variable.</p>
<blockquote>
<p><strong>Note:</strong> Ollama does have a setting to expose itself to the network, so you can also try that. I probably should have, on reflection.</p>
</blockquote>
<p>In PowerShell (as administrator):</p>
<pre><code class="language-powershell">[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0:11434", "Machine")
</code></pre>
<p>Restart the Ollama service after setting this. You can do this by finding the Ollama service in Task Manager or in your running applications.</p>
<h3>Installing cloudflared</h3>
<p>The tunnel software is called <code>cloudflared</code>. Install it with <code>winget</code>:</p>
<pre><code class="language-powershell">winget install --id Cloudflare.cloudflared
</code></pre>
<p>If you're on a Mac, you can use <code>brew</code>:</p>
<pre><code class="language-bash">brew install cloudflared
</code></pre>
<p>Or you can download it from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/">Cloudflare downloads page</a>.</p>
<h3>Authenticating cloudflared</h3>
<p>Run this to authenticate with your Cloudflare account:</p>
<pre><code class="language-powershell">cloudflared tunnel login
</code></pre>
<p>This opens your browser. Select the domain you want to use for the tunnel (you'll need a domain in your Cloudflare account).</p>
<h3>Creating the tunnel</h3>
<p>Create a named tunnel:</p>
<pre><code class="language-powershell">cloudflared tunnel create ollama-tunnel
</code></pre>
<p>Note the UUID that gets printed out. You'll need it in the config file.</p>
<h3>Configuring the tunnel</h3>
<p>Create a config file at <code>C:\Users\&#x3C;username>\.cloudflared\config.yml</code>:</p>
<pre><code class="language-yaml">tunnel: &#x3C;TUNNEL_UUID>
credentials-file: C:\Users\&#x3C;username>\.cloudflared\&#x3C;TUNNEL_UUID>.json

ingress:
  - hostname: ollama.yourdomain.com
    service: http://localhost:11434
  - service: http_status:404
</code></pre>
<p>Replace <code>&#x3C;TUNNEL_UUID></code> with the UUID from the previous step, and <code>ollama.yourdomain.com</code> with your actual domain.</p>
<p>The ingress rules tell cloudflared where to route requests. Anything hitting <code>ollama.yourdomain.com</code> gets forwarded to <code>localhost:11434</code> (where Ollama is listening).</p>
<h3>Adding DNS</h3>
<p>This creates the DNS record that points your domain to the tunnel:</p>
<pre><code class="language-powershell">cloudflared tunnel route dns ollama-tunnel ollama.yourdomain.com
</code></pre>
<h3>Running the tunnel</h3>
<p>Test it first:</p>
<pre><code class="language-powershell">cloudflared tunnel run ollama-tunnel
</code></pre>
<p>You can also <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/local-management/as-a-service/windows/">install it as a Windows service</a> so it starts automatically.</p>
<p>There you go, your very own tunnel to the outside world.</p>
<h2>Part 2: Cloudflare Access setup</h2>
<p>Cloudflare Tunnel creates a secure connection between Cloudflare's edge and your PC, but by default anyone who knows the URL can use it. <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/secure-with-access/">Cloudflare Access</a> adds authentication.</p>
<h3>Generating a service token</h3>
<p>Since opencode uses API calls, we need a token that can authenticate without a browser.</p>
<p>Go to <code>Access</code> → <code>Service Auth</code> → <code>Service Tokens</code>, then click <strong>Create Service Token</strong>.</p>
<p>Name it something like <code>opencode-laptop</code>.</p>
<p>You'll get two values: <code>CF-Access-Client-Id</code> and <code>CF-Access-Client-Secret</code>. Copy both and keep them somewhere safe. You'll need them on your laptop.</p>
<p>These tokens provide full API access, so treat them like passwords. If they leak, rotate them immediately.</p>
<h3>Creating an Access application</h3>
<p>Go to your <a href="https://dash.cloudflare.com/">Cloudflare Dashboard</a> → Zero Trust → Access controls → Applications, then click "Add an application" and choose "Self-hosted."</p>
<pre><code class="language-yaml">Application name:    Ollama API
Session duration:    24 hours (or whatever you prefer)
Application domain:  ollama.yourdomain.com
</code></pre>
<h3>Creating the right policy</h3>
<p>Next we need to define who can access our tunnel.</p>
<p>When you create an access policy, there are different actions you can choose. For browser-based access, you'd use "Allow" and authenticate with email or SSO.</p>
<pre><code class="language-yaml">Policy name:         Service Token Access
Action:              Service Auth (not "Allow")
Include rule:
 - Selector:         Service Token
 - Value:            Select your service token (the one you made earlier)
</code></pre>
<p>Without the Service Auth action, API requests get redirected to a browser login page and fail. This took me longer to figure out than I'd like to admit.</p>
<h2>Part 3: Laptop setup</h2>
<p>Now we configure opencode to use the remote Ollama instance.</p>
<h3>Installing opencode</h3>
<p>On macOS:</p>
<pre><code class="language-bash">brew install anomalyco/tap/opencode
</code></pre>
<h3>Configuring opencode</h3>
<p>Create or edit <code>~/.config/opencode/opencode.json</code>:</p>
<pre><code class="language-json">{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama-remote": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (Remote)",
      "options": {
        "baseURL": "https://ollama.yourdomain.com/v1",
        "headers": {
          "CF-Access-Client-Id": "&#x3C;YOUR_CLIENT_ID>",
          "CF-Access-Client-Secret": "&#x3C;YOUR_CLIENT_SECRET>"
        }
      },
      "models": {
        "qwen3-coder": {
          "name": "Qwen 3 Coder 30B"
        }
      }
    }
  }
}
</code></pre>
<p>Replace the placeholder values with your actual domain and service token credentials.</p>
<p>The <code>baseURL</code> uses <code>/v1</code> because Ollama exposes an OpenAI-compatible API at that path. The headers include your Cloudflare Access credentials, which get sent with every request.</p>
<h3>Testing it</h3>
<p>First, test the API directly:</p>
<pre><code class="language-bash">curl -X POST https://ollama.yourdomain.com/api/generate \
  -H "CF-Access-Client-Id: &#x3C;YOUR_CLIENT_ID>" \
  -H "CF-Access-Client-Secret: &#x3C;YOUR_CLIENT_SECRET>" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-coder", "prompt": "Say hello", "stream": false}'
</code></pre>
<p>If you get a JSON response with generated text, it's working.</p>
<p>Then try opencode:</p>
<pre><code class="language-bash">opencode
</code></pre>
<p>You should see your remote model available as an option.</p>
<h2>Troubleshooting</h2>
<p>Here are the issues I ran into and how I fixed them.</p>
<h3>Tool calls failing</h3>
<p>If the model generates responses but tool calls don't work, check your context window. Tool calling needs at least 16K context. Make sure your Modelfile has <code>PARAMETER num_ctx 16384</code> or higher.</p>
<p>Also double-check you're using Qwen3, not Qwen 2.5 Coder. The latter has known issues with tool calling in Ollama.</p>
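<p>If you haven't set <code>num_ctx</code> anywhere, you can bake it into a Modelfile. A minimal sketch (the model name matches the earlier pull; the path and tag are illustrative):</p>
<pre><code class="language-bash"># Write a Modelfile that bumps the context window
printf 'FROM qwen3-coder:30b\nPARAMETER num_ctx 16384\n' > /tmp/Modelfile
cat /tmp/Modelfile
</code></pre>
<p>Then create a tagged variant with <code>ollama create qwen3-coder-16k -f /tmp/Modelfile</code> and point opencode at that.</p>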
<h3>Connection refused</h3>
<p>If you can't connect at all, verify the <code>OLLAMA_HOST</code> environment variable is set to <code>0.0.0.0:11434</code>. Then restart the Ollama service.</p>
<p>Check the tunnel is running:</p>
<pre><code class="language-powershell">cloudflared tunnel info ollama-tunnel
</code></pre>
<h3>401/403 errors or redirects</h3>
<p>If you're getting authentication errors, verify your service token headers are correct in the opencode config.</p>
<p>Also double-check that your Access policy uses "Service Auth" action, not "Allow." This was my mistake - I had created an email-based policy initially, which doesn't work for API requests.</p>
<h3>Model crashes with memory errors</h3>
<p>If you see "memory layout cannot be allocated" errors, reduce the context size in your Modelfile. Try 8192 instead of 16384.</p>
<p>You can also check VRAM usage with:</p>
<pre><code class="language-bash">nvidia-smi
</code></pre>
<p>If you're maxing out, consider using a smaller model or a more aggressive quantization.</p>
<h3>Slow responses</h3>
<p>This is normal for large models. The 30B model takes a few seconds to start generating, especially on the first request after loading the model into VRAM.</p>
<p>I typically see around 70 tokens per second with the Q4 quantization on a 4090, which is fast enough for coding.</p>
<h2>Security considerations</h2>
<p>A few things to keep in mind.</p>
<p>Service tokens bypass browser authentication. If someone gets your tokens, they have full access to your Ollama instance. Don't commit them to git, don't share screenshots with them visible, and rotate them if you suspect they've leaked.</p>
<p>The tunnel credentials (the JSON file in <code>.cloudflared</code>) should also stay private. They're what allow cloudflared to connect to your specific tunnel.</p>
<p>If your laptop has a static IP, you can add IP restrictions in Cloudflare Access for an extra layer of security.</p>
<h2>What's next</h2>
<p>This setup has been surprisingly reliable. The tunnel can reconnect automatically if my home internet drops, and Cloudflare's edge handles the authentication before anything reaches my PC.</p>
<p>Latency is good - usually under 100ms - which is barely noticeable compared to the time the model takes to generate responses.</p>
<p>The same pattern works for other services too. I've used Cloudflare Tunnel to expose Jupyter notebooks, local web servers, and various development tools. It's become my default way of accessing home-hosted services remotely.</p>
<p>Now go put that gaming GPU to work.</p>]]></content:encoded>
    </item>
    <item>
      <title>Building an MCP weather server on Cloudflare Workers</title>
      <description>A complete guide to building and deploying a Model Context Protocol server that gives Claude weather superpowers.</description>
      <link>https://code.charliegleason.com/building-mcp-weather-cloudflare</link>
      <guid isPermaLink="true">https://code.charliegleason.com/building-mcp-weather-cloudflare</guid>
      <pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>I wanted to give Claude the ability to check the weather. Not by scraping websites or parsing HTML, but properly - with actual tools that return structured data. The <a href="https://modelcontextprotocol.io/">Model Context Protocol (MCP)</a> makes this possible, and <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a> turned out to be the perfect place to host it.</p>
<p>MCP is Anthropic's open protocol for connecting AI assistants to external data sources and tools. Instead of Claude having a fixed set of capabilities, you can extend it with custom servers that provide new tools, resources, and prompts. Think of it as a plugin system, but standardized and interoperable.</p>
<p>This is a walkthrough of building a weather MCP server from scratch, getting it running on Cloudflare's edge network, and hooking it up to Claude Desktop so you can ask about weather anywhere in the world.</p>
<h2>Try it now</h2>
<p>Want to see it in action before building your own? I've deployed a working version you can use right away. Add this to your Claude Desktop config:</p>
<pre><code class="language-json">{
  "mcpServers": {
    "weather": {
      "url": "https://weather-mcp-server.superhighfives.workers.dev/mcp"
    }
  }
}
</code></pre>
<p>The full source code is available at <a href="https://github.com/superhighfives/weather-mcp">github.com/superhighfives/weather-mcp</a>.</p>
<h2>What we're building</h2>
<p>A simple weather MCP server that can be used to get the current weather and forecast for any city in the world. Our server will provide two tools:</p>
<ul>
<li><code>get-current-weather</code> - Current conditions for any city</li>
<li><code>get-forecast</code> - Multi-day forecasts (1-7 days)</li>
</ul>
<p>We'll use the <a href="https://open-meteo.com/">Open-Meteo API</a> for weather data (it's free and doesn't require authentication), TypeScript for type safety, and <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a> for deployment. The whole thing will run on the edge with sub-100ms response times globally.</p>
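<p>Under the hood, the two tools will lean on two Open-Meteo endpoints: one to geocode a city name, and one to fetch the forecast for its coordinates. Sketching the URLs the server will build (the city and coordinates are just examples):</p>
<pre><code class="language-bash">city="Melbourne"
geo="https://geocoding-api.open-meteo.com/v1/search?name=$city"
forecast="https://api.open-meteo.com/v1/forecast?latitude=-37.81&longitude=144.96&current_weather=true"
echo "$geo"
echo "$forecast"
</code></pre>
<p>You can <code>curl</code> either of those directly to see the JSON shapes the tools will work with.</p>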
<h2>Prerequisites</h2>
<p>Before starting, make sure you have:</p>
<ul>
<li><strong>Node.js 18 or later</strong> - The MCP SDK and Wrangler require modern Node.js features</li>
<li><strong>npm or yarn</strong> - For installing dependencies</li>
<li><strong>A Cloudflare account</strong> - Free tier is sufficient (<a href="https://dash.cloudflare.com/sign-up">sign up here</a>)</li>
<li><strong>Claude Desktop</strong> - To test your MCP server (<a href="https://claude.ai/download">download here</a>)</li>
</ul>
<p>You can verify your Node.js version with:</p>
<pre><code class="language-bash">node --version
</code></pre>
<p>If you're on an older version, update Node.js before proceeding.</p>
<h2>Setting up the project</h2>
<p>Start by creating a new directory and initializing a Node.js project:</p>
<pre><code class="language-bash">mkdir weather-mcp-server
cd weather-mcp-server
npx wrangler init
</code></pre>
<p><a href="https://developers.cloudflare.com/workers/wrangler/">Wrangler</a> is Cloudflare's CLI for deploying Workers. It will ask you to pick a directory and then walk you through a few setup questions: choose the <code>Hello World example</code>, then <code>Worker only</code>, with the language set to <code>TypeScript</code>.</p>
<p>Maybe grab a coffee while you wait.</p>
<hr>
<p>Update the <code>wrangler.jsonc</code> file with <code>"compatibility_flags": ["nodejs_compat"]</code>. The <code>nodejs_compat</code> flag enables Node.js APIs in the Workers runtime.</p>
<pre><code class="language-json">{
	"$schema": "node_modules/wrangler/config-schema.json",
	"name": "weather-mcp-example",
	"main": "src/index.ts",
	"compatibility_date": "2025-11-06",
	"compatibility_flags": ["nodejs_compat"],
	"observability": {
		"enabled": true
	}
}
</code></pre>
<p>Once that's done, install your dependencies:</p>
<pre><code class="language-bash">npm install @modelcontextprotocol/sdk @types/node agents typescript zod
</code></pre>
<p>Here's what each package does:</p>
<ul>
<li><code>@modelcontextprotocol/sdk</code> - The official MCP SDK for building servers</li>
<li><code>@types/node</code> - TypeScript type definitions for Node.js</li>
<li><code>agents</code> - Cloudflare's MCP framework that handles HTTP transport</li>
<li><code>typescript</code> - Type safety and compilation</li>
<li><code>zod</code> - Runtime schema validation for tool inputs</li>
</ul>
<h2>Building the MCP server</h2>
<p>Jump into <code>src/index.ts</code> and delete whatever code is there. You can then start with the imports:</p>
<pre><code class="language-ts">import { createMcpHandler } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
</code></pre>
<p>The MCP SDK provides the core server functionality, <code>agents/mcp</code> handles the HTTP transport for Cloudflare Workers, and Zod lets us define type-safe schemas for tool inputs.</p>
<h3>Weather codes and helper functions</h3>
<p>We'll need to add some types for the API responses. The geocoding and weather endpoints return data with the following structures:</p>
<pre><code class="language-ts">interface GeocodingResponse {
  results?: Array&#x3C;{
    latitude: number;
    longitude: number;
    name: string;
  }>;
}

interface WeatherResponse {
  current: {
    temperature_2m: number;
    relative_humidity_2m: number;
    weathercode: number;
    windspeed_10m: number;
    winddirection_10m: number;
  };
}

interface ForecastResponse {
  daily: {
    time: string[];
    temperature_2m_max: number[];
    temperature_2m_min: number[];
    weathercode: number[];
    precipitation_probability_max: number[];
  };
}
</code></pre>
<p>Open-Meteo returns numeric weather codes (part of the WMO standard), so we need to map them to readable descriptions:</p>
<pre><code class="language-ts">const WEATHER_CODES: Record&#x3C;number, string> = {
  0: "Clear sky",
  1: "Mainly clear",
  2: "Partly cloudy",
  3: "Overcast",
  45: "Foggy",
  48: "Depositing rime fog",
  51: "Light drizzle",
  53: "Moderate drizzle",
  55: "Dense drizzle",
  61: "Slight rain",
  63: "Moderate rain",
  65: "Heavy rain",
  71: "Slight snow",
  73: "Moderate snow",
  75: "Heavy snow",
  77: "Snow grains",
  80: "Slight rain showers",
  81: "Moderate rain showers",
  82: "Violent rain showers",
  85: "Slight snow showers",
  86: "Heavy snow showers",
  95: "Thunderstorm",
  96: "Thunderstorm with slight hail",
  99: "Thunderstorm with heavy hail",
};

function getWeatherDescription(code: number): string {
  return WEATHER_CODES[code] || "Unknown";
}
</code></pre>
<h3>Geocoding cities to coordinates</h3>
<p>Open-Meteo needs latitude and longitude, so we'll use their geocoding API to convert city names:</p>
<pre><code class="language-ts">async function geocodeCity(cityName: string): Promise&#x3C;{
  latitude: number;
  longitude: number
}> {
  const url = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(
    cityName
  )}&#x26;count=1&#x26;language=en&#x26;format=json`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Geocoding API error: ${response.statusText}`);
  }

  const data = await response.json() as GeocodingResponse;

  if (!data.results || data.results.length === 0) {
    throw new Error(`City "${cityName}" not found`);
  }

  const result = data.results[0];
  return {
    latitude: result.latitude,
    longitude: result.longitude,
  };
}
</code></pre>
<p>This searches for the city name and returns the first match. In a production app you might want to handle multiple matches or be more specific about which city you mean (there are multiple Portlands, for example).</p>
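<p>If you'd rather surface the ambiguity than silently take the first match, the geocoding API's <code>count</code> parameter returns multiple candidates, and each result includes <code>country</code> and <code>admin1</code> (region) fields. Here's a sketch of formatting those candidates so a tool could ask the user to disambiguate - the <code>formatCandidates</code> helper is illustrative, not part of the tutorial code:</p>
<pre><code class="language-ts">interface GeoCandidate {
  name: string;
  latitude: number;
  longitude: number;
  country?: string;
  admin1?: string; // state or region, when the API provides one
}

// Turn a list of geocoding results into a numbered, human-readable list
// that a tool handler could return when a city name is ambiguous.
function formatCandidates(candidates: GeoCandidate[]): string {
  return candidates
    .map((c, i) => {
      const region = [c.admin1, c.country].filter(Boolean).join(", ");
      return `${i + 1}. ${c.name}${region ? ` (${region})` : ""}`;
    })
    .join("\n");
}

// Example: the two best-known Portlands
const portlands: GeoCandidate[] = [
  { name: "Portland", latitude: 45.52, longitude: -122.68, admin1: "Oregon", country: "United States" },
  { name: "Portland", latitude: 43.66, longitude: -70.26, admin1: "Maine", country: "United States" },
];
</code></pre>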
<h3>Fetching current weather</h3>
<p>Now we can fetch the actual weather data:</p>
<pre><code class="language-ts">async function getCurrentWeather(latitude: number, longitude: number) {
  const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&#x26;longitude=${longitude}&#x26;current=temperature_2m,relative_humidity_2m,weathercode,windspeed_10m,winddirection_10m`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Weather API error: ${response.statusText}`);
  }

  const data = await response.json() as WeatherResponse;
  const current = data.current;

  return {
    temperature: current.temperature_2m,
    weathercode: current.weathercode,
    windspeed: current.windspeed_10m,
    winddirection: current.winddirection_10m,
    humidity: current.relative_humidity_2m,
  };
}
</code></pre>
<p>The <code>current</code> parameter in the URL specifies which fields we want. Open-Meteo has dozens of options - precipitation, UV index, visibility, etc. We're keeping it simple with temperature, humidity, wind, and conditions.</p>
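<p>As you add more fields, hand-building the query string gets unwieldy. <code>URLSearchParams</code> keeps it readable - a sketch of the same URL built programmatically (it percent-encodes the commas in the field list, which decodes back to the same value server-side):</p>
<pre><code class="language-ts">function buildCurrentWeatherUrl(latitude: number, longitude: number): string {
  // The same fields as the hand-written URL above, joined into the
  // comma-separated list the `current` parameter expects.
  const params = new URLSearchParams({
    latitude: String(latitude),
    longitude: String(longitude),
    current: [
      "temperature_2m",
      "relative_humidity_2m",
      "weathercode",
      "windspeed_10m",
      "winddirection_10m",
    ].join(","),
  });
  return `https://api.open-meteo.com/v1/forecast?${params.toString()}`;
}
</code></pre>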
<h3>Fetching forecasts</h3>
<p>For multi-day forecasts, we need a slightly different endpoint:</p>
<pre><code class="language-ts">async function getWeatherForecast(
  latitude: number,
  longitude: number,
  days: number
) {
  const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&#x26;longitude=${longitude}&#x26;daily=weathercode,temperature_2m_max,temperature_2m_min,precipitation_probability_max&#x26;timezone=auto&#x26;forecast_days=${days}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Weather API error: ${response.statusText}`);
  }

  const data = await response.json() as ForecastResponse;
  const daily = data.daily;

  const forecast = [];
  for (let i = 0; i &#x3C; daily.time.length; i++) {
    forecast.push({
      date: daily.time[i],
      temperature_max: daily.temperature_2m_max[i],
      temperature_min: daily.temperature_2m_min[i],
      weathercode: daily.weathercode[i],
      precipitation_probability: daily.precipitation_probability_max[i],
    });
  }

  return forecast;
}
</code></pre>
<p>The daily endpoint returns arrays of values, one for each day. We're transforming this into an array of objects to make it easier to work with.</p>
<h3>Creating the MCP server</h3>
<p>Now we can create our MCP server instance:</p>
<pre><code class="language-ts">const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});
</code></pre>
<h3>Adding the current weather tool</h3>
<p>Tools in MCP are defined with a name, description, input schema (using Zod), and a handler function:</p>
<pre><code class="language-ts">server.tool(
  "get-current-weather",
  "Get the current weather for a specific city. Returns temperature, conditions, humidity, and wind information.",
  {
    city: z.string().describe("Name of the city (e.g., 'London', 'New York', 'Tokyo')"),
  },
  async ({ city }) => {
    try {
      const coords = await geocodeCity(city);
      const weather = await getCurrentWeather(coords.latitude, coords.longitude);

      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              {
                city,
                coordinates: coords,
                current: {
                  temperature: `${weather.temperature}°C`,
                  conditions: getWeatherDescription(weather.weathercode),
                  humidity: `${weather.humidity}%`,
                  wind: {
                    speed: `${weather.windspeed} km/h`,
                    direction: `${weather.winddirection}°`,
                  },
                },
              },
              null,
              2
            ),
          },
        ],
      };
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : String(error);
      return {
        content: [
          {
            type: "text",
            text: `Error: ${errorMessage}`,
          },
        ],
        isError: true,
      };
    }
  }
);
</code></pre>
<p>The handler calls our geocoding and weather functions, then formats the response as JSON. The <code>content</code> array can contain multiple items - text, images, or other content types. We're just using text here.</p>
<p>Error handling is important. If the city isn't found or the API is down, we return an error response instead of throwing, which gives Claude a helpful message to show the user.</p>
<h3>Adding the forecast tool</h3>
<p>The forecast tool is similar but takes an additional parameter for the number of days:</p>
<pre><code class="language-ts">server.tool(
  "get-forecast",
  "Get weather forecast for a specific city. Returns daily forecasts with high/low temperatures and conditions.",
  {
    city: z.string().describe("Name of the city (e.g., 'London', 'New York', 'Tokyo')"),
    days: z
      .number()
      .min(1)
      .max(7)
      .describe("Number of days to forecast (1-7)")
      .default(3),
  },
  async ({ city, days }) => {
    try {
      const coords = await geocodeCity(city);
      const forecast = await getWeatherForecast(
        coords.latitude,
        coords.longitude,
        Math.min(Math.max(days, 1), 7)
      );

      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              {
                city,
                coordinates: coords,
                forecast: forecast.map((day) => ({
                  date: day.date,
                  temperature: {
                    max: `${day.temperature_max}°C`,
                    min: `${day.temperature_min}°C`,
                  },
                  conditions: getWeatherDescription(day.weathercode),
                  precipitation_probability: `${day.precipitation_probability}%`,
                })),
              },
              null,
              2
            ),
          },
        ],
      };
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : String(error);
      return {
        content: [
          {
            type: "text",
            text: `Error: ${errorMessage}`,
          },
        ],
        isError: true,
      };
    }
  }
);
</code></pre>
<p>Zod validates the input automatically - a request outside the 1-7 range is rejected before the handler runs - and the <code>Math.min(Math.max(...))</code> clamp inside the handler is an extra guard on top. If the user doesn't specify a number of days, it defaults to 3.</p>
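<p>Both tools end up with identical catch blocks, so it's worth factoring the error shape into a helper - a refactor suggestion, not something the tutorial code does:</p>
<pre><code class="language-ts">// Build the MCP error payload that both tool handlers return from
// their catch blocks: a single text item, flagged with isError.
function errorResult(error: unknown) {
  const errorMessage = error instanceof Error ? error.message : String(error);
  return {
    content: [{ type: "text" as const, text: `Error: ${errorMessage}` }],
    isError: true,
  };
}
</code></pre>
<p>Each handler's catch block then collapses to <code>return errorResult(error)</code>.</p>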
<h3>Adding a resource</h3>
<p>Resources in MCP are static content that the server can provide. They're useful for documentation or contextual information:</p>
<pre><code class="language-ts">server.resource(
  "weather-info",
  "weather://info",
  {
    name: "Weather Service Information",
    description: "Information about the weather service and API",
    mimeType: "text/plain",
  },
  async () => ({
    contents: [
      {
        uri: "weather://info",
        mimeType: "text/plain",
        text: `Weather MCP Server (Cloudflare Workers Edition)

This server provides weather information using the Open-Meteo API.

Available Tools:
1. get-current-weather - Get current weather for any city
2. get-forecast - Get weather forecast for 1-7 days

Data Source: Open-Meteo (https://open-meteo.com/)
- Free weather API with no authentication required
- Provides current conditions and forecasts
- Data updated regularly from meteorological services

Example Usage:
- "What's the weather in London?"
- "Get me a 5-day forecast for Tokyo"
- "What's the current temperature in New York?"

Deployment: Cloudflare Workers (Global Edge Network)
- Low latency worldwide
- Automatic scaling
- Highly available`,
      },
    ],
  })
);
</code></pre>
<p>Claude can read this resource to understand what the server does and how to use it. It's optional but helpful for more complex servers.</p>
<h2>Cloudflare Workers integration</h2>
<p>Now we need to make this work as a Cloudflare Worker. The <code>agents</code> package provides a helper that wraps our MCP server with HTTP transport:</p>
<pre><code class="language-ts">const mcpHandler = createMcpHandler(server, {
  route: "/mcp",
});
</code></pre>
<p>This creates a request handler that speaks the MCP protocol over HTTP. Claude will POST to this endpoint with JSON-RPC requests.</p>
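<p>If you're curious what's actually on the wire, MCP over HTTP is JSON-RPC 2.0. A tool invocation from Claude looks roughly like this - the envelope below follows the MCP <code>tools/call</code> convention, but check the spec for the authoritative shape:</p>
<pre><code class="language-ts">// A JSON-RPC 2.0 envelope like the ones Claude POSTs to /mcp.
const rpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get-current-weather",
    arguments: { city: "London" },
  },
};

const body = JSON.stringify(rpcRequest);
</code></pre>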
<p>Finally, export the Worker fetch handler:</p>
<pre><code class="language-ts">export default {
  async fetch(request: Request, env: any, ctx: any): Promise&#x3C;Response> {
    const url = new URL(request.url);

    // Handle MCP requests
    if (url.pathname === "/mcp") {
      return mcpHandler(request, env, ctx);
    }

    // Health check endpoint
    if (url.pathname === "/health") {
      return new Response(
        JSON.stringify({
          status: "healthy",
          server: "weather-mcp-server",
          version: "1.0.0",
          endpoint: "/mcp",
        }),
        {
          headers: { "Content-Type": "application/json" },
        }
      );
    }

    // Root endpoint with information
    if (url.pathname === "/") {
      return new Response(
        `Weather MCP Server - Cloudflare Workers

Available Endpoints:
  POST /mcp     - MCP endpoint
  GET  /health  - Health check

To connect with Claude Desktop or other MCP clients, use the /mcp endpoint.

Example configuration:
{
  "mcpServers": {
    "weather": {
      "url": "https://your-worker.your-subdomain.workers.dev/mcp"
    }
  }
}

Documentation: https://modelcontextprotocol.io/
Source: https://github.com/cloudflare/agents
`,
        {
          headers: { "Content-Type": "text/plain" },
        }
      );
    }

    return new Response("Not Found", { status: 404 });
  },
};
</code></pre>
<p>We're setting up three routes:</p>
<ul>
<li><code>/mcp</code> - The actual MCP endpoint</li>
<li><code>/health</code> - Health check for monitoring</li>
<li><code>/</code> - Documentation and usage info</li>
</ul>
<p>This makes it easy to verify the server is running and see how to connect to it.</p>
<h2>Deploying to Cloudflare Workers</h2>
<p>If you haven't already, authenticate with Cloudflare:</p>
<pre><code class="language-bash">wrangler login
</code></pre>
<p>This will open your browser and ask you to log in. Once you're done, deploy:</p>
<pre><code class="language-bash">npm run deploy
</code></pre>
<p>Wrangler will bundle your code and push it to Cloudflare's edge network. You'll get a URL like <code>https://weather-mcp-server.your-subdomain.workers.dev</code>.</p>
<p>Visit that URL in your browser and you should see the documentation page. Try <code>https://your-worker-url/health</code> to verify it's working.</p>
<h2>Using with Claude Desktop</h2>
<p>Now for the fun part - connecting it to Claude.</p>
<p>On macOS, Claude Desktop's config file is at:</p>
<pre><code>~/Library/Application Support/Claude/claude_desktop_config.json
</code></pre>
<p>On Windows, it's at:</p>
<pre><code>%APPDATA%\Claude\claude_desktop_config.json
</code></pre>
<p>Open that file (create it if it doesn't exist) and add your weather server:</p>
<pre><code class="language-json">{
  "mcpServers": {
    "weather": {
      "url": "https://weather-mcp-server.your-subdomain.workers.dev/mcp"
    }
  }
}
</code></pre>
<p>Replace <code>your-subdomain</code> with your actual Cloudflare Workers subdomain.</p>
<p>Restart Claude Desktop. You should see a little hammer icon indicating tools are available. Now you can ask:</p>
<ul>
<li>"What's the weather like in Tokyo?"</li>
<li>"Give me a 5-day forecast for London"</li>
<li>"What's the current temperature in San Francisco?"</li>
</ul>
<p>Claude will call your weather tools, get the data, and format it into a natural response. You've just extended Claude with live weather data, running on Cloudflare's global network.</p>
<h2>Why Cloudflare Workers</h2>
<p>You might wonder why we're using Workers instead of running this locally. A few reasons:</p>
<p><strong>Always available</strong> - Your MCP server is up 24/7, not just when your laptop is on. This matters if you're using Claude Desktop across multiple machines or sharing the server with others.</p>
<p><strong>Global edge network</strong> - Cloudflare runs in 200+ locations worldwide. Requests are handled by the nearest datacenter, so weather queries are fast no matter where you are.</p>
<p><strong>Automatic scaling</strong> - If you build something popular (or give the URL to your team), it'll handle the traffic without you doing anything. Workers scale from zero to millions of requests automatically.</p>
<p><strong>Free tier is generous</strong> - 100,000 requests per day free. For a personal MCP server, that's effectively unlimited.</p>
<p><strong>Stateless and simple</strong> - No databases to manage, no servers to patch, no Docker containers to maintain. It's just code that runs when needed.</p>
<h2>Addendum: Running locally</h2>
<p>For development, you can run the server locally with Wrangler's dev mode:</p>
<pre><code class="language-bash">npm run dev
</code></pre>
<p>This starts a local server at <code>http://localhost:8787</code>. You can test the endpoints:</p>
<pre><code class="language-bash">curl http://localhost:8787/health
</code></pre>
<p>To use it with Claude Desktop, update your config to point to localhost:</p>
<pre><code class="language-json">{
  "mcpServers": {
    "weather": {
      "url": "http://localhost:8787/mcp"
    }
  }
}
</code></pre>
<p>Restart Claude Desktop and it'll connect to your local server instead. This is useful for testing changes before deploying.</p>
<p>The nice thing about MCP over HTTP is you can switch between local and deployed just by changing the URL. When you're ready to go live, deploy to Workers and update the config. No code changes needed.</p>
<h2>What's next</h2>
<p>This is a basic weather server, but it shows the pattern for building MCP servers on Cloudflare Workers. You could extend it with:</p>
<ul>
<li>More weather data (UV index, air quality, hourly forecasts)</li>
<li>Multiple weather sources (compare forecasts from different APIs)</li>
<li>Historical weather data</li>
<li>Weather alerts and warnings</li>
<li>Caching to reduce API calls</li>
</ul>
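<p>Caching is a natural first extension. As a sketch of the idea, here's a tiny in-memory TTL cache - note that a Worker isolate's memory is ephemeral, so this only deduplicates requests within one warm isolate; for cross-request edge caching you'd reach for the Workers Cache API (<code>caches.default</code>) instead:</p>
<pre><code class="language-ts">// Map of request URL -> cached response body and expiry timestamp.
const weatherCache = new Map&#x3C;string, { body: string; expires: number }>();
const TTL_MS = 10 * 60 * 1000; // serve cached weather for up to 10 minutes

function getCached(url: string, now: number = Date.now()): string | undefined {
  const entry = weatherCache.get(url);
  if (entry && entry.expires > now) return entry.body;
  weatherCache.delete(url); // expired or missing: clean up
  return undefined;
}

function setCached(url: string, body: string, now: number = Date.now()): void {
  weatherCache.set(url, { body, expires: now + TTL_MS });
}
</code></pre>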
<p>The same pattern works for other kinds of data too. Want to give Claude access to your company's internal APIs? Want to build tools that read from databases or call external services? MCP servers on Workers are a solid foundation.</p>
<p>The combination of MCP's protocol, Cloudflare's <code>agents</code> package, and Workers' edge deployment makes it surprisingly straightforward to extend Claude with whatever capabilities you need. And unlike browser extensions or API wrappers, MCP servers work across all MCP clients - not just Claude Desktop.</p>
<p>Now go build something useful and tell Claude what the weather's like.</p>]]></content:encoded>
    </item>
    <item>
      <title>Getting started with Cloudflare Workers for image generation</title>
      <description>A quick guide to building dynamic Open Graph images on the edge with Cloudflare Workers.</description>
      <link>https://code.charliegleason.com/getting-started-cloudflare-workers-image-generation</link>
      <guid isPermaLink="true">https://code.charliegleason.com/getting-started-cloudflare-workers-image-generation</guid>
      <pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>I've been working on generating dynamic social card images for this site, and Cloudflare Workers turned out to be perfect for the job. They run on the edge, they're fast, and they don't require spinning up a browser with Puppeteer just to render some text on a colored background.</p>
<p>The <a href="https://github.com/kvnang/workers-og">workers-og</a> package makes this surprisingly straightforward. It's inspired by Vercel's <code>@vercel/og</code> but designed specifically for Cloudflare Workers, handling WASM bundling differently so it actually works on Cloudflare's edge runtime.</p>
<h2>What you'll need</h2>
<p>Before you start, make sure you have Node.js and npm installed. That's pretty much it.</p>
<h2>Setting up Wrangler</h2>
<p>Wrangler is Cloudflare's command-line tool for working with Workers. Install it globally:</p>
<pre><code class="language-bash">npm install -g wrangler
</code></pre>
<p>Then authenticate with your Cloudflare account:</p>
<pre><code class="language-bash">wrangler login
</code></pre>
<p>This will open your browser and ask you to log in. Once you're done, you're ready to create a Worker.</p>
<h2>Creating your Worker</h2>
<p>Create a new Worker project:</p>
<pre><code class="language-bash">wrangler init my-og-image-worker
</code></pre>
<p>This will walk you through a few setup questions.</p>
<p>Choose <code>Hello World</code> > <code>Worker only</code> > <code>TypeScript</code>.</p>
<p>Once that's done, navigate into your project and install workers-og:</p>
<pre><code class="language-bash">cd my-og-image-worker
npm install workers-og
</code></pre>
<h2>Writing the Worker</h2>
<p>Open up your <code>src/index.ts</code> and replace the contents with this:</p>
<pre><code class="language-ts">import { ImageResponse } from 'workers-og'

export default {
  async fetch(request: Request): Promise&#x3C;Response> {
    const url = new URL(request.url)
    const title = url.searchParams.get('title') || 'Hello World'

    const html = `
      &#x3C;div style="display: flex; flex-direction: column; align-items: center; justify-content: center; height: 630px; width: 1200px; font-family: sans-serif; background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);">
        &#x3C;h1 style="font-size: 60px; font-weight: bold; margin: 0; color: white; padding: 40px; text-align: center;">
          ${title}
        &#x3C;/h1>
      &#x3C;/div>
    `

    return new ImageResponse(html, {
      width: 1200,
      height: 630,
    })
  },
}
</code></pre>
<p>That's it. The <code>ImageResponse</code> class takes your HTML and turns it into a PNG image. The dimensions (1200x630) are the standard size for Open Graph images, which is what shows up when you share a link on social media.</p>
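<p>One caveat: the title comes straight from the query string and is interpolated into HTML, so it's worth escaping it first. A minimal helper (my own addition, not part of workers-og):</p>
<pre><code class="language-ts">// Escape the characters that are significant in HTML, so a title like
// ?title=&#x3C;h1>hi&#x3C;/h1> renders as literal text instead of markup.
function escapeHtml(input: string): string {
  return input
    .replace(/&#x26;/g, '&#x26;amp;')
    .replace(/&#x3C;/g, '&#x26;lt;')
    .replace(/>/g, '&#x26;gt;')
    .replace(/"/g, '&#x26;quot;')
    .replace(/'/g, '&#x26;#39;')
}
</code></pre>
<p>Then interpolate <code>${escapeHtml(title)}</code> in the template instead of <code>${title}</code>.</p>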
<h2>Testing locally</h2>
<p>Run the development server:</p>
<pre><code class="language-bash">wrangler dev
</code></pre>
<p>This will start a local server, usually at <code>http://localhost:8787</code>. Open that in your browser and you should see your generated image. Try adding <code>?title=Your Text Here</code> to the URL to see it change.</p>
<p>The HTML you pass to <code>ImageResponse</code> gets parsed using Cloudflare's HTMLRewriter API, so you can use standard HTML and inline styles. It's not a full browser, so keep your styling simple - think flexbox and basic CSS properties.</p>
<h2>Deploying</h2>
<p>When you're ready to deploy, it's just:</p>
<pre><code class="language-bash">wrangler deploy
</code></pre>
<p>Your Worker will be live on Cloudflare's edge network. You'll get a URL like <code>https://my-og-image-worker.YOUR_SUBDOMAIN.workers.dev</code> that you can use anywhere.</p>
<h2>Making it more useful</h2>
<p>The real power comes from making these images dynamic. You can pull in data from query parameters, fetch content from your site, or even grab data from a database.</p>
<p>Here's a slightly fancier example that uses multiple parameters:</p>
<pre><code class="language-ts">import { ImageResponse } from 'workers-og'

export default {
  async fetch(request: Request): Promise&#x3C;Response> {
    const url = new URL(request.url)
    const title = url.searchParams.get('title') || 'Hello World'
    const subtitle = url.searchParams.get('subtitle') || ''
    const theme = url.searchParams.get('theme') || 'purple'

    const gradients = {
      purple: 'linear-gradient(135deg, #667eea 0%, #764ba2 100%)',
      blue: 'linear-gradient(135deg, #0093E9 0%, #80D0C7 100%)',
      orange: 'linear-gradient(135deg, #FA8BFF 0%, #2BD2FF 50%, #2BFF88 100%)',
    }

    const html = `
      &#x3C;div style="display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100%; width: 100%; font-family: sans-serif; background: ${gradients[theme] || gradients.purple}">
        &#x3C;h1 style="font-size: 60px; font-weight: bold; margin: 0; color: white; padding: 40px 80px 20px; text-align: center;">
          ${title}
        &#x3C;/h1>
        ${subtitle ? `&#x3C;p style="font-size: 30px; color: white; opacity: 0.9; margin: 0; padding: 0 80px;">${subtitle}&#x3C;/p>` : ''}
      &#x3C;/div>
    `

    return new ImageResponse(html, {
      width: 1200,
      height: 630,
    })
  },
}
</code></pre>
<p>Now you can customize the output with URLs like:</p>
<ul>
<li><code>?title=My Post&#x26;subtitle=A short description&#x26;theme=blue</code></li>
<li><code>?title=Another Post&#x26;theme=orange</code></li>
</ul>
<h2>A few notes</h2>
<p>The workers-og library uses <a href="https://github.com/vercel/satori">Satori</a> under the hood for rendering, which means it's converting your HTML and CSS into an image without needing a browser. This is what makes it fast enough to run on the edge.</p>
<p>One thing to watch out for - not all CSS properties are supported. Stick to flexbox layouts, basic colors, and standard fonts. If you need custom fonts, you'll need to fetch them and pass them to the <code>ImageResponse</code> options, which I won't get into here but is covered in the workers-og documentation.</p>
<p>Also, because this is running on every request, you might want to think about caching. Cloudflare Workers have built-in caching support, or you could generate images at build time and serve them statically if they don't need to be dynamic.</p>
<h2>Why this is useful</h2>
<p>I use these for automatically generating social card images for blog posts. Instead of manually creating an image in Figma for every post, I can just point to a URL like <code>/og-image?title=Post Title</code> and get a consistent, on-brand image every time.</p>
<p>It's also handy for user-generated content. If you're building something where users create pages or profiles, you can generate unique social cards for each one without storing thousands of image files.</p>
<p>The whole thing runs on Cloudflare's edge network, so it's fast no matter where your visitors are, and you're only charged for the compute time when someone actually requests an image. Good times all round.</p>]]></content:encoded>
    </item>
    <item>
      <title>Adding beautiful shaders to your site with paper-design/shaders</title>
      <description>How to use WebGL on easy mode.</description>
      <link>https://code.charliegleason.com/paper-design-shaders</link>
      <guid isPermaLink="true">https://code.charliegleason.com/paper-design-shaders</guid>
      <pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>I've used shaders, GPU-backed canvases—in a <a href="https://lysterfieldlake.com">bunch of my projects</a>. While I love them, I've always been a bit frustrated by the boilerplate for setting up simple ones. The visual possibilities are endless, but there's a steep learning curve and a bunch of setup complexity. That's why <a href="https://github.com/paper-design/shaders">paper-design/shaders</a> is great. Zero-dependency shaders out of the box.</p>
<h2>Getting it running</h2>
<p>The setup is refreshingly simple. Just grab the package that matches your environment:</p>
<pre><code class="language-bash"># For React projects
npm i @paper-design/shaders-react

# For vanilla JavaScript
npm i @paper-design/shaders
</code></pre>
<h2>Examples that actually work</h2>
<p>Let me show you a couple of ways I've been using Paper Shaders:</p>
<h3>React: Animated mesh gradient</h3>
<p>This one creates a flowing, organic background effect that works great for hero sections.</p>
<p>Here's what that looks like in action:</p>
<pre><code class="language-jsx">import { MeshGradient } from '@paper-design/shaders-react' // ^0.0.55

export default function App() {
  return (
    &#x3C;div className="relative h-full">
      &#x3C;MeshGradient
        colors={['#5100ff', '#00ff80', '#ffcc00', '#ea00ff']}
        distortion={1}
        swirl={0.8}
        speed={0.2}
        height='100%'
      />
      &#x3C;div className="absolute inset-0 flex items-center justify-center">
        👋
      &#x3C;/div>
    &#x3C;/div>
  )
}
</code></pre>
<h3>Vanilla JavaScript: Custom shader effect</h3>
<p>For non-React projects, the vanilla JavaScript approach is just as straightforward:</p>
<pre><code class="language-javascript">import { createShader } from '@paper-design/shaders'

const canvas = document.querySelector('#shader-canvas')
const shader = createShader(canvas, {
  type: 'dithering',
  colors: ['#ff6b6b', '#4ecdc4', '#45b7d1'],
  intensity: 0.5,
  speed: 0.1
})

// Start the animation
shader.start()
</code></pre>
<p>The library comes with various shader effects - mesh gradients, dithering, dot orbit, warp effects, and more. Each one lets you tweak settings through props or configuration objects, so you can dial in exactly the look you're after.</p>
<h2>Why I think you'll like it</h2>
<p>Paper Shaders hits that sweet spot I'm always looking for - powerful enough for real work but simple enough that I can mess around with it on a weekend without getting bogged down in setup. The zero-dependency approach means you're not taking on technical debt, and the performance focus means your effects won't turn someone's phone into a space heater.</p>
<p>If you want to check it out, the <a href="https://shaders.paper.design">documentation and examples at shaders.paper.design</a> are pretty solid, and you can poke around the <a href="https://github.com/paper-design/shaders">GitHub repository</a> to see what's under the hood.</p>]]></content:encoded>
    </item>
    <item>
      <title>Understanding context windows in Claude Code</title>
      <description>Or how to make the most of your tokens.</description>
      <link>https://code.charliegleason.com/understanding-context-windows</link>
      <guid isPermaLink="true">https://code.charliegleason.com/understanding-context-windows</guid>
      <pubDate>Sat, 20 Sep 2025 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>I've been using Claude Code for a while and thought I understood context windows - something about memory limits, and, uh, token usage. When I started actually paying attention to the <code>/context</code> command output, I realized there was a bunch of complexity I'd been glossing over.</p>
<p>So I spent some time digging deeper into what's actually happening under the hood. If you're working with Claude Code, you can run <code>/context</code> to get something like this:</p>
<pre><code class="language-txt">/context
  ⎿  ⛁ ⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛀ ⛀
     ⛀ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   Context Usage
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   claude-sonnet-4-20250514 • 17k/200k tokens (8%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System prompt: 3.2k tokens (1.6%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System tools: 11.6k tokens (5.8%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Custom agents: 69 tokens (0.0%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Memory files: 743 tokens (0.4%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Messages: 1.2k tokens (0.6%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛶ Free space: 183.3k (91.6%)
</code></pre>
<p>But what does all this actually mean? Let's break it down.</p>
<h2>What is a context window?</h2>
<p>A context window is basically the AI's working memory. An AI model can only process a finite amount of information at any given time, and this limit is measured in <strong>tokens</strong> - roughly equivalent to words, though punctuation and code symbols count too.</p>
<p>In the example above, Claude Sonnet 4 has a 200,000 token context window, and we're currently using 17,000 tokens (8% of the total capacity).</p>
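<p>To get a feel for the numbers, here's a sketch of the commonly cited heuristic of roughly four characters per token for English text. This is an approximation only - Claude's actual tokenizer will count differently, especially for code and punctuation-heavy text - and <code>estimateTokens</code> is a made-up helper for illustration, not part of Claude Code:</p>
<pre><code class="language-ts">// Rough heuristic: ~4 characters per token for English text.
// The real tokenizer differs, especially for code and symbols.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

const contextWindow = 200_000
const prompt = 'Fix the authentication bug in UserService.ts'
const used = estimateTokens(prompt)

console.log(`~${used} tokens, ${((used / contextWindow) * 100).toFixed(2)}% of the window`)
</code></pre>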
<h2>What's taking up space?</h2>
<p>The context breakdown shows exactly where those tokens are going, using our example from above:</p>
<h3>System Prompt (3.2k tokens)</h3>
<p>This is Claude Code's "personality" - the instructions that tell Claude how to behave, what tools it has access to, and how to help with coding tasks. Think of it as the foundational rules that define how Claude operates.</p>
<h3>System Tools (11.6k tokens)</h3>
<p>These are all the functions Claude can use - reading files, running bash commands, searching through code, making git commits, and more. Each tool has a detailed internal specification describing what it does and how Claude can use it properly.</p>
<h3>Custom Agents (69 tokens)</h3>
<p>These are specialized sub-agents that can be launched for specific tasks like code review or complex multi-step operations. In this case, there aren't many loaded, hence the small token count. You can <a href="https://docs.claude.com/en/docs/claude-code/sub-agents">create your own custom agents</a> by adding them to <code>.claude/agents/agent-name.md</code>.</p>
<h3>Memory Files (743 tokens)</h3>
<p>This includes project-specific instructions like your <code>CLAUDE.md</code> file, which tells Claude about your project structure, development commands, and coding conventions. It's how Claude knows to run <code>npm run dev</code> instead of trying random commands and hoping for the best.</p>
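<p>As a purely illustrative example (none of these project details are real), a minimal <code>CLAUDE.md</code> might look something like this:</p>
<pre><code class="language-txt"># CLAUDE.md

## Commands
- npm run dev: start the dev server
- npm run test: run the test suite

## Conventions
- TypeScript, strict mode
- Components live in app/components, routes in app/routes
</code></pre>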
<h3>Messages (1.2k tokens)</h3>
<p>The actual conversation - your questions, Claude's responses, and any code or file contents that have been discussed.</p>
<h2>Why context windows matter</h2>
<p>Once you understand how context works, a lot of Claude's behavior starts making sense:</p>
<h3>Why Claude sometimes uses agents</h3>
<p>When searching through large codebases, Claude might launch a specialized search agent rather than loading dozens of files into the main context. This keeps the context window clean and efficient.</p>
<h3>Why Claude reads files strategically</h3>
<p>Claude doesn't read every file in your project at once. Instead, it reads what's relevant to the current task, keeping context usage manageable.</p>
<h3>Why conversations have limits</h3>
<p>Eventually, if you have a very long conversation with lots of code, you'll approach that 200k token limit. When that happens, earlier parts of the conversation might get compressed or forgotten.</p>
<h3>Why project memory matters</h3>
<p>Files like <code>CLAUDE.md</code> stay loaded throughout the entire session, so Claude always knows your project's specific setup without having to rediscover it each time.</p>
<h2>Making the most of context</h2>
<p>Here are some tips to work effectively within context constraints:</p>
<ul>
<li><strong>Be specific about what you need</strong> - instead of "fix my app," try "fix the authentication bug in UserService.ts"</li>
<li><strong>Use project memory</strong> - keep important project info in <code>CLAUDE.md</code> so it's always available</li>
<li><strong>Break down large tasks</strong> - if you're refactoring a huge feature, tackle it piece by piece</li>
<li><strong>Trust the agents</strong> - when Claude launches specialized agents for complex searches or tasks, it's to keep the main conversation context focused</li>
</ul>
<p>Context windows are one of those invisible infrastructure pieces that make AI coding assistants work. Understanding how they function can help you work more effectively with Claude Code, and ultimately, get stuff done.</p>]]></content:encoded>
    </item>
    <item>
      <title>Side projects and love letters</title>
      <description>On the value of making things and sharing them.</description>
      <link>https://code.charliegleason.com/side-projects-and-love-letters</link>
      <guid isPermaLink="true">https://code.charliegleason.com/side-projects-and-love-letters</guid>
      <pubDate>Wed, 12 Jun 2024 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>Early 2023 I started working on a project, <a href="https://lysterfieldlake.com">Lysterfield Lake</a>, that took me down a long winding road of generative AI as a tool for creative expression.</p>
<p>I used <a href="https://replicate.com">Replicate</a> as a jumping off point, exploring models, learning their open-source tool <a href="https://github.com/replicate/cog">Cog</a>, and eventually constructing an offline pipeline to generate the final videos that made up the project.</p>
<p>When I finished it, I sent them a bit of a love letter to say thanks for making cool stuff.</p>
<hr>
<div><div><p>From:    Charlie Gleason &#x3C;hi﻿@charliegleason.com><div></div>
Date:    Thu, 30 Nov 2023, 23:23<div></div>
To:      team﻿@replicate.com<div></div>
Subject: Thanks</p></div><p>Hey Replicate,</p><p>I wanted to reach out because I'm a product designer and developer who has been working on a project for the last year or so that was really inspired by (and to be totally honest, only possible because of) your tools. I am such a huge fan of Cog, of your explore page, and of the way you're making generative AI tools accessible and inspiring. I genuinely say Replicate when people ask me if there's a company that I'm excited by or that I'd love to work for (maybe don't mention that last bit to my current employer).</p><p>If you were ever looking for someone to do user research with, or to take a look at beta features, or to just generally talk about your product, I'd be super keen.</p><p>Oh, and the project is here: https://lysterfieldlake.com/</p><p>It's all open source: https://github.com/superhighfives/lysterfield-lake-pipeline</p><p>And I wrote a bit about the process of making it here: https://charliegleason.com/work/lysterfield-lake</p><p>Thanks to the team for making great stuff,<div></div>
Charlie.</p></div>
<hr>
<p>And I got this back.</p>
<hr>
<div><div><p>From:    Ben Firshman &#x3C;ben﻿@replicate.com><div></div>
Date:    1 Dec 2023, 00:59<div></div>
To:      Charlie Gleason &#x3C;hi@charliegleason.com><div></div>
Subject: Re: Thanks</p></div><p>Hey Charlie!</p><p>Your generative music video is so cool. Would love to catch up and hear how you made it and how Replicate worked for you. :)</p></div>
<hr>
<p>Anyway, in related news, I'm joining Replicate as a staff designer. Happy days. Here's to side projects and love letters.</p>
<p>😊</p>]]></content:encoded>
    </item>
    <item>
      <title>Public / private Remix routes</title>
      <description>How to open-source a public Remix site while keeping some routes authenticated and private on Cloudflare.</description>
      <link>https://code.charliegleason.com/public-private-remix-routes</link>
      <guid isPermaLink="true">https://code.charliegleason.com/public-private-remix-routes</guid>
      <pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p><a href="https://salesforce.com/">Salesforce</a>, where I've spent the last seven years, has a pretty reasonable policy toward sharing design work in portfolios—just put anything that's non-public behind a password.</p>
<p>My site, <a href="https://charliegleason.com">charliegleason.com</a> is built in <a href="https://remix.run">Remix</a>, and it's open-source. Which makes putting case studies about my work on enterprise software tricky.</p>
<p>I was talking to <a href="https://twitter.com/geelen">Glen Maddern</a>, and he mentioned that given it's being deployed on Cloudflare Pages, I could just wrap the public site in a private repo and add the protected routes that way. Genius.</p>
<h2>How does it work?</h2>
<p>There are a couple of gotchas with this approach, but it's relatively straightforward.</p>
<ul>
<li>Create a new project with <code>npm init</code></li>
<li>Set up a folder structure that matches the repo you're going to be injecting these private routes into</li>
<li>Install your public Remix site's GitHub repo as an NPM dependency with <div><code>npm install git://github.com/USER_NAME/REPO_NAME.git#branch</code></div> (usually main)</li>
<li>Create a <code>.dev.vars</code> file in your private repo to manage any environment variables if you have them</li>
<li>Use <a href="https://www.npmjs.com/package/concurrently">concurrently</a> and <a href="https://www.npmjs.com/package/sync-directory">sync-directory</a> to sync your files to <code>node_modules/REPO_NAME</code>, where REPO_NAME is the name of your repository</li>
</ul>
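<p>For reference, the private repo's folder structure ends up looking something like this (the folder names here mirror the sync scripts; yours will match whatever directories you choose to sync):</p>
<pre><code class="language-txt">auth.charliegleason.com/
├── routes/        → synced to app/routes in the public repo
├── app/data/      → synced to app/data
├── public/        → synced to public
├── .dev.vars      → copied in on postinstall
└── package.json
</code></pre>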
<p>You can see the <code>package.json</code> I use for this below:</p>
<pre><code class="language-json">{
  "name": "auth.charliegleason.com",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "dev": "concurrently \"npm run sync:watch\" \"npm --prefix node_modules/charliegleason.com run dev\"",
    "postinstall": "npm run sync &#x26;&#x26; npm --prefix node_modules/charliegleason.com install &#x26;&#x26; cp .dev.vars node_modules/charliegleason.com",
    "sync": "npm run sync:files:routes &#x26;&#x26; npm run sync:files:assets &#x26;&#x26; npm run sync:files:data",
    "sync:watch": "concurrently \"npm run sync:files:routes -- -w\" \"npm run sync:files:assets -- -w\" \"npm run sync:files:data -- -w\"",
    "sync:files:routes": "syncdir routes node_modules/charliegleason.com/app/routes --exclude .DS_Store",
    "sync:files:data": "syncdir app/data node_modules/charliegleason.com/app/data --exclude .DS_Store",
    "sync:files:assets": "syncdir public node_modules/charliegleason.com/public --exclude .DS_Store",
    "update": "npm install git://github.com/superhighfives/charliegleason.com.git#main &#x26;&#x26; npm install"
  },
  "dependencies": {
    "charliegleason.com": "github:superhighfives/charliegleason.com#main",
    "concurrently": "^8.2.2",
    "sync-directory": "^6.0.5"
  }
}
</code></pre>
<h2>Breaking it down</h2>
<p>So, what does each of these scripts actually do? Let's go through step-by-step.</p>
<h3>npm run dev</h3>
<p>Start watching files, and sync when they change. Start the dev server.</p>
<pre><code class="language-bash">concurrently "npm run sync:watch" "npm --prefix node_modules/charliegleason.com run dev"
</code></pre>
<h3>npm run postinstall</h3>
<p>Run the initial sync, install the dependencies for the public repo, and copy your local <code>.dev.vars</code> into it.</p>
<pre><code class="language-bash">npm run sync &#x26;&#x26; npm --prefix node_modules/charliegleason.com install &#x26;&#x26; cp .dev.vars node_modules/charliegleason.com
</code></pre>
<h3>npm run sync</h3>
<p>Sync your various files, but don't worry about doing it concurrently.</p>
<pre><code class="language-bash">npm run sync:files:routes &#x26;&#x26; npm run sync:files:assets &#x26;&#x26; npm run sync:files:data
</code></pre>
<h3>npm run sync:watch</h3>
<p>Sync your various files, but this time, also watch them. (There's probably a more succinct way of doing this, but it's for a personal project, so.)</p>
<pre><code class="language-bash">concurrently "npm run sync:files:routes -- -w" "npm run sync:files:assets -- -w" "npm run sync:files:data -- -w"
</code></pre>
<h3>npm run sync:files:routes,data,assets</h3>
<p>For each directory you want to sync, add <code>syncdir PRIVATE_REPO_FOLDER PUBLIC_REPO_FOLDER</code>. I ran into some issues with <code>.DS_Store</code> files getting copied over on Mac and causing mayhem, so I excluded them.</p>
<pre><code class="language-bash">syncdir routes node_modules/charliegleason.com/app/routes --exclude .DS_Store
</code></pre>
<h3>npm run update</h3>
<p>Force an update for the public repo if you run into any weirdness with stuff not updating. Not strictly necessary, but I came across it a couple times.</p>
<pre><code class="language-bash">npm install git://github.com/superhighfives/charliegleason.com.git#main &#x26;&#x26; npm install
</code></pre>
<p>One of the nice little features of this setup is that you can easily work on the public and private stuff at the same time. Because it's all just <code>package.json</code> under the hood, you can take advantage of <a href="https://docs.npmjs.com/cli/v10/commands/npm-link">npm link</a>. Let's say we've got two repos, <a href="https://github.com/superhighfives/charliegleason.com">charliegleason.com</a> and a private repo. You can jump into the <code>~/Development/charliegleason.com</code> repo and run <code>npm link</code>. Then jump into the private repo and run <code>npm link "charliegleason.com"</code>. Boom. Magic. ✨</p>
<p>The only thing to watch out for here is that your private files will end up inside your public repo locally, so just make sure you don't commit anything on the public side that you don't want to.</p>
<h2>Deployment</h2>
<p>When you're ready to deploy your site, you'll need to set up a new Cloudflare Pages project <a href="https://dash.cloudflare.com/">on the Cloudflare dashboard</a>.</p>
<p>Navigate to <strong>Workers &#x26; Pages</strong> > <strong>Create Application</strong> > <strong>Pages</strong> > <strong>Connect to Git</strong>.</p>
<p>Use the following settings:</p>
<h3>Build command</h3>
<pre><code class="language-bash">npm --prefix node_modules/REPO_NAME run build &#x26;&#x26; ln -s node_modules/REPO_NAME/functions functions
</code></pre>
<blockquote>
<p>[!NOTE]<br>
You'll need to update the REPO_NAME with the name of your repo. In the example screenshot, the repo is <code>charliegleason.com</code>.</p>
</blockquote>
<p>This will build your public repo on Cloudflare, and create a symbolic link of your functions directory to the root, where Cloudflare expects it.</p>
<h3>Build output directory</h3>
<pre><code class="language-bash">node_modules/REPO_NAME/public
</code></pre>
<blockquote>
<p>[!NOTE]<br>
Again, update the repo name.</p>
</blockquote>
<p>On deploy, Cloudflare will run the <code>postinstall</code> script inside your <code>package.json</code>, handling the syncing of assets and routes you defined earlier. It'll then build the public repo with the private assets and routes.</p>
<p>And just like that, you'll be serving your private routes alongside your public ones. Everyone wins!</p>]]></content:encoded>
    </item>
    <item>
      <title>Satori, Vite, Remix, Cloudflare, og my!</title>
      <description>How to handle bundling Satori (Yoga) and Resvg WASM files for both Vite and Cloudflare.</description>
      <link>https://code.charliegleason.com/satori-vite-remix-cloudflare</link>
      <guid isPermaLink="true">https://code.charliegleason.com/satori-vite-remix-cloudflare</guid>
      <pubDate>Sun, 26 May 2024 00:00:00 GMT</pubDate>
      
<content:encoded><![CDATA[<p>On Friday night I decided to add <a href="https://github.com/vercel/satori">Satori</a> to <a href="https://github.com/superhighfives/code.charliegleason.com/">code.charliegleason.com</a> to automatically generate an <code>og:image</code> for each post.</p>
<p>I'm hosting the site on Cloudflare Pages, which makes it super fast at the edge™. I installed the <a href="https://github.com/vercel/satori?tab=readme-ov-file#runtime-and-wasm">Satori WASM requirements</a>, added <a href="https://github.com/RazrFalcon/resvg">Resvg</a> to save it as a PNG, and then assumed the pose of a man awaiting success.</p>
<p>I got a lot of errors.</p>
<p>It turns out <a href="https://vitejs.dev/guide/features#accessing-the-webassembly-module">Vite handles WASM files</a> in one specific way, and <a href="https://developers.cloudflare.com/workers/runtime-apis/webassembly/javascript/">Cloudflare expects WASM files</a> in quite another. The result was a weekend spent <a href="https://github.com/superhighfives/vite-plugin-wasm-module-workers/blob/main/src/index.ts#L21">noodling on regular expressions</a>. And a library that will do it for you.</p>
<p>👉 <a href="https://github.com/superhighfives/vite-plugin-wasm-module-workers">vite-plugin-wasm-module-workers</a></p>
<p>It will, in essence, take this:</p>
<pre><code class="language-ts">import satori, { init as initSatori } from 'satori/wasm'
import { Resvg, initWasm as initResvg } from '@resvg/resvg-wasm'
import initYoga from 'yoga-wasm-web'

import YOGA_WASM from 'yoga-wasm-web/dist/yoga.wasm?url'
import RESVG_WASM from '@resvg/resvg-wasm/index_bg.wasm?url'
</code></pre>
<p>Then, in our default function:</p>
<pre><code class="language-ts">export async function createOGImage(title: string, requestUrl: string) {
  const { default: resvgwasm } = await import(
    /* @vite-ignore */ `${RESVG_WASM}?module`
  )
  const { default: yogawasm } = await import(
    /* @vite-ignore */ `${YOGA_WASM}?module`
  )

  try {
    if (!initialised) {
      await initResvg(resvgwasm)
      await initSatori(await initYoga(yogawasm))
      initialised = true
    }
  } catch (e) {
    initialised = true
  }

  // more fancy code

</code></pre>
<p>And turn it into this, on build:</p>
<pre><code class="language-ts">import YOGA_WASM from './assets/yoga-CP4IUfLV.wasm'
import RESVG_WASM from './assets/index_bg-Blvrv-U2.wasm'
let initialised = false

async function createOGImage(title, requestUrl) {
  const resvgwasm = RESVG_WASM
  const yogawasm = YOGA_WASM
  try {
    if (!initialised) {
      await initWasm(resvgwasm)
      await init(await initYoga(yogawasm))
      initialised = true
    }
  } catch (e) {
    initialised = true
  }

  // more fancy build code
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Things I use</title>
      <description>A list of hardware, software, and creative tools I use day-to-day.</description>
      <link>https://code.charliegleason.com/things-i-use</link>
      <guid isPermaLink="true">https://code.charliegleason.com/things-i-use</guid>
      <pubDate>Tue, 21 May 2024 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>Back in 2018, I did an interview with <a href="https://usesthis.com/interviews/charlie.gleason/">Uses This</a>, which captures the tools and technology that creative people use. In that vein, I thought I'd update it for 2024.</p>
<h2>Who are you, and what do you do?</h2>
<p>I'm <a href="https://charliegleason.com/">Charlie Gleason</a>, a <a href="https://dribbble.com/superhighfives">designer</a>, <a href="https://github.com/superhighfives">developer</a>, and <a href="http://wewerebrightly.com/">musician</a>. I push around pencils and pixels at <div><a href="https://www.heroku.com/">Heroku</a></div> <a href="https://www.salesforce.com/">Salesforce</a>, working on developer tooling and experience. I have worked on a ton of projects, personally and professionally, the most interesting ones of which <a href="https://charliegleason.com/">you can find at my site</a>. I studied design at university, and then went back to do computer science, and I sweat the intersection of design and code.</p>
<h2>What hardware do you use?</h2>
<p>For most things, I use my trusty Intel <a href="https://www.apple.com/macbook-pro/">MacBook Pro</a>, the last of its kind, which I spent a silly amount of money on during the pandemic right before Apple announced an entirely new architecture. Good times all round in 2020.</p>
<p>I use an <a href="https://www.apple.com/uk/studio-display/">Apple Studio Display</a>, a wireless <a href="https://www.apple.com/uk/shop/product/MQ052B/A/magic-keyboard-with-numeric-keypad-british-english">Magic Keyboard with a number pad</a>, and a <a href="https://www.apple.com/uk/shop/product/MK2E3Z/A/magic-mouse-white-multi-touch-surface">Magic Mouse</a>.</p>
<p>When doing 3D modelling I use a <a href="https://www.logitechg.com/en-gb/products/gaming-mice/g502-lightspeed-wireless-gaming-mouse.html">Logitech G502 wireless mouse</a> because 3D modelling is a lot easier with three buttons and a scroll wheel. I also lose the wireless dongle once a week, and it drives me bananas.</p>
<p>For all things AI and machine learning, I have a Teenage Engineering-designed <a href="https://teenage.engineering/products/computer-1">Computer-1</a> case in day-glo orange, housing an <a href="https://www.amd.com/en/product/8436">AMD Ryzen 9 3900X</a> with an <a href="https://www.nvidia.com/en-gb/geforce/graphics-cards/30-series/rtx-3060-3060ti/">Nvidia 3060</a>, running <a href="https://ubuntu.com/">Ubuntu</a>. I use <a href="https://tailscale.com/">Tailscale</a> to make managing my home network easier.</p>
<p>For ideation and design stuff, I tend to sketch ideas out with a pen and paper. (Actually, that's a lie. That's pretty rare. I think I'm faster in <a href="https://www.figma.com/">Figma</a> than I am at actually drawing things out, and it's infinitely more malleable. So I usually just throw boxes in Figma.)</p>
<p>For music I use a whole ton of bits and pieces I've collected over the years. The key pieces, beyond my laptop, are Ableton's <a href="https://ableton.com/push">Push 3</a>, Focusrite's <a href="http://web.archive.org/web/20230528064506/https://focusrite.com/en/usb-audio-interface/scarlett/scarlett-2i2-studio">Scarlett 2i2</a>, and a Radial DI and microphone splitter. I spend a lot of time trying not to buy an <a href="https://teenage.engineering/store/op-1-field/">OP-1 Field</a> or an <a href="https://www.google.com/search?q=elektro+digitone&#x26;sourceid=chrome&#x26;ie=UTF-8">Elektron Digitone</a>. Also, I love <a href="https://www.radialeng.com/">Radial Engineering</a>—their stuff could fall out of a plane and it would still work.</p>
<h2>And what software?</h2>
<p>For design I use <a href="https://www.figma.com/">Figma</a>, as mentioned. For development, <a href="https://code.visualstudio.com/">VS Code</a>. The real secret sauce for development is the Operator Mono font, though. It's really pretty.</p>
<p>Oh, I also recently moved from <a href="https://www.google.com/intl/en/chrome/">Chrome</a> to <a href="https://www.arc.com">Arc</a>, which has some really interesting ideas around the browser and managing tabs.</p>
<p>For terminal stuff, I use <a href="https://ohmyz.sh/">oh-my-zsh</a>, <a href="https://github.com/romkatv/powerlevel10k">powerlevel10k</a>, and a bunch of utilities like <a href="https://github.com/junegunn/fzf">fzf</a> and <a href="https://github.com/sharkdp/bat">bat</a>.</p>
<p>For backups and moving files around <a href="https://rclone.org/">rclone</a> is great. The feature set alone is proof of the power of open-source.</p>
<p>For notes, I use VS Code or <a href="https://ia.net/writer">iA Writer</a>, but I've previously loved using <a href="https://bear.app/">Bear</a>.</p>
<p>For music I use <a href="https://www.ableton.com/en/live/">Ableton</a> with <a href="https://www.fabfilter.com/">Fabfilter's creative plugins</a>. I love the design of Fabfilter. It has a super clean, clear UI, which is beyond a rarity for software plugins.</p>
<h2>What would be your dream setup?</h2>
<p>The older I get the more I've realised that the amount of stuff you have doesn't really correlate to your level of personal happiness. And technology changes so rapidly that whatever your dream is, it'll be superseded in a year anyway.</p>
<p>So I think my dream setup is whatever I have right now, that generally meets my needs. In life, as in product design, perfect is the enemy of good. If it works well, then the rest is just gravy.</p>]]></content:encoded>
    </item>
    <item>
      <title>Hello world</title>
      <description>Code is a space to share code, resources, and thoughts on design and front-end development.</description>
      <link>https://code.charliegleason.com/hello-world</link>
      <guid isPermaLink="true">https://code.charliegleason.com/hello-world</guid>
      <pubDate>Fri, 17 May 2024 00:00:00 GMT</pubDate>
      
      <content:encoded><![CDATA[<p>Hello world. 👋</p>
<p>I'm a designer and developer who loves the intersection of creativity and code. This site is where I share technical implementations, creative coding experiments, and the occasional tool or resource I've found useful.</p>
<p>You'll find posts about things like <a href="/satori-vite-remix-cloudflare">generating OG images with Satori</a>, <a href="/public-private-remix-routes">building authentication patterns in Remix</a>, and <a href="/side-projects-and-love-letters">the value of side projects</a>. Most posts include working code examples, links to demos, and reflections on what worked (and what didn't).</p>
<p>The site itself is open-source and built with React Router 7, Cloudflare Workers, and a bunch of other bits and pieces. You can find the source code <a href="https://github.com/superhighfives/code.charliegleason.com">on GitHub</a>, and you can subscribe via <a href="/rss">RSS</a> to get updates when I publish something new.</p>]]></content:encoded>
    </item>
  </channel>
</rss>