Autonomous Social Media Engine

Let me set the scene: it's somewhere past midnight, I'm staring at a screen full of TypeScript, and I'm trying to explain to an AI what the word "lollybagging" means. Not because I've lost my mind, but because I'm building an autonomous content engine for my web3 music project...and the AI keeps using "lollipop technique" instead. Which, for the record, is not a thing in the Bagaverse.

This is the story of how I built a fully automated social media system from scratch, what worked, what spectacularly didn't, the unexpected rabbit holes, and where this whole thing is heading. If you work in marketing automation, AI content generation, or social media strategy, pull up a chair. There's something in here for you.

The Problem I Was Actually Solving

Bags Bro! is a music-first, hip-hop-rooted brand built around what I call the Bagaverse - a philosophical universe where everything orbits the pursuit of the bag (money, knowledge, culture, vibes, all of it). We have NFT collections, physical trading cards, music on Spotify and Apple Music, and a genuinely unhinged brand lingo dictionary that includes terms like "baggamistic," "lollybagging," and "bagsbrophysics."

The content challenge was real: maintaining an authentic, culturally sharp voice on X (Twitter) while also running product development, music distribution, and enough other things to make a project manager weep. The solution was not to hire a social media manager. The solution was to build one.

"One person doing what used to take a team" is the promise of AI automation. I wanted to test whether that promise held up when the brand voice was genuinely difficult to replicate.

Phase One: The Naive Version (A Cautionary Tale)

The first iteration was exactly what you'd expect from someone who has been coding too long and sleeping too little: a single hardcoded system prompt stuffed with everything. Brand personality. Tone rules. A comma-separated list of 52 lingo terms. Post type weights. Timing logic. All of it, jammed into one giant string and injected into every Claude API call regardless of what type of content was being generated.

It worked. Sort of. The posts came out on-brand roughly 60% of the time. The other 40% were educational - which is a polite way of saying the AI would sometimes use "lollybagging" in a sentence about lollipops because it had the word but not the definition. Context matters enormously, and I had given it none.

The second problem was rigidity. Every topic, format, and audience segment got the exact same prompt. A trending crypto post and a late-night shitpost were both dressed in the same outfit. The AI had no way to know the difference mattered.

The Architecture That Actually Works

After a couple of days of iteration, the system evolved into something I'm genuinely proud of. Here's the stack at a high level, without getting too deep into the weeds:

A dynamic context engine that assembles Claude's prompt from MongoDB based on what type of post is being generated. Instead of one monolithic prompt, the system now pulls: the base persona, relevant lingo terms with their full definitions and usage examples, audience-specific framing, content pattern templates, and hot take topic seeds - all composited at generation time.
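As a sketch, the assembly step might look like this in TypeScript. Every interface and field name here is illustrative rather than the real schema, and the MongoDB lookups are stubbed out; the point is that the prompt is composed per post type from small, selectable pieces:

```typescript
// Hypothetical shapes for the pieces the context engine pulls from MongoDB.
interface LingoTerm {
  term: string;
  definition: string;
  usageExample: string;
}

interface PromptContext {
  persona: string;          // base persona text
  lingo: LingoTerm[];       // only the terms relevant to this post type
  audienceFraming: string;  // segment-specific framing
  patternTemplate: string;  // content pattern template
}

// In production each field would come from its own collection query;
// here the context is passed in already assembled.
function buildPrompt(ctx: PromptContext, topic: string): string {
  const lingoBlock = ctx.lingo
    .map((t) => `- "${t.term}": ${t.definition} (e.g. "${t.usageExample}")`)
    .join("\n");

  return [
    ctx.persona,
    `Relevant lingo (use only as defined):\n${lingoBlock}`,
    `Audience framing: ${ctx.audienceFraming}`,
    `Format: ${ctx.patternTemplate}`,
    `Topic: ${topic}`,
  ].join("\n\n");
}
```

Because the lingo entries carry full definitions and usage examples, the model gets meaning with the word instead of a bare token it can misread.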

A queue system that pre-generates 3–6 posts each morning, complete with AI-generated pixel art images and Seedance-animated MP4 videos for trending topics. Posts sit in a review queue where I can edit text, regen images, regenerate the entire post, or skip it before it fires. The dashboard is built in Next.js and talks to a Fastify backend on Google Cloud Run.
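The review queue is easiest to picture as a small state machine. This TypeScript sketch uses hypothetical field names and statuses, not the dashboard's actual schema:

```typescript
// Illustrative lifecycle of a queued post: drafts can be edited, approved,
// or skipped; only approved posts fire when their scheduled time arrives.
type QueueStatus = "draft" | "approved" | "skipped" | "posted";

interface QueuedPost {
  id: string;
  text: string;
  imageUrl?: string;   // pixel art, regenerable from the dashboard
  videoUrl?: string;   // Seedance MP4, trending topics only
  status: QueueStatus;
  scheduledFor: Date;
}

const transitions: Record<QueueStatus, QueueStatus[]> = {
  draft: ["approved", "skipped"],
  approved: ["posted", "skipped"],
  skipped: [],
  posted: [],
};

function advance(post: QueuedPost, next: QueueStatus): QueuedPost {
  if (!transitions[post.status].includes(next)) {
    throw new Error(`illegal transition ${post.status} -> ${next}`);
  }
  return { ...post, status: next };
}
```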

An image and animation pipeline using Wavespeed's Flux model for pixel art generation (trained on our own NFT collection via a custom Replicate LoRA) and Seedance 1.5 Pro for animating the statics into 7-second MP4s. The animated posts look genuinely stunning and cost less than a dollar each.

A skill graph encoded in MongoDB — the brand voice, lingo dictionary, hot take topic seeds, audience segment framing - all updateable from the dashboard without redeploying anything. Add a new lingo term and it's live in the next post generation cycle.

A scheduler that mimics human posting behavior: 3–6 posts per day, randomized within broad time windows (8am to midnight ET), 90-minute minimum gaps with variable spacing, skip days built in, and a reply engine that monitors target accounts in the crypto and hip-hop Twitter spaces.
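A minimal TypeScript sketch of that scheduling behavior, using rejection sampling to enforce the minimum gap. The constants mirror the numbers above; everything else is illustrative:

```typescript
// 8am to midnight ET, expressed as minutes from window start for simplicity.
const WINDOW_MINUTES = 16 * 60;
const MIN_GAP = 90; // minimum minutes between posts

// Pick 3–6 fully random slots in the window, rejecting any candidate that
// lands within MIN_GAP of an already-chosen slot. The attempt cap guards
// against pathological RNGs; with these constants it essentially never hits.
function planDay(rng: () => number = Math.random): number[] {
  const count = 3 + Math.floor(rng() * 4); // 3–6 posts
  const slots: number[] = [];
  let attempts = 0;
  while (slots.length < count && attempts++ < 1000) {
    const candidate = Math.floor(rng() * WINDOW_MINUTES);
    if (slots.every((s) => Math.abs(s - candidate) >= MIN_GAP)) {
      slots.push(candidate);
    }
  }
  return slots.sort((a, b) => a - b);
}
```

Because every minute in the window is equally likely, the spacing varies day to day instead of clustering around fixed anchor hours.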

"The goal was never automation for its own sake. The goal was to sound like a real person at 2pm on a Tuesday when I'm busy doing seventeen other things."

What Didn't Work (The Honest Part)

LoRA quality vs. quantity. I trained a custom Flux LoRA on 500 Bags Bro! Ballers NFT images to give the AI-generated pixel art our brand's specific aesthetic. The style transfer worked beautifully. The anatomy of anything with hands? A disaster. Flux's weakness with hands is well-documented, and 500 training images isn't enough to overcome it. The LoRA is now a secondary fallback while Wavespeed's base Flux model handles primary generation...and the results are actually better.

The lingo problem. Giving an AI a list of made-up words without definitions is like handing someone a dictionary with no meanings. "Lollybagging" became "lollipop technique." "Baggamistic" got used in contexts that made no sense. The fix? Passing full definitions and usage examples from the database...seems obvious in retrospect. It always does.

Fixed timing patterns. The original scheduler used eight specific anchor hours with jitter added. The result was posts appearing at predictably similar intervals, which is exactly what automated posting looks like. Humans don't post at 10:07am every Tuesday. The fix was switching to broad time windows with fully randomized minute placement and variable minimum gaps.

Silent failures. Early versions of the pipeline would fail silently. An image generation call would error out, the catch block would swallow it, and the post would go live as text-only with no indication anything went wrong. Proper logging, fallback chains (Replicate → Wavespeed → error), and visibility in the dashboard fixed this.
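The fallback chain can be sketched like this. The provider names and types are placeholders, but the shape matters: log the failure, fall through to the next provider, and surface a final failure explicitly instead of swallowing it:

```typescript
// A generator takes a prompt and resolves to an image URL.
type ImageGenerator = (prompt: string) => Promise<string>;

// Try each provider in order (e.g. Replicate, then Wavespeed). Every error
// is logged; if the whole chain fails, the null result is surfaced to the
// dashboard so the post is never silently published text-only.
async function generateWithFallback(
  prompt: string,
  chain: Array<{ name: string; gen: ImageGenerator }>,
): Promise<{ url: string | null; provider: string | null }> {
  for (const { name, gen } of chain) {
    try {
      return { url: await gen(prompt), provider: name };
    } catch (err) {
      console.error(`[image] ${name} failed:`, err);
    }
  }
  return { url: null, provider: null };
}
```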

Where This Goes Next

The Twitter engine is the proof of concept. The real opportunity is extending the same architecture to TikTok and Instagram, where the animated pixel art MP4s are purpose-built for Reels format. Same content pipeline, different platform nodes in the skill graph, different timing windows.

The skill graph concept - composable, database-driven knowledge that assembles contextually per task - is transferable to any brand with a complex voice. The marketing automation space is full of tools that generate content. Very few of them know what the content is actually supposed to sound like. That gap is the opportunity.

The physical trading card shop (Stripe + print-on-demand API, no Shopify fees) plugs into the same ecosystem. Meta Pixel on the storefront, retargeting campaigns fed by the organic audience built through autonomous posting. The flywheel is: content builds audience, audience becomes customers, customer data improves content targeting.

The Actual Takeaway

AI automation in content generation is real and it works, but the quality ceiling is determined entirely by how much structured knowledge you give the system. A flat prompt produces flat content. A system that knows your brand, your audience, your voice rules, your lingo, and your product context produces something that actually sounds like you.

The tooling is accessible. The APIs are affordable. The real work is in the knowledge architecture: deciding what the system needs to know, structuring it so it can be used selectively, and building the feedback loops that keep it current.

I built this for Bags Bro! because I had to. But the pattern applies to any brand that has a genuine voice and not enough hours in the day to express it consistently. If you're a marketer, content strategist, or automation engineer reading this and nodding, the technology is there. It just needs someone willing to spend a few nights teaching an AI what "lollybagging" means.

— Jeremiah Williams AKA: The Bag Lord

Lord of The Bagaverse | Founder, Bags Bro!

bagsbro.io
