In Progress

AI Agents Suite

A modular collection of AI agents for automating repetitive tasks across social media, trading, moderation, and development workflows.

Python • LangChain • FastAPI • Redis • OpenAI/LLM

Overview

The Problem

Repetitive tasks drain time: scheduling social posts, moderating communities, monitoring markets, reviewing code. Each domain has its own tools, none of which talk to each other. Context switching kills productivity.

The Solution

Build once, deploy everywhere. A unified agent framework where each agent specializes in one domain but shares infrastructure: scheduling, logging, error handling, and a common LLM interface.

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                      AGENT ORCHESTRATOR                         │
│  Scheduling • Logging • Error Handling • Rate Limiting          │
└─────────────────────────────────────────────────────────────────┘
                              │
       ┌──────────────────────┼──────────────────────┐
       ▼                      ▼                      ▼
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│  SOCIAL AGENT   │   │ SENTINEL AGENT  │   │  TRADING AGENT  │
│  Twitter/X      │   │ Discord/Slack   │   │  Market Data    │
│  Content Gen    │   │ Content Filter  │   │  Signal Gen     │
└─────────────────┘   └─────────────────┘   └─────────────────┘
       │                      │                      │
       └──────────────────────┼──────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                       LLM INTERFACE                             │
│  OpenAI / LLM / Local Models (Ollama)                           │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                        EXTERNAL APIs                            │
│  Twitter API • Discord API • Exchange APIs • GitHub API         │
└─────────────────────────────────────────────────────────────────┘

Orchestration

Central coordinator handles scheduling, retry logic, and cross-agent communication.
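As a rough sketch of the retry logic the coordinator might use (names here are hypothetical, not the project's actual API), exponential backoff with jitter is the standard pattern for retrying failed agent tasks:

```python
import random
import time


def with_retries(task, attempts=4, base_delay=0.5):
    """Run a task, retrying with exponential backoff and jitter on failure."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the orchestrator
            # Sleep base, 2*base, 4*base, ... plus jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))


calls = {"n": 0}

def flaky():
    """Simulated agent task that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.05))  # succeeds on the third attempt
```

In the real orchestrator this would be wrapped around each scheduled agent run rather than called directly.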

Specialized Agents

Each agent is a self-contained module with domain-specific prompts and tools.
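A minimal sketch of what such a module could look like (class and tool names are illustrative, not taken from the codebase):

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Minimal base: each agent bundles its own prompt and tools."""
    name: str
    system_prompt: str
    tools: dict = field(default_factory=dict)

    def run(self, task: str) -> str:
        # A real agent would route through the shared LLM interface;
        # this sketch just dispatches to a registered tool by name.
        tool = self.tools.get(task)
        return tool() if tool else f"[{self.name}] no tool for {task!r}"


social = Agent(
    name="social",
    system_prompt="You write short posts in the brand voice.",
    tools={"draft_post": lambda: "Shipping a new feature today!"},
)
print(social.run("draft_post"))
```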

LLM Abstraction

Swap between OpenAI, LLM, or local models without changing agent code.
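The abstraction can be as simple as a shared protocol that every backend implements; agents depend only on the interface. The backend classes below are stand-ins that fake the network call:

```python
from typing import Protocol


class LLM(Protocol):
    """Shared interface: agent code depends on this, never on a provider SDK."""
    def complete(self, prompt: str) -> str: ...


class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        # The real backend would call the OpenAI SDK here.
        return f"openai:{prompt}"


class OllamaBackend:
    def complete(self, prompt: str) -> str:
        # A local model served by Ollama would be called here.
        return f"ollama:{prompt}"


def summarize(llm: LLM, text: str) -> str:
    # Agent logic sees only the interface, so backends swap freely.
    return llm.complete(f"Summarize: {text}")


print(summarize(OpenAIBackend(), "hello"))
print(summarize(OllamaBackend(), "hello"))
```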

Agent Types

Social Media Agent

Automated posting, engagement tracking, and AI-generated responses.

  • Multi-platform scheduling
  • Content generation with brand voice
  • Engagement analytics

Sentinel - AI Guardian

Live

Autonomous agent that actively protects and informs your server.

  • The Cleaner: Auto-moderates and bans suspicious accounts instantly.
  • Hybrid Brain: Trainable via /learn commands. Remembers context.
  • News Anchor: Auto-posts crypto news.
  • Researcher: Browses the web.

Trading Agent

Market monitoring and signal generation (analysis only, no execution).

  • Technical indicator monitoring
  • Sentiment analysis from social
  • Alert notifications

Dev Assistant

Code review, documentation generation, and CI/CD automation.

  • PR review suggestions
  • Auto-generated docs
  • Test generation

Tech Stack

Every tool was chosen for a specific reason:

Python

The lingua franca for AI/ML. Every LLM SDK has first-class Python support. Async/await handles concurrent API calls efficiently.

LangChain

Agent framework with built-in tools, memory, and chain composition. Accelerates development vs building from scratch.

FastAPI

High-performance Python API framework. Automatic OpenAPI docs, Pydantic validation, and async support out of the box.

Redis

Task queue for background jobs. Rate limiting per API. Caching LLM responses to reduce costs. Pub/sub for real-time events.

Challenges

LLM Cost Management

Problem: Agents making hundreds of API calls per day adds up quickly.

Solution: Implemented response caching in Redis. Semantic similarity check before new API call. Smaller models for simple tasks, larger for complex reasoning.
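The exact-match half of that cache is simple to sketch. The dict below stands in for Redis (in production the same key/value pair would live there with a TTL), and the semantic-similarity check is omitted:

```python
import hashlib

cache: dict[str, str] = {}  # stand-in for Redis; a real setup would use SETEX with a TTL
llm_calls = 0


def expensive_llm(prompt: str) -> str:
    """Fake LLM call that counts invocations so the cache hit is visible."""
    global llm_calls
    llm_calls += 1
    return prompt.upper()


def cached_complete(prompt: str) -> str:
    # Hash the prompt so arbitrarily long prompts map to fixed-size keys.
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in cache:
        cache[key] = expensive_llm(prompt)
    return cache[key]


cached_complete("same prompt")
cached_complete("same prompt")
print(llm_calls)  # 1 -> the second call was served from the cache
```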

API Rate Limits

Problem: Twitter, Discord, and exchange APIs all have different rate limits.

Solution: Per-API rate limiters using Redis token buckets. Exponential backoff on 429 errors. Priority queuing for time-sensitive tasks.
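The core of a token bucket fits in a few lines. This in-memory sketch shows the refill logic; in the actual system the counters would live in Redis (typically updated atomically via a Lua script) so all workers share one bucket per API:

```python
import time


class TokenBucket:
    """Per-API limiter: refills `rate` tokens per second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should queue or back off


twitter_limiter = TokenBucket(rate=1.0, capacity=3)
results = [twitter_limiter.allow() for _ in range(5)]
print(results)  # the first three requests pass, then the bucket is empty
```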

Agent Reliability

Problem: LLMs sometimes produce unexpected outputs that break downstream logic.

Solution: Structured output with Pydantic validation. Fallback handlers for malformed responses. Human-in-the-loop for high-stakes actions.
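To keep this sketch dependency-free it uses stdlib `json` plus manual checks rather than the Pydantic models the project actually relies on, but the idea is the same: validate the model's output against a schema and fall back to a safe default when it is malformed. Field names here are illustrative:

```python
import json


def parse_signal(raw: str) -> dict:
    """Validate an LLM's JSON output; return a safe default if malformed."""
    fallback = {"action": "hold", "confidence": 0.0}
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return fallback  # the model replied with prose, not JSON
    if data.get("action") not in {"buy", "sell", "hold"}:
        return fallback  # unexpected enum value would break downstream logic
    if not isinstance(data.get("confidence"), (int, float)):
        return fallback
    return {"action": data["action"], "confidence": float(data["confidence"])}


print(parse_signal('{"action": "buy", "confidence": 0.8}'))
print(parse_signal("Sure! Here is the signal you asked for..."))  # -> fallback
```

With Pydantic the checks collapse into a model class and a `model_validate_json` call, with the fallback applied in the exception handler.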

Results

What Works Today

  • Core orchestrator with scheduling
  • Sentinel - AI Guardian (Discord)
  • Social agent prototype (Twitter)
  • LLM abstraction layer
  • Redis-based task queue
  • Basic logging and monitoring

Planned Next

  • Trading signal agent
  • Web dashboard for configuration
  • Multi-agent collaboration
  • Self-hosted local models

Follow Development

This project is actively being built.

GitHub