Core Modules

StarStreamer comes with several built-in modules that provide essential streaming automation features. These modules are production-ready and designed to work seamlessly together.

Chat Module

The Chat Module provides fundamental chat interaction capabilities and is the foundation for most streaming bots.

Features

Basic Commands

  • !hello - Friendly greeting with username
  • !ping - Bot responsiveness check
  • !commands - Lists all available commands
  • !lurk - Acknowledge lurkers
  • !hug <user> - Send virtual hugs to other users

Social Integration

  • !discord - Share Discord invite link
  • !socials - Display social media links

Chat Logging

  • Automatic logging of all chat messages for moderation and analytics
  • Structured logging with usernames and message content
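As an illustration of what a structured log line could look like (the exact format here is an assumption, not the module's actual output), a minimal formatter might be:

```python
import logging

# Hypothetical helper illustrating structured chat logging;
# the field layout is an assumption, not the module's real schema.
def format_chat_log(username: str, message: str) -> str:
    """Build a structured log line with username and message content."""
    return f"chat.message user={username!r} text={message!r}"

logging.getLogger("starstreamer.chat").info(format_chat_log("viewer42", "hello!"))
```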

Implementation Details

The Chat Module uses explicit dependency injection for all handlers:

@on_event("twitch.chat.message")
@trigger(CommandTrigger("!hello"))
async def hello_command(event: Event, twitch: TwitchClient) -> None:
    """Friendly greeting command"""
    user = event.data.get("user", {})
    username = user.get("display_name", user.get("username", "friend"))
    await twitch.send_message(f"Hello {username}! 👋 Welcome to the stream!")

Configuration

All social links in the Chat Module can be customized by editing the command handlers in src/modules/chat/actions/basic_commands.py.


Alerts Module

The Alerts Module handles real-time notifications for Twitch events, making your stream more interactive and engaging.

Features

Follow Alerts

  • Welcome messages for new followers
  • Color-coded announcements using Twitch's announcement system

Subscription Alerts

  • New Subscriptions - Alerts for Tier 1, 2, 3, and Prime subscriptions
  • Gift Subscriptions - Special handling for single gifts and gift bombs
  • Resubscriptions - Acknowledge loyalty with cumulative months and streaks

Raid Alerts

  • Large Raids (10+ viewers) - Special announcements with enhanced visibility
  • Standard Raids - Welcome messages for smaller raids
  • Automatic viewer count display

Bits/Cheer Alerts

  • Large Cheers (100+ bits) - Special announcements for significant support
  • Standard Cheers - Thank you messages for all bit contributions

Channel Point Redemptions

Built-in handlers for common redemptions:

  • Hydrate - Reminds streamer to drink water
  • Posture Check - Encourages good posture
  • Song Requests - Displays requested songs
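A sketch of how these redemptions could be dispatched by reward title (the real module wires each redemption through its own event handler; the table and messages below are illustrative assumptions):

```python
# Hypothetical dispatch table mapping redemption titles to chat responses.
REDEMPTION_RESPONSES = {
    "Hydrate": "💧 Time to drink some water!",
    "Posture Check": "🪑 Sit up straight!",
}

def redemption_message(title: str, user_input: str = "") -> str:
    """Return the chat response for a channel point redemption."""
    if title == "Song Request":
        # Song requests echo the viewer's input back to chat
        return f"🎵 Song requested: {user_input}"
    return REDEMPTION_RESPONSES.get(title, f"Redeemed: {title}")
```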

Hype Train Events

  • Start notifications with level and goal information
  • Progress updates as levels increase
  • Completion celebrations

Advanced Features

The Alerts Module uses trigger filtering for enhanced functionality:

@on_event("twitch.raid")
@trigger(MinViewersTrigger(10))
async def large_raid_alert(event: Event, twitch: TwitchClient, logger: logging.Logger) -> None:
    """Special alert for raids with 10+ viewers"""
    from_broadcaster = event.data.get("from_broadcaster_user_name", "Unknown")
    viewers = event.data.get("viewers", 0)

    await twitch.send_announcement(
        f"🚨 HUGE RAID INCOMING! 🚨 {from_broadcaster} is raiding with {viewers} viewers!",
        color="red",
    )

Customization

Alert messages can be customized by editing handlers in src/modules/alerts/actions/stream_alerts.py. Each alert type supports custom messages, colors, and triggers.


RPG Module

The RPG Module adds gamification elements to your stream with an economy system, work mechanics, and user progression.

Features

Economy System

  • Virtual currency for viewers
  • Persistent user balances stored in database
  • Transaction logging and analytics

Core Commands

  • !balance - Check current currency balance
  • !work - Earn currency with 5-minute cooldown
  • !daily - Get daily bonus rewards
  • !give @user amount - Transfer currency between users
  • !gamble amount - Gambling with risk/reward mechanics
  • !leaderboard - Display top users by balance

Work System

The work command features multiple scenarios with random rewards:

  • Delivered pizzas (50-150 coins)
  • Fixed computers (75-200 coins)
  • Streamed on Twitch (25-100 coins)
  • And many more randomized activities
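The scenario table above can be sketched as a list of (activity, min, max) tuples with a single roll function; the module's actual list lives in its action handlers, so treat this as an assumption about shape, not the real implementation:

```python
import random

# Sketch of the work-scenario table using the reward ranges listed above.
WORK_SCENARIOS = [
    ("delivered pizzas", 50, 150),
    ("fixed computers", 75, 200),
    ("streamed on Twitch", 25, 100),
]

def roll_work_reward(rng=None):
    """Pick a random scenario and roll a reward inside its range."""
    rng = rng or random.Random()
    activity, low, high = rng.choice(WORK_SCENARIOS)
    return activity, rng.randint(low, high)
```

Adding a new scenario is then just appending another tuple to the list.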

Social Features

  • User-to-user currency transfers
  • Community leaderboards
  • Social gambling mechanics

Implementation Highlights

The RPG Module demonstrates advanced dependency injection with multiple services:

@on_event("twitch.chat.message")
@trigger(CommandTrigger("!work"))
@trigger(CooldownTrigger(300, per_user=True))  # 5 minute cooldown
async def work_command(
    event: Event,
    twitch: TwitchClient,
    economy: EconomyService,
    users: UserService,
    logger: logging.Logger,
) -> None:
    """Work command - earn money with a cooldown"""
    # Work logic with random scenarios and rewards

Database Integration

The RPG Module uses SQLite for data persistence:

  • User profiles with balances and statistics
  • Transaction history for auditing
  • Cooldown tracking for commands
  • Leaderboard data
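A minimal sketch of the balances portion of that persistence layer, assuming a simple single-table schema (the module's real schema and queries may differ):

```python
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    """Create a minimal balances table if it does not exist."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS balances ("
        "username TEXT PRIMARY KEY, balance INTEGER NOT NULL DEFAULT 0)"
    )

def add_coins(conn: sqlite3.Connection, username: str, amount: int) -> int:
    """Credit a user via upsert and return the new balance."""
    conn.execute(
        "INSERT INTO balances (username, balance) VALUES (?, ?) "
        "ON CONFLICT(username) DO UPDATE SET balance = balance + excluded.balance",
        (username, amount),
    )
    row = conn.execute(
        "SELECT balance FROM balances WHERE username = ?", (username,)
    ).fetchone()
    return row[0]
```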

Extensibility

The RPG Module is designed for easy extension:

  • Add new work scenarios by editing the scenarios list
  • Create custom gambling games
  • Implement achievement systems
  • Add item inventories and shops


ElevenLabs Module

The ElevenLabs Module provides high-quality text-to-speech (TTS) functionality using the official ElevenLabs API integration.

Features

Text-to-Speech Commands

  • !tts <text> - Convert text to speech with automatic voice selection
  • !voices - List all available voices from your ElevenLabs account
  • !ttshelp - Display TTS command help and usage information

Voice Management

  • Automatic Voice Selection - Uses the first available voice from your account
  • Multiple Voice Support - Access to all voices in your ElevenLabs subscription
  • Real-time Voice Fetching - Dynamically retrieves available voices from the API

API Integration

  • Official SDK - Built on the official ElevenLabs Python SDK (AsyncElevenLabs)
  • Streaming Support - Real-time audio streaming for longer texts
  • Error Handling - Comprehensive error handling with user-friendly messages
  • Rate Limiting - Respects ElevenLabs API rate limits

Implementation Details

The ElevenLabs Module demonstrates advanced SDK integration with dependency injection:

@on_event("twitch.chat.message")
@trigger(CommandTrigger("!tts"))
async def tts_command(event: Event, elevenlabs: ElevenLabsClient, twitch: TwitchClient) -> None:
    """Convert text to speech using ElevenLabs"""
    message = event.data.get("message", "")
    parts = message.split(maxsplit=1)

    user = event.data.get("user", {})
    username = user.get("display_name", user.get("username", "Someone"))

    if len(parts) < 2 or not parts[1].strip():
        await twitch.send_message(f"@{username} Usage: !tts <text to speak>")
        return

    text_to_speak = parts[1].strip()

    # Enforce the documented 500-character limit before calling the API
    if len(text_to_speak) > 500:
        await twitch.send_message(f"@{username} TTS text must be 500 characters or fewer.")
        return

    # Get available voices and use the first one
    voices = await elevenlabs.get_voices_as_objects()
    if not voices:
        await twitch.send_message(f"@{username} Sorry, no voices are available for TTS.")
        return

    voice = voices[0]
    audio_bytes = await elevenlabs.text_to_speech(text_to_speak, voice=voice)
    # audio_bytes can then be routed to the stream's audio output

Configuration

The ElevenLabs Module requires API key configuration:

elevenlabs:
  enabled: true
  api_key: "your_elevenlabs_api_key"

Safety Features

  • Text Length Limits - Maximum 500 characters per TTS request
  • Input Validation - Sanitizes user input before processing
  • Error Recovery - Graceful handling of API failures and rate limits
  • User Feedback - Clear error messages and success confirmations
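The length limit and input validation rules above can be captured in one helper; this is a sketch of those rules (the helper name and whitespace handling are assumptions), with the 500-character limit taken from the documented behavior:

```python
MAX_TTS_LENGTH = 500  # documented per-request limit

def validate_tts_text(text: str) -> tuple[bool, str]:
    """Return (ok, cleaned_text_or_error) for a TTS request."""
    cleaned = " ".join(text.split())  # collapse whitespace and control chars
    if not cleaned:
        return False, "Usage: !tts <text to speak>"
    if len(cleaned) > MAX_TTS_LENGTH:
        return False, f"TTS text must be {MAX_TTS_LENGTH} characters or fewer."
    return True, cleaned
```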

Module-Level Voice Usage

The module supports custom voice configurations for specialized use cases:

from starstreamer.plugins.elevenlabs import Voice, ElevenLabsClient

class CustomModule(BaseModule):
    def __init__(self):
        # Define module-specific voices
        self.narrator_voice = Voice(
            voice_id="21m00Tcm4TlvDq8ikWAM",
            model_id="eleven_multilingual_v2", 
            name="Narrator"
        )

    async def custom_tts(self, text: str, elevenlabs: ElevenLabsClient):
        """Use module-specific voice for TTS"""
        return await elevenlabs.text_to_speech(text, voice=self.narrator_voice)

Extensibility

The ElevenLabs Module is designed for easy extension:

  • Custom Voice Selection - Implement voice selection by name or characteristics
  • Specialized TTS Commands - Create module-specific TTS functionality
  • Voice Caching - Cache frequently used voices for performance
  • Content Filtering - Add custom content filters before TTS generation


AI Module

The AI Module provides intelligent conversation capabilities powered by LiteLLM and OpenRouter, enabling streamers to add AI-powered interactions to their streams.

Features

AI Chat Commands

  • !ask <question> - Ask the AI any question and get conversational responses
  • !explain <topic> - Get educational explanations of topics or concepts
  • !tldr <text> - Generate concise summaries of provided text

Multi-Model Support

  • OpenRouter Integration - Access to various AI models (Claude, GPT-4, Llama, etc.)
  • Model Flexibility - Easy switching between models for different use cases
  • Cost Optimization - Choice of free tier and premium models

Configuration Options

  • System Prompts - Customize AI personality and behavior for your stream
  • Response Control - Configure temperature and token limits
  • Error Handling - Graceful handling of API failures and rate limits

Implementation Details

The AI Module demonstrates advanced API integration with dependency injection:

@on_event("twitch.chat.message")
@trigger(CommandTrigger("!ask"))
async def ask_command(event: Event, twitch: TwitchClient, ai: AIClient) -> None:
    """AI-powered question answering command"""
    message = event.data.get("message", "")
    parts = message.split(maxsplit=1)

    user = event.data.get("user", {})
    username = user.get("display_name", user.get("username", "friend"))

    if len(parts) < 2 or not parts[1].strip():
        await twitch.send_message(f"@{username} Usage: !ask <your question>")
        return

    question = parts[1].strip()

    # Create a prompt for the AI
    prompt = f"Please answer this question concisely for a Twitch stream audience: {question}"

    # Get AI response
    response = await ai.complete(prompt)

    # Send response to chat
    await twitch.send_message(f"@{username} {response.content}")

Configuration

The AI Module requires OpenRouter API key configuration:

# AI Integration using LiteLLM with OpenRouter
# Get your API key from https://openrouter.ai/keys
# Models are specified at the module level, not in configuration
ai:
  openrouter:
    enabled: true
    api_key: "${OPENROUTER_API_KEY}"

Model Selection

Models are specified at the module level within the AI commands, not in configuration files. The current implementations use:

AI Commands:

  • !ask and !explain use openrouter/anthropic/claude-3.5-sonnet for comprehensive responses
  • !tldr uses openrouter/anthropic/claude-3.5-haiku for faster summarization

Available OpenRouter Models:

  • openrouter/anthropic/claude-3.5-sonnet - Best balance of quality and speed
  • openrouter/anthropic/claude-3.5-haiku - Faster responses for simple tasks
  • openrouter/openai/gpt-4o - Latest OpenAI model for high-quality responses
  • openrouter/meta-llama/llama-3.1-8b-instruct:free - Free tier option for budget-conscious streamers

Advanced Usage

Custom modules can integrate AI functionality for specialized use cases:

from starstreamer.plugins.litellm import AIClient

class RPGModule(BaseModule):
    async def generate_quest(self, ai: AIClient):
        """Generate RPG quest using AI"""
        prompt = "Generate a short fantasy quest for a Twitch stream RPG game:"
        # Use Claude 3.5 Sonnet for creative quest generation
        response = await ai.complete(prompt, model="openrouter/anthropic/claude-3.5-sonnet")
        return response.content

    async def narrate_action(self, action: str, ai: AIClient):
        """Narrate player actions in RPG style"""
        prompt = f"Narrate this RPG action dramatically: {action}"
        # Use Claude 3.5 Haiku for fast action narration
        response = await ai.complete(prompt, model="openrouter/anthropic/claude-3.5-haiku", temperature=0.9)
        return response.content

Safety Features

  • Input Validation - Sanitizes user input before processing
  • Response Limits - Configurable maximum token limits
  • Error Recovery - Graceful handling of API failures and rate limits
  • User Feedback - Clear error messages and success confirmations

Performance Considerations

  • Singleton Client - Efficient resource management and connection reuse
  • Async Operations - Non-blocking AI API calls
  • Response Caching - Module-level caching for frequently requested content
  • Token Management - Monitoring and optimization of API usage

Module Interaction

The core modules are designed to work together seamlessly:

Shared Services

  • UserService - Unified user management across modules
  • EconomyService - Currency system available to all modules
  • Logger - Centralized logging with module identification

Event System Integration

All modules use the same event bus and trigger system, allowing for:

  • Cross-module event handling
  • Consistent cooldown management
  • Unified user experience

Configuration Consistency

All modules follow the same patterns:

  • Explicit dependency injection
  • Standardized error handling
  • Consistent logging practices
  • Type-safe implementations


Getting Started with Core Modules

Enable All Core Modules

Core modules are automatically loaded but can be manually managed:

from modules.registry import ModuleRegistry
from modules.chat import ChatModule
from modules.alerts import AlertsModule
from modules.rpg import RPGModule
from modules.elevenlabs import ElevenLabsModule
from modules.ai import AiModule

registry = ModuleRegistry()
await registry.register_module(ChatModule())
await registry.register_module(AlertsModule())
await registry.register_module(RPGModule())
await registry.register_module(ElevenLabsModule())
await registry.register_module(AiModule())

Customizing Module Behavior

Each module can be customized by:

  1. Editing Handler Files - Modify command responses and logic
  2. Adding New Commands - Extend existing action files
  3. Configuration Changes - Update environment variables
  4. Service Integration - Add custom services via DI

Module Dependencies

The core modules have these dependencies:

graph TD
    A[Chat Module] --> E[UserService]
    B[Alerts Module] --> E[UserService]
    B --> F[TwitchClient]
    C[RPG Module] --> E[UserService]
    C --> G[EconomyService]
    C --> H[Database]
    D[ElevenLabs Module] --> I[ElevenLabsClient]
    D --> F[TwitchClient]
    I --> J[ElevenLabs API]
    K[AI Module] --> L[AIClient]
    K --> F[TwitchClient]
    L --> M[OpenRouter API]
    L --> N[LiteLLM Library]

Next Steps