The AI Platform That Remembers, Collaborates, Protects
JEBAT combines eternal memory, multi-agent orchestration across 8 local LLMs, and enterprise security into one self-hosted platform.
Download, install, and own your AI — no cloud dependency.
Two Pillars. One Platform.
JEBAT is built on two powerful components that work together seamlessly.
Jebat Agent
The unified AI agent combining the OpenClaw control plane and the Hermes capture-first methodology. Set up your entire workspace in 30 seconds with one command.
Jebat Core
The platform backbone — IDE context injection, MCP server, memory system, skill registry, and the gateway that routes everything.
10 Core Agents. One Platform.
Each agent has a specialized role, provider, and model — orchestrated by Panglima for maximum effectiveness.
Panglima
claude-sonnet-4
The commander — routes tasks, manages specialists, orchestrates workflows
Tukang
qwen2.5-coder
The builder — writes code, implements features, fixes bugs
Hulubalang
hermes-sec-v2
The guardian — security reviews, penetration testing, vulnerability analysis
Pengawal
hermes-sec-v2
The defender — real-time threat detection, security monitoring, incident response
Pawang
claude-sonnet-4
The researcher — deep investigation, documentation, comparative analysis
Syahbandar
qwen2.5-coder
The operator — CI/CD, automation, deployments, system administration
Bendahara
qwen2.5-coder
The treasurer — database design, SQL migrations, query optimization
Hikmat
claude-sonnet-4
The wise — 5-layer eternal memory (M0-M4), cross-session continuity
Penganalisis
claude-sonnet-4
The analyst — KPI tracking, funnel analysis, experiments, reporting
Penyemak
claude-sonnet-4
The inspector — testing, verification, regression review, release confidence
24 Specialist Agents
Domain experts spawned on-demand for specialized tasks across delivery, security, growth, strategy, and quality.
Tukang Web — Browser UI & Frontend (Delivery)
Pembina Aplikasi — Cross-Layer App Delivery (Delivery)
Khidmat Pelanggan — Onboarding & Support (Growth)
Senibina Antara Muka — UI/UX & Responsive Design (Design)
Penyebar Reka Bentuk — Design Systems & Tokens (Design)
Pengkarya Kandungan — Content Systems & Scripts (Growth)
Jurutulis Jualan — Copywriting & CTA Framing (Growth)
Penjejak Carian — SEO & Metadata (Growth)
Penggerak Pasaran — Marketing & Campaigns (Growth)
Strategi Jenama — Positioning & Brand Voice (Strategy)
Strategi Produk — Feature Framing & Roadmap (Strategy)
Penulis Cadangan — Proposals & SOWs (Strategy)
Penggerak Jualan — Sales Collateral & One-Pagers (Strategy)
Perisai — Security Hardening (Security)
Serangan — Penetration Testing (Security)
Hulubalang — Security Audit & Pentest (Security)
Pengawal — CyberSec Defense (Security)
Tukang — Full-Stack Development (Delivery)
Bendahara — Database & Migrations (Delivery)
Syahbandar — Ops, CI/CD & Deploy (Delivery)
Penyemak — QA & Validation (Quality)
Pawang — Research & Docs (Knowledge)
Penganalisis — KPI & Funnel Analysis (Quality)
Hikmat — Memory Consolidation (Knowledge)

Platform Features
Every component built for performance, security, and developer experience.
5 Orchestration Modes
Multi-Agent Debate
Multiple agents argue positions, LLM-as-Judge reaches consensus
Consensus Building
Agents converge on shared understanding through structured rounds
Sequential Chain
Step-by-step pipeline: research → implement → verify → deploy
Parallel Analysis
Independent workers investigate simultaneously for speed
Hierarchical Review
Multi-level review with escalating authority and scope
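As a hedged sketch of the Sequential Chain mode described above, each stage consumes the previous stage's output. The `Agent` type and the stand-in stage functions below are hypothetical, not the actual JEBAT API:

```typescript
// Hypothetical agent interface -- an illustrative sketch, not JEBAT's real API.
type Agent = (input: string) => Promise<string>;

// Each stage receives the previous stage's output, mirroring the
// research -> implement -> verify pipeline described above.
async function sequentialChain(stages: Agent[], task: string): Promise<string> {
  let result = task;
  for (const stage of stages) {
    result = await stage(result);
  }
  return result;
}

// Toy stand-ins for Pawang (research), Tukang (build), Penyemak (verify).
const research: Agent = async (t) => `${t} [researched]`;
const build: Agent = async (t) => `${t} [built]`;
const verify: Agent = async (t) => `${t} [verified]`;

const demo = sequentialChain([research, build, verify], "add login");
```

The same shape generalizes to the other modes: Parallel runs the stages with `Promise.all`, while Hierarchical nests chains inside a reviewing chain.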
8 Local LLM Models
5 LLM Providers
Markdown Rendering
Full tables, code blocks with syntax highlighting, headers, lists, and inline formatting in all AI responses.
Shimmer Animations
Smooth shimmer loading states while AI thinks — beautiful, not frustrating. Know when work is happening.
Confidence Scoring (ConfMAD)
Every response includes confidence metrics using ConfMAD paradigm. Know when the AI is certain or uncertain.
LLM-as-Judge Consensus
A designated judge model evaluates competing agent outputs and reaches consensus for higher accuracy.
Retry Logic
Automatic retry with exponential backoff for model loading failures. Resilient by default.
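The pattern behind this card can be sketched in a few lines. This is an illustrative TypeScript helper, not JEBAT's internal implementation; the function name and default values are assumptions:

```typescript
// Generic retry with exponential backoff -- a sketch of the pattern,
// not JEBAT's actual code. Delays double on each failed attempt.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
      const delay = baseDelayMs * 2 ** i;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Wrapping a model-load call in a helper like this is what makes transient "model still loading" failures invisible to the caller.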
LRUCache Optimization
Intelligent caching reduces response latency by 40-60%. Frequently accessed results served instantly.
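A minimal version of this caching strategy, assuming a standard least-recently-used eviction policy (the class below is illustrative, not the platform's internal cache):

```typescript
// Minimal LRU cache built on Map's insertion-order iteration -- illustrative only.
class LRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Re-insert to mark this entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (the Map's first key).
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}
```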
Customer Portal
Self-service portal for customers with usage analytics, billing, and support — all self-hosted.
Open Customer Portal →

One Command. Full Setup.
Configure your IDE, connect channels, deploy local models — all in 30 seconds.
Quick Setup
Interactive wizard or one-liner. Full workspace with skills and config in seconds.
IDE Integration
Inject JEBAT context into VS Code, Zed, Cursor, Claude Desktop, or Gemini CLI.
Channel Setup
Connect Telegram, Discord, WhatsApp, or Slack with guided configuration.
Local Models
Deploy Qwen2.5, Gemma 4, Phi-3, Hermes3 via Ollama or AirLLM on your VPS.
Migration
Migrate from OpenClaw/Hermes automatically. Configs, skills, workspace — all converted.
Management
Gateway control, agent health checks, skill listing, and deployment helpers.
The Platform Backbone
Memory, skills, agents, security — everything that makes JEBAT intelligent.
Eternal Memory
5-layer cognitive stack (M0-M4) with heat-based retention. Cross-session continuity.
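One plausible reading of heat-based retention, sketched under stated assumptions: the half-life decay model, the access bump, and the demotion threshold below are illustrative guesses, not JEBAT's actual tuning.

```typescript
// Hedged sketch: heat rises on access and decays over time; entries whose
// heat falls below a threshold become candidates for demotion to a colder
// memory layer. All constants here are assumptions for illustration.
interface MemoryEntry { key: string; heat: number; lastAccess: number }

const HALF_LIFE_MS = 24 * 60 * 60 * 1000; // assume heat halves once per day

function currentHeat(entry: MemoryEntry, now: number): number {
  const elapsed = now - entry.lastAccess;
  return entry.heat * Math.pow(0.5, elapsed / HALF_LIFE_MS);
}

function touch(entry: MemoryEntry, now: number): void {
  entry.heat = currentHeat(entry, now) + 1; // each access adds one unit of heat
  entry.lastAccess = now;
}

function shouldDemote(entry: MemoryEntry, now: number, threshold = 0.1): boolean {
  return currentHeat(entry, now) < threshold;
}
```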
Skill Registry
40+ specialized skills optimized for token efficiency and real-world patterns.
Agent Orchestration
Multi-agent routing with 5 modes: Debate, Consensus, Sequential, Parallel, Hierarchical.
CyberSec Suite
Hulubalang (audit), Pengawal (defense), Perisai (hardening), Serangan (pentest).
Gateway Router
Provider routing across 5 LLM backends with fallback chains and load balancing.
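The fallback-chain idea can be sketched as follows. The `Provider` type and function names are hypothetical stand-ins, not the real gateway API:

```typescript
// Hypothetical provider shape -- illustrative, not the actual gateway interface.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

// Try each provider in the fallback chain until one succeeds; surface the
// accumulated errors only if every backend fails.
async function routeWithFallback(chain: Provider[], prompt: string): Promise<string> {
  const errors: string[] = [];
  for (const provider of chain) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      errors.push(`${provider.name}: ${String(err)}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```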
IDE Context
Inject JEBAT into any editor. VS Code, Cursor, Zed, JetBrains, Neovim, and more.
LRUCache
Intelligent caching reduces response latency by 40-60% for frequently accessed results.
Confidence Scoring
ConfMAD paradigm provides confidence metrics on every AI response.
LLM-as-Judge
Designated judge model evaluates competing outputs for consensus-driven accuracy.
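A minimal sketch of the judge pattern, assuming the judge model is exposed as a scoring function over candidate answers (this interface is hypothetical, not JEBAT's):

```typescript
// Sketch: a judge scores competing agent outputs and the highest-scoring
// answer wins. The scoring callback stands in for a real judge-model call.
async function judgeConsensus(
  answers: string[],
  score: (answer: string) => Promise<number>, // e.g. a call to the judge model
): Promise<string> {
  const scored = await Promise.all(
    answers.map(async (answer) => ({ answer, score: await score(answer) })),
  );
  scored.sort((a, b) => b.score - a.score);
  return scored[0].answer;
}
```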
Chat with Your Agents
Manus/Kimi-style interface with LLM-to-LLM debates, BYOK support, and local models.
JEBAT's Gelanggang system connects multiple agents across different LLM providers:
- Panglima — Orchestrator (routes tasks)
- Tukang — Builder (implements solutions)
- Hulubalang — Guardian (security review)
- Penyemak — QA (validates output)
Qwen2.5 14B: The key advantage is cross-provider diversity. Each agent can use a different model, providing diverse perspectives.

Hermes3 8B: Agreed. Plus the memory layer (M0-M4) ensures context persists across the entire conversation chain.

LLM-to-LLM Chat
Watch two AI models debate any topic. Great for idea exploration and getting multiple perspectives automatically.
BYOK Support
Bring Your Own Key — use GPT-4o, Claude, or any OpenAI-compatible model alongside local models.
8 Local Models
Gemma 4, Qwen2.5 14B, Hermes3, Phi-3, Llama 3.1, Mistral, CodeLlama, and TinyLlama — all running locally.
Memory Context
Chat with full conversation history. JEBAT remembers preferences, decisions, and context across sessions.
Enterprise-Grade Security
Zero-trust architecture with prompt injection defense, command sanitization, and complete audit trails. 100% self-hosted — no cloud dependency.
Prompt Injection Defense
Input sanitization, context isolation, pattern detection, adversarial testing
Command Sanitization
Whitelist-only execution, argument escaping, timeout enforcement, shell safety
Complete Audit Trails
Tamper-evident operation logs, JSON format, immutable history, full traceability
Secrets Management
Secure token handling, credential masking, auto-rotation, vault integration
CyberSec Suite — 4 Security Agents
Hulubalang
Security Audit & Pentest
Vulnerability scanning, pentesting, security review with structured findings
Pengawal
CyberSec Defense
Real-time threat detection, security monitoring, incident response protocols
Perisai
Security Hardening
System hardening, configuration audit, attack surface reduction
Serangan
Penetration Testing
Offensive security, red-team simulations, exploit analysis
100% Self-Hosted. No Cloud Dependency.
Every model runs locally. Every byte of data stays on your infrastructure. No telemetry, no data exfiltration, no vendor lock-in. Complete sovereignty over your AI operations.
Try from Your Terminal
Two packages. Zero installation. Start in seconds.
$ npx jebat-agent --help
jebat-agent v1.0.3

USAGE:
  npx jebat-agent [options]

OPTIONS:
  --quick            Quick setup (gateway only)
  --full             Full setup (workspace + skills)
  --ide <ide>        IDE integration
  --channel <ch>     Channel setup
  --local-model <m>  Setup local model
  --migrate          Migrate from OpenClaw/Hermes
$ npx jebat-agent --full
🚀 Setting up Jebat...
1/5 Creating directory structure... ✓ Created ~/.jebat
2/5 Generating gateway config... ✓ Gateway config generated
3/5 Setting up workspace and skills... ✓ Workspace and skills configured
4/5 Creating environment file... ✓ Environment file created
5/5 Validating setup... ✓ Validation passed
✅ Jebat setup complete!
$ npx jebat-core doctor
🩺 JEBAT Doctor — Workspace Health Check
✅ Core files: 5/5
✅ Gateway: http://localhost:18789
✅ Skills directory found (40+ skills)
✅ Memory: 4 daily files
✅ JEBAT home: ~/.jebat
✅ All checks passed. JEBAT is healthy.
$ npx jebat-core status
📡 JEBAT System Status
✅ Online Gateway (http://localhost:18789)
✅ Healthy VPS (jebat.online)
✅ Healthy WebUI (/webui/)
✅ Ollama Server (8 models)
✅ npm: jebat-core@3.0.0