Any · $5–$25/hour · 30 · Feb 25, 2026
OpenClaw Systems Engineer
AI-Native · Security-First · Agent Architecture · Automation-Driven
I’m looking to replace myself with someone exceptional.
This role is for a systems-level thinker who fully embraces AI — not as a tool, but as an operating layer.
You will help evolve an OpenClaw-based agent ecosystem spanning architecture, dashboards, automation, and secure infrastructure.
This is high autonomy. High trust. High standards.
Core Expectation: You must be deeply AI-native. Not curious. Not experimenting. Fluent.
You operate comfortably in:
Claude
Cursor
Lovable
Voice-to-code workflows
Prompt chaining
Tool orchestration
Autonomous agent loops
You understand how to:
Maintain structured context
Prevent prompt drift
Debug AI-generated systems
Design reliable agent interactions
Control cost, latency, and complexity
You don’t fight AI. You architect with it.
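To calibrate the bar: a candidate should be able to sketch a prompt chain that threads structured context explicitly between steps, so nothing drifts. A minimal illustration only — `fake_llm` and the step names are hypothetical placeholders, not part of the actual stack:

```python
# Minimal prompt-chain sketch: each step receives a structured context
# dict and returns an updated copy, so state is explicit at every hop.
# `fake_llm` stands in for a real model call (hypothetical).
def fake_llm(prompt: str) -> str:
    return prompt.upper()  # placeholder for a model response

def summarize(ctx: dict) -> dict:
    return {**ctx, "summary": fake_llm(f"summarize: {ctx['doc']}")}

def extract_actions(ctx: dict) -> dict:
    return {**ctx, "actions": fake_llm(f"actions from: {ctx['summary']}")}

def run_chain(ctx: dict, steps) -> dict:
    for step in steps:
        ctx = step(ctx)  # each step sees the full, explicit context
    return ctx

result = run_chain({"doc": "ship v2"}, [summarize, extract_actions])
```

Threading the whole context forward (rather than passing loose strings) is what makes drift debuggable: any step's input can be inspected after the fact.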
Advanced Workflow & Automation Intelligence
You have demonstrated advanced knowledge of:
Workflow design patterns
Event-driven automation
Agent-triggered execution chains
Tool integrations
Modular orchestration
State persistence strategies
Context routing
You think in flows, not scripts.
You understand how to:
Reduce human bottlenecks
Build compounding automation
Design systems that improve with iteration
Keep complexity contained
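"Flows, not scripts" in practice means event-driven dispatch: emitters don't know their consumers, so automation compounds without hard wiring. A hedged sketch under assumed names (`EventBus`, the `file.ingested` event type — all hypothetical, not the real system):

```python
from collections import defaultdict
from typing import Callable

# Minimal event bus: handlers subscribe to an event type, and emitting
# that event runs every subscribed handler in order. Illustrative only.
class EventBus:
    def __init__(self):
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def emit(self, event_type: str, payload: dict) -> list:
        # Return each handler's result so chains can be inspected/tested.
        return [h(payload) for h in self._handlers[event_type]]

bus = EventBus()
log = []
bus.subscribe("file.ingested", lambda p: log.append(f"indexed {p['name']}"))
bus.subscribe("file.ingested", lambda p: log.append(f"summarized {p['name']}"))
bus.emit("file.ingested", {"name": "spec.md"})
```

Adding a new automation is one `subscribe` call; nothing upstream changes. That is the compounding property this role demands.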
Security & Authorization Depth (Non-Negotiable)
You understand:
OAuth and token lifecycles
JWT design
Role-based and attribute-based access control
Secret management
Multi-tenant isolation
AI system attack surfaces
Prompt injection vectors
Rate limiting & abuse mitigation
API boundary hardening
You design assuming adversarial input.
Security is not a feature.
It is a foundation.
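As a concrete instance of one item above, rate limiting and abuse mitigation: a candidate should be able to write a token-bucket limiter from memory. A minimal sketch (class and parameter names are illustrative, not the production design):

```python
import time

# Token-bucket rate limiter: a client holds up to `capacity` tokens,
# refilled at `rate` tokens per second; each request spends one token
# and is rejected when the bucket is empty. Sketch only.
class TokenBucket:
    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]  # burst of 5 rapid requests
```

In a multi-tenant agent platform this sits per tenant (and per tool) at the API boundary, which is exactly the adversarial-input assumption stated above.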
Token Economics & System Scaling
You understand:
LLM pricing models
Context window constraints
Retrieval vs memory tradeoffs
Cost-per-agent-action modeling
Loop containment
Latency vs cost optimization
Failure handling in agent chains
You design systems that scale intelligently.
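Cost-per-agent-action modeling can be this simple to state, and a candidate should reach for it reflexively. The per-token prices below are placeholders, not any vendor's real pricing:

```python
# Back-of-envelope cost model for one agent action (one LLM call).
# Placeholder per-million-token rates — NOT real vendor pricing.
PRICE_PER_M_INPUT = 3.00    # USD per 1M input tokens (assumed)
PRICE_PER_M_OUTPUT = 15.00  # USD per 1M output tokens (assumed)

def action_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single agent action."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

def loop_cost(turns: int, base_context: int,
              growth_per_turn: int, output_per_turn: int) -> float:
    # An agent loop that re-sends a growing context each turn compounds
    # input cost — the quadratic trap behind "loop containment."
    return sum(
        action_cost(base_context + t * growth_per_turn, output_per_turn)
        for t in range(turns)
    )

single = action_cost(2_000, 500)
ten_turns = loop_cost(10, 2_000, 1_000, 500)
```

One call costs about a cent here, but ten uncontained turns cost 20× that — which is why retrieval vs. memory tradeoffs and loop containment appear on the list above.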
AGI Awareness & Critical Thinking
You must demonstrate:
Clear thinking about the trajectory toward AGI
Ability to reason about autonomous systems risk
Awareness of alignment challenges
Understanding of compounding automation
Strategic thinking about human-in-the-loop vs. autonomy
Not sci-fi enthusiasm. Grounded analysis.
You should be able to articulate:
Where AI agents break
Where they scale
Where they become dangerous
Where oversight must exist
GitHub & Remote Infrastructure
Everything is versioned. Everything is clean.
You are comfortable with:
Advanced Git workflows
Branch strategies
CI/CD fundamentals
Secure remote setup
SSH and key management
Environment separation
Documentation structure
You can onboard another engineer without chaos.
Documentation Velocity
You can:
Absorb technical documentation quickly
Extract signal from dense specs
Implement precisely from written architecture
Ask sharp, minimal clarification questions
Move without being micromanaged
Speed with accuracy.
This Is Not For You If:
You use AI casually but don’t architect around it
You don’t understand token economics
You avoid security complexity
You build clever but fragile systems
You romanticize AGI without critical analysis
You need step-by-step task lists
What Success Looks Like
Agent architecture becomes modular and extensible
Automation compounds rather than fragments
Token costs are predictable and optimized
Security is structured and resilient
Dashboard systems scale cleanly
I can step back and trust the system
To Apply, Send:
GitHub profile
A system demonstrating strong automation architecture
A brief explanation of how you would secure a multi-user AI agent platform
A short note on how you manage LLM cost and context scaling
Your perspective (concise) on AGI trajectory and system risk
(Preferred) A short Loom walking through your thinking process
If you are exceptional, it will be obvious.