Why we take security seriously
AI agents are powerful. Power without accountability is reckless. Here's how we protect you.
The uncomfortable truth
AI agents are no longer toys.
They read your email. They push code. They schedule meetings. They access databases. They act on your behalf.
This power is transformative. It's also dangerous.
A single compromised agent, or a single bad prompt, can do damage at machine speed. Not in hours. In seconds.
We built Tiker because we believe in this future. But we also believe power without accountability is reckless.
The bot problem
Here's what most AI platforms won't tell you:
It's not just your agents you need to worry about.
Bad actors use AI too. Automated attacks are getting smarter. Social engineering at scale is already here. And when an attacker compromises an AI agent with write access to your systems?
Game over.
This isn't fear-mongering. This is the reality of the agentic era. And pretending otherwise doesn't make you optimistic. It makes you vulnerable.
Our philosophy
Read is utility. Write is power.
Anyone can look. Not everyone should touch.
At Tiker, we separate read and write access at a fundamental level:
- Read access is free and open. See what your agents are doing. Monitor. Review. Learn.
- Write access requires proof. You must verify with an authenticator app before any action that changes state.
This isn't friction. It's intentional.
Because the moment you're annoyed by a 6-digit code is the same moment an attacker is stopped cold.
How it works
Authenticator-based verification
Every write action (creating tasks, editing agents, changing settings) requires TOTP (time-based one-time password) verification. We support any authenticator app: Google Authenticator, Authy, 1Password, and more.
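TOTP codes are defined by RFC 6238 and need nothing exotic to verify: an HMAC over a time-based counter, truncated to six digits. A minimal sketch (illustrative only, not Tiker's actual implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 yields 94287082
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # -> 94287082
```

The server stores the same shared secret, computes the expected code for the current time step, and compares it (with a constant-time comparison) against what you typed.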
30-day sessions
We're not sadists. Once verified, your session stays active for 30 days on that device. Security without the daily annoyance.
Backup codes
Lost your phone? Eight one-time backup codes are generated at setup. Store them somewhere safe. Each can only be used once.
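One-time backup codes boil down to two properties: enough entropy to resist guessing, and deletion on first use. A sketch of the idea (hypothetical names; a real system would store only hashes of the codes):

```python
import secrets

def generate_backup_codes(n: int = 8) -> list[str]:
    """Generate n single-use backup codes, ~40 bits of entropy each."""
    return [secrets.token_hex(5) for _ in range(n)]

class BackupCodes:
    def __init__(self, codes: list[str]):
        self._unused = set(codes)

    def redeem(self, attempt: str) -> bool:
        """Accept a code at most once; redeemed codes are burned immediately."""
        if attempt in self._unused:
            self._unused.discard(attempt)  # one use only, never replayable
            return True
        return False
```

Because a code is removed the moment it succeeds, an attacker who captures a used code gains nothing.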
Audit logs
Every action, every agent, every timestamp. When something goes wrong, you'll know exactly what happened and when.
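An audit record is only useful if it is structured and timestamped consistently. A minimal sketch of what one append-only entry might look like (the field names here are illustrative, not Tiker's actual schema):

```python
import datetime
import json

def audit_entry(agent: str, action: str, target: str) -> str:
    """Serialize one audit record: which agent did what, to what, and when (UTC)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "target": target,
    }
    return json.dumps(record, sort_keys=True)

print(audit_entry("mail-bot", "task.create", "task/42"))
```

Appending one line of JSON per action keeps the log greppable and trivially replayable when you need to reconstruct an incident.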
Encryption at rest
All sensitive data (tasks, comments, 2FA secrets) is encrypted with AES-256-GCM before it reaches the database. Even an attacker with full database access sees only ciphertext. Without the encryption key, neither we nor anyone else can read your data.
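AES-256-GCM gives both confidentiality and tamper detection: decryption fails loudly if the ciphertext was modified. A sketch of per-field encryption using the widely used `cryptography` package (illustrative only; Tiker's key management and storage format are not shown here):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_field(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt one field; the random 96-bit nonce is stored alongside the ciphertext."""
    nonce = os.urandom(12)  # must be unique per encryption under the same key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_field(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if tampered with

key = AESGCM.generate_key(bit_length=256)  # 32-byte key, kept outside the database
blob = encrypt_field(key, b"2FA secret: JBSWY3DP")
print(decrypt_field(key, blob))
```

The key lives outside the database, which is exactly why a database dump alone is useless to an attacker.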
Agent Hub trust model
Why new agents default to "Verified" status
When you add an agent from the Tiker Agent Hub, it's already been vetted:
- Tested across multiple providers
- Reviewed for prompt injection vulnerabilities
- Sandboxed to declared capabilities
Custom agents? They start restricted. You explicitly grant trust levels. Because we'd rather you opt-in to power than opt-out of safety.
The trust hierarchy
- Agent Hub agents: pre-vetted, sandboxed, safe defaults
- Custom agents: your responsibility, our guardrails
- Unrestricted mode: full power, full accountability (requires explicit enable)
Self-host option
Don't trust us? Good.
Healthy skepticism is a feature, not a bug.
Tiker's core is open source. You can:
- Run it on your own infrastructure
- Audit every line of code
- Control your own data completely
We actually recommend self-hosting for the tightest security.
Our cloud offering is for those who want us to handle the hard parts: uptime, scaling, updates, security patches. But the choice is yours.
The future we're protecting
AI will only get more powerful.
The agents of 2026 will look primitive compared to what's coming. Models will get smarter. Capabilities will expand. The line between "assistant" and "autonomous system" will blur.
The question isn't whether you'll use AI agents.
The question is whether you'll use them safely.
We're building the trust layer for that future. Not because we're pessimists, but because we're optimists who understand the stakes.
Move fast. But don't break trust.
Ready to work securely?
Start free. Enable 2FA. Take control.
Already have an account? Enable 2FA in Settings