
Dor Luzgarten

Mobile Developer 🔄 AI Engineer

Israel

I've been building mobile apps for a while - iOS, Android, Flutter. Along the way I also built the backends behind them - APIs, databases, distributed systems, the parts that have to hold up under load. At some point I got more interested in AI, and now most of my work is around agents and automations. I like building things that do real work on their own.

iOS · Android · Flutter · AI Agents · Python · Node.js

Experience

Mobile Development

2018 – 2026

Built mobile apps on iOS, Android, and Flutter - both native and cross-platform. Worked on consumer apps with real scale, handled architecture decisions, and dealt with App Store and Play Store pipelines.

  • Native iOS with Swift & UIKit / SwiftUI
  • Native Android with Kotlin & Jetpack
  • Cross-platform with Flutter & Dart
  • CI/CD with Fastlane, GitHub Actions, Bitrise
iOS (Swift) · Android (Kotlin) · Flutter · Dart

Backend Development

2020 – 2026

Built and maintained backend services for mobile apps. REST and GraphQL APIs, databases, third-party integrations. Worked on everything from simple serverless functions to containerized services.

  • RESTful & GraphQL API design
  • PostgreSQL, Firebase, Supabase
  • Node.js / NestJS, Python / FastAPI, Go / Gin
  • Docker, AWS Lambda, GCP Cloud Run
Node.js · Python · Go · REST APIs · PostgreSQL · Firebase

Current Focus

AI Agents

2024 – Present

Building regular apps started feeling too predictable. Agents actually do things - reason, use tools, make decisions. Coming from mobile means I can ship agents that live inside the apps people already use, not just in a terminal.

  • Multi-agent orchestration with LangGraph
  • RAG pipelines and vector stores
  • Tool use and function calling with OpenAI & Anthropic APIs
  • Autonomous research, coding, and data extraction agents
  • GenUI in Flutter - UI that renders itself based on model output
  • Apple Foundation Models Framework - on-device LLMs, no server needed
LangChain · LangGraph · OpenAI · Anthropic · Python · Flutter · Foundation Models

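The tool-use bullet above is really a loop: the model requests a tool, the code runs it, and the result goes back until the model answers. A self-contained sketch with a stubbed model standing in for the OpenAI or Anthropic SDK - the tool and its output are made up for illustration:

```python
# Sketch of a function-calling loop. The "model" is a stub so this
# runs standalone; in real use, fake_model is replaced by a call to
# a provider SDK that returns either a tool request or a final answer.
import json

def get_weather(city: str) -> str:
    # Hypothetical tool -- a real agent would hit a weather API.
    return json.dumps({"city": city, "temp_c": 28})

TOOLS = {"get_weather": get_weather}

def fake_model(messages: list[dict]) -> dict:
    """Stub: first turn requests a tool, next turn uses its result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": {"city": "Tel Aviv"}}}
    result = json.loads(
        [m for m in messages if m["role"] == "tool"][-1]["content"])
    return {"content": f"It's {result['temp_c']}°C in {result['city']}."}

def run_agent(prompt: str) -> str:
    """Loop: call model, execute requested tools, stop on an answer."""
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]
        output = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": output})
```

Frameworks like LangGraph wrap this loop in a graph of nodes, but the control flow underneath is the same.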
Active Build

OpenClaw

2026

My own agent setup running on a private VPS. Everything end-to-end - I wanted full control over the infra, the data, and what the agents can touch. Locked it down with a VPN and a firewall, then connected it to the channels I actually use day to day.

  • Private VPS - I own the compute, nothing runs on someone else's cloud
  • VPN + firewall so it's not just sitting open on the internet
  • WhatsApp and Telegram - talk to agents like you'd message a person
  • Slack for automations and internal triggers
  • Gmail - agents that can read, draft and send mail
  • Whisper for voice input - speak instead of type
  • Web search and live browsing - agents that actually go and look things up
  • Scheduled agents - things that run on their own without me triggering them
  • Monitoring dashboard - see what's running, what failed, what was triggered
  • Multi-model - not locked to one provider, routes to OpenAI, Anthropic or local models depending on the task
VPS · VPN · Firewall · WhatsApp · Telegram · Slack · Gmail · Whisper · Web Browsing · Cron · Multi-model · Python
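The multi-model bullet comes down to a routing table: classify the task, pick a provider, fall back to a local model when needed. A tiny sketch of that idea - the task categories and provider names are illustrative assumptions, not OpenClaw's actual routing:

```python
# Sketch of per-task model routing: pick a provider per task type
# instead of hard-coding one. Categories and defaults are made up
# to show the shape, not copied from a real config.
ROUTING = {
    "coding": "anthropic",
    "search_summary": "openai",
    "quick_local": "local",  # e.g. a self-hosted or on-device model
}

def pick_provider(task_type: str, offline: bool = False) -> str:
    """Route a task to a provider; fall back to local when offline."""
    if offline:
        return "local"
    return ROUTING.get(task_type, "openai")  # assumed default provider
```

Keeping the table in config rather than code means swapping providers is a one-line change, which is the point of not being locked to one vendor.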

Second Brain

2026

Everything I read, build, think about and work on ends up in one place. Obsidian is the vault, Claude Code is wired into it, and OpenClaw agents write to it automatically. It's not a notes app anymore - it's a system that captures stuff without me having to remember to.

  • OpenClaw agents write notes into Obsidian from chats, emails and browsing - capture happens on its own
  • Claude Code is hooked into the vault - it reads context from my notes when I'm coding
  • Search and retrieval across the whole knowledge base, not just keyword matching
  • Notes sync between OpenClaw and Obsidian so agents know what I know
  • Automated summaries and connections between notes surface things I'd otherwise forget
Obsidian · Claude Code · OpenClaw · RAG · Automation
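The automated capture above works because an Obsidian note is just a markdown file with frontmatter, so any agent can write one with the standard library. A sketch of that capture step - the vault layout, filenames, and frontmatter fields here are illustrative, not the actual setup:

```python
# Sketch of agent-driven capture into an Obsidian-style vault:
# write a markdown file with minimal frontmatter into an inbox
# folder. Paths and fields are hypothetical examples.
from datetime import date
from pathlib import Path

def capture_note(vault: Path, title: str, body: str, source: str) -> Path:
    """Write a markdown note with frontmatter into vault/inbox/."""
    slug = title.lower().replace(" ", "-")
    note = vault / "inbox" / f"{slug}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(
        f"---\ndate: {date.today().isoformat()}\nsource: {source}\n---\n\n"
        f"# {title}\n\n{body}\n",
        encoding="utf-8",
    )
    return note
```

Because notes are plain files, the same vault is readable by Obsidian, by Claude Code, and by any retrieval pipeline indexing it.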