Who I am
AI Engineer specializing in building production LLM systems. From prototype to production, I build AI systems that solve real business problems with documented ROI.
My mission
I believe AI should be accessible and practical. I specialize in transforming advanced language models into reliable production systems. I approach every project with an engineering mindset: measurable results, full observability, and cost optimization.
My approach
I don't build demos; I build production-ready systems. That means automated evaluations, input and output guardrails, real-time monitoring, and an architecture that scales with the business. Every system is designed with maintainability and cost in mind.
Experience
Where I worked and what I did
Delivered product development and operational projects valued at up to 500k PLN.
- Leadership: Led cross-functional teams of 10+ members across international markets (DACH region)
- Expansion: Managed business expansion into Asian markets and established strategic partnerships
- Optimization: Reduced operational costs and improved team efficiency through data-driven KPI tracking
Prompt optimization and AI integration consulting for business clients: reducing API costs and improving model response quality.
- Prompt optimization for GPT-4 and Claude
- Implementing guardrails and response validation
- Team training in prompt engineering
- Audit and optimization of existing AI systems
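As a minimal sketch of the output-guardrail idea mentioned above (the rules, thresholds, and banned phrases here are illustrative assumptions, not from any specific client system):

```python
# Illustrative output guardrail: validate a raw LLM response before it
# reaches the user. Rules and limits are examples, not a real ruleset.
BANNED_PHRASES = ("as an ai language model", "i cannot help with")

def validate_response(raw: str, max_chars: int = 2000) -> dict:
    """Run a raw LLM response through simple output checks."""
    issues = []
    if not raw.strip():
        issues.append("empty_response")
    elif len(raw) > max_chars:
        issues.append("too_long")
    lowered = raw.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            issues.append(f"banned_phrase:{phrase}")
    return {"ok": not issues, "issues": issues, "text": raw.strip()}

print(validate_response("Here is the refund policy summary."))
```

In production such checks typically run as a pipeline stage between the model call and the client, with failures logged and either retried or replaced by a safe fallback response.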
Education & Certificates
Continuously growing in the AI field
Skills
Technologies and tools I work with daily
AI & LLM
Backend
Frontend
DevOps & MLOps
Databases
Projects
Production AI systems solving real problems
RAG Q&A System
Hybrid search system combining semantic and keyword search for accurate AI answers. Full observability with LangSmith and real-time metrics.
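One common way to combine semantic and keyword rankings in a hybrid search system is reciprocal rank fusion; the sketch below assumes two already-ranked ID lists (the document IDs are made up for illustration):

```python
# Reciprocal rank fusion (RRF): merge two rankings by summing 1/(k + rank)
# per document. A document ranked well in both lists rises to the top.
def rrf_merge(semantic: list[str], keyword: list[str], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in (semantic, keyword):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "b" appears near the top of both lists, so it wins the fused ranking.
print(rrf_merge(["a", "b", "c"], ["b", "c", "d"]))  # ['b', 'c', 'a', 'd']
```

The constant `k` (often 60 in the literature) dampens the advantage of rank-1 hits so that agreement between retrievers matters more than a single top position.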
LLM Evaluation Framework
Automated LLM evaluation pipelines measuring accuracy, relevance, and safety. Testing system with 50+ test cases and 4-layer protection.
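A toy version of such an evaluation pipeline, scoring exact-match accuracy over a case list (the model stub and test cases are illustrative assumptions):

```python
# Minimal evaluation harness: run prompts through a model function and
# report exact-match accuracy. Real pipelines add relevance and safety
# scorers; this shows only the accuracy layer.
from typing import Callable

def evaluate(model: Callable[[str], str], cases: list[tuple[str, str]]) -> dict:
    results = [(prompt, expected, model(prompt)) for prompt, expected in cases]
    passed = sum(1 for _, expected, got in results if got == expected)
    return {"total": len(cases), "passed": passed,
            "accuracy": passed / len(cases) if cases else 0.0}

def fake_model(prompt: str) -> str:
    return "4" if prompt == "2+2?" else "?"

print(evaluate(fake_model, [("2+2?", "4"), ("capital of France?", "Paris")]))
# {'total': 2, 'passed': 1, 'accuracy': 0.5}
```

Swapping `fake_model` for a real API client and exact match for an LLM-as-judge scorer turns this skeleton into a CI-friendly regression suite.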
Production LLM Architecture
Complete systems from document ingestion to real-time monitoring. Asynchronous processing with Redis cache and sub-second responses.
Intelligent Chat Interface
Smart caching layer reducing API costs while maintaining sub-second responses. RAG with vector search and contextual conversation memory.
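An illustrative sketch of the caching idea behind that interface: a small TTL cache in front of an expensive LLM call, so repeated identical prompts are served from memory (class name and TTL value are assumptions, not the project's actual code):

```python
import time

# Illustrative TTL cache: identical prompts within the TTL window skip the
# API call entirely, cutting cost and latency for repeated questions.
class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, str]] = {}

    def get_or_call(self, prompt: str, call) -> str:
        now = time.monotonic()
        hit = self._store.get(prompt)
        if hit and now - hit[0] < self.ttl:
            return hit[1]          # cache hit: no API spend
        answer = call(prompt)      # cache miss: pay for one call
        self._store[prompt] = (now, answer)
        return answer

calls = []
def fake_llm(prompt: str) -> str:
    calls.append(prompt)
    return f"answer:{prompt}"

cache = TTLCache()
cache.get_or_call("hi", fake_llm)
cache.get_or_call("hi", fake_llm)
print(len(calls))  # 1: the second request was a cache hit
```

A production variant would normalize prompts before keying, bound the store's size, and use a shared store such as Redis instead of process memory.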