Data Scientist
Languages: Fluent English
Job Description
We’re looking for a Data Scientist to build production-grade AI agent systems that automate multi-step workflows, interact with data in real time, and drive measurable business outcomes—helping shape the future of privacy-first, neuro-contextual advertising. 🤖📈🔒
🎯 Responsibilities
- Build AI agent systems that interact with data, execute multi-step workflows, and make autonomous decisions in production.
- Design and implement agent architectures using modern frameworks, including multi-agent orchestration, delegation patterns, and reliability under production constraints.
- Solve context management challenges (conversation history, tool outputs, file attachments) using semantic search / retrieval so agents consistently have the right information.
- Integrate agents with databases and APIs for real-time tool calling, and create scalable tool-integration patterns (e.g., protocols like MCP) for broader adoption across teams.
- Develop evaluation frameworks to measure agent performance in production and iterate based on results.
- Own solutions end-to-end: from concept and design to deployment, working closely with cross-functional stakeholders to align work with business goals.
🛠️ Requirements
- 3–5 years building and deploying production ML systems, including hands-on work with LLM-based applications or autonomous agents (e.g., Pydantic AI, OpenAI Agents SDK, LangGraph, or similar).
- Strong software engineering skills in Python, with experience shipping services to production (async programming, API design).
- Production infrastructure experience with Docker and Kubernetes.
- Strong quantitative background (CS, Engineering, Statistics, Math, or similar) and understanding of retrieval systems, embeddings, and context-management patterns.
- Experience delivering end-to-end projects (design → deployment), comfortable with tool calling, multi-step workflows, and the non-deterministic nature of LLM systems.
- Proactive, adaptable, and entrepreneurial mindset—comfortable operating where best practices are still emerging.
🧰 Tech environment (what you’ll work with)
- High-scale ML serving (very low latency at high throughput).
- Microservices in Python and Go, deployed on Kubernetes in GCP (and AWS).
- Data infrastructure including data lakes, GCS, Redis, Kinesis and/or Kafka, and large-scale daily data outputs for analytics.
- Broader company stack also includes TypeScript/Node.js and Scala, with technologies like Kafka, Druid, and MongoDB in GCP.
🤝 Why join
- High-growth moment with strong career development opportunities.
- Remote-first culture with the flexibility to work 100% remote or hybrid, depending on what works best for you.
🎁 Benefits & perks
- Learning: ODILO online courses + optional English/Spanish group classes.
- Flexible benefits plan (restaurant, transportation, kindergarten vouchers).
- Health insurance.
- Gympass / wellbeing programs.
- Seedsave discounts program.
- Home office budget of up to €1,000 gross for equipment (e.g., screen, chair, desk).
- Paid trips to HQ in Madrid for in-person squad collaboration.
- MacBook Pro M3.
📍 Location / Work model
- Remote-first (100% remote or hybrid).
- Contracts available within Europe where legal entities exist: Spain, Italy, UK, Belgium, Netherlands, France, Germany.
Location: Calle del Marqués de Valdeiglesias, 6, 28004 Madrid, Community of Madrid, Spain