Head of AI Engineering (f/m/x)

München
Full-time
Permanent employee

Your mission

About neoshare 
We’re a Munich-based AI-first fintech scale-up (founded 2019) with offices in Munich, Frankfurt, and Sofia. Our SaaS platform brings banks, investors, and advisors together to collaborate on complex financial deals — making due diligence faster, smarter, and more transparent. Our AI features are already live with leading banks. Now we’re scaling. 
  
The Role 
Own and evolve our AI engineering function — transforming a 15–20 person ML team from research-heavy to a high-throughput, production-grade organization. You’ll partner with the Director of AI on strategy, build the platform that unifies LLM access, RAG, and backend services, and ship reliable, scalable AI features that change how banks work. 
 
Key responsibilities 
  • Team leadership and org build
    • Hire, mentor, and develop a high-performing team; set the technical bar, operating rhythms, and code/research review practices
    • Organize sub-teams (e.g., Core Modeling, AI Platform/Infra, Integrations) with clear ownership, SLOs, and on-call
    • Manage roadmap, capacity planning, and delivery across parallel initiatives
  • Architecture and platform
    • Own the LLM gateway: unified APIs and proxy layers for multi-provider routing (OpenAI, Gemini, Bedrock), with rate limits, fallbacks, and cost tracking
    • Build high-performance RAG pipelines (ingestion, embeddings, vector stores, caching) with robust observability and safety guardrails
    • Partner with Java/NestJS teams to define clean async contracts, schemas, and eventing patterns; drive low-latency, scalable inference
  • Model lifecycle and operations
    • Lead end-to-end model and prompt lifecycle: data curation, training/fine-tuning, evaluation, deployment, rollback
    • Establish LLMOps/MLOps: model/prompt registries, CI/CD, canary/A/B tests, offline/online evals, drift and cost monitoring
    • Optimize inference throughput and cost (autoscaling, batching, quantization/distillation, caching)
  • Strategy and collaboration
    • Translate company goals into an AI/ML roadmap with measurable outcomes; balance exploration with reliability and cost
    • Own build-vs-buy/vendor strategy for models, infrastructure, and data services; manage budgets and SLAs
  • Governance and security
    • Implement data privacy, security, and compliance practices (RBAC, secrets, auditability); track prompt/model lineage and reproducibility
    • Define incident response, runbooks, and postmortems for AI features
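The gateway responsibility above (multi-provider routing with fallbacks and cost tracking) can be sketched roughly as follows. This is a minimal illustration, not neoshare's implementation: the provider names, per-token prices, and stubbed `call` functions are all hypothetical stand-ins for real OpenAI/Gemini/Bedrock SDK clients.

```python
# Hypothetical provider table. In a real gateway each "call" would wrap a
# vendor SDK client; here they are stubs so the sketch is self-contained.
PROVIDERS = {
    "primary": {"cost_per_1k": 0.010, "call": lambda p: f"primary:{p}"},
    "fallback": {"cost_per_1k": 0.004, "call": lambda p: f"fallback:{p}"},
}

def route(prompt: str, order=("primary", "fallback")) -> dict:
    """Try providers in order; on error, fall through to the next one.

    Returns the provider used, its response, and a rough cost estimate,
    which a gateway would export to its cost-reporting pipeline.
    """
    for name in order:
        provider = PROVIDERS[name]
        try:
            text = provider["call"](prompt)
            tokens = len(prompt.split())  # crude token estimate for the sketch
            cost = tokens / 1000 * provider["cost_per_1k"]
            return {"provider": name, "text": text, "cost": cost}
        except Exception:
            continue  # provider failed; try the next in the routing order
    raise RuntimeError("all providers failed")
```

A production version would add rate limiting, per-tenant quotas, and retry budgets on top of this routing loop; the fallback order itself can also be made dynamic (e.g., cost- or latency-aware).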

Your profile

  • 5+ years as a backend engineer and 4+ years leading AI/ML engineering in production (10+ years total experience ideal)
  • Deep architecture expertise in Java (JVM) and/or Node.js (NestJS), distributed systems, APIs, microservices, and messaging/streaming
  • Hands-on with LLM stacks: orchestration (e.g., LangChain/LlamaIndex or custom), vector DBs (Pinecone, Qdrant, FAISS), cloud AI (e.g., AWS Bedrock)
  • Proven operation of systems at scale (millions of daily API calls) with strong SLOs, observability, and incident management
  • MLOps foundations: model registries, experiment tracking, CI/CD, Kubernetes, IaC (e.g., Terraform), security best practices
  • Excellent communication and stakeholder management; strong product sense focused on shipping user-facing features
Nice to have 
  • Experience with GPU/accelerator serving and optimization (vLLM, TGI, Triton, ONNX Runtime)
  • Cost optimization for LLM workloads (token budgets, dynamic routing, caching)
  • Evaluation and safety/red-teaming for generative systems; startup/high-growth experience
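The cost-optimization item above mentions caching for LLM workloads. A minimal sketch of response caching, under the assumption of an in-process dict keyed on a normalized prompt hash (a production setup would more likely use Redis with a TTL and semantic-similarity keys):

```python
import hashlib

# Hypothetical in-process cache for illustration only.
_cache: dict[str, str] = {}

def cached_complete(prompt: str, complete) -> tuple[str, bool]:
    """Return (response, cache_hit). On a hit, no tokens are spent.

    `complete` stands in for any provider call; the key normalizes
    whitespace and case so trivially-different prompts share an entry.
    """
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key in _cache:
        return _cache[key], True
    result = complete(prompt)
    _cache[key] = result
    return result, False
```

Tracking the hit rate of such a cache is one direct input to the cost-per-request metric listed under "Impact metrics" below.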
Impact metrics 
  • Platform: adoption of a unified LLM gateway; standardized observability and cost reporting
  • Delivery: 2–3 user-facing AI features shipped with clear SLOs and measurable impact
  • Reliability/cost: reduced average latency and cost per request; autoscaling and caching in place
  • Org: sub-team structure established; improved code quality and on-time delivery; targeted hiring completed
 
Our stack  
  • Backend: Java (JVM), Node.js (NestJS); event-driven microservices; API gateways/proxies
  • AI platform: Python, PyTorch, LLM orchestration, prompt pipelines/registry; vector DBs (Pinecone, Qdrant); RAG services
  • Infra/DevOps: AWS (incl. Bedrock), Kubernetes, Terraform, CI/CD, Observability (OpenTelemetry, Prometheus/Grafana)

Why us?

International & Inclusive Team: Collaboration with diverse teams at our locations in Munich, Frankfurt, and Sofia.
Modern & Dog-friendly Offices: Ergonomic, green, and inspiring for collaboration and productivity.
Flexibility: 30 vacation days, flexible working hours, and hybrid work.
Special Time Off: Additional half-day off on Christmas Eve and New Year's Eve.
Workation: Work remotely for a limited period each year from selected destinations.
Wellbeing & Mobility Benefits: Support for your well-being and a sustainable lifestyle:
  • Urban Sports/EGYM Club subsidy: Monthly support for your membership.
  • Jobticket: 50% monthly subsidy for the Deutschlandticket.
  • JobRad: Leasing of bicycles or e-bikes at attractive conditions.
Candidates must have the right to work in the EU; visa sponsorship is not provided for this role. 

About us

neoshare AG, founded in 2019 in Munich, has quickly evolved into an international fintech company and now operates locations in Munich, Frankfurt and Sofia, Bulgaria. As an “AI-First Company,” it offers an innovative end-to-end solution with its SaaS platform "neoshare" for the efficient digitization and management of large-scale project and real estate financing. In close collaboration with banks and real estate companies, the product is continuously developed to sustainably transform the financial sector.