Description
How you can help make a better world of work
We're building the platform that makes AI possible across Culture Amp. This is a Staff-level role in our AI Platform team, which is responsible for the infrastructure, governance, and tooling that enable product teams to ship AI-powered features safely and at scale.
What you'll do
You'll own the systems that sit between our product teams and the AI capabilities they need: LLM gateways, vector storage, retrieval infrastructure, and the guardrails that keep it all compliant and cost-effective.
This means:
- Designing and operating the platform services that power AI features across Culture Amp, including inference pipelines, embedding storage, and retrieval systems
- Building a scalable approach to vector search across diverse categories of unstructured data (survey responses, performance feedback, company documents)
- Driving MLOps and LLMOps practices across the organisation, including observability, cost management, and reliability
- Ensuring AI is used responsibly: implementing guardrails, security controls, and data compliance measures
- Partnering with data scientists on the team to productionise models and evaluate new AI capabilities
What makes this interesting
AI is central to Culture Amp's product roadmap. Coach, our AI assistant, is changing how we empower managers and leaders to build better workplaces, and this platform is what makes that possible. You'll be solving genuine scale problems: semantic search across millions of survey responses, retrieval that works across different data types, and governance that keeps pace with how quickly AI capabilities are evolving.
We're looking for someone who leads from the front: not just executing well, but identifying opportunities for leverage, driving architectural direction, and building excitement across the team and with partner teams. You'll shape how AI infrastructure evolves at Culture Amp.
You have
- Strong platform engineering fundamentals, with experience building and operating services that other teams depend on
- Expertise in Python and experience with ML tooling and infrastructure
- Deep experience with large-scale data systems (streaming, batch processing, data lakes)
- Proven experience with ML infrastructure: model serving, vector databases, embedding pipelines
- Strong understanding of cloud platforms (AWS preferred) and backend architecture
- Experience building and optimising RAG and retrieval systems
- The communication skills to work across teams and influence technical direction beyond your own team
You are
- Self-motivated and able to work independently, comfortable dealing with ambiguity when it arises. You take the initiative to ensure you have everything you need to work effectively, and ask for support when required.
- A driver of technical excellence in a team environment. You’re an expert in your domain and are able to develop the expertise and knowledge of those around you.
- Someone who loves collaboration: our teams are cross-functional, and you'll be working with other engineers, team leads, and product managers to deliver great outcomes together.
- Aligned with our values (check them out here: https://www.cultureamp.com/company#values) and able to demonstrate them through your working practice.