      AI Platform Engineer | London | Excellent Salary + Benefits

      Join an award-winning, internationally recognised B2B consultancy as an AI Platform Engineer, owning the cloud-native platform that underpins conversational AI and generative AI products at scale.

      Sitting at the core of AI delivery, you will design, build, and operate the runtime, infrastructure, and operational layers that support RAG pipelines, LLM orchestration, vector search, and evaluation workflows across AWS and Databricks. Working closely with senior AI engineers and product teams, you'll ensure AI systems are scalable, observable, secure, and cost-efficient, turning experimental AI into reliable, production-grade capabilities. The broader scope of responsibilities is detailed below:

      • Own and evolve the AI platform powering conversational assistants and generative AI products.
      • Build, operate, and optimise RAG and LLM-backed services, improving latency, reliability, and cost.
      • Design and run cloud-native AI services across AWS and Databricks, including ingestion and embedding pipelines.
      • Scale and operate vector search infrastructure (Weaviate, OpenSearch, Algolia, AWS Bedrock Knowledge Bases).
      • Implement strong observability, CI/CD, security, and governance across AI workloads.
      • Enable future architectures such as multi-model orchestration and agentic workflows.

      Required Skills & Experience

      • Strong experience designing and operating cloud-native platforms on AWS (Lambda, API Gateway, DynamoDB, S3, CloudWatch).
      • Hands-on experience with Databricks and large-scale data or embedding pipelines.
      • Proven experience building and operating production AI systems, including RAG pipelines, LLM-backed services, and vector search (Weaviate, OpenSearch, Algolia).
      • Proficiency in Python, with experience deploying containerised services on Kubernetes using Terraform.
      • Solid understanding of distributed systems, cloud architecture, and API design, with a focus on scalability and reliability.
      • Demonstrable ownership of observability, performance, cost efficiency, and operational robustness in production environments.

      Why Join?

      You’ll own the foundational AI platform behind a growing suite of generative AI products, working with senior AI leaders on systems used by real customers at scale. This role offers deep technical ownership, long-term impact, and an excellent compensation package within a market-leading organisation.
