MLOps Engineer (LLMs)
Multiverse Computing
Multiverse is a well-funded, fast-growing deep-tech company founded in 2019. We are the largest quantum software company in the EU and have been recognized by CB Insights (2023 and 2025) as one of the 100 most promising AI companies in the world.
With 180+ employees and growing, our team is fully multicultural and international. We deliver hyper-efficient software for companies seeking a competitive edge through quantum computing and artificial intelligence.
Our flagship products, CompactifAI and Singularity, address critical needs across various industries:
- CompactifAI is a groundbreaking compression tool for foundational AI models based on Tensor Networks. It enables the compression of large AI systems—such as language models—to make them significantly more efficient and portable.
- Singularity is a quantum- and quantum-inspired optimization platform used by blue-chip companies to solve complex problems in finance, energy, manufacturing, and beyond. It integrates seamlessly with existing systems and delivers immediate performance gains on classical and quantum hardware.
You’ll be working alongside world-leading experts to develop solutions that tackle real-world challenges. We’re looking for passionate individuals eager to grow in an ethics-driven environment that values sustainability and diversity.
We’re committed to building a truly inclusive culture—come and join us.
As an MLOps Engineer, you will:
- Deploy cutting-edge ML/LLM models to Fortune Global 500 clients.
- Join a world-class team of quantum experts with an extensive track record in both academia and industry.
- Collaborate with the founding team in a fast-paced startup environment.
- Design, develop, and implement Machine Learning (ML) and Large Language Model (LLM) pipelines, encompassing data acquisition, preprocessing, model training and tuning, deployment, and monitoring.
- Employ automation tools such as GitOps, CI/CD pipelines, and containerization technologies (Docker, Kubernetes) to streamline processes throughout the ML/LLM lifecycle.
- Establish and maintain comprehensive monitoring and alerting systems to track Large Language Model performance, detect data drift, and monitor key metrics, proactively addressing any issues.
- Conduct ground-truth analysis to evaluate the accuracy and effectiveness of Large Language Model outputs against known, accurate data.
- Collaborate closely with Product and DevOps teams and Generative AI researchers to optimize model performance and resource utilization.
- Manage and maintain cloud infrastructure (e.g., AWS, Azure) for Large Language Model workloads, ensuring both cost-efficiency and scalability.
- Stay updated with the latest developments in ML/LLM Ops, integrating these advancements into generative AI platforms and processes.
- Communicate effectively with both technical and non-technical stakeholders, providing updates on Large Language Model performance and status.
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Mid-level: 2+ years of experience as an ML/LLM engineer on public cloud platforms.
- Senior: 5+ years of experience as an ML/LLM engineer on public cloud platforms.
- Proven experience in MLOps, LLMOps, or related roles, with hands-on experience in managing machine/deep learning and large language model pipelines from development to deployment and monitoring.
- Expertise in cloud platforms (e.g., AWS, Azure) for ML workloads, MLOps, DevOps, or Data Engineering.
- Expertise in model parallelism for training and serving, as well as data parallelism and hyperparameter tuning.
- Proficiency in programming languages such as Python; distributed computing tools such as Ray; and model parallelism frameworks such as DeepSpeed, Fully Sharded Data Parallel (FSDP), or Megatron-LM.
- Expertise with generative AI applications and domains, including content creation, data augmentation, and style transfer.
- Strong understanding of Generative AI architectures and methods, such as chunking, vectorization, and context-based retrieval and search, and experience working with Large Language Models such as OpenAI GPT-3.5/4, Llama 2, Llama 3, Mistral, etc.
- Experience with Azure Machine Learning, Azure Kubernetes Service, Azure CycleCloud, Azure Managed Lustre.
- Perfect English; Spanish is a plus.
- Great communication skills and a passion for working collaboratively in an international environment.
Preferred Qualifications
- Experience in training Mixture-of-Experts models.
- Experience working with different public cloud providers and hybrid environments.
- Experience in real-time streaming applications.
- Experience with Azure Data Lake, Azure Data Factory, Azure RBAC, and Application Insights/Azure Monitor.
Perks & Benefits
- Indefinite contract.
- Equal pay guaranteed.
- Variable performance bonus.
- Signing bonus.
- Work visa sponsorship (if applicable).
- Relocation package (if applicable).
- Private health insurance.
- Eligibility for educational budget according to internal policy.
- Hybrid opportunity.
- Flexible working hours.
- Flexible remuneration: hospitality and public transportation.
- Language classes and discounted lunch options.
- A fast-paced environment working on cutting-edge technologies.
- Career plan with opportunities to learn and teach.
- A progressive company with a happy-people culture.
As an equal opportunity employer, Multiverse Computing is committed to building an inclusive workplace. The company welcomes people from all different backgrounds, including age, citizenship, ethnic and racial origins, gender identities, individuals with disabilities, marital status, religions and ideologies, and sexual orientations to apply.
- Department: Technical
- Location: San Sebastian, Spain
- Employment type: Full-time
- Workplace type: Hybrid
- Seniority level: Mid-Senior level
About MULTIVERSE COMPUTING
Come and join our multicultural team!
5 locations
27+ languages