About the position

This is an excellent opportunity to work in a company with a highly technological product that generates hundreds of thousands of events per second. This vast sea of data is not only stored and organized but also consumed to improve all aspects of the operation: pricing, dispatching, marketing, governance, and many others.

On the Data Engineering team, we operate dozens of services (Scala, Go, Python), pipelines (Apache Beam, Airflow), and our in-house machine learning platform. We are a hands-on team: we manage our own infrastructure (GCE and AWS) and Kubernetes clusters (GKE).

It’s important to highlight that Cabify is a global company with a very complex product, yet still the perfect size for you to have a real impact on it. You will build and improve the platform that provides trusted data at scale to the rest of the company, as part of a team of very experienced data engineers who help each other grow technically and professionally.

You will:

  • Design and develop end-to-end data solutions and modern data architectures for Cabify products and teams (streaming ingestion, data lake, data warehouse...)
  • Evolve and maintain Lykeion, a machine learning platform built together with the Data Science team that covers the whole lifecycle of models and features. It includes a feature store, which allows other teams inside Cabify to make better decisions based on data, and a prediction platform to serve ML models.
  • Design and maintain complex APIs that expose data at scale and help other teams make better decisions.
  • Provide the company with data discoverability and governance.
  • Collaborate with other technical teams to define, execute and release new services and features.
  • Manage and evolve our infrastructure. Continuously identify, evaluate and implement new tools and approaches to maximize development speed and cost efficiency.
  • Extract data from internal and external sources to empower our Data Analytics team.

What we’re looking for

We are looking for experienced software engineers with a deep interest in data and strong know-how in large-scale distributed systems.

This list describes the ideal candidate, but we know that backgrounds are very diverse. You don’t have to fulfill every single point, and in case of doubt we are happy to discuss it with you:

  • At least 4 years of experience coding and delivering complex software projects.
  • Fluency in different programming languages (we work with Python, Scala, and Go; you don’t need to master all three of them).
  • Experience with message delivery systems and streaming processing (Kafka, RabbitMQ, Akka streams, Apache Beam…)
  • Good understanding and application of modern data processing technology stacks and distributed processing (Hadoop, Spark, Apache Beam, Apache Flink...)
  • Deep understanding of different storage technologies (file-based, relational, columnar, document-based, key-value...)
  • Experience with orchestration tools such as Airflow, Luigi, or Dagster.
  • Familiarity with Machine Learning concepts, especially with its lifecycle / MLOps (features, models, training & evaluation processes, productionizing)
  • Experience with cloud infrastructures (GCP, AWS, Azure)
  • Being comfortable with automation/IaC tools (Terraform, Puppet, Ansible…)
  • Bonus points:
    • Experience with Google Cloud BigData products (PubSub, Dataflow, BigTable, BigQuery…)
    • Experience with Kubernetes.
    • Experience with Apache Beam and Scio.

The good stuff:

We’re a company full of happy, motivated people and we never want that to change. Here are more reasons why it rocks to be part of our high-performance team.

  • Salary conditions:
    • Senior: 45k - 65k.
    • Principal: 55k - 85k.
  • Very competitive stock options plan.
  • Remote position, or on-site/hybrid position at our Madrid HQ.
  • 22 vacation days + 2 extra days for the Christmas period.
  • Recharge day: in addition to the vacation days above, the third Friday of every month is also a day off!
  • Personal development programs based on our career paths, and a budget for training.
  • Cabify staff free rides.
  • Flexible compensation plan: Restaurant Tickets, Transport Tickets, healthcare and childcare.
  • Regular team events.
  • All the equipment you need (you only have to bring your talent).
  • A pet room in the office, so you don’t have to leave your furry friend at home.
  • And last but not least… free coffee and fruit!

Join us!

Apply now!

Why Cabify?

We're remote friendly!

In our Product team, everyone has the option to work remotely two days per week. People in Senior roles have the option of working fully remotely.

Flexible remuneration plan

You can take advantage of a range of services: restaurant vouchers, transport vouchers, healthcare and childcare vouchers.

Keep learning

You'll be given your own personal budget for training and development, plus regular internal training sessions and technical events.