● Creative and intellectually curious people with at least five years of hands-on experience in machine learning engineering, or five years of experience in data engineering and a strong aptitude for machine learning.
● Rigorous engineers who value building at scale and delivering with quality, and who think of ML integration in software in terms of ML pipelines.
● Distributed computing enthusiasts, with at least 2 years of experience with Spark and the Hadoop ecosystem, along with other big data technologies.
● MLOps fanatics with proven experience building CI/CD pipelines for machine learning models.
● Proficient Python developers with excellent knowledge of SQL.
● Generous, flexible team players.
● Amazing communicators who can convey the importance of their work to laypersons as well as peers.
● Full working proficiency in English.
● Hands-on experience orchestrating machine learning and big data services in AWS (Step Functions, SageMaker, EMR, S3, Amazon Aurora, Amazon Athena).
● Experience orchestrating production workflows in Apache Airflow.
● Knowledge of Scala.
● Regular compensation package reviews.
● 20 working days of paid vacation leave, 15 working days of paid sick leave with a medical certificate, and 5 working days of paid sick leave without a certificate.
● Medical insurance program for employees (with the option to insure family members at a corporate rate).
● Certification coverage and support for participation in specialized events (webinars, conferences, training sessions, etc.).
● Regular team-building events within project teams, as well as summer and winter corporate parties for the entire company.
● Office, remote, or hybrid work arrangements — at the employee’s convenience and with the direct manager’s approval.
● Free English classes in the office.
● Convenient office location in the city center.