• Specializations
    Data Engineering
  • City
    Kyiv, Ukraine
  • Employment type
    full-time • internship
  • Date posted
    3 months ago
  1. Project and team

    The project involves pulling, processing, and analyzing large amounts of structured and semi-structured data from web APIs and various data stores. Most of the data relates to eCommerce businesses, online marketing, and visitor behavior.

    The goal of the project is to provide a clean data set of all relevant online data for an eCommerce business and use that data in APIs to optimize product merchandising (recommendations), user-generated content, conversion rate, and marketing spend across all channels.

    More info on the company: https://stacktome.com
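
    As an illustration of the kind of data product described above, here is a minimal sketch of item-to-item co-occurrence counting for product recommendations, written with plain Scala collections. All names and the input shape are hypothetical; a production platform would run this kind of logic over the full data set (e.g. as a Spark job) rather than in-memory sequences:

    ```scala
    // Toy sketch of co-occurrence-based product recommendations.
    // Each order is modeled as the set of product ids bought together.
    object CooccurrenceSketch {
      // Count how often each unordered pair of products appears in the same order.
      def cooccurrences(orders: Seq[Set[String]]): Map[(String, String), Int] =
        orders
          .flatMap { order =>
            for {
              a <- order.toSeq
              b <- order.toSeq
              if a < b // keep each unordered pair once, in sorted order
            } yield (a, b)
          }
          .groupBy(identity)
          .map { case (pair, occurrences) => pair -> occurrences.size }

      // Products most frequently bought together with `product`.
      def recommend(counts: Map[(String, String), Int], product: String, k: Int): Seq[String] =
        counts.toSeq
          .collect {
            case ((a, b), n) if a == product => (b, n)
            case ((a, b), n) if b == product => (a, n)
          }
          .sortBy(-_._2)
          .take(k)
          .map(_._1)
    }
    ```

    The same pair-counting shape maps directly onto a distributed `groupBy`/aggregate when the order history no longer fits on one machine.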

  2. Position description

    As a member of the Data Engineering team, you will design, build, and manage Big Data infrastructure and tools. You will own part of the data platform that enables performing complex analytics and reporting efficiently and reliably.


    • Design, implement and maintain efficient and reliable data pipelines to move data across systems

    • Collaborate with multiple teams (product, analytics, finance, engineering, etc.) in high-visibility roles to implement data workflows and own the solution end-to-end

    • Design and develop new systems and tools in a number of languages (Scala, Python) to facilitate effective data consumption

    • Implement data workflow to aggregate and structure data for high-performance analytics

    • Monitor performance of the data platform and optimize as needed

    • Evaluate, benchmark and integrate latest big data tools and technologies
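
    To ground what a workflow step like "aggregate and structure data for high-performance analytics" might look like, here is a hedged sketch of an extract-and-transform pass over semi-structured visit records, using plain Scala collections. The field names (`channel`, `converted`) are invented for the example; a production workflow would express the same shape of logic in Spark over the real event data:

    ```scala
    // Illustrative only: a tiny ETL step over semi-structured input.
    final case class Visit(channel: String, converted: Boolean)

    object ConversionEtl {
      // Extract: tolerate missing or dirty fields; drop records without a channel.
      def parse(raw: Map[String, String]): Option[Visit] =
        raw.get("channel").map(c => Visit(c, raw.get("converted").contains("true")))

      // Transform: conversion rate per marketing channel.
      def conversionRates(raws: Seq[Map[String, String]]): Map[String, Double] =
        raws.flatMap(parse)
          .groupBy(_.channel)
          .map { case (channel, visits) =>
            channel -> visits.count(_.converted).toDouble / visits.size
          }
    }
    ```

    The parse step makes the pipeline robust to schema drift in the raw feed, which is the usual failure mode when pulling from third-party web APIs.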

    Desired Skills and Experience

    • Able to code in Scala

    • Understanding of ELT design, implementation and maintenance

    • Able to communicate complex concepts clearly and accurately

    • Willingness to learn new technologies, tools and approaches to problem-solving

    • Able to take responsibility for your own work and solve problems independently

    Nice to Have

    • Experience with data pipeline implementation in Apache Spark (Batch and Streaming)

    • Experience with HDFS, from using it to deploying and configuring it

    • Familiarity with Docker and Kubernetes


    What We Offer

    • Learn cutting-edge tools used in the big data world

    • Build a product, not bill hours

    • Freedom and independence to make your own decisions to achieve results

    • Be part of a young and ambitious team seeking to make a dent in the data universe

    • Great office location

    • Flexible schedule

    • Compensation growth based on performance

  3. Key skills and technologies

    Apache Spark • Docker • Google Cloud Platform • Linux • Scala