Data Engineer / Scala Dev

Published on Jun 26, 2019
StackTome

A product startup founded in 2017 with an office in Kyiv, Ukraine.


Experience: 1-3 years
Location: Kyiv, Ukraine
Job Type: Full-Time, Internship
Technologies: Apache Spark, Docker, Google Cloud Platform, Linux, Scala

Overview: Project & Team

The project involves pulling, processing, and analyzing large amounts of structured and semi-structured data from web APIs and various data stores. Most of the data relates to eCommerce businesses, online marketing, and visitor behavior. The goal of the project is to provide a clean data set of all relevant online data for an eCommerce business and expose that data through APIs that optimize product merchandising (recommendations), user-generated content, conversion rate, and marketing spend across all channels.
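To give a concrete (and purely illustrative) flavour of this work, below is a minimal sketch of the kind of Spark batch job involved: it reads raw JSON events from a bucket, keeps the fields relevant to merchandising analytics, and writes a clean, partitioned data set. The bucket paths, column names, and the job itself are hypothetical and not part of the actual codebase.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical sketch: ingest semi-structured order events (JSON), keep the
// fields relevant to merchandising analytics, and write a clean Parquet set.
object CleanOrderEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clean-order-events")
      .getOrCreate()

    // Raw, semi-structured events pulled from web APIs land here as JSON.
    val raw = spark.read.json("gs://example-bucket/raw/order_events/*.json")

    val cleaned = raw
      .filter(col("order_id").isNotNull)                      // drop malformed events
      .withColumn("event_date", to_date(col("event_time")))   // derive a partition column
      .select("order_id", "visitor_id", "product_id", "revenue", "channel", "event_date")
      .dropDuplicates("order_id", "product_id")                // one row per order line

    // Partitioning by date keeps downstream analytics queries cheap.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("gs://example-bucket/clean/order_events/")

    spark.stop()
  }
}
```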


What You Will Do: Responsibilities

As a member of the Data Engineering team, you will design, build and manage Big Data infrastructure and tools. You will own part of the data platform that enables complex analytics and reporting to run efficiently and reliably.

  • Design, implement and maintain efficient and reliable data pipelines to move data across systems.
  • Collaborate with multiple teams (product, analytics, finance, engineering, etc.) in high-visibility roles to implement data workflows and own the solution end-to-end.
  • Design and develop new systems and tools in a number of languages (Scala, Python) to facilitate effective data consumption.
  • Implement data workflows to aggregate and structure data for high-performance analytics (a small illustrative sketch follows this list).
  • Monitor performance of the data platform and optimize as needed.
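For illustration only, the sketch below shows one way such an aggregation workflow might look in Spark: rolling cleaned events up into daily per-channel metrics that dashboards can query cheaply. All paths and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical sketch: aggregate cleaned order events into daily per-channel
// metrics, pre-computed so that analytics and reporting queries stay fast.
object DailyChannelMetrics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-channel-metrics")
      .getOrCreate()

    val events = spark.read.parquet("gs://example-bucket/clean/order_events/")

    val daily = events
      .groupBy(col("event_date"), col("channel"))
      .agg(
        countDistinct("visitor_id").as("unique_visitors"),
        count("order_id").as("orders"),
        sum("revenue").as("revenue")
      )

    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("gs://example-bucket/marts/daily_channel_metrics/")

    spark.stop()
  }
}
```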

What You Should Bring
Must-have
  • Able to code in Scala.
  • Understanding of ELT design, implementation, and maintenance.
  • Able to communicate complex concepts clearly and accurately.
  • Willingness to learn new technologies, tools and approaches to problem-solving.
  • Responsibility for your own work and the ability to solve problems independently.
Nice-to-have
  • Experience with data pipeline implementation in Apache Spark (Batch and Streaming).
  • Experience with HDFS, from everyday use to deployment and configuration.
  • Familiarity with Docker and Kubernetes.

What You Will Get: Benefits
  • Learn cutting-edge tools used in the big data world.
  • Build a product, not bill hours.
  • Freedom and independence to make your own decisions and achieve results.
  • Be part of a young and ambitious team seeking to make a dent in the data universe.
  • Great office location.
  • Flexible schedule.
  • Compensation growth based on performance.

Need more info? Let's have a chat whenever it suits you.
Evaldas Miliauskas

Location: Velyka Vasylkivska Street, 1, Kyiv, Ukraine