Innovate, develop, and maintain the ETL, AI infrastructure, data pipelines, and distributed jobs that power the company's core data applications.
Proactively maintain database performance and continuously improve DB optimization.
Be part of the data engineering guild and make a real contribution to innovative solutions for a wide range of challenging data requirements, including new databases, architecture changes, and optimization.
Be a productive member of the big data team and part of the R&D group.
What You Should Bring
Must-have
1-2 years of experience working with Python and Spark.
1-2 years of experience in a similar role.
BSc in Computer Science, Electrical Engineering, or equivalent relevant experience.
Strong background in ETL development, data modeling, metadata management, and data quality.
Decent background as an application DBA or similar; experience with NoSQL databases such as Cassandra or Elasticsearch is a big plus.
Strong analytical, problem-solving, and interpersonal skills, a hunger to learn, and the ability to operate in a fast-paced, rapidly changing environment.
Nice-to-have
Experience working with AWS services: EC2, EMR, RDS, Redshift