Your new organisation
We are currently partnering with a leading consultancy to source candidates for numerous data roles. Successful candidates will work onsite at a “big 4” bank on some exciting projects.
Your new role
As a Big Data/Streaming Engineer, you will be responsible for developing, testing, implementing, and maintaining big data solutions and data pipelines for a data lake environment, with the ability to design solutions independently from high-level architecture.
What you'll need to succeed
- At least 1-2 years of hands-on experience with Spark Core, Spark Streaming, Kafka and HBase
- Must have used Java/Scala for the Spark Streaming APIs
- 1-2 years of good hands-on experience with Hive, HDFS, Phoenix and the wider big data ecosystem
- 1-2 years working with AWS, Azure or GCP cloud
- 1-2 years working with Hortonworks Data Platform (HDP) or Cloudera Distribution of Hadoop (CDH)
- Good knowledge of data warehousing concepts and SQL
- Good Linux, scripting and Python knowledge
- Proficient in source version control and CI/CD, including Git, Bitbucket, GitHub and Jenkins
- Good communication skills and familiarity with agile terminology
What you need to do now
Please apply now, or to find out more about this exciting opportunity, contact Menka on 0292492265 or email email@example.com for a detailed and confidential discussion. LHS 297508