Kafka Engineer

Your new organisation
We are currently partnering with a leading consultancy to source candidates for numerous data roles, working onsite at a "big 4" bank on some exciting projects.

Your new role
As a Big Data/Streaming Engineer, you will be responsible for developing, testing, implementing and maintaining big data solutions and data pipelines for a data lake environment, with the ability to design solutions independently from high-level architecture.

What you'll need to succeed
  • 1-2 years of hands-on experience with Spark Core, Spark Streaming, Kafka and HBase
  • Experience using Java or Scala with the Spark Streaming APIs
  • 1-2 years of good hands-on experience with Hive, HDFS, Phoenix and the wider big data ecosystem
  • 1-2 years working in AWS, Azure or GCP
  • 1-2 years working with Hortonworks Data Platform (HDP) or Cloudera Distribution of Hadoop (CDH)
  • Knowledge of data warehousing concepts and SQL is desirable
  • Good Linux, scripting and Python knowledge
  • Proficiency with source version control and CI/CD, including Git, Bitbucket, GitHub and Jenkins
  • Good communication skills and familiarity with agile terminology

What you need to do now
Please apply, or to find out more about this exciting opportunity, contact Menka on 02 9249 2265 or email menka.tahiliani@hays.com.au for a detailed and confidential discussion.

LHS 297508 #2649404


Industry: Technology & Internet Services
Location: NSW, Sydney CBD
Specialism: Data & Advanced Analytics

Talk to a consultant

Talk to Menka Tahiliani, the specialist consultant managing this position, located in Sydney City
Level 13, Chifley Tower, 2 Chifley Square

Telephone: 02 9249 2265
