Minimum Qualifications:
• A minimum of 2 years of IT development experience on a Big Data platform is a must.
• Proficiency with at least one of the following languages: Java, Python, Scala.
• Experience with HBase, Hive, MapReduce, Sqoop, ETL, Kafka, MongoDB, MySQL, etc.
• Understanding of how to bring efficiency to the big data life cycle.
• Understanding of automated QA needs related to big data.
• Proficiency with agile or lean development practices.
Responsibilities:
• Hadoop Cluster setup & administration experience
• Experience with ETL & data cleansing/preparation in a Hadoop environment
• Experience with Hadoop tools such as Spark, Pig, Hive, Impala, etc.
• Familiarity with Hadoop distributions such as Cloudera and Hortonworks.
• Gathering requirements, analyzing the entire system, and estimating development and testing effort.
• Coordinating with the team to assign tasks and monitoring deliverables to meet project timelines.
• Writing UNIX shell scripts to load data from different interfaces into Hadoop (a minimal sketch follows this list).
• Writing scripts to import, export, and update data in an RDBMS (see the Sqoop sketch after this list).
• Strong object-oriented design and analysis skills
• Excellent technical and organizational skills
• Excellent written and verbal communication skills
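As an illustration of the shell-scripting responsibility above, here is a minimal sketch of staging a feed file into HDFS; the file name, source path, and HDFS target directory are all hypothetical, not details from this posting:

    #!/bin/sh
    # Minimal sketch: stage a delimited feed file into HDFS (hypothetical paths).
    FEED_FILE=/data/incoming/orders_feed.csv     # hypothetical source file
    HDFS_DIR=/warehouse/staging/orders           # hypothetical HDFS target directory
    hadoop fs -mkdir -p "$HDFS_DIR"              # create the target directory if missing
    hadoop fs -put -f "$FEED_FILE" "$HDFS_DIR/"  # copy the file into HDFS, overwriting a prior load
    hadoop fs -ls "$HDFS_DIR"                    # verify the load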
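Likewise, a hedged sketch of the RDBMS import/export/update scripting using Sqoop; the JDBC connection string, credentials, table names, and directories are assumptions for illustration:

    # Minimal Sqoop sketch (hypothetical connection string, tables, and paths).
    # Import a MySQL table into HDFS:
    sqoop import \
      --connect jdbc:mysql://db-host/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /warehouse/staging/orders_rdbms
    # Export processed results back to MySQL, updating existing rows and inserting new ones:
    sqoop export \
      --connect jdbc:mysql://db-host/sales \
      --username etl_user -P \
      --table orders_summary \
      --export-dir /warehouse/output/orders_summary \
      --update-key order_id \
      --update-mode allowinsert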