Experience: 8 - 10 Years
Proficiency with Hadoop, MapReduce, and HDFS
Working knowledge of Hadoop ecosystem components such as Spark Streaming, HDFS, HBase, YARN, Hive, Pig, HiveQL, and Impala
Experience building stream-processing systems using solutions such as Spark
Knowledge of various ETL techniques and frameworks, such as Flume
Experience with Cloudera/MapR/Hortonworks distributions
Hands-on programming experience with Scala/Python as well as shell scripts and SQL
Proficient understanding of distributed computing principles
Ability to learn quickly in a fast-paced, dynamic team environment
Highly effective communication and collaboration skills
Work on ETL and high-volume real-time data pipelines
6+ years of overall experience, with at least 2 years in big data related positions.