Big Data Developer (Java, Hadoop) – LT Contract
Develop applications within virtualized environments, delivering messaging platforms, in-memory databases, and batch-processing products at scale on internal and external clouds.
Deploy solutions such as Kafka, Spark, Flink, and Hadoop to our internal cloud, automating deployments and preparing them for operational support. Work with our cloud engineering team to deliver offerings that our big data customers can consume as either IaaS or SaaS within our cloud environment.
• Design, develop, and implement web-based Java applications to support business requirements.
• Follow approved life cycle methodologies, create design documents, and perform program coding and testing.
• Resolve technical issues through debugging, research, and investigation.
• Code software applications to support internal business requirements.
• Standardize the quality assurance procedures for software; oversee testing and develop fixes.
• Develop software for large-scale Java/Spring Batch/Hadoop distributed systems
• Load and process data from disparate data sets using appropriate technologies, including but not limited to Hive, Pig, MapReduce, HBase, Spark, Storm, and Kafka.
• Expert in Hive SQL and ANSI SQL
• Extensive hands-on experience in data management and data analysis using SQL
• Ability to write SQL ranging from simple to complex, and to comprehend and support data questions and analysis using existing complex queries
• Familiarity with DevOps tooling (Puppet, Chef, Python)
• Understanding of Big Data concepts and common components including YARN, Queues, Hive, Beeline, AtScale, Datameer, Kafka, and HDF
• Java experience.
• Strong communication skills.
• Experience with HBase, Kafka, and Spark.
• Bachelor's degree in area of specialty and experience in the field or in a related area
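To give candidates a concrete sense of the distributed-processing work listed above, here is a minimal sketch of the classic MapReduce word-count pattern in plain Java (no Hadoop dependency); the map phase emits (word, 1) pairs and the reduce phase sums counts per key:

```java
import java.util.Map;
import java.util.TreeMap;

public class WordCount {
    // Map phase: split each line into (word, 1) pairs;
    // reduce phase: sum the counts per word.
    // This mirrors the classic Hadoop MapReduce word count in plain Java.
    static Map<String, Integer> countWords(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {                       // "map" over input records
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum);  // "reduce" by key
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = { "to be or not to be" };
        System.out.println(countWords(lines)); // {be=2, not=1, or=1, to=2}
    }
}
```

In a real Hadoop or Spark job the map and reduce phases would run on partitioned data across a cluster; this single-JVM version only illustrates the shape of the computation.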
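The SQL skills listed above can also be sketched concretely. The HiveQL-style aggregation shown in the comment below (the `sales_orders` table and its columns are hypothetical examples, not part of any real schema) is mirrored in plain Java streams so the GROUP BY / HAVING semantics are visible and runnable without a Hive cluster:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByExample {
    record Order(String region, double amount) {}

    // Equivalent HiveQL (hypothetical table and columns):
    //   SELECT region, SUM(amount) AS revenue
    //   FROM sales_orders
    //   GROUP BY region
    //   HAVING SUM(amount) > 100;
    static Map<String, Double> revenueByRegion(List<Order> orders) {
        return orders.stream()
            .collect(Collectors.groupingBy(Order::region,        // GROUP BY region
                     Collectors.summingDouble(Order::amount)))   // SUM(amount)
            .entrySet().stream()
            .filter(e -> e.getValue() > 100)                     // HAVING clause
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
            new Order("east", 80.0), new Order("east", 40.0),
            new Order("west", 50.0));
        // "east" qualifies (120 > 100); "west" is filtered out by HAVING.
        System.out.println(revenueByRegion(orders));
    }
}
```

Against Hive itself the same query would run through Beeline or a HiveServer2 JDBC connection; the stream version here is only a semantic illustration.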