Senior Software Engineer - Big Data Services
Location: Pleasanton, California
Posted: November 30, 2017
Reference ID: JR-23221
Join our team and experience Workday!
It's fun to work in a company where people truly believe in what they're doing. At Workday, we're committed to bringing passion and customer focus to the business of enterprise applications. We work hard, and we're serious about what we do. But we like to have a good time, too. In fact, we run our company with that principle in mind every day: One of our core values is fun.
Welcome to the Big Data Platform Services team! We are responsible for Big Data platform services at Workday based on the Hadoop ecosystem as well as highly available and scalable data services related to persistence, messaging and caching. Our services are critical to the business and enable both customer-facing and internal applications to support analytics, reporting, anomaly detection, machine learning, and of course, big data storage and processing.
As a Senior Big Data Engineer, you'll be responsible for securing highly sensitive customer information at big data scale. You will provide multi-tenant clusters as a service to internal and external customers. You'll interact with engineers, product managers, and architects to provide scalable, robust technical solutions to business and data engineering challenges.
Bachelor's degree or higher; Computer Science major preferred.
5+ years of software engineering experience.
Proven track record of execution in a fast-paced environment.
Proficient in major development tools and Agile processes.
Strong verbal and written communication skills.
Technical Knowledge and Skills:
Must have a disciplined, methodical minimalist approach to designing and constructing layered software components that can be embedded within larger frameworks or applications.
Must have a detailed understanding of how Hadoop/Spark clusters work.
Must have hands-on experience with microservices deployment methodologies (Chef/Puppet/Ansible).
Must have hands-on experience implementing solutions using Spark, MapReduce, Hive, and Pig.
Must have a good understanding of cloud development and cloud APIs.
Must have a good understanding of enterprise security, including encryption at rest, authentication, and authorization.
Experience with commercial clouds such as Amazon EC2 and EMR, and/or hosted clouds.
Experience managing deployments with Kubernetes and Docker containers.
Experience building analytics for structured and unstructured data and managing large-scale data ingestion using technologies like Kafka/Flume/Avro/Thrift/Sqoop.
Proficient in Java/Scala programming and at least one scripting language, Ruby or Python preferred.
Experience with R, SAS, or similar analytics tools.
Contributions to Apache open source projects such as Oozie, HBase, or ZooKeeper.
Familiarity with data warehouse ETL processes.