Distributed Systems Engineer, Query Processing

  • Company: Workday
  • Location: San Mateo, California
  • Posted: November 15, 2017
  • Reference ID: JR-23076
Join our team and experience Workday!

It's fun to work in a company where people truly believe in what they're doing. At Workday, we're committed to bringing passion and customer focus to the business of enterprise applications. We work hard, and we're serious about what we do. But we like to have a good time, too. In fact, we run our company with that principle in mind every day: One of our core values is fun.

Job Description

As part of Workday's Prism Analytics team, you will design and implement techniques for high-performance query processing in support of big data analytics in the cloud. You will work with a top-notch team to implement features for parallel and distributed data processing engines. This includes developing efficient data structures and algorithms for massively parallel in-memory analytics, advanced techniques for distributed data processing, and integration with open-source distributed systems frameworks such as Spark. You will participate in the full lifecycle of software development and directly impact all of our customers.

About You

You're an engineer who is passionate about data management and distributed data processing frameworks and algorithms. The performance, scalability, and reliability of these algorithms are constantly on your mind, and you want to stay involved in these areas. You have the chops to build infrastructure that powers high-performance crunching of large volumes of data while keeping the overall product simple and easy to use. You enjoy the thrill of coming up with brilliant ideas and can articulate their value proposition to stakeholders, but you are most satisfied when you turn those ideas into solid, high-quality implementations that make customers successful.

Bonus points if you have looked under the hood of Spark and changed the internals of how the framework works. Optimizing Spark job completion time and scalability is part of the team's DNA, and we are looking to add engineers who will make the Spark engine as fast as a Ferrari.

Required Skills
  • 2+ years of industry experience building and delivering high-performance data processing engines
  • Excellent coding skills; expertise in Java, Scala, and Linux
  • Desire to figure out how things work
  • Good grasp of SQL and distributed data processing
  • Background in database internals, query processing, and distributed systems
  • BS or MS in Computer Science
