Join our team and experience Workday!
It's fun to work in a company where people truly believe in what they're doing. At Workday, we're committed to bringing passion and customer focus to the business of enterprise applications. We work hard, and we're serious about what we do. But we like to have a good time, too. In fact, we run our company with that principle in mind every day: one of our core values is fun.

Job Description
As part of Workday's Prism Analytics team, you will be responsible for designing and implementing techniques for high-performance query processing in support of big data analytics in the cloud. You will work with a top-notch team to architect and implement features for parallel and distributed data processing engines. This includes developing efficient data structures and algorithms for massively parallel in-memory analytics, advanced techniques for distributed data processing, and integration with open-source distributed systems frameworks like Hadoop and Spark. You will develop new capabilities to support simplified workflows for integrated data analysis and visualization and participate in the full lifecycle of software development.

About You
You're an engineer who is passionate about data management and about distributed data processing frameworks and algorithms. Performance, scalability, and reliability are not an afterthought for you. You have the chops to build the infrastructure that powers high-performance data crunching on large volumes of data while preserving the simplicity and ease of use of the overall product. You enjoy the thrill of coming up with brilliant ideas and can articulate their value proposition to stakeholders, but you are most satisfied when you turn those ideas into solid, high-quality implementations that make customers successful.

Required Skills
- A strong background in advanced database concepts, database internals, query processing, and distributed systems
- Several years of industry experience building and delivering high-performance data processing engines
- Experience with parallel and distributed data processing techniques and big data frameworks such as Hadoop and Spark
- A distinguished track record of innovation and of delivering on technically demanding projects
- Excellent coding skills, with expertise in Java, Scala, and Linux
- A strong grasp of SQL and BI concepts
- Strong troubleshooting skills and a willingness to help the field and customer support teams as needed
- The ability to collaborate effectively within and across cross-functional teams
- BS in Computer Science; MS or PhD preferred