Hadoop Application Service Reliability Engineer (HASRE)
Posted: December 17, 2016
Reference ID: 1216418923
LinkedIn was built to help professionals achieve more in their careers, and every day millions of people use our products to make connections, discover opportunities and gain insights. Our global reach means we get to make a direct impact on the world's workforce in ways no other company can. We're much more than a digital resume - we transform lives through innovative products and technology.
Searching for your dream job? At LinkedIn, we strive to help our employees find passion and purpose. Join us in changing the way the world works.
LinkedIn is a deeply data-driven company: data drives not only business decisions but also product features and direction. Data is embedded in LinkedIn's DNA. The Data Services team is looking to hire a Hadoop Application Service Reliability Engineer (HASRE). This team is responsible for building and maintaining the infrastructure that makes this data available and accessible to the entire company. The team works closely with data scientists, product managers, executives, and other key parts of the business to understand their data requirements and build systems that meet or exceed those needs.
The Data Services team is looking for someone with a strong background in data warehouse and Hadoop-based operations who has managed and administered multi-petabyte data warehouse deployments and is open to working with and learning cutting-edge technologies in this space. This is a mission-critical role that ensures our complex Hadoop data pipeline and related services are healthy, monitored, automated, and designed to scale. The ideal candidate will be passionate about an operations role that requires deep knowledge of both the application and the product, and will believe that automation is a key component of operating large-scale systems.
Responsibilities:
- Serve as the primary point of contact responsible for the overall health, performance, and capacity of our back-end Hadoop/Oracle/Teradata-based data warehouse environment
- Partner with data engineering, program management, site reliability operations, and other related groups
- Gain deep knowledge of our complex applications and data pipeline by working hands-on with the engineering team
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment
- Work closely with development teams to ensure that platforms are designed with "operability" in mind
- Perform root cause analysis of any identified issues
- Function well in a fast-paced, rapidly changing environment
- Participate in a 12x7 on-call rotation
Basic Qualifications:
- BS/BA degree in Computer Science or a related technical discipline, or equivalent practical experience
- 3+ years of experience in a technical operations role
- 2+ years of experience in networking, systems administration, and automation
- 2+ years of experience with a scripting language
Preferred Qualifications:
- 3+ years of experience with a large-scale Hadoop environment
- 2+ years of experience designing/implementing monitoring solutions across systems and applications
- 2+ years of experience in Python, Perl, or Java
- UNIX/Linux systems knowledge and/or a systems administration background
- Strong troubleshooting skills spanning systems, network, and application
- Demonstrated programming skills in one or more of Python, Perl, Ruby, Java, or C, specifically for systems automation
- Experience implementing customized monitoring solutions across systems, application, network, and business-impact levels
- Strong interpersonal communication skills (including listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other engineers, product managers, etc.
- Experience with serialization technologies (Avro, Protocol Buffers, etc.)
- Understanding of large-scale data processing technologies such as Pig and MapReduce
- Solid experience working with databases, including the ability to write and tune SQL queries
- Experience working in a large-scale data warehousing environment
- Knowledge of most of these: data structures, relational and nonrelational databases, networking, Linux internals, filesystems, web architecture, and related topics