Responsible for the design, development, and implementation of Big Data projects. Oversee, perform, and manage Big Data projects and operations. Resolve issues regarding development, operations, implementations, and system status. Research and recommend options for department direction on Big Data systems, automated solutions, and server-related topics. Manage and maintain all production and non-production Hadoop clusters.
MAJOR DUTIES AND RESPONSIBILITIES
- Manage Hadoop environments and perform installation, administration, and monitoring tasks
- Install and configure software updates and deploy application code releases to production and non-production environments
- Apply best practices in maintaining medium- to large-scale Hadoop clusters.
- Contribute to and maintain access and security administration.
- Implement and maintain backup and recovery strategies on Hadoop clusters based on current processes and procedures
- Support multiple clusters of medium complexity with multiple concurrent users, ensuring control, integrity, and accessibility of data.
- Install, configure, and maintain high-availability configurations
- Perform capacity planning for Hadoop clusters and provide recommendations to management to sustain business growth.
- Implement and maintain Disaster Recovery (DR) methodologies and create documentation.
- Contribute to multiple projects in addition to production support duties.
- Assist in creating and maintaining Standard Operating Procedures and templates.
- Proactively identify opportunities to implement automation and monitoring solutions
- Familiarity with setup and configuration of Cloudera CDH or Hortonworks HDP.
- Coordinate with Development, Network, Infrastructure, and other organizations necessary to get work done
- Participate in a 24x7 on-call pager rotation.
- Strong desire to learn a variety of technologies and processes with a "can do" attitude
REQUIRED QUALIFICATIONS
Skills / Abilities and Knowledge
- 6+ years of hands-on experience in handling large-scale distributed platforms and integration projects.
- 6+ years of experience with Linux / Windows, with basic knowledge of Unix administration
- 1+ years of experience administering Hadoop cluster environments and the tools ecosystem: Cloudera/Hortonworks/Sqoop/Pig/HDFS
- Experience across the Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark, and Spark Streaming, as well as NoSQL database knowledge such as HBase, Cassandra, and/or MongoDB
- Familiarity with Spark, Kerberos authentication/authorization, LDAP, and an understanding of cluster security
- Exposure to high availability configurations, Hadoop cluster connectivity and tuning, and Hadoop security configurations
- Expertise in collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
- Experience working with load balancers, firewalls, DMZs, and TCP/IP protocols.
- Understanding of Enterprise IT Operations practices for security, support, backup and recovery
- Good understanding of Operating Systems (Unix/Linux), Networks, and System Administration experience
- Good understanding of Change Management Procedures
- Experience with hardware selection, environment sizing and capacity planning
- Knowledge of Java, Python, Pig, Hive, or other languages a plus
- Ability to read, write, speak, and understand English in order to communicate with employees, customers, and suppliers in person, on the phone, and in writing, in a clear, straightforward, and professional manner
- Ability to communicate with all levels of management and company personnel
- Ability to handle multiple projects and tasks
- Ability to make decisions and solve problems while working under pressure
- Ability to prioritize and organize effectively
- Ability to show judgment and initiative and to accomplish job duties
- Ability to use a personal computer and software applications (e.g., word processing, spreadsheets)
- Ability to work independently
- Ability to work with others to resolve problems, handle requests or situations
- Ability to effectively consult with department managers and leaders
- BS in Information Technology, Computer Science, MIS or related field or equivalent experience.
- Experience working with RDBMS and Java
- Exposure to NoSQL databases such as MongoDB and Cassandra
- Experience with cloud technologies (AWS)
- Certification in Hadoop Operations or Cassandra is desired
WORKING CONDITIONS
Office environment
EOE Race/Sex/Vet/Disability
Charter is an equal opportunity employer that complies with the laws and regulations set forth in the following EEO Is the Law poster: http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf
Charter is committed to diversity, and values the ways in which we are different.
A little about us:
Spectrum is the nation’s fastest-growing TV, internet, and voice company. We’re committed to integrating the highest-quality service with superior entertainment and communications products.