Sr. Manager, Platform Data Engineer - Streaming Data Platforms
We are actively seeking highly creative and intellectually curious Technology Professionals who are passionate about leading-edge distributed computing technologies to join our team! This is an opportunity to display knowledge of your craft by having a hand in building large-scale, reliable applications and platforms that impact the way Capital One does business. You will play an integral part in advancing the culture of technical excellence within Capital One and creating experiences that delight millions of customers!
On any given day you will:
- Design end-to-end engineering solutions for business opportunities using existing on-premises or new Cloud-based Big Data technology platforms
- Tenaciously keep the Big Data infrastructure (Hadoop and peripheral tools) operational across various environments in datacenters & the AWS Cloud
- Build high-quality, scalable data streaming solutions using Kafka, Kinesis, Flume, Spark, Storm, etc.
- Work on the Apache Kafka platform that powers millions of events a day
- Deploy & manage solutions based on services offered by AWS & third-party suppliers
- Work with the team to build the Big Data platform solutions for different business needs
- Proactively monitor environments and drive troubleshooting and tuning
- Manage support functions for enterprise-class environments
- Demonstrate deep knowledge in OS, networking, Hadoop & Cloud technologies to troubleshoot issues
- Administer the streaming technology clusters and related tools that enable fast analysis of data in the AWS Cloud or in datacenter environments
- Evaluate & build compute frameworks across all technology tiers in the AWS Cloud
- Identify technical obstacles early and work closely with team to find creative solutions to build prototypes & develop deployment strategies, procedures, road maps etc.
- Investigate the impact of new technologies on the platform, Capital One users & customers, and recommend solutions
- Build prototypes for open source technology solutions
What we're looking for:
- Self-starter with the ability to drive team deliverables; experience managing and working with remote teams
- A polyglot engineer with a strong background in building and maintaining highly scalable distributed systems.
- Strong fundamentals in distributed systems design and development
- Solid understanding of basic systems operations (disk, network, operating systems, etc)
- Strong, demonstrable skills in Linux, Hadoop, systems automation, and AWS technologies
- Experience in deploying, managing & supporting scalable, highly available data services with Kafka, Kinesis, Hadoop, Spark, Storm & Flume in AWS Cloud and datacenter environments
- Ability to identify & define requirements and build & operate solutions on AWS or in the datacenter
- Strong verbal and written communication skills are required due to the dynamic nature of discussions with customers, vendors, and other engineering and product teams
- Curiosity. You ask why, you explore, you're not afraid to blurt out your crazy idea
- Do-er. You have a bias toward action, you try things, and sometimes you fail
- Expect to tell us what you’ve shipped and what’s flopped. We respect the hacker mentality
- Fearless. Interest in evangelism. Big, undefined problems don't frighten you. You can work at a tiny crack until you've broken open the whole nut
- Experience building and maintaining user-facing libraries and APIs is a big plus; for example, an open source library or an internal library used by other teams
- It would be awesome if you have a robust portfolio on GitHub and/or open source contributions of which you are proud to share!
Basic Qualifications:
- Bachelor’s Degree or Military Experience
- At least 5 years of experience in people management
- At least 2 years of experience deploying and managing AWS based data technology solutions
- At least 1 year of experience with streaming technologies such as ActiveMQ, Kafka, or Kinesis
- At least 2 years of experience in object-oriented programming languages such as Java, Python, or Scala
Preferred Qualifications:
- 3+ years of experience with Distributed Computing
- 3+ years with Hadoop & other Big Data Technologies
- 2+ years of experience operating & managing environments in the AWS Cloud
- 2+ years of experience designing & implementing Cloud Services
- 3+ years of experience designing, deploying, and administering enterprise-class Big Data platforms
At this time, Capital One will consider sponsoring a new applicant for employment authorization for this position.