1750 Tysons (12023), United States of America, McLean, Virginia
Senior Data Engineer
Do you want to work for a technology company that writes its own code, develops its own software, and builds its own products? We experiment and innovate using the latest technologies, engineer breakthrough customer experiences, and bring simplicity and humanity to banking. We make a difference for 65 million customers.
We are looking for bright, driven, and talented individuals to join our team of passionate and innovative software engineers. In this role, you’ll use your experience with Python/Java/Scala, Fast Data, Big Data, Streaming and Cloud technologies to build our next generation of Data capabilities.
- Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next generation Big Data & Fast Data applications, utilizing programming languages like Python, Java, Scala, and Open Source RDBMS and NoSQL databases like PostgreSQL, MongoDB, and Redshift
- Developing and deploying distributed computing Big Data applications using Open Source frameworks like Apache Spark, Flink and Kafka on AWS Cloud
- Implementing DevOps best practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Nexus, Ansible, Terraform, Git and Docker
- Courageous. Big, undefined problems excite you!
- You have a bias toward action, you try things, and sometimes you fail. Expect to tell us what you’ve shipped and what’s flopped.
- You are passionate about finding refined solutions to complex DevOps challenges and helping the entire team meet its commitments.
- You yearn to be a part of cutting-edge, high-profile projects and are motivated by delivering extraordinary solutions.
- You love learning new technologies and mentoring more junior developers.
- Humor and fun are a natural part of your flow.
- At least 3 years of professional work experience programming in Python, Java or Scala
- Build or hone skills working with Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, AVRO, MongoDB, Redshift, Lambda, or PostgreSQL
- Degree in Computer Science, Computer Engineering, Data Science or related discipline
- 2+ years of experience with the Hadoop Stack
- 2+ years of experience with distributed computing frameworks such as Apache Spark and Hadoop
- 2+ years of experience with Cloud computing (AWS a plus)
- Experience with Elasticsearch, PostgreSQL, Ansible, Flask, Docker, Cassandra, Jenkins, Spark, Git, Stash
- Familiarity with Agile engineering practices
At this time, Capital One will not sponsor a new applicant for employment authorization for this position.