Manager, Data Engineering (Machine Learning)
Do you want to be on the forefront of the next BIG thing in data? Are you charged by the thought of being involved at the ground level of an enterprise-wide Big Data transformation? Do you enjoy solving complex business problems in a fast-paced, collaborative, and iterative programming environment? If this excites you, then keep reading.
Capital One – a top 10 US bank that is on a quest to use technology to bring ingenuity, simplicity, and humanity to banking – needs software engineers to power its Big-Data transformation. We are looking for bright, driven, and talented individuals to join our team of passionate and innovative software engineers. In this role, you’ll use your experience with Java & Big Data technologies to work side-by-side with product owners and Agile team members in building our next generation of big data capabilities.
In this role, the candidate will develop machine learning, data mining, and statistical algorithms on the Hadoop platform. The position specializes in collecting, creating, cleansing, correlating, analyzing, and storing data; data is regularly correlated, curated, and integrated with other data sets and used in data visualizations. The candidate will also be responsible for integrating and cleansing data from multiple sources to ensure that the data is accurate and valuable.
The successful candidate will:
• Implement cutting-edge machine learning algorithms in Java and Python
• Build natural language processing systems
• Collect, track, and integrate multiple sources of data
• Design, develop, test, and deploy in an AWS architecture
• Query databases and perform statistical analysis using SAS
• Work with relational and Hadoop systems
• Apply solid software development experience
• Develop and implement algorithms and models in production
Qualifications:
- Bachelor’s Degree or military experience
- At least 2 years of experience developing machine learning algorithms using Java or Python
- At least 3 years coding in data management, data warehousing or unstructured data environments
- At least 3 years’ experience with big data technologies (Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, AVRO, MongoDB, Zookeeper or similar)
- 3 years’ in-depth experience with the Hadoop stack (MapReduce, Pig, Hive, HBase)
- 3+ years’ experience with NoSQL implementations (MongoDB, Cassandra, or similar) a plus
- 3+ years’ experience developing Java based software solutions
- 3+ years’ experience in at least one scripting language (Python, Perl, Shell)
- 3+ years’ experience developing software solutions to solve complex business problems
- 3+ years’ experience with Relational Database Systems and SQL
- 3+ years’ experience designing, developing, and implementing ETL processes
- 3+ years’ experience with UNIX/Linux including basic commands and shell scripting
Capital One will consider sponsoring a new qualified applicant for employment authorization for this position.