Master Data Engineer
Location: Riverwoods, Illinois
Posted: October 21, 2017
Reference: P180703
At Discover Financial Services you can make an impact. Whether it's developing corporate strategy, innovating new services, or supporting IT needs, every employee has the opportunity to be a vital part of our business and make a real difference in people's lives. It's the heart of what we do. Discover is one of the most recognized brands in U.S. financial services. We're a direct banking and payment services company built on a legacy of innovation and customer service. Our employees have always played a big part in our success. We support, challenge, and inspire employees to continually develop their skills, advance their careers, and help grow our business.

Discover Financial Services is seeking a Master Data Engineer to join our Enterprise Data Management team. Successful candidates will partner with the business to understand its data needs and build data pipelines using cutting-edge technologies like NiFi and Kafka. Additionally, you will explore the use of Java, Scala, and Spark APIs to enrich data for our data scientists. The ideal candidate will be passionate about Discover Financial Services' data and its mission.

Where you can make an impact:
  • Drive highly visible projects, including translating business and technology requirements into our ETL/ELT architecture.
  • Develop real-time data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, NiFi, Java, NoSQL databases, Hadoop, and AWS EMR.
  • Collaborate with cross-functional teams such as Hadoop Infrastructure, AWS cloud engineering, DBAs, and business teams.
  • Provide senior-level technical expertise to architect and implement our big data roadmap.
You are:
  • Highly productive: You deliver with minimal day-to-day oversight. This role is intended for someone who is passionate about data and about developing solutions that generate data insights.
  • Fiercely fast: Motivated to work hard in a fast-paced environment with other high-caliber engineers.
  • Skilled at multitasking and prioritization: You enjoy balancing numerous competing priorities and demands.
  • A learner: Passionate about learning through classroom training and self-discovery across a variety of emerging technologies.
  • A leader: You make sound decisions while gathering information and demonstrate leadership to management and junior-level staff.
Responsibilities:
  • Utilize multiple development languages and tools such as Python, Spark, HBase, Hive, and Java to build prototypes and evaluate results for effectiveness and feasibility.
  • Actively contribute to the Enterprise Decision & Data Management department's developer community to support technical initiatives and provide input into best practices
  • Build high-quality software for large-scale, highly available systems.
  • Develop real-time data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, NiFi, Java, NoSQL databases, and Hadoop (see the illustrative sketch after this list).
  • Develop custom data pipelines (AWS Cloud and locally hosted).
  • Work heavily within the Hadoop ecosystem and migrate data from Teradata to Hadoop.
  • Operationalize open source data-analytic tools for enterprise use.
  • Support deployed data applications and analytical models as a trusted advisor to data scientists and other data consumers, identifying data problems and guiding issue resolution with partner data engineers and data providers.
  • Provide subject matter expertise in the analysis and preparation of specifications and plans for the development of data processes.
  • Ensure proper data governance policies are followed by architecting and building data lineage, data quality, and data classification systems and frameworks.
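
For a sense of the day-to-day work described above, the sketch below is a minimal, illustrative Spark Structured Streaming job in Scala that reads events from a Kafka topic and lands them as Parquet. It is not Discover code; the broker address, topic name, and paths are hypothetical placeholders.

  // Illustrative sketch only: stream events from Kafka and land them as Parquet.
  // Broker, topic, and paths below are hypothetical placeholders.
  import org.apache.spark.sql.SparkSession

  object IngestSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("kafka-ingest-sketch")
        .getOrCreate()

      // Subscribe to a raw-events topic (placeholder name and broker).
      val raw = spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "raw-events")
        .load()

      // Kafka payloads arrive as binary; cast the value column to a string.
      val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

      // Write the stream as Parquet with checkpointing so the job can recover.
      val query = events.writeStream
        .format("parquet")
        .option("path", "/data/raw_events")
        .option("checkpointLocation", "/checkpoints/raw_events")
        .start()

      query.awaitTermination()
    }
  }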

Skills:
  • BS in Computer Science or related field, or professional experience in data movement.
  • 7+ years of data engineering experience working with structured and unstructured data.
  • Experience with SQL
  • Proficiency in scripting and programming languages such as Shell, Python, Scala, or Java is a must.
  • Well versed in Linux/Unix operating systems.
  • Good understanding of Big Data technology trends, with knowledge of technologies such as NiFi, Kafka, Spark, Hive, and Presto.
  • Experience with at least one ETL tool such as Ab Initio, Informatica, or DataStage.
  • Experience with BI tools such as Tableau.
  • Prior experience in the banking or financial domain.
  • Experience in Agile methodologies

Promote a risk-aware culture and ensure efficient and effective risk and compliance management practices by adhering to required standards and processes.

We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment on the basis of race, color, sex, age, national origin, religion, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class.

