Big Data Engineering Architect Consultant
Location: Remote
Posted: February 17, 2018
Reference: 00529120

  • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience
  • Minimum 1 year of architecting, implementing, and successfully operationalizing large-scale data solutions in production environments using the Hadoop and NoSQL ecosystem, on-premises or in the cloud (AWS, Google Cloud, or Azure), using relevant technologies such as NiFi, Spark, Kafka, HBase, Hive, Cassandra, EMR, Kinesis, BigQuery, Dataproc, Azure Data Lake, etc.
  • Minimum 1 year of architecting data and building performant data models at scale for the Hadoop/NoSQL ecosystem of data stores to support different business consumption patterns off a centralized data platform
  • Minimum 1 year of Spark/MapReduce/ETL processing experience, using Java, Python, Scala, or Talend, for data analysis of production Big Data applications
  • Minimum 1 year of architecting and industrializing data lakes or real-time platforms for an enterprise, enabling business applications and usage at scale
  • Minimum 2 years of designing and implementing relational data models working with RDBMSs, with an understanding of the challenges in these environments
  • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto and AtScale
  • Minimum 1 year of experience implementing large-scale BI/visualization solutions on Big Data platforms
  • Minimum 1 year of experience implementing large-scale secure cloud data solutions using AWS data and analytics services, e.g. S3, EMR, Redshift
  • Minimum 1 year of experience implementing large-scale secure cloud data solutions using Google data and analytics services, e.g. BigQuery, Dataproc
  • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for modern data platforms that use Hadoop and NoSQL, on-premises or on the AWS, Google, or Azure cloud
  • Minimum 1 year of experience securing Hadoop/NoSQL-based modern data platforms, on-premises or on the AWS, Google, or Azure cloud
  • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop or NoSQL technologies, on-premises or transitioning them to the AWS or Google cloud
  • Experience implementing data wrangling and data blending solutions to enable self-service, using tools such as Trifacta and Paxata
  • Minimum 1 year of industry systems development and implementation experience, OR minimum 2 years of experience in data loading, acquisition, storage, transformation, and analysis
  • Minimum 1 year of experience using ETL tools such as Talend or Informatica within a Big Data environment to perform large-scale, metadata-integrated data transformation
  • Minimum 1 year of experience building business catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies
Responsibilities include the following:

  • Create technical and operational architectures for Big Data solutions incorporating Hadoop, NoSQL, and other modern data technologies
  • Implement and deploy custom solutions/applications using Hadoop/NoSQL
  • Lead and guide implementation teams and provide technical subject matter expertise in support of the following:
    • Designing, implementing and deploying ETL to load data into Hadoop/NoSQL
    • Security implementation for Hadoop/NoSQL solutions
    • Managing data in Hadoop/NoSQL co-existing with traditional data technologies in a hybrid environment
    • Troubleshooting production issues with Hadoop/NoSQL
    • Performance tuning of a Hadoop/NoSQL environment
  • Architect and implement metadata management solutions around Hadoop and NoSQL in a hybrid environment