
Publicis Sapient Data Engineer Hiring Challenge: Register by Jan 30!

Applications are invited from eligible candidates for the Publicis Sapient Data Engineer Hiring Challenge.

About the Challenge

At Publicis Sapient, we believe that great things happen when great minds come together. For 30 years, our secret to success has remained just that: by enabling our people to do the work that matters to them, we have built an enduring culture of creative problem-solving.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it.

Eligibility

  • Candidates with 4+ years of experience can participate in the hiring challenge.
  • The candidate should:
    • have worked with Spark, Flink, or Apache Beam
    • have worked with Python or Scala as a coding language
    • have worked with any MPP database (Redshift, Snowflake, BigQuery, etc.)
    • be able to write complex SQL queries

Challenge Format

  • 10 MCQs
  • 1 Programming Question
  • 1 SQL Question 
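The actual challenge questions are not published, but the eligibility criteria call for "complex SQL queries", so the SQL question will likely involve constructs such as subqueries and window functions. A minimal illustration of that kind of query, run against a hypothetical `orders` table in an in-memory SQLite database:

```python
import sqlite3

# Illustrative only -- the table and data are made up, not from the challenge.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ordered_on TEXT);
INSERT INTO orders VALUES
  (1, 'alice', 120.0, '2022-01-05'),
  (2, 'alice',  80.0, '2022-01-12'),
  (3, 'bob',   200.0, '2022-01-07'),
  (4, 'bob',    50.0, '2022-01-20'),
  (5, 'carol',  90.0, '2022-01-09');
""")

# For each customer: total spend, and rank by spend (highest first),
# combining an aggregated subquery with a window function.
rows = conn.execute("""
SELECT customer,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank
FROM (
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
ORDER BY spend_rank;
""").fetchall()

for customer, total, rank in rows:
    print(customer, total, rank)
```

This style of query (aggregate, then rank or window over the aggregate) is a common pattern in MPP databases like Redshift, Snowflake, and BigQuery as well; SQLite is used here only because it ships with Python.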

Position Overview

Data Engineer

Experience: 4+ years
Compensation: Best in industry
Job Location: Gurgaon/Noida/Bangalore

Job Summary:

As a Senior Associate L1 in Data Engineering, you will own technical design and implement components for data engineering solutions.

You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.

Role & Responsibilities:

Your role focuses on the design, development, and delivery of solutions involving:

  • Data Ingestion, Integration and Transformation
  • Data Storage and Computation Frameworks, Performance Optimizations
  • Analytics & Visualizations
  • Infrastructure & Cloud Computing
  • Data Management Platforms
  • Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
  • Build functionality for data analytics, search and aggregation
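A toy sketch of the first of these responsibilities, batch ingestion from heterogeneous sources: the same logical records arrive as a CSV feed and a JSON-lines feed (both made up for this example, not Publicis Sapient's actual stack) and are normalized into one common schema.

```python
import csv
import io
import json

# Two hypothetical source feeds carrying the same kind of event records
# in different formats.
CSV_FEED = "user_id,event,ts\n1,login,1641000000\n2,purchase,1641000060\n"
JSONL_FEED = '{"uid": 3, "action": "login", "timestamp": 1641000120}\n'

def from_csv(text):
    # Parse the CSV feed and map it onto the common schema.
    for row in csv.DictReader(io.StringIO(text)):
        yield {"user_id": int(row["user_id"]),
               "event": row["event"],
               "ts": int(row["ts"])}

def from_jsonl(text):
    # Parse the JSON-lines feed; note the differing field names.
    for line in text.splitlines():
        rec = json.loads(line)
        yield {"user_id": rec["uid"],
               "event": rec["action"],
               "ts": rec["timestamp"]}

# Merge both sources into one batch, ordered by timestamp.
batch = sorted([*from_csv(CSV_FEED), *from_jsonl(JSONL_FEED)],
               key=lambda r: r["ts"])

print(len(batch), "records; first event:", batch[0]["event"])
```

In a production pipeline the same normalize-and-merge step would typically run on Spark, Flink, or Beam rather than plain Python, but the shape of the transformation is the same.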

Experience Guidelines: 

Mandatory Experience and Competencies:

  • Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies
  • Minimum 1.5 years of experience in Big Data technologies
  • Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and the other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
  • Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
  • Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

Preferred Experience and Knowledge (Good to Have):

  • Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)
  • Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
  • Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, or Solace; search and indexing; and microservices architectures
  • Performance tuning and optimization of data pipelines
  • CI/CD – infrastructure provisioning on the cloud, auto-build and deployment pipelines, code quality
  • Working knowledge of data-platform-related services on at least one cloud platform, including IAM and data security
  • Cloud data specialty and other related Big Data technology certifications

Personal Attributes: 

  • Strong written and verbal communication skills
  • Articulation skills
  • Good team player
  • Self-starter who requires minimal oversight
  • Ability to prioritize and manage multiple tasks
  • Process orientation and the ability to define and set up processes

How to Register?

Interested applicants can apply for the challenge through this link.

Registration Deadline

Jan 30, 2022

Click here to view the official notification for the Publicis Sapient Data Engineer Hiring Challenge.

You may also like these posts:

  1. 4 Crore 60 lakh free books by IIT Kharagpur
  2. Harvard University Courses - Now Available for Free
  3. 600+ Free Google Certificates & Badges (Updated January 2022)
  4. E-Pathshala: A Joint Initiative of Ministry of Education & NCERT (Students, Teachers, Educators & Parents Resources)
  5. NISHTHA - National Initiative for School Head's and Teacher's Holistic Advancement (Ministry of Education & NCERT Initiative)
  6. eBasta: Resources for Schools, Teachers & Students (A Digital India Initiative)

For more event & webinar updates like this, join our WhatsApp group:

https://contest360.blogspot.com/p/join-us.html

To promote your event on our website:

> please contact us here.
