Educational requirements: Bachelor's degree
English requirements: Competent English
Skilled employment experience required: 1-3 years
Required residence status: Temporary visa, Permanent resident, Citizen
Remote work: not accepted
Roles and Responsibilities
• Responsible for the design, development and maintenance of the data lake built on the Spark framework within the big data ecosystem (a minimal illustrative sketch follows this list).
• Create source-to-target mapping artefacts and present them in governance and approval forums.
• Translate business requirements into technical design documents.
• Track normal, recurring and critical batch failures at the application level and deliver permanent code fixes via problem records in the production environment.
• Provide support during application delivery windows and handle issues that arise during production deployments.
• Automate daily manual workarounds with shell scripts in the production environment to reduce unnecessary manual effort.
• Coordinate with offshore scrum teams to ensure project tasks are delivered on time and to the quality agreed with the customer.
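The minimal sketch below is not part of the role's codebase; the paths, table and column names are hypothetical. It only illustrates the style of Spark batch development these responsibilities refer to: reading raw data from HDFS, applying a simple source-to-target mapping, and writing to a curated zone of the data lake.

```scala
// Illustrative sketch only; all paths and column names are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyCustomerLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-customer-load")
      .getOrCreate()

    // Read raw source files from HDFS (placeholder path)
    val raw = spark.read.parquet("hdfs:///data/raw/customers/")

    // Apply a simple source-to-target mapping: rename a column and derive a load date
    val curated = raw
      .withColumnRenamed("cust_id", "customer_id")
      .withColumn("load_date", current_date())

    // Write to the curated zone of the data lake, partitioned by load date
    curated.write
      .mode("overwrite")
      .partitionBy("load_date")
      .parquet("hdfs:///data/curated/customers/")

    spark.stop()
  }
}
```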
Education Qualification, Experience and Expertise
• Bachelor's degree in engineering.
• At least 8-14 years of experience with the following technologies: Spark, Oracle 11g, Hive, HDFS, Hadoop.
• Scripting languages: Java, Python, Unix shell and Scala.
• Databases: Oracle, GraphDB, MongoDB, PL/SQL, Teradata.
• Tools: Jira, Collibra, Erwin, Confluence.
• Scheduling tools: Autosys.
• Expertise in Data Vault design concepts.
• Advanced understanding of ETL processes and practices, ideally having implemented an ETL system before.
• Advanced SQL skills: able to build solutions that are fit for purpose, perform well with large data volumes and complex transformation rules, and are reliable to operate.
• Strong knowledge of data structures and algorithms, including time-variant and dimensional models.
• Excellent communication, presentation and leadership skills.