Educational requirements: Bachelor's degree
English requirements: Competent English
Skilled employment experience required: 3-5 years
Required residence status: Temporary visa, Permanent resident, Citizen
Remote work accepted: No
Role Responsibilities:
• Build robust, efficient and reliable data pipelines that ingest and process data from diverse data sources (a brief illustrative sketch follows this list)
• Design and develop real-time streaming and batch processing pipeline solutions
• Design, develop and implement data pipelines for data migration and collection, data analytics and other data movement solutions
• Work with stakeholders and data analyst teams to assist with data-related technical issues and support their data infrastructure needs
• Collaborate with Architects to define the architecture and technology selection
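For illustration only, a minimal PySpark sketch of the kind of batch and streaming ingestion work described above. The bucket paths, Kafka broker, topic name and column names are hypothetical placeholders, not part of the role description, and the Kafka source assumes the spark-sql-kafka package is available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Batch pipeline: ingest raw CSV files, clean them, and land them in the data lake.
raw = spark.read.option("header", "true").csv("s3://example-lake/raw/orders/")  # hypothetical path
curated = (
    raw.dropDuplicates(["order_id"])                           # hypothetical key column
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").isNotNull())
)
curated.write.mode("append").partitionBy("order_date").parquet("s3://example-lake/curated/orders/")

# Streaming pipeline: read the same events from Kafka and append them continuously.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")     # hypothetical broker
         .option("subscribe", "orders")                        # hypothetical topic
         .load()
         .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-lake/streaming/orders/")
          .option("checkpointLocation", "s3://example-lake/checkpoints/orders/")
          .start()
)
query.awaitTermination()
```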
Let's talk about your Qualifications and Experience
• 3-8 years of experience
• At least 2 years of proven working experience as a Big Data Engineer, preferably building data lake solutions by ingesting and processing data from various source systems
• Understanding of SDLC processes, with experience working in a large-scale program, data warehousing and ETL development
• Experience working in a fast-paced Agile environment
• Experience in DevOps, Continuous Integration and Continuous Delivery principles to build automated pipelines for deployment and production assurance on the data platform
• Willingness to share knowledge and best practices with peers to promote better technical practices across the organisation
• Experience in building various frameworks for an enterprise data lake is highly desirable
• Experience in one or more cloud platforms - AWS, Azure, GCP and/or Snowflake
• Experience in one or more of Java, Scala, Python, Bash, Informatica, data integration, ETL, Hadoop, Spark, Hive, Teradata, Cloudera
• Strong foundation in data warehousing concepts and dimensional data modelling (see the sketch after this list)
• Understanding of the goals and risks associated with technical and business requirements, and the ability to align data solutions accordingly
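Purely as an illustration of the dimensional-modelling and ETL concepts listed above, the sketch below splits a staged orders dataset into a customer dimension and an orders fact table. The dataset path, column names and surrogate-key approach are assumptions for the example, not a prescribed design.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-star-schema").getOrCreate()

# Staged data assumed to exist in the curated zone (hypothetical path and columns).
staged = spark.read.parquet("s3://example-lake/curated/orders/")

# Customer dimension: one row per customer, with a generated surrogate key.
dim_customer = (
    staged.select("customer_id", "customer_name", "customer_segment")
          .dropDuplicates(["customer_id"])
          .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Orders fact table: measures plus a foreign key to the customer dimension.
fact_orders = (
    staged.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
          .select("order_id", "customer_sk", "order_date", "amount")
)

dim_customer.write.mode("overwrite").parquet("s3://example-lake/warehouse/dim_customer/")
fact_orders.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-lake/warehouse/fact_orders/")
```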