Basic Information
- Total Positions: 1
- Experience: 3-6 Years
- Job Type: Full-time
- Travel Required: Not Specified
- Minimum Education: Not Specified
- Salary Range: Not Specified (PKR)
- Gender: Any
- Max Age Limit: Not Specified
REQUIRED SKILLS
JOB DESCRIPTION
Zepto Systems
Work collaboratively with Data Scientists, Data Engineers, Data Analysts, DevOps engineers, Cloud Solutions Architects, and other stakeholders, including product owners and business analysts, to gather, analyse, and understand data engineering requirements
Build a range of data products and integrate and manage datasets from multiple external sources, including data extraction, data ingestion, and processing of large datasets
Design and develop methods to automate data ingestion from external sources (including different data vendors) into our AWS products, and create efficient ETL pipelines
Design, optimise, and implement data models and data schemas for different data sources and use cases
Build, design, refactor, and optimise AWS data lakes and AWS data warehouses for a variety of data sources
Work with a range of storage systems, including relational databases, NoSQL, and others
2-3 years’ experience in Data Engineering
Extensive experience with AWS services such as RDS, Lambda, Glue, Neptune, Athena, DMS, Redshift, and EC2, as well as machine learning tools
Strong core SQL skills, including stored procedures, batch jobs, and writing highly performant SQL code for our AWS products
Good working knowledge of any of the following: Python, Java, SQL
Experience with version control tools (e.g., Git)
Aptitude for, and interest in, working in a fast-paced environment
Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
Experience working with data ingestion from APIs, RSS feeds, and FTP
Experience working with either a MapReduce or an MPP system
Good understanding of data modelling and of data engineering tools such as Kafka, Spark, and Hadoop
Experience with graph databases
- Hours: 40 per week
- Industry: Information Technology
Posted Date: 22 Sep 2022
This job has expired.