Agilus is recruiting for a Data Engineer in the Oil and Gas industry in a hybrid work environment in Calgary, Alberta.
You will:
Apply your knowledge of Azure data engineering components to contribute to the development of Big Data and AI/ML solutions
Work closely with cross-functional teams of data, backend and frontend engineers, product owners, technical product owners, and technical support personnel
Be T-shaped: your primary area is data engineering, but you are comfortable working in a secondary area of expertise such as data presentation/visualization, backend engineering, or data modelling (SQL, NoSQL, graph, and time-series)
Apply your expertise in data and software engineering to design and implement data products that meet stringent requirements for scalability, reliability, maintainability, flexibility, auditability, and quality
Gain hands-on experience with technologies such as Elasticsearch, Apache Airflow, Apache Kafka, Apache Beam, Apache Spark, Hive, HDFS, and Kubernetes (OpenShift)
Gain hands-on experience with Microsoft Azure and its suite of tools
Set up, configure, and monitor CI/CD pipelines (Azure DevOps), the container platform (AKS), and various data tools (Databricks, Airflow, PostgreSQL)
Gain technical expertise in building a data platform at scale to solve business, product, and technical use cases
Successful candidates will have:
Experience pulling data from a variety of source types, including mainframe (EBCDIC), fixed-length and delimited files, and databases (SQL, NoSQL, time-series)
A minimum of 2-3 years of experience developing or deploying data pipelines with various data tools (Databricks, ADF, Airflow, Kafka, Key Vaults, GraphQL, PostgreSQL)
An undergraduate or Master’s degree in Computer Science or equivalent engineering experience
6+ years of professional software engineering and programming experience (Python; Angular or React for front-end web development) with a focus on designing and developing complex data-intensive applications
3+ years of experience in the architecture and design of complex systems (patterns, reliability, scalability, quality)
Soft skills:
Comfortable communicating with various stakeholders (technical and non-technical)
Ability to translate business requirements into technical architectures and designs
Experience in mentoring and leading junior engineers
Strong written and verbal communication skills
Total rewards:
Hybrid WFH Model
Our client is ranked among the top 100 Canadian employers
Opportunity for extension or permanent placement for the right candidate