AWS Data Engineer – L1/L2 Support Role
Job Overview
We are seeking a Level 1/2 AWS Data Engineer to join our dynamic team. This role is responsible for monitoring, troubleshooting, and optimizing AWS-based data pipelines. The ideal candidate will work with AWS Glue, Lambda, Step Functions, and other AWS data services to investigate job failures, re-run ETL jobs, and ensure data integrity. This position requires strong analytical thinking, problem-solving skills, and a solid understanding of AWS data engineering best practices.
Requirements
Bachelor’s degree in Computer Science, Data Science, Information Technology, or a related field.
At least 1 year of experience in AWS-based data engineering.
Hands-on experience with AWS Glue, S3, Athena, Lambda, Step Functions, and CloudWatch.
Strong Python (Pandas, Boto3) and SQL skills for data querying and processing.
Familiarity with PySpark and ETL development is a plus.
Basic knowledge of AWS IAM, security policies, and access control.
Experience in monitoring and debugging AWS Glue jobs, Step Functions, and Lambda logs.
Hands-on experience with data lake, Delta Lake, and lakehouse architectures.
Strong analytical and problem-solving skills to diagnose and resolve data processing failures.
Ability to work independently while collaborating effectively with cross-functional teams.
AWS certification such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect – Associate (preferred but not mandatory).
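To illustrate the kind of Python and CloudWatch log-debugging skill described above, here is a minimal sketch (the helper name and error markers are illustrative assumptions, not part of this posting). It scans log events shaped like those returned by the CloudWatch Logs `filter_log_events` API, which delivers each event as a dict with a `"message"` field:

```python
def summarize_error_events(events):
    """Return the messages of log events that look like failures.

    `events` is a list of dicts with a "message" key, the shape used by
    CloudWatch Logs filter_log_events responses. The markers below are a
    simple illustrative heuristic, not an exhaustive list.
    """
    markers = ("ERROR", "Exception", "Traceback")
    return [
        event["message"].strip()
        for event in events
        if any(marker in event["message"] for marker in markers)
    ]
```

In day-to-day L1 support, a helper like this would be fed the events pulled from a Glue or Lambda log group so that only the failure lines need manual review.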
Key Responsibilities
Monitor, troubleshoot, and support AWS data pipelines (Glue, Lambda, Step Functions, S3, Athena, Redshift, Airflow).
Investigate job failures, analyze CloudWatch logs, and take corrective actions to ensure pipeline continuity.
Re-run failed jobs and retry data ingestion processes while maintaining data consistency.
Assist in debugging Glue ETL scripts, Athena queries, and Lambda functions used for data transformations.
Collaborate with L2/L3 engineers and data architects to escalate and resolve complex issues.
Optimize data processing jobs by applying best practices in partitioning, bucketing, and indexing.
Maintain documentation of troubleshooting steps, common issues, and resolutions.
Ensure compliance with AWS security best practices (IAM roles, policies, and access controls).
Participate in incident response and data recovery procedures as required.
Provide technical support for production data pipelines, ensuring SLAs are met.
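The monitor-and-re-run duties above can be sketched with Boto3's Glue API (a minimal sketch; the function name and job identifiers are hypothetical, and in practice the client would come from `boto3.client("glue")`):

```python
def rerun_if_failed(glue, job_name, run_id):
    """Inspect one Glue job run; start a new run if it failed.

    `glue` is a Glue client (e.g. boto3.client("glue")). Uses the
    get_job_run and start_job_run operations. Returns the new run id
    when a retry was submitted, else None.
    """
    run = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]
    state = run["JobRunState"]  # e.g. SUCCEEDED, RUNNING, FAILED, TIMEOUT

    if state in ("FAILED", "TIMEOUT", "ERROR"):
        # Re-submit with the original job arguments so the retry
        # processes the same inputs and keeps data consistent.
        new_run = glue.start_job_run(
            JobName=job_name,
            Arguments=run.get("Arguments", {}),
        )
        return new_run["JobRunId"]

    return None  # run succeeded or is still in progress; nothing to do
```

An L1 engineer would typically call this after a CloudWatch alarm fires, then confirm in the Glue console or logs that the retried run reaches SUCCEEDED.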