
Data Engineer Intern
Age of Learning · Hybrid – Glendale, CA · $22–$27/hr
Summary
Data Engineer Intern role focused on designing and prototyping AI-powered tools to improve data engineering workflows and enhance team productivity at Age of Learning. The position offers hands-on experience with modern data stack tools and LLM-based approaches to solve real-world data challenges.
Key Responsibilities: Design, build, and prototype AI-powered tools for data engineering and analytics workflows including dbt development and query generation. Support development and optimization of end-to-end data pipelines, models, and transformations while collaborating with data engineers and stakeholders.
Skills & Tools: Strong SQL proficiency, Python or scripting language experience, and solid understanding of data modeling concepts required. Analytical problem-solving abilities, curiosity about AI/ML tools, and familiarity with modern data stack tools like dbt, Airflow, or Snowflake preferred.
Qualifications: Currently pursuing a degree in Computer Science, Data Science, Engineering, Mathematics, or related field from an accredited university. Must be available for 10-week summer internship with minimum 25 hours per week and ability to work in-person in Glendale, CA two days per week.
Location: Glendale, CA (Hybrid)
Compensation: $22.00 – $27.00/hour
Job Description
Responsibilities:
• Design, build, and prototype AI-powered tools from scratch to improve data engineering and analytics workflows (e.g., dbt development, query generation, documentation, testing, monitoring)
• Experiment with and evaluate AI/LLM-based approaches to accelerate development, improve data quality, and reduce manual effort
• Identify opportunities to create new internal tools or automations that enhance team productivity, and take them from idea → prototype → iteration
• Collaborate with data engineers, analysts, and stakeholders to turn ambiguous problems into practical, working solutions
• Support the development, optimization, and maintenance of end-to-end data workflows—including pipelines, data models, and transformations—while ensuring data quality, reliability, and clear documentation
Minimum Qualifications:
• Currently pursuing a degree in Computer Science, Data Science, Engineering, Mathematics, or a related field from an accredited university
• Must be available at least 25 hours per week for 10 weeks during the summer internship program
• Must be able to work in a hybrid environment, with the ability to work in person at our Glendale, CA headquarters a minimum of two days per week
• Strong SQL skills and basic understanding of data modeling concepts
• Some experience with Python or another scripting language
• Interest in AI/ML tools and a curiosity for applying AI to real-world problems
• Interest in modern data stack tools like dbt, Airflow, Snowflake, or similar (hands-on experience a plus)
• Strong analytical and problem-solving skills
• Detail-oriented, curious, and eager to learn
Preferred Qualifications:
• Exposure to cloud platforms (AWS, GCP, or Azure)
• Experience with AI IDEs (e.g., Cursor, Claude Code, Codex)
• Hands-on experience with AI tools and LLM APIs (e.g., OpenAI, LangChain)
• Experience building small tools, side projects, or automations (data or AI-related a plus)
• Builder mindset: enjoys creating, experimenting, and iterating on ideas
Total Compensation:
The estimated hourly range for this position is $22.00 to $27.00 USD. Pay may vary depending on job-related factors, including knowledge, skills, experience, and location.
