
Data Engineer

International Rescue Committee

Posted: Today
Experience Level: Entry level
Experience Length: 3 years

Job description & requirements

ABOUT THE COMPANY

The International Rescue Committee responds to the world’s worst humanitarian crises, helping to restore health, safety, education, economic wellbeing, and power to people devastated by conflict and disaster. Founded in 1933 at the call of Albert Einstein, the IRC is at work in over 40 countries and 26 U.S. cities helping people to survive, reclaim control of their future and strengthen their communities.

JOB SUMMARY

Experience
  • 3–6 years of hands-on experience in data engineering, analytics engineering, or a related technical role.
  • Demonstrated experience building or maintaining data pipelines in a professional setting.
  • Exposure to cloud-based data platforms, preferably Azure (Databricks, Data Factory, or Synapse).

Technical Skills (Required)

dbt:
  • Working knowledge of dbt model development, including staging and mart layers.
  • Familiarity with dbt tests, documentation, and source configurations.
  • Eagerness to deepen dbt skills, including incremental models and CI/CD integration.

Databricks:
  • Hands-on experience with Databricks notebooks and basic job/workflow setup.
  • Familiarity with Delta Lake concepts and Databricks SQL.
  • Exposure to PySpark for data transformation tasks.

SQL:
  • Solid SQL skills: joins, CTEs, window functions, aggregations, and basic performance awareness (see the sketch after this section).
  • Experience writing SQL for data transformation and validation in a cloud data warehouse.

Pipeline Engineering:
  • Experience building or supporting ELT pipelines with monitoring and basic data validation.
  • Familiarity with pipeline orchestration tools such as Azure Data Factory or Databricks Workflows.

Python:
  • Basic to intermediate Python skills for data processing, scripting, and automation.
  • Familiarity with PySpark is a plus.

Data Modeling:
  • Understanding of star/snowflake schemas and fact and dimension table concepts.
  • Exposure to Lakehouse or medallion architecture (Bronze/Silver/Gold) is a plus.

Soft Skills
  • Curious and eager to learn, with a proactive approach to problem-solving.
  • Good communication skills: able to collaborate across technical and non-technical teams.
  • Attention to detail and a strong sense of data quality.
  • Comfortable working in a collaborative, fast-paced, and remote team environment.

Preferred Additional Requirements
  • Experience with Databricks or Azure Synapse Analytics.
  • Familiarity with D365 CRM or similar data structures.
  • Exposure to Git-based workflows and CI/CD practices for data pipeline deployments.
  • Experience in a humanitarian, nonprofit, or international development context.
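To make the SQL expectations above concrete, here is a minimal sketch of a CTE combined with a window function, run from a Databricks notebook via spark.sql. The table and column names (silver.donations, donor_id, and so on) are hypothetical illustrations, not taken from the posting.

    from pyspark.sql import SparkSession

    # Minimal sketch: pick each donor's most recent donation using a CTE
    # and a ROW_NUMBER() window function. Schema names are hypothetical.
    spark = SparkSession.builder.getOrCreate()

    latest_donation_per_donor = spark.sql("""
        WITH ranked AS (
            SELECT
                donor_id,
                donation_date,
                amount,
                ROW_NUMBER() OVER (
                    PARTITION BY donor_id
                    ORDER BY donation_date DESC
                ) AS rn
            FROM silver.donations
        )
        SELECT donor_id, donation_date, amount
        FROM ranked
        WHERE rn = 1
    """)

    latest_donation_per_donor.show()

The same deduplication pattern can also be written with the DataFrame API, which is the usual entry point for the PySpark skills listed under Databricks.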

RESPONSIBILITIES

Pipeline Engineering & Orchestration
  • Build and maintain ELT data pipelines using Databricks Workflows and Azure Data Factory for batch and scheduled processing from internal and external sources.
  • Support the ingestion of data from key systems (e.g., D365 CRM, ServiceNow) into the Lakehouse.
  • Monitor pipeline execution, identify failures, and troubleshoot issues in collaboration with senior engineers.
  • Contribute to pipeline documentation and help maintain runbooks and process standards.

dbt Development
  • Develop and maintain dbt models across staging, intermediate, and mart layers under the guidance of senior team members.
  • Write dbt tests and contribute to source freshness checks to support data quality.
  • Learn and apply dbt best practices, including modular design, ref dependencies, and incremental model patterns.
  • Work with analysts and business teams to translate data requirements into dbt models.

SQL & Data Transformation
  • Write intermediate to advanced SQL for data extraction, transformation, and validation tasks.
  • Apply SQL techniques including joins, CTEs, window functions, and aggregations to support reporting and analytics needs.
  • Assist in query optimization and performance troubleshooting within Databricks SQL environments.
  • Support data model maintenance and help accommodate new source fields or schema changes.

Databricks & Cloud Platform
  • Develop and maintain Databricks notebooks and jobs for data transformation workloads.
  • Gain hands-on experience with Delta Lake concepts and PySpark for data processing.
  • Follow Lakehouse design patterns (Bronze/Silver/Gold) as defined by the Data Architect (a minimal sketch follows this list).
  • Support cloud resource management, including basic cluster configuration and job scheduling.

Collaboration & Learning
  • Actively collaborate with the Data Team on pipeline design, troubleshooting, and delivery.
  • Participate in code reviews and incorporate feedback to improve code quality.
  • Support documentation of processes, standards, and data flows.
  • Engage with Finance, FP&A, and other business teams to understand data needs and assist in solution delivery.
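As a rough illustration of the Lakehouse responsibilities above, here is a hypothetical Bronze-to-Silver step in PySpark: read raw D365 CRM contact records from a Bronze Delta table, standardize types, deduplicate, and write the result to Silver. All table and column names are illustrative assumptions, not the IRC's actual schema.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical Bronze -> Silver medallion step. Table and column
    # names (bronze.d365_contacts, contact_id, ...) are illustrative.
    spark = SparkSession.builder.getOrCreate()

    bronze = spark.read.table("bronze.d365_contacts")

    silver = (
        bronze
        .withColumn("created_on", F.to_timestamp("created_on"))  # string -> timestamp
        .withColumn("email", F.lower(F.trim("email")))           # normalize emails
        .dropDuplicates(["contact_id"])                          # one row per contact
    )

    (
        silver.write
        .format("delta")        # Delta Lake, the default table format on Databricks
        .mode("overwrite")
        .saveAsTable("silver.d365_contacts")
    )

In a scheduled Databricks Workflow or Azure Data Factory pipeline, a notebook like this would typically run as a single task, with the monitoring and validation checks described above wrapped around it.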

REQUIRED SKILLS

Programming, Data engineering, Big data and data analytics, Software architecture

REQUIRED EDUCATION

Diploma, Associate's degree


