Software Engineer, Data Infrastructure
AtomicJar
Location: Seattle, WA
Employment Type: Full time
Location Type: Remote
Department: Engineering
Compensation: US Salary Range $132K – $181.5K • Offers Equity
The salary range is a guideline and actual starting compensation will be determined by location, level, skills, and experience.
At Docker, we make app development easier so developers can focus on what matters. Our remote-first team spans the globe, united by a passion for innovation and great developer experiences. With over 20 million monthly users and 20 billion image pulls, Docker is the #1 tool for building, sharing, and running apps—trusted by startups and Fortune 100s alike. We’re growing fast and just getting started. Come join us for a whale of a ride!
Docker is seeking a Software Engineer to join our Data Infrastructure team and help drive the technical evolution of the data systems that power analytics across the entire company. As Docker continues to scale with millions of developers and thousands of enterprise customers globally, we need an engineer who can help design, build, and launch scalable data infrastructure that enables data-driven decision making across Product, Engineering, Sales, Marketing, Finance, and Executive teams.
This is a hands-on technical role focused on execution, learning, and individual contribution. You'll be responsible for implementing and maintaining robust data systems and pipelines following established technical standards and best practices. You'll work closely with senior engineers and cross-functional teams to understand requirements and deliver reliable data solutions.
Success in this role requires foundational technical skills in modern data platforms, a strong desire to learn system design, and the ability to execute on technical tasks with guidance. You'll play a vital role in the day-to-day operation and growth of Docker's data capabilities.
Responsibilities
Technical Contribution & Execution
Contribute to the design and implementation of highly scalable data infrastructure leveraging Snowflake, AWS, Airflow, DBT, and Sigma.
Implement and maintain end-to-end data pipelines supporting batch and real-time analytics across Docker's product ecosystem.
Follow and contribute to the technical standards for data quality, testing, monitoring, and operational excellence.
Hands-On Engineering & System Development
Design, build, and maintain robust data processing systems, focusing on data volume and latency requirements.
Implement data transformations and modeling using DBT for analytics and business intelligence use cases.
Develop and maintain data orchestration workflows using Apache Airflow under the direction of senior engineers (a minimal sketch of such a workflow follows this list).
Assist with optimizing Snowflake performance and cost efficiency.
Contribute to building data APIs and services to enable self-service analytics.
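To make the Airflow and DBT work above concrete, here is a minimal sketch of such a workflow: a daily DAG that runs DBT models and then DBT tests. The DAG name, schedule, and project path are illustrative assumptions, not details taken from this posting.

    # Minimal illustrative Airflow DAG: build DBT models, then test them.
    # All names, schedules, and paths are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_analytics_refresh",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        # Build the warehouse models (project path is illustrative).
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics",
        )

        # Validate the models before downstream consumers read them.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics",
        )

        dbt_run >> dbt_test

Running tests immediately after the build mirrors the data-quality emphasis elsewhere in this posting: bad models fail fast instead of reaching dashboards.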
Cross-Functional Collaboration & Requirements Engineering
Work with Product, Engineering, and Business teams to understand data requirements and translate them into technical tasks.
Support Data Scientists and Analysts by providing access to reliable, high-quality data.
Collaborate with business teams to deliver and maintain accurate reporting and operational dashboards.
Engage with Security and Compliance teams to support data governance implementation.
Technical Operations & Reliability
Assist with monitoring, alerting, and incident response for critical data systems.
Support the implementation of data quality frameworks and automated testing in data pipelines (see the sketch after this list).
Participate in performance optimization and cost management initiatives.
Contribute to troubleshooting and resolution of technical issues affecting data availability and accuracy.
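As one concrete example of the monitoring work above, the sketch below shows a simple freshness check against Snowflake using the Snowflake Python connector. The connection parameters, table, and column names (e.g., loaded_at) are hypothetical; a real check would live inside the team's quality framework.

    # Illustrative freshness check via the Snowflake Python connector.
    # Connection details, table, and column names are hypothetical.
    import snowflake.connector  # pip install snowflake-connector-python

    def check_freshness(conn, table: str) -> None:
        """Raise if `table` has received no rows in the last 24 hours."""
        cur = conn.cursor()
        try:
            cur.execute(
                f"SELECT COUNT(*) FROM {table} "
                "WHERE loaded_at >= DATEADD('day', -1, CURRENT_TIMESTAMP())"
            )
            (row_count,) = cur.fetchone()
        finally:
            cur.close()
        if row_count == 0:
            raise ValueError(f"{table}: no rows loaded in the last 24 hours")

    conn = snowflake.connector.connect(
        account="example_account",  # hypothetical credentials
        user="pipeline_bot",
        password="***",
        warehouse="ANALYTICS_WH",
    )
    check_freshness(conn, "analytics.events")  # hypothetical table

A check like this would typically run as a task in the same pipelines it guards, alerting on-call engineers when a source silently stops loading data.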
Learning & Technical Growth
Proactively learn technical skills, system design, and data engineering best practices from senior team members.
Participate in technical design reviews and provide feedback on documentation.
Actively contribute to team knowledge sharing and documentation efforts.
Qualifications
Required
Technical Expertise
2+ years of software engineering experience, preferably with a focus on data engineering or analytics systems.
Experience with a major cloud platform (AWS, GCP, or Azure), including its core storage and data services (e.g., S3, GCS).
Proficiency with SQL and experience with a cloud data warehouse (e.g., Snowflake, Redshift, BigQuery).
Familiarity with data transformation tools (e.g., DBT) and modern BI platforms (e.g., Sigma).
Familiarity with workflow orchestration tools (e.g., Apache Airflow, Dagster).
Proficiency in Python, Go, Kotlin, or another programming language commonly used in data engineering.
Familiarity with version control (Git) and modern software development practices (CI/CD).
System Understanding
Basic understanding of data warehousing concepts (e.g., dimensional modeling) and analytics architectures.
An eagerness to learn about distributed data systems and stream processing concepts.
Foundational knowledge of data quality and testing principles.
Collaboration & Communication
Strong communication and collaboration skills.
Ability to take direction and work effectively as part of a team.
A proactive attitude toward problem-solving and self-improvement.
Preferred
Experience in an internship or junior role at a technology company.
Knowledge of container technologies (Docker, Kubernetes).
Advanced degree in Computer Science, Data Engineering, or a related technical field.
Key Success Metrics
Successful completion of assigned data engineering projects and tasks.
Delivery of high-quality, reliable code for data pipelines.
Demonstrated technical growth and increasing independence.
Positive working relationships and collaboration with team members and stakeholders.
Impact You'll Make
As a Software Engineer on our Data Infrastructure team, you will contribute directly to the data foundation that powers Docker's product innovation and business intelligence. You will implement systems that enable teams across Docker to make data-informed decisions. Your work will directly support the scaling of Docker's data infrastructure as we continue to expand our product portfolio and serve customers globally.
You’ll have the opportunity to solve challenging technical problems while rapidly growing your technical skills and learning from experienced data engineers. Your contributions will help provide the insights and capabilities that enable millions of developers to build better software.
We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT. As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on April 13, 2024.
Please see the independent bias audit report covering our use of Covey here.
Perks
Freedom & flexibility; fit your work around your life
Designated quarterly Whaleness Days
Home office setup; we want you comfortable while you work
16 weeks of paid parental leave
Technology stipend equivalent to $100 net/month
PTO plan that encourages you to take time to do the things you enjoy
Quarterly, company-wide hackathons
Training stipend for conferences, courses, and classes
Equity; we are a growing start-up and want all employees to have a share in the success of the company
Docker Swag
Medical benefits, retirement and holidays vary by country
Docker embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our company will be.
Due to the remote nature of this role, we are unable to provide visa sponsorship.
#LI-REMOTE