Senior Data Engineer

  • Anywhere

About Glia

Our award-winning technology powers conversations with customers for some of the world’s largest enterprises. We believe that combining the human touch with technology is the best way to create amazing customer experiences. When human abilities such as problem-solving, creative thinking and relationship building are enhanced with technology… magical moments happen.

The work

  • At Glia, the data team (named Cerebro) is at the heart of all things data. You will design our next-generation data models and build our transformation layer, warehousing, and ingestion pipelines
  • Working in an established team of engineers, partnering with product teams, analysts, and stakeholders to bring business-critical data solutions to life
  • Working on our data solutions built on Snowflake, Redshift, Druid, S3, and dbt
  • Working on strategic long-term initiatives, implementing stream-based data ingestion pipelines on Kafka

The team’s setup

Team Cerebro is a dynamic cross-functional team that brings together skilled engineers and product managers and collaborates closely with data analysts to drive innovation and deliver high-impact solutions.
We have members in Tallinn and Tartu (Estonia), Poland, and Portugal, so our processes are optimized for remote collaboration.
We work in the Eastern European time zone (EET/EEST).

Our current tech stack

  • Coding languages: Elixir, Ruby, SQL, Python
  • Persistence solutions: Amazon RDS for PostgreSQL, Apache Druid, Snowflake, S3
  • Monitoring: DataDog
  • CI/CD: Jenkins
  • Infrastructure as Code: Terraform, Ansible
  • Other: AWS services, Kafka, dbt, Amazon QuickSight, GitHub

Note: We are constantly evolving our tech stack to ensure the usage of the right tools for specific needs, and you will be part of the process of choosing new technologies.

Candidate requirements

  • Practical experience with creating implementation-ready analytical data models based on domain/business data models
  • Proven track record of implementing data warehousing, ingestion, and transformation solutions
  • Extensive experience in technical leadership, planning, and hands-on implementation of data-related strategic initiatives and projects
  • Proactive, collaborative, open-minded and solution-oriented
  • Proficiency and recent experience in Redshift, Snowflake, dbt, Druid, Kafka or equivalent big data tech. (It’s way more about the concepts you’re confident in than the specific tech stack experience you have)
  • Programming languages – we are largely a Ruby & Elixir shop, but we welcome engineers from all backgrounds
  • Data infrastructure know-how (managing via Terraform) is a big plus


Benefits

  • Competitive salary
  • Professional development support (training, courses, conferences, books, etc.)
  • Access to all the latest tools and equipment you’ll need
  • Sports compensation and reimbursement for therapy and counseling sessions
  • Team events: annual employee awards, internal hackathons, and a dozen cool events ranging from cooking to the Glia Olympic Games 🙂
  • Generous referral bonuses
  • Diversity: 25 countries represented

*Glia is an equal-opportunity employer. Glia does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law.

The Glia Talent Acquisition team uses only official mailboxes for coordinating interviews, providing updates, and sending documents. Our hiring process involves an introduction, followed by practical and team interviews, a decision, and an offer. For more information, visit our Recruitment Privacy Notice page or contact our talent team via [email protected]
