
Senior Data Engineer

Beyond ONE · Lahore, Punjab, Pakistan

Onsite · Full-time · Senior level

About this role

We don’t think about job roles in a traditional way. We are anti-silo. Anti-career stagnation. Anti-conventional. 

Beyond ONE is a digital services provider radically reshaping the personalised digital ecosystems of consumers in high growth markets around the world. We’re building a digital services aggregator platform, with a strong telco foundation, and a profitable growth strategy that empowers users to drive their own experience—subscribe once, source from many, and only pay for what you actually use. 

Since being founded in 2021, we’ve acquired Virgin Mobile MEA, Friendi Mobile MEA and Virgin Mobile LATAM (with 6.5 million subscribers), and we now have 1,600 dedicated colleagues across Chile, Colombia, KSA, Kuwait, Mexico, Oman, Pakistan and the UAE. 

To disrupt for good takes a rebellious spirit, a questioning mind and a warm heart. We really care about how things get done, not who manages whom. We benefit from our diversity, and together, we disrupt the way we and others think about our lives for good. 

Do you want to exchange ideas, learn from each other and leave your mark on our journey? This is the place for you. 

Role Purpose

Why this role matters:

As a Senior Data Engineer I, you will act as the technical backbone of your squad, translating complex and ambiguous business requirements into robust data solutions. Your work will directly impact how data is ingested, processed, and consumed across product and business teams.

You will also play a critical role in stabilizing systems, mentoring engineers, and shaping engineering best practices within the team.

What success looks like:

In your first year, you will:

  • Design and deploy scalable batch and streaming data pipelines using modern data stack tools
  • Take ownership of both new data platform developments and existing system reliability and maintenance
  • Improve data availability, monitoring, and incident response processes across pipelines
  • Contribute to architectural decisions alongside senior engineers
  • Mentor junior and mid-level engineers, raising the overall technical bar of the team

Why this is for you:

If you enjoy solving complex data problems, working across both greenfield builds and legacy systems, and being the go-to technical problem solver within a squad, this role is for you.

This is an opportunity to work on high-scale data systems, influence architecture, and have direct impact on business outcomes.


Key Responsibilities

In this role, you will:

  • Own the end-to-end design, development, and deployment of scalable, production-grade data pipelines.
  • Build and maintain batch and streaming ingestion frameworks, ensuring reliability, performance, and data quality.
  • Drive operational stability through proactive monitoring, alerting, and root cause analysis (RCA) on incidents.
  • Collaborate with the team on data architecture and long-term technical direction.
  • Write clean, scalable, and well-documented code; champion engineering standards through rigorous code reviews.
  • Partner with Data Analysts and Product Managers to translate business requirements into effective technical solutions.
  • Maintain and optimize existing data systems, balancing new development with technical debt reduction.
  • Mentor junior engineers, actively supporting their growth in technical and architectural skills.

Qualifications & Attributes

We’re seeking someone who embodies the following:

Education: Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.

Experience: 4–8 years of experience in data engineering or related roles, with a proven track record of building and maintaining production-grade data systems in cross-functional environments.

Technical Skills

Must-haves:

  • Strong programming proficiency in Python and SQL, with hands-on scripting experience (Shell).
  • Deep experience with Apache Spark for large-scale data processing.
  • Hands-on experience building and orchestrating pipelines using Airflow.
  • Strong working knowledge of data warehousing systems (e.g., Vertica or similar) and data modeling concepts (OLAP/OLTP, dimensional modeling).
  • Experience with cloud environments (GCP preferred) and platforms like Databricks.
  • Proven ability to monitor, debug, and optimize complex data pipelines.

Nice-to-haves:

  • Familiarity with modern data stack tools such as dbt and Terraform.
  • Exposure to real-time and streaming data systems.
  • Experience with cost optimization and performance tuning in cloud environments.

Unique Attributes:

  • Thrives in ambiguous, fast-paced environments and brings clarity through structured thinking and decisive action.
  • Balances a strong ownership mindset with a collaborative approach to cross-functional problem-solving.
  • Comfortable navigating both greenfield innovation and the nuanced complexity of legacy system maintenance.

What we offer:

  • Rapid learning opportunities - we enable learning through flexible career paths and exposure to challenging, meaningful work that will help build and strengthen your expertise.
  • Hybrid work environment - flexibility to work from home 2 days a week in our UAE & Pakistan offices.
  • Healthcare and other local benefits offered in market.

By submitting your application, you acknowledge and consent to the use of Greenhouse & BrightHire during the recruitment process. This may include the storage and processing of your data on servers located outside your country of residence. For further information, please contact us at dataprivacy@beyond.one.
