What you'll do

We are seeking an experienced Data Engineer. The role involves designing ingestion pipelines, optimizing query performance, and ensuring data quality, governance, and cost efficiency at scale (50–300 TB workloads).

Key Responsibilities
  • Migration Strategy & Execution
    • Design and implement data ingestion pipelines to extract data from Oracle into GCS/Iceberg.
    • Migrate and modernize existing Oracle schemas, partitions, and materialized views into Iceberg tables.
    • Define CDC (Change Data Capture) strategies using custom ETL (a minimal extract sketch follows this list).
  • Data Lakehouse Architecture
    • Configure and optimize Trino clusters (coordinator/worker, Helm charts, autoscaling).
    • Design partitioning, compaction, and clustering strategies for Iceberg tables.
    • Implement schema evolution, time-travel, and versioning capabilities (see the Trino/Iceberg sketch after this list).
  • Performance & Cost Optimization
    • Benchmark Trino query performance against equivalent Oracle workloads (see the benchmark harness after this list).
    • Tune Trino/Iceberg for large-scale analytical queries, minimizing query latency and storage costs.
  • Data Quality, Metadata & Governance
    • Integrate Iceberg datasets with metadata/catalog services (PostgreSQL/Hive Metastore or AWS Glue).
    • Ensure compliance with governance, observability, and lineage requirements.
    • Define and enforce standards for unit testing, regression testing, and data validation (a reconciliation sketch follows this list).
  • Collaboration & Delivery
    • Support existing reporting workloads (regulatory reporting, DWH) during and after migration.
    • Document the architecture and migration steps, and provide knowledge transfer.
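
The sketches below illustrate the kind of work these responsibilities imply; none of the names, schemas, or endpoints in them come from the posting itself. First, the CDC bullet reduces to repeatable incremental extracts. Here is a minimal watermark-based extract from Oracle into Parquet on GCS, assuming python-oracledb and pyarrow are available; the orders table, updated_at column, and bucket path are hypothetical.

```python
# Watermark-based incremental extract: Oracle -> Parquet on GCS.
# All identifiers (orders, updated_at, the gs:// path) are hypothetical.
from datetime import datetime

import oracledb                 # python-oracledb thin client
import pyarrow as pa
import pyarrow.parquet as pq

def extract_increment(conn, last_watermark):
    """Pull rows changed since the last successful run."""
    cur = conn.cursor()
    cur.execute(
        "SELECT order_id, amount, updated_at "
        "FROM orders WHERE updated_at > :wm",
        wm=last_watermark,
    )
    cols = [d[0].lower() for d in cur.description]
    return pa.Table.from_pylist([dict(zip(cols, row)) for row in cur.fetchall()])

conn = oracledb.connect(user="etl", password="...", dsn="dbhost/ORCLPDB1")
batch = extract_increment(conn, datetime(2025, 1, 1))
# pyarrow resolves gs:// URIs via its GCS filesystem; persist the new
# watermark only after this write succeeds.
pq.write_table(batch, "gs://lake-raw/orders/increment.parquet")
```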
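
The architecture bullets translate into day-to-day Trino SQL against Iceberg. This sketch, using the trino Python client, shows hidden partitioning declared at table creation, compaction via the optimize procedure, metadata-only schema evolution, and snapshot time travel; catalog, schema, and table names are assumptions.

```python
# Iceberg table lifecycle driven through Trino, via trino-python-client.
# The catalog/schema/table ("iceberg"."analytics"."orders") are hypothetical.
import trino

conn = trino.dbapi.connect(host="trino-coordinator", port=8080, user="etl",
                           catalog="iceberg", schema="analytics")
cur = conn.cursor()

# Hidden partitioning on a transform of the timestamp column keeps
# queries partition-pruned without exposing a separate partition column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        order_id   BIGINT,
        amount     DECIMAL(18, 2),
        updated_at TIMESTAMP(6)
    )
    WITH (format = 'PARQUET',
          partitioning = ARRAY['day(updated_at)'])
""")

# Compaction: rewrite small files toward the given size threshold.
cur.execute("ALTER TABLE orders EXECUTE optimize(file_size_threshold => '128MB')")

# Schema evolution is a metadata-only change in Iceberg.
cur.execute("ALTER TABLE orders ADD COLUMN channel VARCHAR")

# Time travel: query the table as of an earlier point for audit checks.
cur.execute("SELECT count(*) FROM orders "
            "FOR TIMESTAMP AS OF TIMESTAMP '2025-01-01 00:00:00 UTC'")
print(cur.fetchone())
```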
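
For benchmarking, one plausible starting point is a small harness that replays representative queries against Trino and records wall-clock latency, to be compared against timings of the equivalent Oracle workload; the query text and coordinator host are placeholders.

```python
# Minimal benchmark harness: run each labeled query a few times against
# Trino and report the median wall-clock latency.
import statistics
import time

import trino

QUERIES = {
    "daily_totals": "SELECT day(updated_at), sum(amount) FROM orders GROUP BY 1",
}

def bench(cur, sql, runs=5):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        cur.execute(sql)
        cur.fetchall()          # drain results so timing covers full execution
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

conn = trino.dbapi.connect(host="trino-coordinator", port=8080, user="bench",
                           catalog="iceberg", schema="analytics")
cur = conn.cursor()
for name, sql in QUERIES.items():
    print(f"{name}: median {bench(cur, sql):.2f}s over 5 runs")
```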
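
And for data validation, a reconciliation check that compares row counts and a simple checksum between the Oracle source and the migrated Iceberg table is one common shape; again, every identifier is illustrative.

```python
# Row-count and checksum reconciliation between the Oracle source and
# the migrated Iceberg table. Connection details and names are illustrative.
import oracledb
import trino

ora = oracledb.connect(user="etl", password="...", dsn="dbhost/ORCLPDB1")
tri = trino.dbapi.connect(host="trino-coordinator", port=8080, user="etl",
                          catalog="iceberg", schema="analytics")

ora_cur = ora.cursor()
ora_cur.execute("SELECT COUNT(*), SUM(amount) FROM orders")
ora_count, ora_sum = ora_cur.fetchone()

tri_cur = tri.cursor()
tri_cur.execute("SELECT count(*), sum(amount) FROM orders")
tri_count, tri_sum = tri_cur.fetchone()

assert ora_count == tri_count, f"row count drift: {ora_count} vs {tri_count}"
# Oracle NUMBER and Trino DECIMAL may surface as different Python types;
# normalize before comparing checksums in a real validation suite.
print(f"sum(amount): oracle={ora_sum} trino={tri_sum}")
```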

Why we should decide on you

  • 5+ years of experience in data engineering.
  • Prior experience migrating financial/regulatory datasets.
  • Experience with Regulatory Reporting or similar enterprise workloads.
  • Familiarity with large-scale performance benchmarking and cost modelling.

Required Skills & Experience
  • Core Expertise:
    • Strong hands-on experience with Trino/Presto, Apache Iceberg, and Oracle SQL/PLSQL.
    • Proven experience with data lakehouse migrations at scale (50 TB+).
    • Proficiency with the Parquet file format.
  • Programming & Tools:
    • Solid coding skills in Java, Scala, or Python for ETL/ELT pipeline development.
    • Experience with pipeline orchestration and distributed processing frameworks (e.g., Spark).
    • Familiarity with CDC tools, JDBC connectors, or custom ingestion frameworks.
  • Cloud & DevOps:
    • Strong background in GCP (preferred) or AWS/Azure cloud ecosystems.
    • Experience with Kubernetes, Docker, and Helm charts for deploying Trino workers.
    • Knowledge of CI/CD pipelines and observability tools.
  • Soft Skills:
    • Strong problem-solving mindset with the ability to manage dependencies and shifting scopes.
    • Clear documentation and stakeholder communication skills.
    • Ability to work to tight delivery timelines with global teams.

Why you should decide on us

  • Let’s grow together: join a market-leading SaaS company whose agile character and culture of innovation enable you to shape our future.
  • We provide you with the opportunity to take on responsibility and participate in international projects. 
  • In addition to our buddy program, we offer numerous individual and wide-ranging training opportunities through which you can explore technical and functional areas.
  • Our internal mobility initiative encourages colleagues to transfer cross-functionally to gain experience and promotes knowledge sharing.
  • We are proud of our positive working atmosphere characterized by a supportive team across various locations and countries and transparent communication across all levels. 
  • Together we're better: meet your colleagues at our numerous team events.
To get a first impression, all we need is your CV; we look forward to meeting you in an in-person or virtual interview!
Recognizing the benefits of working in diverse teams, we are committed to equal employment opportunities regardless of gender, age, nationality, ethnic or social origin, disability, or sexual identity.
Are you interested? Apply now! 
https://www.regnology.net

Who are we?

Regnology is a technology leader on a mission to bring safety and stability to the financial markets. With an exclusive focus on regulatory reporting and more than 34,000 financial institutions, 60 regulators, international organizations, and tax authorities relying on our solutions, we are uniquely positioned to deliver better data quality, greater efficiency, and lower costs for all market participants. With over 850 employees in 15 countries and a unified data ingestion model, our customers can quickly derive value from our solutions and easily keep pace with regulatory change. Regnology was formed in 2021 when BearingPoint RegTech, a former business unit of the BearingPoint Group, joined forces with Vizor Software, a global leader in regulatory and supervisory technology.

Contact us