What you'll do
We are seeking an experienced Data Engineer. The role involves designing ingestion pipelines, optimizing query performance, and ensuring data quality, governance, and cost efficiency at scale (50–300 TB workloads).
Key Responsibilities
- Migration Strategy & Execution
- Design and implement data ingestion pipelines to extract data from Oracle into GCS/Iceberg.
- Migrate and modernize existing Oracle schemas, partitions, and materialized views into Iceberg tables.
- Define CDC (Change Data Capture) strategies using custom ETL.
- Data Lakehouse Architecture
- Configure and optimize Trino clusters (coordinator/worker, Helm charts, autoscaling).
- Design partitioning, compaction, and clustering strategies for Iceberg tables.
- Implement schema evolution, time-travel, and versioning capabilities.
- Performance & Cost Optimization
- Benchmark Trino query performance against existing Oracle workloads.
- Tune Trino/Iceberg for large-scale analytical queries, minimizing query latency and storage costs.
- Data Quality, Metadata & Governance
- Integrate Iceberg datasets with metadata/catalog services (Postgres/Hive Metastore or Glue).
- Ensure compliance with governance, observability, and lineage requirements.
- Define and enforce standards for unit testing, regression testing, and data validation.
- Collaboration & Delivery
- Support existing reporting workloads (regulatory reporting, DWH) during and after migration.
- Document architecture, migration steps, and provide knowledge transfer.
Why we should decide on you
- 5+ years of experience in data engineering.
- Prior experience migrating financial/regulatory datasets.
- Experience with Regulatory Reporting or similar enterprise workloads.
- Familiarity with large-scale performance benchmarking and cost modelling.
Required Skills & Experience
- Core Expertise:
- Strong hands-on experience with Trino/Presto, Apache Iceberg, and Oracle SQL/PL/SQL.
- Proven experience with data lakehouse migrations at scale (50 TB+).
- Proficiency with the Parquet file format.
- Programming & Tools:
- Solid coding skills in Java, Scala, or Python for ETL/ELT pipeline development.
- Experience with orchestration and distributed processing frameworks (e.g., Spark).
- Familiarity with CDC tools, JDBC connectors, or custom ingestion frameworks.
- Cloud & DevOps:
- Strong background in GCP (preferred) or AWS/Azure cloud ecosystems.
- Experience with Kubernetes, Docker, and Helm charts for deploying Trino workers.
- Knowledge of CI/CD pipelines and observability tools.
- Soft Skills:
- Strong problem-solving mindset with the ability to manage dependencies and shifting scope.
- Clear documentation and stakeholder communication skills.
- Ability to work within tight delivery timelines with global teams.
Why you should decide on us
- Let’s grow together: join a market-leading SaaS company whose agile character and culture of innovation enable you to shape our future.
- We provide you with the opportunity to take on responsibility and participate in international projects.
- In addition to our buddy program, we offer numerous individual and wide-ranging training opportunities in which you can explore technical and functional areas.
- Our internal mobility initiative encourages colleagues to transfer cross-functionally to gain experience and promotes knowledge sharing.
- We are proud of our positive working atmosphere, characterized by supportive teams across locations and countries and by transparent communication at all levels.
- Together we're better: meet your colleagues at our numerous team events.
To get a first impression, all we need is your CV, and we look forward to meeting you in a (personal or virtual) interview!
Recognizing the benefits of working in diverse teams, we are committed to equal employment opportunities regardless of gender, age, nationality, ethnic or social origin, disability, and sexual identity.
Are you interested? Apply now!
https://www.regnology.net
About us
Regnology is a leading international provider of innovative solutions for regulatory, risk, and supervisory technology (RegTech/RiskTech/SupTech), for AEOI and tax reporting, and for regulatory reporting services along the regulatory value chain. Regnology has been a partner to banks and regulators for 25 years. Until the end of 2020, the company was part of the BearingPoint group and operated under the name BearingPoint RegTech. Since the sale of the RegTech business to the private equity firm Nordic Capital, the company has been independent. In June 2021, the company merged with Vizor Software and recently changed its name to Regnology. In total, more than 7,000 firms, including banks, insurers, and financial service providers, use Regnology's reporting solutions. At the same time, more than 50 regulatory and tax authorities on five continents rely on the company's SupTech solutions to collect and analyze data from 34,000 firms in 60 countries. Regnology employs more than 770 people at 17 locations in 12 countries.
Any questions? Feel free to write to us at:
recruiting@regnology.net