Senior Data Engineer

Industry - Other
Level - Senior
  • Apache Spark
  • Azure
  • git
Employment - Full time
Work Model - Hybrid

Client - STARK

Our vision is to be the most trusted business-to-business distributor of heavy materials in Northern Europe. We take responsibility for the entire value chain in our industry by providing the best terms possible for our customers and suppliers.

Highlights

At STARK we have embarked on a digital journey with our own established development teams, involving our entire value chain from sourcing to sales and every IT system that helps us serve our customers better. We therefore require Data Engineers to help build the Data Platform upon which we will build the rest of our new IT landscape.

Lisbon, Portugal

Responsibilities

Your main responsibilities will be:
• Create and maintain data streams in STARK’s Azure-based, streaming-first data platform.
• Develop and maintain code to transform data using Kafka Streams, ksqlDB, and Spark.
• Explore and implement ways to enhance data quality and reliability.
• Collaborate with the Enterprise Architects, Solution Architects, and Technical Leads to establish the most effective way of processing data in an event-driven architecture.
• Support establishing the practices and ways of working for a data mesh architecture at STARK.
• Assist the business representatives in designing and implementing a shared, common data model that spans all STARK businesses.

Main challenges:
• A very ambitious project: the data platform is both STARK’s analytics platform and its integration platform.
• Many projects will rely on this integration platform, making it the central point for all of STARK’s data usage.
• The person hired for this role will have the opportunity to influence the overall architecture.
• A fully cloud-based platform.

Technologies and languages used in role:
- Azure stack (ADLS, Databricks (including Spark Structured Streaming), Event Hub, and more).
- Kafka (Confluent Cloud), Kafka Streams, ksqlDB.
- SQL.
- Python.
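
For illustration only, below is a minimal sketch of the kind of streaming job this stack implies: a PySpark Structured Streaming query on Databricks that reads a topic from Kafka (Confluent Cloud) and appends it to a Delta table on ADLS. The broker address, topic name, storage account, and paths are hypothetical placeholders, and Confluent Cloud authentication (SASL/JAAS) is omitted; this is not STARK’s actual configuration.

# Minimal sketch of a Spark Structured Streaming job (hypothetical names throughout).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Read the raw event stream from a Kafka topic on Confluent Cloud.
# Broker and topic are placeholders; SASL credentials are omitted here.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker>.confluent.cloud:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast them to strings before transformation.
events = raw.select(
    col("key").cast("string").alias("key"),
    col("value").cast("string").alias("payload"),
    col("timestamp"),
)

# Append the stream to a Delta table on ADLS, with a checkpoint for fault tolerance.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "abfss://data@<account>.dfs.core.windows.net/checkpoints/orders")
    .outputMode("append")
    .start("abfss://data@<account>.dfs.core.windows.net/raw/orders")
)

query.awaitTermination()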

Qualifications

Technical Requirements:

Must Have Skills

• 5+ years of working experience as a Data Engineer, Data Developer, DWH Developer, or in similar roles.
• Experience with relational databases and SQL.
• Experience with, or a general understanding of, at least one of the following three:

- Python (in data analytics/ data processing context);
- Spark (Databricks);
- Kafka (Confluent Cloud);

• Practical knowledge of Git.
• Good understanding of data & analytics trends and methodologies, ideally applied in a cloud environment (preferably Azure).

Good-to-have:
• CI/CD.

Other Requirements:

- Experience in building enterprise-wide data models.
- Broad understanding of Data Warehouse and BI methodologies and concepts.
- Ability to communicate professionally in English.

Education

Bachelor’s or Master’s (BSc/MSc) degree in computer science, engineering, or a related discipline, OR relevant years of experience in the required skills.

Remarks

http://www.stark.dk
