
YOUR TASKS

  • Designing and implementing distributed batch and stream processing

  • Creating data lakes and data pipelines

  • Developing and realizing Big Data infrastructures

  • Ensuring data security in a Cloud environment

YOUR PROFILE

  • Degree in computer science or many years of experience in data processing

  • Solid programming skills in at least one major language, preferably Java, Scala, or Python

  • Good understanding of software architectures, testing and application monitoring

  • Practical knowledge of developing Big Data projects

  • Excellent knowledge of relational databases and SQL, preferably also of NoSQL approaches

  • Experience with Git, Linux/Bash, and similar standards

OPTIONAL EXPERIENCE AND SKILLS

  • Distributed SQL engines (e.g. Apache Drill, Impala)

  • Large-scale data processing or graph cluster computing (e.g. Apache Spark, Apache Airflow, Dask)

  • Containers (e.g. Docker, Kubernetes)

  • Streaming/event-driven platforms (e.g. Apache Kafka)

  • Ideally experience with cloud services (e.g. Google Cloud)

  • German :)

BONUS (PICK 0 OR MORE)

  • Do you have an active GitHub profile?

  • Do you run a blog on data engineering?

Salary

We offer a salary of EUR 2,518 gross per month, based on experience level ST1 as set by the IT collective bargaining agreement. Negotiable based on your expertise.

About us

We are building our remote company with innovation in mind. We improve by learning and sharing with the world, and we constantly adapt our working environment to enjoy the years to come. Find out more about what it means to work with us.

 

Are you in?