Robin Jean

Big Data Engineer

Remote from Saint-Denis


Location and geographical scope

Saint-Denis, France
Remote only
Works remotely most of the time


Project length
  • Between 3-6 months
  • ≥ 6 months




Skills (16)

Robin in a few words

As a computer enthusiast, I chose to become a Computer Science engineer specializing in Big Data, with the aim of understanding and helping to build the world of tomorrow.

For 4 years, I have had the chance to actively participate in the implementation of two very different, business-critical distributed solutions at three of the largest French companies in the energy, banking and luxury sectors, as well as at a major telecommunications company in New Zealand.

My main tasks were:
- Designing and implementing data ingestion and data aggregation jobs (batch / streaming)
- Developing CI/CD pipelines to automate deployments
- Developing monitoring and alerting solutions for Hadoop resource usage

Along the way, I have acquired a range of hard skills:
- Big Data Technologies : Spark, Hadoop (YARN, HDFS, HBase), Nifi, Kafka, Cassandra, Solr
- Programming Languages : Java, Scala, Python
- CI/CD : Jenkins, GitLab CI
- Testing : Unit Test, Cucumber
- Monitoring : Elasticsearch, Kibana, Grafana
- Cloud : AWS, Azure

But also some useful soft skills:
- Teamwork and Agile methods (Kanban, Scrum)
- Communication
- Leadership

I am currently looking for a position as a Big Data Engineer in an open-minded, competent and motivated team with which I can progress and share good times.

Do not hesitate to get in touch if you think we could collaborate!


Total Direct Energie - Total

Energy & Utilities

Big Data Engineer

Paris, France

October 2019 - May 2020

- Implementation of Spark Streaming jobs in Scala
- Improvement of code performance and resource usage
- Leading the CI/CD pipelines development using Azure DevOps and Jenkins
- Implementation of Java REST web services

Spark NZ


Big Data Engineer

Auckland, New Zealand

October 2018 - May 2019

- Leading the implementation of the cluster monitoring and alerting solution
- Implementation of the Azure Databricks ingestion template flow used across all Spark teams

Kering

Luxury Goods

Big Data Engineer

Paris, France

January 2018 - September 2018

- Implementation of ingestion flows using Apache Nifi from {Kafka, REST API, SFTP, SQL Server, Microsoft Server, Oracle} to {Kafka, Cassandra, Solr, S3, Elasticsearch, Slack}
- Development of different Spark batch jobs in Scala
- Proactive implementation of GitLab and Jenkins pipelines to deploy a real use case across different environments

Société Générale

Banking & Insurance

Big Data Engineer

Paris, France

October 2016 - November 2017

1 external recommendation