Solar PV plants convert solar energy into electricity. While operating, they generate large amounts of data in the form of time series. The electrical variables describing the behavior of the plants are directly influenced by environmental variables. The electricity these plants generate is either self-consumed or transported and distributed to consumption points through the power grid and traded on the energy markets. In order to create services for our clients, we need to take advantage of the full data stream from PV and electricity stakeholders and make good use of this data.
The main goal of the Data Engineer is to contribute to expanding and optimizing our Data Platform, as well as optimizing data flow and collection for cross-functional teams. Examples of tasks include:
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with other roles and teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for the analytics and data science team members that help them build and optimize our product into an innovative industry leader.
Technical skills
Experience building and optimizing databases, data pipelines, data architectures and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
Experience supporting and working with cross-functional teams in a dynamic environment.
Advanced knowledge of and experience working with relational databases (PostgreSQL).
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with data pipeline and workflow management tools (Airflow preferred).
Experience with AWS cloud services: Glue, Athena, Lambda.
Experience with object-oriented/object function scripting languages (Python preferred; Go is a nice-to-have).
Personal skills
Responsibility: committed to their work.
Ability to work both independently and cohesively in a team environment.
Highly motivated and results-driven person, critical thinker.
Agile startup mindset and creative thinking; proactive, with a sense of anticipation.
Willingness to learn and constantly improve their skills.
Effective time management and ability to handle stress.
Sharing knowledge (training, pair programming, workshops, etc.).
Accepting failure as a way to improve and succeed.
Recruitment process
HR interview with our recruiter
Technical interview with our Tech Lead and a team member
Note that a technical test may be added to the recruitment process