
Data Engineer, Ssr/Sr
Do you want to be part of the team?


Information
What are we looking for in you?
Knowledge of the AWS ecosystem (Glue, Redshift, S3, Athena, Lambda) – mandatory.
Proficiency in SQL, ETL tools (PowerCenter), Python, and R.
Knowledge of Bash shell scripting.
Understanding of Hadoop and distributed environments.
Experience in developing and automating data ingestion from different sources (databases, files, APIs, Sqoop, Oozie).
Experience interacting with automation solutions.
Real-time ingestion and processing using Kafka and Spark Streaming.
General computational techniques.
Participation in meetings to contribute ideas and coordinate low- and medium-complexity tasks, both within the team and with other IT management areas.
Ability to propose improvements in administration and operations processes.
Openness to continuous learning and self-training.
Process optimization skills.
Ongoing or incomplete university studies, or a degree, in Systems Engineering, Computer Science, or related fields.
Responsibilities:
Enhance the data ecosystem through data warehouse processing and big data platforms.
Identify source data and select the most suitable tools based on the data structure and flow characteristics.
Validate technical documentation.
Manage data loading, transformations, and developments related to data generation.
Apply and execute best practices.
Optimize existing ETL processes.
Analyze business requirements to provide efficient solutions.
Troubleshoot incidents related to ETL, data, and associated developments.
Stay up to date with technology and related components.
Company name
IT Patagonia
Location
Buenos Aires, Argentina
Work arrangement
Hybrid
Days per week required in the office
2
Years of experience required
4
