About the job
Objective:
Works with data scientists, data engineers, and data architects to automate, optimize, and scale data products, ensuring the industrialization and performance of algorithms.
Responsibilities:
- Works with stakeholders to formulate code approaches to solve problems using algorithms and data sources in context of customer, engineering, and business needs.
- Will use data exploration techniques to discover new opportunities or to improve performance.
- Will interpret the results of analyses, validate approaches, and learn to monitor, analyse, and iterate to continuously improve and ensure that data products are scalable.
- Will engage with stakeholders to produce clear, compelling, and actionable insights that influence product and service improvements that will impact customers.
- Will also engage in the peer review process and act on feedback while learning innovative methods, algorithms, and tools to increase the impact and applicability of the data product.
- Will participate together with the DevOps team in choosing the best operational architecture.
- Will monitor the performance of ML models and implement the best technological solution within the operational architecture.
- Will work closely with the infrastructure team and develop HTTP APIs within architectures such as the Lambda architecture.
Qualifications:
- 5+ years of engineering experience with large data systems using SQL, Spark, PySpark, etc.
- 5+ years of experience using one or more programming or scripting languages, such as Python, Scala, PySpark, or C#, to work with data.
- 5+ years of experience using data science tools/environments such as Python, R, MATLAB, AMPL, SAS, and Jupyter Notebook.
- 5+ years of experience developing AI and ML pipelines for continuous operation, feedback, and monitoring of ML models, leveraging CI/CD automation best practices and DevOps principles.
- 2+ years of prior experience implementing HTTP REST APIs.
- 2+ years of experience working in Azure Cloud
- Be familiar with ML algorithms, AI use cases and applications.
- Knowledge of data engineering concepts, tools, and automation processes (DataOps), since data pipelines and architectures provide the foundation for building AI solutions on all-in-one platforms such as Databricks.
Skills and Competencies:
- Behavioural skills:
- Strong collaboration skills
- Objective-oriented
- Accountable; takes ownership and pride in their work
- Flexibility and adaptation to changing environments
- Ability to work in an agile Scrum project team
- Customer orientation
- Complex problem-solving and analytical skills
- Strong written and verbal communication skills
- Able to explain data product processes to software developers
- Continuous improvement and creativity
- Passion to learn world-class techniques and tools
- Ability to multi-task, prioritize, and be detail-oriented
- Other requirements:
- Degree in Computer Science, Statistics, Mathematics, Economics, or similar.
- A Master's degree in Artificial Intelligence is preferred.
- Fluent written and spoken English (C1 level).
- Knowledge of the insurance business and its business processes in all its areas (nice to have)
Working arrangement: hybrid – office in Turin or Milan + 12 days of remote work per month
This search is open to candidates of all genders (L. 903/77). Please read the Privacy Notice pursuant to Art. 13 of Regulation (EU) 2016/679 on data protection (GDPR).