
Data Sommelier

Episodes

S1E1
Apache Iceberg, toward a new standard for data storage? With Victor Coustenoble

S1E2
Discovering The Apache Software Foundation, with JB Onofré

S1E3
FinOps, stop the waste! Or: the best practices to put in place to optimize the costs of a data platform, with Matthieu Rousseau and Ismael Goulani

S1E4
A Lakehouse in a French cloud, affordable and built on interchangeable #opensource components: is it possible? With Vincent HEUSCHLING

S1E5
Talaxie, the Talend Open Studio fork. Jean Cazaux's initiative

S1E6
From PowerMart to IDMC, via PowerCenter: Christophe Fournel retraces the last 30 years of Informatica

S1E7
The return of 'Data Platforms'. Interview with Eric Mattern

S1E8
The Icehouse project with Victor Coustenoble: a fully managed Lakehouse platform combining the open source Trino query engine and Apache Iceberg.

S1E9
A look back at the Subsurface conference, organized by Dremio and held on May 2 and 3, 2024 in New York. Charly Clairmont takes the opportunity to remind us what Dremio is and to review its various use cases.

S1E10
Data governance is first and foremost an organizational matter! Daniel MALOT shares his field experience, describes the steps needed to carry out a governance project successfully, and introduces us to some aspects of his META ANALYSIS solution.

S1E11
Pierre Villard retraces the history of Apache NiFi, a true universal gateway for building data movement pipelines, in both batch and streaming modes.

S1E12
Streaming, a new way of thinking about application architecture and improving how data is used! Fred CECILIA observes that streaming imposes itself naturally once you have tried in vain to optimize existing batch jobs.

S1E14
Alexandre Guillemine from Foodles walks us through every step of his migration project from PostgreSQL to Snowflake!

S1E15
Amphi, an open source ETL for RAG, developed by Thibaut Gourdel!

S1E16
Cloudera, from the Big Data era to the AI era: interview with Denis Fraval

S1E17
DCP, the ClickOps self-service Data Platform, with a testimonial from EDF. Interview with Frederic Collin and Edouard Rousseaux

S1E18
What is Data Observability? With Mahdi Karabiben from Sifflet


The Definitive Guide to Data Integration

Covering essential concepts, techniques, and tools, this book is a compass for every data professional seeking to create value and transform their business.

Stéphane Heckel, Data Sommelier

1998, Ignition

My journey into the data integration world started in 1998, when the company where I served as a database consultant was acquired by an American software vendor specializing in this field. Back then, the idea of a graphical ETL solution seemed far-fetched; drawing lines with a mouse between source and target components to craft data movement interfaces for analytical applications appeared unconventional. We were accustomed to developing code in C++, ensuring the robustness and performance of applications. Data warehouses were fed through batch-mode SQL processes, with orchestration and monitoring managed in shell scripts.

The 3Vs and more!

Little did we anticipate that this low-code, no-code ETL solution would evolve into a standard embraced by global companies, marking the onset of the data integration revolution. The pace was swift¹. Growing data volumes, expanding sources to profile, operational constraints, and tightening deadlines propelled changes in data tools, architectures and practices. Real-time data integration, data storage, data quality, metadata and master data management, enhanced collaboration between business and technical teams through governance programs, and the development of cloud-based applications became imperative challenges for data teams striving for operational excellence.

Ready for the AI Era!

The past 25 years flashed by, and the revolution persists, keeping my passion for data ablaze. The rise of artificial intelligence, exemplified by the success of ChatGPT, necessitates vast data processing for model building. This, in turn, compels a deeper reliance on data engineering techniques. Authored by seasoned data professionals with extensive project deployments, this book offers a comprehensive overview of data integration. My sincere gratitude to them, Pierre-Yves, Emeric, Raphaël and Mehdi for crafting this invaluable resource! Covering essential concepts, techniques, and tools, this book is a compass for every data professional seeking to create value and transform their business. May your reading journey be as enjoyable as mine!

  1. The 3Vs of Big Data: Volume, Velocity, Variety ↩︎

DataOps 2025

By 2025, a Data Engineering team guided by DataOps practices and tools will be 10 times more productive than teams that do not use DataOps!

Gartner’s Strategic Planning Assumption

By 2025, one-half of organizations will have adopted a DataOps approach to their data engineering processes, enabling them to be more flexible and agile.

Ventana Research

Definition(s)

DataOps is an engineering methodology and set of practices for rapid, reliable, and repeatable delivery of production-ready data and operations-ready analytics and data science models.

Wayne Eckerson, Eckerson Group

Operationalizing Data Integration for constant change and continuous delivery¹

DataOps is a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and data consumers across an organization.

Gartner

DataOps is the new way of thinking about working with data. It provides practitioners like architects and developers an ability to onboard and scale data projects quickly, while giving operators and leaders visibility and confidence that the underlying engines are working well. It is a fundamental mindshift that requires changes in people, processes, and supporting technologies².

Data Operations (DataOps) is a methodology focused on the delivery of agile business intelligence (BI) and data science through the automation and orchestration of data integration and processing pipelines, incorporating improved data reliability and integrity via data monitoring and observability. DataOps has been part of the lexicon of the data market for almost a decade and takes inspiration from DevOps, which describes a set of tools, practices and philosophy used to support the continuous delivery of software applications in the face of constant changes.

Matt Aslet, Ventana Research

Gartner Key Findings

DataOps is becoming a necessity. Core capabilities include:

  • Orchestration
  • Observability
  • Test Automation
  • Deployment Automation
  • Environment Management
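The first three capabilities above can be sketched in a few lines of Python: explicit orchestration of pipeline stages, a fail-fast validation step standing in for test automation, and logging standing in for observability. This is a minimal illustration under assumed names and data; it does not reflect any particular DataOps tool.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def extract():
    # Hypothetical source; a real pipeline would read from a database or API.
    return [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -3.5}]

def validate(rows):
    # Test automation: fail fast on malformed data instead of loading it silently.
    assert all("id" in r and "amount" in r for r in rows), "schema check failed"
    return rows

def transform(rows):
    # Keep only positive amounts (illustrative business rule).
    return [r for r in rows if r["amount"] > 0]

def load(rows):
    # Observability: emit a metric/log line for each load.
    log.info("loading %d rows", len(rows))
    return len(rows)

def run():
    # Orchestration: an explicit, repeatable ordering of pipeline stages.
    return load(transform(validate(extract())))

if __name__ == "__main__":
    run()
```

In practice the same shape is expressed as a DAG in an orchestrator, with the validation step promoted to automated tests and the log line replaced by metrics shipped to an observability platform.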

Gartner Recommendations

  • Procure as a cost optimization solution
  • Understand the diverse market landscape and focus on a desired set of core capabilities
  • Prioritize single-pane-of-glass tools

Resources

  1. Source StreamSets ↩︎
  2. Source StreamSets ↩︎