Description
With the surge in big data and AI, organizations can rapidly create data products. However, the effectiveness of their analytics and machine learning models depends on the data's quality. Delta Lake's open source format offers a robust lakehouse framework over platforms like Amazon S3, ADLS, and GCS.

This practical book shows data engineers, data scientists, and data analysts how to get Delta Lake and its features up and running. The ultimate goal of building data pipelines and applications is to gain insights from data. You'll understand how your choice of storage solution determines the robustness and performance of the data pipeline, from raw data to insights.

You'll learn how to:

- Use modern data management and data engineering techniques
- Understand how ACID transactions bring reliability to data lakes at scale
- Run streaming and batch jobs against your data lake concurrently
- Execute update, delete, and merge commands against your data lake
- Use time travel to roll back and examine previous data versions
- Build a streaming data quality pipeline following the medallion architecture
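To give a concrete flavor of two of the capabilities listed above (the merge command and time travel), here is a minimal sketch using the open source delta-spark Python package with PySpark. It assumes delta-spark and pyspark are installed; the table path and the id/value columns are illustrative placeholders, not examples from the book.

```python
# Minimal sketch: write a Delta table, upsert into it with MERGE,
# then use time travel to read the pre-merge version.
# Assumes pip-installed pyspark and delta-spark; path/columns are placeholders.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable

builder = (
    SparkSession.builder.appName("delta-quickstart")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/delta/events"  # placeholder location

# Create version 0 of the table.
spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"]) \
    .write.format("delta").mode("overwrite").save(path)

# MERGE: upsert new and changed rows, producing version 1.
updates = spark.createDataFrame([(2, "b2"), (3, "c")], ["id", "value"])
(
    DeltaTable.forPath(spark, path)
    .alias("t")
    .merge(updates.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Time travel: read the table as it looked before the merge.
original = spark.read.format("delta").option("versionAsOf", 0).load(path)
original.show()
```

The same transaction log that enables the MERGE above is what makes the versioned read possible: each commit produces a new table version that can be queried or rolled back to later.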