Description
Harness the power of Apache Arrow to optimize tabular data processing and develop robust, high-performance data systems with its standardized, language-independent columnar memory format
Key Features:
- Explore Apache Arrow's data types and integration with pandas, Polars, and Parquet (sketched in the example after this list)
- Work with Arrow libraries such as Flight SQL, Acero compute engine, and Dataset APIs for tabular data
- Enhance and accelerate machine learning data pipelines using Apache Arrow and its subprojects
- Purchase of the print or Kindle book includes a free PDF eBook
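As a taste of the pandas and Parquet integration mentioned above, here is a minimal PyArrow sketch; the column names and the example.parquet file name are illustrative assumptions, not code from the book:

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Build an Arrow table with explicit columnar types
table = pa.table({
    "id": pa.array([1, 2, 3], type=pa.int64()),
    "name": pa.array(["alice", "bob", "carol"], type=pa.string()),
})

# Round-trip through pandas without changing the schema
df = table.to_pandas()
table_again = pa.Table.from_pandas(df)

# Persist the same data as Parquet and read it back
pq.write_table(table, "example.parquet")
restored = pq.read_table("example.parquet")
print(restored.schema)
```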
Book Description:
Apache Arrow is an open source, columnar in-memory data format designed for efficient data processing and analytics. This book harnesses the author's 15 years of experience to show you a standardized way to work with tabular data across various programming languages and environments, enabling high-performance data processing and exchange.
This updated second edition gives you an overview of the Arrow format, highlighting its versatility and benefits through real-world use cases. It guides you through enhancing data science workflows, optimizing performance with Apache Parquet and Spark, and ensuring seamless data translation. You'll explore data interchange and storage formats, and Arrow's relationships with Parquet, Protocol Buffers, FlatBuffers, JSON, and CSV. You'll also discover Apache Arrow subprojects, including Flight, Flight SQL, Arrow Database Connectivity (ADBC), and nanoarrow. You'll learn to streamline machine learning workflows, use Arrow Dataset APIs, and integrate with popular analytical data systems such as Snowflake, Dremio, and DuckDB. The latter chapters provide real-world examples and case studies of products powered by Apache Arrow, offering practical insights into its applications.
By the end of this book, you'll have all the building blocks to create efficient and powerful analytical services and utilities with Apache Arrow.
What You Will Learn:
- Use Apache Arrow libraries to access data files, both locally and in the cloud
- Understand the zero-copy elements of the Apache Arrow format
- Improve the read performance of data pipelines by memory-mapping Arrow files (see the memory-mapping sketch after this list)
- Produce and consume Apache Arrow data efficiently by sharing memory with the C API
- Leverage the Arrow compute engine, Acero, to perform complex operations (see the aggregation sketch after this list)
- Create Arrow Flight servers and clients for transferring data quickly (see the Flight sketch after this list)
- Build the Arrow libraries locally and contribute to the community
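For the memory-mapping item, here is a small illustrative PyArrow sketch; the data.arrow file name and the sample data are assumptions, not from the book:

```python
import pyarrow as pa
import pyarrow.ipc as ipc

table = pa.table({"x": list(range(1_000_000))})

# Write the table to an Arrow IPC file on disk
with pa.OSFile("data.arrow", "wb") as sink:
    with ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# Memory-map the file: the record batches read below reference the
# mapped pages directly instead of being copied into process memory
with pa.memory_map("data.arrow", "r") as source:
    loaded = ipc.open_file(source).read_all()
    print(loaded.num_rows)
```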
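For the Acero item, PyArrow exposes compute kernels and grouped aggregations that, in recent releases, are driven by Acero under the hood; a hedged sketch with made-up sample data:

```python
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({
    "region": ["east", "east", "west", "west"],
    "sales": [10.0, 20.0, 5.0, 7.5],
})

# Filter rows with a compute kernel, then run a grouped aggregation
mask = pc.greater(table["sales"], 6.0)
filtered = table.filter(mask)
result = filtered.group_by("region").aggregate([("sales", "sum")])
print(result.to_pydict())
```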
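And for Arrow Flight, a toy server sketch in PyArrow; the grpc://0.0.0.0:8815 location and the in-memory table are illustrative assumptions, not the book's example:

```python
import pyarrow as pa
import pyarrow.flight as flight

class TinyFlightServer(flight.FlightServerBase):
    """Serves a single in-memory table for any ticket (toy example)."""

    def __init__(self, location="grpc://0.0.0.0:8815"):
        super().__init__(location)
        self._table = pa.table({"x": [1, 2, 3], "y": ["a", "b", "c"]})

    def do_get(self, context, ticket):
        # Stream the stored table back as Arrow record batches
        return flight.RecordBatchStream(self._table)

if __name__ == "__main__":
    server = TinyFlightServer()
    # A client in another process could fetch the data with:
    #   client = flight.connect("grpc://localhost:8815")
    #   table = client.do_get(flight.Ticket(b"anything")).read_all()
    server.serve()
```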
Who this book is for:
This book is for developers, data engineers, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. Whether you're building utilities for data analytics and query engines or full pipelines for tabular data, this book can help you regardless of your preferred programming language. A basic understanding of data analysis concepts is helpful, but not necessary. Code examples are provided in C++, Python, and Go throughout the book.
Table of Contents
- Getting Started with Apache Arrow
- Working with Key Arrow Specifications
- Format and Memory Handling
- Crossing the Language Barrier with the Arrow C Data API
- Acero: A Streaming Arrow Execution Engine
- Using the Arrow Datasets API
- Exploring Apache Arrow Flight RPC
- Understanding Arrow Database Connectivity (ADBC)
- Using Arrow with Machine Learning Workflows
- Powered by Apache Arrow
- How to Leave Your Mark on Arrow
- Future Development and Plans