PyPDFLoader

This notebook provides a quick overview for getting started with the PyPDF document loader. For detailed documentation of all DocumentLoader features and configurations, head to the API reference.

Overview

Integration details

Class | Package | Local | Serializable | JS support
PyPDFLoader | langchain_community

Loader features

Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables
PyPDFLoader

Setup

Credentials

No credentials are required to use PyPDFLoader.

If you want to get automated best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

Install langchain_community and pypdf.

%pip install -qU langchain_community pypdf
Note: you may need to restart the kernel to use updated packages.

Initialization

Now we can instantiate our loader object and load documents:

from langchain_community.document_loaders import PyPDFLoader

file_path = "./example_data/layout-parser-paper.pdf"
loader = PyPDFLoader(file_path)
API Reference: PyPDFLoader

Load

docs = loader.load()
docs[0]
Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'author': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'subject': '', 'title': '', 'trapped': '/False', 'source': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'page': 0, 'page_label': '1'}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (\x00 ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2  [cs.CV]  21 Jun 2021')
import pprint

pprint.pp(docs[0].metadata)
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': '/False',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0,
'page_label': '1'}

Lazy Load

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)

        pages = []
len(pages)
6
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA 11
focuses on precision, efficiency, and robustness. T
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': '/False',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 10,
'page_label': '11'}

The metadata attribute contains at least the following keys:

  • source
  • page (if in page mode)
  • total_pages
  • creationdate
  • creator
  • producer

Additional metadata is specific to each parser. These pieces of information can be helpful (to categorize your PDFs, for example).
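
For example, here is a minimal, purely illustrative sketch (reusing the docs loaded above) that groups pages by their producer metadata:

from collections import defaultdict

# Group the loaded pages by the "producer" metadata key.
pages_by_producer = defaultdict(list)
for doc in docs:
    pages_by_producer[doc.metadata.get("producer", "unknown")].append(doc)

for producer, grouped in pages_by_producer.items():
    print(producer, len(grouped))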

Splitting mode & custom pages delimiter

When loading the PDF file, you can split it in two different ways:

  • By page
  • As a single text flow

By default, PyPDFLoader splits the PDF by page.

Extract the PDF by page. Each page is extracted as a langchain Document object:

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': '/False',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0,
'page_label': '1'}

In this mode the PDF is split by page, and the resulting Documents' metadata contains the page number. But in some cases we may want to process the PDF as a single text flow (so we don't cut some paragraphs in half). In this case you can use the single mode:

Extract the whole PDF as a single langchain Document object:

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': '/False',
'source': './example_data/layout-parser-paper.pdf',
'total_pages': 16}

Logically, in this mode the per-page metadata (the 'page' and 'page_label' keys) disappears. Here's how to clearly identify where pages end in the text flow:

Add a custom pages_delimiter to identify where pages end in single mode:

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (� ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
-------THIS IS A CUSTOM END OF PAGE-------
2 Z. Shen et al.
37], layout detection [38, 22], table detection [ 26], and scene text detection [ 4].
A generalized learning-based framework dramatically reduces the need for the
manual specification of complicated rules, which is the status quo with traditional
methods. DL has the potential to transform DIA pipelines and benefit a broad
spectrum of large-scale document digitization projects.
However, there are several practical difficulties for taking advantages of re-
cent advances in DL-based methods: 1) DL models are notoriously convoluted
for reuse and extension. Existing models are developed using distinct frame-
works like TensorFlow [1] or PyTorch [ 24], and the high-level parameters can
be obfuscated by implementation details [ 8]. It can be a time-consuming and
frustrating experience to debug, reproduce, and adapt existing models for DIA,
and many researchers who would benefit the most from using these methods lack
the technical background to implement them from scratch. 2) Document images
contain diverse and disparate patterns across domains, and customized training
is often required to achieve a desirable detection accuracy. Currently there is no
full-fledged infrastructure for easily curating the target document image datasets
and fine-tuning or re-training the models. 3) DIA usually requires a sequence of
models and other processing to obtain the final outputs. Often research teams use
DL models and then perform further document analyses in separate processes,
and these pipelines are not documented in any central location (and often not
documented at all). This makes it difficult for research teams to learn about how
full pipelines are implemented and leads them to invest significant resources in
reinventing the DIA wheel .
LayoutParser provides a unified toolkit to support DL-based document image
analysis and processing. To address the aforementioned challenges,LayoutParser
is built with the following components:
1. An off-the-shelf toolkit for applying DL models for layout detection, character
recognition, and other DIA tasks (Section 3)
2. A rich repository of pre-trained neural network models (Model Zoo) that
underlies the off-the-shelf usage
3. Comprehensive tools for efficient document image data annotation and model
tuning to support different levels of customization
4. A DL model hub and community platform for the easy sharing, distribu-
tion, and discussion of DIA models and pipelines, to promote reusability,
reproducibility, and extensibility (Section 4)
The library implements simple and intuitive Python APIs without sacrificing
generalizability and versatility, and can be easily installed via pip. Its convenient
functions for handling document image data can be seamlessly integrated with
existing DIA pipelines. With detailed documentations and carefully curated
tutorials, we hope this tool will benefit a variety of end-users, and will lead to
advances in applications in both industry and academic research.
LayoutParser is well aligned with recent efforts for improving DL model
reusability in other disciplines like natural language processing [ 8, 34] and com-
puter vision [ 35], but with a focus on unique challenges in DIA. We show
LayoutParser can be applied in sophisticated and large-scale digitization projects
-------THIS IS A CUSTOM END OF PAGE-------
LayoutParser: A Unified Toolkit for DL-Based DIA 3
that require precision, efficiency, and robustness, as well as simple and light

This could simply be \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> for seamless injection into a Markdown viewer without a visual effect.
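
As a quick illustration, a custom delimiter also makes it easy to recover page boundaries from the single text flow. A minimal sketch, reusing the docs loaded just above:

# Split the single text flow back into per-page chunks on the custom delimiter.
page_texts = docs[0].page_content.split(
    "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
)
print(len(page_texts))  # one chunk per page of the source PDF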

Extract images from the PDF

You can extract images from your PDFs with a choice of three different solutions:

  • rapidOCR (lightweight Optical Character Recognition tool)
  • Tesseract (OCR tool with high precision)
  • Multimodal language model

You can tune these functions to choose the output format of the extracted images among html, markdown, or text.

The result is inserted between the last and the second-to-last paragraph of text on the page.

Extract images from the PDF with rapidOCR:

%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()

print(docs[5].page_content)
API Reference: RapidOCRBlobParser
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



![Coordinate
(x1, y1)
(X1, y1)
(x2,y2)
APIS
x-interval
tart
end
Quadrilateral
operation
Rectangle
y-interval
ena
(x2, y2)
(x4, y4)
(x3, y3)
and
textblock
Coordinate
transformation
+
Block
Block
Reading
Extra features
Text
Type
Order
coordinatel
textblockl
layout
same
textblock2
layoutl
The
A list of the layout elements](#)

Be careful: RapidOCR is designed to work with Chinese and English, not other languages.

Extract images from the PDF with Tesseract:

%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
API Reference: TesseractBlobParser
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



<img alt="Coordinate

textblock

x-interval

JeAsaqul-A

Coordinate
+

Extra features

Rectangle

Quadrilateral

Block
Text

Block
Type

Reading
Order

layout

[ coordinatel textblock1 |
&#x27;

“y textblock2 , layout1 ]

A list of the layout elements

The same transformation and operation APIs src="#" />

Extract images from the PDF with a multimodal model:

%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os

from dotenv import load_dotenv

load_dotenv()
True
from getpass import getpass

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(model=ChatOpenAI(model="gpt-4o", max_tokens=1024)),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



![**Image Summary:**
Diagram explaining coordinate systems and layout elements with labels for coordinate intervals, rectangle, quadrilateral, textblock with extra features, and layout list. Includes transformation and operation APIs.

**Extracted Text:**
Coordinate
coordinate
start
start
x-interval
end
y-interval
end
(x1, y1)
Rectangle
(x2, y2)
(x1, y1)
Quadrilateral
(x2, y2)
(x4, y4)
(x3, y3)
The same transformation and operation APIs
textblock
Coordinate
Extra features
Block Text
Block Type
Reading Order
...
layout
coordinate1, textblock1, ...
..., textblock2, layout1
A list of the layout elements ](#)

Working with Files

Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.

As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to reuse a given parser regardless of how the data was loaded. You can use this strategy to analyze different files with the same parsing parameters.

from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PyPDFParser

loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=PyPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (� ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': '/False',
'source': 'example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0,
'page_label': '1'}

It is possible to work with files from cloud storage.

from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader

loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=PyPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)

API reference

For detailed documentation of all PyPDFLoader features and configurations, head to the API reference: https://python.langchain.ac.cn/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html

