
HTML-Text-Parser

This project is designed to extract text from documents and prepare it for processing by Large Language Models (LLMs). It not only pulls out the text but also preserves its styles and decorations by converting everything into structured data. This approach ensures that style information is maintained through tags or classes, helping to keep the text's original formatting and emphasis.

Handling large blocks of text directly is often impractical for LLMs, as they can struggle to process and interpret extensive, undivided text effectively. To solve this, we implement a chunking strategy that divides text based on its styling cues, such as font size, boldness, and italics. Text with larger fonts or emphasized styles is typically deemed more significant, often representing headings or subheadings; such text is given a higher score and treated as the start of a separate chunk, together with the context that follows it. This method enhances the readability and usability of the text in LLM applications.
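
As an illustration of the idea only (not the project's actual scoring logic), a style-based score could be computed along these lines; the function name, weights, and attributes here are all hypothetical:

# Illustrative sketch only: a hypothetical style score, not this project's exact logic.
def style_score(font_size_px, is_bold, is_italic, body_size_px=12):
    """Score a text run by how strongly its styling suggests a heading."""
    score = 0
    # Larger-than-body fonts are the strongest heading signal.
    if font_size_px > body_size_px:
        score += round(font_size_px - body_size_px)
    # Emphasis adds a smaller, fixed bonus.
    if is_bold:
        score += 3
    if is_italic:
        score += 1
    return score

# A run scoring above some cutoff (see chunk_text below) would start a new chunk.
print(style_score(18, True, False))   # 9 -> likely a heading
print(style_score(12, False, False))  # 0 -> body text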

Installation

First, clone the GitHub repository.

git clone https://github.com/ChenTaHung/HTML-Text-Parser.git path/to/clone/the/repository # HTTPS
git clone git@github.com:ChenTaHung/HTML-Text-Parser.git path/to/clone/the/repository # SSH

Then, switch to the directory where the repository has been cloned.

import os

# Work from the repository root so the src package is importable
os.chdir('/path/to/the/cloned/repository')
from src.main.TextParsing.HTMLParser import HTMLParser
from src.main.TextParsing.TextChunker import TextChunker
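
If you prefer not to change the working directory, adding the clone to sys.path works as well (a minimal alternative, assuming the same package layout):

import sys

# Make the cloned repository importable without changing the working directory
sys.path.insert(0, '/path/to/the/cloned/repository')
from src.main.TextParsing.HTMLParser import HTMLParser
from src.main.TextParsing.TextChunker import TextChunker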

Usage

Input an HTML file (.html):

If the original documents are in PDF format, please convert them to HTML first (the Adobe converter is recommended), and make sure the resulting HTML is parsable.

# Open the HTML file and read it into the program
with open('data/FASB_2022_html/ASU_2022-01.html', 'r') as html_file:
    html_content = html_file.read()

# Instantiate the parser object
parser = HTMLParser(html_content)
text_info_df = parser.parse()

# Get all the text out
allText = parser.get_text()

The text_info_df holds all the extracted text along with its styles and decorations in a structured format.
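
You can inspect it like any pandas DataFrame (the exact column names depend on the parser's output):

# Peek at the structured output: one row per text segment,
# with its style/decoration attributes as columns
print(text_info_df.columns.tolist())
print(text_info_df.head())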

[Image: example of the text_info_df DataFrame, showing extracted text segments with their style attributes]

Now that we have the DataFrame containing all the text segments, we can use the chunker to break the text into smaller pieces, making it more manageable for processing by LLMs.

# Instantiate the chunker object
# The constructor accepts the DataFrame we parsed out as its input
chunker = TextChunker(text_info_df)

# Chunk the text
result_chunks_list = chunker.chunk_text()

The critical step here is the chunk_text() method, which accepts the following parameters:

def chunk_text(self,
               cutoff=7,
               auto_adjust_cutoff=False,
               keep_text_only=True,
               refine=True,
               sel_metric='words',
               lower_bound=100,
               upper_bound=650
               )

The chunk_text function segments text into smaller chunks based on various criteria, making it easier for Large Language Models to process the text effectively. Its parameters are as follows:

  • cutoff (int, optional): This parameter sets the threshold for including a row in a chunk. The default value is 7.

  • auto_adjust_cutoff (bool, optional): Enables automatic adjustment of the cutoff value based on the data. It is set to False by default.

  • keep_text_only (bool, optional): If set to True, the function returns only the concatenated text of each chunk, omitting any DataFrame structure. This is the default behavior.

  • refine (bool, optional): Activates a refinement process on the chunks using the selected metric. This is set to True by default.

  • sel_metric (str, optional): Specifies the metric used for refining the chunks, with 'words' as the default option.

  • lower_bound (int, optional): Sets the minimum size of a chunk when refining. The default is set at 100 words.

  • upper_bound (int, optional): Sets the maximum size of a chunk when refining. The default is set at 650 words.

The function returns a list of chunks. Depending on the keep_text_only parameter, each chunk is either a plain concatenated text string or a DataFrame. This function is essential for preparing large texts in a format that LLMs can process more easily.
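
For example, to keep the per-chunk DataFrames instead of plain text and tighten the refinement bounds (the bound values here are illustrative, not recommendations):

# Keep the per-chunk DataFrames and refine chunks to roughly 150-500 words each
chunks = chunker.chunk_text(cutoff=7,
                            auto_adjust_cutoff=False,
                            keep_text_only=False,
                            refine=True,
                            sel_metric='words',
                            lower_bound=150,
                            upper_bound=500)

print(f'{len(chunks)} chunks produced')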

Potential Future Works

  1. Handle external CSS files that define the classes used in the documents.
  2. Optimize the chunk-refinement logic.
  3. Optimize and generalize the score dictionary used to score each text segment.

Environment

OS : macOS Sonoma 14.5

IDE: Visual Studio Code 

Language : Python       3.9.7 
    - numpy             1.20.3
    - numpydoc          1.1.0
    - pandas            1.5.3
    - regex             2021.8.3
    - beautifulsoup4    4.10.0
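
To reproduce this environment, the listed packages can be installed with pip, e.g.:

pip install numpy==1.20.3 numpydoc==1.1.0 pandas==1.5.3 regex==2021.8.3 beautifulsoup4==4.10.0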

Developers

Denny Chen
