
Navigating the Depths of Long-Form Content: Unveiling the Power of PEARL
Introduction
With the growth of the internet, social media, and digital communication, there is an ever-increasing amount of information available everywhere. Advances in AI and NLP have made it possible to analyze and extract valuable insights from long-form content (e.g., papers, essays, and articles) more effectively. This has opened new possibilities for prompt engineers to answer complex questions and generate insights that were not previously attainable with AI.
So how can we, as prompt engineers, get better answers out of complex long-form content? A recent study gives us a viable way to tackle this question.
Prompting Approaches
As a prompt engineer, it's essential to know that beyond direct approaches to getting answers from the model (like just asking it outright, with or without Chain-of-Thought (CoT) reasoning), there are also guided approaches (like PEARL) that can generate more accurate or detailed responses, especially when dealing with complex, long text.
Direct approach
- Zero-shot answering: The model answers the question directly, with no examples and no intermediate reasoning in the prompt. It just uses what it already knows plus the document it is given.
- Zero-shot CoT (Chain-of-Thought): We ask the same question but instruct the model to reason step by step (for example, by appending "Let's think step by step") before giving its final answer; see the sketch below.
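To make the difference concrete, here is a minimal sketch of both direct approaches in Python, assuming GPT-4 is reached through the OpenAI chat completions client; the document and question strings are placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    document = "..."   # placeholder: the long-form article text
    question = "What is the protagonist's mission?"

    # Zero-shot answering: just ask outright.
    zero_shot = f"{document}\n\nQuestion: {question}\nAnswer:"

    # Zero-shot CoT: same question, plus an instruction to reason step by step first.
    zero_shot_cot = f"{document}\n\nQuestion: {question}\nLet's think step by step."

    for prompt in (zero_shot, zero_shot_cot):
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content)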
Guided approach
- PEARL: A planning-based prompting framework for reasoning over long documents. Instead of just asking questions directly, PEARL uses specific "prompts" or "instructions" to guide the model in understanding and answering the question better.
Becoming A Better Prompt Engineer with PEARL
As prompt engineers, our job is to get the best answers out of GPT-4. Using PEARL means we can get clearer and more accurate answers from GPT-4, especially with large and complex documents.
In a study by researchers from the University of Massachusetts Amherst and Microsoft, PEARL was tested on a challenging subset of QuALITY, a reading comprehension dataset that contains questions about long-form articles. The results showed that prompting with PEARL:
- Significantly outperforms direct approaches (zero-shot answering and zero-shot CoT) and can read and analyze long-form articles that run to several thousand tokens.
- Produces higher-quality answers where reasoning over long documents is required, because it takes a holistic view of the content while still attending to its details.

Instead of just asking GPT-4 a question and hoping it figures out the answer, PEARL gives the model a step-by-step plan to follow:
- Action mining: starting from a small seed set of actions (such as SUMMARIZE), the model proposes new actions that could help answer questions about the document.
- Plan generation: the model composes the mined actions into a step-by-step plan tailored to the specific question.
- Plan execution: the model carries out the plan over the document, passing intermediate results from one step to the next until it reaches a final answer.
It's like giving someone a map to the treasure instead of just telling them to find it. By understanding these three steps, we can be more effective prompt engineers if we:
- Clearly define what we want to know.
- Trust PEARL to sift through the information, plan its approach, and then execute that plan to generate an answer (a minimal sketch of this pipeline follows below).
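As an illustration, here is a minimal sketch of that pipeline, assuming GPT-4 is called through the OpenAI Python client. The prompts are simplified paraphrases of the idea, not the exact prompts from the paper, and the seed action and question are placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    def ask(prompt: str) -> str:
        """Send one prompt to GPT-4 and return the text of its reply."""
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content

    ctx = "..."  # placeholder: the long-form document (CTX)
    question = "Why is the protagonist looking for meaning in life?"
    seed_actions = "SUMMARIZE(CTX)"

    # Step 1: action mining - ask the model for new actions that could help.
    actions = ask(
        f"Given a question about a long document and the seed action set ({seed_actions}), "
        f"come up with new actions that could help to answer the question: {question}"
    )

    # Step 2: plan generation - compose the mined actions into a step-by-step plan.
    plan = ask(
        f"Using this list of actions:\n{actions}\n\n"
        f"Devise a step-by-step plan to answer the question: {question}"
    )

    # Step 3: plan execution - follow the plan over the document to produce the answer.
    answer = ask(
        f"Document (CTX):\n{ctx}\n\n"
        f"Follow this plan step by step, then answer the question '{question}':\n{plan}"
    )
    print(answer)

In the original study, plan execution happens one action at a time, with each step's output fed into later steps; the walkthrough below mirrors that with individual prompts.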

However, like all leading-edge methods, PEARL comes with its own limitations: it is time-consuming, requires more computational resources, may overcomplicate simple questions, and is still susceptible to misinformation and even hallucinations.
Putting PEARL into Action
Example scenario: you're asked to find the reason behind the protagonist's search for meaning in life (using the story "The Driving Snow" as the long-form content).
Prompt 1:
We will be using PEARL, a planning-based prompting framework that uses specific "prompts" or "instructions" to guide the model in understanding and answering the question better. It follows a three-step framework: action mining, plan generation, and plan execution. The next step is to ask me to provide you with the long-form content and confirm that it will be labelled as CTX. Using the CTX, we will define the scenario as: “You're asked to provide an answer about the main themes and characters of a story.” Do you understand?
Prompt 2:
[Add the story text next. If the story is too long to paste at once, you can use a chunking tool to split the content into acceptably sized pieces and feed them into the AI platform in sequence.]
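If you don't have a chunking tool handy, a simple character-based splitter is enough for a manual workflow. This sketch assumes the story lives in a local text file; the file name and chunk size are illustrative:

    # Split a long story into pieces small enough to paste into the chat one at a time.
    def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
        return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

    with open("the_driving_snow.txt", encoding="utf-8") as f:  # hypothetical file name
        story = f.read()

    for i, chunk in enumerate(chunk_text(story), start=1):
        print(f"--- CTX, part {i} ---")
        print(chunk)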
Step 1: Action Mining
Prompt 3: Action mining step: here is the seed action set: SUMMARIZE(CTX).
Prompt 4: Given a question about a long document and the seed action set, come up with new actions that could help to answer the question “[What’s the protagonist’s mission]?”
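The output of this step is a set of new, question-specific actions. Purely as an illustration (these action names are hypothetical, not something GPT-4 is guaranteed to return), the mined set might look like:

    # Hypothetical mined actions: each is a named operation a later plan can call.
    mined_actions = [
        "SUMMARIZE(CTX): summarize the story or a given span of it",
        "EXTRACT_EVENTS(CTX, character): list the key events involving a character",
        "FIND_MOTIVATION(CTX, character): explain why a character acts the way they do",
        "CONCLUDE(results): combine intermediate results into a final answer",
    ]
    print("\n".join(mined_actions))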
Step 2: Plan Generation
Prompt 5: Plan generation step: Using the list of mined actions, devise a strategy to uncover "[What conclusion did the protagonist come to realize]?"
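Plan generation should come back as an ordered list of action calls, where later steps can reuse the outputs of earlier ones. A hypothetical plan for this question, using the illustrative actions above, might look like:

    # Hypothetical generated plan: each step stores its output (s1, s2, ...) for later steps.
    plan = [
        "s1 = SUMMARIZE(CTX)",
        "s2 = EXTRACT_EVENTS(CTX, 'protagonist')",
        "s3 = FIND_MOTIVATION(CTX, 'protagonist')",
        "s4 = CONCLUDE([s1, s2, s3])",  # what the protagonist came to realize
    ]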
Step 3: Plan Execution
Prompt 6: Plan execution step: [Find the reason behind the protagonist's behaviour of looking for meaning in life].
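Plan execution then walks the plan one step at a time, showing the model the document and the results gathered so far. A minimal sketch, again assuming the OpenAI Python client and the hypothetical plan format above:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    def ask(prompt: str) -> str:
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content

    ctx = "..."  # placeholder: the story text (CTX)
    plan = [
        "s1 = SUMMARIZE(CTX)",
        "s2 = FIND_MOTIVATION(CTX, 'protagonist')",
        "s3 = CONCLUDE([s1, s2])",
    ]

    results: dict[str, str] = {}
    for step in plan:
        name, action = (part.strip() for part in step.split("=", 1))
        # Each step sees the document, the results so far, and the single action to execute.
        results[name] = ask(
            f"Document (CTX):\n{ctx}\n\n"
            f"Results so far: {results}\n\n"
            f"Execute this step and return only its result: {action}"
        )

    print(results[name])  # the final step's result answers the original question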
Conclusion
PEARL is an emerging method and a powerful tool for prompt engineers who need better answers from long articles. Compared with simply asking a GPT model directly, PEARL doesn't just look at a small piece of the article; its three-step framework considers the whole story and all the important details, producing more accurate and more detailed answers from long-form text.
Fafa
Entrepreneur, Engineer, Product, AI enthusiast