Practical Prompt Engineering for Product Managers
How I cut down the time I spend on routine tasks as a Product Manager by using structured, re-usable prompts.
The Role of the Product Manager
As a Product Manager, you are the glue between many different parts of the company, constantly juggling different tasks and viewpoints. Given that one of your key responsibilities is to ensure alignment and proper information flow, the ability to write and communicate efficiently is key.
However, getting your documents, summaries and messages into a suitable state takes time. It's frustrating because you often have a very precise idea of what you want to communicate - it's clear in your head, but getting it written in a way that's clear to everyone else is a whole process in itself.
Especially if you're at a startup, like I am, where there's a lot happening and many people demanding your attention, spending 30 minutes perfecting a message to a customer can feel like a waste of time.
Thus the need to leverage language models.
How Language Models Can Help PMs
Recent AI developments have emerged as a lifeline for managing these extensive writing tasks. Language models let us dump our thoughts on the first go with less concern for the nuances of language and structure, and give us a baseline to work from. But because they're so easy to use, they can also easily lead to pitfalls.
Using them without careful consideration can lead to errors and frustration, especially with complex tasks. Poorly crafted prompts can lead to spending as much time refining and correcting the model's responses as you would have spent writing the content from scratch.
A common workflow that leads to inefficiency involves casually approaching a language model like ChatGPT for a quick solution to a problem. You submit a vague query with minimal context and a loose description of the desired output, hoping to save time. When the response sucks, you double down on it, refining the query and adding more instructions around the areas where it sucked the most, resulting in a cycle of trial and error.
This highlights the importance of familiarising yourself with prompt engineering and laying the groundwork to effectively utilize language models for routine and mundane tasks.
How? Let me share my approach to this problem.
Using Language Models in Routine Tasks
What do you wanna do?
The most common tasks that I found myself turning to language models for were:
Writing Product Documentation
Publishing Change Logs
Performing and Summarising Market Research
Writing Product Briefs
A big struggle here is that, to effectively write a new piece of documentation or a new product brief, you can't just focus on what's new - these things are interconnected and built on top of A LOT of existing context.
For PMs, this context essentially revolves around:
What is your Product?
What are its use-cases?
What are the Features that support those use-cases?
What are the Technical Aspects behind those features?
If you want AI models to help you efficiently today, you need to provide them with good context about what came before. This is where having a basic understanding of Prompt Engineering comes in.
Prompt Engineering Basics
There's extensive literature around prompt engineering - we won't go super deep here. If you want to explore more for yourself, I recommend this excellent survey by the folks at dair.ai.
The most critical thing I learned while researching is how to properly segment and think about your prompt. Here is the widely recommended prompt structure that I’ve optimised for PM tasks.
Instruction:
A specific task or instruction you want the model to perform.
You are a Product Manager for $company$ with a strong background in … and you want to …
Context:
External information or additional context that can steer the model to better responses. This is where past context around your company, product, use-cases and features goes.
…
Input Data:
The input or question that we want a response to. This is where the context for the current task goes.
…
Output Indicator:
The type or format of the output.
Your writing style should be…
Ensure that your output meticulously follows the outlined sections below. Each section should be concise and factual, focusing solely on relevant information - avoid including superfluous details. If a section cannot be adequately filled with pertinent data, leave it blank rather than adding filler content.
"""
$Outline or Template$
"""
Having this structure as a base template allows you to properly establish and re-use parts of your prompts.
How I’m approaching this is:
The “Context” section is the most re-usable one, and can be re-used across tasks if you get it right.
The “Instruction” and “Output Indicator” sections can be re-used across the same tasks.
Which leaves the “Input Data” as the actual “information dump” I have been referring to, which needs to be built on a case-by-case basis according to the task at hand.
I found that having this template in mind and writing these sections helped me think about each problem on a deeper level and systematise my work in a more cohesive way.
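To make this concrete, here's a minimal sketch in Python of how the pieces could fit together. The file paths and helper function are hypothetical - the point is that the re-usable sections live in files while the input data is written fresh each time:

```python
from pathlib import Path

def build_prompt(instruction: str, context: str, input_data: str, output_indicator: str) -> str:
    """Assemble the four prompt sections into a single prompt string."""
    return "\n\n".join([
        f"Instruction:\n{instruction}",
        f"Context:\n{context}",
        f"Input Data:\n{input_data}",
        f"Output Indicator:\n{output_indicator}",
    ])

# The re-usable parts live in plain text files (hypothetical paths).
instruction = Path("templates/docs_instruction.txt").read_text()
context = Path("contexts/company_general.txt").read_text()
output_indicator = Path("templates/docs_output_indicator.txt").read_text()

# Only the input data changes from task to task.
input_data = "Write documentation for our new Scorecards feature: ..."

prompt = build_prompt(instruction, context, input_data, output_indicator)
```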
Building Re-usable Context
Unless you're a CPO or a PM at a very large company with lots of different product offerings, it's likely that you can write a very neat context summarising your company, product, use-cases and features that can be recycled for different tasks.
That's exactly what I did: I created an extensive, detailed baseline context that I could re-use when prompting models for the tasks I perform most frequently. It's important to be critical about what goes into it - you need to include a broad overview of information while avoiding making it so extensive that it occupies a significant percentage of the maximum prompt size. You need to be methodical.
Here's the very first general context I wrote for Rely.io a few months ago. For the sake of brevity, I will cut it short and only describe one feature.
Rely.io is a SaaS company that offers an internal developer portal as its main product. Internal Developer Platforms (IDPs) are configured by platform engineering or Ops teams and used by developers.
The platform team primarily builds, configures and maintains the IDP.
Teams building and running IDPs concentrate on standardization by design, infrastructure, service level agreements, workflow-optimization and configure the IDP to automate recurring or repetitive tasks, such as spinning up resources or environments for developers.
The platform team also sets the baseline for dynamic configuration management to avoid unstructured scripting which would lead to excessive maintenance time.
The IDP is then put at the disposal of the developers.
They can easily access the IDP for information and use it as a source of truth, request resources, spin up fully provisioned environments, roll back, deploy and set deployment automation rules autonomously.
Rely.io's IDP is supported by 4 major features:
Software Catalog
The core of Rely is its software catalog, which is the term used to refer to the collection of all of an organisation's catalogs (e.g. the service catalog, the deployments catalog, the cloud-resource catalog, etc.). Catalogs are table views that allow you to easily track and explore information about your entities.
Entities are made up of properties (the defining characteristics and attributes of an entity) and relations (these indicate how the entity is connected to or interacts with other entities within Rely).
Every entity in Rely is associated with a blueprint which defines the entity's type. Blueprints in Rely.io are schemas that outline the structure and attributes of entities like services or resources. They provide a customisable framework for documenting and managing your software ecosystem, whereas entities serve as specific instances within this framework.
Both Entities and Blueprints can be represented as a JSON file, which we call a blueprint/entity descriptor. Both Entities and Blueprints have basic metadata fields for ID, title and description.
An organisation's Data Model is the structure formed by all the blueprints defined in the organisation, meaning their properties and the relationships between them. It encapsulates the entire schema of how data is organised and interconnected within your system and dictates all the available data that can be stored across your Software Catalog. Rely provides an out-of-the-box data model composed of Teams, Services, Cloud Resources, Environments, Running Instances (of services in a specific environment) and Deployments.
Entity Pages are customisable with different tabs and dashboards to enable a pleasant display of all the properties and relations of the entity itself. As a rule of thumb, different tabs serve different use-cases. For example, service entities come with out-of-the-box tabs for incident management troubleshooting, observability tracking, infrastructure tracking, documentation checking, etc.
Plugins & Automation
…
Scorecards
…
Self-Service Actions
…
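As a side note for readers unfamiliar with these concepts: based on the description above, an entity descriptor could look something like the sketch below. It's expressed as a Python dict, and the field names are my assumption for illustration, not Rely.io's actual schema.

```python
# Hypothetical entity descriptor - field names are illustrative,
# not Rely.io's actual schema.
entity_descriptor = {
    "id": "payments-service",
    "title": "Payments Service",
    "description": "Handles payment processing and invoicing.",
    "blueprint": "service",  # the blueprint defines this entity's type
    "properties": {  # the defining characteristics and attributes of the entity
        "language": "Python",
        "tier": "production",
    },
    "relations": {  # how the entity connects to other entities
        "owning_team": "payments-team",
        "environments": ["staging", "production"],
    },
}
```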
Creating Prompt Templates for each Task
With the “context” section locked across tasks, I created templates for the “instruction” and “output indicator” sections of the different tasks, thus standardising the tone and embedding the adequate output template according to my company’s templates for the several types of documents.
Here’s a simplistic example for product documentation:
Instruction:
You are a Product Manager for $company$ tasked with creating clear and intuitive technical documentation that aligns with user needs and product specifications. You are about to write documentation for a new feature and you’ll need to cover the following pillars:
High-Level Feature Use Cases
Technical Basic Concepts that support the feature
Usage Guide
Troubleshooting
Context:
$generic context we mentioned in the previous section$
Input Data:
$Actual Input - What is the piece of documentation that you want the model to write about?$
Output Indicator:
Ensure that your output meticulously follows the outlined sections below. Each section should be concise and factual, focusing solely on relevant information - avoid including superfluous details. If a section cannot be adequately filled with pertinent data, leave it blank rather than adding filler content.
”"
# Feature Overview
Provide a two-sentence description of the feature and the value it brings.
# Feature Use-Cases
A bullet-point list of its use-cases from a user perspective.
# Basic Concepts
Focus on theory, describe the technical concepts the users need to know to operate the feature.
# Usage Guide
Focus on actionability: describe the different actions the users can perform around the feature and how we can guide them through it. This is the only section where you’re allowed to reference images.
# Troubleshooting
Briefly provide a bullet-point list of errors the user might encounter and how to fix them.
"""
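Once the template is filled in, actually getting a draft back is the easy part. Here's a minimal sketch using the OpenAI Python client - the model name and file path are my own choices for illustration, not a prescription:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The filled-in template: instruction + context + input data + output
# indicator, assembled into one string (hypothetical path).
prompt = Path("prompts/product_docs_filled.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # a first draft to review and edit
```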
Optimising Over Time
The beauty of this is that, if you manage this information properly - and if you're reading this you're probably a product manager, so you should be able to - you'll refine your templates over time. Eventually, you'll reach a point where you can simply input the necessary information for a specific task and the model will respond precisely as you desire. That's the dream scenario; I still haven't reached perfection and find myself tweaking things here and there for optimal results - but that process is much easier now.
As your product evolves, you'll need to refine the context. You'll feel the need to do so over time, which is normal, but at least you're just expanding on already solid ground.
For tasks that go deeper, you'll probably need to focus the context. Maybe for a given Product Brief you won't need as much general company and product context, but you'll need to go extremely deep on a very specific use-case or feature. This is where you can start playing around with keeping a list of different contexts, each optimised for certain things.
If you reach this point, you’ll be in a situation where:
You can manage different contexts that are re-usable across your templates.
You can manage your templates for all of the routine tasks you have.
This is already quite powerful.
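A minimal sketch of how this could be organised on disk - the layout and names below are just my suggestion, not a prescription:

```python
from pathlib import Path

# Hypothetical layout:
#   contexts/company_general.txt      broad company & product context
#   contexts/feature_scorecards.txt   focused context for one feature
#   templates/product_docs.txt        instruction + output indicator per task
#   templates/product_brief.txt

def load(folder: str, name: str) -> str:
    """Read one re-usable prompt section from disk."""
    return Path(folder, f"{name}.txt").read_text()

# Pair a focused context with the product-brief template for a deep-dive brief.
context = load("contexts", "feature_scorecards")
template = load("templates", "product_brief")
```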
What's next?
Dynamic Context with Chained Prompts and Web Navigation
I feel like this is only the beginning. I am now exploring tools that automate chained prompts - AI agents that build on top of each other’s output and can also search the web. If you want to learn more, check out this simple, ready-to-use example: “AI Agents: Powered by Crew AI”.
Something that I noticed in initial experiments is that AI models are not excellent at navigating the web - which is natural. Google Search has been polluted by companies looking to attract traffic with poor content but optimal SEO, which makes web search still a task meant for humans. These experiments have led me to believe that the way forward is to find the resources I want the AI to use myself (blog posts, documentation, etc.), hand them to it, and then add instructions for efficient dynamic context building - instead of having the model search the web for me.
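To illustrate the chaining idea, here's a hand-rolled sketch using plain OpenAI client calls - not CrewAI itself, and the file path and prompts are hypothetical. The first call summarises a resource I found and vetted myself; the second builds on that output:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One link in the chain: a single model call."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: summarise a resource I picked myself instead of letting the
# model search the web (hypothetical file).
source = open("research/competitor_announcement.txt").read()
summary = ask(f"Summarise the key product claims in this post:\n\n{source}")

# Step 2: the second prompt builds on the first one's output.
brief = ask(
    "Using the summary below, draft a short competitive-positioning note "
    f"for our internal developer portal:\n\n{summary}"
)
print(brief)
```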
I'm hoping this helps me in my market research endeavours.
See You Around ✌️
If you enjoyed this article, consider subscribing to my substack. I’m also active on Twitter @TheJointleman

