Tech Corner | August 10, 2023
In the dynamic world of wealth and asset management, harnessing the power of AI is becoming crucial to remaining competitive. One of the key elements in leveraging AI and large language models (LLMs) effectively is LLM prompting.
With the rise in LLMs, you might have also heard the term “prompt engineering.” Its technical definition refers to techniques for refining and targeting the inputs sent to LLMs, often using frameworks like LangChain. Here, we’re going to use the more general term “LLM prompting” to refer to when an end-user asks questions directly to an LLM, for example, through a chat interface, which may work in tandem with additional prompt engineering already built in behind the scenes.
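To make that distinction concrete, here is a minimal sketch in Python of what the behind-the-scenes layer can look like: the application wraps the end-user's raw question in a template before it ever reaches the model. The template text and function name are illustrative assumptions, not any particular product's implementation.

```python
# A minimal sketch of "behind the scenes" prompt engineering: the
# application wraps an end-user's chat question in a template with
# instructions before sending it to the model. The template text and
# function name are illustrative assumptions.

SYSTEM_TEMPLATE = (
    "You are an assistant for wealth and asset management professionals. "
    "Answer concisely, and say so when the requested information is not "
    "available."
)

def build_messages(user_question: str) -> list[dict]:
    """Wrap a raw end-user question in an engineered prompt."""
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE},
        {"role": "user", "content": user_question},
    ]

print(build_messages("What were the fund's Q2 distributions?"))
```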
If you are using an LLM, prompting is at the heart of making it effective. Your prompt provides a roadmap for the model to generate meaningful responses.
Better prompts will give you better answers. We’ve compiled a few tips below to help you get exactly what you need when prompting an LLM.
LLMs can produce responses in a variety of forms. You can ask an LLM to answer your prompt in the form of a 50-word paragraph, a term paper, or even a haiku. When thinking about how to structure your prompt, think through the kind of response you’re looking for first, and be specific when asking for it. Types of responses you could ask for include:
- A paragraph or summary of a set length (e.g., 50 words)
- A longer-form piece, like a term paper
- A structured format, like a bulleted list or a table
- A creative format, like a haiku
This flexibility in format opens up endless possibilities for customization and tailored human-like interactions.
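For instance, here is a sketch of the same question asked with three different response formats; the wording is illustrative, and each string would be sent to the model through whatever chat interface or API you use:

```python
# Sketch: one question, three explicitly requested response formats.
# The prompt wording is illustrative.

question = "Summarize the outlook for emerging-market bonds."

prompts = [
    f"{question} Respond in a single 50-word paragraph.",
    f"{question} Respond as a bulleted list of three key points.",
    f"{question} Respond as a haiku.",
]

for prompt in prompts:
    print(prompt)
```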
Context plays a pivotal role when prompting LLMs. Transformer-based models have been trained on vast amounts of text using deep learning methods, aiming to generate output that resembles human-generated text. Just like humans, language models thrive on context, allowing them to generate more relevant responses. Set the conversational stage with relevant background information, and you can help the model understand exactly what you mean.
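As a quick sketch (the client scenario and wording are assumptions, invented for illustration), compare a context-free question with one that sets the stage:

```python
# Sketch: the same underlying question, without and with background
# context. The client details are invented for illustration.

without_context = "Is this allocation too aggressive?"

with_context = (
    "You are advising a risk-averse client, age 67, who retires next year "
    "and holds 80% equities and 20% bonds. Given that background, is this "
    "allocation too aggressive? Explain your reasoning in two short "
    "paragraphs."
)

print(without_context)
print(with_context)
```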
Consider an example of a poorly constructed prompt to an LLM.
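As a hypothetical illustration (vague prompts come in many forms, and this wording is our own), it might be as simple as:

```python
# A hypothetical stand-in for a poorly constructed prompt: no background,
# no instructions, and no desired format.
vague_prompt = "What do you think about this fund?"

print(vague_prompt)
```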
What makes it ineffective? It lacks essential context and fails to specify the desired outcome. The response to a prompt like this will tend to lack specificity and won’t provide the kind of analysis we’re actually looking for. When the model is given minimal context and instruction, it isn’t able to provide a useful response.
Providing effective context enables you to influence how the model responds. Giving more information about the specific areas you’re looking for information on, combined with instructions on the type and format of response you want, will lead to more helpful responses.
Different LLMs and interfaces have access to different data sets, which affects the accuracy of the answers you’re given. It’s crucial to be aware of the parameters of the LLM you’re interacting with. Does it search the open Internet, or is it drawing answers only from its training data? What time periods does its data set cover? If your question involves current events or market trends, but the model’s data set only covers up to 2021, the response may not contain all the information you need. If the model has been engineered to look only at a specific document or data set, e.g., a research report or a set of content you’ve uploaded, you may be able to get a more precise answer, though you won’t get additional context from outside the report.
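As a rough sketch of that document-scoped approach (the report excerpt and figures below are invented for illustration), the application can place the document directly in the prompt and instruct the model to answer only from it:

```python
# Sketch: constraining the model to a specific document by embedding its
# text in the prompt. The report excerpt and figures are invented.

report_excerpt = (
    "Q2 2023 Fund Report: Net IRR was 12.4%. Distributions to paid-in "
    "capital (DPI) reached 0.85x."
)

prompt = (
    "Answer using only the report below. If the answer is not in the "
    "report, say that it is not available.\n\n"
    f"REPORT:\n{report_excerpt}\n\n"
    "QUESTION: What was the fund's net IRR in Q2 2023?"
)

print(prompt)
```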
When prompting LLMs, it can be helpful to simply rephrase your question. Like humans, LLMs sometimes need to be asked in a slightly different way or have the question clarified.
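For example (both phrasings here are illustrative):

```python
# Sketch: the same request, rephrased to be more explicit about what a
# useful answer looks like. The wording is illustrative.

first_try = "Explain the fee structure."
rephrased = (
    "Walk me through each fee an investor in this fund pays, one at a "
    "time, with a one-sentence explanation of when it applies."
)

print(first_try)
print(rephrased)
```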
If your first question didn’t lead to a useful answer, keep iterating to see if you can get better results. Add context, details, and specific instructions to guide the model in crafting its response. Think of your interactions with the LLM as an ongoing conversation—it’s following along, building a cumulative understanding of what you want as you use it. You can refer back to what you found helpful or unhelpful about previous responses and work together to find the answer you’re looking for.
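In chat-style interfaces, that cumulative understanding comes from the running message history. Here is a sketch (with illustrative contents) of what that looks like under the hood:

```python
# Sketch of iterating within one conversation: each follow-up is appended
# to the running history, so the model sees everything that came before.
# The message contents are illustrative; the list-of-messages structure
# mirrors the chat format many LLM APIs use.

history = [
    {"role": "user", "content": "Summarize the main risks in this portfolio."},
    {"role": "assistant", "content": "(the model's first answer appears here)"},
    {"role": "user", "content": (
        "That was too general. Focus only on interest-rate risk, and give "
        "me three bullet points."
    )},
]

print(history[-1]["content"])
```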
Below is a refined version of the initial prompt, demonstrating the value of iterating and clarifying expectations. By providing more detail and specifying the desired format and what the response should include, we’ve given the model more valuable input, and as a result, we get a more specific and useful response.
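A hypothetical refinement (again, the wording is our own illustration) might look like this:

```python
# A hypothetical refinement of the vague prompt from earlier: it adds a
# role, background, the specific analysis wanted, and the output format.

refined_prompt = (
    "You are a research analyst. Using the attached fund fact sheet, "
    "assess the fund's performance over the last three years relative to "
    "its benchmark. Respond with a 100-word summary, followed by a "
    "bulleted list of the three most important performance drivers."
)

print(refined_prompt)
```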
As AI continues to evolve, LLM prompting (and the prompt engineering that often occurs behind the scenes) remains vital for harnessing the power of language models. Getting the response you need from an LLM might take practice, but by mastering the art of crafting precise instructions and providing relevant context, you can unlock the untapped potential of AI in your day-to-day interactions and for your business.