How Marve Works - LLMs for Audience Building and Customer Segmentation
Marve leverages best-in-class large language models, including PaLM 2 from Google, which have been trained on a vast portion of the internet and refined with feedback from human experts. We’ve integrated those models into our platform so that a marketer can describe an audience in the language they would naturally use, without needing to know the names of columns in data tables or how the underlying data is structured.
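To make that concrete, here is a minimal sketch in Python of what pairing a marketer’s plain-English description with schema context could look like. The field names, schema, and prompt format are illustrative assumptions, not Marve’s actual implementation:

```python
# Illustrative sketch only: the schema, field names, and prompt format below
# are hypothetical, not Marve's actual implementation.

AUDIENCE_SCHEMA = {
    "state": "two-letter US state code, e.g. 'NY'",
    "total_spend_usd": "lifetime spend in US dollars (numeric)",
    "last_purchase_date": "ISO 8601 date of most recent purchase",
}

def build_translation_prompt(description: str) -> str:
    """Pair the marketer's plain-English description with the data schema
    so the model can map everyday language onto real column names."""
    schema_lines = "\n".join(f"- {name}: {desc}" for name, desc in AUDIENCE_SCHEMA.items())
    return (
        "Translate the audience description into a JSON filter using only these fields:\n"
        f"{schema_lines}\n\n"
        f"Audience description: {description}\n"
        "JSON filter:"
    )

prompt = build_translation_prompt("high spenders in the tri-state area")
print(prompt)  # this prompt would then be sent to an LLM such as PaLM 2
```

The marketer only ever writes the description; the schema context travels with it behind the scenes.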
For example, because the underlying Transformer models have been trained on so much of the internet, they can infer that “tri-state area” in our example above refers to New Jersey, New York, and Pennsylvania, and that “bucks” most likely means dollars in this context. We’ve then done further work to reliably translate that intent into the underlying data structure and a format that Audience Builder can understand.
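As a rough illustration of what that translation step might produce, the sketch below parses a hypothetical model response for an input along the lines of “customers in the tri-state area who spent over 500 bucks” (a paraphrase for illustration, not the exact example above). The JSON response format and rule structure are assumptions:

```python
import json

# Hypothetical LLM response: the model has expanded "tri-state area" into
# explicit state codes and read "bucks" as US dollars.
llm_response = """
{
  "filters": [
    {"field": "state", "op": "in", "value": ["NJ", "NY", "PA"]},
    {"field": "total_spend_usd", "op": ">", "value": 500}
  ]
}
"""

def to_audience_rules(raw: str) -> list[dict]:
    """Parse the model's JSON and map it onto a rule format an audience
    tool could consume (field/operator/value triples)."""
    parsed = json.loads(raw)
    return [
        {"field": f["field"], "operator": f["op"], "value": f["value"]}
        for f in parsed["filters"]
    ]

for rule in to_audience_rules(llm_response):
    print(rule)
```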
LLMs are not perfect 100% of the time, and this is where the precision of Audience Builder comes in. Users get the benefit of interfacing with Marve in natural language, and in the rare cases where the audience doesn’t come out exactly right, they can review and refine it in Audience Builder before export. The combination is greater than the sum of its parts: our users get the speed and intuition of natural language input together with the precision and measurement capabilities of Audience Builder.
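One way to picture that safety net is a simple check of the generated rules against the known schema, so anything the model got wrong is surfaced for human review rather than exported silently. The field and operator lists here are hypothetical, not Audience Builder’s actual internals:

```python
# Sketch of the review step described above; field and operator lists are
# hypothetical. Flagged rules would be corrected in the audience-editing UI
# before export.

KNOWN_FIELDS = {"state", "total_spend_usd", "last_purchase_date"}
KNOWN_OPERATORS = {"in", ">", "<", "=", "between"}

def flag_rules_for_review(rules: list[dict]) -> list[str]:
    """Return human-readable warnings for any rule that references an
    unknown field or operator."""
    warnings = []
    for rule in rules:
        if rule["field"] not in KNOWN_FIELDS:
            warnings.append(f"Unknown field: {rule['field']!r}")
        if rule["operator"] not in KNOWN_OPERATORS:
            warnings.append(f"Unsupported operator: {rule['operator']!r}")
    return warnings

rules = [{"field": "state", "operator": "in", "value": ["NJ", "NY", "PA"]},
         {"field": "spend", "operator": ">", "value": 500}]  # "spend" is the kind of slip a model might make
print(flag_rules_for_review(rules))  # -> ["Unknown field: 'spend'"]
```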
Here is a full demo of Marve, including a voiceover for more context: