An Emerging Framework for Personal Fiduciary Agents
Understanding the various flavours of Personal/ Personalised AI
There is a huge buzz at present around ‘AI Agents’. This has been triggered, at least in part, by Marc Benioff and Salesforce validating the space with their Agentforce launch.
My take is that in this case the hype is real and deserved. Agents are what make AI tangible and meaningful to real people who would never have any need or desire to understand AI. Agents can very clearly underpin better decision making and efficiently enable ‘jobs to be done’. They will work across the whole economic spectrum: buyers’ agents integrating with sellers’ agents, with intermediary agents supporting both.
So, all good then; bring on the agents?
Unfortunately, not yet. Like all hype explosions there is going to be a lot of work required to separate the wheat from the chaff.
Let’s look at the chaff first; it has a couple of obvious characteristics that will identify it.
Firstly, if any AI Agent is ingesting personal data it has a general problem it must find a solution for. And that’s easier said than done. Personal data inside AI models is a toxic mix because the nature of AI means that the data controller of the AI system literally cannot meet their data protection duties towards the data subject. And the individual (data subject) is unable to exercise their data rights such as access, control, transparency, maintenance and portability. It is only a matter of time before regulators catch up with this and it becomes the next wave of giant fines. It will not be sufficient to say ‘but you consented to us training our AI on your data; it’s on page 47 of the terms and conditions you ticked to say you’d read, understood and agreed with…’.
Secondly, if the AI agent is being trained on personal data for which full provenance is not known and recorded, then its author faces a similar general problem. There is no means of mitigating downstream issues created by the AI if the inputs that shaped its outputs cannot be traced back to their root cause. And bear in mind, full provenance of personal data is not at all easy to achieve outside of governed/ regulated data ecosystems such as banking or healthcare; and regulations in those areas would likely exclude person-connected data from being ingested into AI or used in building models.
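To make the provenance requirement concrete, here is a minimal sketch (all field and class names are my own invention for illustration, not any standard) of the kind of record that would need to travel with every piece of personal data before an agent could legitimately use it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: a provenance record that travels with a piece of
# personal data. Field names are illustrative, not a standard.
@dataclass(frozen=True)
class ProvenanceRecord:
    data_subject_id: str    # who the data is about
    source: str             # where it was collected, e.g. an open banking feed
    lawful_basis: str       # e.g. "consent" or "contract"
    collected_at: datetime  # when it entered the system
    consent_scope: tuple    # purposes the subject actually agreed to

    def permits(self, purpose: str) -> bool:
        """Can this data be used for the given purpose, e.g. model training?"""
        return purpose in self.consent_scope

record = ProvenanceRecord(
    data_subject_id="subject-123",
    source="open-banking feed",
    lawful_basis="consent",
    collected_at=datetime.now(timezone.utc),
    consent_scope=("decision-support",),  # note: no "model-training"
)
record.permits("model-training")  # False: training was never in scope
```

The point of the sketch is the `permits` check: without a record like this attached at ingest time, there is simply nothing for a downstream agent to check against.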
If the AI agents following the above approaches can’t find solutions to those problems and carry on ingesting personal data regardless, they will disappear quickly. That will happen first in geographies with more evolved privacy, data protection and AI regulations.
So now to the wheat; the personal AI agents that can cope with the above obligations. I would contend that these will sub-divide into two categories:
Personal agents powered, in part, by AI On People. This is the model being enacted by Salesforce with the Agentforce line. It’s a perfectly good approach, as Salesforce have the capability to manage the personal data ingest and provenance problems mentioned above. They do so in lower layers of their technical stack, and have done for years (since the changes they made at platform level to cope with GDPR). Others will appear in this space; personally I’d expect Salesforce to lead in functional terms, as most others did not address the underlying details as they did, and they are not easy. That will mean ‘seller side agents’ across all B2C sectors, and all key business processes, pretty quickly.

That’s a good thing, for the seller. It does not do much for the buyer beyond a slightly better customer experience than the current norm. The example I’d offer: much as I enjoy the story of Marc Benioff and his jacket from Saks 5th Avenue in the keynote, it misses the key point when looked at from the buyer perspective. Namely, the individual and their agents would not wish to talk only to the Saks agent. The customer is more focused on the thing they need/ want than on whom they buy it from. So the rational model for the customer to pursue via their agent is to make their buying intent available to multiple potential suppliers. When doing so via agents, it matters little (environmental issues aside for now) whether that buying signal is made available to one provider or twenty-one, because the buyer will have their own filtering agent to ensure they don’t waste time on irrelevant offerings.
So that takes us to where I think the real opportunity lies: agents powered, in part, by AI For People, acting on a fiduciary basis for the individual (or indeed a buyer group). To distinguish them, I’m going to call them Personal Fiduciary Agents. The main reason I think this is the massive opportunity is that the market for tools to support sellers is already saturated and highly competitive, whereas the market for tools that genuinely support buyers from the demand side perspective is clear blue water. They are also, strangely enough, now much easier to deliver than ever before if one can cope with the major barriers to entry listed above. There are very few irresolvable dependencies when working FOR the individual, and the costs of doing so are lower than ever before. And the Gen AI bubble has enabled imaginations to conceive of and then build things that would have been much more difficult only two years ago.
So what specifically will that look like? What functions will these new ‘buyers agents’ cover? Here are some perspectives being raised:
Rory Sutherland suggests advertising will ‘turn 180 degrees’. Absolutely, and that will be a significant improvement on the current model.
Jeremiah Owyang offers a perspective that the marketing function will be highly disrupted. I think he is right.
Consumer Reports suggest that customer service will also be disrupted, for the better; I think they are right too.
My own take is that yes, all of those will be disrupted; and more. I’d include ‘search’, lead management, comparison shopping, ‘terms’, privacy policies, payments, user experience, loyalty programmes, ratings and reviews. And then ultimately product/ service design will be impacted across whole categories (e.g. financial services) and will need to, and be able to, factor in agent capabilities.
Here’s how we in the DataPal team are building Personal Fiduciary Agents.
An individual will have a primary user agent whose role is orchestration; the orchestra conductor, if you will. That’s the purple one below, or indeed any colour you want, because they are yours to scope, train and permission (with decision support). One main insight from our testing/ market simulator work over the last few months is that the relationship between we humans and our primary fiduciary agents is a very interesting and important one. A separate post is required on that, I think.
There is then an agent army: hundreds of horizontally and vertically focussed ‘Action Agents’ that are available for selection depending on user needs and what the agent can bring to the table. Critically, these are agents/ AI models/ algorithms that come to the individual’s data rather than the other/ usual way around. That lack of lock-in does two things that drive and enable a ‘race to the top’. Firstly, and critically, it means these action agents get access to a far richer, deeper data set with which to operate. And secondly, that model ensures that the provenance of the data is known, and the implications around regulatory compliance are quite different.
The above will be enabled as an ‘Agent Exchange’ model, in that there will be the means for many developers to build agents that, provided they meet the fiduciary obligations, can extend an individual’s capabilities and earn value from doing so. Seller/ supply side agents can also connect via the Agent Exchange, albeit flagged as such and not acting on a fiduciary basis.
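The ‘agents come to the data’ model described above can be sketched in a few lines. This is a toy illustration under my own naming (PrimaryAgent, ActionAgent and so on are hypothetical, not DataPal’s actual code): the primary fiduciary agent holds the individual’s data locally and runs vetted action agents against it in place, rather than shipping the data out.

```python
# Hypothetical sketch of the "agents come to the data" model.
# All class and attribute names are illustrative.

class ActionAgent:
    """A horizontally or vertically focussed agent on the Agent Exchange."""
    def __init__(self, name, fiduciary, needs):
        self.name = name
        self.fiduciary = fiduciary  # does it act for the individual?
        self.needs = needs          # set of data fields it requires

    def run(self, data):
        # Placeholder: a real agent would do useful work here.
        return f"{self.name} processed {sorted(self.needs & data.keys())}"

class PrimaryAgent:
    """The orchestrator: scopes, permissions and invokes action agents."""
    def __init__(self, personal_data):
        self._data = personal_data  # stays local; never exported
        self._registry = []

    def register(self, agent):
        # Seller-side agents may connect too, but are flagged as
        # non-fiduciary and never handed the data below.
        self._registry.append(agent)

    def delegate(self):
        # Run the first fiduciary agent whose data needs we can cover.
        for agent in self._registry:
            if agent.fiduciary and agent.needs <= self._data.keys():
                return agent.run(self._data)  # data accessed in place
        return None

me = PrimaryAgent({"transactions": ["coffee", "rent"], "postcode": "N1"})
me.register(ActionAgent("SpendAnalyser", fiduciary=True, needs={"transactions"}))
result = me.delegate()
```

The provenance benefit falls out of the structure: because the data never leaves the primary agent, every access is mediated by one permissioning point the individual controls.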
There is then the ‘which verticals?’ angle to this.
Is it about (open) banking? (YES)
Is it about (open) finance? (YES)
Is it about my home? (YES)
Is it about my family and friends? (YES)
Is it about health and wellbeing? (YES)
Is it about travel (a travel agent….?) (YES)
Is it about work? (YES)
…… you get the idea; if it is about things that humans are, have, do, want or make decisions about then Personal AI powered agents will be there to help.
So, what is required to be a member of The Agent Army?
In this case I’ll borrow from Salesforce, as I think they nailed it in their Agentforce description of what an agent must have to be worthy of trust. In the Agentforce design, each agent rightly must have:
A Role - a precise remit
Trusted data - precise schema, fully documented
Activities - clear actions that are to be taken
Guardrails - boundaries to ensure only those desired actions are taken
Channel - where the agent surfaces
Personalised AI certainly needs all of those things. So does Personal AI; but the genuinely personal variety requires two extra bits of magic that come from the associated human:
Context: This magic is not required in the Personalised AI model because it is provided by the entity running it. For example, in the Dreamforce example the context is ‘I’m a Saks 5th Avenue AI Agent and I’m here to help customers buy things from us and manage the various processes around that’. The individual’s context, by contrast, can be guessed at from outside but certainly not confirmed; it becomes a real benefit when the individual expresses it directly. For example, an individual might show up on the Saks site and look at some products, but be doing so to buy a gift. That change in context would mean different agent requirements on both the personal and personalised AI agent sides.
Outcome Sought: Likewise, this might be guessed at by agents operating on the supply side. For example, the gift-buying agent might just be checking prices and delivery options, whereas the Saks seller agent will most likely assume a product purchase is the outcome sought.
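Pulling the five ‘must haves’ together with the two extras only the human can supply, an agent specification might be sketched like this (a toy illustration; the field names are mine, not Salesforce’s):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentSpec:
    # The five "must haves" any trustworthy agent needs:
    role: str            # a precise remit
    trusted_data: dict   # precise, documented schema
    activities: list     # clear actions that are to be taken
    guardrails: list     # boundaries on those actions
    channel: str         # where the agent surfaces
    # The two extras that only the associated human can supply:
    context: Optional[str] = None         # e.g. "buying a gift"
    outcome_sought: Optional[str] = None  # e.g. "shortlist, not purchase"

    @property
    def is_personal(self) -> bool:
        # A genuinely personal agent carries human-expressed context
        # and outcome; a seller-side agent can only guess at them.
        return self.context is not None and self.outcome_sought is not None

gift_agent = AgentSpec(
    role="Buyer's agent for a one-off gift purchase",
    trusted_data={"budget": "GBP", "recipient_prefs": "free text"},
    activities=["gather offers", "compare prices", "check delivery"],
    guardrails=["no purchase without explicit approval"],
    channel="primary agent chat",
    context="buying a gift, not shopping for myself",
    outcome_sought="shortlist of options, not a completed purchase",
)
```

The `is_personal` test captures the gift example above: the Saks seller agent could fill in the first five fields for itself, but never the last two.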
Beyond these ‘must haves’ for personal AI agents; I think we can now see at least two different typologies within the Action Agents.
Firstly there will be a Fiduciary Agent category that we’ll call Decision Support. As you can imagine, these are about supporting people through complex scenarios in which information is gathered, evaluated and choices made.
Secondly, there will be another vast category: the Action Agents mentioned above. These are mainly about ‘Jobs to be Done’, either as independent tasks or orchestrated to deliver a piece within a larger requirement.
Both of these will work best when experts in each subject area specify the decision-support rules and logic, and the steps in the action agent workflows. I think there is money to be made for experts wanting to leverage their knowledge in the form of fiduciary agents.
A huge amount going on then. I think Personal Fiduciary Agents will be a game changer, highly disruptive, and very good for individuals/ the demand side of the economy. Best get back to building now then…
Richard Whitt talks about Fiduciary agents in his book Reweaving the Web.