The Personal Stack, 2024 'AI Powered' Version
... what needs to be built on the individual side to enable balanced, trustworthy relationships with supply organisations
There have been various threads of late about the components of 'the personal stack'. This was a good iteration from Jamie Smith over at Customer Futures; and this recent one from Doc Searls also helps. I tend to agree with what's said in those posts, but want to drill into some specifics, and take on board lessons built over many years on the 'organisational stack' as it relates to managing customers. The personal stack is, in many ways, the updated inverse of the organisational stack, in that most of the reasons an individual would need such capabilities are around managing relationships with organisations. Hundreds of them….
If we design these inverse capabilities well, then the relationships they support should work well. If we design them to be incompatible, then supplier and customer will be talking past each other. That is, of course, more or less where we are now, in that most of the relationship management capabilities individuals are supposed to use are actually run by their supply organisations. Not a good basis for balanced, trustworthy relationships….
So, first some background on the equivalent architectures within organisations, i.e. how organisations typically set themselves up to manage their customers (the same applies to citizens, users, patients and other similar forms of person-to-organisation relationship). The diagram below is a visualisation of the classic 'T-bar' customer-facing architecture one would find inside large organisations, and indeed within smaller ones, albeit smaller organisations rarely think the architecture through as formally as those dealing with real volume and complexity.
I've deliberately hidden most of the detail, as including it would defeat the purpose of showing the diagram. The key point is that organisations will typically have many operational systems (web sites, apps, e-commerce, CRM, billing, logistics, accounting, supply chain, HR and more). Operational systems are optimised for getting stuff done; they will often manage many thousands of data attributes, but retain them only as long as required to support operational activity. Architecturally, however, organisations want only one planning system. That planning system draws in only the important data from each of the operational systems; it is what provides the metrics that enable managers to run the business. So the planning system is optimised for analytics and reporting. It needs data over time, and critically clean, aggregated data over time. It needs input from all of the operational systems, and critically it also provides a feed or steer back into them. But it does not need all data, just the data that matters for strategic analytics, reporting and planning. Some will suggest that organisations don't always need this distinction. My take would be that, when an organisation is small and simple, one could get away without this architecture; but the result would be optimised for neither key use, and thus 'average' at both.
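To make that split concrete, here is a minimal sketch in TypeScript (every name is hypothetical, invented purely for illustration): many operational systems each expose only the small slice of data that matters, and a single planning store accumulates that slice over time for analysis.

```typescript
// Hypothetical illustration of the 'T-bar' split described above.

// An operational system manages thousands of attributes internally,
// but exposes only the handful the planning layer cares about.
interface OperationalSystem {
  name: string;                                // e.g. "billing", "CRM", "logistics"
  extractKeyFacts(since: Date): KeyFact[];     // the small, important slice
}

interface KeyFact {
  source: string;      // which operational system produced it
  metric: string;      // e.g. "monthly_spend", "open_complaints"
  value: number;
  observedAt: Date;
}

// The single planning system: clean, aggregated data over time,
// optimised for analytics and reporting rather than for getting work done.
class PlanningSystem {
  private history: KeyFact[] = [];

  ingest(systems: OperationalSystem[], since: Date): void {
    for (const sys of systems) {
      this.history.push(...sys.extractKeyFacts(since));
    }
  }

  // Example analytic: the trend of one metric across all sources over time.
  trend(metric: string): KeyFact[] {
    return this.history
      .filter(f => f.metric === metric)
      .sort((a, b) => a.observedAt.getTime() - b.observedAt.getTime());
  }
}
```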
So why am I telling you all of that organisation-centric stuff? Because, I would contend, individuals (if they ever thought in those terms) have a very similarly shaped set of information needs, and anything short of that architecture will be sub-optimal. In fact, as we all know, what individuals have to engage with at present is so far removed from optimal that we may as well start afresh (which indeed we are doing in many ways, with many new initiatives now up and running from the 'human-centric' perspective).
Let's therefore look at an interpretation of the above that is a) human-centric (optimised around an individual), and b) able to engage the most modern technologies and approaches available. The visual below, you can easily see, derives a lot from the above. So our architectural starter is that, to be optimised, individuals will need:
Operational systems and methods that support 'jobs to be done' (hundreds of them, to support the hundreds of processes we all engage with)
A planning capability that supports the longer term, indeed life-long, nature of the human-centric model. Note, people will not be thinking in terms of a planning system as such, but they inherently understand the need to plan - holidays, savings, retirement, new jobs and many more. All of these require data input, and the ability to project forward across an integrated data set to see possible future outcomes.
The means to flow data between the two as appropriate, ideally in a real-time, continuous flow or pipeline (sketched below).
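Here is that sketch: a minimal, hypothetical illustration of the doing-to-planning flow on the personal side, with all names invented for the example. Each 'job to be done' emits events, and a continuous pipeline feeds the ones worth keeping into the individual's long-term planning store.

```typescript
// Hypothetical sketch of the doing -> planning pipeline on the personal side.

interface LifeEvent {
  job: string;          // which "job to be done" produced it, e.g. "book-holiday"
  attribute: string;    // e.g. "total_cost", "departure_date"
  value: string | number;
  at: Date;
}

interface PersonalPlanningStore {
  record(event: LifeEvent): Promise<void>;   // long-term, life-long retention
}

// Continuously forwards only the events worth keeping for planning purposes.
async function runPipeline(
  events: AsyncIterable<LifeEvent>,
  store: PersonalPlanningStore,
  keep: (e: LifeEvent) => boolean
): Promise<void> {
  for await (const event of events) {
    if (keep(event)) {
      await store.record(event);   // real-time, continuous flow into planning
    }
  }
}
```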
Ironically, and painfully, individuals have many more operational processes to run than organisations do. Organisations will typically have 20-30 high-level processes that relate to customer management activity, as defined by their sector of operation. Individuals have to deal with hundreds of processes, because the individual is the point of integration for all parts of life; that is to say, they deal with all the relevant sectors in their lives. If the smartphone had been invented before, or at the same time as, the commercial Internet, then the most logical architecture by far, for all parties, would have been to have the individual as the manager of core personal information, with anyone who needed it subscribing to the individual. The 'subscribe to me' tools required in that model would be vastly simpler than the current architectural mess, in which individuals have to manage organisation relationships and the associated terms one at a time.
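A minimal sketch of what 'subscribe to me' could look like, again with entirely hypothetical names: the individual holds the master copy of a few core attributes, and each organisation subscribes only to the ones its relationship actually needs.

```typescript
// Hypothetical sketch of a 'subscribe to me' model: the individual holds the
// master copy of core personal data, and organisations subscribe to changes.

type Attribute = "postal_address" | "email" | "phone";

interface Subscription {
  organisation: string;        // e.g. "my-energy-supplier"
  attributes: Attribute[];     // only what that relationship actually needs
  notify: (attr: Attribute, value: string) => Promise<void>;
}

class PersonalMasterRecord {
  private values = new Map<Attribute, string>();
  private subscriptions: Subscription[] = [];

  subscribe(sub: Subscription): void {
    this.subscriptions.push(sub);
  }

  // One change here replaces hundreds of "update your details" processes.
  async update(attr: Attribute, value: string): Promise<void> {
    this.values.set(attr, value);
    for (const sub of this.subscriptions) {
      if (sub.attributes.includes(attr)) {
        await sub.notify(attr, value);
      }
    }
  }
}
```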
However, that did not happen, so thirty years later we are now re-calibrating towards that much more balanced and optimised architecture. And that being the case, we can now build components based on newer technologies - specifically decentralised identifiers, encrypted data, agents, data/ identity wallets and AI. Let's look at each of those new things in turn to see what they add:
The planning system, which deals with long-term data management, has variously been called a 'personal data store', vault, filing cabinet, Pod and no doubt more. This remains a fundamental need and cannot, and should not, be minimised in importance; it is the key building block for an individual in their digital life. Actually, I have a new name for this block to throw into the mix; but that will have to wait for another post, so as not to divert.
The doing systems; it will likely take the next decade for these to be running as they should across all walks of life. But they will; at present they are running, for both organisations and individuals, on low-grade fuel (data), and are thus a long way from being optimised. The only way for that to happen is for them to be integrated into/ driven from the individual's own 'my data'; and that only happens when the proper technology and contract-based agreements are in place.
Wallets; important pieces of the jigsaw, as are the credentials that sit within them. They handle and elevate the sub-set of data that sits within both the planning and doing systems AND make it portable; i.e. more easily usable by all parties that need to access (read or write to) this data.
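As a rough illustration only (this is a deliberately simplified shape, not any particular credential standard), a wallet can be thought of as holding small, signed, portable slices of that data and presenting only what a relying party asks for.

```typescript
// Simplified, hypothetical shape of a credential held in a wallet: a small,
// portable slice of planning/doing data, attested to by an issuer.

interface Credential {
  issuer: string;                       // who attests to the claims, e.g. a bank
  subject: string;                      // identifier for the individual
  claims: Record<string, string>;       // the elevated sub-set of data
  issuedAt: Date;
  proof: string;                        // issuer signature (format not specified here)
}

class Wallet {
  private credentials: Credential[] = [];

  store(credential: Credential): void {
    this.credentials.push(credential);
  }

  // Portability: present only the credentials a relying party actually asks for.
  present(requestedClaims: string[]): Credential[] {
    return this.credentials.filter(c =>
      requestedClaims.some(claim => claim in c.claims)
    );
  }
}
```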
Strategic Control Points/ point of integration: this 'red dot' is named and shown as such because it was identified way back in 2008 in Project VRM as a key capability to be built on the individual side to support them in the digital world. In technical terms this might be seen as 'the data router'; i.e. that which manages the practicalities of ensuring the right things are connected to the right identifiers, and that the necessary information sharing agreements are in place to optimise the position of the individual. That means the red dot is also where decentralised identifiers live (which variously act as cryptographic keys for signing agreements, keys for opening vaults, data exchange endpoints and no doubt more). (Follow-up post on that required!) Note that the red dot/ point of integration is not solely within the wallet(s); to ensure independence for the individual it must be housed in a domain the individual has full control over. Aspects of it will live in the wallet(s) to enable that portability/ ease of access and use.
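Treating the red dot as a data router might look something like the following sketch (hypothetical names throughout): it maps decentralised identifiers to endpoints, and refuses to route any exchange that is not covered by an information sharing agreement.

```typescript
// Hypothetical sketch of the 'red dot' as a data router: it maps decentralised
// identifiers to endpoints and refuses any exchange not covered by an agreement.

interface Route {
  did: string;                 // decentralised identifier for the relationship
  endpoint: string;            // where data for this relationship flows
  agreementId?: string;        // the information sharing agreement, if in place
}

class DataRouter {
  private routes = new Map<string, Route>();

  register(route: Route): void {
    this.routes.set(route.did, route);
  }

  // Only route data where the right identifier AND agreement are in place.
  resolve(did: string): Route {
    const route = this.routes.get(did);
    if (!route) throw new Error(`No route registered for ${did}`);
    if (!route.agreementId) {
      throw new Error(`No information sharing agreement in place for ${did}`);
    }
    return route;
  }
}
```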
The Agent is the new actor in the game, albeit this somewhat technical and ambiguous name may not survive the test of time. That's not least because the individual will likely engage many agents over time, for many purposes and from multiple origins (they are, after all, bits of software code in practice). The one I'm showing here, though, is 'the boss'; for now let's call him/ her/ it the Orchestra Conductor. An alternative might be the traffic cop, but I think that understates the role. The role of the primary Agent is to orchestrate the many moving parts of one individual's personal data ecosystem, and to do so on behalf of the individual. In system speak, they establish digital relationships, manage workflow and related rules, and log the provenance of data exchanges and report on them. They also tackle and run the critical task of ensuring that appropriate information sharing agreements (contracts) are in place to support each and every data exchange. The agent is also the entity that signs those agreements (digitally) on behalf of the individual, and retains copies of the signed 'documents'.
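A minimal sketch of that conductor role, under the same caveat that every name here is invented: before any data exchange the agent ensures a signed agreement exists (retaining a copy), and then logs what moved and under which agreement.

```typescript
// Hypothetical sketch of the 'Orchestra Conductor' agent: before any data
// exchange it ensures a signed agreement exists, then logs what happened.

interface Agreement {
  counterparty: string;
  purpose: string;
  signedBy: string;        // the agent, acting on the individual's behalf
  signedAt: Date;
}

interface ExchangeLogEntry {
  counterparty: string;
  dataDescription: string;
  agreement: Agreement;
  at: Date;
}

class ConductorAgent {
  private agreements = new Map<string, Agreement>();
  private log: ExchangeLogEntry[] = [];

  constructor(private readonly actingFor: string) {}

  // Establish (or reuse) the information sharing agreement for a relationship.
  ensureAgreement(counterparty: string, purpose: string): Agreement {
    const existing = this.agreements.get(counterparty);
    if (existing) return existing;
    const agreement: Agreement = {
      counterparty,
      purpose,
      signedBy: this.actingFor,   // signing is delegated to the agent
      signedAt: new Date(),
    };
    this.agreements.set(counterparty, agreement);   // retain a copy
    return agreement;
  }

  // Orchestrate a single exchange and record its provenance.
  exchange(counterparty: string, purpose: string, dataDescription: string): void {
    const agreement = this.ensureAgreement(counterparty, purpose);
    this.log.push({ counterparty, dataDescription, agreement, at: new Date() });
  }
}
```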
So where then does 'AI' fit in? If one believes the current hype, one might conclude 'everywhere'. I would tend to disagree. I think AI will be important in support of specific processes and decision-making, but it will be absorbed into existing 'jobs to be done' rather than be a stand-alone thing. Most of today's AI is running on that same low-grade fuel our entire global economy runs on - data that is not fit for purpose. There are many sub-variations on why data is so often not fit for purpose; 'data quality' is actually a high-level term that covers many issues, including existence, accessibility, recency, accuracy, completeness, whether the data can be grouped/ aggregated, user confidence, appropriateness for the processing purpose, compliance with relevant regulations, and many more.

Perhaps the largest and most important 'new' data quality component that emerges when processing is for AI purposes is data provenance. That's quite simply 'do we know where this data came from, and can we prove it?'. Data provenance was a nice-to-have before AI became the potential consumer of all the world's data. In this new scenario, data provenance is very much a must-have. If you train a model on data without being completely clear on where it came from and on what basis, then the chances are the model will come back to bite you. It certainly will if the data consumed includes personal data and you operate in a geography with modern data privacy regulations; those regulations mandate that data subjects retain degrees of access to and control over the data they have shared, which is highly unlikely to be the case once it has been ingested into an AI model. Understanding what data moved, from where, to where, on what basis and for what purpose is the essence of data provenance, and at least gives AI model builders a fighting chance of documenting and understanding what they have built.
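That last sentence essentially defines the fields of a provenance record. A minimal, hypothetical sketch of one (the field names and the fitness check are illustrative assumptions, not any standard):

```typescript
// Hypothetical sketch of a provenance record: the minimum an AI model builder
// would want to retain for every piece of ingested data.

interface ProvenanceRecord {
  dataDescription: string;   // what moved
  from: string;              // where it came from
  to: string;                // where it went, e.g. a training corpus
  legalBasis: string;        // on what basis, e.g. "consent", "contract"
  purpose: string;           // for what purpose it may be processed
  containsPersonalData: boolean;
  recordedAt: Date;
}

// Illustrative fitness check: a dataset is only as trustworthy as the
// provenance that accompanies it.
function fitForTraining(records: ProvenanceRecord[]): boolean {
  return records.every(r =>
    r.legalBasis !== "" &&
    (!r.containsPersonalData || r.purpose.includes("AI training"))
  );
}
```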
So, to summarise: the story around personal data architectures is getting more complex, but the technical building blocks are now stable and mature enough to build with. We clearly need to massively simplify the story and explain the benefits in detail before exposing it to end users; but that's another post or two.
Next up… my proposed new name for the personal data store…