Streamlining Collective Intelligence with LLMs
Unlocking the Potential of Large Language Models for Knowledge Management in DAOs
Knowledge Management, as defined by Oxford, is simply the ‘efficient handling of information and resources’. By this definition, it is a crucial element of the functioning of any goal-driven organization.
The last 20 years have massively changed how we accomplish this; my use of Google Docs right now is a perfect example. While the internet functions as a communication layer that helps diffuse human intelligence more efficiently, LLMs are another step altogether: they increase the diffusion and accessibility of intelligence by providing it directly. The significance of this for organizations cannot be overstated. Even though the internet enables the diffusion of intelligence, an entity is still required to actively curate the entire corpus of information and how it flows across an organization. By functioning directly as this intelligence, LLMs drastically change how we manage intellectual capital.
Now, as mentioned before, the internet helps disseminate information and greatly lowers the barriers to global coordination. However, it merely makes the diffusion of human knowledge and intelligence easier; it does not provide a net addition to intelligence.
Last mile problems in knowledge management still exist. By last mile, I mean problems that have to do with information actually travelling from a person’s screen (or informational interface) into their mind, or from their mind into the interface.
These are the two dimensions upon which we shall understand the change in knowledge management that LLMs can bring about (focusing on existing models).
Given the paramount importance of knowledge management for organizations (and thus DAOs), this has all encompassing consequences.
This change would affect organizations of different varieties differently, and arguably would be even more significant for DAOs, given the very characteristics that define them.
Let’s look at what these specific characteristics are, and how DAOs’ knowledge management could uniquely benefit from LLMs with regards to these particular characteristics.
1: Geographic Distribution
DAOs are by definition remote first and global, which poses the obvious communicational challenges of languages, cultures, and differing professional practices.
2: Loose Organizational Charts
The lack of a default hierarchical structure can often lead to operational inefficiencies. A corollary of this is that the varied levels of expertise within a team can become problematic, as the relative lack of gating and hierarchical structures means that:
i: The general threshold for expertise is lower upon entry
ii: The traditional means of dealing with differential expertise (formal hierarchies) is not as robust an instrument as it is in conventional organizations.
This can also lead to onboarding and communication difficulties.
3: Fragmented Knowledge Sources
Unlike conventional organizations, DAOs do not have ‘default’ systems for knowledge management. Their systems are almost always emergent and specific to a particular DAO. This means that their data sources tend to be unstructured: a member must parse through Discord chats, Twitter, Discourse, and whatever project management applications the DAO may be using.
DATA DIGESTION
Let’s say two members of a DAO have built financial models/projections on spreadsheets. If they are proficient in different languages, they’d prefer writing in those languages, on top of the different ways in which they may have built the spreadsheets themselves (level of hard-coding, data formats, functions, etc.). This would evidently cause problems. These problems could be solved almost instantly by LLMs, which could simultaneously translate a spreadsheet, interpret its data, and present it in the recipient’s vehicle of choice. For example, this could mean a simple linguistic summary, graphs (as platforms today can already produce), or even an entire deck (which ought to be possible imminently). You could even make knowledge graphs from a given set of documents, as existing tools already demonstrate.
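As a rough sketch of the spreadsheet example: the deterministic half of such a pipeline is flattening a CSV export into a prompt that asks an LLM to translate and summarize. All names here are hypothetical, and the actual model call is stubbed out, since any provider’s API would slot in at the end.

```python
import csv
import io

def spreadsheet_to_prompt(csv_text: str, target_language: str) -> str:
    """Flatten a CSV export into a prompt asking an LLM to translate
    the model and summarize it in the reader's language of choice."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    table = "\n".join(" | ".join(row) for row in rows)
    return (
        f"Translate the following financial model into {target_language} "
        f"and summarize its key assumptions and outputs:\n\n{table}"
    )

# A real pipeline would send this prompt to an LLM API; here we only
# build the prompt, which is the part we can show deterministically.
sheet = "Quarter,Revenue\nQ1,120000\nQ2,150000\n"
prompt = spreadsheet_to_prompt(sheet, "Spanish")
```

The same prompt-building step would work unchanged whether the response requested is a summary, a chart specification, or deck copy; only the instruction sentence changes.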
The same example that has been used for spreadsheets can be applied to anything: individual governance proposals, entire Discord chat histories, forums, or voting patterns. By assimilating DAO-specific corpora of data into foundation models, all of this could be made digestible by being made interactive through LLMs.
Furthermore, on-chain data is currently permissionlessly available for anyone to access, but it is not optimally useful in its current form, because of its sheer volume and because the majority of wallets are unidentifiable. Moreover, the UI/UX of most block explorers has not been optimized for human interpretation. If such data were processed through a large model via simple APIs, it could be made digestible in whatever form a user needs.
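A minimal sketch of that pre-processing step, assuming transaction records have already been pulled from a block explorer API (the records below are invented; a real script would fetch them first and then hand the digest to an LLM for narration):

```python
from collections import Counter

def digest_transactions(txs: list[dict]) -> str:
    """Condense raw transaction records into a short plain-text digest
    suitable for inclusion in an LLM prompt."""
    total = sum(tx["value_eth"] for tx in txs)
    senders = Counter(tx["from"] for tx in txs)
    top_sender, count = senders.most_common(1)[0]
    return (
        f"{len(txs)} transactions totalling {total:.2f} ETH; "
        f"most active sender {top_sender} ({count} txs)."
    )

# Invented sample data standing in for an explorer API response.
txs = [
    {"from": "0xabc", "value_eth": 1.5},
    {"from": "0xabc", "value_eth": 0.5},
    {"from": "0xdef", "value_eth": 2.0},
]
digest = digest_transactions(txs)  # feed this into an LLM for narration
```

Aggregating before prompting matters here: raw transaction logs would blow past any context window, while a digest like this fits comfortably.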
This ameliorates the problems of geographic distribution and varying levels of expertise, as the intelligence that the LLM provides can act as a cognitive equalizer, evening out disparities in expertise (for example, an onboarding chatbot that answers every relevant question a new entrant may have) whilst also smoothing over cultural and linguistic differences. In this way, LLMs can increase efficiency with regards to the problems posed by DAOs’ global nature, loose org charts, and fragmented knowledge sources.
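The onboarding chatbot mentioned above could be sketched as a retrieval step in front of an LLM: find the most relevant snippet of DAO documentation, then hand it to the model as context. Here, simple word overlap stands in for the embedding search a production system would use, and the document snippets are invented.

```python
def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by word overlap with the question -- a stand-in for
    embedding-based retrieval -- and return the top k matches."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Invented snippets standing in for a DAO's knowledge base.
docs = [
    "Voting happens on Snapshot every Friday.",
    "New contributors should introduce themselves in the Discord #intros channel.",
]
context = retrieve("Where do new contributors introduce themselves?", docs)
# The retrieved context would then be prepended to the LLM prompt,
# grounding the chatbot's answer in the DAO's own documents.
```

This retrieval-then-prompt pattern is what lets one generic model answer DAO-specific onboarding questions without retraining.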
Imagine a corpus of spreadsheets, memos, and decks; then imagine a bar graph, knowledge graph, spreadsheet, or memo that could give you whatever scope and kind of insight you want, at every level of granularity, using just text, regardless of whether you could interpret the source material yourself.
In a nutshell, LLMs can function as a means of informational alchemy; put something in, and produce exactly what you want out of it. Just need to mess around with the proportions a tad!
DATA INPUT
We’ve spoken about how LLMs can improve DAOs’ operations by making all data interpretable by every member. However, it is equally crucial that every member can also input their data. One way in which this can be done is perhaps one you’ve already thought of: generative AI. (It’s obvious, but we gotta mention it!)
One example of this would be the drafting of governance proposals. While ChatGPT is already quite good at this (especially for those whose English isn’t pristine), feeding and fine-tuning it for this particular purpose would indubitably turbo-charge it. The same can be said for coming up with entire DAO governance systems (as we’ve discussed in other blogs, what is prevalent right now leaves a lot to be desired). For example, the model could be trained on every kind of DAO governance model, along with relevant corporate governance, economics, and political science material. The sum of all this knowledge could very well suggest models previously unthought of by a team, and at the least give them viable options for their particular needs.
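A hedged sketch of such a drafting assistant, again with the model call stubbed out and all field names invented: the structured prompt below is the part a DAO would standardize, so that every member, regardless of language or writing ability, feeds the model the same shape of input.

```python
def draft_proposal_prompt(title: str, problem: str, ask: str) -> str:
    """Assemble a structured governance-proposal prompt; a fine-tuned
    model would turn this into a full draft with standard sections."""
    return (
        "Draft a DAO governance proposal with sections for Summary, "
        "Motivation, Specification, and Voting Options.\n"
        f"Title: {title}\n"
        f"Problem: {problem}\n"
        f"Requested action: {ask}"
    )

# Invented example inputs; the resulting prompt would be sent to an LLM.
prompt = draft_proposal_prompt(
    "Treasury diversification",
    "90% of the treasury is held in the native token.",
    "Convert 20% of holdings to stablecoins.",
)
```

Fixing the section headings in the prompt is a small design choice with a large payoff: proposals arrive in a consistent format, which in turn makes them easier for the data-digestion tools discussed earlier to parse.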