One of the core principles for the way in which content should work on the web is disaggregation and re-aggregation: content that can easily be broken down into constituent elements of some kind and then just as easily re-aggregated in a form that suits the end user. The classic example of this is music. The traditional way of bundling music is of course the album, and that format is still with us, but digital has unbundled it so that we can easily buy or stream single tracks and then re-aggregate them into personalised playlists. The component element here is the single track. But the same principle can be applied to many other formats with widely differing component elements. In a networked world, content and ideas should be free-flowing, modular and recombinant. We should be focusing on both destination and distributed thinking (and what Jonah Peretti of Buzzfeed called network integration).
Which is why I really liked the idea, described here by NYT Labs, that the future of news is shifting beyond constructs tied to legacy print media, like the article, towards a more accumulative, elemental approach to news: one based on component parts that can be stitched together in different ways and that can 'capture and encode' the knowledge contained within articles.
The NYT describe this as 'particles': the ability to identify and annotate potentially reusable pieces of information within an article as it is being written.
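To make that a bit more concrete, here is a rough sketch of how a 'particle' might be represented as a structured, reusable unit. This is purely my own illustration in Python; the field names, types and example values are assumptions, not the NYT's actual model.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Particle:
    """A reusable, annotated piece of information captured from an article."""
    particle_id: str                     # stable identifier so it can be referenced elsewhere
    particle_type: str                   # e.g. 'fact', 'quote', 'definition', 'timeline_event'
    text: str                            # the reusable content itself
    tags: list[str] = field(default_factory=list)  # editorial and/or machine-suggested metadata
    source_article: str = ""             # the article in which it was first captured
    captured_at: datetime = field(default_factory=datetime.now)

# An article then becomes prose plus references to particles, and each particle
# can be surfaced, linked and re-combined independently of the story it came from.
example = Particle(
    particle_id="p-001",
    particle_type="definition",
    text="Particles are reusable pieces of information annotated within an article as it is written.",
    tags=["particles", "evergreen"],
    source_article="example-article-slug",  # hypothetical placeholder
)
```

The point is that the particle, not the article, becomes the addressable unit: anything with a stable identifier and metadata can be surfaced, linked and recombined later.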
This approach, they say, can enable enhanced tools for journalists, making it easier to surface contextual information from the rich archive of previous reporting to augment the composition of a story. It could also enable easier embedding of information, or deeper background, context and analysis inline, so that an article becomes 'a dynamic framework for deeper reading and understanding, one that can expand and contract in response to a reader’s interest'.
It might also enable more powerful ways of linking, synthesising and structuring information, creating 'a corpus of structured information' that is more powerful than a simple archive, is easier to re-combine in new ways, and facilitates the kind of contextual, longitudinal knowledge that is currently hard to access.
It might also make content more adaptive, ensuring that it is easier to repurpose for, distribute across, and present on multiple platforms.
Their point is that the article as a standard format is far too rigid, particularly in the context of ephemeral and evergreen content (which maps onto the 'Stock and Flow' distinction I've written about before). A news organisation might create hundreds of articles a day but has to start all over again the next day, leaving behind large amounts of essentially redundant or low-value content. This is an approach shaped by the constraints of print media rather than one that is native to digital. If you were starting from a blank sheet, would you still craft articles in this way? It's highly unlikely.
There is much here that might be applied to thinking about content marketing.
Rather than organising marketing around isolated campaigns, we are shifting towards combining campaigns with always-on activity, which in turn requires a more accumulative approach to marketing.
The particles approach requires us to identify evergreen, reusable pieces of content or information as we create them, so that they can be re-used in new contexts. Similarly, as marketers create more content, they need to identify the elements that are foundational or applicable across different contexts in order to enable efficient re-use.
The approach also allows for efficiency in adapting, repurposing and distributing content across multiple platforms. In other words, it enables COPE (Create Once, Publish Everywhere). This is particularly important in the context of mobile, which increasingly requires content to be atomised into 'cards' or other portable formats that can easily be repurposed or shared.
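As a rough illustration of what COPE can mean in practice, here is a small sketch of my own, assuming a simple dictionary-shaped content element rather than any particular CMS, in which one canonical element is rendered into a card, an inline web fragment and plain text:

```python
# A canonical content element, created once (it could equally be a Particle from the sketch above).
element = {
    "id": "p-001",
    "text": "Particles let a newsroom capture reusable pieces of information as articles are written.",
    "tags": ["particles", "evergreen"],
}

def to_card(el):
    """Shape the element as a portable 'card': a title plus a short body."""
    return {"title": el["tags"][0].title(), "body": el["text"][:140]}

def to_html(el):
    """Shape the same element as an inline fragment for an article page."""
    return f'<aside class="particle" data-id="{el["id"]}">{el["text"]}</aside>'

def to_plain_text(el):
    """Shape it for channels that only take plain text, e.g. an email digest."""
    return el["text"]

# Create once, publish everywhere: one source element, many presentations.
renderers = {"card": to_card, "web": to_html, "email": to_plain_text}
outputs = {channel: render(element) for channel, render in renderers.items()}
```

The important design choice is that the canonical element stays presentation-neutral: each channel gets its own renderer rather than its own copy of the content.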
But in order to extract, link, re-use and re-combine content we need to ensure, of course, that individual elements can play nicely together, be successfully integrated, and are searchable and extractable. This means tagging, labelling and metadata. Until now this has meant heavy human input, but as AI (or should I say IA) improves, we can combine algorithms with human editing in smart ways. The NYT, for example, talk about how granular metadata could be created 'through collaborative systems that rely heavily on machine learning but allow for editorial input', and about systems that can piggyback on top of the existing newsroom workflow rather than completely reinventing it (a sensible approach to ensure adoption).
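A minimal sketch of what that hybrid workflow might look like, using a naive keyword matcher as a stand-in for the machine-learning step (the taxonomy, function names and tags here are my own assumptions, not the NYT's system):

```python
# A tiny illustrative taxonomy; a real system would learn these associations
# from the archive rather than hard-coding them.
TAXONOMY = {
    "climate": ["emissions", "warming", "carbon"],
    "elections": ["ballot", "candidate", "vote"],
    "economy": ["inflation", "gdp", "interest rates"],
}

def suggest_tags(text):
    """Machine step: propose tags via naive keyword matching (a stand-in for an ML model)."""
    lowered = text.lower()
    return [tag for tag, keywords in TAXONOMY.items()
            if any(keyword in lowered for keyword in keywords)]

def editorial_review(suggested, rejected=(), added=()):
    """Human step: editors can reject machine suggestions and add tags of their own."""
    return sorted((set(suggested) - set(rejected)) | set(added))

paragraph = "The candidate's plan would cut carbon emissions while holding interest rates steady."
machine_tags = suggest_tags(paragraph)                       # ['climate', 'elections', 'economy']
final_tags = editorial_review(machine_tags, rejected=["elections"], added=["policy"])
print(final_tags)                                            # ['climate', 'economy', 'policy']
```

The piggybacking point matters here: the machine's suggestions appear inside the tools journalists or marketers already use, and the human decision is the one that gets stored.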
The world of marketing automation and personalisation is bringing some of this capability to the fore, and we are at the beginning of the curve. But this is, I think, increasingly how we need to frame content marketing, and it is a curve that more and more businesses are on. So the sooner we start thinking about this and structuring our capability accordingly, the better.