The Business of Intelligent Content
January 31, 2011
Intelligent content is a simple enough idea. If we want to be able to do many things with our content, and to do them quickly and sustainably, then it makes sense that our content will need to be intelligent. In practice, a number of ingredients must come together to make content intelligent, and this fact complicates our efforts to explain what intelligent content really is. In essence, intelligent content is content that has been consciously designed to be manageable and reusable, such that automation can be efficiently applied to its discovery and delivery in an unlimited range of contexts. Complicating matters further, intelligent content evolves rapidly once it has been published – feeding on, and reacting to, the behaviour and contributions of users. I have touched on several of these considerations before, but here I want to focus on one of them specifically: the need for an optimal balance between investments and returns. In short, I want to look at the business side of intelligent content.
In case you have not noticed, business has become a pursuit where measurement, analysis and projection play an increasingly large role in decision making. Indeed, there are many enterprises that, in order to compete, need to shave pennies and seconds from every process. Of course, there are some ventures that fail to notice when it is time to recalibrate their controls and they continue to rearrange deck chairs when the analysis of emergent patterns calls for more serious changes. We live in a world that is both highly competitive and rapidly changing and our approaches to managing and leveraging content should reflect this fact.
One thing that becomes imperative is that our investments in content, and its management and publishing, should provide substantial and immediate business benefits. This is pretty easy to say but then we recall that, given the types of information that users demand today, we need to create, manage and leverage content that is fundamentally more intelligent. And we need to recall that intelligent content doesn’t happen by itself – it calls for the investment of time, expertise, effort and money.
So a question arises. How do we square this circle?
Historically, this was not an equation we could actually balance. The level of investment required, and the difficulty of reaching a sufficiently broad audience of users quickly, meant that a return on investment, in real terms, took a long time if it was ever reached at all. Mercifully, things are very different now. In particular, almost all of the software applications deployed today can do something intelligent with intelligent content. They can accept it and they can be used to produce it. There is also a convergence of best practices and standards, with these bringing more sophisticated design and processing techniques into mainstream web environments. It is this that makes it possible today to deploy intelligent content applications quickly and to reach broad communities of users in a way that delivers substantial benefits.
With this in mind, it should become possible to set down some basic rules of thumb on how intelligent content applications should be planned and budgeted for.
First off, there should be an orientation towards “content applications” – specific deployments that allow subject matter experts to create content and see it automatically delivered in different ways to different consumers. Certainly the key pieces need to be in place – but for any one content application only a subset of solution components needs to be present. The general capability to deploy additional content applications can grow over time through a series of successful individual deployments.
With this application focus adopted, it should then become possible to achieve a very compelling balance between investments and outcomes. As with many of my rules of thumb, this one will sound daunting. This is quite intentional, because in being daunting these rules force us to carefully trim our investments so that the emphasis remains on simplicity of design, speed of execution and tangibility of results.
In the case of intelligent content applications, we should set our sights on a return on investment of no less than 2 to 1 within the first year after deployment. In other words, the application should deliver benefits worth twice the value of the initial investment, and do so in the first year of its life. An investment should make a positive contribution to the bottom line in the fiscal year the application comes into service. This regime should in fact make it possible for both the investment and the return to be encompassed within a single fiscal year, and thereby for the project to be at least neutral in its budgetary effects. This does tell us something about the scope of most content applications – they should not be sprawling undertakings but rather specific, focused, tactical efforts completed in a relatively short period of time (measured in months, not years).
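To make the rule concrete, here is a minimal sketch of the 2-to-1 test (all figures are hypothetical):

```python
# Hypothetical figures chosen only to illustrate the 2:1 first-year rule.
investment = 100_000          # total cost of the content application
first_year_benefit = 220_000  # measured benefit in the same fiscal year

roi_ratio = first_year_benefit / investment
print(f"First-year return: {roi_ratio:.1f} to 1")
assert roi_ratio >= 2.0, "Below the 2:1 threshold - rescope the application"
```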
Now, in measuring both the investments and the benefits, there will be aspects that are quantifiable and aspects that are less so. A great many of the things we might prefer to see as qualitative, or strategic, can indeed be made subject to some form of measurement, even if the increments being used are not recognizable currencies. Measurement should be brought to bear on as broad an array of impacted factors as possible, even if we find ourselves counting seconds, visits, orders, retweets, comments or something else. Oftentimes, innovation takes the form of finding something new, and important, that can be measured. In calculating the return on investment, we should make sure we measure the current state of affairs, the impacts associated with making the change investments, and the future state. It is a management role to determine how these measurements will be translated into a baseline measure, specifically dollars and cents, and to decide whether this is necessary or reasonable in every case.
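As a minimal illustration of such a before-and-after comparison (the metric names and values are invented):

```python
# Hypothetical baseline and post-deployment measurements.
baseline = {"avg_search_seconds": 95, "support_calls_per_week": 40, "retweets_per_post": 2}
future   = {"avg_search_seconds": 20, "support_calls_per_week": 12, "retweets_per_post": 9}

for metric, before in baseline.items():
    after = future[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.0f}%)")
```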
But we do need to keep the subject of money at the center of our attention when discussing both investments and returns. On the investment side, there is unfortunately another rule of thumb that should be applied, and this one has the effect of increasing the amount of money that should be budgeted for every content application project. For every unit expended on the technical implementation of the application – covering all design, development, documentation and deployment expenses, as well as the associated management and technology licensing costs – there must be an equal amount directed towards the business side of the project. This will include investments in content improvements, creation and control procedures, work practices, team member skills, and all the other steps that will need to be taken to make sure that the content application reaches its target audience with a new level of service and with compelling improvements over what was done before.
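Expressed as a sketch (again with hypothetical figures), the rule simply doubles the technology-only estimate:

```python
# Hypothetical figures illustrating the matched-funding rule of thumb.
technical_budget = 50_000           # design, development, documentation, deployment, licensing
business_budget = technical_budget  # content, procedures, skills - matched 1:1
total_budget = technical_budget + business_budget
print(f"Total to budget: ${total_budget:,}")  # double the technology-only estimate
```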
I sometimes refer to this budgetary rule of thumb as “content management tough love” as it forces management to look resolutely at both sides of the effort. It is in fact a lot easier, and therefore common, to trim the budgetary demands of projects by allocating only for the technology side of the equation, and this is a monumental mistake. It is in fact why the vast majority of content management investments (and indeed of technology investments generally) flounder.
The combined effect of these two rules of thumb is quite simple. Content application projects must be specific, targeted efforts that seek to deliver compelling new services in a relatively short period of time and to do so without completely overturning the way the participating stakeholders currently work. If this is done successfully, then a new content application can be realistically brought online within a matter of months and this effort will see content stakeholders using their familiar tools in new ways to inject higher levels of intelligence into their content and content consumers using their familiar tools to access and make use of this intelligence.
Sequences of such investments can be planned as part of a broader strategy to make systemic improvements in how an organization creates, manages and leverages intelligent content. This type of strategy is to be preferred because each incremental investment, and with it each intelligent content application, will demonstrate its value and harvest real-world experience that can guide each subsequent step. Almost as important will be the fact that each successful content application deployment will generate positive results and produce budget surpluses that can be used to fund subsequent investments. Content assets, and content application investments, that learn over time and that help to sustain and grow their sponsoring organizations are in fact intelligent and that is what we are talking about.
In your vision, Joe, I picture a kind of Service Oriented Architecture built around intelligent content mimicking traditional WSDL descriptors and content/metadata payloads. But that leads me to think of Intelligent Content components that can function as "beans" in a larger collaboration/evolution of value-proven applications. Am I reaching beyond your picture? In fact, how is this different from today's REST/SOA landscape other than the interposition of a higher order of content and metadata (new content standards, perhaps), and the requirement to demonstrate value (common metrics)?
Posted by: Don Day | February 01, 2011 at 10:46 AM
I am reminded of what T.S. Eliot is reputed to have said when asked about the plethora of interpretations surfacing around The Waste Land - they are all welcome. This is how I look upon the different technical realizations that are possible for what I have been describing. Going further, I would be inclined to say that an intelligent content application should be amenable to multiple instantiations, with some of these being radically modern and others being radically not.
Back in the real world, of course, there will be an architecture prevailing over the infrastructure in any given enterprise - perhaps "planned", perhaps "emergent" and perhaps "notional". Some will be "archaic". Many will exhibit "architectural anarchy". Ideally, the prevailing architecture will reflect a service oriented model of some form and among the reasons such models are gaining ascendance is that they do (or should) facilitate the rapid creation, deployment, evolution and reuse of components that deliver very specific services. These architectures would be an immediate corollary to, and attractive deployment venue for, what I was describing as content applications.
That said, I would not say that any particular architecture is necessarily associated with my vision of focused content applications delivering incremental and accumulating business benefits. I can envision realizations of the vision that are wholly untouched by any of the architectural concepts or protocols to which you have referred. The vision, and the discipline, applies equally well to an environment that is far more retro and far more pedestrian.
I would say that what I have been describing is quite different from today's REST/SOA landscape, even with the interposition of a higher order of content and metadata and the requirement to demonstrate value. I would submit that it precedes it and stands on a different plane. To my mind, equating the two commits a form of category error. I would, however, continue to say that what I have been describing is fully compatible with today's REST/SOA landscape and that its impact on that domain would indeed be the interposition of a higher order of content and metadata and the requirement to demonstrate value.
Going further, the technology infrastructure within the vast majority of organizations, and in particular the infrastructure available for content management and publishing, is often far removed from the type of SOA landscape to which you are referring. Those that have progressed to that point, and who have realized it to the breadth and depth of its potential, are in a very special place, in part because they will very much be in a position to conceive, create, deploy and evolve intelligent content services in exactly the way I have been describing. And these organizations will be able to make these advances with a minimal level of "drag", as the technological friction will have been brought so low as to be negligible.
Does that make sense or am I becoming ensnared by my own sophistry?
Posted by: Joe Gollner | February 01, 2011 at 01:25 PM
Joe, you have produced excellent reading again.
Just to comment on your first paragraph: I agree with your definition of intelligent content. Recently, as I got involved with W3C Open Data activities here in Finland, I defined web content this way.
To simplify the issue, we can say that the visible page is for humans and the hidden data (well, view source and you see that too) is for computers. I was told that currently 3.6% of all web content uses RDF fragments embedded in the source (X)HTML. Not only Dublin Core metadata but even more complex metadata to suit “Open Web Platform” needs. You are most likely aware of DBPedia (a structured version of Wikipedia with an API), to mention just the largest collection of intelligent content. That was my short tech talk rant.
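For a flavour of the kind of statements such embedded fragments carry, here is a minimal sketch using the rdflib Python library (the URL and literal values are invented for illustration):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

# Build the kind of Dublin Core statements that RDFa carries inside a page.
g = Graph()
g.bind("dc", DC)
page = URIRef("http://example.com/posts/intelligent-content")
g.add((page, DC.title, Literal("The Business of Intelligent Content")))
g.add((page, DC.creator, Literal("Joe Gollner")))
g.add((page, DC.date, Literal("2011-01-31")))
print(g.serialize(format="turtle"))  # rdflib 6+ returns a str here
```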
Then to the interesting business side of intelligent content. While analyzing some content sources in my company (wondering at the vast amount of content and the lack of common categories, etc.), I started to think of “fundamental rules for information”. The one I came up with was: “justification to exist”. When a content instance gets created, a couple of critical “justification attributes” to be attached are:
- “Business potential”: a value identifying the expected value of the content
- “Retirement age”: some kind of pre-set value that triggers a self-destruct function to either eliminate the content or move it to a dusty shelf – if the content is not used/accessed.
- One could also think of adding a “Cumulative cost” metric to record all the costs caused by this particular piece of content.
In order to make the ecosystem work:
- all attributes would be maintained and updated as the content evolves
- the content is somehow made findable, i.e. it enables trustworthy recording of use events
- there are (simple) business rules for the calculation of each value; off the top of my head, for business potential something like:
- created: initial value = 50
- retrieved as query result = +2
- checked out/in for update = +5
- no access within 6 months = -10
- thumbs up feedback by user/application = +30
And for costs:
- conversion = 100
- move to other storage = 1000
- checked out/in for update = 500
Business benefits would require some kind of feedback mechanism from events in the business process. “Retirement age” would kick in if a document is not accessed – we don’t want to store mummies. This would need some thinking work to make it feasible; a rough sketch of how the rules might fit together follows…
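A minimal sketch in Python, using the illustrative weights from the lists above (the class, the event names and the retirement threshold are invented):

```python
from datetime import datetime, timedelta

# Scores and costs taken from the illustrative lists above; all other
# names and thresholds are hypothetical.
SCORES = {"retrieved": 2, "checked_out": 5, "thumbs_up": 30}
COSTS = {"conversion": 100, "storage_move": 1000, "checked_out": 500}

class ContentItem:
    def __init__(self, name):
        self.name = name
        self.business_potential = 50  # "created: initial value = 50"
        self.cumulative_cost = 0
        self.last_accessed = datetime.now()

    def record_event(self, event):
        self.business_potential += SCORES.get(event, 0)
        self.cumulative_cost += COSTS.get(event, 0)
        if event in SCORES:  # any use event counts as an access
            self.last_accessed = datetime.now()

    def retirement_check(self, now=None):
        # "no access within 6 months = -10"; retire once the value is spent
        now = now or datetime.now()
        if now - self.last_accessed > timedelta(days=182):
            self.business_potential -= 10
        return self.business_potential <= 0  # True -> move to the dusty shelf
```

A nightly job could run retirement_check over every item and archive whatever comes back True.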
Cheers, Heimo
Posted by: Heimo Hänninen | April 14, 2011 at 05:25 AM
Hi Heimo
Looking around more broadly, it is indeed heartening to see so many people getting behind "open, reusable information" that has been specifically designed to be amenable to automated processing. Of course, the degree of intelligence being sought varies considerably and by and large appropriately, based on what people are most interested in doing. The closer one moves towards familiar data types, the more commonly we find these open and intelligent resources out on the web. In large part this is because we (that being the technology community) have been wrestling with data for a long time and we often have data resources somewhere in a form that is pretty intelligent to start with. We are reasonably accustomed to exposing some of the details that establish what the data is about and what expectations can be reasonably applied to the different data units.
As we move deeper and deeper into the "content forest", where we see data mixed with increasingly complex rhetorical patterns, we find the amount of open and intelligent material drops off rather quickly. This is not too surprising. As the content structures grow more complex, for example as we move from details about infraction incidents to collections of regulations and cases, we find that the variety of things people might want to do grows exponentially. So it is very difficult to know beforehand what someone else might want to do with your content. We also encounter a sobering increase in overlapping semantic meaning, wherein a single unit of content may in fact be described and governed by more than one set of contextualized declarations and rules. Each content unit can participate in multiple meanings. Finally, as we move deeper into this content forest, the number of associations that surface grows rapidly and we find we have many relationships that might, or might not, be relevant at any given point in time.
The challenge of definitively exposing the meaning within content (fully considered) explains why what we usually find is a limited number of housekeeping details about the unit of content itself. Typically, these details are provided for the transactional package which envelops the content (what I typically refer to as information transactions). Hence the common use of the more popular Dublin Core metadata elements. But even here we are seeing more and more effort applied to enhancing the metadata supporting the units of content, usually with a view to making that content more discoverable.
One of the directions this metadata enhancement can go is exactly as you have described - details that can help make units of content more manageable and therefore more cost-effective. The introduction of a few key criteria that associate units of content with business objectives and intended roles would help an organization to implement what you have described - a dynamic environment of content management based on real events. This approach depends on the establishment of a measurement scheme that helps determine the current value of a unit of content and thereby guides the handling of that content. As with all mathematical models, the real art lies in determining what is interesting to model and how this can be made useful. The really good thing about this approach is that it can be continuously evaluated and adapted to improve the behaviour of the system. The other good thing is that this type of system, handling content using automated measurement and processing rules, can scale to massive volumes - something at which many other approaches fail miserably.
We may be opening a new field here - content economics...
It started here...
Posted by: Joe Gollner | April 16, 2011 at 12:22 PM