

January 31, 2011

Comments


Don Day

In your vision, Joe, I picture a kind of Service Oriented Architecture built around intelligent content mimicking traditional WSDL descriptors and content/metadata payloads. But that leads me to think of Intelligent Content components that can function as "beans" in a larger collaboration/evolution of value-proven applications. Am I reaching beyond your picture? In fact, how is this different from today's REST/SOA landscape other than the interposition of a higher order of content and metadata (new content standards, perhaps), and the requirement to demonstrate value (common metrics)?

Joe Gollner

I am reminded of what T.S. Eliot is reputed to have said when asked about the plethora of interpretations surfacing around The Waste Land - they are all welcome. This is how I look upon the different technical realizations that are possible for what I have been describing. Going further, I would be inclined to say that an intelligent content application should be amenable to multiple instantiations, with some of these being radically modern and others being radically not.

Back in the real world, of course, there will be an architecture prevailing over the infrastructure in any given enterprise - perhaps "planned", perhaps "emergent" and perhaps "notional". Some will be "archaic". Many will exhibit "architectural anarchy". Ideally, the prevailing architecture will reflect a service oriented model of some form and among the reasons such models are gaining ascendance is that they do (or should) facilitate the rapid creation, deployment, evolution and reuse of components that deliver very specific services. These architectures would be an immediate corollary to, and attractive deployment venue for, what I was describing as content applications.
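To make the idea of such components a little more concrete, here is a deliberately minimal sketch (Python standard library only; every path, payload and port is made up for illustration) of a focused content application exposed as a small service that returns a unit of content together with its descriptive metadata:

    # A deliberately minimal sketch of a focused content application
    # exposed as a small service. All paths, payloads and the port
    # are illustrative stand-ins, not any particular product's API.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A stand-in content store: a unit of intelligent content together
    # with the metadata that describes it.
    CONTENT = {
        "/content/safety-notice": {
            "metadata": {"type": "notice", "audience": "operators"},
            "body": "<notice><title>Safety Notice</title></notice>",
        }
    }

    class ContentHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            unit = CONTENT.get(self.path)
            if unit is None:
                self.send_response(404)
                self.end_headers()
                return
            payload = json.dumps(unit).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # Serve the single content unit on port 8000.
        HTTPServer(("", 8000), ContentHandler).serve_forever()

The point is not the protocol but the granularity: a component this small can be created, deployed, evolved and reused independently, which is exactly the property that makes service oriented models an attractive venue for content applications.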

That said, I would not say that any particular architecture is necessarily associated with my vision of focused content applications delivering incremental and accumulating business benefits. I can envision realizations of the vision that are wholly untouched by any of the architectural concepts or protocols to which you have referred. The vision, and the discipline, apply equally well to an environment that is far more retro and far more pedestrian.

I would say that what I have been describing is quite different from today's REST/SOA landscape, even with the interposition of a higher order of content and metadata and the requirement to demonstrate value. I would submit that it precedes it and stands on a different plane. To my mind, equating the two commits a form of category error. I would, however, continue to say that what I have been describing is fully compatible with today's REST/SOA landscape and that its impact on that domain would indeed be the interposition of a higher order of content and metadata and the requirement to demonstrate value.

Going further, the technology infrastructure within the vast majority of organizations, and in particular the infrastructure available for content management and publishing, is often far removed from the type of SOA landscape to which you are referring. Those that have progressed to that point, and who have realized it to the breadth and depth of its potential, are in a very special place, in part because they will very much be in a position to conceive, create, deploy and evolve intelligent content services in exactly the way I have been describing. And these organizations will be able to make these advances with a minimal level of "drag", as the technological friction will have been brought so low as to be negligible.

Does that make sense or am I becoming ensnared by my own sophistry?

Heimo Hänninen

Joe, you have produced excellent reading again.

Just to comment on your first paragraph: I agree with your definition of intelligent content. Recently, as I got involved with W3C Open Data activities here in Finland, I defined web content this way.

To simplify the issue we can say that the visible page is for humans and the hidden data (well, view source and you will see that too) is for computers. I was told that currently 3.6% of all web content uses RDF fragments embedded in the source (X)HTML - not only Dublin Core metadata but even more complex metadata to suit "Open Web Platform" needs. You are most likely aware of DBpedia (the structured version of Wikipedia, with an API), to mention just the largest collection of intelligent content. That was my short tech-talk rant.
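As an illustration, the machine-facing layer amounts to statements like the following, which RDFa lets you embed directly in the (X)HTML source. This sketch assumes Python with the third-party rdflib package; the page URI and literal values are made up:

    # An illustrative machine-facing metadata layer: Dublin Core
    # statements about a page, expressed as RDF triples. Assumes the
    # third-party rdflib package; the URI and values are made up.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DC

    page = URIRef("http://example.org/posts/intelligent-content")

    g = Graph()
    g.add((page, DC.title, Literal("Intelligent Content in Context")))
    g.add((page, DC.creator, Literal("Joe Gollner")))
    g.add((page, DC.date, Literal("2011-01-31")))

    # In RDFa, these same triples would be embedded in the (X)HTML
    # source; here we simply print them in Turtle form.
    print(g.serialize(format="turtle"))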

Then to the interesting business side of intelligent content. While analyzing some content sources in my company (wondering at the vast amount of content and the lack of common categories, etc.), I started to think of "fundamental rules for information". The one I came up with was: "justification to exist". When a content instance gets created, a couple of critical "justification attributes" to be attached are:
- "Business potential": a value identifying the expected value of the content
- "Retirement age": some kind of pre-set value that triggers a self-destruction function to either eliminate the content or move it to a dusty shelf if it is not used/accessed
- One could also think of adding a "Cumulative cost" metric to record all the costs caused by this particular piece of content

In order to make the ecosystem work:
- all attributes would be maintained and updated as the content evolves
- the content is somehow made findable, i.e. it enables trustworthy recording of use events
- there are (simple) business rules for the calculation of each value; off the top of my head, for business potential something like:
  - created: initial value = 50
  - retrieved as query result = +2
  - checked out/in for update = +5
  - no access within 6 months = -10
  - thumbs-up feedback by user/application = +30
And for costs:
- conversion = 100
- move to other storage = 1000
- checked out/in for update = 500

Business benefits would require some kind of feedback mechanism from events in the business process. "Retirement age" would kick in if a document is not accessed; we don't want to store mummies. This would need some thinking in order to make it feasible…
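To make this concrete, here is a minimal sketch of how these attributes and rules could be wired together (all class names, field names and threshold values are only illustrative):

    # A minimal sketch of the "justification attributes" and scoring
    # rules described above. All names and thresholds are illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    # Event deltas for "business potential", taken from the rules above.
    POTENTIAL_RULES = {
        "retrieved": +2,   # retrieved as query result
        "updated": +5,     # checked out/in for update
        "thumbs_up": +30,  # positive feedback by user/application
    }

    # Costs accrued against "cumulative cost".
    COST_RULES = {
        "conversion": 100,
        "storage_move": 1000,
        "updated": 500,
    }

    IDLE_PENALTY = 10                 # no access within 6 months = -10
    IDLE_WINDOW = timedelta(days=182)

    @dataclass
    class ContentItem:
        uri: str
        business_potential: int = 50  # created: initial value = 50
        cumulative_cost: int = 0
        last_accessed: datetime = field(default_factory=datetime.now)

        def record_event(self, event: str) -> None:
            """Update both justification attributes for a use event."""
            self.business_potential += POTENTIAL_RULES.get(event, 0)
            self.cumulative_cost += COST_RULES.get(event, 0)
            if event in POTENTIAL_RULES:
                self.last_accessed = datetime.now()

        def apply_idle_penalty(self, now: datetime) -> None:
            """Decay the potential of content nobody has accessed."""
            if now - self.last_accessed > IDLE_WINDOW:
                self.business_potential -= IDLE_PENALTY

        def should_retire(self, threshold: int = 0) -> bool:
            """The 'retirement age' trigger: eliminate or shelve."""
            return self.business_potential <= threshold

A periodic job would then call apply_idle_penalty on every item and move anything for which should_retire() returns true to the dusty shelf.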

Cheers, Heimo

Joe Gollner

Hi Heimo

Looking around more broadly, it is indeed heartening to see so many people getting behind "open, reusable information" that has been specifically designed to be amenable to automated processing. Of course, the degree of intelligence being sought varies considerably, and by and large appropriately, based on what people are most interested in doing. The closer one moves towards familiar data types, the more commonly we find these open and intelligent resources out on the web. In large part this is because we (that being the technology community) have been wrestling with data for a long time, and we often have data resources somewhere in a form that is pretty intelligent to start with. We are reasonably accustomed to exposing some of the details that establish what the data is about and what expectations can reasonably be applied to the different data units.

As we move deeper and deeper into the "content forest", where we see data mixed with increasingly complex rhetorical patterns, we find that the amount of open and intelligent material drops off rather quickly. This is not too surprising. As the content structures grow more complex, for example as we move from details about infraction incidents to collections of regulations and cases, we find that the variety of things people might want to do grows exponentially. So it is very difficult to know beforehand what someone else might want to do with your content. We also encounter a sobering increase in overlapping semantic meaning, wherein a single unit of content may in fact be described and governed by more than one set of contextualized declarations and rules. Each content unit can participate in multiple meanings. Finally, as we move deeper into this content forest, the number of associations that surface grows rapidly, and we find we have many relationships that might, or might not, be relevant at any given point in time.

The challenges surrounding exposing, definitively, the meaning within content (fully considered) explain why what we usually find is a limited number of housekeeping details about the unit of content itself. Typically, these details are provided for the transactional package that envelops the content (what I typically refer to as information transactions). Hence the common use of the more popular Dublin Core metadata elements. But even here we are seeing more and more effort applied to enhancing the metadata supporting the units of content, usually with a view to making that content more discoverable.

One of the directions this metadata enhancement can go is exactly as you have described - details that can help make units of content more manageable and therefore more cost-effective. The introduction of a few key criteria that associate units of content with business objectives and intended roles would help an organization implement what you have described - a dynamic environment of content management based on real events. This approach depends on the establishment of a measurement scheme that helps determine the current value of a unit of content and thereby guides the handling of that content. As with all mathematical models, the real art lies in determining what is interesting to model and how this can be made useful. The really good thing about this approach is that it can be continuously evaluated and adapted to improve the behaviour of the system. The other good thing is that this type of system, handling content through automated measurement and processing rules, can scale to massive volumes - something at which many other approaches fail miserably.
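To gesture at why this scales, consider a sketch of the periodic, rule-driven sweep such a system implies (this reuses the hypothetical ContentItem from the sketch in your comment above; the repository interface and the retirement threshold are assumptions for illustration):

    # A sketch of the periodic, rule-driven sweep such a system implies.
    # It reuses the hypothetical ContentItem class sketched above; the
    # repository interface and retirement threshold are assumptions.
    from datetime import datetime
    from typing import Iterable, List

    def sweep(repository: Iterable[ContentItem],
              retire_threshold: int = 0) -> List[str]:
        """Apply the idle-decay rule to every item and return the URIs
        of items whose current value no longer justifies active storage.
        Each item is scored independently, so the sweep streams over
        the collection in one pass and scales linearly with its size."""
        now = datetime.now()
        retired = []
        for item in repository:
            item.apply_idle_penalty(now)
            if item.should_retire(retire_threshold):
                retired.append(item.uri)  # candidate for the dusty shelf
        return retired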

We may be opening a new field here - content economics...
It started here...
