The article I have in mind is "Databases Are So 20th Century" by Dave Kellogg in CMS Watch. I found it an interesting read for the start of a new year.
Kellogg's basic thesis is that database systems can never handle content well, and that the current effort to manage content as XML in databases is the fourth attempt to address the problem and is doomed to failure. As he says:
"First, you need to abandon the notion that content is a special case of data. Indeed, it's the other way around; data is a special case of content that happens to be highly regular in structure."
What of content management systems? Kellogg says:
"Just as poor countries have rich residents, there is an upper class of content that gets to live in databases (e.g., corporate web content, aircraft repair manuals, new drug applications). Typically, this is accomplished through enterprise content management (ECM) systems that both break the content into bite-sized morsels that fit into relational "square tables" and track metadata about it (e.g., author, version, check-in status, required approvals).
"So while upper-class content enjoys life in a database, the great irony is that ECM typically treats the content itself as opaque -- because of the limitations in the underlying database system. That is, while ECM tracks and manages a lot of information about the content, it actually does relatively little to help get inside content. Despite its middle name, ECM today isn't really about content. It's about metadata."
Since it doesn't appear that Kellogg's ideal system will arrive anytime soon, I guess we can go on selling and using ECM systems for some time yet! Happy New Year!