The nature of the record in the age of the real-time web

Science fiction writer Bruce Sterling has posted a transcript of his February 6 talk to Transmediale 10, ‘Atemporality for the Creative Artist’. The talk investigates the impact of the real-time web on all those cultural activities that depend in some way on narrative – including the writing of history.

“History books are ink on paper. They are linear narratives with beginnings and ends. They are stories created from archival documents and from other books. Network culture is not really into that. Network culture differs from literary culture in a great many ways. And step one is that the operating system is an unquestioned given. The first thing you do is go to the operating system, without even thinking of it as a conscious choice.

Then there is the colossally huge, searchable, public domain, which is now at your fingertips. There are methods to track where the eyeballs of the users are going. There are intellectual property problems in revenue, which interferes with scholarship as much as it aids it. There is a practice of ‘ragpicking’ with digital material – of loops, tracks, sampling. There are search engines, which are becoming major intellectual and public political actors. There is ‘collective intelligence’. Or, if you don’t want to dignify it with that term, you can just call it ‘internet meme ooze’. But it’s all over the place, just termite mounds of poorly organized and extremely potent knowledge, quantifiable, interchangeable data with newly networked relations. We cannot get rid of this stuff. It is our new burden, it is there as a fact on the ground, it is a fait accompli.

There are new asynchronous communication forms that are globalized and offshored, and there is the loss of a canon and a record. There is no single authoritative voice of history. Instead we get wildly empowered cranks, lunatics, and every kind of long-tail intellectual market appearing in network culture. Everything from brilliant insight to scurrilous rumor.

This really changes the narrative, and the organized presentations of history in a way that history cannot recover from.”

What are the implications of this for record keeping? If the writing of history as a single narrative becomes impossible, then is the attempt to keep a single narrative of a piece of work impossible and/or unnecessary too? Should we still try to create a file (electronic or paper) for every piece of work, that brings together all the significant documents and communications arising from that work?

In the same talk Sterling describes a new way of looking at history:

history is a story. And to write down the story of the fourteenth century, to just ask yourself – “what happened in the fourteenth century?” — is a very different matter from asking the atemporal question: “What does Google do when I input the search term ‘fourteenth century’?”

This is reminiscent of the switch we are witnessing in record keeping. A decade ago organisations were trying to keep good records of everything they did, to be able to tell the story of everything they did, to keep an organised narrative. Now the aim is reduced to being able to respond to e-discovery/Freedom of Information/Data Protection subject access requests.

Our traditional concept of a record is of a stable information resource: a file, sitting in a record store or archive, or in an electronic records management system. A file that stays unaltered and inviolate as it moves through time, a narrative waiting to be read and interpreted by different waves of colleagues, auditors and historians.

The e-discovery/freedom of information/subject access request is to ‘show us everything your systems know about the dismissal of employee y’ or ‘show us everything your systems know about the contract with company x’. The narrative only comes together when the request comes in. The answer to the question will change over time, as the organisation’s network and systems change, as the tools it has to search them change, and as things drop off the network and are added to it. In the same way, Google’s answer to a search query on ‘the fourteenth century’ will change over time, as its algorithms change and as the internet changes.
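
The idea that the narrative is only assembled at request time can be caricatured in a few lines of code. This is a hypothetical sketch, not a real discovery tool: the system classes and their search interface are invented for illustration.

```python
# A sketch of query-time record assembly. The answer depends entirely on
# which systems happen to be connected and searchable when the request
# arrives; there is no pre-built file.

def respond_to_request(query, systems):
    """Gather everything the organisation's systems currently know
    about a query, in the order the systems are consulted."""
    results = []
    for system in systems:
        results.extend(system.search(query))
    return results

class MailArchive:
    def search(self, query):
        return [f"mail archive hit for '{query}'"]

class FileShare:
    def search(self, query):
        return [f"file share hit for '{query}'"]

# Today two systems are on the network; next year one may be gone and
# another added, so the same request yields a different narrative.
print(respond_to_request("dismissal of employee y",
                         [MailArchive(), FileShare()]))
```

The point of the sketch is that the ‘record’ is a function of the question, the systems and the search tools at a moment in time, not a stable artefact.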

Even if an organisation has been able to keep a good file for a piece of work, the e-discovery/freedom of information/subject access requestor will want to go beyond that file. They will want the organisation to dredge up material kept outside that file that they might find useful, advantageous or interesting.

The implication for records management is that we are in a transition away from managing static files towards managing shifting networks of information.

Can Google be trusted with enterprise data?

Alan Pelz-Sharpe is an incisive Enterprise Content Management system analyst. He blogged today that Google’s aggressive use of its users’ contact data in the recent launch of Google Buzz indicates that Google is unsuitable for the enterprise, as they cannot be trusted with enterprise data.

Alan’s statement is important in the context of the battle between Microsoft and Google for the provision of cloud based e-mail, calendaring and collaboration applications to the enterprise. The battle has only recently commenced but it has already developed into both a price war and a war of words.

  • Microsoft’s offering is BPOS (Business Productivity Online Standard Suite). BPOS provides enterprises with the following applications, hosted in the cloud: Exchange Online, SharePoint Online, Office Live Meeting, and Office Communications Online
  • Google’s offering is Google Apps which packages up Gmail for business, Google sites, Google Groups, Google Docs, Google Calendar and Google Video

Both of these products are immature: Google Apps has been on the market longer, but Google are new to enterprise computing. Microsoft have vast experience of enterprise computing but little experience of providing software-as-a-service (SaaS).

In terms of functionality BPOS, and in particular SharePoint, provides far more features than Google Apps. If levels of service and price were comparable you would expect Microsoft to win this war (though small enterprises may prefer the relative simplicity of Google Apps).

The fact that Alan Pelz-Sharpe sees trustworthiness as a key factor in judging a SaaS vendor shows how different the SaaS space is from the traditional packaged software space. In the packaged software space you did not have to decide whether you could trust the vendor with your data – they would never see it. You simply needed to be satisfied with the features that their software possessed, the financial stability of the vendor and their road map for their product.

In contrast SaaS providers will be judged on the supplier’s reliability, quality of service, security and trustworthiness, as much as on the features of the software. The SaaS market plays in many ways to Google’s strengths. Their success has been built on the resilience of their infrastructure. Their search engine is hardly ever down. They have always guarded the secrets of how they configure their data centres as zealously as their search algorithm, indicating that they regard running data centres as a core strategic competency.

Conversely the move to cloud computing negates two of Microsoft’s strengths:

  • Their monopoly, through Windows, of operating systems in the enterprise is negated because organisations can run applications off rival operating systems based in the cloud, or subscribe to applications (like Google Apps) that run off rival operating systems (like Linux).
  • The expertise that enterprise IT departments have built up over the years in the administration, upgrade and extension of Microsoft products gave them an edge against competitors. Organisations were reluctant to buy software that their IT staff were not familiar with. In the SaaS world applications are administered and upgraded by the provider, and the scope for developing and extending them is limited.

Google and Microsoft are big, cash-rich companies, with plenty of freedom to manoeuvre, who act in what they see as their best commercial interests. I do not want to condone Google’s aggressive use of Gmail users’ contacts data (it made me glad I do not use Google to maintain my contacts). However the commercial imperatives behind Google’s over-aggressive launch of Buzz are not the same as Google’s drivers to enter the SaaS space, and I do not think we can necessarily assume that Google is less trustworthy with enterprise data than its competitors purely on the basis of their actions in the launch of Buzz.

The reasons behind Google’s aggressive launch of Buzz lie in the threat that Facebook poses to its monopoly of the web advertising market. Google’s monopoly of web advertising stems from their monopoly of search, and from the fact that search was until recently overwhelmingly the major route through which individuals were directed to websites.

The growth of Twitter and in particular of Facebook has altered that. More and more traffic on the web is directed by links from friends on Facebook or Twitter, and this is reducing the influence of Google search results. Steve Rubel recently reported that Facebook now drives more traffic to major websites than Google. In another post Rubel said that ‘Facebook is unstoppable, they aren’t just the next Google, they’re the next web’.

Google recognises that in the medium term search will lose out to discovery through social networks. People are more and more using their Twitter network, and/or their Facebook network, to bring them news, rather than going out and searching for it on Google. Google will only be able to retain its dominance of advertising if it can provide a discovery service that is both personalised to you as an individual, and social. It needs to be tailored to who you are, where you are, who you are interested in, and who your friends are. This is why Google is turning itself from a search company into a software company: it needs you to use its applications, because that is how it will get to know you.

Contacts data was at the crux of the controversy over the Buzz launch, and contacts data is crucial to Google. The key advantage that Google’s Android phones have over Apple’s iPhone is that when you log on to your Google account on a Google phone it instantly synchronises your Google contacts and Google calendar to your phone. In contrast getting my contacts and calendar synched between my Mac and my iPhone is a service that Apple charges me £59 a year for (it is worth it to me, and I love the service, but I do not think the charge is sustainable in the face of Google providing a similar service for free).

Contacts data is important to people. They want to keep it in their control as they move from job to job, and device to device. Facebook and LinkedIn function in many ways as address books, but they do not tend to hold phone numbers and have no coverage of people who are not users of those applications. The perfect contacts book would be available when I use my phone and when I use my laptop. It would link with my e-mail, and it would have links to my contacts’ identities on applications such as Twitter, Facebook and LinkedIn.

Google are in a better place than their rivals to provide just such an integrated contact service:

  • Apple are dominant in smartphones, and the iPad could give them dominance of the netbook/tablet space. Their contacts and calendar applications are more pleasant to use than Google’s. But they are weak in e-mail provision and do not provide a social network.
  • Google have the smartphones, the e-mail service, and now with Buzz they are making a play for the social network space. When the Chrome OS netbook is launched they will rival Apple in the netbook/tablet space.
  • Facebook is by far the strongest in the social network space, but does not (yet) do e-mail or phones or any sort of device. There have been rumours about a Facebook phone and a Facebook e-mail service for some time and both would make sense.

Google’s land grab with Buzz was a bid to get into the social network space before Facebook’s extension outwards became unstoppable. They needed to get as many people as possible using Buzz as quickly as possible and that is why they launched it on users with little or no warning. They pushed Gmail users at breakneck speed through a series of questions without giving them any way of judging what the consequences of their choices were. Google used Gmail users’ contacts to suggest people for them to follow, and to suggest them to others. However the consequence of this aggressive strategy was that some people found that lists of people that they frequently had Gmail correspondence with were being displayed on their Google profile.

It was an aggressive move, but it is an aggressive market. Google has more to lose from standing still and seeing Facebook overtake it than it does by aggressively moving to occupy some of Facebook’s space.

Alan Pelz-Sharpe points out in his post that the enterprise market accounts for only 1% of Google’s revenue. He is right that the enterprise market is far less important to Google than it is to its competitors (for example Microsoft). However Google’s war with Facebook is not taking place in the enterprise. Facebook is not trying to compete for the enterprise (Microsoft holds a significant stake in Facebook and the two companies have not encroached on each other’s space). I do not believe that Google has anything to gain by playing as fast and loose with enterprise data as it did with customer contact data when it launched Buzz. For these reasons I am reluctant to read across from their actions in the launch of Buzz to draw any conclusions as to the trustworthiness or otherwise of Google in relation to enterprise data.

The consumerisation of enterprise computing

Paul Buchheit is the brains behind Gmail. He later left Google and founded FriendFeed, which was recently acquired by Facebook.

Last week Paul blogged his opinion on the design of devices and applications.

Buchheit’s post was written in response to criticisms of Apple’s forthcoming iPad. Critics have listed all the things that the iPad won’t do that they would expect a tablet computer or netbook to be able to do. For Buchheit these critics are missing the point:

What’s the right approach to new products? Pick three key attributes or features, get those things very, very right, and then forget about everything else. Those three attributes define the fundamental essence and value of the product — the rest is noise. For example, the original iPod was: 1) small enough to fit in your pocket, 2) had enough storage to hold many hours of music and 3) easy to sync with your Mac (most hardware companies can’t make software, so I bet the others got this wrong). That’s it — no wireless, no ability to edit playlists on the device, no support for Ogg — nothing but the essentials, well executed.

We took a similar approach when launching Gmail. It was fast, stored all of your email (back when 4MB quotas were the norm), and had an innovative interface based on conversations and search. The secondary and tertiary features were minimal or absent. There was no “rich text” composer. The original address book was implemented in two days and did almost nothing (the engineer doing the work originally wanted to spend five days on it, but I talked him down to two since I never use that feature anyway). Of course those other features can be added or improved later on (and Gmail has certainly improved a lot since launch), but if the basic product isn’t compelling, adding more features won’t save it.

By focusing on only a few core features in the first version, you are forced to find the true essence and value of the product. If your product needs “everything” in order to be good, then it’s probably not very innovative (though it might be a nice upgrade to an existing product). Put another way, if your product is great, it doesn’t need to be good.

At the end of the post Buchheit made a comment that helps explain why so many enterprise applications suffer usability problems:

Disclaimer: This advice probably only applies to consumer products (ones where the purchaser is also the user ..). For markets that have purchasing processes with long lists of feature requirements, you should probably just crank out as many features as possible and not waste time on simplicity or usability.

Enterprises still purchase their applications and devices through a long list of feature requirements. Microsoft are kings of enterprise computing because they provide so many features in their products: SharePoint can be made to do almost anything. It isn’t just Microsoft that operates like this. Every Enterprise Content Management system vendor plays the same game, favouring features over usability and design values, because it is features that they are judged on.

This contrast between the design of devices and applications intended to appeal directly to consumers and those intended to win through enterprise procurement processes is coming to a head. More and more areas of enterprise computing are becoming markets for which consumer-targeted devices and applications compete.

Three years ago the battle for the smartphone market was a straight head-to-head between Research in Motion’s BlackBerry and Microsoft’s Windows Mobile. Both assumed the smartphone market would go to the supplier that won the enterprise market, and both crammed their smartphones full of features. Then Apple brought out the iPhone, with less functionality, appealing directly to consumers, and a design so intuitive that they did not have to ship a manual with the phone. Now the future of the smartphone is a battle between two consumer companies: Google and Apple.

These two companies will also be the ones fighting it out for the tablet/netbook market. This year will see the launch of Apple’s iPad, and of Google’s Chrome OS – a new operating system for netbooks. Both will be bought as consumer devices but used for work. Read this post from Pete Gilbert for a discussion of how the iPad will support work in ‘the third place’ – outside of both the home and the office, on the move, in cafes, on trains, airport lounges etc.

The trend of enterprise computing is to require less and less to be installed on the PC, with the logical end result being that your device will only need a browser. For over a decade Microsoft have had a very safe income stream from Microsoft Office, which always had to be installed on the client device. Now it has to compete with Google Docs, which not only doesn’t require anything installed on the device, but is also free. Their solution is Office Web Apps, which will allow people to view and edit documents held in SharePoint 2010 without having MS Office installed on the device.

As soon as you reach the point where only a browser is required on the device then (in theory!) it doesn’t matter what device an employee uses (so long as it doesn’t compromise security). That will work to the advantage of both the enterprise and the individual: the enterprise because they will no longer need to worry about providing every individual with a PC (and configuring and maintaining them) – the individual because they can choose their device of preference.

Consumerisation is a trend that affects applications as much as devices.

The one thing that every web 2.0 application has in common is that all of them were written to be used without training and without helpdesk support. It is conceivable that enterprises may start to regard training and helpdesk support as unwanted costs of applications (much like cloud computing vendors are encouraging enterprises to see system and server administration as unwanted costs).

Last week I heard Roger James, Director of IT at the University of Westminster, describe to the UNICOM Records Management Update conference how the University provided Google Apps to all its students and staff without having to provide any training. It has been in use for over a year and they have so far received only 150 help desk calls. Google Apps is an amalgam of applications such as Gmail, Google Calendar and Google Docs that were first launched as free consumer products on the web.

Simon Wardley’s overview of cloud computing

This Wednesday I heard Simon Wardley give a fascinating overview of cloud computing to the Records Management Update 2010 Conference organised by UNICOM. Simon is responsible for Cloud Computing Strategy for Canonical.

Simon told us that cloud computing is difficult to define because it is not a thing, it is a transition.

The transition is from:

  • computing-as-a-product (the enterprise buys in hardware, operating systems and applications as products which they install, configure, deploy and maintain themselves)

towards:

  • computing-as-a-service (the enterprise subscribes to applications which are hosted in the cloud, deployed on operating systems that are hosted in the cloud, sitting on hardware that exists somewhere on the cloud)

The transition period will be problematic and disruptive for companies. They won’t know who to trust, or what standards they should be insisting on, or which suppliers will be viable over time, or how they should govern their information in the cloud. There will exist many different cloud models during the transition period. Enterprises won’t just flick a switch and move all their IT infrastructure, all their development and all their applications straight over to the cloud.

Cloud computing is not a new phenomenon. John McCarthy and Douglas Parkhill both (separately) came up with the concept of utility computing in the 1960s. The necessary technology (virtualisation, which allows a provider to run several different virtual servers on one piece of hardware) was described by Jerry Popek et al in the 1970s. But the crucial attitude change came more recently, with the writings of Paul Strassmann and Nicholas Carr, who argue that the majority of IT activities in an enterprise do not give the organisation any strategic advantage. In other words IT has become a commodity, well enough defined that no enterprise gets a strategic advantage from doing it better or differently than their competitors.

Simon told us that some organisations will lose out in the transition. One of the key risks is supplier lock-in. To avoid supplier lock-in you need to be able to get your data out of your existing supplier. You also need alternative suppliers to be both willing and able to accept your data in the form that you extracted it from the previous supplier. The cloud has not yet reached that level of maturity: it is still in a transition period and there is no agreed de facto standard for APIs (the application programming interfaces through which organisations access, amend and add to the data held by their cloud provider). Most cloud providers have set up their own APIs to their services. Simon showed us a long list of different cloud APIs. If you choose a supplier whose API is not compatible with any other vendor’s, changing supplier will be hampered by the cost of adapting your data so that it can be deposited through the different API.
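
The switching cost can be sketched in a few lines of code. Both provider classes and all their method names are invented for illustration; neither corresponds to any real vendor’s API.

```python
# A hypothetical illustration of cloud API lock-in: two providers
# offering the same storage service through incompatible interfaces.

class ProviderA:
    def __init__(self):
        self.store = {}
    def put_object(self, key, data):      # Provider A's verb for upload
        self.store[key] = data

class ProviderB:
    def __init__(self):
        self.store = {}
    def upload(self, name, payload):      # Provider B's different verb
        self.store[name] = payload

def migrate(source_store, target):
    """Moving suppliers means translating every call and every record
    into the new provider's interface - bespoke glue code that a shared
    API standard would make unnecessary."""
    for key, data in source_store.items():
        target.upload(key, data)

a, b = ProviderA(), ProviderB()
a.put_object("minutes-2010.doc", b"...")
migrate(a.store, b)
print(sorted(b.store))  # the data survives, but only via the adapter
```

With dozens of incompatible APIs on Simon’s list, an organisation potentially needs a different `migrate` function for every pair of suppliers it might move between.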

Simon showed us the complexity of the cloud computing stack, with its three levels of infrastructure, platform and applications.

  • Software-as-a-service vendors provide applications – tools such as Google Apps, SharePoint Online and Salesforce that end users can utilise
  • Platform-as-a-service vendors provide programming language support and development tools to enable an enterprise to develop new applications and to deploy those applications [see this Information Week article for a survey of the platform-as-a-service market]
  • Infrastructure-as-a-service vendors provide the ‘bare metal’ hardware, plus an operating system and a virtualisation tool

Simon anticipates that there will be cloud computing disasters and failures. Cloud computing creates dependencies up and down this stack. For example, you may trust your software-as-a-service provider to provide you with a stable, reliable application. They in turn may trust their platform-as-a-service provider. But if the infrastructure-as-a-service provider at the bottom of the stack fails, this will have a knock-on effect up the stack, causing both the platform-as-a-service and the software-as-a-service to fail. Simon drew parallels with the catastrophic banking collapse of late 2008, which resulted from bankers being shielded from the true risks of their investments by the different layers separating their investments from the properties on which those investments were secured.
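
The knock-on effect up the stack can be modelled as a toy dependency chain. This is a sketch only; real cloud services fail in far messier ways than a simple up/down flag.

```python
# A toy model of dependency in the three-level cloud stack: a layer is
# only available if it is up AND everything beneath it is available.

class Layer:
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on
        self.up = True

    def available(self):
        if not self.up:
            return False
        # Availability propagates up from the bottom of the stack.
        return self.depends_on.available() if self.depends_on else True

iaas = Layer("infrastructure")
paas = Layer("platform", depends_on=iaas)
saas = Layer("application", depends_on=paas)

iaas.up = False            # the bottom of the stack fails...
print(saas.available())    # ...and the application fails with it: False
```

The subscriber only ever signed a contract with the application layer, yet their service depends on every layer below it.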

However Simon does not anticipate that the risks, disruption and losses involved in the transition to cloud computing will prevent that transition happening. Organisations that continue to host their own IT may find themselves at the wrong end of a competitive gap between them and organisations that use cheaper, more efficient cloud computing services. This may mean that many organisations will eventually have no choice but to move to a cloud computing model, in some shape or form, for most non-strategic computing provision.

The current state of the electronic records management system market

The noughties were a decade of two halves for electronic records management systems in the UK.

The first half of the decade: 2000 to 2005

The first half of the decade was a boom period. The UK National Archives (then called the Public Record Office) issued an influential statement of requirements (TNA 2002) for electronic records management systems, with an accompanying evaluation programme. Similar evaluation programmes were set up in other countries. The requirements included the ability to:

  • hold a hierarchical corporate records classification (fileplan)
  • link retention rules and access rules to the fileplan
  • allow folders and documents to inherit retention and access rules from the heading of the fileplan that they are saved under
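
The inheritance requirement in the last bullet can be sketched as follows. The class, its fields and the example headings are invented for illustration; this is a sketch of the requirement, not of any product’s implementation.

```python
# A minimal sketch of retention-rule inheritance down a hierarchical
# fileplan, of the kind TNA 2002 required.

class FileplanNode:
    def __init__(self, title, parent=None, retention=None):
        self.title = title
        self.parent = parent
        self._retention = retention   # None means "inherit from above"

    @property
    def retention(self):
        # A folder takes its rule from the nearest ancestor that sets one.
        if self._retention is not None:
            return self._retention
        return self.parent.retention if self.parent else None

root = FileplanNode("Corporate records")
hr = FileplanNode("Human Resources", parent=root,
                  retention="destroy after 6 years")
case = FileplanNode("Dismissal case files", parent=hr)

print(case.retention)   # inherited from the HR heading: destroy after 6 years
```

Access rules would follow the same pattern: set once on a heading of the fileplan, inherited by every folder and document filed beneath it.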

The acronym EDRMS (electronic document and records management systems) was applied to systems that complied with statements of requirements such as TNA 2002. In the UK large organisations in the public sector and in regulated sectors of the economy would routinely specify TNA 2002 compliance when issuing a tender for a corporate scale document management system.

Like most booms it was overdone. Some UK central government departments rushed into implementations, spurred on by a Modernising Government target (all UK government departments must create new records electronically by 2004), and by a desire to be ready for the coming into force in January 2005 of the Freedom of Information regimes for England and Wales, and for Scotland. Corporate fileplans proved difficult for organisations to construct and unpopular with users.

Nevertheless EDRMS had established itself as an integral part of the enterprise content management (ECM) landscape. All the big ECM suite vendors (IBM, Open Text, EMC, Oracle etc.) now provide an EDRMS offering within their suites, alongside their web content management, portal and workflow offerings.

The second half of the decade – 2005 to 2009

In the second half of the decade EDRMS struggled. Some of the factors behind the decline were specific to the UK, but the more important ones were global. The Modernising Government target deadline passed unnoticed, and the fear of Freedom of Information subsided. The National Archives withdrew their testing regime at the end of 2005 to make way for a European Union sponsored regime (MoReq2). It would be three years before the MoReq2 specification was published, and by that time EDRMS was facing a new threat in the shape of SharePoint 2007.

SharePoint 2007 came out towards the end of 2006. It had no facility to apply a corporate fileplan, and an obscure and unworkable method for applying retention rules (in the shape of the SharePoint Records Centre). SharePoint 2007 devolved power down to the team level, and hence did not hit the same user adoption problems that EDRMS implementations often ran up against. Microsoft cleverly/generously included client licences for SharePoint in the Enterprise Agreements that most organisations already hold with them for Windows, Exchange and Office. This meant that many organisations wanting corporate-wide document management systems did not issue an invitation to tender (ITT): instead they simply started using the SharePoint licences that they already had. This has had a ruinous effect upon electronic records management system evaluation programmes, such as MoReq2, whose influence is predicated upon buyers specifying compliance when they issue an ITT.

So here we are at the start of 2010. The acronym EDRMS has almost disappeared from our language. What are the factors that will shape the way the electronic records management system market develops over the coming years?

Make or break for MoReq2

MoReq2 came out at a bad time for records management standards and evaluation programmes. The double whammy of the global downturn and the rise of SharePoint meant that there was less money to be made out of electronic records management systems. In order to comply with MoReq2 all vendors, even those meeting existing standards such as TNA 2002, would need to invest in their products to make the necessary changes. Most of the vendors decided not to make those investments and have not yet submitted their products for testing. The result is that at the time of writing only one vendor (Fabasoft) has achieved compliance with the standard. In November 2009 I heard Doug Miles of AIIM tell an AIIM trade members meeting that so long as there was only one vendor on the list, buyers would be unlikely to specify MoReq2 compliance in their Invitations to Tender, because if they did so they would end up with a shortlist of one supplier.

As a result of this situation MoReq2 will be overhauled during the early part of 2010, with a view to making it less onerous for vendors to adapt their products. Some functionality that is currently labelled as mandatory will be downgraded to optional.

Rory Staunton sits on the MoReq2 governance committee, and he has written an interesting article in the January 2010 Records Management Bulletin. He does not pull any punches. He says:

‘The market reality is that records management software as a component market has almost collapsed. The sales of records management software licences from vendors have been falling before and during this recession, and most vendors are chasing a non-existent goal they call eDiscovery. US-style eDiscovery is almost non-existent in Europe, and MoReq2 in its original version was a basket case.’

Electronic records management has a tough battle ahead of it. Staunton believes that the best chance of MoReq2 having an influence is to persuade regulators to adopt it:

‘The way to win a game, of course, is to write the rules, so clever national archives, vendors and large users should target regulators in all sectors and propose that they adopt and extend MoReq2-based approaches… The real success of MoReq2 will be judged in five years’ time, depending on the number of regulators that embrace and extend it.’

Staunton is thinking of regulators in fields ‘from environment to nuclear, healthcare to telcos, retail to finance’.

Staunton went on to say:

‘There is no time to waste if the records management community are to ride the wave of impending business regulations that are sure to emerge from the current and widespread breakdown in financial sector regulation. In market terms, Europe needs some basic approach to securing business information. Europe needs a more productive approach than the knee-jerk, lawyer-led, e-mail-centric eDiscovery systems that the North Americans are adopting, which are a retrospective friction on business, and do not enhance compliance as they cure, rather than prevent, breaches of process.’

    The coming of SharePoint 2010

    John Newton (chairman and CTO of Alfresco, a competitor to SharePoint) paints a bleak picture of the effect of SharePoint on ECM vendors, in a blogpost outlining his predictions for 2010:

    ‘You have to hand it to Microsoft; they seem to have scared the bejesus out of the traditional ECM vendors. They all seem to have rolled over and played dead in the wake of SharePoint 2007.’

    And it could get worse before it gets better. SharePoint 2010 is coming, along with ‘ever tighter integration with Office, better web support, new records management, and claims to better scalability and administration’.

    Nevertheless Newton believes that SharePoint still has vulnerabilities, arising from the fact that organisations adopting it are tied into the whole Microsoft stack:

    ‘Microsoft has chosen not to address its fundamental architectural flaw of storing content in the database and it is still an exclusively Microsoft-centric platform. Forget whatever database, operating system, language, browser you have – you better get used to SQL-Server, .NET, Windows, IE and don’t forget Silverlight. If the traditional vendors can’t battle the crap out of that, then they deserve to lose. The next 12 months will be critical during the transition to SharePoint 2010.’

    ECM and SharePoint: competition or partnership?

    The reaction of ECM vendors to SharePoint 2007 has gone through three phases:

    • At first ECM vendors believed that the difficulties of governing information in SharePoint 2007 would mean that SharePoint would not greatly threaten their market share.
    • When it became clear that SharePoint 2007 was shipping at a rapid rate, the ECM vendors started developing ‘web parts’ that could be deployed within the SharePoint environment. The web parts enabled people to save documents to the ECM, edit them and search for them, without leaving their SharePoint environment.
    • More recently vendors have realised that a much deeper integration with SharePoint is required. The problem with using web parts for integration with an ECM was that SharePoint knew nothing of the documents. Clients wanted the content to be available to SharePoint so that they had the option of using SharePoint functionality (such as workflows) with the documents and other content that they created.

    Most of the ECM vendors that I have spoken to recently are adopting an approach to integration with SharePoint 2007 which sees content stored in the ECM system, but made available to SharePoint so that, to all intents and purposes, SharePoint views it as SharePoint content.

    The ECM vendors have a reasonable tale to tell. Their selling proposition is:

    • that you can use the ECM to house your fileplan, and apply it to any system that you connect to it (not just SharePoint)
    • that you can apply the fileplan to any type of content (not just documents, but also content such as discussion board posts, blog posts etc.), and at any level of granularity (not just individual documents, but folders of documents, document libraries or entire team sites)
    • that they provide a more efficient model for storing content. SharePoint stores both the document/content itself and the metadata about it in a database (SQL Server). ECM systems store the document/content in a repository, with only the metadata stored in a database
    • that although the content is stored outside of SharePoint/SQL Server, it is still made available to SharePoint, so whatever you want to do in SharePoint with that content you can still do.
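    The storage difference in the third bullet can be sketched in a few lines of Python. This is a toy illustration of the two models, not how either product is actually implemented: in the first, the document bytes and the metadata sit together in the database; in the second, the bytes go to a file repository and the database holds only a metadata row pointing at them.

    ```python
    # Toy contrast of the two storage models (illustrative only).
    import os
    import sqlite3
    import tempfile

    def store_in_database(db, name, content):
        """SharePoint-style model: document bytes and metadata in one table."""
        db.execute("INSERT INTO docs_blob (name, content) VALUES (?, ?)",
                   (name, content))

    def store_in_repository(db, repo_dir, name, content):
        """ECM-style model: bytes written to a file repository; the database
        keeps only metadata (name, path, size) pointing at the file."""
        path = os.path.join(repo_dir, name)
        with open(path, "wb") as f:
            f.write(content)
        db.execute("INSERT INTO docs_meta (name, path, size) VALUES (?, ?, ?)",
                   (name, path, len(content)))
        return path

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE docs_blob (name TEXT, content BLOB)")
    db.execute("CREATE TABLE docs_meta (name TEXT, path TEXT, size INTEGER)")
    repo = tempfile.mkdtemp()

    doc = b"minutes of the January meeting"
    store_in_database(db, "minutes.txt", doc)
    store_in_repository(db, repo, "minutes.txt", doc)

    # Model 1: the database row carries the full content...
    blob = db.execute("SELECT content FROM docs_blob").fetchone()[0]
    # ...Model 2: the database row carries only a pointer and metadata,
    # and the content is fetched from the repository when needed.
    path, size = db.execute("SELECT path, size FROM docs_meta").fetchone()
    fetched = open(path, "rb").read()
    ```

    The point of the second model is that the database stays small (rows of names, paths and sizes) while the bulky content lives in storage better suited to large binaries.
    
    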

    However the vendors are facing an uphill battle. The problems for them are that:

    • Most of the ECM vendors seem to have settled on the same model. They are fighting for a relatively small market, and customers may find it hard to differentiate them from each other.
    • They are reacting to SharePoint, rather than causing problems for Microsoft. It is one-way traffic: there is nothing that any ECM vendor has done over the past few years that Microsoft has felt compelled to react to. Microsoft has nothing to fear from the ECM vendors in the collaboration space. (Note that things are slightly different in the web content management and portal spaces, where SharePoint has had less impact on the market share of traditional ECM vendors.)
    • As Microsoft gradually improve information governance within SharePoint, they will leave less and less space for ECM systems in the collaboration space.
    • ECM vendors will be reluctant to rock the boat with Microsoft. Several ECM vendors have told me that Microsoft has been very helpful to them, sharing their roadmap and plans for 2010 well in advance so that the ECM vendors can plan how their products will integrate with it. But it means that in the collaboration space ECM vendors are becoming partners rather than competitors with Microsoft.

    It is interesting to note that the only two vendors that are trying to take Microsoft on in the collaboration space (Alfresco and Google) are not traditional ECM vendors. Alfresco provides open source collaborative and document management software. Google provides Google Apps from the cloud, as a subscription service.