Web 2.0 Blog – Discovering Innovation Opportunities using Social Media

This is my rough draft in my work with the W3C E-Gov Interest Group. I wanted to get comments from those working on social media in government as we work to finalize our recommendations. Please keep in mind this is for an international standard, so I have not assumed that 508 compliance is required but rather wrote about what compliance policies in the digital age should take into consideration.

Multi-Channel Distribution Standards

Distribution to Non-Government Websites and Platforms

In an age of connected data, standards are not just about the format of information but are also about accessible and fair distribution. That said, a balance must be struck so that distribution requirements do not themselves become a barrier that limits how much information is distributed.

In the digital age, information is key to both the economic and social development of societies. Governments therefore need to prioritize making the most information available through broadly distributed channels over limiting the amount of information in order to perfect its availability and distribution. This is a classic 90/10 effort issue: the last 10% of breadth in distribution and availability, pushing toward perfection, would take 90% of the effort. Too often governments have opted for an all-or-nothing method of information distribution, and it has resulted in less distribution and a lesser good for the public as a whole. The amount of information is too vast, given the current state of information storage formats and technology, to make all of it accessible through all conceivable methods and channels. Accepting this fact and opening up government data needs to be the priority.

That said, widespread availability should not be discarded; rather, a system should be in place to determine which information warrants the broadest, most accessible distribution and which information should be posted even though resources are insufficient for the broadest possible access. (Of course, in both cases the format chosen should be a non-proprietary and, when appropriate, 'mashable' one so that the public may redistribute and remix the information if it chooses.) Concern for availability to all may be handled by providing a government-sponsored service which can provide specific data in alternate formats on demand.

This is not a radical departure from traditional accommodations but rather a continuation of choices which have become routine. An excellent example of how this extends existing policies is library books and the blind in the US. Library books for the sighted are more widely and easily available at libraries across the country, but Braille versions of books can be accessed on demand through the Library of Congress' National Library Service for the Blind and Handicapped. A similar program could be developed for on-demand access to multimedia material for the handicapped. That said, basic accommodations which can easily be built into websites to promote accessibility should be addressed with social media providers by encouraging broad accessibility to their material, and links should be provided on multimedia home pages explaining how to request more accessible versions such as closed-captioned videos.

It seems some people are misunderstanding this as advocating abandoning progress in accessibility. I assure you this is not the case. It simply states plainly what already occurs throughout society and government. If you look at multi-lingual issues, not every US government document is immediately available in Chinese, or even Spanish for that matter. I am simply saying that EVERYONE is better served by as much government information as possible being available in some way, and that should be the priority. It is simply not possible to make everything available in all possible ways, but when the need arises, on-demand services can supplement the less broad methods of making information available. I hope this clears it up.

Availability in Social Media and Across the Digital Divide

Availability is determined by three factors which form a digital divide in most countries: device, bandwidth or connectivity, and user disability in using the device (commonly addressed by 508 standards in the US). Device availability varies because of interoperability standards but also because of lifestyle, screen size, audio clarity and raw processing power. (My BlackBerry can play audio and some video, but in reality I am not going to try to access much of that content even if there is a way to squeeze it onto the device.) Both wider broadband distribution and availability of information on mobile devices can help to solve this issue. One of the ways in which governments are broadening broadband access is through free internet-enabled computers at libraries and kiosks. The type of access which is made widely available to citizens for free at public locations, as well as the connectivity and devices available at the lowest price points, should be considered when choosing data standards, platforms, devices and websites for the bulk of information. If broadly available public access is not compatible with how the majority of a country's citizens use the internet, then clearly public internet access is not adequate.

Fair distribution on Non-Government Portals

Lower-cost devices and lower-cost access in most countries mean that whether a website or platform makes text-based information available on low-cost mobile platforms should be taken into account. While most platforms are multimedia, there is still often the opportunity to provide some information in text form for mobile access.

The availability of multimedia information should be announced and searchable through text-based services so that users who have limited access to multimedia-enabled workstations can find out about the resources they need and go to a kiosk or library where better connectivity or devices are available. Preventing those without full access from even discovering what is available would effectively block its use, since time and context when accessing the public internet are limited.

Fair distribution becomes an issue when government-distributed content on selected websites, platforms or devices creates an unfair advantage for a particular device, platform, distribution network, or website, or disadvantages a defined demographic among the citizenry. It seems appropriate for governments not to have to expend resources on wide distribution if the bulk of the intended audience is on one platform or website, but some consideration should be taken so that governments do not become unintentional monopoly makers through their social media distribution choices. Again, this consideration should not take priority over wide distribution of the bulk of information but should be a factor in making policy choices.

Posting Information on the Social Web

The nature of social media information is that it is posted in locations which are not on government servers under its control and is distributed through social connections, not through formal organizations. Social media information is distributed on websites which choose whom to allow access to the website and which behaviors are acceptable for participation. Also, a user's activity and connections on a social media website determine to some extent how much exposure they receive to information available on that site. For instance, someone who is a friend of a person who participates in government discussion boards will be more likely to be exposed to government-distributed information than someone who is not similarly friended. Likewise, people who belong to communities that participate in smaller online venues will not be exposed to the government-distributed information on the larger venues. For instance, what about the parent who blocks YouTube on the household computer because of objectionable material? Some consideration of the unevenness of social media distribution should be made.

Multimedia central feed for externally published info

Therefore a government using social media to distribute multimedia should create a public location which announces the distribution of documents and content, with links to their openly accessible locations.

A central text feed of all distributed info will serve four purposes:

1. Provide the public with a completely open and highly accessible index to content provided through social media channels.

2. Provide the government content in a form isolated from other content to broaden distribution to those who prefer to avoid mixed distribution sources.

3. Provide other smaller content providers and websites a mechanism to have the same government content as larger providers.

4. Provide a central reference location for any on-demand accessibility service requests for government sponsored or partnered services such as closed captioning or braille.

This media index feed could take the form of a searchable text feed linking to the original documents. The text feed would be searchable from text-based mobile devices as well as web browsers. Search would be provided through a tagging mechanism which at the least allows those posting the information to create new search tags and categories. It might also allow the public to tag items to create a folksonomy-based search. Documents would be in a freely accessible format, so long as that format allows for the same distribution, in both context and content, to other websites as was carried out by government officials. For instance, if a document was associated on a social media website with certain search tags, titles and descriptions, those tags should be indicated in this feed. If a document had hyperlinks or embedded content placed in it by government officials, those hyperlinks and content should be preserved in this centrally stored format.
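To make the idea concrete, here is a minimal sketch of such a tagged index feed. Everything in it is hypothetical: the entry fields, the example URLs and the `search_feed` helper are illustrations of the concept, not an actual government system or standard.

```python
from dataclasses import dataclass, field

@dataclass
class FeedEntry:
    """One item in the hypothetical central index feed."""
    title: str
    url: str            # link to the openly accessible original content
    media_type: str     # e.g. "document", "video", "audio"
    tags: set = field(default_factory=set)  # official plus public (folksonomy) tags

def search_feed(entries, tag):
    """Return every entry carrying the given tag, case-insensitively."""
    tag = tag.lower()
    return [e for e in entries if tag in {t.lower() for t in e.tags}]

# Example feed with two entries (hypothetical URLs).
feed = [
    FeedEntry("Town hall video", "https://example.gov/video/123",
              "video", {"budget", "town-hall"}),
    FeedEntry("Budget summary", "https://example.gov/docs/budget.odt",
              "document", {"budget", "2009"}),
]

print([e.title for e in search_feed(feed, "Budget")])
# → ['Town hall video', 'Budget summary']
```

Because the entries are plain text with links, the same data could be rendered as a lightweight mobile page or exposed as an RSS/Atom-style feed for other websites to subscribe to.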

Video and audio should be available from a link on this central feed in an instantly playable format, such as a progressive player linked to cloud-based storage so high demand will not slow distribution, as well as in a downloadable format which can be used to replicate the distribution on other websites. Again, the meta or context data which allows for duplication of the original post to the primary distribution site should be stored in the feed or the linked files.

In the case of virtual-world information distribution, some capture of the virtual-world experience should be attempted to replicate the primary message in some way, such as a video of the experience. If it is possible to store 3-D objects or actions in an open format, that content may also be considered for placement in this central data store.

To the extent that an industry standard is developed to allow easy subscription to, or importing of, documents and audio/video content on alternate media websites and platforms, governments should adopt these methods to support their central feed.


Governments should clearly prioritize distribution and accessibility options which do not pose barriers that would decrease the amount of information distributed. At the same time, some consideration of disabled users, users without high bandwidth or high-cost devices, and devices, platforms and websites with smaller audiences should be taken for high-priority information, along with possible on-demand conversion services. A low-barrier method which could serve as a base from which to achieve these accommodations would be a central text-based multimedia index feed containing hyperlinks to content in open formats. This feed would be searchable from both text-based mobile devices and internet browsers and would contain context information allowing replication of the content postings which were created on non-government websites by government officials. If possible, this central feed would facilitate posting of content to websites by those website owners, so that the websites themselves can opt in to the distribution.

In the policy-to-effort session at Government 2.0 Camp, Lovisa Williams of the Department of State summed up the problem of building on cross-agency efforts as "Don't share your best practices, share them when they are good enough." It sounded like a good start to a blog post.

For more on workplace collaboration check out the workshop I am organizing on April 23rd.

The essence of collaboration is to steadily build on one another's ideas bit by bit until you get a solution. Of course, contributions in reality should be somewhat thought through, but they by no means need to be final, because if sharing final plans were enough, then there would be no need to collaborate.

The current cross-agency practices seem to be built on sharing each other's best practices, which means the practices have been fought for, tried out, approved and finalized within an agency, and no one in that agency has any stomach for opening up that can of worms again. So suggestions for improvement from the outside, after the practice is shared, are not as likely to be incorporated.

Sharing these final lessons learned does not accumulate ideas from different perspectives and situations to create cross-agency solutions and support. Instead it passes an agency-specific solution to another agency, at which point it gets rewritten. There is some efficiency gained, but it doesn't seem to compare to a truly collaborative process in which ideas are shared and accumulated quickly, yielding an agile and responsive result.

If collaborative efforts begin with sharing final outcomes which the authors don't want to change because they have invested in these as being final, then essentially the collaborative process doesn't begin. It's more a building on lessons learned than a collaboration.

It's kind of like growing your vegetables in your own walled garden and only sharing the seeds after you have harvested the first successful crop. In order to build an agile and responsive government, we all need to plant seeds at the same time and figure out together how to get them to grow in the first season.

The systemic problem with sharing methods and ideas before they become a 'best practice' seems to be fear of acceptance within the agency or, worse yet, criticism from outside the agency. A best practice almost by definition has the stamp of approval of agency heads. Therefore, by definition, a "good 'nough" practice does not have that stamp of approval, and there is a fear of implied approval and finality when you share it cross-agency. It seems we either need to create semi-private cross-agency channels so people can be comfortable sharing practices still 'in progress' or overcome the fear of unfinished solutions being seen.

UPDATE: I just put this data into Google Maps here.

Disclaimer: Wordles are often more artistic than informative, but I thought it might be fun to visualize the voice of the people.

There is a lot to write about from Government 2.0 Camp, but I am a bit under the weather and wanted to relax a little tonight. So I created Wordles of the top 100 questions to President Obama from each category of the Open for Questions application on Whitehouse.gov. I also appointed myself the moderator and ruled that marijuana is a legitimate health reform issue, but not a budget, financial stability, green jobs and energy, or jobs issue. The off-topic questions were removed from the top 100 in those categories. However, if you want to see the complete raw data in Wordles, it is available here.

Phrase Net from IBM's Many Eyes. Top 78 words and their relations from all categories (top 100 from each after moderation).

Phrase Net from top 50 words from Open for Questions

Word Tree All Categories starting with Mr. President

Mr. President Word Tree

The full wordles below are on Wordle.net.  Click on the thumbnail to see the full wordle:

Top 100 from all questions combined:

Top 100 Questions from all 10 categories


Education Top 100

Home Ownership:

Homeownership Top 100

Health Care Reform:

Health Care Reform Top 100


Veterans Top 100

Small Business:

Small business Top 100

Auto Industry:

Auto Industry Top 100

Retirement Security:

Retirement Security Top 100

Green Jobs and Energy:

Green Jobs Energy

Financial Stability:

Financial Stability Top 100

Jobs Top 100

Budget Top 100

Ken Fischer is Chief Innovation Officer for ClickforHelp.com and thinks about social media, economics, linked data and Government 2.0. Let him know if you want your web presence to be made more interactive and relevant.

Economics, according to Wikipedia, is the social science that studies the production, distribution, and consumption of goods and services.

Notice money is not mentioned. But the current economic theories, whether socio-economic (Kondratieff/Kondratiev, Schumpeter, Kuznets), fiscal-economic (Keynesian/Monetarist) or political-economic (Libertarian/Austrian), are all based on monetary markets.

Well, that makes sense, because MOST of the production, distribution and consumption of goods and services in society has used money as a means to relate them to one another for at least 200 years. Money was a great leap forward in human history because it allows independent transactions of goods and services. You can sell to one person and buy from any other, instead of having to set up a complex barter network with multiple prosumers or grow/hunt/forage everything on your own.
In modern times, you can also manipulate the market by artificially altering the money supply, like a throttle on an engine.

Even so, non-monetary transactions have always been with us and seem to take three forms:

1. Bartering. Exchange between two people or a chain of prosumers.

2. Reputation. People will do things so that others think of them differently (usually in a more positive light).

3. Common good. Sometimes hard to distinguish from reputation. A good example is for-profit companies which contribute to a common open source code base so they all share an updated platform on which to build their products.

These have not been considered when discussing economics in the past century, because these types of economies were usually limited to family and local neighborhoods for most people. A few examples:

1. Entering a local pie contest. (production driven by reputation)

2. Helping family members to do home improvements. (service driven by reputation)

3. Taking care of your lawn so it looks as good as the neighbors' and keeps the value of the neighborhood high. (service driven by common good)

These have not been significant to a modern economic structure. And while wealthier people have made donations driven by reputation, such giving is usually lumped into the other economic theories because it involves money instead of services.

Most people have been limited in the amount of goods they can produce for these non-monetary motivations, because raw materials bought with money were usually required, at which point it essentially becomes a donation with a small value-add. (Purchase the ingredients for brownies and donate them to the bake sale.)

Services can more easily be offered for non-monetary motivations, but their significance has also been limited in modern times. Usually these involve some basic labor such as fixing the neighbor's flat tire. When they get more complicated, they start to compete with opportunities to earn income, which tends to limit the amount of skilled service people are willing to donate.

Services offered for non-monetary motivations have in the past also been limited in their impact on the larger economy for three major reasons:

1. Monetary costs of distribution and replication.

2. Limited distribution limits the potential impact, and thus the motivation, for common-good and reputation-driven services and goods.

3. Modern demands for complex goods and services limits the impact of the individual.

Social media removes all of these barriers in the case of information products and services. (See the more detailed explanation below.)

In social media, three types of phenomena have started to change the impact of non-monetary activity on the larger economy:

1. Crowdsourcing/collaboration (distributed production, distributed problem solving). The best examples are in software: Linux, Drupal, and other significant software programs comparable to those other companies charge license fees for.

2. Information distribution and analysis. Blogging, in short. Reporting on events, spreading the reports of others and analyzing news events.

3. Social networks. These have made it possible to impact large numbers of people if you create or collect highly relevant services or information.

Social media is technology-amplified social interaction and allows for broad, free distribution of information products and services. Linux is now starting to threaten Microsoft's dominance in the server and device markets. Blogging has now replaced a significant amount of the magazine and newspaper industry. (Actually, newspapers seem to be hanging on by getting story leads from the blogosphere. Don't believe this? Check the thickness of your favorite magazine and compare it with what it was 5 years ago.)

So the big question: what is the impact on people of a non-monetary social gain? How do you compare it against monetary gains?

Do we need to now combine non-monetary and monetary economics into a more comprehensive understanding of the well being of society?

Broadcast media. Monetary loss?

Broadcast media is irrevocably changing now that anyone has the power of mass information distribution. And it is being replaced in part by largely uncompensated product. The more organized the blogosphere gets, the more crowdsourced news sites will pop up and probably dominate. The power of news analysis will be in the hands of the most trusted analysts (bloggers) rather than media distributors, which many would view as a more just world.

The most significant loss related to media would seem to be the loss of the advertising base which we have relied on to drive consumer demand in the Western world for the past 50 years. If people listened to their friends, they might buy what they need, rather than be convinced to buy what they should want. Will sidebar advertising on Facebook, Google and the like replace this as a consumption driver?

The power of social networks to set behavior standards and norms should not be underestimated. The July 2007 New England Journal of Medicine included a 30-year longitudinal study which showed that obesity can spread through social networks. The messages sent through social networks are powerful. Broadcast media over the last 50 years has supplemented social messaging with profound effects on society, from the newness of the car you drive to the size of the house you 'need' to live in. It has greatly affected consumer demand and is largely responsible for the 'need' now to have two-earner households in the US. (IMHO; I haven't had time to do the research yet.)

Software and the long tail. Monetary gain?

Linux is growing rapidly as both a server and desktop operating system (though desktop adoption is still small). IBM, one of its biggest supporters, sees an advantage in having Linux, as well as a significant number of its own patents, freely available for anyone to use and innovate on.

The advantage lies in the power of the long tail when it comes to technological innovation. It turns out that people who should know very little about the core of a complex project often know something very significant about an aspect of the project which is critical, or at least important, to its overall success. The one guy who contributes the one thing which turns out to be a critical patch against a hacker attack adds tremendous value to the project. The more complex technology becomes, the more important the long tail is. And on the opposite end, the less successful closed-door efforts are at creating complex solutions.

Skeptics will say that Linux was paid for with money, that it is just a service-based model. And for the most frequent contributors that was true to some extent. But the majority of the long-tail contributions do not seem to have been paid for, and while some programmers may have done the work on company time, it seems clear others worked on their own time. In either case, a contribution to something cumulative and distributable to people who were not their clients is clearly a non-monetary contribution, even if the services were paid for at one time. The net result is a paid short-term service plus a freely distributable enhanced product.

Overall win or lose?

The death of mass media, while significant in the short term, means that people will be more in touch with the reality of others in the world, rather than having a vision created for the purpose of selling product. I argue that mass media is in some ways a result of technological limitations, because natural human communication is two-way, with all parties having the ability to choose to broadcast or listen. Hopefully this more tightly knit online world community can help prevent the potential damage which could occur as we go through the current deflationary cycle.

According to socio-economics, we need a significant technological innovation in order to bring about the next economic boom. While some have assumed this would come from nanotechnology or robotics, I think it may come from the development of a knowledge or semantic web. A semantic web could be brought about more quickly by using crowdsourced techniques to create the necessary underlying ontologies or definitions in a ‘Linked Open Data’ model.

So if social media can be utilized to bring about the next economic boom based upon a crowdsourced knowledge web platform, it would definitely result in an overall economic gain.

Look for more soon on why a knowledge web would lead to the next economic boom.

Why social media removes barriers to impact of non-monetary goods and services.

In business, when you offer a skill, you create mechanisms to make it repeatable or widely distributed to multiply its economic effect. Repeatability and distribution mechanisms have normally been costly in time, labor or infrastructure, so they were not normally utilized in non-monetary economic transactions. And because they have lacked these features, non-monetary contributions to economic productivity have normally been considered insignificant. The limits on distribution and replication have also limited the motivation to produce these goods and services. Social media has removed the cost of distribution and replication for information-based products and services.

The need for complex services for greater impact has also limited the economic impact of these non-monetary contributions. Traditionally it has been difficult to collaborate on voluntary efforts because of the small amount of time people have had to put into non-monetary efforts.

I am not an economist, but sometimes I play one on this blog. Why? Turns out understanding economics is important. Feel free to correct or argue the points I make.

Socioeconomic theory (Kondratieff/Kondratiev, Schumpeter, Kuznets) seems to explain the current deflationary cycle better than fiscal-economic (Keynesian/Monetarist) or political-economic (Libertarian/Austrian) theories offer opportunities to reverse it. Socioeconomic theory basically says that getting out of a deflationary cycle is a sociological problem as much as a fiscal one. The solution is the appearance of a revolutionary technology promising large profits for investment, starting the next boom cycle and snapping society out of the blue funk created by an economic downturn.

OK, so you expect me to jump on the bandwagon and say social media will be the key to the next economic boom, right? I don't think so, but I do think social media could help mitigate the damage caused by the deflationary cycle and may be instrumental in constructing the next opportunity for technological innovation.

But first I want to start a discussion to try to understand what the objective economic potential is of the social media revolution.

Social media uses technology to enhance the ability of people to interact with others: technology-powered interaction, connection, trust and relationship building. In the business world this means establishing trust and communication channels that support and enable collaboration, and building engaged teams by removing the barriers and frustration created by traditional structures.

Social media especially in the form of collaboration has the promise of unlocking hidden knowledge in organizations when needed, lowering the cost of software through open source collaborations, finding relevant information more quickly, and making organizations more agile and responsive. But these are mostly cultural changes which usually occur slowly.

So the promise for change is there, even though it will take longer, right? Yes, but the technology needed to bring these changes about is a relatively cheap investment.

Social media will bring change, though. It has the promise of creating more efficient companies through collaboration, a greater variety of information services at low cost through mashups and open source, and lower-cost product and service messaging when the product or service has great appeal.

But at the same time, social media is having a destructive effect on major existing industries. Traditional advertising media is becoming less and less effective as the audience becomes more networked and attentive to one another. Friend-of-a-friend referrals, rating sites and consumer-oriented websites will become the norm; they rely on their objectivity to maintain trust with their followers and therefore are less subject to the temptation to manipulate their audience for the promise of big advertising revenue. Make no mistake, manipulation is clearly part of the social media landscape, but the ability for anyone to broadcast and be heard by large audience networks means it is more difficult and will in the end be the exception rather than the norm.

Retail product distribution may also take a hit because of social media, since e-commerce services are being enhanced with a layer of crowdsourced social intelligence ('people who bought X also bought Y'). Also, large companies can offer lower prices but still, through social media, have a personal touch, previously the advantage of the small business. Essentially this is pushing toward the commoditization of all mass-produced products, and the markets of the future which offer opportunities for larger profits will be niche markets requiring subject-matter expertise and customization. All growth markets in the future will be niche markets.

So in the short term, social media's gains in economic investment may be offset by the disruptive role it plays in traditional industries. In the past, technology changes led to obvious and simple routes to large-scale increases in productivity and demand. A path for social media methods to lead to an increase in productivity and demand in a short amount of time is less obvious, since it requires a cultural change as much as a technological one. In the longer term, as the culture adapts to the full potential of social media, there may be large-scale increases in productivity, but in the near term social media is not providing a clear path for investment to lead to gains in efficiency and productivity, and even the ROI for social media applied throughout the economy is still anecdotal rather than proven.

Another question to ask is whether social media can help mitigate the damage done during this deflationary cycle.

Tightly knit communities survive economic stress better, and social media allows more of the world to get and feel connected. It also gives people something to do if there is a lack of economic activity.

The motivation for this seems to be a sort of reputation economics, which motivates people to create open source software, report on events, and provide many other information services that, before the internet, people would absolutely have expected to be paid for. This allows rich content, useful products and the like to be created without investment but with real returns, as any business which hosts its websites on Linux servers or uses OpenOffice to create and manage documents can attest.

In addition, these longer-term efficiencies, such as the ability to create complex systems like an operating system (think Linux) at very low cost and rather quickly, could help bring about a new technological innovation which presents a clear path to increased productivity and demand. If we could determine what that innovation is and how to bring it about more quickly, we might be able to shorten the deflationary cycle which the socio-economic theories predict.

Watch for future posts on why the technological innovation we need may be the knowledge web promised by the Linked Data concept.

This post is in beta. I am looking for help in better understanding the connection between policy and effort so we can discuss it at the upcoming Gov 2.0 Camp. I am not an expert by any means in this area, but I am struggling to understand the problem from a data perspective. The semantic web initiatives, and more generally the goal of a collaborative government, drove me to seek this understanding of how policy is connected to effort.

One of the three things which the NAPA paper Enabling Collaboration: Three Priorities for the New Administration identifies as a barrier to a more collaborative government is ‘An inability to relate to information, and information to decision making.’ This hints at a critical problem in creating new initiatives: not having enough information to plan a path to implementation. I believe the solution is to map the connections between policy, responsibility, effort and procedure as critical pieces of data to inform decision making. This has the potential to speed progress in creating a more agile, innovative and collaborative government, just as mapping the genome has sped progress in genetics.

Specifically, I see missing connections between policy, responsibility, procedure and the effort required to create new initiatives. Let’s call it PREP (Policy-Responsibility-Effort-Procedure) data, since everyone loves an acronym. PREP is essentially a line connecting four points, from policy to the person trying to create a new initiative: from a policy at a high level, to the offices which have responsibility to ensure the policy is followed, to the procedures created by those offices, to the effort required to follow those procedures. (I am sure in reality it’s more complicated than that, but let’s keep it simple for argument’s sake.) Of course each initiative must comply with multiple policies, so there are multiple lines between the effort and the policies. The procedures are often interdependent, yet created independently by separate offices, often in isolation from what other offices do. In the end you have a thick mesh to get through, one that must be rediscovered for each new initiative. I have come to the conclusion that mapping PREP data is critical to creating a more collaborative, agile and innovative government.

The problem starts with policy being handled as a 19th-century invention, that is, as an isolated document. The document is passed to various departments with responsibility for making sure the policy is followed. These departments create procedures to ensure compliance. When someone wants to create a new initiative or project, they need to determine all procedures from all policies involved and then put in the effort to follow these often disjointed procedures, which often have hidden interdependencies. This seems to be a primary cause of what we commonly call the ‘bureaucracy’.

Many well-intended policies come together to produce unintended, entangled procedures which form a barrier to quickly creating new initiatives. Essentially this is an emergent property of the many policies implemented over the years, as well as the many offices created to follow them. The result is fewer or slower new initiatives, leading to less innovation and collaboration (since almost by definition collaborative efforts will involve new initiatives). A confounding problem is that new technology forces procedures to be reconsidered and policies to be reinterpreted, which adds to the complexity.

There is data on policy-to-effort connections, but it does not seem to be centrally accessible or uniformly stored. And the large differences in interpretation by individuals at every node in the PREP data, which can change with every decision, confound the problem of understanding what is really happening.

New policies to instigate new initiatives are now being issued, and fast results are expected; but because of this unseen mesh which holds up execution, the top levels are frustrated that the work is not getting done. Meanwhile the people in agencies feel they are too constrained to get the new initiative started. Since the mesh is invisible, attempts to change the system become confusing and difficult to follow, because they normally add to the mesh rather than disentangle and streamline it.

Solution: Map the PREP (Policy-Responsibility-Effort-Procedure) data and use this map to create guidance on streamlining the implementation of policies, as well as identifying duplicate or unnecessary procedures. The data should include the ongoing or one-time man-hours involved in the effort to follow a procedure, the average calendar delay caused by the procedure, and any interdependencies with other procedures.
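As a thought experiment, here is a tiny sketch of what a slice of PREP data might look like and one useful query over it. Every office name, procedure, hour count and delay below is invented; the point is only that once procedures and their interdependencies are recorded as data, questions like 'how long will this initiative be held up, end to end?' become computable.

```python
# A hypothetical slice of PREP data: each procedure records the responsible
# office, the one-time effort in man-hours, the average calendar delay in
# days, and the procedures it depends on. All names and numbers are invented.
procedures = {
    "security_review":  {"office": "CISO",    "hours": 40, "delay_days": 15, "depends_on": []},
    "privacy_review":   {"office": "Privacy", "hours": 20, "delay_days": 10, "depends_on": []},
    "hosting_approval": {"office": "IT",      "hours": 8,  "delay_days": 5,
                         "depends_on": ["security_review", "privacy_review"]},
    "launch_signoff":   {"office": "OCIO",    "hours": 4,  "delay_days": 3,
                         "depends_on": ["hosting_approval"]},
}

def total_delay(proc, data=procedures):
    """Worst-case calendar delay to clear `proc`, following the longest dependency chain."""
    deps = data[proc]["depends_on"]
    longest = max((total_delay(d, data) for d in deps), default=0)
    return data[proc]["delay_days"] + longest

total_hours = sum(p["hours"] for p in procedures.values())
print(total_delay("launch_signoff"), total_hours)  # 23 days, 72 man-hours
```

With the mesh made explicit like this, a duplicate or unnecessary procedure shows up as a node that adds delay without adding a distinct policy obligation.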

How? Initially, just collect the data in a standardized and centrally accessible format; it will be almost immediately useful. Use collaborative techniques to collect a lot of data quickly, even if it means lower-quality data at first. Then gradually move the data to a semantic/RDF storage system where it can be queried in many different ways and linked to the broader set of definitions such as law, case history, etc.
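To show why the RDF-style storage step pays off, here is a toy triple store with a pattern-matching query. In a real system this would be proper RDF with published vocabularies and a SPARQL endpoint; the prefixes, predicates and procedure names here are all made up for illustration.

```python
# PREP data as (subject, predicate, object) triples. All identifiers are
# invented; a real deployment would use RDF URIs and shared vocabularies.
triples = [
    ("policy:records_act",    "prep:enforcedBy", "office:records_mgmt"),
    ("office:records_mgmt",   "prep:requires",   "proc:retention_review"),
    ("proc:retention_review", "prep:delayDays",  10),
    ("policy:privacy_act",    "prep:enforcedBy", "office:privacy"),
    ("office:privacy",        "prep:requires",   "proc:pia"),
    ("proc:pia",              "prep:dependsOn",  "proc:retention_review"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Which procedures stem from the privacy policy? Walk two hops in the graph.
for _, _, office in query("policy:privacy_act", "prep:enforcedBy"):
    for _, _, proc in query(office, "prep:requires"):
        print(proc)  # proc:pia
```

The same triples answer questions that were never anticipated when the data was collected (who enforces what, which procedures depend on which), which is exactly the flexibility the move to RDF is meant to buy.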

This will be the start of making a more agile, collaborative and data centric government.

The challenge to this approach: data management aside (which is not too bad initially), many of these hidden paths are not 100% acceptable under 100% of the interpretations of the policies. So how do we create a collaborative environment without people worrying that the interpretations which allow them to get things done will be ruled incorrect? It seems this needs to be a research project whose data can’t be used for that purpose.

At lunch on Friday, I jokingly asked the question: what would the economic impact of Google Mail going down be?

But after thinking a lot about cloud computing and semantics this weekend, I started to wonder if that is a serious issue. Someone at the table did mention that Google Mail went down for two hours recently.

In the next five years or so, on the commercial side as well as the federal side, there will be a massive shift from single servers to cloud computing, along with an increasing reliance on everything always being up because of the interwoven nature of the semantic web. Websites and web servers will no longer be individual and isolated but will exist on the ‘cloud’ or rely on it in one way or another.

By definition the cloud is supposed to be more reliable and more redundant than a single server. But is it more reliable and redundant than millions of individual servers?

I don’t pretend to understand cloud architecture, but I did note that Vivek Kundra said a few months ago that any data with a national security requirement would not exist on the new federal cloud. So what are the implications for the massive civilian clouds from Google, Amazon and Microsoft that business email, websites and data services would rely on?

So if most commercial data will be on the growing commercial civilian clouds, doesn’t the economic impact of large outages start to pose, in itself, a national economic security risk? Especially if an outage could include data loss?

Of course the temptation is to say the companies themselves will make sure that doesn’t happen, because they have such a large financial stake in reliability. That seems reasonable. After all, look how well that approach worked in banking.