Web 2.0 Blog – Discovering Innovation Opportunities using Social Media

Archive for the ‘Web 2.0’ Category

Tim Berners-Lee's concept of linked data is clearly a way to make data more usable, whether it is public data or data within a large enterprise.  Linked data promises a future in which related data is more interoperable and discoverable, and it opens the door for innovation.

But how do we take large existing data stores and apply linked data principles to achieve these benefits?  We currently have massive existing data stores with complex security regimes which many legacy applications depend upon.  Making them available as Linked Data is a huge challenge, especially if we were to recreate these data stores in XML syntax using RDF/RDFa or even simpler XML schemas.  This is coupled with the fact that many of the benefits of the reconstituted data have not yet been invented, so a clear ROI argument cannot be made.  Of course, they haven't been invented yet because, while many can agree the data would be more usable, those uses must be discovered by fiddling with the data in linked form and seeing what emerges.  Since the linked form doesn't yet exist, we have the classic chicken-and-egg problem.

Perhaps there is a step we can take toward linked data without making large changes to the existing data stores in government and industry.  Let's review the principles of Linked Data first (as paraphrased from Wikipedia for clarity):

  • Use URIs (Uniform Resource Identifiers) to identify things that you expose to the Web as resources.
  • Use HTTP URIs so that people can locate and look up (dereference) these things.
  • Provide useful information about the resource when its URI is dereferenced.
  • Include links to other, related URIs in the exposed data as a means of improving information discovery on the Web.
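To make the principles concrete, here is a minimal sketch in Python of the kind of record a building URI might return when dereferenced; the URI, field names, and link relations are all invented for illustration:

```python
# Hypothetical record returned when an HTTP URI for a federal building
# is dereferenced. Every URI and field name below is made up.
building_record = {
    "uri": "http://example.gov/buildings/1234",   # principles 1 & 2: an HTTP URI
    "name": "Example Federal Building",           # principle 3: useful information
    "links": [                                    # principle 4: links to related URIs
        {"rel": "occupied-by", "uri": "http://example.gov/agencies/GSA"},
        {"rel": "located-in", "uri": "http://example.gov/cities/washington-dc"},
    ],
}

# A client following principle 4 can walk outward to related resources.
for link in building_record["links"]:
    print(link["rel"], "->", link["uri"])
```

The point is simply that the record identifies the thing (principles 1 and 2), says something useful about it (principle 3), and links onward to related URIs (principle 4), with no XML or RDF required.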

The striking thing about these principles is that they don't mention XML or RDFa but focus instead on linking data to definitions.  So it would seem a hybrid solution between the linked data concept and existing databases is possible.  We could add URIs as fields in existing databases for important elements and define a central location where we track information about each element.  For instance, in the US government there are lots of federal buildings used by multiple agencies, so I would assume many agencies have databases which refer to federal buildings.  Why not establish a central location to define those buildings and assign each a URI?  (A URI, by the way, is essentially a universal identifier for a real-world object.  Think of it as a web page for each building, but the page would more likely contain data links than nice pictures.  Oh, and some people use URNs, or Uniform Resource Names, in an effort to make identifiers more human readable, which is nice too.)

So each federal building would have a URI/URN, and we could of course put more information about each building in a centrally defined schema, but that would start to be real work and raise instant security issues.  So why not initially just have URIs contain reciprocal links to the databases which also contain that identifier?  The links would carry brief, non-security-breaking descriptions of what type of data is stored in each linked database.  This would remove the need to redo the security review of a lot of information to make it cross-department/cross-agency available.  And here is the other key to success for this type of solution: don't require the back links to the databases to expose data unless they already do so.  If we start requiring data to be exposed at this step, it opens up the security Pandora's box.  We need to avoid imposing a new security regime for centralized data, because it is a stumbling block which would create delays and costs.  And if people do not clearly see the benefits of this step, it would simply die in committee in most cases.
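A rough sketch of this registry idea, using an in-memory SQLite database; the table layout, database names, and descriptions are all invented, and nothing here exposes actual record data, only pointers and brief descriptions:

```python
import sqlite3

# Central URI registry with reciprocal links back to the databases
# that hold each identifier. Table and column names are illustrative.
reg = sqlite3.connect(":memory:")
reg.execute("CREATE TABLE uris (uri TEXT PRIMARY KEY, label TEXT)")
reg.execute("""CREATE TABLE backlinks (
    uri TEXT, database TEXT, description TEXT)""")  # pointers only, no data

# Register a building and note which agency databases refer to it.
reg.execute("INSERT INTO uris VALUES (?, ?)",
            ("http://example.gov/buildings/1234", "Example Federal Building"))
reg.executemany("INSERT INTO backlinks VALUES (?, ?, ?)", [
    ("http://example.gov/buildings/1234", "GSA facilities DB",
     "lease and maintenance records"),
    ("http://example.gov/buildings/1234", "DHS security DB",
     "physical access schedules"),
])

# Dereferencing the URI returns only the non-sensitive descriptions,
# so no new security regime is needed for the registry itself.
rows = reg.execute(
    "SELECT database, description FROM backlinks WHERE uri = ?",
    ("http://example.gov/buildings/1234",)).fetchall()
for db, desc in rows:
    print(db, "->", desc)
```

The existing agency databases are untouched except for gaining a URI field; the registry only tells you where to go ask for permission.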

So that is fine, you say.  We have URIs for important data elements and for the databases which contain those elements, but no data is being exposed, so where is the benefit?  I think this stripped-down version of linked data would have 4 definite benefits:

  • Reference.  The URIs could serve as reference documents for finding where similar information is stored. Users could then apply for security permissions on an as-needed basis when they need to link to other databases.
  • Innovation.  Users, who would now have a more complete map of available data, could begin to suggest more uses for linking the data.
  • Discoverability.  Search engines (internal or external, depending on the security decided upon for the URIs) could make existing databases more discoverable, because the engines could find the important data elements in the databases.  Search engines use links to determine relevance to searches and are often key to researching problems.
  • Interoperability.  The process of assigning URIs will begin to expose problems in data interoperability due to differing definitions in different databases. The URI map would serve as a survey of the issues in creating truly interoperable data.

So now the readers of this blog are in at least 2 camps.

  • Those who feel this is a half measure and would be a distraction from advocating for more completely linked data.
  • Those who are still not clear on the benefits of bothering to start the process of linking data at all.

I am hoping there is a third camp which sees this as a doable step in large enterprises such as the US government, and that it would be the first step toward data which is more linked, and therefore more usable for both public and internal uses, and eventually interoperable.

Let me know which camp you are in!

I noticed after writing this post that the underlying theme emerging from the fanciful thought droppings below is that it is best for the end user if data and applications are separate and interoperable.   The theme is starting to highlight for me the promise of semantic technology and open data standards.

I keep hearing: will Facebook win? Will Google win? Will Microsoft ever get into the running? Will Twitter be bought, and by whom?  I wanted to offer another option.  Could the people win?

How would the people win?
Well, what is a social network anyway? It's a series of connections between people, with rules for distributing information to people based on their connections.  Mutually agreed friends, followers, and non-connected voyeurs follow what you do and when you do it, as well as share with you.  The connecting and sharing rules of the social network you choose determine what others see and, if you are up on the privacy settings, how you are connected with them.

Right now our choice is which networks to be on, and we make that choice based on the connection rules, the types of content and interactions that can be had, and where the people we want to connect with already are. As Facebook or another network becomes more popular, it becomes more difficult not to choose it.

But we pay a price for choosing an online social network.
1. We have to accept the interface which is chosen for us. And while more customizations and widgets are coming out, the essential choice of interface is under the control of the provider, not us.
2. We can't choose our ideal mix. For instance, what if we want a MySpace-style interface but with our Facebook friends feed?  There are some configuration options available, but trying to match what we want with what is out there can be a challenge.
3. We get targeted advertising based on our personal information. Maybe we want it, maybe we don't, but at any rate we are not in full control of the information which gets mined for these ads.
4. We can't move our information to another network or cross-link to people in other networks. This is changing some, but our information is still not in our control.
5. We can't create our own rules for connection and viewing; we have to rely on a central authority to do this, even if it allows some flexibility. Very non-Web 2.0.

So how do we win?

What if instead of our data residing on a social network server, it resided on our own private space in the cloud?

And what if we could choose or even create the applications which would allow our data to be seen by others, under rules which we decide on?  So we could use a Facebook-style application to interact with our friends, but our friends wouldn't have to be "on" Facebook. They would simply have their own 'cloud space', and they could send Twitter-style updates back to us and not have to look at the vacation pics we just posted if they don't want to.  But they could also choose to send some updates only to some people, rather than having the choice of tweeting to all or tweeting directly to one.  Basically, the social network core of connections and activity between you and your friends could be managed by any number of applications and rule configurations, each more tailored to the individual. The way you want to interact with your friends, and who your friends can be, would not be determined by the popularity of a social network but by you.
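As a toy sketch of what user-owned sharing rules might look like, assuming invented update categories and connection names:

```python
# Toy model of user-owned data with per-connection sharing rules.
# All names, categories, and update texts are invented for illustration.
my_cloud_space = {
    "updates": [
        {"text": "Back from vacation!", "audience": "friends"},
        {"text": "New blog post up", "audience": "public"},
        {"text": "Family reunion pics", "audience": "family"},
    ]
}

# Each connection carries the audience categories its owner has granted it;
# unknown viewers default to public-only.
connections = {
    "alice": {"friends", "public"},
    "bob": {"public"},
    "mom": {"family", "friends", "public"},
}

def visible_updates(viewer):
    """Return only the updates whose audience the viewer belongs to."""
    allowed = connections.get(viewer, {"public"})
    return [u["text"] for u in my_cloud_space["updates"]
            if u["audience"] in allowed]

print(visible_updates("bob"))
```

Any number of front-end applications could render the same data, because the rules live with the data's owner rather than with one network.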

Would this kill Facebook or Google?  Facebook would probably be the most popular application for people to choose to interact with their friends, and it could still get its ad revenue.  Google could provide the cloud space to host our data securely, for free with ads or for a small cost, as well as provide an interface application if you want one.

Twitter provides the first step in separating social data from the social application, and it is good evidence of why this approach would be so popular.  I don't mean the asynchronous relationships or the 140-character limit, but the fact that anyone can build a Twitter application to interact with the "cloud space" of Twitter feeds.  TweetDeck, TweetGrid, and many other Twitter applications let people choose how to interact with their social connections and, to some extent, what their interface looks and feels like.  What I am suggesting is widening this approach to include all of the personal information you might want to share, putting you back in control of your own information.

So you could have one interface for your immediate family, another window for friends, and another for interesting people you follow, or any combination you choose.  Application vendors could make money through ads, but you would choose who had a privacy policy covering what those ads could find out about you.  Or you could choose to keep everything very private and pay for a service and a place to keep your data.  This is similar to what people refer to as interoperability between networks, but with the twist of separating our personal data from the network itself.  So it's more of an interoperable data model for social networking than an interoperable social network model.

Would this work?  Is part of a social network the common rules and ways to connect which we all agree upon?  If some people could stop sharing a lot of information except with their best friends, would the fabric of the social network be weakened, and would this whole idea result in a less networked world?  I don't think it would, because the culture has started to discover the benefits of sharing, but it's definitely an open question.

So how do we get there?  Hmmm. Not sure.  Google's free App Engine could potentially power something like this. Something like the user rebellion which occurred when Facebook tried to change its privacy policy a couple of months ago might be the start of an online privacy movement.  Right now people seem to be having too much fun to worry about being in charge of their own information. Will this change?  I guess it depends on what the social networks decide to do with all of the information they have about us.

I decided to take my Wordle data set out for another spin and make Google Maps from each category.

Here are the maps. Hope you enjoy them!

There are not 100 questions in each map because some people did not provide valid US locations, and a few questions were taken out for being off topic, as described before.  The maps end up with 829 questions in 9 categories. Thanks to MapAList for the map tools and Ken Ward's HTML guide for the JavaScript template.


This is my rough draft in my work with the W3C E-Gov Interest Group. I wanted to get comments from those working on social media in government as we work to finalize our recommendations. Please keep in mind this is for an international standard, so I have not assumed that 508 compliance is required but rather wrote about what compliance policies in the digital age should take into consideration.

Multi-Channel Distribution Standards.

Distribution to Non-Government Websites and Platforms

In an age of connected data, standards are not just about the format of information but are also about accessible and fair distribution. That having been said, a balance must be achieved so that distribution of information does not become a barrier which limits the amount of information which is distributed.

In the digital age, information is key to both the economic and social development of societies. Therefore, governments need to prioritize making the most information available through broadly distributed channels over limiting information to only what can be made perfectly accessible. This is a classic 90/10 effort issue, where the last 10% of broadening distribution and availability to near perfection would take 90% of the effort. Too often governments have opted for an all-or-none method of information distribution, and it has resulted in less distribution and a lesser good for the public as a whole. Given the current state of information storage formats and technology, the amount of information is too vast to make all of it accessible through all conceivable methods and channels. Accepting this fact and opening up government data needs to be the priority.

That having been said, widespread availability should not be discarded; rather, a system should be in place to determine which information warrants the broadest, most accessible distribution and which information should be posted even though resources are insufficient for the broadest possible access.  (Of course, in both cases the format chosen should be a non-proprietary and, when appropriate, 'mashable' one, so that the public may redistribute and remix the information if it chooses.) Concern for availability to all may be handled by providing a government-sponsored service which can provide specific data in alternate formats on demand.

This is not a radical departure from traditional accommodations but rather a continuation of choices which have become routine. An excellent example of how this extends existing policies is to consider library books and the blind in the US. Library books for the sighted are widely and easily available at libraries across the country, but Braille versions of books can be accessed on demand through the Library of Congress' National Library Service for the Blind and Handicapped. A similar program could be developed for on-demand access to multimedia material for the handicapped. That having been said, basic accommodations which can easily be built into websites to promote accessibility should be addressed with social media providers by encouraging broad accessibility to their material, and links should be provided on multimedia home pages explaining how to request more accessible versions, such as closed-captioned videos.

It seems some people are misunderstanding this as advocating abandoning progress in accessibility.  I assure you this is not the case.  It is simply stating plainly what already occurs throughout society and government.  If you look at multilingual issues, not every US government document is immediately available in Chinese, or even Spanish for that matter.  I am simply saying that EVERYONE is better served by as much government information as possible being available in some way, and that should be the priority.  It is simply not possible to make everything available in all possible ways, but when the need arises, on-demand services can supplement the less broad methods of making information available. I hope this clears it up.

Availability in Social Media and Across the Digital Divide

Availability is determined by 3 factors which form a digital divide in most countries: device, bandwidth or connectivity, and user disability in using the device (commonly addressed by 508 standards in the US). Device availability varies because of interoperability standards but also based on lifestyle, screen size, audio clarity, and raw processing power. (My BlackBerry can play audio and some video, but in reality I am not going to try to access a lot of that content, even if there is a way to squeeze it onto the device.)  Both wider broadband distribution and availability of information on mobile devices can help to solve this issue. One of the ways in which governments are broadening broadband access is through free internet-enabled computers at libraries and kiosks. The type of access which is made widely available to citizens for free at public locations, as well as the connectivity and devices available at the lowest price points, should be considered when choosing data standards, platforms, devices, and websites for the bulk of information. If broadly available public access is not compatible with how the majority of a country's citizens use the internet, then clearly public internet access is not adequate.

Fair distribution on Non-Government Portals.

The lower cost of devices and access in most countries means that whether a website or platform makes text-based information available on low-cost mobile platforms should be taken into account. While most platforms are multimedia, there is still often the opportunity to provide some information in text form for mobile access.

The availability of multimedia information should be announced and searchable through text-based services, so that users who have limited access to multimedia-enabled workstations can find out about resources they need and go to a kiosk or library where better connectivity or devices are available. Preventing those without full access from even discovering what is available would effectively block its use, since time and context when accessing the public internet are limited.

Fair distribution becomes an issue when government-distributed content on selected websites, platforms, or devices creates an unfair advantage for a particular device, platform, distribution network, or website, or disadvantages a defined demographic among the citizenry. It seems appropriate for governments not to have to expend resources on wide distribution if the bulk of the intended audience is on one platform or website, but some consideration should be taken so that governments do not become unintentional monopoly makers through their social media distribution choices. Again, this consideration should not take priority over wide distribution of the bulk of information but should be a factor in making policy choices.

Posting Information on the Social Web

The nature of social media information is that it is posted in locations which are not on government servers under government control and is distributed through social connections, not through formal organizations. Social media information is distributed on websites which choose whom to allow access and which behaviors are acceptable for participation. Also, a user's activity and connections on a social media website determine to some extent how much exposure they receive to information available on that site. For instance, someone who is a friend of a person who participates in government discussion boards will be more likely to be exposed to government-distributed information than someone who is not similarly friended. Likewise, people who belong to communities which choose to participate in smaller online venues will not be exposed to the government-distributed information on the larger venues. For instance, what about the parent who blocks YouTube on the household computer because of objectionable material? Some consideration of the unevenness of social media distribution should be made.

Multimedia central feed for externally published info.

Therefore a government using social media to distribute multimedia should create a public location which announces the distribution of documents and content, with links to their openly accessible locations.

A central text feed of all distributed info will serve four purposes:

1. Provide the public with a completely open and highly accessible index to content provided through social media channels.

2. Provide the government content in a form isolated from other content to broaden distribution to those who prefer to avoid mixed distribution sources.

3. Provide other smaller content providers and websites a mechanism to have the same government content as larger providers.

4. Provide a central reference location for any on-demand accessibility service requests for government sponsored or partnered services such as closed captioning or braille.

This media index feed could be in the form of a searchable text feed which links to the original documents. The text feed would be searchable from text-based mobile devices as well as web browsers. Search would be provided through a tagging mechanism which at the least allows those posting the information to create new search tags and categories. It also may allow the public to tag items to create a folksonomy-based search. Documents would be in a freely accessible format, so long as that format allows for the same distribution, both in context and content, to other websites as was carried out by government officials. For instance, if a document was associated on a social media website with certain search tags, titles, and descriptions, those tags should be indicated in this feed.  If a document had hyperlinks or embedded content placed in it by government officials, those hyperlinks and content should be preserved in this centrally stored format.
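A minimal sketch of what entries in such a feed, and a tag-based search over them, might look like; all titles, URLs, tags, and descriptions are invented for illustration:

```python
# Sketch of a central text index feed whose entries preserve the tags,
# titles, and descriptions from the original postings. All values are
# illustrative, not a real government feed format.
feed = [
    {"title": "Agency town hall video",
     "url": "http://video.example.com/townhall",   # openly accessible location
     "tags": ["town-hall", "budget", "video"],
     "description": "Recording of the March budget town hall."},
    {"title": "Open data policy memo",
     "url": "http://docs.example.com/memo",
     "tags": ["policy", "open-data"],
     "description": "Draft memo on agency open data policy."},
]

def search(tag):
    """Simple tag-based search over the central feed."""
    return [entry["title"] for entry in feed if tag in entry["tags"]]

print(search("open-data"))
```

Because the entries are plain text plus links, the same feed could be searched from a text-only mobile device, re-syndicated by smaller websites, or used as the reference point for on-demand accessibility requests.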

Video and audio should be available from a link on this central feed in an instantly playable format, such as a progressive player linked to cloud-based storage so high demand will not slow distribution, as well as in a downloadable format which can be used to replicate the distribution on other websites. Again, the meta or context data which allows for duplication of the original post to the primary distribution site should be stored in the feed or the linked files.

In the case of virtual world information distribution, some capture of the virtual world experience should be attempted to replicate the primary message in some way, such as a video of the experience. If it is possible to store 3-D objects or actions in an open format, that content may also be considered for placement in this central data store.

To the extent that an industry standard is developed to allow easy subscription to or importing of documents and audio/video content by alternate media websites and platforms, governments should adopt these methods to support their central feed.

Conclusion.

Governments should clearly prioritize distribution and accessibility options which do not pose barriers that would decrease the amount of information distributed. At the same time, some consideration of disabled users, of users without high bandwidth or high-cost devices, and of devices, platforms, and websites with smaller audiences should be given for high-priority information, along with possible on-demand conversion services. A low-barrier method which could serve as a base from which to achieve these accommodations would be a central, text-based multimedia index feed containing hyperlinks to content in open formats. This feed would be searchable from both text-based mobile devices and internet browsers and would contain context information allowing replication of the content postings which were created on non-government websites by government officials.  If possible, this central feed would facilitate posting of content to websites by those website owners, so that the websites themselves can opt in to the distribution.

Economics, according to Wikipedia, is the social science that studies the production, distribution, and consumption of goods and services.

Notice money is not mentioned. But the current theories of economics, whether socio-economic (Kondratieff (Kondratiev), Schumpeter, Kuznets), fiscal-economic (Keynesian/Monetarist), or political-economic (Libertarian/Austrian), are all based on monetary markets.

Well, that makes sense, because MOST of the production, distribution, and consumption of goods and services in society has used money as a means to relate them to one another for at least 200 years.  Money was a great leap forward in human history because it allows independent transactions of goods and services. You can sell to one person and buy from any other, instead of having to set up a complex barter network with multiple prosumers or grow/hunt/forage everything on your own.
In modern times, you can also manipulate the market by artificially altering the money supply like a throttle on an engine.

Even so, non-monetary transactions have always been with us and seem to take 3 forms:

1. Bartering. Exchange between 2 people or a chain of prosumers.

2. Reputation. People will do things so that others think of them differently (usually in a more positive light).

3. Common good.  Sometimes hard to distinguish from reputation.  A good example is for-profit companies which contribute to a common open source code base so they all share an updated platform to build their products on.

These have not been considered when discussing economics this century, because these types of economies were usually limited to family and local neighborhoods for most people.  A few examples:

1. Entering a local pie contest. (production driven by reputation)

2. Helping family members to do home improvements. (service driven by reputation)

3. Taking care of your lawn so it looks as good as the neighbors' and keeps the value of the neighborhood high.  (service driven by common good)

These are not significant to a modern economic structure.  And while wealthier people have made donations driven by reputation, such giving is usually lumped into the other economic theories because it involves money instead of services.

Most people have been limited in terms of the amount of goods and services they can produce for these non-monetary motivations because raw materials  bought with money were usually required and then it essentially becomes a donation with a small value add. (Purchase the ingredients for brownies and donate them to the bake sale.)

Services can more easily be offered for non-monetary motivations, but their significance has also been limited in modern times.  Usually these involve some basic labor, such as fixing the neighbor's flat tire. When they get more complicated, they start to compete with opportunities to earn income, which tends to limit the amount of skilled service people are willing to donate.

Services offered for non-monetary motivations have in the past also been limited in their impact on the larger economy for three major reasons:

1. Monetary costs of distribution and replication.

2. Limited distribution limits the potential impact, and thus the motivation, for common-good and reputation-driven goods and services.

3. Modern demands for complex goods and services limits the impact of the individual.

Social media removes all of these barriers in the case of information products and services. (See the more detailed explanation below.)

In social media, three types of phenomena have started to change the impact of non-monetary activity on the larger economy:

1. Crowdsourcing/collaboration (distributed production/distributed problem solving). The best examples are in software: Linux, Drupal, and other significant programs of the kind other companies charge license fees for.

2. Information distribution and analysis. Blogging, in short.  Reporting on events, spreading the reports of others, and analyzing news events.

3. Social Networks.  These have made it possible to impact large numbers of people if you create or collect highly relevant services or information.

Social media is technology-amplified social interaction and allows for broad, free distribution of information products and services.  Linux is now starting to threaten Microsoft's dominance in the server and device markets.  Blogging has now replaced a significant amount of the magazine and newspaper industry.  (Actually, newspapers seem to be hanging on by getting story leads from the blogosphere.  Don't believe this? Check the thickness of your favorite magazine and compare it with what it was 5 years ago.)

So the big question: what is the impact on people of a non-monetary social gain? How do you compare it against monetary gains?

Do we need to now combine non-monetary and monetary economics into a more comprehensive understanding of the well being of society?

Broadcast media. Monetary loss?

Broadcast media is irrevocably changing now that anyone has the power of mass information distribution.  And it is being replaced in part by largely uncompensated product. The more organized the blogosphere gets, the more crowdsourced news sites will pop up and probably dominate.  The power of news analysis will be in the hands of the most trusted analysts (bloggers) rather than media distributors, which many would view as a more just world.

The most significant loss related to media would seem to be the loss of the advertising base which we have relied on to drive consumer demand in the Western world for the past 50 years.  If people listened to their friends, they might buy what they need, rather than be convinced to buy what they should want.  Will sidebar advertising on Facebook, Google, and the like replace this as a consumption driver?

The power of social networks to set behavior standards and norms should not be underestimated. The July 2007 New England Journal of Medicine had a 30-year longitudinal study which showed that obesity can spread through social networks.  The messages sent through social networks are powerful.  Broadcast media for the last 50 years has supplemented social messaging with profound effects on society, from the newness of the car you drive to the size of the house you 'need' to live in; it has greatly affected consumer demand and is largely responsible for the 'need' now to have two-earner households in the US. (IMHO... haven't had time to do the research yet.)

Software and the long tail. Monetary gain?

Linux is growing rapidly as both a server and desktop operating system (though desktop adoption is still small).  IBM, one of its biggest supporters, sees an advantage in Linux, as well as in making a significant number of its own patents freely available for anyone to use and innovate on.

The advantage lies in the power of the long tail when it comes to technological innovation.  It turns out that people who would seem to know very little about the core of a complex project often know something very significant about an aspect of the project which is critical, or at least important, to its overall success.  The one guy who contributes the one thing which turns out to be a critical patch against a hacker attack adds tremendous value to the project.  The more complex technology becomes, the more important the long tail is.  And on the opposite end, the less successful closed-door efforts are at creating complex solutions.

Skeptics will say that Linux was paid for with money, that it is just a service-based model. And for the most frequent contributors that is true to some extent. But the majority of the long-tail contributions do not seem to have been paid for, and while some programmers may have done the work on company time, it seems clear others worked on their own time. In either case, a contribution to something cumulative and distributable to people who were not their clients is clearly a non-monetary contribution, even if the services were paid for once. The net result is a paid short-term service plus a freely distributable enhanced product.

Overall win or lose?

The death of mass media, while significant in the short term, means that people will be more in touch with the reality of others in the world, rather than having a vision created for the purpose of selling product.  I argue that mass media is in some ways a result of technological limitations, because natural human communication is two-way, with all parties having the ability to choose to broadcast or listen.  Hopefully this more tightly knit online world community can help prevent the potential damage which could occur as we go through the current deflationary cycle.

According to socio-economics, we need a significant technological innovation in order to bring about the next economic boom. While some have assumed this would come from nanotechnology or robotics, I think it may come from the development of a knowledge or semantic web. A semantic web could be brought about more quickly by using crowdsourced techniques to create the necessary underlying ontologies or definitions in a ‘Linked Open Data’ model.

So if social media can be utilized to bring about the next economic boom based upon a crowdsourced knowledge web platform, it would definitely result in an overall economic gain.

Look for more soon on why a knowledge web would lead to the next economic boom.

Why social media removes barriers to impact of non-monetary goods and services.

In business when you offer a skill, you create mechanisms to make it repeatable or widely distributed to multiply its economic effect.  Repeatability and distribution mechanisms have normally been costly in time, labor or infrastructure, so they were not normally utilized in non-monetary economic transactions.  And because they have lacked these features, non-monetary economic productivity contributions have normally been considered insignificant.  The limitations of distribution and replication have also limited the motivation to produce these goods and services.  Social media has removed the cost of distribution and replication for information-based products and services.

The need for complex services for greater impact has also limited the economic impact of these non-monetary contributions.  Traditionally it has been difficult to collaborate on voluntary efforts because of the small amount of time people have had to put into non-monetary efforts.

I am not an economist, but sometimes I play one on this blog. Why? Turns out understanding economics is important. Feel free to correct or argue the points I make.

Socioeconomic theory (Kondratieff/Kondratiev, Schumpeter, Kuznets) seems to explain the current deflationary cycle better than fiscal economic (Keynesian/Monetarist) or political economic (Libertarian/Austrian) theories, and it offers an opportunity to reverse it. Socioeconomic theory basically says that getting out of a deflationary cycle is a sociological problem as much as a fiscal one. The solution is the appearance of a revolutionary technology promising large profits for investment, which starts the next boom cycle and snaps society out of the blue funk created by an economic downturn.

OK, so you expect me to get on the bandwagon and say that social media will be the key to the next economic boom, right? I don't think so, but I do think social media could help mitigate the damage caused by the deflationary cycle and may be instrumental in constructing the next opportunity for technological innovation.

But first I want to start a discussion to try to understand what the objective economic potential is of the social media revolution.

Social media uses technology to enhance the ability of people to interact with others.  Technology powered interaction, connection, trust and relationship building. In the business world this means establishing trust and communication channels that support and enable collaboration, and build engaged teams by removing barriers and frustration created by traditional structures.

Social media especially in the form of collaboration has the promise of unlocking hidden knowledge in organizations when needed, lowering the cost of software through open source collaborations, finding relevant information more quickly, and making organizations more agile and responsive. But these are mostly cultural changes which usually occur slowly.

So the promise for change is there, even though it will take longer, right?  Yes, but the technology investment needed to bring these changes about is relatively cheap.

Social media will bring change, though.  It has the promise of creating more efficient companies through collaboration, a greater variety of information services at low cost through mashups and open source, and a lower cost to product and service messaging, when the product or service has great appeal.

But at the same time, social media is having a destructive effect on major existing industries. Traditional advertising media is becoming less and less effective as the audience becomes more networked and attentive to one another. Friend-of-a-friend referrals, rating sites and consumer-oriented websites will become the norm; because they rely on their objectivity to maintain trust with their followers, they are less subject to manipulating their audience for the promise of big advertising revenue. Make no mistake, manipulation is clearly part of the social media landscape, but the ability for anyone to broadcast and be heard by large audience networks means it is more difficult and will in the end be the exception rather than the norm.

Retail product distribution may also take a hit because of social media, since e-commerce services are being enhanced with a layer of crowdsourced social intelligence ('people who bought X also bought Y'). Also, large companies can offer lower prices but still, through social media, have the personal touch that was previously the advantage of the small business. Essentially this is pushing toward the commoditization of all mass-produced products, so the growth markets of the future with opportunity for larger profits will be niche markets requiring subject matter expertise and customization.
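To make the crowdsourced social intelligence layer concrete, the 'people who bought X also bought Y' idea can be sketched as simple co-purchase counting. This is a minimal illustration with made-up order data, not any particular retailer's actual algorithm:

```python
from collections import Counter
from itertools import combinations

# Hypothetical order history: each order is the set of products one customer bought.
orders = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"camera", "camera_bag"},
    {"tripod", "camera_bag"},
]

# Count how often each pair of products appears together in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def also_bought(product, top_n=3):
    """Products most often bought together with `product`, best first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [p for p, _ in scores.most_common(top_n)]

print(also_bought("camera"))  # sd_card ranks first: it co-occurs with camera twice
```

The point is that no expert curates these recommendations; they emerge from the aggregated behavior of the crowd, which is exactly the social layer being added to e-commerce.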

So in the short term, social media's gains in economic investment may be offset by the disruptive role it plays in traditional industries.  In the past, technology changes led to obvious and simple routes to large-scale increases in productivity and demand. A path for social media methods to increase productivity and demand in a short amount of time is less obvious, since it requires a cultural change as much as a technological one.   In the longer term, as the culture adapts to the full potential of social media, there may be large-scale increases in productivity; but in the near term, social media is not providing a clear path for investment to lead to gains in efficiency and productivity, and the ROI for social media applied throughout the economy is still anecdotal rather than proven.

Another question to ask is whether social media can help mitigate the damage done during this deflationary cycle.

Tightly knit communities survive economic stress better and social media allows more of the world to get and feel connected.  Also it actually gives people something to do if there is a lack of economic activity.

The motivation for this seems to be a sort of reputation economics which motivates people to do things like create open source software, report on events, and provide a lot of other information services which, before the internet, people would absolutely have expected to be paid for. This allows rich content and useful products to be created without investment but with returns for, say, any business which hosts its websites on Linux servers or uses OpenOffice to create and manage documents.

In addition, these longer-term efficiencies, such as the ability to create complex systems like an operating system (think Linux) at very low cost and rather quickly, could help bring about a new technological innovation which presents a clear path to increased productivity and demand. If we could determine what that innovation is and how to bring it about more quickly, then we might be able to shorten the deflationary cycle which the socioeconomic theories predict.

Watch for future posts on why that technological innovation we need, may be the knowledge web promised by the Linked Data concept.

This post is in beta. I am looking for help in better understanding the connection between policy and effort so we can discuss it at the upcoming Gov 2.0 camp.  I am not an expert by any means in this area, but am struggling to understand the problem from a data perspective.  The semantic web initiatives and in general the goal of a collaborative government drove me to seek this understanding of how policy is connected to effort.

One of the three things which the NAPA paper Enabling Collaboration: Three Priorities for the New Administration identifies as a barrier to a more collaborative government is 'An inability to relate to information, and information to decision making.'   This hints at a critical problem: new initiatives are created without enough information to plan a path to implement them.  I believe the solution is to map the connections between policy, responsibility, effort and procedure as critical pieces of data to inform decision making.  This has the potential to speed progress in creating a more agile, innovative and collaborative government, just as mapping the genome has sped progress in genetics.

Specifically, I see missing connections between policy, responsibility, procedure and effort required to create new initiatives.   Let's call it PREP (Policy-Responsibility-Effort-Procedure) data, since everyone loves an acronym.   PREP is essentially a line connecting four points, from policy to the person trying to create a new initiative: from a policy at a high level, to offices which have responsibility to ensure the policy is followed, to procedures created by those offices, to the effort to follow those procedures.  (I am sure in reality it's more complicated than that, but let's keep it simple for argument's sake.)  Of course each initiative must be compliant with multiple policies, so there are multiple lines between the effort and the policies.  The procedures are often interdependent, yet created independently by separate offices, often in isolation from what other offices do.  In the end you have a thick mesh to get through that needs to be rediscovered for each new initiative.   I have come to the conclusion that mapping PREP data is critical to creating a more collaborative, agile and innovative government.

The problem starts with policy being handled as a 19th-century invention, that is, as an isolated document.   The document is passed to various departments with responsibility to make sure the policy is followed, and these departments create procedures to ensure that it is. When someone wants to create a new initiative or project, they need to determine all the procedures from all the policies involved and then put in the effort to follow these often disjointed procedures, which often have hidden interdependencies.  This seems to be a primary cause of what we commonly call 'bureaucracy'.

Many well-intended policies come together to produce unintended, entangled procedures which form a barrier to quickly creating new initiatives.  Essentially this is an emergent property of the many policies implemented over the years, as well as the many offices created to follow them.  The result is fewer or slowed new initiatives, leading to less innovation and collaboration (since almost by definition collaborative efforts will involve new initiatives).  A confounding problem is that new technology is forcing procedures to be reconsidered and policies reinterpreted, which adds to the complexity.

There is data on policy-to-effort connections, but it does not seem to be centrally accessible or uniformly stored.  And the large differences in interpretation by individuals at every node in the PREP data, which can change with every decision, confound the problem of understanding what is really happening.

New policies to instigate new initiatives are now being issued and fast results are expected, but because of this unseen mesh which holds up execution, the top levels are frustrated with the work not getting done.   Meanwhile the people in agencies feel that they are too constrained to get the new initiative started.  Since the mesh is invisible, solutions to change the system become confusing and difficult to follow because they normally add to the mesh rather than disentangle and streamline it.

Solution: Map the PREP (Policy-Responsibility-Effort-Procedure) data and use this map to create guidance on streamlining implementation of policies as well as identifying duplicate or unnecessary procedures.  The data should include the ongoing or one-time man-hours involved in the effort to follow a procedure, the average calendar delay caused by a procedure, and any interdependencies with other procedures.
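A PREP record could be as simple as the following sketch. The field names, offices and numbers here are all hypothetical, just to show how little structure is needed to make the mesh visible and summable:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record structure for PREP data; not an official schema.
@dataclass
class ProcedureRecord:
    policy: str                  # the policy this procedure implements
    responsible_office: str      # office that owns the procedure
    procedure: str               # what must be done to comply
    effort_hours: float          # ongoing or one-time man-hours of effort
    calendar_delay_days: float   # average calendar delay the procedure adds
    depends_on: List[str] = field(default_factory=list)  # interdependent procedures

# Two invented records a new initiative might have to satisfy.
records = [
    ProcedureRecord("Privacy Act", "CIO Office",
                    "Privacy impact assessment", 40, 30),
    ProcedureRecord("Security Policy", "CISO Office",
                    "Certification and accreditation", 200, 90,
                    depends_on=["Privacy impact assessment"]),
]

# With the data mapped, an initiative can estimate its compliance burden.
total_hours = sum(r.effort_hours for r in records)
total_delay = max(r.calendar_delay_days for r in records)  # assuming parallel work
print(total_hours, total_delay)  # prints: 240 90
```

Even this toy version answers questions that today require rediscovery for every initiative: how much effort, how much delay, and which procedures are entangled with which.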

How? Initially just collect the data in a standardized and centrally accessible format.   It will be almost immediately useful. Use collaborative techniques to collect a lot of data quickly even if it means lower quality data initially. Then gradually move the data to a Semantic/RDF storage system where it can be queried in many different ways and linked to the broader set of definitions such as law, case history etc.
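The move to Semantic/RDF storage could start as modestly as the following toy sketch, where plain tuples stand in for RDF triples and a pattern-matching function stands in for SPARQL. All the predicate and resource names are invented for illustration:

```python
# Toy triple store: (subject, predicate, object) tuples standing in for RDF.
# Names like "prep:enforcedBy" are made-up predicates, not a real vocabulary.
triples = {
    ("policy:PrivacyAct", "prep:enforcedBy", "office:CIO"),
    ("office:CIO", "prep:requires", "proc:PrivacyImpactAssessment"),
    ("proc:PrivacyImpactAssessment", "prep:effortHours", "40"),
    ("policy:SecurityPolicy", "prep:enforcedBy", "office:CISO"),
    ("office:CISO", "prep:requires", "proc:CertificationAccreditation"),
    ("proc:CertificationAccreditation", "prep:dependsOn",
     "proc:PrivacyImpactAssessment"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the pattern; None is a wildcard, like a SPARQL variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which procedures does a new initiative inherit from the Privacy Act?
for _, _, office in query(s="policy:PrivacyAct", p="prep:enforcedBy"):
    for _, _, proc in query(s=office, p="prep:requires"):
        print(proc)  # proc:PrivacyImpactAssessment
```

Because every node is a URI-style identifier, the same records can later be linked out to the broader set of definitions (law, case history, etc.) without restructuring what was already collected, which is the Linked Data payoff.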

This will be the start of making a more agile, collaborative and data centric government.

The challenge to this approach: besides data management, which is not too bad initially, many of these hidden paths are not 100% OK under 100% of interpretations of the policies.  So how do we create a collaborative environment without people worrying that the interpretations which allow them to get things done will be ruled incorrect?  It seems this needs to be a research project whose data can't be used for that purpose.