Building Intelligent Systems: Biological and Software

There are amazing analogies between computer processors and the brain, and it is just a matter of time before the algorithms that define electronic processors mirror the functionality of the chemical processors in our brains. Biological neural systems rely on inputs and pattern recognition to learn. Computers are excellent at storing (remembering) facts, and are becoming proficient at recognizing relationships as defined by statistical patterns and neural networks. However, computers cannot yet create metaphors or learn how to independently process new information.

Metaphors are an important part of how humans think. We understand new information by drawing parallels and connections between it and prior information. Put over-simply, we learn by recognizing relationships between new information and old. Computers could do the same when their data sets include enough relevant fields to computationally identify systems that are described by similar dynamics. In other words, when a computer has data about how things work, it can find systems that work similarly to each other. The leap from there to drawing metaphors is a data-mining process: it can be solved by computational rote, where statistical relationships are identified, prioritized, and used for prediction.
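As a minimal sketch of this idea – with entirely hypothetical system names and feature values – each system could be described by a vector of measured dynamics, and the most "metaphorically" similar system found by comparing those vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical systems, each described by the same dynamic features
# (e.g. growth rate, feedback strength, decay rate).
systems = {
    "predator_prey": [0.90, 0.80, 0.10],
    "market_cycle":  [0.85, 0.75, 0.15],
    "radio_decay":   [0.00, 0.10, 0.95],
}

def most_similar(name, systems):
    """Return the system whose dynamics best match `name` -- the
    computational rote behind 'ABC is driven like XYZ'."""
    target = systems[name]
    others = {k: v for k, v in systems.items() if k != name}
    return max(others, key=lambda k: cosine_similarity(target, others[k]))
```

Here `most_similar("predator_prey", systems)` picks out `market_cycle`, because their dynamics vectors nearly coincide – the raw material for a metaphor.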

This process also leads to “learning” how to process new information. By recognizing the metaphors, new data can be classified and described according to how it is understood. And just like in our brains, there will be errors. Misunderstandings will occur as metaphors are calculated from incomplete data sets. As more data is input, systems will have to make corrections and re-calculate all of the other metaphors that included the corrected data. New corrections will be made, and a cascade of corrections will result in a modified historical data record. A large number of calculations and recalculations will occur with each new input, and retaining the historical data (and calculated results) will require substantial processing and storage capacity.
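The cascade of corrections described above can be sketched as a walk over a dependency graph: correcting one observation marks everything computed from it (directly or transitively) as needing recalculation. All names below are purely illustrative:

```python
from collections import deque

# Each derived result lists the inputs (raw observations or other
# derived results) it was computed from. Names are hypothetical.
deps = {
    "metaphor_1":    {"obs_a", "obs_b"},
    "metaphor_2":    {"obs_b", "obs_c"},
    "metaphor_3":    {"obs_d"},
    "meta_analysis": {"metaphor_1", "metaphor_3"},
}

def recalculation_set(corrected, deps):
    """Everything that must be recalculated (transitively) after
    `corrected` is revised -- the 'cascade of corrections'."""
    dirty, queue = set(), deque([corrected])
    while queue:
        item = queue.popleft()
        for name, inputs in deps.items():
            if item in inputs and name not in dirty:
                dirty.add(name)
                queue.append(name)
    return dirty
```

Revising `obs_b` dirties both metaphors built on it, and then the meta-analysis built on one of those metaphors – exactly the modified historical record the paragraph describes.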

Recognizing metaphors will allow machines to output statements like: “It appears that ABC is driven in many similar ways to XYZ. The result we are seeking might be accomplished by A because a similar result was achieved in XYZ when X was applied.” Put more simply, computers will be able to express creative suggestions.

Interestingly, storage could be massively reduced by deleting the large volumes of data that support relationships strong enough to exceed some threshold level of certainty. For example, if everything falls, then we don’t have to keep all that data – just the relationship that everything falls. This may be analogous to forming intuitions.
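A toy sketch of this intuition-forming compression, under the simplifying assumption that the relationship is a boolean rule whose confidence is just its observed frequency:

```python
def compress(observations, threshold=0.99):
    """If a single relationship ('things fall') explains at least
    `threshold` of the observations, discard the raw data and keep
    only the rule -- a crude analogue of forming an intuition."""
    total = len(observations)
    fell = sum(1 for obs in observations if obs["fell"])
    confidence = fell / total
    if confidence >= threshold:
        return {"rule": "things fall", "confidence": confidence}  # raw data deleted
    return observations  # not certain enough; keep the raw data

# Hypothetical drop experiments: a thousand records collapse to one rule.
data = [{"object": f"item_{i}", "fell": True} for i in range(1000)]
result = compress(data)
```

A thousand stored observations reduce to a single rule; a data set with enough counterexamples would stay below the threshold and be kept in full.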

The Future of Productivity and Culture

Productivity will continue to increase – and at an increasing rate. This trend leads inevitably to a world in which the average person needs to work only a small amount to support their basic needs. While this will be true on average, in reality we will most likely see a few individuals working very productively and supporting the needs of growing groups of underemployed people.

Social safety nets will become easier to support (assuming that the standard of social safety does not increase faster than the improvements in productivity). Vast portions of the population will stop working. Cultural differences will become pronounced as individuals and groups ‘specialize’ in non-work activities. Quality and breadth of entertainment, interpersonal interaction, and self-expression will greatly improve.

There will be a growing conflict between the highly productive individuals and companies and the large numbers of people who are underemployed. Managing this conflict will be a major political task.

Centralized network computing will win

I know it’s a big debate right now, but centralized network computing will win in the end.

Centralized network computing is the term used to describe a system of networked web servers (or a single web server) that provides integrated applications and storage for multiple users who can access the system through distributed terminals. The system can be geographically distributed or not, but will share a common (integrated) network of applications, probably using a software interface standard to encourage and enable multiple independent application development teams.

Centralized networks are inevitable because of self-reinforcing competitive advantages. Economies of scale and market forces will lead to substantial change in the way we compute, and the systems we use now are simply the seeds that will grow into (and merge together to form) global centralized information service providers. There are already some very strong indicators that this trend is happening, and the potential points to this trend being a very long one.

  1. There are economies of scale in processing. Load balancing can optimize processor utilization and provide both faster interactivity and reduced hardware investment.
  2. There are competitive advantages in information and application aggregation. Integrations can break down the walls between programs, improving functionality through better integration and data sharing. You can analyze data in more dimensions and with more flexibility. Development rates can improve as it becomes possible to separate more software components and the people who work on them.
  3. Load balancing improves transmissions. Transfer rates improve because fewer nodes are required and data traffic can be optimized. Information served from a server at your edge is faster and more reliable than information sent from another server through your edge.
  4. End-user transparency. The front-end possibilities under centralized computing are not limited beyond those of other systems. This implies that there will not be a selection bias away from centralized systems, because end-users will not prefer or even recognize the difference between systems. That is not to say that they will all be the same – only that they all could be. The opportunity set in one system is available in the other.
  5. The outsourced storage industry exists. This implies a willingness to adopt on the part of the owners of data.
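The load balancing invoked in points 1 and 3 can be illustrated with a toy least-load scheduler – server names and request counts here are hypothetical:

```python
import heapq

class LeastLoadBalancer:
    """Toy least-load balancer: each incoming request is routed to the
    server currently holding the fewest active requests."""
    def __init__(self, servers):
        # Min-heap of (active_requests, server_name) pairs.
        self.heap = [(0, s) for s in servers]
        heapq.heapify(self.heap)

    def route(self):
        load, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, server))
        return server

# Six requests spread evenly across three hypothetical edge servers.
lb = LeastLoadBalancer(["edge-1", "edge-2", "edge-3"])
assignments = [lb.route() for _ in range(6)]
```

Because the least-loaded server always takes the next request, utilization stays even across the pool – the processing-side economy of scale the list describes.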

You can see the markets already rewarding companies that are moving to take advantage of this trend. Many of these companies are providing application services along with ISP connectivity, and they are capturing traffic. Those users are investing time and thought in signaling their preferences – for example, by personalizing page layout and content, and often even using system-wide wallets and e-mail. Giving users what they prefer is a huge competitive advantage. The time it takes to personalize a competing system is a high transaction cost – especially relative to the low cost of inertia.

Eventually, you will be using only a browser. All your computing will occur on a centralized system and be sent to your browser for translation into your interface. All of your applications will be centrally hosted, so your profile and applications – essentially everything you do on your computer – will be available to you from any machine, at any time.

Multiple systems will compete for scale, reducing marginal costs and creating meaningful and irreproducible competitive advantages. This race will likely be won permanently by 2050. Before then, ASP services will consolidate rapidly along with economic cycles. The early players will rely on loss-leader models to attract user bases, and will transition to profitability as their scale reaches the tipping point. The companies that reach profitability first and reinvest in their technology platforms will improve their integration, breadth, and quality to further support their competitive advantages.

In the first decade or two of this trend, there will probably be dozens of smaller companies that are able to enter and gain market share against their larger competitors. These companies will have competitive advantages most likely based on data storage, traditional media integration, wireless adoption, software platform architecture, applications suite integrations, and possibly international comparative advantage. After 20 years, the marginal advantages possible from these characteristics will not pose a meaningful threat to the aggregation and scale advantages of the top few market participants.

Consolidate or die will be the mantra of information companies.

Forecasting Changes in Asset Values Based on Measurable Cultural Influences

The sum total of the media contributed to the internet approximates the attention of society during that period. Analysis of this media therefore indicates trends in attention and preferences, and those trends create signals about the direction of values for securities and other assets. Comparing the frequency and trend direction of keyword usage, the volume of content in certain classifications, and the level and type of media-file contributions against sector and industry pricing trends is a recommended starting point for analysis.
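As a sketch of that starting point – using hypothetical monthly counts, not real data – one could correlate a keyword-mention series against a sector price series and treat the correlation as a candidate signal:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly series: online mentions of a sector keyword,
# and that sector's price index over the same months.
keyword_mentions = [120, 150, 180, 260, 300, 420]
sector_prices    = [100, 104, 109, 121, 128, 145]

signal = pearson(keyword_mentions, sector_prices)
```

A strongly positive `signal` would flag the keyword trend as worth investigating as a leading or coincident indicator; real analysis would of course need lags, controls, and far more data.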

Reality, perception, and the media’s effect on the economy

For each of us, perception is reality, and drives our behaviors and investments. The media have strong influence over the perceptions of communities, and this drives the net investments of time and money in a community.

It is this net investment of time and money that dictates changes in the economy and the community. So those who produce the information that changes perceptions can manipulate the reality of the community’s growth. Is a relationship this general something that we want to foster in our own community? Will it lead down the best path for humanity?

A major manifestation of this relationship is the growing value of media. Those who control the media can manipulate its users by biasing the information flow. Biases often manifest themselves in the analysis and selection of content. It is no wonder that the latest tulip-bulb incident centered on the technology and media industries that helped to create that boom and bust.

Where would we go if we were not biased? Given all the information and allowed to weigh it as we see fit, we would each perceive the world more independently. The range of individuality would increase, while the volatility of consumer confidence over time would fall, because correlations in idiosyncratic confidence levels would fall – affecting every cyclical market. If there is such a thing as a risk premium, then people are willing to pay to reduce volatility, effectively indicating that lowering the volatility of cyclical markets would be a good thing.

Is religion succumbing to media as the primary influence on a population’s perception of self and community? And if so, are we replacing one set of biases with another? The most important reason to avoid biases is that they may steer decisions away from what is right or best. The development of the community is optimized when large numbers of unbiased (and rational) individuals participate in decision-making.

We are naturally transitioning toward an environment where individuals have more control over their information. The internet is a strong influence on that process. As the barriers to entry continue to fall in the media space, we will have more and more access to varied information. As this happens, we will see new tools arise that allow us to filter, select, and interpret our information. This is an important process in the evolution of our society, and is a necessary step in the transition toward fairness and truth.