
SOAP, .net, and the ubiquitous internet cloud

Microsoft’s recent major push to develop the .net platform is an attempt to aggregate and brand all internet services into the Windows operating system. And it just might work. SOAP and .net are sometimes referred to as a “cloud” because of the distributed nature of the processing: your machine accesses a server, which renders your display from interface components and applications that may be hosted on different machines anywhere else, controlled by anyone else. I think this is a key new technology and that it will play an important role in the development of communications technology over the next few years.

What is this technology? And where does it take us?


SOAP and .net use techniques that enable distributed computing and web serving. In other words, they allow web applications to run on independent computers, independent of the look and feel of the site that presents them. Application developers will want to adopt this technology because it means they can focus on the application and spend less time on the user interface. Portals will want to adopt it because it means they can integrate many external services and make them available to their users. Microsoft, I believe, is building this technology into the Windows operating system so that any internet application can be run without leaving the Microsoft-controlled environment.
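
To make the mechanics concrete, here is a minimal sketch of what a SOAP call looks like on the wire. The envelope structure is standard SOAP 1.1; the endpoint URL, namespace, and GetQuote operation are hypothetical, and only the Python standard library is used:

    # Minimal SOAP 1.1 request over HTTP, standard library only.
    # The endpoint, namespace, and "GetQuote" operation are hypothetical.
    import urllib.request
    import xml.etree.ElementTree as ET

    ENVELOPE = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stocks">
          <symbol>MSFT</symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.com/soap",  # hypothetical service endpoint
        data=ENVELOPE.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://example.com/stocks/GetQuote",
        },
    )
    with urllib.request.urlopen(request) as response:
        reply = ET.fromstring(response.read())  # the reply is another envelope

The point is that the whole exchange is plain XML over HTTP: any client, on any platform, can call any service that speaks the protocol.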

There is a programming design heuristic based on the separation of model, view, and controller (and, on the web, content). SOAP is analogous: it enables the separation of the model from the rest. That design came to dominate application programming, and the separation SOAP enables will likely become just as dominant. Effectively, the potential of XML is captured through the definition of protocols for the distributed exchange of application functionality.
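
As a toy illustration of that separation (all names and numbers invented), the model below is an ordinary function that could just as easily sit behind a SOAP endpoint on another machine; the view only formats whatever the model returns:

    # Toy model/view separation. The model is pure application logic;
    # the view is presentation only and could be swapped per portal.

    def model_account_balance(account_id: str) -> float:
        """The model: application logic (hypothetical data)."""
        balances = {"alice": 1204.50, "bob": 87.10}
        return balances.get(account_id, 0.0)

    def view_render(account_id: str, balance: float) -> str:
        """The view: formatting only, no business logic."""
        return f"Account {account_id}: ${balance:,.2f}"

    print(view_render("alice", model_account_balance("alice")))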

What is the risk?

If .net succeeds in becoming the dominant channel for web services, there is a strong likelihood that Windows will combine operating system and portal functionality to provide the complete computing experience. Microsoft will have the ability to target services, advertising, applications, communications, and other information to each individual. Further, web services will be conveniently available through integrated Windows applications, reducing the need for browser-based web access. Specifically, it will mean that the internet can be re-faced with a Microsoft-branded front-end and a selection of web services defined by Microsoft.

What is the potential?

For .net to become only one of many popular web service aggregators, the SOAP protocol must never give Microsoft an advantage over other aggregators. If SOAP (Simple Object Access Protocol) remains open in such a way that any portal can aggregate any SOAP-enabled web service, the result will be wonderful. Specifically, it will mean that the entire internet can be re-faced with a customizable front-end, and your selection of web services will be personalized and context-dependent, based on specifications you select.

Centralized Computing Platform

The world needs a platform for centralized computing that enables anyone to commercially publish their intellectual property through any networked device using their own interface. This platform could be supplemented with advanced, semantic search across all IP, as well as access to any distributed web service through SOAP.

Wearable Computing

Digital convergence is about accessing the functionality of a broad array of devices from fewer, more pervasive devices. The logical result of SOAP, wireless connectivity, open source software, and increasingly compact hardware is a trend toward a small wearable computer with access to any web service, including personal information, through a customizable interface. In combination with remote device control, biofeedback input devices, and systems for enhancing the senses, the implications are astounding.

Distributed Processing and Biological Approximation

The complexity of biological thinking is impossible to replicate today when constrained by the limitations of a single machine. With the ability to define and exchange standard objects through a standard interface, this barrier could be broken. The hurdle of coding models that approximate the function of the brain has also been insurmountable as long as development was dictated, managed, and organized within a single company. The upper limit on programs of this origin seems to be in the range of 10 million lines. The growth of open source development opens the door to integrating many code bases to form the billions of lines necessary to approach the human capacity for thought.

Building Intelligent Systems: Biological and Software

There are amazing analogies between computer processors and the brain, and it is just a matter of time before the algorithms that define electronic processors mirror the functionality of the chemical processors in our brains. Biological neural systems rely on inputs and pattern recognition to learn. Computers are excellent at storing (remembering) facts, and are becoming proficient at recognizing relationships as defined by statistical patterns and neural networks. However, computers cannot yet create metaphors or learn how to independently process new information.
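
As one small, standard example of that kind of pattern recognition (the training data here is just the logical AND function), a perceptron, the simplest neural learner, adjusts its weights from examples rather than being programmed with the rule:

    # A perceptron learns the AND function from labeled examples.
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 0, 0, 1]  # logical AND
    w = [0.0, 0.0]
    b = 0.0
    rate = 0.1

    for _ in range(20):  # a few passes over the examples
        for (x1, x2), target in zip(inputs, labels):
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            w[0] += rate * error * x1  # nudge the weights toward the answer
            w[1] += rate * error * x2
            b += rate * error

    for x1, x2 in inputs:
        print((x1, x2), 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0)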

Metaphors are an important part of how humans think. We understand new information by drawing parallels and connections between it and prior information. Put over-simply, we learn by recognizing relationships between new information and old. Computers could do the same when their data sets include enough relevant fields to computationally identify systems that are described by similar dynamics. In other words, when a computer has data about how things work, it can find systems that work similarly to each other. The leap from there to drawing metaphors is a data-mining process: it can be solved by computational rote, where statistical relationships are identified, prioritized, and used for prediction.
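
A crude sketch of that data-mining step (the systems and feature values are entirely invented): describe each system by a vector of its dynamics, then flag pairs whose descriptions are similar enough as candidate metaphors:

    # Candidate-metaphor finder: systems with highly similar "dynamics"
    # vectors are proposed as analogues. All data is invented.
    import math

    systems = {
        "falling stone":  [9.8, 0.0, 1.0],  # made-up feature values
        "market crash":   [9.1, 0.3, 1.0],
        "boiling kettle": [0.2, 8.7, 0.0],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    names = list(systems)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sim = cosine(systems[a], systems[b])
            if sim > 0.95:  # threshold for "similar dynamics"
                print(f"{a} ~ {b} (similarity {sim:.2f})")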

This process also leads to “learning” how to process new information. By recognizing metaphors, new data can be classified and described according to how it is understood. And just as in our brains, there will be errors. Misunderstanding will occur as metaphors are calculated from incomplete data sets. As more data is input, systems will have to make corrections and re-calculate all of the other metaphors that included the corrected data. New corrections will follow, and a cascade of corrections will produce a modified historical data record. A large number of calculations and recalculations will occur with each new input, and keeping the historical data (and calculated results) will require substantial processing and storage.
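
One way to picture that cascade (a toy sketch, not a real learning system): each derived result remembers which data it was computed from, so correcting one observation invalidates everything downstream of it, which is then queued for recomputation in turn:

    # Toy correction cascade over a dependency record.
    from collections import deque

    depends_on = {  # result -> the data it was computed from
        "metaphor:A~B": {"obs1", "obs2"},
        "metaphor:B~C": {"obs2", "obs3"},
        "plan:try-X":   {"metaphor:A~B"},
    }

    def cascade(corrected):
        """List every stored result that must be recomputed."""
        dirty, queue = [], deque([corrected])
        while queue:
            item = queue.popleft()
            for result, inputs in depends_on.items():
                if item in inputs and result not in dirty:
                    dirty.append(result)  # stale: schedule a recompute
                    queue.append(result)  # and whatever depends on it
        return dirty

    print(cascade("obs2"))  # ['metaphor:A~B', 'metaphor:B~C', 'plan:try-X']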

Recognizing metaphors will allow machines to output statements like: “It appears that ABC is driven in many similar ways to XYZ. The result we are seeking might be accomplished by A because a similar result was achieved in XYZ when X was applied.” Put more simply, computers will be able to express creative suggestions.

Interestingly, storage could be massively reduced by deleting the large volumes of data that support relationships strong enough to exceed some threshold level of certainty. For example, if everything falls, then we don’t have to keep all that data, just the relationship that everything falls. This may be analogous to forming intuitions.
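
A sketch of that compression step (the observations are invented): fit a single rule to the data, and if the rule explains the observations within some confidence threshold, keep the rule and discard the evidence:

    # "Intuition" as compression: keep the rule, drop the evidence.
    observations = [(obj, "falls") for obj in
                    ("apple", "stone", "cup", "book", "coin")]

    def fit_rule(data):
        """Propose the single most common outcome as the rule."""
        outcomes = [outcome for _, outcome in data]
        return max(set(outcomes), key=outcomes.count)

    rule = fit_rule(observations)
    accuracy = sum(o == rule for _, o in observations) / len(observations)

    if accuracy >= 0.95:  # confident enough to compress
        observations = []  # drop the supporting data...
        print(f"everything {rule}")  # ...keep only the relationship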