
Invest in Biotech and Information Sciences

If we invest heavily in biotechnology and information sciences companies (especially genomics, networked centralized computing, neurology, neural-network predictive applications, and nerve regeneration) over the next 50 years, many people alive today may have an opportunity to achieve a substantially improved quality of life, longer lives, and indefinitely extended sentience.

The payoff is more than financial, but it can still be evaluated financially. The return on these investments should be calculated as the return on the securities themselves, plus the return on your other investments over the period of time that your life and investment horizon are extended. It is possible, then, that the net return on biotech and information science investments may be substantially higher than the direct value change of those investment securities.

Distributed Processing and Biological Approximation

The complexity of biological thinking is impossible to replicate today when constrained by the limitations of a single machine. With the ability to define and exchange standard objects through a standard interface, this barrier could be broken. The hurdle of coding models that approximate the function of the brain has also been insurmountable as long as development was dictated, managed, and organized within a single company. The upper limit on programs of this origin seems to be in the range of 10 million lines. The growth of open source development opens the door to the integration of many codebases to form the billions of lines necessary to approach human capacity for thought.
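As a hedged sketch of the "standard objects and a standard interface" idea, the snippet below defines a minimal shared contract that independently developed modules could plug into. Every name here (`Signal`, `Module`, `registry`) is invented for illustration; no such standard exists.

```python
# Hypothetical sketch: a shared interface lets modules written by
# different authors be composed into one larger system.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Signal:
    source: str   # which module produced this value
    value: float  # the payload exchanged between modules

# A "module" is anything that maps incoming signals to an output value.
Module = Callable[[List[Signal]], float]

def sum_inputs(inputs: List[Signal]) -> float:
    return sum(s.value for s in inputs)

def threshold(inputs: List[Signal]) -> float:
    # Fires only when combined input exceeds a fixed level.
    return 1.0 if sum(s.value for s in inputs) > 1.0 else 0.0

# Independently developed modules coexist in one registry because
# they share the same interface.
registry: Dict[str, Module] = {"adder": sum_inputs, "gate": threshold}

def run(name: str, inputs: List[Signal]) -> float:
    return registry[name](inputs)

print(run("gate", [Signal("a", 0.7), Signal("b", 0.6)]))  # 1.0
```

The point is not the trivial modules but the contract: once inputs and outputs are standardized, integration stops requiring central coordination.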

Building Intelligent Systems: Biological and Software

There are amazing analogies between computer processors and the brain, and it is just a matter of time before the algorithms that define electronic processors mirror the functionality of the chemical processors of our brains. The biological neural systems rely on inputs and pattern recognition to learn. Computers are excellent at storing (remembering) facts, and are becoming proficient at recognizing relationships as defined by statistical patterns and neural networks. However, computers cannot yet create metaphors or learn how to independently process new information.

Metaphors are an important part of how humans think. We understand new information by drawing parallels and connections between it and prior information. To oversimplify, we learn by recognizing relationships between new information and old. Computers could do the same when their data sets include enough relevant fields to computationally identify systems that are described by similar dynamics. In other words, when a computer has data about how things work, it can find systems that work similarly to each other. The leap from there to drawing metaphors is a data-mining process: it can be solved by computational rote, where statistical relationships are identified, prioritized, and used for prediction.
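A hedged sketch of that data-mining step: if each system is described by the same fields, a similarity measure over those fields flags candidate "metaphors". The systems and feature values below are invented for illustration.

```python
# Sketch: systems whose measured dynamics are highly similar are
# flagged as analogous -- candidate metaphors.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Each system described by the same fields
# (e.g. growth rate, damping, oscillation) -- values invented.
systems = {
    "epidemic":   [0.90, 0.10, 0.30],
    "viral_meme": [0.85, 0.15, 0.25],
    "pendulum":   [0.00, 0.80, 0.90],
}

def find_metaphors(systems, threshold=0.95):
    """Return pairs of systems whose dynamics exceed the similarity threshold."""
    names = sorted(systems)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(systems[a], systems[b]) >= threshold:
                pairs.append((a, b))
    return pairs

print(find_metaphors(systems))  # [('epidemic', 'viral_meme')]
```

Here the spread of an epidemic and the spread of a meme surface as an analogous pair, while the pendulum does not, which is the "computational rote" the paragraph describes.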

This process also leads to “learning” about how to process new information. By recognizing the metaphors, new data can be classified and described according to how it is understood. And just as in our brains, there will be errors. Misunderstanding will occur as metaphors are calculated from incomplete data sets. As more data is input, systems will have to be able to make corrections and recalculate all of the other metaphors that included the corrected data. New corrections will be made, and a cascade of corrections will result in a modified historical data record. A large number of calculations and recalculations will occur with each new input, and the storage of historical data (and calculated results) will require substantial processing and storage.
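The correction cascade can be sketched as a dependency map: each derived result records which raw inputs it came from, so a corrected input triggers recomputation of everything downstream. The data and relations below are hypothetical.

```python
# Sketch of the correction cascade: a corrected input forces
# recomputation of every derived result that depended on it.
data = {"x": 2.0, "y": 3.0, "z": 10.0}

# Derived results paired with the inputs they depend on (invented).
derived = {
    "sum_xy":   (lambda d: d["x"] + d["y"], {"x", "y"}),
    "ratio_zx": (lambda d: d["z"] / d["x"], {"x", "z"}),
}

# Initial computation of all derived results.
results = {name: fn(data) for name, (fn, _) in derived.items()}

def correct(key, new_value):
    """Apply a correction, then recompute only the affected results."""
    data[key] = new_value
    recomputed = []
    for name, (fn, deps) in derived.items():
        if key in deps:
            results[name] = fn(data)
            recomputed.append(name)
    return recomputed

print(correct("x", 4.0))   # both results depended on x
print(results["sum_xy"])   # 7.0
print(results["ratio_zx"])  # 2.5
```

In a real system the recomputed results would themselves feed other results, producing the cascade and the heavy storage and processing load the paragraph anticipates.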

Recognizing metaphors will allow machines to output statements like: “It appears that ABC is driven in many similar ways to XYZ. The result we are seeking might be accomplished by A because a similar result was achieved in XYZ when X was applied.” Put more simply, computers will be able to express creative suggestions.

Interestingly, storage could be massively reduced by deleting the large volumes of data that support relationships strong enough to exceed some threshold level of certainty. For example, if everything falls, then we don’t have to keep all that data, just the relationship that everything falls. This may be analogous to forming intuitions.
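The "everything falls" example can be written down directly: once a relationship holds across enough uniform observations, keep the rule and discard the raw data. The threshold and observations are illustrative.

```python
# Sketch of "intuition" compression: enough uniform evidence lets us
# replace the raw observations with a single rule.
observations = [("apple", "falls"), ("stone", "falls"), ("leaf", "falls"),
                ("ball", "falls"), ("cup", "falls")]

def compress(observations, min_support=5):
    """If all observations agree and support is sufficient,
    return (rule, empty data); otherwise keep the raw data."""
    outcomes = {outcome for _, outcome in observations}
    if len(outcomes) == 1 and len(observations) >= min_support:
        return {"rule": f"everything {outcomes.pop()}"}, []
    return None, observations

rule, remaining = compress(observations)
print(rule)       # {'rule': 'everything falls'}
print(remaining)  # [] -- the supporting data has been deleted
```

A single counterexample (something that does not fall) would keep the raw data, which is the certainty threshold doing its job.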

The Evolvor Cycle

I think that the following cycle is an abstraction that applies to many kinds of systems and, when implemented, can create very powerful evolutionary dynamics. It actively evolves the underlying system, so I call it the ‘Evolvor cycle’.

Applied to a configuration, for example, it would set the original default for new users according to the implied preferences of the existing users. The option to signal new preferences continues the cycle and the default configuration evolves over time without any human administration.
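The configuration example can be sketched in a few lines: the default for new users is recomputed as the most common choice among existing users, so each signaled preference feeds the next turn of the cycle. Names and values below are invented.

```python
# Hedged sketch of the Evolvor cycle applied to a default setting:
# existing users' choices imply the default offered to new users.
from collections import Counter

def evolve_default(user_choices, current_default):
    """Return the new default implied by existing users' preferences."""
    if not user_choices:
        return current_default  # no signal yet: keep the old default
    return Counter(user_choices).most_common(1)[0][0]

default = "light_theme"
choices = ["dark_theme", "dark_theme", "light_theme"]
default = evolve_default(choices, default)
print(default)  # dark_theme -- the majority preference becomes the default
```

Run periodically, this loop evolves the default configuration with no human administration, exactly as the cycle prescribes.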

Remote Storage and Analysis of Sensory Logs

Wireless connectivity can solve the problem of local storage constraints for PDAs and other portable devices (including wearables). In addition, external processing greatly increases the breadth and depth of analysis that can be performed on the data. For example:

  • Journaling
  • Trend analysis and advice
  • Recommendations: media, medical, communications, reminders
  • Statistical analysis of log data to forecast user reactions to new voice and other input
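As one hedged illustration of the trend-analysis bullet, the snippet below fits a least-squares trend to a short log of sensor readings and forecasts the next value. The data is invented and the method is one simple choice among many.

```python
# Sketch: ordinary least-squares trend over logged sensor samples,
# used to forecast the next reading.
def linear_trend(ys):
    """Fit y = a + b*t over t = 0..n-1; return (a, b)."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

log = [70.0, 72.0, 74.0, 76.0]  # e.g. logged heart-rate samples (invented)
a, b = linear_trend(log)
forecast = a + b * len(log)     # predict the next sample
print(forecast)  # 78.0
```

Done remotely, this kind of analysis is what frees the portable device itself from both the storage and the computation.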