Method of server-side biofeedback system for mechanically evolving human-computer interfaces

Many different forms of biofeedback devices, each with its own software, could be attached to a USB port on any networked device that supports USB. The output from each device is sent in real time, in raw (compressed and encrypted) form, to a server-side application, probably written in C++. Installing a new device involves:

  1. acquiring the biofeedback device,
  2. connecting it to the network device,
  3. identifying the muscle groups, electrode locations, or any other biofeedback characteristics,
  4. initiating the translational learning, and
  5. interacting in regular teaching sessions within an individual account.
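The steps above can be sketched as a minimal account/device model. Every name here (`Device`, `Account`, `install`, `begin_teaching`) is illustrative, not part of the described system:

```python
# Hypothetical sketch of the installation flow; names are invented.
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    characteristics: dict  # e.g. muscle groups, electrode locations

@dataclass
class Account:
    user: str
    devices: list = field(default_factory=list)

    def install(self, device: Device) -> None:
        """Steps 1-3: acquire, connect, and characterize the device."""
        self.devices.append(device)

    def begin_teaching(self, device: Device) -> str:
        """Steps 4-5: initiate translational learning for this device."""
        return f"teaching session for {device.device_id} under {self.user}"

account = Account(user="alice")
emg = Device(device_id="emg-001",
             characteristics={"muscle_group": "forearm flexors"})
account.install(emg)
print(account.begin_teaching(emg))
```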

The first signal from the biofeedback device to the server is its identification. An installation file is selected and executed on the basis of this identification. The installation file runs on the server side to set up the new device and initiate translation learning. Translation learning begins as the translation software (referred to as “Empath”) requests that the user perform certain tasks while concurrently storing the streaming output of the biofeedback device. A new database table is generated within the user’s account for each device installed. Correlations are evaluated between biofeedback streams and the tasks being performed in order to identify the appropriate computer response to real-time biofeedback streams.
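One very crude way to picture the correlation step is a per-task prototype: average the stored streams for each requested task, then map a live stream to the nearest prototype. The nearest-mean approach and all names below are assumptions, not the actual design:

```python
# Illustrative sketch: per-task feature averages act as a crude
# "translation" from a biofeedback stream to a learned task.
import statistics

# Streams stored while the user performed each requested task.
training = {
    "clench_fist": [[0.9, 0.8, 0.95], [0.85, 0.9, 0.8]],
    "relax":       [[0.1, 0.05, 0.2], [0.15, 0.1, 0.1]],
}

# One prototype (mean amplitude) per requested task.
prototypes = {
    task: statistics.fmean(v for stream in streams for v in stream)
    for task, streams in training.items()
}

def translate(stream):
    """Map a live stream to the task whose prototype it is closest to."""
    level = statistics.fmean(stream)
    return min(prototypes, key=lambda t: abs(prototypes[t] - level))

print(translate([0.88, 0.92, 0.9]))  # closest to the "clench_fist" prototype
```

A real Empath would use far richer features and models, but the shape of the problem is the same: stored task-labeled streams in, a task label out.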

Each user teaches the application to respond to biofeedback by providing the computer with adequate samples of data. Whenever a translation does not meet an acceptable standard of certainty, the user is prompted for additional samples that specify the correct translation; this acts to teach the Empath. The teaching system can be disabled, and the certainty standards can be edited.
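The certainty standard might look like a simple threshold gate in front of each translation, with the threshold and the teaching toggle both user-editable. The threshold value and function names here are illustrative:

```python
# Sketch of the certainty check described above; values are assumptions.
CERTAINTY_THRESHOLD = 0.8  # the editable "acceptable standard of certainty"
teaching_enabled = True

def respond(translation: str, certainty: float):
    """Act on a confident translation, or prompt the user for more samples."""
    if certainty >= CERTAINTY_THRESHOLD or not teaching_enabled:
        return ("act", translation)
    # Below standard: ask for a corrective sample instead of acting.
    return ("prompt", f"Please confirm or re-demonstrate '{translation}'")

print(respond("clench_fist", 0.95))  # confident: act on it
print(respond("clench_fist", 0.40))  # uncertain: teach the Empath
```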

A device manufacturer would provide a biofeedback hardware device that streams in real time to a USB plug, along with a very small definition file (probably an XML file consisting of identification and translation variables… possibly specifying the tasks to be requested in initial teaching sessions, as well as other fields that could grow into a public standard interface protocol). From the user account, a list of supported devices, drawn from the available definition files, could be displayed.
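Since the post only lists the fields, the element names in this guess at a minimal definition file are invented; it just shows that identification, translation variables, and teaching tasks fit naturally into a small XML document the server can parse on first contact:

```python
# Hypothetical device definition file; element names are invented.
import xml.etree.ElementTree as ET

definition = """\
<device>
  <identification>emg-001</identification>
  <translationVariables>
    <variable name="sampleRateHz">1000</variable>
  </translationVariables>
  <teachingTasks>
    <task>clench fist</task>
    <task>relax</task>
  </teachingTasks>
</device>"""

root = ET.fromstring(definition)
device_id = root.findtext("identification")
tasks = [t.text for t in root.findall("./teachingTasks/task")]
print(device_id, tasks)
```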

This method would provide an interface between the human and the computer in which the two communicate through any type of USB biofeedback device. These devices would connect by USB cable to a network device, which in turn communicates with a server (probably via thin-client streaming software that takes the USB input and sets up the VPN, compression, and encryption).
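As a sketch of the thin-client transport stage, here is the compression half of that pipeline using zlib; the VPN and encryption layers are omitted, and the JSON framing is an assumption:

```python
# Thin-client transport sketch: compress a chunk of the raw stream
# before sending. Encryption/VPN are out of scope for this sketch.
import json
import zlib

samples = [0.88, 0.92, 0.90] * 100  # a chunk of the raw stream
payload = json.dumps(samples).encode()

compressed = zlib.compress(payload)          # what goes over the wire
restored = json.loads(zlib.decompress(compressed))  # server side

assert restored == samples
print(len(payload), "->", len(compressed), "bytes")
```

Highly repetitive sensor data like this compresses well, which matters when many devices stream raw output to one server.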

As many users teach their Empath to translate their biofeedback, aggregate information will be extracted to increase the rate of learning for each user. Correlations across very large populations will help to initiate the teaching process for new individuals; this way the Empath can use the rest of the population as a starting place from which to learn the new, individually optimized translation.
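One simple reading of that bootstrap is a weighted blend: start each new user at the population-wide prototype and shift toward their own samples as they arrive. The blend weight and all names are assumptions about the approach:

```python
# Sketch of bootstrapping a new user from population-wide correlations.
population_prototype = {"clench_fist": 0.85, "relax": 0.12}

def personalized(user_samples: dict, weight: float = 0.3) -> dict:
    """Start from the population prototype, shift toward the user's data."""
    out = {}
    for task, pop_value in population_prototype.items():
        user_value = user_samples.get(task, pop_value)
        out[task] = (1 - weight) * pop_value + weight * user_value
    return out

# A brand-new user who has demonstrated only one task so far:
print(personalized({"clench_fist": 0.95}))
```

Tasks the user has not yet demonstrated simply keep the population value, which is exactly the "starting place" role the post describes.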

Implications for computing: computers will learn to understand your gestures if you can stream your gestures into the USB port. Server-side “ASP” processing allows for very thin client applications and extensible device support. Publication of the interface standard enables mass-market public and commercial development of biofeedback devices with USB output.

Implications for humans: We turn the corner such that computers learn to understand what humans mean, rather than humans being forced to learn new (and highly limited) communications skills, like typing.
