Tag Archives: hardware

Private drones

It’s becoming easy; we’re on the cusp. Soon, you will be able to assemble or buy a private drone that can go anywhere, record video, deliver a payload, or retrieve an object. Land, sea, and sky. Automated surveillance systems are easy to control: can you click on a spot on a map? That’s pretty much all you need.

How do we respond as a society when anyone can spy or kill as easily as clicking an app? How do you trace an attack that is delivered from the sky from anyone anywhere? How do you handle alibis when you can schedule a crime with cron?
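To make the cron quip concrete: scheduling an automated flight is no harder than scheduling a backup. A hypothetical crontab entry (the script path and flags are invented for illustration) might look like this:

    # Hypothetical: launch a pre-programmed flight at 3:00 AM on June 1st,
    # while the owner is verifiably somewhere else.
    0 3 1 6 * /home/user/drone/fly_mission.sh --plan waypoints.json

Five fields (minute, hour, day of month, month, day of week) and a command; after that, no human needs to be anywhere near the machine.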

I don’t know the answer, but it’s a continuation of a long trend toward putting more lethal power into more hands. I hope some smart people are thinking well about this.

Somehow, I don’t think a drone shield is going to be enough.

$24.5 Million Linux Supercomputer

Pacific Northwest National Laboratory (US DOE) signed a $24.5 million contract with HP for a Linux supercomputer that will be one of the ten fastest computers in the world. Some cool features: 8.3 trillion floating-point operations per second, 1.8 terabytes of RAM, 170 terabytes of disk (including a 53 TB SAN), and 1,400 Intel McKinley and Madison processors. Nice quote: ‘Today’s announcement shows how HP has worked to help accelerate the shift from proprietary platforms to open architectures, which provide increased scalability, speed and functionality at a lower cost,’ said Rich DeMillo, vice president and chief technology officer at HP. Read the details of the announcement here or here. I think this is something we’re going to see a lot more of…

Centralized network computing will win

I know it’s a big debate right now, but centralized network computing will win in the end.

Centralized network computing is the term used to describe a system of networked web servers (or a single web server) that provides integrated applications and storage for multiple users who can access the system through distributed terminals. The system can be geographically distributed or not, but will share a common (integrated) network of applications, probably using a software interface standard to encourage and enable multiple independent application development teams.

Centralized networks are inevitable because of self-reinforcing competitive advantages. Economies of scale and market forces will lead to substantial change in the way we compute, and the systems we use now are simply the seeds that will grow into (and merge together to form) global centralized information service providers. There are already some very strong indicators that this trend is underway, and its potential suggests it will be a very long one.

  1. There are economies of scale in processing. Load balancing can optimize processor utilization, providing both faster interactivity and reduced hardware investment (see the sketch after this list).
  2. There are competitive advantages in information and application aggregation. Integration breaks down the walls between programs, improving functionality through shared data. You can analyze data in more dimensions and with more flexibility. Development rates improve as it becomes possible to separate more software components, and the people who work on them.
  3. Load balancing improves transmission. Transfer rates improve because fewer nodes are required and data traffic can be optimized. Information served from a server at your edge is faster and more reliable than information sent from another server through your edge.
  4. End-user transparency. The front-end possibilities under centralized computing are not limited beyond those of other systems, so there will be no selection bias away from centralized systems: end-users will not prefer, or even recognize, the difference between systems. That is not to say the systems will all be the same, only that they all could be; the opportunity set in one is available in the other.
  5. The outsourced storage industry exists. This implies a willingness to adopt on the part of the owners of data.
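To sketch the load-balancing point from item 1: a centralized provider can spread incoming requests across a pool of machines so no single server saturates while others sit idle. A minimal round-robin dispatcher in Python (the server names are invented for illustration):

    from itertools import cycle

    # Hypothetical pool of application servers run by one centralized provider.
    SERVERS = ["app-server-1", "app-server-2", "app-server-3"]

    # Round-robin: each request goes to the next server in the cycle, so load
    # spreads evenly and capacity is sized for aggregate, not per-machine, peaks.
    pool = cycle(SERVERS)

    def route(request_id: int) -> str:
        return f"request {request_id} -> {next(pool)}"

    for i in range(6):
        print(route(i))

Real balancers weight by measured load rather than rotating blindly, but the economics are the same: shared capacity is cheaper than per-user over-provisioning.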

You can see the markets already rewarding companies that are moving to take advantage of this trend. Many of these companies provide application services along with ISP connectivity, and they are capturing traffic. Those users invest time and thought in signaling their own preferences: personalizing page layout and content, and even adopting system-wide wallets and e-mail. Giving users what they prefer is a huge competitive advantage, and the time it takes to personalize a competing system is a high transaction cost, especially relative to the low cost of inertia.

Eventually, you will be using only a browser. All your computing will occur on a centralized system and be sent to your browser for translation into your interface. All of your applications will be centrally hosted, so your profile and applications – essentially everything you do on your computer – will be available to you from any machine, at any time.
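A toy sketch of that thin-client model, assuming a hypothetical provider that keeps every user’s profile server-side and sends only rendered pages to the browser (the user names and profile fields are invented):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical: all state lives on the server, keyed by user. The browser
    # is only a renderer, so any machine that reaches this server shows the
    # same profile and applications.
    PROFILES = {"alice": {"wallpaper": "blue", "apps": ["mail", "docs", "calendar"]}}

    class ThinClientHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            user = self.path.strip("/") or "alice"
            profile = PROFILES.get(user, {})
            body = f"<html><body><h1>{user}</h1><pre>{profile}</pre></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

    HTTPServer(("localhost", 8000), ThinClientHandler).serve_forever()

Log in from any terminal and the same profile follows you; nothing of consequence lives on the client.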

Multiple systems will compete for scale, reducing marginal costs and creating meaningful and irreproducible competitive advantages. This race will likely be won permanently by 2050. Before then, ASP services will consolidate rapidly along with economic cycles. The early players will rely on loss-leader models to attract user bases, then transition to profitability as their scale reaches the tipping point. The companies that reach profitability first and reinvest in their technology platforms will improve their integration, breadth, and quality to further support their competitive advantages.

In the first decade or two of this trend, there will probably be dozens of smaller companies able to enter and gain market share against their larger competitors. Their competitive advantages will most likely be based on data storage, traditional media integration, wireless adoption, software platform architecture, application-suite integration, and possibly international comparative advantage. After 20 years, the marginal advantages possible from these characteristics will no longer pose a meaningful threat to the aggregation and scale advantages of the top few market participants.

Consolidate or die will be the mantra of information companies.

The Changing Face of the Hardware Industry

Hardware is going to become increasingly smaller and more powerful. Nanotechnology, and the potential opened up by the ability to manipulate structures at the atomic level, will have at least two profound impacts on the hardware industry. First, the size and specificity will make interfaces and tools much more powerful and flexible. Second, the self-assembly of molecular components will nearly eliminate the cost of manufacturing.

Some of the technologies that may become commonplace:

  • Cleaners – scouring for impurities and removing them.
  • Remote sensors – devices that reside far from us and operate by receiving, interpreting, and transmitting information. This will expand our information-gathering capabilities to open new avenues of predictability and accountability (a sketch of the receive-interpret-transmit loop follows this list).
  • Sensory interfaces – devices that are near, touching, or even within us and operate by receiving, interpreting, and transmitting information. This will expand our senses to include all measurable information, interpreted as intuitively as we perceive oscillating pressure as sound.
    • Contact lenses that supplement vision with graphical displays.
    • Hearing aids that receive any combination of audio streams at various volumes.
    • Clothes that manipulate pressure, texture, and temperature.
    • Self-evaluation devices that read your physical characteristics and track health stats.
    • Personalizers that interpret your biochemical reactions to stimuli and control devices to optimize your state.
    • Storage for the massive volumes of information that are a daily part of life.
    • Controllers that directly manipulate your nervous system, biochemistry, and genetic makeup.
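As a toy sketch of the receive-interpret-transmit loop described under remote sensors above (the threshold and readings are invented; a real device would read actual hardware and transmit over a radio):

    import json
    import random
    import time

    ALERT_THRESHOLD_C = 30.0  # invented threshold, degrees Celsius

    def receive() -> float:
        # Stand-in for reading a physical sensor.
        return random.uniform(10.0, 40.0)

    def interpret(reading: float) -> dict:
        # Turn a raw reading into a structured, actionable report.
        return {"reading_c": round(reading, 1), "alert": reading > ALERT_THRESHOLD_C}

    def transmit(report: dict) -> None:
        # Stand-in for a network send.
        print(json.dumps(report))

    for _ in range(5):
        transmit(interpret(receive()))
        time.sleep(0.1)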