The Fundamental Problem with I/O: There is No Moore’s Law for Pins

Roger Isaac, Chief Technology Officer, Keyssa; Co-architect, VPIO

Ajay Bhatt, Former Chief I/O Architect, Intel; Co-architect, VPIO

It’s been 50 years since Gordon Moore published his prophetic observation, and as we all know, it has had a seminal impact on the development of solid-state devices. It has set the pace of processor development, and it has had a similar impact on the development of SSDs. But there is one key area of computing that has not seen the benefits of Moore’s Law.

The Lone Hold-out: I/O

I/O remains the last hold-out. It’s not as if there haven’t been significant advances in speed, protocols and connectors; there have. But the advances are evolutionary. Mechanical connectors remain mechanical constructs, where metal touches metal to create a connection. And as data rates increase, so too do the challenges of routing high-speed signals through traces, flex cables, and mechanical plugs/receptacles, not to mention the fragility of high-speed flex cables, and the exposure of mechanical connectors to everyday hazards such as water, dirt, lint, etc.

But Keyssa has solved that issue with the world’s first solid-state connector, but that’s another topic entirely (see “The Old Way is…Old”).

The mechanical connector, however, is only a symptom of the problem. We need to trace (I promise, no pun intended) our way back to the core of the problem, which resides in the I/O architecture of the processor.

The Laundry List of Issues with Processor and System I/O:

The primary challenge of I/O is scale, and the primary challenge of scale is the combination of legacy protocols and the number of pads and pins required to accommodate these protocols. Take the processor below, a Freescale i.MX ARM-based CPU, used as an example for no other reason than this diagram was readily accessible. The issue is the same for any multi-purpose processor – pin after pin after pin for external interfaces, in this case including:

  • Displays (parallel and serial interfaces, including LVDS, HDMI, MIPI/DSI)
  • Camera sensor interfaces, including parallel and MIPI-CSI
  • Expansion cards, including MMC/SD/SDIO
  • Storage, including SATA
  • Data ports, including USB, PCIe
  • Miscellaneous ports, including I2S, UART, eCSPI, SJC, GPIO, SPDIF, CAN, Ethernet

It’s a mess:

Freescale i.MX ARM-based CPU (sample case of I/O proliferation)

To Repeat: “There’s no Moore’s Law for Pins!”

Looking at the number of pins required in today’s processors, the problem statement is evident: Too many protocol connections limit system scalability and increase cost. In short:

Too many wires, protocols, mechanical connectors, PHYs, and signal integrity issues (ESD, EMI, RFI, etc.) WILL NOT SCALE with increasing processor and memory performance.


VPIO: The Democratization of Bits

Bits are bits…or are they? We all know that some bits flow one way, some the other, and some both. We know that some bits travel extremely fast, others much slower. And we know that some bits require feedback, and others don’t. But at their core, aren’t bits all the same?

For VPIO, the answer is a resounding yes.

VPIO (Virtual Pipe-I/O) was architected and developed with this in mind. VPIO is designed to take in bits, any bits, low-speed or high-speed, it doesn’t matter; aggregate them into one virtual channel; send them over a “virtual pipe” to their destination; and disaggregate them on the other side. In short, all those separate pads, pins, and PHYs required for specific protocols can be combined into one pipe and sent over one or more standard SerDes lanes to wherever their destination needs to be.
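The aggregate/disaggregate flow described above can be sketched in software. This is an illustrative model only, not the actual VPIO wire format: the channel IDs, the simple one-byte length field, and the function names are all assumptions made for the sake of the example.

```python
def aggregate(channels):
    """Merge per-protocol byte streams into one framed 'virtual pipe'.

    channels: dict mapping a (hypothetical) channel ID to a bytes payload.
    Each frame is [channel_id][length][payload], so the far side can route
    the bits back to the right protocol endpoint.
    """
    pipe = bytearray()
    for chan_id, payload in channels.items():
        pipe.append(chan_id)
        pipe.append(len(payload))  # 1-byte length field, for simplicity
        pipe.extend(payload)
    return bytes(pipe)

def disaggregate(pipe):
    """Split the framed stream back into per-channel payloads."""
    channels = {}
    i = 0
    while i < len(pipe):
        chan_id, length = pipe[i], pipe[i + 1]
        channels.setdefault(chan_id, bytearray()).extend(pipe[i + 2:i + 2 + length])
        i += 2 + length
    return {cid: bytes(buf) for cid, buf in channels.items()}

# One pipe carries what would otherwise need separate pins and PHYs:
streams = {0x01: b"display", 0x02: b"usb", 0x03: b"uart"}
assert disaggregate(aggregate(streams)) == streams
```

The point of the sketch is the framing: once every protocol’s bits carry a channel tag, one serialized link can stand in for many dedicated pin groups.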

VPIO At-a-Glance

VPIO doesn’t address every I/O issue; it answers the most critical one: how to scale I/O within a processor and system.


Why This…Why Now?

The proliferation of I/O is nothing new, but there are a few market drivers that are raising this issue to a critical level.

  1. Data rates: signals are getting faster and faster, and managing those signals both internally and externally is becoming increasingly challenging.
  2. Industrial Design: as devices get smaller, thinner, more elegant, designing with big bulky connectors becomes more and more difficult.
  3. Modular Computing: whether it’s two devices coming together to dock, or two devices coming together to form one, more powerful device, modular products require that many disparate signals be directed from one device to another, ideally through a single point of contact. Unfortunately, too often this looks like the picture below (which happens to be on the bottom of the notebook used to write this document):

Too many protocols and signals make this connector big, ugly and unreliable.


So Why Should I Care?

For anyone whose work involves managing I/O within a system, VPIO can make a huge impact.

For Processor/SoC Architects and Designers:

  • Minimize the number of PHYs, pins and pads
  • Lower overall processor power requirements
  • Easily scalable to new protocols
  • Synthesizable RTL

For System Architects:

  • OS Transparency
  • Simplified routing and design
  • Minimize the number and complexity of flex connectors
  • Alleviate signal integrity issues (ESD, EMI, RFI, etc.)

For Product Designers:

  • Sleek, thinner, more elegant industrial design
  • Eliminates/minimizes impact of mechanical connector on ID

For Test:

  • Unified test infrastructure across a wide range of devices and production lines
  • Unified Built-in System Test


A Few Answers to a Few Questions You May be Asking:

Q. Is VPIO intended to replace all pins all the time?

A. The short answer is no. There will always be a need for specific protocols to travel to their specific destinations; VPIO is not needed in these situations and includes a bypass mode to accommodate these use cases. But even in these cases, for certain protocols that include both high-speed and low-speed signals (think DisplayPort with its bi-directional low-speed AUX channel), VPIO can aggregate the two and send/receive over one lane – providing added efficiencies.
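The DisplayPort case mentioned above, a high-speed main link sharing a lane with a low-speed bidirectional AUX channel, can be modeled with a simple interleaver. This is a hypothetical sketch: the channel IDs, frame granularity, and scheduling policy are illustrative assumptions, not VPIO’s actual arbitration scheme.

```python
# Hypothetical channel IDs for the two traffic classes.
MAIN, AUX = 0x10, 0x11

def interleave(main_chunks, aux_msgs):
    """Merge a high-speed stream and a low-speed sideband onto one lane.

    Round-robins the two sources into one ordered frame list, letting the
    occasional low-speed AUX message ride between main-link chunks so the
    fast stream keeps its ordering and the sideband still gets through.
    """
    frames = []
    aux = list(aux_msgs)
    for chunk in main_chunks:
        frames.append((MAIN, chunk))
        if aux:  # slip one pending AUX frame in after each main chunk
            frames.append((AUX, aux.pop(0)))
    frames.extend((AUX, m) for m in aux)  # drain any leftover AUX traffic
    return frames

frames = interleave([b"pix0", b"pix1", b"pix2"], [b"link-train"])
# Main-link order is preserved, and the AUX message shares the same lane:
assert [p for cid, p in frames if cid == MAIN] == [b"pix0", b"pix1", b"pix2"]
assert (AUX, b"link-train") in frames
```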

Q. Doesn’t this require VPIO to be integrated into existing processors?

A. The most effective way to take advantage of VPIO is when VPIO is integrated, but integration isn’t necessary. There are VPIO-based ASICs currently being developed that take advantage of the technology for specific use cases.


The Bottom Line:

They say Moore’s Law is coming to an end. Even so, 50+ years of Moore’s Law has seen solid-state devices shrink and become faster, far outpacing equivalent developments in I/O. If our computing devices are to continue to scale, then we need to address the weak leg of the three-legged stool we call computing devices. Device-to-device connectivity is in sore need of rearchitecting. And so, we give you VPIO.

To Find Out More:

Visit our website.

About the Authors:

Roger Isaac, Chief Technology Officer, Keyssa; Co-architect, VPIO

Roger has served as the chairman of the low-power memory committee at JEDEC, and has led a number of technical task groups including LPDDR2, LPDDR4, and UFS. He also served on the organization’s Board of Directors. Roger has led architecture, design, marketing, and intellectual property teams at AMD, Spansion, and Silicon Image.

Ajay Bhatt, Former Chief I/O Architect, Intel; Co-architect, VPIO

Ajay is best known as the co-inventor of USB. During his tenure at Intel, where he served as Chief I/O Architect, he contributed to a range of platform I/O technologies, including AGP and PCI Express.
