Briefing Note: Will Micron’s Automata Processor Solve IT’s Big Data Performance Problem?

Big Data is becoming a big problem for IT as companies rush to embrace the holy grail of analytics and find their traditional compute infrastructures aren’t up to the task. Storage performance has historically been the limiting factor, even at the processor level, where memory bandwidth is commonly blamed for failing to keep up with advancing CPU technologies. But according to memory manufacturer Micron, the real problem is an inefficient processing architecture, especially when it comes to the random, unstructured data sets common in Big Data applications. This is the impetus behind its development of the Automata processor, a new technology approach that may solve IT’s Big Data performance challenge.

Micron’s Automata is designed to process large volumes of unstructured data, like those typically found in Big Data applications. Its highly parallel design excels at pattern matching and the kinds of comparison-heavy work done in these environments. Micron has been developing this technology for the past eight years and plans to release products this year.

CPU Architecture

Traditional computer processors use what’s called a “von Neumann architecture”, a structure described by physicist and mathematician John von Neumann in 1945. Composed of several sub-systems – a controller, an arithmetic logic unit and a memory unit – its highly serialized design requires that complex problems be broken into extremely simple instruction steps. All this generates a lot of data movement, as information is funneled into and out of the processing engine for each of these steps.

Von Neumann Bottleneck

Computers handle this movement with data buses, currently 64-bit-wide channels that move ‘words’ of data between the CPU (the control and arithmetic logic units) and memory during the decision-making process. It’s this bus architecture that creates the “von Neumann bottleneck”, a phenomenon whereby the speed of the CPU is significantly restricted by the system’s ability to transfer data between memory and the processing unit. This is the challenge Micron’s Automata processor was designed to address.
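
To put rough numbers on this, here’s a minimal sketch in C (our own illustration, not Micron’s code; the count_matches function is hypothetical). Even a trivial scan is dominated by data movement: every 64-bit word must cross the bus before the CPU can examine it, so one pass over 1 GB of data means roughly 134 million bus transfers.

```c
#include <stdint.h>
#include <stddef.h>

/* Word-at-a-time scan: the processor can only examine data after it
 * has been moved across the 64-bit memory bus, one word per element. */
uint64_t count_matches(const uint64_t *data, size_t n_words, uint64_t target)
{
    uint64_t hits = 0;
    for (size_t i = 0; i < n_words; i++) {
        /* Each iteration pulls another 64-bit word from memory to the CPU. */
        if (data[i] == target)
            hits++;
    }
    return hits;
}
```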

Automata Design

The inspiration for this approach has its roots in conventional memory chip design, which is extremely parallel. Internal to the device, each memory read operation accesses an entire row of data, typically a ‘page’ in length, returning thousands of bits of information. Unfortunately, the bus architecture described above can only handle data in 64-bit increments, so the information fetched by that single access operation must be shifted out in word-sized chunks – an 8 Kbit row, for example, requires 128 separate 64-bit transfers; hence, the von Neumann bottleneck.

How Automata Works

Instead of processing data serially, in word increments as a traditional CPU architecture does, an Automata chip comprises thousands of simple processing elements, each capable of analyzing the input stream and making independent decisions about what actions to take next. Micron has designed a PCIe board with an FPGA controller surrounded by multiple Automata chips. This board provides over 1.5 million processing elements that work together to deliver an aggregate capacity of over 200 trillion match and route decisions per second.
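
Conceptually, each processing element behaves like a tiny state machine watching the same input stream. The C sketch below is a simplified software model of that idea, not Micron’s actual programming interface (the element_t structure and the sample patterns are our own): the input is consumed exactly once, and every element advances its own match state on each byte. In silicon those per-element checks happen in parallel; the loop here only models the logic.

```c
#include <stdio.h>
#include <stddef.h>

#define NUM_ELEMENTS 3

typedef struct {
    const char *pattern;  /* byte sequence this element watches for */
    size_t      progress; /* how much of the pattern has matched so far */
} element_t;

/* Advance every processing element by one input byte. */
static void step_all(element_t *pe, size_t n, char input, size_t pos)
{
    for (size_t i = 0; i < n; i++) {
        if (input == pe[i].pattern[pe[i].progress]) {
            if (pe[i].pattern[++pe[i].progress] == '\0') {
                printf("element %zu matched \"%s\" ending at byte %zu\n",
                       i, pe[i].pattern, pos);
                pe[i].progress = 0;  /* re-arm for the next occurrence */
            }
        } else {
            /* Simplified restart; overlapping prefixes aren't tracked. */
            pe[i].progress = (input == pe[i].pattern[0]) ? 1 : 0;
        }
    }
}

int main(void)
{
    element_t pe[NUM_ELEMENTS] = { {"scan", 0}, {"threat", 0}, {"gene", 0} };
    const char *stream = "the scanner found a threat in the gene data";

    /* The input stream is consumed once, one byte per step, and every
     * element inspects that byte "simultaneously". */
    for (size_t pos = 0; stream[pos] != '\0'; pos++)
        step_all(pe, NUM_ELEMENTS, stream[pos], pos);
    return 0;
}
```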

The Automata processor won’t run an operating system; instead, the board functions much like a graphics processing unit (GPU), acting as an accelerator to a traditional CPU. The PCIe interface allows the card to work within traditional computer architectures.

To leverage this highly parallel design, compute problems must be structured very differently than they would be with traditional programming techniques, which rely on complex, serialized instructions. This makes Automata better suited to different classes of problems than traditional processors are.

Where Automata is Used

The Automata processor excels in applications that involve large numbers of relatively simple operations, like comparisons and pattern matching. Searching for a specific feature in a large amount of unstructured data is inefficient for a traditional CPU architecture, which typically loads the pattern it’s looking for, compares it against each candidate data point in the data set, then loads the next pattern and repeats the process. This is a job that’s ideal for Automata, given its ability to store thousands of individual patterns or ‘target’ data objects and run comparisons against an input stream for every target simultaneously.
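
For contrast, here is the serial approach just described, sketched in C (the function name and loop structure are ours, purely illustrative). With P patterns and N bytes of data, the CPU sweeps the full data set once per pattern, so the same bytes cross the memory bus P times; an Automata-style design streams the data once and evaluates every pattern on each byte.

```c
#include <string.h>

/* Pattern-at-a-time search: load one target, scan everything, repeat.
 * Cost grows as O(P * N), and every pass re-reads the data from memory.
 * Patterns are assumed to be non-empty strings. */
size_t count_all_matches(const char *data, const char **patterns, size_t n_patterns)
{
    size_t total = 0;
    for (size_t p = 0; p < n_patterns; p++) {
        /* One complete sweep of the data set per pattern. */
        for (const char *s = data; (s = strstr(s, patterns[p])) != NULL; s++)
            total++;
    }
    return total;
}
```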

These kinds of operations abound in use cases such as network security, where thousands of specific threat profiles can be compared simultaneously against the data entering a network environment. Another use case is video analytics, where images can be scanned for specific patterns (people, objects, writing, etc.), producing real-time results. Bioinformatics applications, where large, complex strings of genomic information need to be analyzed, also benefit from Automata’s highly parallel processing. Finally, Big Data analytics applications are well suited too, especially problems involving large numbers of data points, such as those generated by real-time sensor data or “internet of things” applications.

Automata Ecosystem

Micron has created an ecosystem to support and develop the Automata technology, including a software development kit, workbench tool kits for testing designs and a developer portal where resources are made available and the community can share ideas. In addition, the University of Virginia has partnered with Micron to establish the Center for Automata Processing, where developers can run designs and work together to create compelling applications that exploit the capabilities of the Automata Processor.

Why Micron?

As a memory manufacturer, Micron understands the compute performance problems that have been attributed to storage and memory designs. Its development efforts in faster memory technologies, like the Hybrid Memory Cube, are a response to this market need. But as an innovator, the company also looked at the problem from the processing side, suspecting that a computing architecture first proposed 70 years ago might be due for an update. The inspiration also came from Micron’s deep knowledge of memory design and how that could be used as the platform for a highly parallel processing architecture.

StorageSwiss Take

As a new processing technology, Automata is certainly interesting, but also timely, given the advent of Big Data analysis and the Internet of Things. Projects like these generate the large, random, unstructured data sets that are pushing the limits of traditional compute architectures, and they have given rise to distributed processing technologies like Hadoop that enable a more parallel approach to tackling very large analytics problems.

Automata could be seen as the same approach at the processor level, creating an alternative to traditional serialized CPU architectures that’s ideal for handling the explosion of image-based processing and the challenges posed by the kinds of data sets common in Big Data applications. The fact that it’s coming first from a memory company, and not a processor company, is a testament to Micron’s innovation.


Eric is an Analyst with Storage Switzerland and has over 25 years of experience in high-technology industries. He’s held technical, management and marketing positions in the computer storage, instrumentation, digital imaging and test equipment fields. He has spent the past 15 years in the data storage field, with storage hardware manufacturers and as a national storage integrator, designing and implementing open systems storage solutions for companies in the Western United States. Eric earned degrees in electrical/computer engineering from the University of Colorado and marketing from California State University, Humboldt. He and his wife live in Colorado and have twins in college.

One comment on “Briefing Note: Will Micron’s Automata Processor Solve IT’s Big Data Performance Problem?”
  1. Sean says:

    It looks good. Micron is doing a terrible job presenting it to the public. It seems low-power and suitable for mobile phones, etc. How useful it is depends on how creative you can get with it. Memory-based computing is very energy efficient, especially if you can avoid the CPU’s L1 and L2 caches. For example, you could replace a 1,000-machine-instruction function with one memory lookup. So that’s roughly 1,000 times faster and uses about 1,000 times less energy.
    I think programming evolved from very memory-constrained systems. In 1979 a Z80 CPU cost $35, quite cheap. 64 KBytes of RAM for it cost $625! I think it is entrenched that you don’t “waste” memory.

