
Big data, big decisions

You’ve undoubtedly heard a lot about big data. Gartner announced last August that the technology had passed the peak of inflated expectations on the market-research firm’s hype cycle. According to the hype cycle model, big data is descending into the trough of disillusionment, from which it will emerge, climbing the slope of enlightenment on its way to the plateau of productivity.

Gartner pointed out that passing the hype peak is not a bad thing—it signals that a technology is headed toward maturity. And from my perspective, it’s difficult to believe that big data’s “trough of disillusionment” will be very deep. In fact, it’s hard to believe it exists at all. In February, the U.S. government made a high-profile appointment of DJ Patil as deputy chief technology officer for data policy and chief data scientist. At the time of the appointment, U.S. chief technology officer Megan Smith said, “Government data has supported a transformation in the way we live today for the better.”

In February, technology advisory services firm Lux Research announced it had acquired Energy Points, a provider of data analytics for energy and resources, along with all its data scientists.

In addition, last fall MIT offered a well-attended online course titled “Tackling the Challenges of Big Data” and repeated it this winter. And the university will conduct an on-campus short program titled “Machine Learning for Big Data and Text Processing” June 8-12.

In an EE-Evaluation Engineering web exclusive article posted in February, Michael Schuldenfrei, chief technology officer of Optimal+, touts big data’s applicability to semiconductor test. He also provides a succinct definition of big data: “… an all-encompassing term for any collection of data sets so large or complex that it becomes difficult to process using traditional data processing applications.” And National Instruments, which has long championed the concept of “Big Analog Data,” devotes a section of its NI Automated Test Outlook 2015, released earlier this year, to the topic.

Perhaps everyone but NI, MIT, Lux, Optimal+, and the U.S. government is disillusioned with big data; if not, it’s probably best to be prepared to climb the slope of enlightenment sooner rather than later. IT departments will have their work cut out for them as they decide what to do about their 1980s-vintage SQL-oriented disk-based relational databases. At the least, they will probably face a migration from row stores to column stores, which are much faster for many analytical operations. They might also move from relational to key-value or key-document data models and to NoSQL or NewSQL query languages. And as RAM prices drop, main-memory databases become feasible for some applications.
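To make the row-store/column-store and key-value distinctions concrete, here is a minimal Python sketch. It is not tied to any particular database product, and the field names and test values are hypothetical; it simply shows why a columnar layout touches less data when aggregating one measurement, and how a key-value model relaxes the fixed relational schema.

```python
# Row store: each record is kept together, the way a traditional
# relational database lays rows out on disk.
row_store = [
    {"device_id": 1, "site": 0, "vdd": 1.8, "leakage_na": 12.4},
    {"device_id": 2, "site": 1, "vdd": 1.8, "leakage_na": 15.1},
    {"device_id": 3, "site": 0, "vdd": 1.8, "leakage_na": 11.9},
]

# Column store: each field is kept contiguously, so an aggregate over one
# measurement scans only that column instead of every whole record.
column_store = {
    "device_id":  [1, 2, 3],
    "site":       [0, 1, 0],
    "vdd":        [1.8, 1.8, 1.8],
    "leakage_na": [12.4, 15.1, 11.9],
}

# Row-oriented aggregation visits every field of every record.
avg_leakage_rows = sum(r["leakage_na"] for r in row_store) / len(row_store)

# Column-oriented aggregation reads a single contiguous column.
leakage = column_store["leakage_na"]
avg_leakage_cols = sum(leakage) / len(leakage)

# A key-value (or key-document) model drops the fixed schema entirely:
# each key maps to an arbitrary document, as NoSQL stores generalize.
key_value_store = {
    "device:1": {"site": 0, "vdd": 1.8, "leakage_na": 12.4, "notes": "retest"},
}

print(avg_leakage_rows, avg_leakage_cols, key_value_store["device:1"]["notes"])
```

The same average comes out of both layouts, of course; the difference that matters at scale is how much data must be read to compute it.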

Engineers will have a role to play as well. As NI’s ATO 2015 puts it, “At present, the sheer amount of data being generated by engineering departments is causing a chasm between IT and engineering. Unless these groups work together to develop tools and methods to better use the data, this chasm will grow deeper.”

Of course, cloud computing offers engineers a way around the IT department. If you have a credit card and expense account, you can buy software, a platform, or infrastructure as a service. But whether your servers and memory reside in-house or in the cloud, it’s best to involve IT because of the value test data can provide beyond the engineering department.

Hinting at the potential for disillusionment, NI’s ATO cautions against unreasonable expectations and against underestimating the effort involved in establishing a corporate-wide test-data analytics solution. The goal, the ATO says, is to prioritize a long-term vision that lets the IT department plan accordingly.

Rick Nelson
Executive Editor
Visit my blog
