Remember when IBM’s Watson supercomputer beat out two of the best human competitors ever to appear on Jeopardy? To accomplish this feat, Big Blue tapped into a subset of artificial intelligence called machine learning — in effect, it’s a way for computers to learn from examples and independently create meaningful relationships based on available data. Now, this kind of learning is making inroads as companies start to leverage the power of computer cognition.


Ground-Floor Thinking

According to a recent VentureBeat article, search giant Google has now incorporated machine learning algorithms into its Google Sheets spreadsheet service. When activated, the feature makes educated guesses about what should go into blank cells based on what's already present. IBM, meanwhile, is rolling out its Watson Developer Cloud, giving companies access to app-building services such as machine translation, data visualization and language relationship extraction.
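To make the idea concrete, here is a toy sketch of "guessing blank cells from what's already present." This is not Google's actual algorithm; it simply fits a linear trend to the known values in a column and suggests values for the blanks. The function name and data are illustrative assumptions.

```python
# Hypothetical illustration (not Google's actual method): fit a simple
# linear trend y = a*x + b to the known cells of a column, then use it
# to suggest values for the blank (None) cells.

def suggest_fills(column):
    """column: list of numbers, with None marking blank cells.
    Returns a copy of the column with blanks filled from the trend."""
    known = [(i, v) for i, v in enumerate(column) if v is not None]
    n = len(known)
    # Ordinary least squares over the known (row index, value) pairs.
    sx = sum(i for i, _ in known)
    sy = sum(v for _, v in known)
    sxx = sum(i * i for i, _ in known)
    sxy = sum(i * v for i, v in known)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return [v if v is not None else a * i + b
            for i, v in enumerate(column)]

# A column of monthly figures with two blanks:
print(suggest_fills([10.0, 20.0, None, 40.0, None]))
# → [10.0, 20.0, 30.0, 40.0, 50.0]
```

Real spreadsheet prediction draws on far richer context (neighboring columns, data types, historical usage), but the core pattern is the same: learn from the examples you have, then extrapolate to the gaps.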

All of this sounds impressive, but it amounts to ground-floor thinking: ways to grab the very edge of machine learning and apply it at a consumer level. It's interesting, it's novel, but ultimately shallow for companies facing the challenges of big data, bring-your-own-device (BYOD) and a growing number of interdependent IT processes that operate outside the purview of local IT departments.

Knowledge Gap

Current-generation IT operations software excels at discovering "what" but often fails to account for "why." This creates a disconnect between what end users (workers) experience and what's actually happening on the network, and the result is IT security and governance methodologies that are necessarily reactive rather than proactive. Workers grow frustrated when systems fail to perform as desired, while IT professionals struggle to understand what's causing these issues.

In part, the problem stems from the sheer number of IT endpoints now required by small and midsized companies, to say nothing of enterprise-level networks. Manually tracking the relationships between perimeter and internal systems, and updating them as new endpoints are added, is a full-time job even for experienced IT staff. And beyond company-controlled infrastructure lies the cloud, which represents a massive pool of potential services but offers few ways to track how deep end users dive. Take it a step further and you're into the territory of big data: companies need to collect every bit of information they produce, but what good is data in isolation?

The Next Step

At Nexthink, we see machine learning differently. Our self-learning approach doesn't focus on app development or spreadsheet auto-fill, but on discovering meaningful connections in your IT infrastructure. Patterns are built by analyzing endpoints minute by minute, giving you total system visibility: the "why" necessary to optimize your IT from the ground up.
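A minimal sketch of what "learning a pattern from minute-by-minute endpoint data" can mean in its simplest form. This is not Nexthink's actual engine; the metric, function name and threshold are illustrative assumptions. It learns a baseline (mean and spread) from a series of per-minute readings and flags minutes that deviate sharply from that learned pattern.

```python
from statistics import mean, stdev

# Toy sketch (illustrative only, not Nexthink's analytics): learn a
# baseline from per-minute endpoint readings, then flag minutes whose
# z-score (distance from the mean in standard deviations) is large.

def flag_anomalies(readings, threshold=2.0):
    """readings: per-minute metric values from one endpoint
    (e.g. outbound connections per minute).
    Returns the indices of readings that deviate from the baseline
    by more than `threshold` standard deviations."""
    mu = mean(readings)
    sigma = stdev(readings) or 1.0  # guard against a flat series
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Seven quiet minutes and one sudden spike:
print(flag_anomalies([5, 6, 5, 7, 6, 5, 90, 6]))
# → [6]  (the spike at minute 6 stands out from the learned baseline)
```

A production system would track many metrics per endpoint and correlate them across machines to surface the relationships behind an anomaly, but the principle is the same: the baseline is learned from the data itself, not configured by hand.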

In its most basic form, machine learning is able to compete with human intellect and draw relational conclusions from massive data sets at incredible speed. For consumers, this means smarter apps and better spreadsheets. For companies, this kind of learning represents the augmentation of intellectual reach and lays the foundation for IT governance based on fundamental network relationships.