Since their invention, computers have excelled at tasks the human brain can’t manage.
That’s why computers are best at handling massive sets of data and sorting them into broad categories. When it comes to picking apart minute details in small sets of data, however, the human brain is still the more adept tool.
That could soon change, though, thanks to a new machine-learning algorithm that recently came out of MIT.
According to a December 8 Popular Science article, the MIT Model lets computers group data points together based on their similarity to one another. For each category of data, the algorithm then builds a “prototype”: a representative example that captures the features its members share.
MIT’s press release explains how the model works using a typical voter population in an election as an example.
“A plurality of the voters might be registered as Democrats, but a plurality of Republicans may have voted in the last primary,” MIT’s press release explains. “The conventional algorithm might then describe the typical voter as a registered Democrat who voted in the last Republican primary. The prototype constraint makes that kind of result very unlikely, since no single voter would match its characterization.”
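To make that concrete, here is a minimal, hypothetical Python sketch of the voter example. The toy data and the specific logic (a per-feature majority summary versus a best-matching real voter) are invented for illustration and are not MIT’s actual method:

```python
from collections import Counter

# Toy voter data, invented for illustration. Each voter is a pair:
# (party_registration, voted_in_last_republican_primary).
voters = [
    ("D", False), ("D", False), ("D", False),  # plurality: registered Democrats
    ("R", True), ("R", True),                  # Republicans who voted in the primary
    ("I", True), ("I", True),                  # independents who crossed over
]

# Conventional summary: take the most common value of each feature
# independently. Here that yields ("D", True), a registered Democrat who
# voted in the Republican primary, even though no such voter exists.
conventional = tuple(
    Counter(v[i] for v in voters).most_common(1)[0][0] for i in range(2)
)
print("conventional summary:", conventional)  # ("D", True)

# Prototype constraint: the summary must be an actual data point. One simple
# way to honor it is to pick the real voter who agrees with the per-feature
# summary on the most features (ties broken by list order).
prototype = max(voters, key=lambda v: sum(a == b for a, b in zip(v, conventional)))
print("prototype:", prototype)  # a real voter, here ("D", False)
```

Because the prototype is drawn from the data itself, its description is guaranteed to fit at least one actual voter, which is exactly the behavior the press release describes.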
The MIT Model could also help computers independently ward off spam and viruses, which affect about nine out of every 1,000 computers, by categorizing abnormal pieces of data and picking them out from the larger picture of data on a computer’s system.
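The article doesn’t spell out how that filtering would work, but the general idea of flagging data that sits far from every learned prototype can be sketched as follows. This is a hypothetical illustration; the features, prototypes, and threshold are all invented:

```python
import math

# One representative real example per learned category, as 2-D feature vectors.
prototypes = [(0.0, 0.0), (5.0, 5.0)]
threshold = 2.0  # maximum allowed distance to the nearest prototype

def is_abnormal(point):
    # A point far from every prototype fits no known category,
    # making it a candidate for spam-like or anomalous data.
    nearest = min(math.dist(point, p) for p in prototypes)
    return nearest > threshold

print(is_abnormal((0.5, 0.2)))  # False: close to a known prototype
print(is_abnormal((9.0, 1.0)))  # True: far from every prototype
```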
And, believe it or not, the MIT Model has already shown it can help computers interpret data more the way we do. Under the traditional topic-model algorithm, computers asked to find cooking recipes brought back random lists of ingredients, while computers running the MIT Model were more likely to deliver results that actually resembled recipes, according to Popular Science.
The MIT Model isn’t a perfect algorithm just yet, though, so it might still be some time before our computers can interpret data on a small scale as well as we can.