The world certainly remembers when the first Android phone came out. Oh, what sweet glory. We all reveled in the fact that there was a phone that could do a lot more than just make calls and surf the web. Apart from the myriad apps we were blessed with, the geniuses behind the platform also gave us voice command. That idea was brilliant. You could actually “tell” your phone to open a message and read it out for you. Now that was badass. Unfortunately, there is always a flaw in the early stages of a brilliant idea. What was the flaw with voice command back in the day? Simple, and pretty frustrating: you could not actually use voice command to command your phone. I know, right…
Back then, voice command could give you weird replies. Say “open messages” and the action you got might be the browser opening instead. How much did that suck? That problem, however, is now a thing of the past thanks to Google’s Jelly Bean, which uses neural networks to sidestep it, a mathematically cognitive system that gives you the action you actually expect.
How does a neural network work? The full answer is far too complex to cover here, but it would be unwise not to explain the basics.
The idea of the artificial neural network was derived from actual neural networks in the human brain: groups of chemically connected, functionally associated neurons. These neurons work together within the nervous system to produce our physiological abilities. That idea was taken from the confines of the brain (literally) and integrated into the Android system to give you Jelly Bean. Here’s the kicker to this entire thing: it sounds like something out of a biology class, but then you have the likes of Donald Hebb, who in the 1940s saw fit to take a widely doubted theory from the late 19th century and turn it into Hebbian learning (Wikipedia).
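Hebb’s rule is often summarized as “neurons that fire together wire together”: the connection between two neurons strengthens when they activate at the same time. Here is a minimal sketch in plain Python, with a made-up learning rate and toy inputs purely for illustration (this is the textbook rule, not anything from Google’s actual system):

```python
# Hebbian learning: strengthen each weight whenever its input and the
# neuron's output are active together (delta_w = rate * input * output).

def hebbian_update(weights, inputs, rate=0.1):
    # Output of a single linear neuron: weighted sum of its inputs.
    output = sum(w * x for w, x in zip(weights, inputs))
    # Each weight grows in proportion to (its input) * (the output).
    return [w + rate * x * output for w, x in zip(weights, inputs)]

weights = [0.5, 0.5]
pattern = [1.0, 1.0]  # two inputs that repeatedly "fire together"
for _ in range(5):
    weights = hebbian_update(weights, pattern)

print(weights)  # both connections have strengthened together
```

Repeat the same pattern enough times and the weights keep growing, which is exactly the “wiring together” Hebb described (real systems add a normalization step so weights don’t grow without bound).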
The pioneer of the idea at Google is Vincent Vanhoucke, who explains that when you give a command to your smartphone, the audio is sent to dozens of Google servers worldwide so they can try to make heads or tails of what you are saying. The neural networks on those servers were built by Vanhoucke and his team.
Do not think that Google was the first to make neural networks possible in personal systems; the whole notion was under intense scrutiny and research back in the 1980s. Now, with Google, IBM, and Microsoft all diving deep into the idea, you should start preparing yourself for a more integrated smartphone that might actually be able to talk to you. Yes, all those directors were in reality giving us premonitions and not just shit movies…
You need to understand the complex simplicity behind the neural network. Last year, Google researchers used the technology to build a system that taught itself to identify cats in YouTube videos. So far the whole idea sounds pretty simple, but here is where it gets complex. Imagine teaching your computer to perform basic arithmetic, something as light as BODMAS (the order of operations). How difficult can that be? Now imagine teaching it to appreciate Euler’s identity, e^(iπ) + 1 = 0. It is arguably the most beautiful equation in mathematics, but teaching a PC to grasp that would be seemingly impossible. Recognizing a cat in a video is much closer to the second kind of problem than the first.
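To make the “easy” end of that spectrum concrete, here is the simplest kind of teaching a neural network can do: a single artificial neuron (a perceptron) learning the logical AND rule from examples. This toy is my own illustration and has nothing to do with Google’s cat-finding system, which used vastly larger networks:

```python
# A single perceptron "taught" the logical AND rule from labeled
# examples: adjust the weights a little whenever the guess is wrong.

def train_perceptron(samples, epochs=20, rate=0.2):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out  # -1, 0, or +1
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            bias += rate * err
    return w, bias

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

A handful of weight nudges and the neuron has “learned” AND. Scaling that same idea up to millions of neurons and unlabeled video frames is what made the cat experiment hard, and impressive.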
The guys at the three leading corporations (Microsoft, IBM, Google) are using this technology to do a lot more than find cats on YouTube. The first brilliant application was Jelly Bean, and as the future arrives (you know, a nanosecond from now and beyond) something even more brilliant will be invented. Who knows? Maybe tomorrow images won’t be just pixels to computers. Here’s to hoping.