Do you keep hearing the phrase “AI” and wondering what it’s all about? If you need a crash course in AI (or machine learning, as it’s often called), we’ve got you covered! So then… what do the self-driving Google car, Netflix recommendations, and handwriting recognition software all have in common? They signal the eventual takeover of humans by robots (we kid!). They all use AI, or machine learning, which allows computers to identify patterns in data without being explicitly programmed for specific tasks. The idea is that computers can learn from data: the more data they are exposed to, the better they can adapt independently.
There has been a long-standing quest to infuse intelligence into computers and machines in order to get them to work for us. Japan is currently planning a robotics revolution, envisioning a world where artificial intelligence is integrated into everyday life, helping with things such as carrying bags in airports, caring for the elderly, and ferrying us around in robot taxis. While some of us in the West may be seeing visions of Westworld or the Terminator, Japan (like many other countries) is embracing the integration of robots into society.
In more and more narrow tasks, machine intelligence is matching or surpassing human performance. Machines can assist humans in a variety of ways, partly because they can be far more accurate than we are. Machine vision can outperform human vision, and self-driving cars are safer than human-driven cars (measured by accidents per mile), because a robot vision system is much more consistent and predictable than human reaction. Machines are also not ruled by emotion, so they always act according to their programming. Of course, this can have ethical and moral implications (should a self-driving car hit a child crossing the road if it can’t swerve in time?). However, it seems we are increasingly able to outsource some of our tasks and brain power to robots and machines.
Now perhaps machines will be able to write our stories as well. There has already been a lot of experimentation in generating text from images. Using something called an “attention network,” a computer can be shown a picture and begin to tell a story about it. The attention network looks at different areas of a scene and describes what it sees. By pointing out different aspects of various photos and stringing the descriptions together, it is possible to create a story. What is still missing from machine learning and AI in storytelling, however, is context. Context is something that is extremely difficult to replicate or represent in computers. For now, it seems to be a uniquely human trait.
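To make the “looking at different areas of a scene” idea concrete, here is a minimal sketch of how attention weights work. Everything here is invented for illustration (the region names, feature vectors, and query are made-up numbers, not output from a real model): given a feature vector per region of a picture and a query for what the model is currently describing, a softmax over the query-region scores decides where the model “looks.”

```python
import math

# Toy feature vectors for three regions of an imaginary photo.
# In a real attention network these would come from a vision model.
regions = {
    "dog":   [0.9, 0.1, 0.0],
    "ball":  [0.2, 0.8, 0.1],
    "grass": [0.1, 0.1, 0.9],
}
query = [1.0, 0.2, 0.0]  # hypothetical "animal-like" query vector

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Score each region against the query, then softmax the scores
# into attention weights that sum to 1.
scores = {name: dot(query, feat) for name, feat in regions.items()}
max_s = max(scores.values())
exps = {name: math.exp(s - max_s) for name, s in scores.items()}
total = sum(exps.values())
weights = {name: e / total for name, e in exps.items()}

# The region with the highest weight is where the model "looks" next.
print(max(weights, key=weights.get))  # → dog
```

The model would describe the highest-weighted region, then shift the query and attend somewhere else, building up a description region by region.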
One popular method in AI is “word embedding,” which allows the machine to identify a word by the company it keeps. If the machine can recognize a word by the other words around it, it can figure out the word through association. The algorithm looks at a word and tries to predict, say, the five words that usually come before and after it. What’s cool about this is that the algorithm continues to learn with every word it correctly predicts, so it trains the machine to learn dynamic lexicons, and ultimately human language. This is some of the machine learning and AI that we use here at DEEP. When focusing on sports, for example, we may want to collect data on injuries.
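A toy illustration of “a word by the company it keeps”: instead of a trained neural model, this sketch builds simple co-occurrence vectors from a made-up three-sentence corpus and compares words by cosine similarity. The corpus, window size, and words are all invented for the example; the point is only that words appearing in similar contexts end up with similar vectors.

```python
from collections import Counter, defaultdict
import math

# Made-up corpus: each word will be represented by its neighbors.
corpus = ("the ankle sprain kept him out for weeks "
          "the knee sprain kept her out for weeks "
          "the concussion kept him out for months").split()

WINDOW = 2  # words of context on each side (an arbitrary choice)

# Co-occurrence vectors: word -> counts of words seen nearby.
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

# "sprain" and "concussion" keep similar company (kept, him, the...),
# so they score high even though they never appear next to each other.
print(cosine(vectors["sprain"], vectors["concussion"]))
```

Real word-embedding models (word2vec and friends) learn dense vectors by prediction rather than raw counts, but the intuition is the same: similar contexts produce similar vectors.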
Well, in sports there are thousands of injuries and many different ways to describe them, so we had to be smart about our method. We seeded the algorithm with common injuries such as ankle sprains, concussions, and torn ACLs, and then used those to find injuries described in similar language. This expands the types of injuries we can identify: because the computer has already seen tons of injury-related text, it can make the connection based on similarity of language.
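The seed-and-expand idea can be sketched like this. The vectors and the threshold below are invented for illustration; in practice the vectors would come from a model trained on large amounts of sports text, and the threshold would be tuned on real data.

```python
import math

# Hypothetical pre-trained word vectors (numbers invented for the sketch).
vectors = {
    "sprain":     [0.80, 0.10, 0.10],
    "concussion": [0.70, 0.20, 0.10],
    "strain":     [0.75, 0.15, 0.10],
    "touchdown":  [0.10, 0.90, 0.00],
}

seeds = {"sprain", "concussion"}  # injuries we explicitly programmed in
THRESHOLD = 0.98                  # similarity cutoff for this toy data

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Flag unseen terms whose vectors sit close to any seed injury.
candidates = {
    w for w, v in vectors.items()
    if w not in seeds and any(cosine(v, vectors[s]) > THRESHOLD for s in seeds)
}
print(candidates)  # → {'strain'}
```

“strain” gets picked up because it appears in the same kind of language as the seed injuries, while “touchdown” does not, so the injury vocabulary grows without anyone hand-listing every term.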
It’s impossible to predict where the future of AI and machine learning will take us. Looking ahead in a positive light, machine learning can help augment all the areas where one really needs wide or deep knowledge of a subject (or many subjects). Law, medicine, and transportation are already being transformed before our very eyes.
What kind of AI or robot do you wish you had in your life?
Here at DEEP, we investigate the world of knowledge visualization, so stay up to date with us as we share our findings!