Tuesday, July 15, 2014

Microsoft's Project Adam aims to teach computers "to see"


Digital assistants like Siri, Google Now and Cortana may be very helpful in lots of scenarios, but they also show just how limited their apparent intelligence is. They're still just "dumb" assistants that aim to look a bit more intelligent than they really are, and they have no clue about the real world we live in. Microsoft wants to change all that with its Project Adam.

One example is easy to find: ask your computer to find the photos you took last year with a waterfall in them. That's something even a child could do, but for computers, images are just a bunch of pixels, and unless they have a specific pattern to look for (like a face), they're not very good at understanding what's in them.

With Project Adam, Microsoft wants to change all that by training on thousands upon thousands of samples of every imaginable kind, so that when its computers look at an image, they'll not only be able to say whether it shows a car or a dog, but go as far as saying what type of dog or car it is, that a particular kind of tree is also visible in the background, and so on.
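To give a rough feel for the "learn from many labeled samples" idea, here is a deliberately tiny sketch: a nearest-centroid classifier that averages labeled feature vectors per fine-grained label and then labels new inputs by proximity. This is not Microsoft's method (Project Adam trains large deep neural networks on real images); the feature vectors and breed labels below are purely illustrative.

```python
# Toy sketch of learning fine-grained labels from labeled samples.
# NOT Project Adam's actual technique, which uses deep neural
# networks at massive scale; this just illustrates the principle.

def train_centroids(samples):
    """samples: list of (feature_vector, fine_grained_label) pairs.
    Returns the mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is closest to vec."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], vec))

# Hypothetical training data: 2-D "features" for two dog breeds.
samples = [
    ([0.9, 0.1], "corgi"), ([0.8, 0.2], "corgi"),
    ([0.1, 0.9], "husky"), ([0.2, 0.8], "husky"),
]
centroids = train_centroids(samples)
print(classify(centroids, [0.85, 0.15]))  # → corgi
```

The more (and more varied) labeled samples you feed in, the finer the distinctions the system can draw; that scaling of training data is the core of the approach described above.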

In the future, systems such as these will allow us to look at a meal and immediately know how many calories we're about to ingest; or look at an animal and see information about it; or even ask things like "what kind of car is that?" as we stroll down the street and have our digital assistant answer us.
