Do you often find yourself standing or driving somewhere, not quite sure exactly where you are or where the thing you're looking for is located? Or do you find yourself curious, wanting to know more about the building or structure you're standing in front of?
I sure do, and so I was buoyed recently by some of the R&D results featured at TechFest 2007, a showcase of more than 750 researchers from around the world put on by Microsoft Research in Redmond, WA. One demonstration provided a good example of some themes I’ve mentioned previously here at Off Course – On Target, such as automated metadata generation, finding versus searching, contextual metadata, and personalization. To learn more about TechFest:
- Read about it in USA Today, and on Scobleizer's blog
- See photos at Laughing Squid
- View video at Mix 07
Today I’m focusing on one item from TechFest that caught my attention: the ability to use a cell phone’s camera to trigger a map display and other relevant information about your current location, based on the photo you take. This is also a good example of using audio or visual input and output, rather than text, to convey information. This kind of photo-driven lookup enables a more automated and immediate feedback loop that gets you the information you need.
To do the initial research and demonstration, the developers acquired millions of street-level** pictures of Seattle, which they indexed in a database of distinguishing visual features. Each indexed photo was then matched up with reference information about that location (I've sketched the basic index-and-match idea in code below).
** Not interested in being street bound? Then check out Sky Server, which lets you walk around the sky the same way you can walk the earth with Google Earth or Microsoft Virtual Earth (used in their super handy Local Live Search)!
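To make that index-and-match step concrete, here is a minimal sketch in Python. It is not the TechFest system; the photo IDs, feature names, and the `lookup` function are all invented for illustration, and a real system would extract visual descriptors from the images rather than use hand-labeled tokens. The general shape is the same, though: index reference photos by their distinguishing features, then match a new photo against the index and return the reference information attached to the best match.

```python
from collections import defaultdict

# Reference database: photo id -> (set of distinguishing features, location info).
# All entries here are made up for illustration.
photo_db = {
    "pike_place_01": ({"neon_sign", "red_awning", "clock_face"},
                      "Pike Place Market, 85 Pike St, Seattle"),
    "space_needle_03": ({"observation_deck", "tripod_legs", "spire"},
                        "Space Needle, 400 Broad St, Seattle"),
}

# Inverted index: feature -> photo ids that contain it
index = defaultdict(set)
for photo_id, (features, _) in photo_db.items():
    for feature in features:
        index[feature].add(photo_id)

def lookup(query_features):
    """Return the reference info for the indexed photo sharing the most features."""
    votes = defaultdict(int)
    for feature in query_features:
        for photo_id in index.get(feature, ()):
            votes[photo_id] += 1
    if not votes:
        return None
    best = max(votes, key=votes.get)
    return photo_db[best][1]

# A photo snapped on the phone would be reduced to its features, then looked up:
print(lookup({"red_awning", "clock_face", "pigeons"}))
# -> Pike Place Market, 85 Pike St, Seattle
```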
It will be a challenge to scale this to a large number of cities and other locations. However, we are also seeing some other very scalable phenomena that may well make this all quite possible, saleable, and I'd say probable. Consider the huge and apparently sustained volume of photos and videos being uploaded to marquee examples such as Flickr and YouTube. These could easily provide the volume of photos needed for this technology.
As more devices (including mobile phones) add GPS capabilities, precise location information will become widely available. See this recent review of some new smart phones with GPS. But we will still want a richer collection of information, triggered by the GPS data, that is relevant to our situation and location. This might include photo images of what surrounds any given location, who else is in the proximity at that time, who we may want to meet there, and resources and services in the vicinity, such as restaurants, shops, Wi-Fi hotspots, parking spaces, and hotel rooms. TechFest offered a number of possibilities.
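Here is a minimal sketch, again in Python, of what "relevant to our situation and location" could look like on the GPS side: take a coordinate fix from the phone and rank a hand-made list of nearby resources by great-circle distance. The resource names, coordinates, and the `nearby` helper are all invented for illustration, not tied to any particular service.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Tiny stand-in for the richer, location-triggered collection discussed above
resources = [
    ("restaurant",   "Imaginary Bistro",  47.6097, -122.3331),
    ("wifi_hotspot", "Corner Cafe Wi-Fi", 47.6085, -122.3400),
    ("parking",      "4th Ave Garage",    47.6110, -122.3350),
]

def nearby(lat, lon, limit_km=0.5):
    """Return resources within limit_km of the GPS fix, nearest first."""
    hits = [(haversine_km(lat, lon, rlat, rlon), kind, name)
            for kind, name, rlat, rlon in resources]
    return sorted(h for h in hits if h[0] <= limit_km)

# Example: a GPS fix in downtown Seattle
for dist, kind, name in nearby(47.6100, -122.3340):
    print(f"{name} ({kind}), {dist:.2f} km away")
```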
There’s lots more to report about TechFest 2007 and I'll talk more about it next time.
wayne