This has been a great week for exploring new visual tools. This post will look at one tool that compares photos to find visually similar images, another that creates 3D photo montages and a third that presents grouped keywords for image browsing.
First there is Riya, a visual search engine still in beta. I was especially intrigued by Riya Personal Search, which lets you use “face and text recognition to auto tag your photos”. Imagine a collection of 10,000 photos that include many combinations of the same 25 people. Instead of going through and hand tagging them all, you should be able to select good examples of each of the 25 people’s faces and let Riya’s magic do the busywork for you.
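Riya has not published how its recognition works, but the core idea of tagging from a handful of labeled examples can be sketched in a few lines. This is a hypothetical illustration, assuming each detected face has already been reduced to a numeric embedding vector by some face-recognition step; the names, vectors and threshold here are all made up:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def auto_tag(face_embedding, exemplars, threshold=0.8):
    """Return the best-matching person's name, or None if no labeled
    exemplar is similar enough. `exemplars` maps name -> embedding."""
    best_name, best_score = None, threshold
    for name, ref in exemplars.items():
        score = cosine_similarity(face_embedding, ref)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy exemplars standing in for the 25 hand-labeled faces.
exemplars = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.9, 0.1]}
print(auto_tag([0.88, 0.15, 0.02], exemplars))  # prints "alice"
```

Every remaining face in the 10,000-photo collection would then be run through `auto_tag`, with the threshold deciding when a face is left untagged for a human to review.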
Over on Research Buzz they talk about Riya’s first commercial venture, Like.com, in Visual Search Engine for Finding Items by Photo. Basically, this site lets you find clothing and accessories based on images in celebrity photos. What about buildings? What about automatic tagging of location by recognizing landmarks with distinctive shapes?
That leads us to Photosynth from Microsoft Live Labs. Photos are compared and stitched together into 3D models of a space. (Read more about how here.) They describe the current release as a ‘tech preview’ or a ‘sneak peek’ at what is being built for real back in the labs. I played with it a bit, and it is very interesting. I love the idea overall, but I wish there were a mode in which you could see the 3D world with many photos showing at once. Right now it shows you a sort of 3D point-plotted version of the physical structures and then ‘hangs’ the individual photos in the proper places to give you a bit of ‘skin’ on the 3D skeleton as you browse through the photos. Take a look and play with it to see what I mean.
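The “compared and stitched together” step rests on finding the same visual features in two photos. A minimal sketch of that matching stage, assuming each photo’s features have already been turned into descriptor vectors (the toy two-number descriptors below are invented for illustration; real systems use much longer ones):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(desc_a, desc_b, ratio=0.75):
    """For each feature descriptor from photo A, find its nearest
    neighbour among photo B's descriptors, keeping the match only if
    it is clearly better than the second-nearest (a ratio test that
    filters out ambiguous correspondences)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors: feature 0 in A matches feature 1 in B, and vice versa.
a = [[1.0, 0.0], [0.0, 1.0]]
b = [[0.0, 1.05], [1.02, 0.0]]
print(match_features(a, b))  # prints [(0, 1), (1, 0)]
```

With enough of these cross-photo matches, the geometry of the cameras and the 3D point cloud you see in the viewer can be estimated, which is how those points and hanging photos end up in the right places.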
My fascination with this (beyond the cool factor) is the idea of feeding in photos taken long ago to create a virtual 3D space to walk through: the lower east side of New York City in the early 1900s, or New Orleans before Katrina. I also wonder about creating 3D environments tied to time, so that another slider control on the screen could let you change the moment you were viewing and watch the same facade change in fast forward.
Finally I wanted to share the approach taken by CSA Images for their Visual Brainstorming Index. They have provided a way to view keywords grouped by topic. Their goal is to inspire their customers by showing them terms that might jump out as being useful. This seems like a great way to give users a quick grasp of what sorts of information (in this case images) can be found on the site. If a standard tool could show classified sets of keywords for users to explore, it could go a long way toward communicating the kinds of information available in a digitized archive online – especially if the browsable terms evolved as records were added to the collection.

I am sure there are other great ways to communicate the big picture, but I think that a way to browse keywords or tags would give users a handle on the records in a way that a well-written finding aid might not manage. Ask 100 users what a finding aid is. Then ask the same 100 users what a keyword or tag is. Moving to a model that users are comfortable with can only increase the use of the records we are working so hard to put online. Of course this assumes that the records in question are assigned keywords, tags or subject terms of some sort – but I think there are many paths to that (including the aforementioned Riya tool).
All three of these creations are interesting steps forward in searching through, processing and understanding images. As archivists make more progress in smoothing out the digitization process (or management of existing digital records), they will finally have more time to consider the wide array of tools that might make accessing those records easier. I hope that it happens sooner rather than later… and that those just starting the process of making archival records available online for research might make their plans with innovative ideas like these in mind.