Episode Summary

We’ve spoken about computer vision on the TechEmergence show in the past, but we haven’t covered much about its industry applications. Few businesses have mastered this technology in the form of an app better than Shutterstock. In today’s episode, we speak with Nathan Hurst, currently a distinguished engineer at Shutterstock and previously with Google, Amazon, and Adobe. Nathan delves into the topic of business apps that can “see,” touching on what that means for the industry, some of the exciting developments he’s seen over the last 10 years, and what he sees coming in the next few.
Expertise: Mathematics and computer science

Recognition in Brief: Nathan Hurst is a distinguished engineer at Shutterstock, where he applies his primary interests in mathematics and computing to the development of products and services. Prior to Shutterstock, Nathan worked as an engineer at Amazon, an information metawrangler at Google, and a research scientist at Adobe. He earned both his Bachelor of Science in Mathematics and Computer Science and his PhD from Monash University. Hurst’s current research interests intersect with computer vision, highly scalable data processing, information theory, and compressive sensing.

Current Affiliations: Distinguished Algorithm Engineer at Shutterstock; Founder of Inkscape

 


Interview Highlights:

(1:25) What have you seen as some of the most important developments that have moved the application of machine vision forward since you’ve been involved in it?

(4:52) Do you think that within 5 or 10 years we’ll be able to get far enough down the road that a computer will be able to identify a man sitting on a Harley-Davidson overlooking a cliff, with significant, legitimate reliability on a robust combination of nouns and verbs?

(7:24) What other applications am I missing? I can imagine that there’s a lot that you (Shutterstock) do with pictures…

(11:05) If you understand what’s in the image, it doesn’t matter if you’re searching in Japanese hypothetically…is that something you guys are presently working on, maybe farther down the line than your present image similarity, (is it) something in the works?

(12:25) When you talk about your (Shutterstock’s) editor, are you referring to people being able to adjust the results that they get from editing images…are you saying that machine learning might be able to play a role in giving them the result that they want…or did you mean something else?

(14:52) Do you think in the future you’ll be able to have people type in commands like ‘I’d like to remove this cat from this image’…and be able to have some degree of understanding…or some degree of creative editing work, ‘I want this cat to look happy’…is that still too far off as well, or where do you think we are on that continuum?

(18:04) These things (machine learning solutions) have to have some kind of business return on investment…despite the very quick pace, is there a simple way to compare machine learning platforms or solutions for machine vision for this purpose or that purpose…how do smart executives make those calls?


Shutterstock Highlights:

Introducing Shutterstock Editor: A Simple Way to Edit Beautiful Photos

Shutterstock Unveils Better, Faster, Stronger Search and Discovery Technology

Shutterstock Introduces Reverse Image Search for iOS

Related Emerging Technology Interviews: