

Shutterstock ConUHack Winner: Interactive Language Learning with Computer Vision

Congratulations to Dennis Cheong and Khaled Matloub for winning first place in the Shutterstock submissions category at ConUHack with LingoExplore.

Last week, the Shutterstock Montreal team sponsored and worked with students participating in ConUHack, Concordia University’s official annual hackathon.

As part of the Major League Hacking 2020 Hacking Season, the event brought together over 800 students for 24 hours of hacking, learning, and networking. 

We challenged students to use Shutterstock’s API to build innovative applications.

Among the many impressive applications built, we’re happy to share that LingoExplore took home first place!


About LingoExplore

Learning a new language is a common New Year’s resolution. At the same time, it’s one of the most frequently broken resolutions.

From personal experience with learning French, Dennis Cheong and Khaled Matloub are familiar with the challenges of learning a new language. So they took it upon themselves to develop a software solution that combines interactive learning and social reinforcement. 

LingoExplore provides users with a random word in French, called an artifact, along with a definition and hint of what the word might be. The user then has to find the object and take a picture of it. 

Then, LingoExplore runs the image through the Shutterstock Computer Vision API to determine if the user has uploaded an image of the correct object. If the image and the word are a match, it will be saved as a flashcard available for access at a later date. If they are not a match, the user is prompted to try again.
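The match check described above can be sketched as a small helper, assuming the auto-tagging call returns a list of keyword strings (the function and variable names here are illustrative, not from the LingoExplore codebase):

```javascript
// Decide whether the uploaded photo matches the target word.
// `keywords` is the list of tags returned for the image; `targetWord`
// is the French artifact shown to the user.
function isMatch(keywords, targetWord) {
  const normalized = targetWord.trim().toLowerCase();
  return keywords.some((kw) => kw.trim().toLowerCase() === normalized);
}
```

A photo tagged `["chaise", "meuble", "bois"]` would match the artifact "chaise", so it gets saved as a flashcard; otherwise the user is prompted to try again.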

To encourage consistent use, LingoExplore also allows users to like and comment on each other’s artifacts and view their rankings within their community.

How it was built

Cheong and Matloub used React Native as the frontend framework with Firebase supporting data and image storage. 

The Shutterstock Computer Vision API powers the object recognition process that evaluates whether or not the user has uploaded a correct image. Specifically, LingoExplore uses the image auto-tagging feature in the Computer Vision API product.

By making an API call to the “keywords” endpoint, LingoExplore receives a list of keywords identified in the user-uploaded image, which is then matched against the word first presented to the user.
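In outline, the round trip might look like the sketch below, assuming an OAuth bearer token, a base64-encoded image, and a two-step upload-then-keywords flow; the exact endpoint paths, parameter names, and response fields are assumptions here and should be verified against the Shutterstock API reference:

```javascript
// Sketch of fetching keywords for a user photo via the Shutterstock
// Computer Vision API. Endpoint paths and response shapes below are
// illustrative assumptions -- check the official API docs before use.

// Build the keywords-endpoint URL for an uploaded image (assumed shape).
function keywordsUrl(uploadId) {
  return `https://api.shutterstock.com/v2/cv/keywords?asset_id=u/${encodeURIComponent(uploadId)}`;
}

async function getKeywords(base64Image, token) {
  // Step 1: upload the image for analysis.
  const uploadRes = await fetch("https://api.shutterstock.com/v2/cv/images", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ base64_image: base64Image }),
  });
  const { upload_id } = await uploadRes.json();

  // Step 2: request the keywords detected in the uploaded image.
  const kwRes = await fetch(keywordsUrl(upload_id), {
    headers: { Authorization: `Bearer ${token}` },
  });
  const { data } = await kwRes.json();
  return data.map((kw) => kw.keyword); // assumed response field
}
```

The returned keyword list is what gets compared against the artifact word to decide whether the flashcard is saved.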

Accessing Shutterstock Computer Vision API

We encourage students, developers, and businesses to bring their visions to life with the Shutterstock API. To that end, the Computer Vision API is free to integrate with no approval process needed. 

Here’s a quick start guide to help you get started.

Featured image by Tartila.
