Phrase Map

In this module, we used Processing and Android tablets to gain a practical understanding of machine perception. We began by prototyping and experimenting with geolocation, mobile and touch interaction, computer vision, and a host of other areas, from sound to speech recognition.

For our final project, we chose to focus on speech-to-text and voice recognition. We asked each member of our programme to recite a number of phrases in multiple languages, to see how the language-processing algorithm would recognize and register each person's words.
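One common way to get speech-to-text on an Android tablet is the platform's built-in recognizer. The snippet below is a minimal, hypothetical sketch of that standard RecognizerIntent flow, not our actual capture code: the class name, request code, and language tag are placeholders for illustration.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class PhraseCaptureActivity extends Activity {
  static final int SPEECH_REQUEST = 1;

  // Ask the platform recognizer to transcribe speech in a given language
  void listenFor(String languageTag) {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, languageTag); // e.g. "es-ES"
    startActivityForResult(intent, SPEECH_REQUEST);
  }

  @Override
  protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK && data != null) {
      // The recognizer returns its best transcriptions as plain strings,
      // which can then be logged alongside the speaker's language and country
      ArrayList<String> words =
          data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
    }
  }
}
```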

Our project consisted of a program that produces a map on which every word spoken in each language is displayed and linked to the country of origin of the person who said it. From the final result it was possible to see which languages the machine processed most easily and which were more difficult, reflected in the word count: the easier a language was to recognize, the more words were registered.
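To give a sense of how those per-language word counts can be tallied behind the map, here is a small illustrative Processing sketch; the file name (phrases.csv) and the column name (language) are assumptions, not the project's actual data layout.

```java
// Illustrative sketch: tally how many words were registered per language
// and list the totals on screen, highest counts first.
Table phrases;
IntDict wordsPerLanguage = new IntDict();

void setup() {
  size(800, 600);
  phrases = loadTable("phrases.csv", "header");  // one row per recognized word
  for (TableRow row : phrases.rows()) {
    wordsPerLanguage.increment(row.getString("language"));
  }
  wordsPerLanguage.sortValuesReverse();          // languages with most words first
}

void draw() {
  background(255);
  fill(0);
  textSize(16);
  int y = 40;
  for (String lang : wordsPerLanguage.keyArray()) {
    // A higher count suggests the recognizer handled that language more easily
    text(lang + ": " + wordsPerLanguage.get(lang) + " words", 40, y);
    y += 24;
  }
}
```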

In this way, we uncovered clear patterns in voice recognition across languages, and we could compare the country of origin of the people reciting each phrase with how accurately the software processed their speech.

We also created an interactive tablet application that produces a personalised map for visitors to the final exhibition, so they could evaluate their own results and compare them with those from our class.