Category Archives: OpenSource

Open-ended Experimentation in Node.js & Angular2

Too often, two people look at the same data, image or video but never come to the same opinion or judgment about it. One way to correct that, and align everybody's criteria at the same level, is to have a golden sample with specific criteria describing what the optimal answer would be, and then train against it.

The Project:

This project was to build a training app for newly hired personnel, letting them align their criteria with our specification. It was also built as a white label, so we can add as many images or criteria as needed, and criteria can be hidden or left unfilled depending on what is displayed. Some criteria are black and white (buttons) and others are more opinion-based (sliders).

Technical Specs:


When someone starts the training, the app selects a picture from the database at “random”, using an algorithm that avoids displaying the same picture too many times. The user then fills in some or all of the criteria, depending on the setup.
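The post doesn't spell out the selection algorithm, but a minimal sketch of the idea (weighted random selection that down-weights pictures the user has already seen; all names here are illustrative, not the project's actual code) could look like this:

```javascript
// Hypothetical sketch: pick the next training image, weighting each
// image inversely to how many times this user has already seen it.
function pickNextImage(images, seenCounts) {
  // An image seen N times gets weight 1 / (1 + N).
  const weights = images.map(img => 1 / (1 + (seenCounts[img.id] || 0)));
  const total = weights.reduce((a, b) => a + b, 0);

  // Standard roulette-wheel selection over the weights.
  let r = Math.random() * total;
  for (let i = 0; i < images.length; i++) {
    r -= weights[i];
    if (r <= 0) return images[i];
  }
  return images[images.length - 1]; // guard against floating-point drift
}
```

With this weighting, an image the user has seen ten times is eleven times less likely to come up than one they have never seen, without ever being fully excluded from the rotation.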



After filling in all the needed criteria, the user can add details as a comment on the decision if needed. External resources can also be added as links to help the user make the right decision. When everything is complete, the user must accept or reject the overall image quality.





Once the user clicks Accept/Reject, the system validates the answer, shows the user visually whether they were right, and gives some examples of what was wrong. To err is human, so if users want to contest the golden sample they can, adding a description of why they think it should be accepted or rejected.

The list of criteria and their structure (button or slider) can easily be changed or extended in the database as part of the system setup. New golden pictures can be added via the UI if you have the manager permission, or directly via the Node API.
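As a rough illustration of what such a database-driven setup could look like (the field names below are assumptions, not the project's actual schema), each criterion might be stored as a small document that tells the UI which control to render:

```javascript
// Illustrative only: a black & white criterion renders as buttons,
// an opinion-based one as a slider with a range.
const criteria = [
  { key: 'focus', label: 'Image is in focus', type: 'button' },
  { key: 'exposure', label: 'Exposure quality', type: 'slider', min: 0, max: 10 },
];

// The UI picks a control based on the criterion's type.
function controlFor(criterion) {
  return criterion.type === 'slider' ? 'slider' : 'accept/reject buttons';
}
```

Adding a new criterion is then a database insert rather than a code change, which is what makes the white-label approach practical.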


Many items are tracked: success/failure, but also how many times an image was marked rejected in the golden sample while the user clicked accept; which kinds of criteria fail the most; which images were the most contested; which images fail the most; a histogram of the number of minutes spent per image per user versus all other users; and the number of images trained on per day, week, or month.

Having complex analytics can help users better understand their weaknesses, and can also help the manager understand what is most difficult for people in general.
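One of these metrics, which criteria fail the most, maps naturally onto a CouchDB-style map/reduce view. The following is only a sketch under an assumed document shape, not the project's actual code:

```javascript
// Assumed document shape: each completed training run is stored as
// { type: 'trainingResult', answers: [{ criterion, correct }, ...] }.
// This map function emits one row per incorrectly answered criterion.
function map(doc) {
  if (doc.type === 'trainingResult' && doc.answers) {
    doc.answers.forEach(function (answer) {
      if (!answer.correct) {
        emit(answer.criterion, 1);
      }
    });
  }
}
```

Paired with the built-in `_count` reduce and queried with `group=true`, this view returns a failure count per criterion, ready to chart.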

Update on FieldDB

It's been 3 years since the FieldDB project was launched at CAML in Patzun, Guatemala. Since then the project has graduated into its own GitHub organization with 50+ collaborators, and 50+ universities that we know of have been using it. In March we made sure that all the clients and libraries had Google Analytics integration, to better understand how users were working with the apps.

This week Veronica has been learning Google Analytics to see how the app has grown. One of the questions Hisako had was where in the world the users are, and how much time users spend in the app on average.


Taking a look at iLanguageCloud user reviews

It's been a few years since Josh originally released the iLanguageCloud project. iLanguageCloud uses Jason Davies' D3.js cloud layout library, plus some statistics to tokenize text and identify stopwords, so that it can support text in any Unicode charset in any language.
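The post doesn't show the statistics involved, but the general idea of language-independent tokenizing and stopword detection can be sketched like this (a simplified illustration, not the project's code; the 5% threshold is purely an assumption):

```javascript
// Tokenize on Unicode letter runs, so the same code handles any
// alphabet, then count how often each token appears.
function wordFrequencies(text) {
  const counts = {};
  for (const token of text.toLowerCase().match(/\p{L}+/gu) || []) {
    counts[token] = (counts[token] || 0) + 1;
  }
  return counts;
}

// Words that make up more than `ratio` of all tokens are probably
// function words ("the", "of", ...) in whatever language the text is in,
// so they can be dropped without a per-language stopword list.
function probableStopwords(counts, ratio = 0.05) {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return Object.keys(counts).filter(w => counts[w] / total > ratio);
}
```

Because the regex matches any Unicode letter and the stopword test is frequency-based rather than a fixed English list, the same approach works for text in any language.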

Since the app was released a surprising number of users have found the app and have been using it. Users have been requesting features and providing feedback on the Play Store and Chrome Store.

Some teachers have even tweeted about the app!

“Using @iLanguageLab word cloud to collect & display words to describe the moon. One S uses Word Central for help!”

This summer Veronica will be looking over iLanguageCloud user reviews in order to document what needs to be done in the next releases. First she found that most of the reviews indicate that there are different user groups who have different goals when they open the iLanguageCloud project. Some users want to paste a full text and see a cloud, but most users want to see all the words they paste.

She started by identifying the user types with a CouchDB map reduce and by learning how to do statistical analysis in LibreOffice. Once she had identified stats that categorize the user types, she added tests for these user types to the codebase using Jasmine.
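A hypothetical reconstruction of that kind of map/reduce (the document shape and the 50-word threshold are assumptions for illustration, not her actual view):

```javascript
// Classify each saved cloud by how much text the user pasted, to
// separate "tag cloud" users (short word lists) from "full text" users.
function map(doc) {
  if (doc.text) {
    const wordCount = doc.text.trim().split(/\s+/).length;
    emit(wordCount < 50 ? 'tagCloudUser' : 'fullTextUser', 1);
  }
}
```

With the built-in `_count` reduce and `group=true`, the view returns how many clouds fall into each user type, which is the kind of table that can then be analyzed in LibreOffice.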

Users are often creating tag clouds, not full text clouds. We attribute this to users being accustomed to pre-filtering their words down to only the words they want to show, with random text sizes, rather than text sizes that depend on word frequency or other factors.


While learning the tools (Angular.js, Travis) needed to make her user-type tests pass, Veronica created a video tutorial showing how to use the Chrome app, so that users have some instructions.


To help decide which features get done first, visit our GitHub feature list.

Welcome 2015 Summer Interns

Welcome 2015 summer interns Louisa Bielig, who just graduated with a BA Honours from McGill, and Veronica Cook-Vilbrin, who will be entering Norwich University as a student in the fall.

Louisa recently presented her honours thesis “Resumptive classifiers in Chuj high topic constructions” at GLEEFUL and the Harvard Undergraduate Linguistics Colloquia. Louisa was a previous intern on the FieldDB project, where she helped build a tool that uses the Inuktitut Bible as a corpus to supplement fieldwork. She has been using FieldDB for a few years to collect data for her thesis and for her research advisor's projects. This summer she will be using Git, Sublime, regular expressions, Yeoman, Angular.js, Jasmine, and CouchDB to improve the tools users use to clean their data.

You can follow Louisa’s work on Github:


Veronica is a former mechanical engineering student who turned to psychology. In preparation for running experiments and automating statistical analyses this summer, she will be learning Git, Sublime, CouchDB, LibreOffice, Google Analytics, Yeoman, Angular.js, Jasmine and Travis to give iLanguageCloud users an update based both on what users are requesting and on a behavioural analysis of what users have tried to do.

You can follow Veronica’s work on Github:


Recognizing Speech on Android

Tonight Gina and Esma will be presenting their Kartuli Speech Recognition trainer at Android Montreal.
The talk shows how to use speech recognition in your own Android apps. It will start with a demo of the Kartuli trainer app to set the context, and then dig into the code and Android concepts behind the demo. The talk has something for both beginner and advanced Android devs, namely two ways to do speech recognition: the easy way (using the built-in RecognizerIntent for the user's language) and the hard way (building a recognizer which wraps existing open source libraries when the built-in RecognizerIntent can't handle the user's language). While Gina was in Batumi, she and some friends built an app (code) (slides) (installer) so that Kartuli users could train their Androids to recognize SMS messages and web searches. Recognizing Kartuli is one of the cases where you can't use the built-in recognizer. The talk covers:
  • How to use the default system recognizer’s results in your own Android projects,
  • How to use the NDK in your projects,
  • How to use PocketSphinx (a lightweight recognizer library written in C) on Android

Live broadcast on YouTube
Code is open sourced on GitHub

Presentation at ComputEL workshop @ ACL 2014

This week Joel and Gina presented some of the work lab members Josh, Theresa, Tobin and Gina and interns ME, Louisa, Elise, Yuliya and Hisako have done on the LingSync project, as part of their 20-minute presentation “LingSync & the Online Linguistic Database: New models for the collection and management of data for language communities, linguists and language learners” at the Computational Approaches to Endangered Languages workshop at the 52nd Annual Meeting of the Association for Computational Linguistics (ACL).




LingSync and the Online Linguistic Database (OLD) are new models for the collection and management of data in endangered language settings. The LingSync and OLD projects seek to close a feedback loop between field linguists, language communities, software developers, and computational linguists by creating web services and user interfaces (UIs) which facilitate collaborative and inclusive language documentation. This paper presents the architectures of these tools and the resources generated thus far. We also briefly discuss some of the features of the systems which are particularly helpful to endangered languages fieldwork and which should also be of interest to computational linguists, these being a service that automates the identification of utterances within audio/video, another that automates the alignment of audio recordings and transcriptions, and a number of services that automate the morphological parsing task. The paper discusses the requirements of software used for endangered language documentation, and presents novel data which demonstrates that users are actively seeking alternatives despite existing software.

Download full paper as .pdf or .tex



Week 5: Viewing the web through Kartuli Glasses

After meeting some local software developers we found that:

  • Many technical words are simply transliterations of English into Kartuli.
  • Many iPhone users don't have a Georgian keyboard; as a consequence, roughly 5% of comments on Facebook are in romanized Kartuli.
  • The most popular browser in Georgia (at least in Batumi and the villages we were able to ask in) is actually Chrome!
  • Georgians go to school 100% in Kartuli, and did even during the USSR times. They have very high fluency in their native alphabet and in reading in general.

This meant that if we built a Chrome extension which transforms all English letters into their Kartuli equivalents, then Georgians who aren't entirely fluent with the English alphabet could read more content on the web. So far it seems to work great on Facebook and Google Plus, but it can also be used on any web page!
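A content-script sketch of the core idea (this is not the extension's actual code; only part of the letter mapping is shown, and the correspondences follow the common Georgian keyboard layout, so they should be checked against the extension's own table):

```javascript
// Partial Latin-to-Kartuli letter map, as on the standard Georgian
// keyboard layout. A real extension would cover the full alphabet.
const latinToKartuli = {
  a: 'ა', b: 'ბ', d: 'დ', e: 'ე', g: 'გ', i: 'ი',
  k: 'კ', l: 'ლ', m: 'მ', n: 'ნ', o: 'ო', r: 'რ',
  s: 'ს', u: 'უ', v: 'ვ', z: 'ზ',
};

// Replace each mapped Latin letter with its Kartuli equivalent,
// leaving digits, punctuation and unmapped letters untouched.
function toKartuli(text) {
  return text.replace(/[a-z]/gi, ch =>
    latinToKartuli[ch.toLowerCase()] || ch);
}
```

A real content script would walk the page's text nodes and apply `toKartuli` to each one, so markup and attributes stay untouched while the visible text is transformed.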


Week 1-3: Taking Learn X from clickable prototype to field testing

After talking with members of the TLG volunteers (Teach Learn Georgia) when they came down from the mountains for the weekend, it looks like older volunteers (August 2013) could share what they have learned in the field with newer volunteers (March 2014) using our open source code base called “Learn X”. Learn X makes it possible to create an Android app that one or many users can use to build their own language learning lessons together, using their Androids to take video and pictures or record audio, backed by the FieldDB infrastructure for offline sync. Like heritage learners, TLG volunteers spend their time surrounded by the language and can understand more than they can speak, and what they speak about depends highly on their host families and what those families talk about most.



Installer on Google Play
Open-sourced on GitHub