Too often, two people look at the same data, image, or video and come to different opinions or judgments. One way to correct this and align everyone's criteria is to create a golden sample with specific criteria defining what the optimal answer would be, and then train against it.
This project was to build a training app for newly hired personnel, letting them align their judgments with our specifications. It was also built as a white-label product, so we can add as many images or criteria as needed, and not all criteria have to be displayed or filled in, depending on what is shown. Some criteria are black & white (buttons) and others are more opinion-based (sliders).
When someone starts the training, the app selects a picture from the database at "random" according to an algorithm that avoids displaying the same picture too many times. The user then fills in some or all of the criteria, depending on the setup.
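As a sketch of how such a selection algorithm might work (this is an illustration, not the app's actual implementation), each image can be weighted inversely to how often it has already been shown, so unseen images are the most likely to come up:

```javascript
// Hypothetical sketch: pick the next training image at "random",
// biased toward images the user has seen the least.
// `images` is assumed to look like [{id, timesShown}, ...].
function pickNextImage(images) {
  // Weight = 1 / (timesShown + 1): unseen images get the highest weight.
  const weights = images.map(img => 1 / (img.timesShown + 1));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < images.length; i++) {
    r -= weights[i];
    if (r <= 0) return images[i];
  }
  return images[images.length - 1]; // floating-point fallback
}
```

The same idea generalizes to any weighting, for example penalizing images the user answered correctly several times in a row.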
After filling in all the required criteria, the user can add a comment explaining the decision if needed. External resources can also be attached as links to help the user make the right decision. Once everything is complete, the user must accept or reject the overall image quality.
Once the user clicks Accept/Reject, the system validates the answer, shows visually whether the user was right, and gives some examples of what was wrong. To err is human, so if the user wants to contest the golden sample, he can, adding a description of why he thinks the image should be accepted or rejected.
The list of criteria and their input types (button or slider) can easily be changed or extended in the database as part of the system setup. New golden pictures can be added via the UI (with manager permission) or directly via the node API.
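For illustration, a criterion record in the database might look something like this (the field names here are assumptions, not the real schema):

```javascript
// Hypothetical examples of criterion documents as they might be stored
// in the database as part of the system setup.
const criteria = [
  {
    name: "focus",
    type: "button",   // black & white: pass/fail
    required: true
  },
  {
    name: "lighting",
    type: "slider",   // opinion-based: a 0..10 range
    min: 0,
    max: 10,
    required: false   // only displayed/filled for some image types
  }
];
```

Adding a new criterion is then a matter of inserting another document, with no UI changes needed.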
Many metrics are tracked: success/failure, but also how many times an image was rejected when the user clicked accept, which criteria fail the most, which images are the most contested, which images fail the most, a histogram of the minutes spent per image per user versus all other users, and the number of images trained on per day/week/month.
Having detailed analytics helps users better understand their weaknesses, and also helps managers understand what people in general find the most difficult.
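If the answers were stored as documents in something like CouchDB, a metric such as "which criteria fail the most" could be computed with a simple map-reduce view. A sketch, assuming hypothetical answer documents with `criterion` and `correct` fields:

```javascript
// Hypothetical CouchDB map function: count failures per criterion.
// Assumes answer docs shaped like
// {type: "answer", criterion: "focus", correct: false}.
function map(doc) {
  if (doc.type === "answer" && doc.correct === false) {
    emit(doc.criterion, 1);
  }
}
// Paired with the built-in "_sum" reduce and queried with ?group=true,
// this view yields one failure total per criterion.
```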
This week Veronica has been learning Google Analytics to see how the app has grown. One of the questions Hisako had was where in the world the users are, and how much time users spend in the app on average.
Since the app was released, a surprising number of users have found it and have been using it. Users have been requesting features and providing feedback on the Play Store and Chrome Store.
Some teachers have even tweeted about the app!
This summer Veronica will be looking over iLanguageCloud user reviews in order to document what needs to be done in the next releases. First, she found that most of the reviews indicate there are different user groups with different goals when they open the iLanguageCloud project. Some users want to paste a full text and see a cloud, but most users want to see all the words they paste.
She started by identifying the user types with a CouchDB map-reduce and by learning how to do statistical analysis in LibreOffice. Once she had identified stats to categorize user types, she added tests for these user types to the codebase using Jasmine.
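As a hedged sketch of what such a map function could look like (the document fields and the 50-word threshold are assumptions for illustration, not the actual iLanguageCloud schema):

```javascript
// Hypothetical CouchDB map function: classify each saved cloud by
// whether the pasted text looks like a pre-filtered word list
// (tag-cloud user) or a full passage (full-text user).
function map(doc) {
  if (doc.text) {
    const wordCount = doc.text.trim().split(/\s+/).length;
    emit(wordCount < 50 ? "tagCloudUser" : "fullTextUser", 1);
  }
}
// With the built-in "_count" reduce and ?group=true, this gives one
// total per user type, which can then be exported for analysis.
```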
Users are often creating tag clouds, not full-text clouds. We attribute this to users being used to pre-filtering their words down to only the ones they want to show, with random text sizes, rather than text sizes that depend on word frequency or other factors.
While she learns the tools (Angular.js, Travis) needed to make the modifications so that her user-type tests pass, Veronica created a video tutorial showing how to use the Chrome app, so that users have some instructions.
Veronica is a former mechanical engineering student turned psychologist. In preparation for running experiments and automating statistical analyses this summer, she will be learning Git, Sublime, CouchDB, LibreOffice, Google Analytics, Yeoman, Angular.js, Jasmine, and Travis to give iLanguageCloud users an update based both on what users are requesting and on a behavioural analysis of what users have tried to do.
This week lab members Farah and Gina will be talking about how to set up and tweak Sikuli tests for Android at GDG Android Montreal. In this talk they show how you can test image-heavy and/or legacy/hybrid Android apps using OpenCV (computer vision) and Sikuli.
Sikuli is a framework which automates anything you see on the screen. It uses image recognition to identify and control GUI components. It is useful when there is no easy access to a GUI's internals or source code, or when writing tests crosses layers of technologies, e.g. in a Cordova/HTML5 app running in a webview.
Sikuli is an open source project started at MIT which has grown to be used by developers for diverse types of clicker testing.
Here is a video showing how Farah used Sikuli to test a Cordova/HTML5 app running in an Android webview.
Tonight Gina and Esma will be presenting their Kartuli Speech Recognition trainer at Android Montreal.
The talk will show how to use speech recognition in your own Android apps. It will start with a demo of the Kartuli trainer app to set the context, and then dig into the code and the Android concepts behind the demo. The talk has something for both beginner and advanced Android devs, namely two ways to do speech recognition: the easy way (using the built-in RecognizerIntent for the user's language) and the hard way (building a recognizer which wraps existing open source libraries when the built-in RecognizerIntent can't handle the user's language). While Gina was in Batumi, she and some friends built an app (code) (slides) (installer) so that Kartuli users could train their Androids to recognize SMS messages and web searches. Recognizing Kartuli is one of the cases where you can't use the built-in recognizer. The talk will cover:
How to use the default system recognizer's results in your own Android projects,
How to use the NDK in your projects,
How to use PocketSphinx (a lightweight recognizer library written in C) on Android.
LingSync and the Online Linguistic Database (OLD) are new models for the collection and management of data in endangered language settings. The LingSync and OLD projects seek to close a feedback loop between field linguists, language communities, software developers, and computational linguists by creating web services and user interfaces (UIs) which facilitate collaborative and inclusive language documentation. This paper presents the architectures of these tools and the resources generated thus far. We also briefly discuss some of the features of the systems which are particularly helpful to endangered languages fieldwork and which should also be of interest to computational linguists, these being a service that automates the identification of utterances within audio/video, another that automates the alignment of audio recordings and transcriptions, and a number of services that automate the morphological parsing task. The paper discusses the requirements of software used for endangered language documentation, and presents novel data which demonstrates that users are actively seeking alternatives despite existing software.
Since Kartuli is an agglutinative language with very rich verb morphology, searching for appropriate results is very difficult. Over the past few weeks of observation, it seems that most Kartuli speakers prefer to search using Russian search engines and Russian vocabulary. Mari (who is a lawyer) and Gina decided to create a corpus of law cases in Kartuli and see if the FieldDB glosser can help build a stemmer that might be used for searching in Georgian.
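As a very rough illustration of what a suffix-stripping stemmer looks like (the transliterated suffix list below is purely illustrative; a real Georgian stemmer would need a linguist-curated suffix inventory and Georgian script support):

```javascript
// Minimal longest-suffix-first stemmer sketch. The suffixes are
// transliterated illustrations only, not a complete inventory.
const SUFFIXES = ["ebis", "ebi", "is", "s"]; // longest first

function stem(word) {
  for (const suffix of SUFFIXES) {
    // Only strip if a reasonable stem (> 2 characters) remains.
    if (word.length > suffix.length + 2 && word.endsWith(suffix)) {
      return word.slice(0, -suffix.length);
    }
  }
  return word; // no suffix matched, or the word is too short
}
```

Applied to both the corpus and the search query, even a naive stemmer like this lets inflected forms such as "kanonebi" match the stem "kanon", which is the kind of normalization a Georgian search interface is missing.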
While Mari was teaching Gina and Esma how to use the Georgian court websites, she showed them along the way how she modifies her search terms to get some results in supreme court cases, unlike the constitutional court search page, which lets you search for an empty string and see all results… This was an illuminating experience of searching as a minority language speaker, so we decided to share it as an unlisted YouTube video despite the poor image quality.
Supreme court search page:
* Requires search to find documents
* Need to use very general search terms to get any results, and the results you get are not always relevant to the case you are working on
* Documents are .html, which is excellent for machines, but Mari didn't seem too excited about it; we will ask her more later
Constitutional court search page:
* Requires no search to find documents
* Documents are in .doc format, which users are used to
* Easy to download documents so you can read them offline when you are in the village, or put them on a USB key if you are using someone else's computer for the internet
This week we documented our findings about which popular apps and operating systems are available in Kartuli, and to what extent. The results were pretty good, but we identified two ways we could help: showing Kartuli speakers how they can contribute to Chrome and Android localization.
We found out that, because of how Google localizes Android, contributing translations for minority languages is extremely time-consuming for the Android team, which means they won't be able to accept our help, not for Kartuli, not for Migmaq.
On the other hand, Chromium translations are managed using Launchpad, and it is entirely possible to help out. Esma began contributing reviews and novel translations; we are waiting for news to find out if she was successful!