GSoC/2014/Project ideas

Apart from the following ideas, you can propose your own ideas.

==Potential Mentors==
# Santhosh Thottingal ('''santhosh''' on irc.freenode.net)
# Baiju M ('''baijum''' on irc.freenode.net)
# Praveen A ('''j4v4m4n''' on irc.freenode.net)
# Rajeesh K Nambiar ('''rajeeshknambiar''' on irc.freenode.net)
# Vasudev Kamath ('''copyninja''' on irc.freenode.net)
# Jishnu Mohan ('''jishnu7''' on irc.freenode.net)
# Hrishikesh K. B ('''stultus''' on irc.freenode.net)
# Anivar Aravind ('''anivar''' on irc.freenode.net)
# Anilkumar K V ('''anilkumar''' on irc.freenode.net)
# Sajjad Anwar ('''geohacker''' on irc.freenode.net)
# Deepa P. Gopinath ('''deepagopinath''' on irc.freenode.net)
# Jain Basil ('''jainbasil''' on irc.freenode.net)
# Ershad K ('''ershad''' on irc.freenode.net)
# Navaneeth ('''nkn__''' on irc.freenode.net)
# Nishan Naseer ('''nishan''' on irc.freenode.net)
# Nandaja Varma ('''gem''' on irc.freenode.net)


=Ideas for Google Summer of Code 2014=
* Please read the [http://wiki.smc.org.in/SoC/2014#FAQ FAQ]
* If you want to propose an idea, please do it in the [http://lists.smc.org.in/listinfo.cgi/student-projects-smc.org.in student projects mailing list]


=Projects with confirmed mentors=




== A spell checker for Indic languages that understands inflections ==
'''Project''':


The SILPA project has a spellchecker written in Python with a not-so-simple algorithm, but it is still not capable of handling the inflection and agglutination occurring in Indian languages, especially the south Indian languages. The dictionary we have for the Malayalam spellchecker has about 150,000 words. Of course we can expand the dictionary, but that does not add much value, since words in Malayalam, Tamil etc. can be formed by joining multiple words. In addition, words get inflected based on grammar forms (sandhi), plural, gender etc. Hunspell has a system to handle this, but so far nobody has succeeded in getting it to work for the multi-level suffix stripping required for Malayalam. Sometimes a Malayalam word can be formed by more than five words joining together. We will need word-splitting logic or a table taking care of all the patterns. The project is to attempt solving this with Hunspell. If that is not feasible (Hunspell upstream is not active), develop an algorithm and implement it.

Recently the Tamil community attempted developing a spellchecker using Hunspell with multi-level suffix stripping. You can see the result here: https://github.com/thamizha/solthiruthi.

Our first attempt should be to use Hunspell to achieve spellchecking with agglutination and inflection. This will probably require a lot of scripting to generate suffix patterns, and we can ask existing language communities for help too. If Hunspell has limitations with multi-level suffixes (sometimes Indian languages require more than five levels of suffix stripping), we need to document them (bug reports and documentation) and attempt a Python-based solution on top of the SILPA framework.

The project is not about coding an existing algorithm, but about developing and implementing one.

Hunspell's limitations can be understood from [[User:%E0%B4%B8%E0%B4%A8%E0%B5%8D%E0%B4%A4%E0%B5%8B%E0%B4%B7%E0%B5%8D/HunspellConversation| this conversation]] we had with the author of Hunspell in 2008.

'''Homework to do before submitting applications''':
# Use Hunspell in any Indian language, like Malayalam, for spell correction in editors or word processors, and understand the limitations
# Study the nature of inflection and agglutination in Indian languages, read existing documents on this (ask for documents too) and note down your observations
# Study Hunspell and other spellcheckers to see how this problem is addressed
# Understand how a spell checker works and how to write one from scratch
# Come up with a plan for addressing the issue
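As a starting point for homework item 1, here is a minimal sketch of driving Hunspell from Python. It assumes the pyhunspell binding and a Malayalam dictionary installed at the paths shown (both are assumptions; any editor or word processor with Hunspell support works just as well for the exercise):

<code><pre>
# Minimal sketch: probing a Hunspell dictionary with an inflected word.
# Assumes the pyhunspell binding (package "hunspell") and a Malayalam
# dictionary at the paths below; adjust the paths for your system.
import hunspell

checker = hunspell.HunSpell('/usr/share/hunspell/ml_IN.dic',
                            '/usr/share/hunspell/ml_IN.aff')

word = 'മരങ്ങൾ'  # an inflected (plural) form of മരം, "tree"
if checker.spell(word):
    print(word, 'is accepted by the dictionary')
else:
    # Agglutinated and inflected forms are often missing; note how useful
    # (or not) the suggestions are -- that is the limitation to document.
    print(word, 'is rejected; suggestions:', checker.suggest(word))
</pre></code>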


* '''[https://savannah.nongnu.org/task/index.php?12558 Savannah Task]'''
'''Complexity''': Advanced

'''Confirmed Mentor''': Santhosh Thottingal

'''How to contact the mentor''': IRC - santhosh on #smc-project on Freenode

'''Expertise required''': Average-level understanding of the grammar system of at least one Indian language, plus completion of the homework listed above.

'''What the student will learn''':


==Indic rendering support in ConTeXt==
'''Project''':


ConTeXt is another TeX macro system, similar to LaTeX but much more suitable for design. To find more information about ConTeXt, see the wiki http://wiki.contextgarden.net/Main_Page. ConTeXt MKII has Indic language rendering support using XeTeX, but MKII is deprecated, and the new MKIV backend doesn't support Indic rendering yet. The aim of this project is to add support for Indic rendering to ConTeXt MKIV. XeTeX uses HarfBuzz to do correct Indic rendering.


'''More Details''': A partially working patch by Rajeesh for the MKIV Lua code is available. ConTeXt MKII (deprecated) can work with the XeTeX backend for Indic rendering. Here is a sample file:

  \usemodule[simplefonts]
  \definefontfeature[malayalam][script=mlym]
  \setmainfont[Rachana][features=malayalam]
  \starttext
  മലയാളം \TeX ഉപയോഗിച്ച് ടൈപ്പ്സെറ്റ് ചെയ്തത്
  \stoptext

Generate the output using the command

  texexec --xetex <file.tex>

* '''[https://savannah.nongnu.org/task/index.php?12559 Savannah Task]'''

'''Complexity''' : Advanced

'''Confirmed Mentor''' : Rajeesh K Nambiar

'''How to contact the mentor''': IRC - rajeeshknambiar on #smc-project on Freenode

'''Expertise required''': Understanding of the TeX system, experience in either LaTeX or ConTeXt, and a basic understanding of Indic language rendering. MKIV uses Lua, so familiarity with Lua, OpenType specifications or HarfBuzz will be an added advantage.

'''What the students will learn''':


==Language model and Acoustic model for Malayalam language for speech recognition system in CMU Sphinx==
'''Project''':


CMU Sphinx is a large-vocabulary, speaker-independent speech recognition codebase and suite of tools which can be used to develop a speech recognition system in any language. To develop an automatic speech recognition system for a language, an acoustic model and a language model have to be framed for that particular language. Acoustic models characterize how sound changes over time; they capture the characteristics of the basic recognition units. The language model describes the likelihood, probability, or penalty taken when a sequence or collection of words is seen. It attempts to convey the behavior of the language and tries to predict the occurrence of specific word sequences possible in the language. Once these two models are developed, they will be useful to everyone doing research in speech processing. For the Indian languages Hindi, Tamil, Telugu and Marathi, ASR systems have been developed using the Sphinx engine. This project is aimed at developing an acoustic model and a language model for Malayalam.
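To make the end goal concrete, here is a rough sketch of how the finished models could be exercised with the CMU Sphinx Python bindings (pocketsphinx). The Malayalam model paths are placeholders for the acoustic model, language model and pronunciation dictionary that this project would produce:

<code><pre>
# Sketch: decoding a Malayalam utterance once an acoustic model (-hmm),
# a language model (-lm) and a pronunciation dictionary (-dict) exist.
# The models/ml_* paths are hypothetical outputs of this project.
from pocketsphinx.pocketsphinx import Decoder

config = Decoder.default_config()
config.set_string('-hmm', 'models/ml_acoustic')   # acoustic model directory
config.set_string('-lm', 'models/ml.lm')          # n-gram language model
config.set_string('-dict', 'models/ml.dict')      # phonetic dictionary
decoder = Decoder(config)

with open('sample_ml.raw', 'rb') as audio:        # 16 kHz, 16-bit mono raw PCM
    decoder.start_utt()
    decoder.process_raw(audio.read(), False, True)  # whole utterance at once
    decoder.end_utt()

print('Hypothesis:', decoder.hyp().hypstr)
</pre></code>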


=== Background Reading ===
* [http://www.cs.cmu.edu/~gopalakr/publications/spdatabases_specom05.pdf 'Development of Indian Language Speech Databases for Large Vocabulary Speech Recognition Systems'], Gopalakrishna  Anumanchipalli, Rahul Chitturi, Sachin Joshi, Rohit Kumar, Satinder Pal Singh, R.N.V. Sitaram, S P Kishore
* [http://www.aclweb.org/anthology/W/W12/W12-5808.pdf "Automatic Pronunciation Evaluation And Mispronunciation Detection Using CMUSphinx"], Ronanki Srikanth, James Salsman
* http://www.speech.cs.cmu.edu/
* http://cmusphinx.sourceforge.net/wiki/tutorial
* [http://www.ijarcsse.com "HTK Based Telugu Speech Recognition"], P. Vijai Bhaskar, AVNIET, Hyderabad, Prof. Dr. S. Rama Mohan Rao, A. Gopi
* [http://www.cs.cmu.edu/~araza/Automatic_Speech_Recognition_System_for_Urdu.PDF "Design and  Development of an Automatic Speech Recognition System for Urdu"], Agha Ali Raza,  M.Sc. Thesis, FAST‐National University of Computer and Emerging Sciences
* [http://www.ccis2k.org/iajit/PDF/vol.6,no.2/11IASRUCSS186.pdf "Investigation Arabic Speech Recognition Using CMU Sphinx System"], Hassan Satori, Hussein Hiyassat, Mostafa Harti, and Noureddine Chenfour
* [http://www.try.idv.tw/static-resources/homework/pr/PR_Final_Report.pdf "Understanding the CMU Sphinx Speech Recognition System"], Chun-Feng Liao
 
'''Complexity''' :
 
'''Confirmed Mentor''' : Deepa P Gopinath
 
'''How to contact the mentor''': IRC - deepagopinath on #smc-project on Freenode
 
'''Expertise required''':
 
'''What the students will learn''':
 
 
==SILPA Project Based==
 
 
===SILPA Project Improvements===
 
'''Project''':
 
This is a set of ideas for improving the existing SILPA infrastructure. We have decided on the following tasks as part of this project:
 
# Provide REST API to SILPA without disturbing existing JSONRPC API
# Improve the Transliteration module
# Integrate [https://github.com/Project-SILPA/flask-webfonts Flask Webfonts] extension with SILPA to provide Webfonts support.
 
==== Provide REST like API for SILPA ====


SILPA currently provides a JSONRPC API, which is also utilized by the framework's templates. JSONRPC is not well supported in all languages and results in [https://en.wikipedia.org/wiki/Not_invented_here NIH code]. So we would like to provide REST-like HTTP-based APIs for SILPA and, at the same time, leave the current JSONRPC code untouched for backward-compatibility reasons.


'''Objectives''':
 
* Develop a module or use an existing module to provide REST-like APIs
* The API should support GET and POST. [http://www.w3.org/2001/tag/doc/whenToUseGet.html When to use GET?]
 
Many people have doubts about how the API should look. The Twitter API (https://dev.twitter.com/docs/api) can serve as an example.

Sample API calls:
    POST api.silpa.org.in/payyans/ASCII2Unicode
    Parameters: text, font
    Response: JSON data

    POST api.silpa.org.in/payyans/Unicode2ASCII
    Parameters: text, font
    Response: JSON data

Generic:
    GET/POST (http://api.silpa.org.in/module/function_name or http://silpa.org.in/api/module/function_name)
    Parameters: function parameters
    Response: JSON encoded return value from function
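For illustration, this is how a client might call the proposed endpoint once it exists. The URL and parameters follow the samples above; this is not a live API yet, and the font-map name is a placeholder (a sketch using the Python requests library):

<code><pre>
# Sketch: calling the proposed REST endpoint (not a live API yet).
import requests

resp = requests.post(
    'http://api.silpa.org.in/payyans/ASCII2Unicode',
    data={'text': 'some ASCII-encoded text', 'font': 'example-font-map'},
)
resp.raise_for_status()
print(resp.json())   # JSON-encoded return value from the module function
</pre></code>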
 
====  Improve Transliteration module ====
 
We have a Transliteration module which supports transliteration from any Indic language to any other Indic language, and also supports English-to-Indic and Indic-to-English transliteration. We also support the IPA and ISO 15919 transliteration systems. But the module isn't in perfect shape and has a lot of bugs. With this idea we would like to improve the following parts:
 
# Improve the cross-Indic-language transliteration system. Currently only Malayalam and Kannada work without any external language support; all other Indian languages are first transliterated to Malayalam and then transliterated to the target Indic language. We want to remove this source -> Malayalam -> target cycle.
# English to IPA transliteration is currently broken and needs to be fixed. See the [https://github.com/Project-SILPA/Transliteration/issues/3 IPA transliteration bug].
# Once the IPA transliteration issue above is fixed, improve the English to Indic transliteration system using IPA. Currently English to Indic transliteration is done using the CMU Sphinx dictionary, which has a limited set of words and in turn limits the output of the English to Indic transliteration system.
# Improve the ISO 15919 to Indic transliteration system; see [https://github.com/Project-SILPA/Transliteration/issues/4 ISO 15919 to Indic transliteration].
 
CLDR has transliteration data for Indic languages. We can explore it and see the feasibility. For an intermediate representation of the scripts, either IPA or the ISO 15919 standard can be used. All of these must be supplemented with exception rules and special-case handling to achieve better results.
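To illustrate the kind of rule-based mapping involved when ISO 15919 is used as the intermediate representation, here is a deliberately tiny, hypothetical sketch. The table covers only the characters of one Devanagari example word; a real module would carry full per-script tables (for example derived from CLDR) plus the exception handling mentioned above:

<code><pre>
# Hypothetical, minimal sketch of rule-based Devanagari -> ISO 15919 romanisation.
CONSONANTS = {"भ": "bh", "र": "r", "त": "t"}
VOWEL_SIGNS = {"ा": "ā", "ि": "i", "ी": "ī"}
INHERENT = "a"

def to_iso15919(text):
    out = []
    for ch in text:
        if ch in CONSONANTS:
            out.append(CONSONANTS[ch] + INHERENT)   # consonant carries inherent 'a'
        elif ch in VOWEL_SIGNS:
            # a dependent vowel sign replaces the inherent vowel of the
            # preceding consonant
            out[-1] = out[-1][:-1] + VOWEL_SIGNS[ch]
        else:
            out.append(ch)
    return "".join(out)

print(to_iso15919("भारत"))   # -> bhārata
</pre></code>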


==== Integrating flask-webfonts extension with SILPA ====


SILPA used to have a Webfonts module for serving Indian language fonts as webfonts for browsers. During GSoC 2013 it was separated into an extension to the Flask framework which can be used with any Flask-powered app. The current code can be found at [https://github.com/Project-SILPA/flask-webfonts flask-webfonts]. The module is not fine-tuned yet, so the objectives are listed below.


# The module is not yet fine-tuned, and using it will make other modules break. This needs to be fixed (can be checked with the 'webfonts' branch of the SILPA code on GitHub).
# Write tests to check the functionalities.
# Adhere to the Flask extension guidelines and submit the module to the Flask extensions directory.
# Write a tool which can take a directory of font files or a single font file and generate the configuration file needed by the extension; see the sketch after this list. (A possible such tool, now outdated, can be found at [https://github.com/copyninja/fontinfo fontinfo].)
# Provide HTTP APIs through the Flask extension which can expose the CSS for applications.
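For objective 4, one possible shape for the tool is sketched below. It assumes the fontTools library for reading font metadata, and the JSON layout of the generated configuration is only a guess, since the flask-webfonts configuration format is still being settled:

<code><pre>
# Sketch: walk a directory of fonts and emit a configuration file for the
# webfonts extension. Uses fontTools to read the 'name' table; the JSON
# schema here is a placeholder, not the extension's settled format.
import json
import os
import sys

from fontTools.ttLib import TTFont

def font_entry(path):
    font = TTFont(path)
    name_table = font['name']
    return {
        'family': name_table.getDebugName(1),   # nameID 1: font family
        'style': name_table.getDebugName(2),    # nameID 2: subfamily/style
        'file': os.path.basename(path),
    }

def build_config(font_dir):
    entries = [font_entry(os.path.join(font_dir, f))
               for f in sorted(os.listdir(font_dir))
               if f.lower().endswith(('.ttf', '.otf'))]
    return {'fonts': entries}

if __name__ == '__main__':
    print(json.dumps(build_config(sys.argv[1]), indent=2, ensure_ascii=False))
</pre></code>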


For all the tasks above, we expect documentation and test cases from the students as deliverables.


'''Complexity''' : Intermediate


'''Confirmed Mentors''' : Vasudev Kamath, Jishnu Mohan


'''How to contact the mentors''': IRC -
*Vasudev Kamath - copyninja on #smc-project and #silpa on Freenode
*Jishnu Mohan - jishnu7 on #smc-project and #silpa on Freenode


'''Mailing List''': silpa-discuss@nongnu.org (preferred)


'''Expertise required''': Python, Flask, Jinja, HTML, JavaScript


'''What the students will learn''':
 
# Writing applications using Flask
# Knowledge of various transliteration systems
# Webfonts knowledge and writing extensions for Flask
# Test-driven development


===Android SDK for Silpa===

'''Project''':

Port possible Silpa modules to Java and create an SDK so that other developers can use it in their apps. Modules like Indic Render, Transliteration and Payyans have really good potential on Android, because of the fragmentation that exists in Android and the lack of proper Indic support. This SDK will help developers support their Indic apps on a wide range of Android devices.
 
'''Objectives''':
(Please note this idea is for an SDK, not an app or just a Java port.)
 
*All modules need to be ported to Java so that they can be used inside an Android project.
*Other applications should be able to use this Silpa library (as an SDK) to easily integrate features from our modules. For example:
**Transliteration - the developer can specify that a text input inside the application needs transliteration, and our SDK should take care of the transliteration process whenever the user inputs text in that field.
**Render module - detect whether the necessary font is available on the system; if it is not, render the text as an image and replace the text with it.
**All modules can be explained like this.
*Investigate whether the image-rendering part of the render module can be done on the device, inside the application itself. A few ways to implement that are:
**Compiling cairo/pango with the NDK
**Compiling HarfBuzz from the AOSP tree with the NDK
**Based on the result of the rendering-module investigation, we can decide whether to use server-side rendering or not.
**Pack popular fonts with the SDK and use them to display text if the device doesn't have the required font (there are a few hacks to get better rendering in older versions of Android). The developer should be able to force rendering using the packaged font, to get consistency across devices.
 
(It is better to prepare an SDK with helpers than an application itself; SDK here means a library.)
 
'''Complexity''' : Advanced
 
'''Confirmed Mentors''' : Hrishikesh K. B, Jishnu Mohan, Aashik S
 
'''How to contact the mentor''': IRC -
*Hrishikesh K B - stultus on #smc-project and #silpa on Freenode
*Jishnu Mohan - jishnu7 on #smc-project and #silpa on Freenode
*Aashik S - irumbumoideen on #smc-project on Freenode


'''Expertise required''': Java, Android, Python


'''What the students will learn''':


===Converting Indic processing modules currently in SILPA into a JavaScript module library===

'''Project''':

Port some of the SILPA algorithms to Node modules. Several modules and algorithms in the SILPA project are currently written in Python, but porting them to JavaScript helps developers. For example, cross-language transliteration can be done in JavaScript too if we port the algorithm and the transliteration rules. Similarly, the approximate search can be ported; a flexible fuzzy search on web pages will be possible if we have the algorithm in JavaScript.


The proposed JavaScript module pattern is UMD: https://github.com/umdjs/umd
Student proposals should include a list of the algorithms they plan to port, planned demo applications, planned documentation details, and publishing details (for example, the npm registry).


'''Complexity''' :  


'''Confirmed Mentor''' : Santhosh Thottingal, Jishnu Mohan


'''How to contact the mentor''': IRC - jishnu7, santhosh on #smc-project and #silpa on Freenode


'''Mailing List''': silpa-discuss@nongnu.org


'''Expertise required''': JavaScript, Python


'''What the students will learn''':


===Integrate Varnam into Silpa===


Create a Silpa module which hosts [http://www.varnamproject.com varnam]. This includes making a Python port of libvarnam and making a Silpa module which uses the Python port.

'''Complexity''' : Medium

'''Confirmed Mentor''' : Navaneeth K N

'''How to contact the mentor''': IRC - nkn__ on #smc-project and #silpa on Freenode

'''Mailing List''': silpa-discuss@nongnu.org

'''Expertise required''': C, Python

===Internationalize SILPA with the Wikimedia jQuery projects, improve the webfonts module in Silpa using jquery.webfonts, and provide more Indic and complex fonts===

'''Project''':

'''Internationalize SILPA''': The SILPA project has many Indic language applications, but as of now, if somebody wants to input in Indian languages, there is no built-in tool for it. Similarly, the application is not internationalized. Both of these can be achieved by using the [//github.com/wikimedia/jquery.ime jquery.ime] and [//github.com/wikimedia/jquery.i18n jquery.i18n] libraries from Wikimedia. A sample implementation is available on our [http://smc.org.in website]. The i18n should be in the SILPA Flask framework with a nice templating system. Similarly, the interface should have webfonts using the [https://github.com/wikimedia/jquery.webfonts jquery.webfonts] library.

'''Improve the webfonts module''':
* Currently Silpa provides 36 webfonts; add more fonts to this collection.
* Rewrite the webfonts module to use the features of jquery.webfonts.
* Create a repo as per the jquery.webfonts specification.
* Provide a clean API so that other websites can use our webfonts on their sites.
* Document the usage.
* Provide font preview and download options (this is partly done).

'''More Details''':
* [https://github.com/wikimedia/jquery.i18n jquery.i18n]
* [https://github.com/wikimedia/jquery.ime jquery.ime]
* [https://github.com/wikimedia/jquery.webfonts jquery.webfonts]

'''Expertise required''': jQuery, CSS, HTML5, Python, Flask, technical understanding of fonts

'''Mentor''' : Jishnu/Vasudev


==Language filter for diaspora==

Diaspora is a Free Software, federated social networking platform. Diaspora users post in many languages. When people use more than one language in their posts, it is inconvenient for people who don't understand a language. This task is to tag every post with the languages used in the post, ideally detected automatically, but with an option to override it. Once each post has a language tag, people should be able to choose their preferred language, and posts in other languages should be hidden by default. Also provide an option to translate posts and comments.
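The actual implementation would live in Diaspora's Ruby on Rails codebase, but to show the automatic-tagging idea, here is a small Python sketch using the langdetect library (an assumption; any language-identification library could play this role, and the user must still be able to override the tags):

<code><pre>
# Sketch: tag a post with the language(s) detected in its text.
# langdetect is used only to illustrate the idea; the real feature would
# use a Ruby library inside Diaspora.
from langdetect import detect_langs

def tag_languages(post_text, threshold=0.2):
    """Return ISO 639-1 codes for languages detected above `threshold`."""
    return [item.lang for item in detect_langs(post_text) if item.prob >= threshold]

post = "ഇന്ന് നല്ല മഴയാണ്. It has been raining all day."
print(tag_languages(post))   # e.g. ['ml', 'en']; the user can override this
</pre></code>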


'''Complexity''' :
 
'''Confirmed Mentors''' : Pirate Praveen, Ershad K
 
'''How to contact the mentors''': IRC 
*Pirate Praveen - j4v4m4n on #smc-project on Freenode
*Ershad K - ershad on #smc-project on Freenode
 
'''Expertise required''': Ruby on Rails
 
'''Upstream discussion''': https://www.loomio.org/d/4vTqCj5X/language-filter-for-diaspora-as-a-gsoc-project
 
'''What the students will learn''':


==Varnam Based==
Varnam is a cross-platform predictive transliterator for Indian languages. It works mostly like Google's transliterate, but shows key differences in the way word tokenization is done. It has a learning system built in which allows Varnam to make smart predictions.

There are varnam clients available as [https://addons.mozilla.org/en-US/firefox/addon/varnam-transliteration-base/ Firefox] and [https://chrome.google.com/webstore/detail/varnam-ime/abcfkeabpcanobhdmcmdabejaamephaf Chrome] addons, and an [https://gitorious.org/varnamproject/libvarnam-ibus/source/d939adf50024013902c27310c03ef21a9210cdcb IBus engine].

To try out Varnam, navigate to http://varnamproject.com/editor. Currently it supports Hindi and Malayalam.

* [http://www.varnamproject.com/docs/faq FAQ]
* [http://www.varnamproject.com/docs Documentation]
* [http://www.varnamproject.com/docs/contributing Contributors guide & ideas to work on]

Apart from the following ideas, you can propose your own idea.
===Programming language bindings & varnam-daemon===
Varnam is written in C, which makes interoperability with other languages easy. There are language bindings available for `NodeJs` and `Ruby`. Supporting Varnam in multiple languages allows projects to use varnam easily to enable Indian language input.

To make using varnam from different languages easier, make a cross-platform standalone process which uses the `libvarnam` shared library and exposes an RPC API over the network. This allows any programming language with socket support to be used with libvarnam. It also makes language bindings fairly easy, because they don't have to work with native interoperability support. The protocol can be a simple text-based protocol covering all the commands that `libvarnam` supports.
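Purely as an illustration (the wire protocol is yet to be designed), a client talking to such a daemon over a plain text protocol could look like the sketch below; the command name, port and newline-delimited framing are all hypothetical:

<code><pre>
# Sketch of a client for a hypothetical varnam-daemon text protocol.
# The "TRANSLITERATE" command, the port and the line-based framing are
# invented for illustration; designing the real protocol is part of the idea.
import socket

def transliterate(text, lang='ml', host='localhost', port=8123):
    with socket.create_connection((host, port)) as sock:
        sock.sendall('TRANSLITERATE {} {}\n'.format(lang, text).encode('utf-8'))
        reply = sock.makefile(encoding='utf-8').readline()
    return reply.strip()

print(transliterate('bharath'))
</pre></code>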
'''Complexity''' : Advanced

'''Confirmed Mentor''' : Navaneeth K N

'''How to contact the mentor''': IRC - nkn__ on #smc-project on Freenode

'''Expertise required''': C
===Improvements to the REST API===

'''Project''':

This includes a rewrite of the current implementation in `golang` and adding support for WebSockets to improve the input experience. It also includes making scripts that would ease embedding input on any webpage. All the changes done will go live on [1].
 
'''Complexity''' : Advanced
 
'''Confirmed Mentor''' : Navaneeth K N
 
'''How to contact the mentor''': IRC - nkn__ on #smc-project on Freenode


'''Expertise required''': Basic understanding of golang and C
'''What the students will learn''':
===Improve the learning system===
'''Project''':
The main goal of this is to improve how varnam tokenizes when learning words. Today, when a word is learned, varnam takes all the possible prefixes into account and learns all of them to improve future suggestions. But sometimes this is not enough to predict good suggestions. An improvement is suggested which will try to infer the base form of the word under learning.

Varnam has a learning system built in which can learn words, and it can also learn other possible ways to write a word. Consider the following example.

<code>
<pre>
learn("भारत") = [bharat, bhaarath, bharath]
transliterate("bharat") = भारत
transliterate("bhaarath") = भारत
transliterate("bharath") = भारत
</pre>
</code>

Varnam also learns a word's prefixes so that it can produce better predictions for any word which has the same prefix. So in this case, just by learning the word "भारत", varnam can predict "bharateey" = "भारतीय".

The proposed idea talks about making this learning better. One example is inferring the word "भारत" when learning भारतीय. Something like a Porter stemmer implementation, but integrated into the varnam framework so that new language support can be added easily.

This idea also includes improving concurrency support for learn. Currently, learn can't be called concurrently because of restrictions in SQLite, so every learn has to be done sequentially. This needs to be improved by having a simple internal queue into which words get queued while learn is busy.
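A minimal sketch of the queueing idea is shown below. It is illustrative Python only; the real change would live inside libvarnam's C code around its SQLite handle. Learn requests are pushed onto an internal queue and drained by a single writer, so callers never block on SQLite's single-writer restriction:

<code><pre>
# Illustrative sketch of serialising learn() calls through an internal queue.
# The real implementation would be inside libvarnam (C + SQLite); this only
# demonstrates the many-producers / single-consumer pattern described above.
import queue
import threading

class LearnQueue:
    def __init__(self, learn_fn):
        self._learn_fn = learn_fn              # the actual (SQLite-backed) learn
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def learn(self, word):
        """May be called concurrently; only enqueues, never touches SQLite."""
        self._queue.put(word)

    def _drain(self):
        while True:
            word = self._queue.get()
            self._learn_fn(word)               # sequential writes, one at a time
            self._queue.task_done()

lq = LearnQueue(lambda w: print('learned', w))
lq.learn('भारत')
lq.learn('भारतीय')
lq._queue.join()                               # demo only: wait for pending learns
</pre></code>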


'''Complexity''' : Advanced


'''Confirmed Mentor''' : Navaneeth K N
 
'''How to contact the mentor''': IRC - nkn__ on #smc-project on Freenode
 
'''Expertise required''':  Knowledge in C, Ruby (basics)
 
'''What the students will learn''':
 
===Create an Android IME===


Varnam will be ported as a Silpa module, and it will be available on Android as part of the Android SDK project which Silpa has proposed. This idea is merged into the [http://wiki.smc.org.in/SoC/2014/Project_ideas#Android_SDK_for_Silpa Silpa] project ideas.


===Enable varnam's suggestions system to be used from Inscript or any other input system===
 
Varnam has knowledge about a lot of words. This idea proposes a method to use these words to provide suggestions for other input systems. Basically, in Varnam, the API call will be something like:
 
<code><pre>
varnam_get_suggestions (handle, "भारत");
</pre></code>
 
This will fetch all the suggestions which have the given prefix.

`varnam_get_suggestions` needs to keep track of the previous words and use an [http://en.wikipedia.org/wiki/N-gram n-gram]-based dataset to filter the results. It should also learn the words back into the word corpus that varnam is using. Filtering suggestions won't be just a prefix search; it will have knowledge about how text can be written in the target language and provide smart filtering. Searching a large corpus and providing real-time suggestions makes this a challenging task.
 
Once this is implemented in `libvarnam`, it can be used in the ibus-engine.
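Purely as an illustration of the filtering described above (this is not the libvarnam implementation or API), a prefix search combined with a bigram score might look like the following sketch; the corpus and counts are toy data:

<code><pre>
# Toy sketch: prefix search over a known-words corpus, re-ranked with a
# bigram count keyed on the previous word. Not the libvarnam implementation.
corpus = ['भारत', 'भारतीय', 'भाषा', 'भवन']
bigram_counts = {('मेरा', 'भारत'): 12, ('मेरा', 'भवन'): 3}   # toy n-gram data

def get_suggestions(prefix, previous_word=None):
    candidates = [w for w in corpus if w.startswith(prefix)]
    # rank candidates by how often they follow the previous word
    return sorted(candidates,
                  key=lambda w: bigram_counts.get((previous_word, w), 0),
                  reverse=True)

print(get_suggestions('भ', previous_word='मेरा'))   # ['भारत', 'भवन', ...]
</pre></code>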
 
'''Complexity''' : Advanced
 
'''Expertise required''': C, Unicode & encodings
 
'''Confirmed Mentor''' : Navaneeth K N
 
'''How to contact the mentor''': IRC - nkn__ on #smc-project on Freenode




=== Word corpus synchronization ===


'''Project''':

Create a cross-platform synchronization tool which can upload/download the word corpus from offline IMEs like varnam-ibus [2]. This helps to build the online word corpus easily.


'''Complexity''' : Medium


'''Confirmed Mentor''' : Navaneeth K N
 
'''How to contact the mentor''': IRC - nkn__ on #smc-project on Freenode
 
'''Expertise required''': Knowledge in C/golang


'''What the students will learn''':


 
* [1]: http://www.varnamproject.com
* [2]: https://gitorious.org/varnamproject/libvarnam-ibus/
 
==Adding Braille Keyboard layouts for Indian Languages to m17n Library==
 
'''Project''':


The project is to build support for Bharati Braille keyboard layouts in GNU/Linux systems. The Bharati Braille standard is the official Braille standard in India. A regular QWERTY keyboard is used for data entry; the SDF-JKL keys are used for the six dots of Braille. This support needs to be built as m17n layouts. It will enable visually challenged people who have studied Braille layouts to use GNU/Linux systems easily, with the help of audio feedback from a TTS.


'''More Details'''
* http://www.acharya.gen.in:8080/disabilities/bh_brl.php
* http://en.wikipedia.org/wiki/Bharati_Braille
* http://www.nongnu.org/m17n/


 
'''Complexity''' :
 
'''Confirmed Mentor''' : Anilkumar K V
 
'''How to contact the mentor''': IRC - anilkumar on #smc-project on Freenode
 
'''Expertise required''':
 
'''What the students will learn''':


==Grandham==
=== Adding MARC21 import/export feature in Grandham ===


We need a feature in Grandham to import and parse data from MARC21 documents. We should also be able to export existing data in MARC21.
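Grandham is a Rails application, so the real work would use a Ruby MARC library; as a language-neutral illustration of what "import and parse MARC21" means, here is a small Python sketch using pymarc (an assumption, for illustration only):

<code><pre>
# Illustration only: reading bibliographic fields out of a MARC21 file and
# writing the records back out. Grandham itself would do this in Ruby.
from pymarc import MARCReader, MARCWriter

records = []
with open('books.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        records.append(record)
        title = record['245']['a'] if record['245'] else None   # title statement
        isbn = record['020']['a'] if record['020'] else None    # ISBN, if present
        print(title, isbn)

# Export: serialise the parsed records back to MARC21.
with open('export.mrc', 'wb') as out:
    writer = MARCWriter(out)
    for record in records:
        writer.write(record)
    writer.close()
</pre></code>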
'''Complexity''' : High

'''Confirmed Mentor''' : Ershad K

'''How to contact the mentor''': IRC - ershad on #smc-project on Freenode


'''Expertise required''': Knowledge in Ruby/Ruby on Rails


'''What the students will learn''':
 
 
* [1]: http://dev.grandham.org
* [2]: https://github.com/smc/grandham


=Projects with unconfirmed mentors=
