User:Nandaja/GSoC 2013 Automated Rendering Testing
Personal information

  • Name: Nandaja Varma
  • Email Address: <nandaja.varma AT gmail DOT com>
  • Freenode IRC Nick: gem
  • University and current education: BTech Computer Science, Calicut University (NSS College, Palakkad)
  • Blog URL: nandajavarma.wordpress.com

Why do you want to work with Swathanthra Malayalam Computing?

Ever since I came to know about the activities of SMC (around the start of my second year at college), I have wanted to be a part of this community and make significant contributions to it. I see GSoC as a great opportunity to do so, and I would like to contribute through any other means possible as well.

Do you have any past involvement with Swathanthra Malayalam Computing or another open source project as a contributor?

Yes, I have recently started contributing to SMC's GNOME localization team. I also contribute to the Debian community as a packager, mainly packaging Ruby gems for Debian. I recently got involved in digitization work with the Malayalam Wikigrandhashala as well.

Did you participate in past GSoC programs? If so, in which years and with which organizations?

No, I did not.

Do you have other obligations between May and August? Please note that we expect the Summer of Code to be a full-time, 40-hours-a-week commitment.

I have no other obligations whatsoever between the proposed months. I will be able to make the 40-hours-a-week commitment to GSoC.

Will you continue contributing to/supporting Swathanthra Malayalam Computing after the GSoC 2013 program? If yes, which areas are you interested in?

Yes, most definitely. I would like to continue contributing to localization work, as translation is one of my areas of interest. I would also like to make major contributions to SMC's rendering-fixing work.

Why should we choose you over other applicants?

I have an understanding of how the rendering engine Harfbuzz works, and I have played with a couple of scripts that print the glyph indices of a given text in a given font. As for implementing my project idea, I have good knowledge of the C programming language and good reading and writing skills in Malayalam; this will definitely help me in creating the list of base glyph words for this project. I also have a fairly clear understanding of the text rendering stack and its constituent modules.


Project Description

An Overview of your proposal

Harfbuzz is an open source development library for shaping Unicode text, specifically complex scripts. The main objective of this project is to develop an automated mechanism to test what Harfbuzz renders for different Indic languages. As of now, there is no actual mechanism to check whether Harfbuzz is rendering text correctly. Since Harfbuzz is very efficient, widely used, and likely to remain in use for a long time to come, the project is highly relevant. The proposed system will be able to test renderings in different Indic languages using different fonts.

The need you think it fulfills

Implementing the above idea makes it possible to verify that what Harfbuzz renders is actually correct. Such a mechanism would make life easier for developers and users, because currently the only way to check is manual testing, which is time consuming and error prone. Also, anyone can get renderings tested, whether or not she knows the particular language being rendered.

Any relevant experience you have

I have decent knowledge of the C programming language, in which Harfbuzz is implemented. I am quite familiar with the Harfbuzz architecture and its renderings. My knowledge of text rendering stacks, glyphs and Unicode encoding will also help take me further. I also have experience in localization and digitization work which, I hope, will help at some points in the project.

How you intend to implement your proposal

Harfbuzz is a shaping engine for Unicode text, especially complex scripts. It offers two utilities, hb-view and hb-shape, for viewing and testing the rendering: hb-view outputs the rendered Unicode text in a given font, basically as an image, whereas hb-shape outputs the glyph indices of that text in that font. For example, the command hb-shape Rachana.ttf മലയാളം gives an output like [m1=0+1046|l3=1+1462|y1=2+1624|uni0D3E=2+826|lh=4+1134|uni0D02=4+856], which is basically the glyph index sequence of the word 'മലയാളം'. Glyphs represent the shapes that characters take when they are rendered or displayed. OpenType is the prominent font standard used today; OpenType font technology deals with glyphs, whereas Unicode deals with characters. Glyph indices map a Unicode character to its corresponding glyph(s), so they are one of the most important things to deal with when it comes to rendering.
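
As a concrete illustration, here is a minimal Python sketch of driving hb-shape from a script and extracting just the glyph names (illustrative only; it assumes hb-shape is on the PATH and the font file exists):

import subprocess

def shape(font_path, text):
    """Return the list of glyph names hb-shape reports for `text`."""
    out = subprocess.check_output(
        ["hb-shape", font_path, text]).decode("utf-8").strip()
    # hb-shape prints e.g. [m1=0+1046|l3=1+1462|...]; strip the brackets,
    # split on '|', and drop the cluster/advance data after '='.
    fields = out.strip("[]").split("|")
    return [f.split("=")[0] for f in fields]

print(shape("Rachana.ttf", "മലയാളം"))  # ['m1', 'l3', 'y1', 'uni0D3E', 'lh', 'uni0D02']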

So, to implement this idea of automated testing, what will be done is to evaluate the output of the hb-shape functionality. Since it shows the glyph indices of any word given as input, we can check this value for correctness. The methodology will be as follows: create a baseline glyph-word list consisting of each word and its corresponding glyph indices, for each font. This list must contain the correct rendering of every word specified, and it will have to be created for every Indic language for which we plan to implement this testing. To create this table, we can make use of FontForge, a font editor that can be used to create fonts: it shows the layout of each character, so we can build a baseline glyph-word table from the glyph index data fetched from FontForge for different Indic languages. Obviously, we cannot create a table with every possible character or character combination; that would be difficult as well as inefficient, since it would drastically slow down the comparison procedure. So special care should be taken to build a table of the most important characters that might go wrong and should not, special-case characters, etc. We have to pick words or character combinations intelligently, which can significantly decrease the total number of entries in the list. The list can then be entered into a database, or a more efficient structure such as a hash table or a trie can be used to search the data quickly, while providing our list as a separate text file.
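
As a hedged sketch (the file format below is an assumption, not the project's actual one), the baseline could be loaded into a Python dict, i.e. a hash table, with one word per line and a tab separating the word from its comma-separated glyph names:

def load_baseline(path):
    """Load a baseline file of lines like: മലയാളം<TAB>m1,l3,y1,uni0D3E,lh,uni0D02"""
    baseline = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, glyphs = line.rstrip("\n").split("\t")
            baseline[word] = glyphs.split(",")
    return baseline

# baseline = load_baseline("rachana-baseline.txt")
# baseline["മലയാളം"] -> ['m1', 'l3', 'y1', 'uni0D3E', 'lh', 'uni0D02']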

Then a script should be written, in C, to accept hb-shape output as input, check it against our baseline glyph-word list, find the exactly matching word, and see whether the glyph indices match. If they do not, the word can be flagged as incorrectly rendered. It might also happen that a word to be compared does not appear in the list we provide; here comes the efficiency of the words we have chosen. We can either assume that the particular word or character is very rarely used, or assume that the input word was given wrong. If the same word misses more than a certain number of times, we can conclude that our assumptions were wrong, flag that word, and add its corresponding glyph indices as an upgrade.
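
Put together, the check could look like the following sketch (in Python rather than C, purely for brevity; shape and load_baseline are the illustrative helpers from the earlier sketches):

def test_word(word, font_path, baseline):
    """Return True/False for a match, or None if the word is not listed."""
    if word not in baseline:
        return None                      # rare word, or a typo in the input
    return shape(font_path, word) == baseline[word]

def run_tests(words, font_path, baseline):
    """Collect the words flagged as incorrectly rendered."""
    return [w for w in words if test_word(w, font_path, baseline) is False]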

Also, to interact with this proposed library, a web front end can be made, in PHP, to make it more user friendly than using the command line.

A rough timeline for your progress with phases

  • Week 1 - 2: Learn more about OpenType and Unicode: how font shapes are usually rendered by engines, how they appear when characters are combined into words, and what changes happen to the glyph indices.
  • Week 2 - 3: Create a list of words or characters in Unicode that need to be tested against Harfbuzz output, most preferably in Malayalam. Select the words carefully to make the whole list effective as well as concise.
  • Week 4 - 5: Start coding the application with the collected data as the baseline.
  • Week 6: Test the code against some Harfbuzz Malayalam renderings using the provided list, and make changes accordingly to make it correct and faster.
  • Week 7 - 8: Create the baseline glyph-word index for as many Indic languages as possible, despite the time and linguistic barriers. Planning to collect it at least for Hindi.
  • Week 9: Create the web front end for the application.
  • Week 10: Testing, reviewing and documentation.

Tell us something about what you have created

I have created a prototype search engine, with Hadoop in the back end, Python for the ranking process, and a web page as the interface.

Have you communicated with a potential mentor? If so who?

Yes, I have communicated with the mentor Rajeesh K Nambiar.

SMC Wiki link of your proposal

SMC wiki link

Progress

20/07/2013

  • Started coding for the project three days ago.
  • As for my currently developing code, the inputs needed are a file with a list of words/characters whose rendering is to be tested, along with the correct glyph names of those words/characters. These are extracted manually from FontForge at the moment. E.g.: ക[k1]
  • The next file needed contains the output of Harfbuzz renderings of all the words/characters chosen for testing. A separate script, executed on the test words file, produces output of the form ക[k1=0+1588]. This is actually the output of the hb-shape command; the values following the = are ignored for now.
  • In the testing script, the first file is opened and the characters appearing before [ (i.e. our word/character) are read. Then, until the ] sign is encountered, the glyph names (eliminating =, + and digits) are added to an array. The same word is looked up in the Harfbuzz rendered outputs file, and its glyph names are similarly collected into an array.
  • Then the two arrays are compared. If both are the same, a 0 is entered into a check array: check[i] = 0. Otherwise, check[i] = 1.
  • The last two steps are repeated until the end of the file is reached.
  • After that, we look through the check array. Every word at position i with check[i] = 1 is stored in a separate file.
  • Finally, we can run another script on this results file to get the hb-view outputs of these words, for a better understanding of the rendering mistakes.
  • Further corrections to the above algorithm will be updated periodically; a rough sketch of the comparison is given after this list.
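
For illustration, here is a hedged Python sketch of the parse-and-compare step described above (the actual implementation at this point was in C; the file formats follow the description, everything else is assumed):

import re

def parse_line(line):
    """Split 'word[glyph data]' into (word, [glyph names])."""
    word, _, rest = line.partition("[")
    glyphs = rest.rstrip("]\n")
    # drop '=<cluster>+<advance>' suffixes and split on ',' or '|'
    return word, [re.sub(r"=\S*", "", g) for g in re.split(r"[,|]", glyphs)]

def compare_files(reference_path, rendered_path, result_path):
    """Write every word whose rendered glyphs differ from the reference."""
    with open(reference_path, encoding="utf-8") as ref, \
         open(rendered_path, encoding="utf-8") as ren, \
         open(result_path, "w", encoding="utf-8") as out:
        for ref_line, ren_line in zip(ref, ren):
            word, expected = parse_line(ref_line)
            _, actual = parse_line(ren_line)
            if expected != actual:        # check[i] = 1 in the description
                out.write(word + "\n")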

29/07/2013

The coding period for GSoC started this past week, and I have been working on a very simple implementation of the proposal in C, plus two tiny bash scripts. My code is available here: https://gitlab.com/gem/automated-rendering-testing

The first thing to do when testing with these scripts is to create a file containing the set of words whose rendering is to be checked. Here I have taken a sample test data file created by SMC a while ago (ml-harfbuzz-testdata.txt). Now pass this file through the script render_test.sh along with the necessary font file. That is:

./render_test.sh ml-harfbuzz-testdata.txt /path/to/fontfile

This will create a file named rendered_glyphs.txt that contains the output of Harfbuzz's hb-shape function, i.e. the glyph names followed by some additional numbers (which are ignored for now).
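
(render_test.sh itself is a bash script in the repo; as a rough illustration only, a Python equivalent of what this step does might look like the following, assuming one test word per line:)

import subprocess, sys

def generate_renderings(testdata_path, font_path, out_path="rendered_glyphs.txt"):
    """Run hb-shape on every word in the test data and collect the output."""
    with open(testdata_path, encoding="utf-8") as f, \
         open(out_path, "w", encoding="utf-8") as out:
        for word in (line.strip() for line in f if line.strip()):
            shaped = subprocess.check_output(
                ["hb-shape", font_path, word]).decode("utf-8").strip()
            out.write(word + shaped + "\n")   # e.g. ക[k1=0+1588]

if __name__ == "__main__":
    generate_renderings(sys.argv[1], sys.argv[2])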

Now create a file that contains the actual glyph names of the words in the test data word file. I got the data from FontForge. This file has to be created manually and, as of now, must obey the following structure:

[glyph11,glyph12,glyph13,...,glyph1n]

[glyph21,glyph22,glyph23,...,glyph2n]

...

Also make sure that the glyph names of each word are in the same order as the corresponding words in the test data file. I have named this file orig_glyphs.txt. Once this is done, we can pass the above two files to the executable of the script rendering_testing.c, say rendering_testing. That is:

./rendering_testing orig_glyphs.txt rendered_glyphs.txt

This program compares the glyphs in order and, if it finds any pair that doesn't match, writes to a file, result.txt, the line number at which the word appears in the test data file. Otherwise it tells you the renderings are perfect.

Once this is done, to see the words with wrong renderings, run the third script, show_rendering.sh. It takes as input the result.txt file, the test data file and the font file. That is:

./show_rendering.sh result.txt ml-harfbuzz-testdata.txt /path/to/fontfile

This script will create PNG images of the wrongly rendered words in the current directory.
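
(Again as an assumed illustration only, not the repo's actual script, this step could be done from Python like so, assuming result.txt holds 1-based line numbers:)

import subprocess, sys

def render_failures(result_path, testdata_path, font_path):
    """Render each wrongly rendered word to a PNG with hb-view."""
    words = open(testdata_path, encoding="utf-8").read().splitlines()
    for n in open(result_path, encoding="utf-8").read().split():
        word = words[int(n) - 1]          # result.txt stores 1-based line numbers
        subprocess.check_call(
            ["hb-view", font_path, word, "--output-file", "wrong_%s.png" % n])

if __name__ == "__main__":
    render_failures(sys.argv[1], sys.argv[2], sys.argv[3])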

That is all about my scripts. But the C code is very inefficient; it even throws segmentation faults with some files. Once I make sure that I am on the right path after discussing with my mentor, I will work on improving my algorithm and making this code better. That will be next week's work.


14/07/2013

This week I've been working on generating baseline glyphs files for 4 fonts: Rachana, Meera, Suruma and Lohith-Malayalam. I selected some Malayalam words from the Harfbuzz tree and Santhosh Thottingal's test cases, which I thought would be enough to expose rendering problems. Then I started listing the glyph names of these words for each font in separate text files. To get the corresponding Unicode code point of each word, I wrote a small Java program; I executed it on each word, found all the code points, and made 4 text files containing the corresponding glyph names for the four fonts mentioned earlier.

Although my mentor did tell me that it is not possible to generate glyph names automatically, I wasted more than a couple of days on a FontForge script trying to make it output the glyph names automatically. But it gives the glyph name only when you click on each character, which was terribly disappointing. So instead I used it to produce the baseline glyphs file in the structure I want when I click on the necessary characters. This code is trivial as far as rendering testing is concerned, and I will leave it out from now on (just noting it down, since it consumed a very non-trivial amount of my time ;-) ).

I have modified the main C code so that it asks the tester which font she wants and, after she chooses, outputs the result based on the words I have provided.

But my mentor pointed out that code in 3 different languages for a single framework looks quite messy, so I'll be re-writing my code in Python this week.

You can find my code here: https://github.com/nandajavarma/Automated-Rendering-Testing (although the README is not up-to-date)

(The above content is from my blog: http://nandajavarma.wordpress.com/)

21/7/13

This week my main task was to migrate my code to Python. As of now I have implemented my algorithm in Python. Here is the link to the repo: https://gitlab.com/gem/automated-rendering-testing/tree/master

I have expanded my test cases list a bit; it now has 243 Malayalam words. I have manually created files with the glyph names of these test cases in four fonts (Rachana, Meera, Suruma and Lohith-Malayalam), in files named rachana-glyph.txt, meera-glyph.txt, etc. (They are still a bit buggy, so I haven't pushed the latest commit of this yet.)

What the code basically does is ask the tester which font she/he wants to test. Say it is Meera. The code looks for the reference file, which we have created manually, and for the file with the Harfbuzz renderings of the test cases, named hb_meera_rendering.txt. The latter can be created by running the harfbuzzrendering.py script with the proper font files in the current directory. The main script, rendering_testing.py, scans both files, compares the glyph names corresponding to each word, and stores the wrongly rendered words in a new list. Finally, hb-view is executed on the words in this list, and a file named output.png is generated in the same directory, pictorially representing the wrong renderings.

One can even provide a separate test cases file (after preparing the reference file in the specified structure) and/or a separate font (after generating the renderings file directly by running hb-shape on the test cases). If the font file of any of the four given fonts is updated, just copy in the new version and execute the harfbuzzrendering.py script; then testing can be done as described earlier.

The baseline glyph names files aren't ready yet with the complete glyph names of all 243 words. I will be able to complete them within 1-2 days.

28/7/13

The work this week has been a little slow, with college exams and assignments. This is what I have done so far this week.

I have completed the reference files containing the glyph names of the 243 words, one file for each of the four fonts: Rachana, Meera, Suruma and Lohit-Malayalam.

The code has been modified to handle not only Harfbuzz renderings but also renderings from other engines like Uniscribe, provided the user produces the output of the rendering engine herself/himself. I have created a Python package containing 2 modules, one each for testing and for creating output. The main script, automated_rendering_testing.py, makes use of this package to test and give the final result. To test the framework, one can just run ./automated_rendering_testing.py and then provide the necessary information when asked.

Coming to the tester: first it compares the reference file and the rendering output. Then it creates a file named result.txt containing the wrongly rendered words along with the number corresponding to each word in the test cases file. This file is used only to create the PNG file of the wrongly rendered words if the engine is Harfbuzz; otherwise it is ignored. The actual output is a file, test_result.txt, with the format:

Sl.No    Word    Rendering status (correct/wrong)

The user can view this file to see the status and the wrongly rendered words. The agenda for this week is to re-write the whole code in C. The code can be viewed here: https://github.com/nandajavarma/Automated-Rendering-Testing

11/06/2013

The following modifications to the existing framework were requested by my mentor after a Hangout session held as part of the evaluations:

1. Modify the comparison algorithm to show positive results for words with multiple correct renderings - This modification has been made. Now the user can give multiple glyph-name alternatives separated by commas in the reference file, and if the rendering matches any one of them, the framework returns a positive response (see the sketch after this list).

2. Modify the reference glyph file, adding the glyph names of words with multiple correct renderings. Some corrections were also requested in the existing reference file.

3. Modify the framework so that the user can also test by giving the file names as parameters. This one needs a little more work, as I didn't add options in the argument parser for all the necessary file inputs. Will update this soon.
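
The check in item 1 amounts to a membership test; a minimal sketch (the function name and reference format are illustrative, not the framework's actual code) follows:

def matches_reference(rendered, reference_entry, sep=","):
    """True if `rendered` equals any alternative in `reference_entry`.

    Alternatives are separated by `sep` (commas here, as described above;
    the 9/09 version of the framework switched to semicolons)."""
    return rendered.strip() in (alt.strip() for alt in reference_entry.split(sep))

# matches_reference("r3 k1", "r3 k1,k1 r4")  -> True
# matches_reference("k1 r3", "r3 k1,k1 r4")  -> False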

Along with these, some minor fixes to the script were requested, and all of those have been taken care of.

As for further developments, I plan to create a web interface for this framework; I am currently working on building it using Flask. After that, the framework will be implemented in C (I have added a partially working implementation of this to the repo). After the completion of all this, if time permits, references for other fonts will also be made.

Find my code here: https://gitlab.com/gem/automated-rendering-testing/tree/master

17/08/2013

I have changed the framework interface from its previous form, although the previous front end, automated_rendering_testing.py, is still present in the repo. The new interface, rendering_testing.py, needs all the file names to be provided as command line arguments; the user gets the convenience of tab completion this way. The user has to give 6 files as command line arguments (font file, test cases file, reference file, rendering output, and files to store the output) and an optional directory name (if the engine is Harfbuzz); a hedged sketch of such a command line is given below.
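
For illustration only (the option names below are assumptions, not rendering_testing.py's actual flags; the required/optional split follows the 9/09 notes):

import argparse

parser = argparse.ArgumentParser(description="Automated rendering testing")
parser.add_argument("--reference", required=True, help="reference glyph names file")
parser.add_argument("--rendered", required=True, help="rendering engine output file")
parser.add_argument("--font", help="font file, e.g. Rachana.ttf")
parser.add_argument("--testcases", help="file with the words to test")
parser.add_argument("--output", help="file to store the results")
parser.add_argument("--errors", help="file to store the wrongly rendered words")
parser.add_argument("--directory", help="directory for the PNGs (harfbuzz only)")
args = parser.parse_args()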

If the rendering engine is Harfbuzz, the user can run the script generate_hb_rendering.py with the test cases file and the font file as parameters to create the rendered output file. Otherwise, the user will have to create this file as well, in the prescribed form.

Now, the algorithm that actually tests the rendering was a bit buggy and gave wrong outputs for some words with multiple correct renderings; I have fixed this error, and the feature now gives correct output for the files I tried it with. The next thing I am working on is the web interface, for which I am using the Flask framework. I will make this code public as soon as I get the script running from the page. Find the code here: https://gitlab.com/gem/automated-rendering-testing/tree/master (more info in the README).

25/08/2013

My work this week has been correcting the reference glyph files and developing a web interface for the proposed framework. I have tried to make the reference files as bug-free as possible, and have gone through the glyph names of almost all of the 243 words in 4 fonts. I had to invest a lot of time in this, especially due to one minor misunderstanding of mine about the multiple correct renderings of the words. I hope it will get much more refined after Rajeeshettan proofreads it for 2 fonts, as he has suggested. (I have changed the renderings of words with repham in Rachana so that the dotreph comes first. So words like these: http://troll.ws/image/2e3a872e, http://troll.ws/image/469dd87a, http://troll.ws/image/5838dbec, although they look correct, will be placed in the wrongly-rendered-words list by harfbuzz.)

The next part of this week's work was developing the web interface (excuse my poor design, I am cleaning it up as I write). It doesn't actually show output to the user yet, nor does it make it easy for the user to open files. I hope to have it running the script well within a week's time; it is not ready for review yet, so I would like another week to make it ready for reviewing.

And finally, about the C code I have added to the repo: I will start working on new code in C++ once I am done with the webpage, as I find the present code massively buggy and really inefficient. I hope I'll be able to update it the week after next.

My code here: https://gitlab.com/gem/automated-rendering-testing/tree/master

9/09/2013

Here is the present status of the project.

  • The testing framework can now evaluate words with multiple correct renderings, provided the correct renderings are given in the reference file separated by semicolons.
  • The reference glyphs for both Rachana and Meera have been updated to match the latest upstream changes (changes in glyph names).
  • A reference for a Devanagari font is being added to the repo.

The present status of the framework:

  • rendering_test.py can accept up to 7 inputs: the test cases file, reference file, rendered output file, font file, output file, error file and a directory name.
  • Of these, everything but the reference file and the rendering output is optional.
  • Output is produced according to the parameters passed.
  • The pep8 errors reported before have been cleared.

By the end of this week, I am planning to finish:

  • Complete the Devanagari references
  • Work on the C++ implementation of the code, the immediate next priority
  • Proofread the Suruma and Lohith-Malayalam test cases

Once this is all done, I will work on the web interface.

Find my code here: https://github.com/nandajavarma/Automated-Rendering-Testing