All posts by Don

What about the “Declaration of Linguistic Rights”?

There are probably not many people who have heard of the Universal Declaration of Linguistic Rights (UDLR). The whole concept of linguistic rights is not widely known or discussed outside of some “MINEL” (minority, indigenous, national, endangered, local) language communities and language experts and activists. During this International Year of Languages, and with an upcoming Symposium on Linguistic Rights in the World (Geneva, 24 April), it would seem to be an ideal moment to ask where we are going with the UDLR and the whole concept.

The story behind the UDLR is apparently that it was initiated in September 1994 by the International PEN Club’s Translations and Linguistic Rights Committee and the Escarré International Centre for Ethnic Minorities and Nations, and culminated with its adoption at the World Conference on Linguistic Rights held in Barcelona on 6-9 June 1996. UNESCO was asked for its support, and apparently accorded it. However, the UDLR has not been ratified by the UN General Assembly and does not have the status in international law that something like the Universal Declaration of Human Rights (UDHR) has.

Speaking of the latter, language is mentioned as a factor not to be used to limit application of the rights enumerated therein:

Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. … (UDHR, Article 2; my emphasis added)

However, this is not quite the same as – or at least does not have the same emphasis as – “linguistic rights,” which concern individual and community rights to use a language. Hence the motivation to write something like the UDLR.

The point is perhaps clearer in considering the extreme opposite – “linguistic genocide” – which refers to deliberate efforts by a government or power to prevent, limit, and ultimately eliminate the use of a specific language, and may be regarded as a type of cultural genocide.

There is an interesting discussion of the latter and international law in the advanced version of an expert paper on children’s education and human rights prepared for the upcoming 7th Session of the Permanent Forum on Indigenous Issues (21 April-2 May in New York). The paper was submitted by Lars-Anders Baer (prepared in cooperation with Ole Henrik Magga, Robert Dunbar and Tove Skutnabb-Kangas) and entitled “Forms of Education of Indigenous Children as Crimes Against Humanity?” According to the authors, cultural genocide was not explicitly included in the Convention on the Prevention and Punishment of the Crime of Genocide (adopted by the UN in 1948, the same year as the UDHR) for various reasons. However, the authors find that there are still ways that this international agreement can be used against cultural genocide, including linguistic genocide.

Nevertheless it seems that while the field of international law and human rights is a complex and evolving one, there are some significant gaps when it comes to languages. Specifically, there are apparently no explicit protections of linguistic rights such as those proposed in the still-unofficial UDLR of 1996. But is the UDLR the best way to fill these gaps? One expert suggested that it might need a rewrite before it could hope for international ratification. But there has, to my knowledge, been no such discussion. It would be a shame if the International Year of Languages were to pass without any serious consideration of picking up this initiative.

A small positive step would be to begin by focusing on the rights of children, as the abovementioned article does. In a different context I’ve also called attention to the punishment of children for speaking their mother tongues in Africa (a practice that has been known in many other parts of the world as well). An earlier example is Tove Skutnabb-Kangas’s “Declaration of Children’s Linguistic Rights” published in 1995 (originally in 1986; thanks to Joan Wink for calling my attention to it):

  1. Every child should have the right to identify with her original mother tongue(s) and have her identification accepted and respected by others.
  2. Every child should have the right to learn the mother tongue(s) fully.
  3. Every child should have the right to choose when she wants to use the mother tongue(s) in all official situations.

At the very least, perhaps this short formulation and the longer UDLR could be publicized more in order to help raise awareness about linguistic rights issues.


“Lost Crops of Africa”

The third and final volume of the Lost Crops of Africa series was recently published by the National Academies Press. Its topic is Fruits. I just received a copy, as well as a copy of the second volume, on Vegetables, which was published two years ago. Vol. 1, on Grains, was published in 1996.

In that gap of time is a story, but the good news is that this project has finally been brought to a successful conclusion, the result of an incredible effort by Dr. Noel Vietmeyer and Mark Dafforn. The concept is that there are a lot of important cultivated and wild foods native to Africa that are neglected in research and planning, and so in effect “lost” beyond the local areas where they are well known.

Taken together the three volumes profile 11 cultivated and several wild grains, 18 vegetables, and 24 cultivated and wild fruits. I won’t list them here, but hope to take a few moments to highlight individual species and my comments on them in the future.

I had the privilege of contributing briefly to this project in the early stages, mainly as an intern in 1992 with an office of the National Academy of Sciences/National Research Council called BOSTID (Board on Science and Technology for Development). At the time the plan was for a six-volume series covering grains, cultivated fruits, wild fruits, vegetables, legumes, and roots and tubers. As I was told, the idea grew out of an earlier successful project on Lost Crops of the Incas (1989), but it very quickly became apparent that in the case of Africa there were quite a lot of species of interest.

Unfortunately BOSTID, which had produced a lot of quality (and interesting) publications since its establishment in 1970, disappeared into another office in a mid-1990s reorganization, and the Lost Crops of Africa project was put on hold. Funding was found to publish Vol. 1 in 1996, but then the effort relied on Noel and Mark, and a decision was made to condense the rest of the series into two volumes. Mark led the project to ultimately complete editing and publication (sponsored by the Africa Bureau and the Office of Foreign Disaster Assistance of USAID). Incredibly, the effort altogether spanned 20 years. Mark and Noel deserve a huge amount of credit for their perseverance on this project.

I haven’t found any reviews of volumes two and three, but from a quick perusal these cover quite a number of species in the same highly readable style as vol. 1 (which was summarized in the New York Times on April 23, 1996; see also a review in ODI’s Natural Resource Perspectives 23 [9/97], and a short critical perspective on H-Africa).

Altogether the contribution of this series is in bringing various edible plant species to broader attention in a world that focuses – at its own risk – on a few cultivars of a few main crops. Having this information in book format is of obvious use (such volumes from BOSTID are still referenced in the field, and these post-BOSTID volumes will no doubt continue to be as well). Much has changed since the first volume was published in terms of the technologies for disseminating information, and I’m given to think that a wiki format to complement the online versions of the books could facilitate updates and ongoing contributions by specialists in the field. That would assure the longer-term impact of this important work as a living resource. Who would set it up and maintain it is another issue.


2008 Linguapax Prize winner: Neville Alexander

The recipient of the Linguapax Prize for 2008 is Dr. Neville Alexander of South Africa. The prize is awarded annually (since 2000) in recognition of contributions to linguistic diversity and multilingual education.

Although the Linguapax site does not at this writing have updated information, the website of the UNESCO Centre of Catalonia (which is connected with Linguapax) has this press release dated 22.02.2008:

The South African linguist Neville Alexander will receive the Linguapax Award today in Barcelona, on the occasion of the Mother Language Day. The ceremony is framed in the Intercultural Week organised by the Ramon Llull University. Alexander, who coordinates the Project for the Study of Alternative Education in South Africa has devoted more than twenty years of his professional life to defend and preserve multilingualism in the post-apartheid South Africa and has become one of the major advocates of linguistic diversity.

There is various material about Dr. Alexander available online.

I don’t want to be negative about the Linguapax Institute‘s efforts, but publicity about this really has been lacking. An email request to Linguapax for more information received no reply. I hope to have more information about Linguapax and its important work in a future posting.


2008 Year of the Frog

Anyone can declare a year of something, and several conservation organizations have combined to declare 2008 the Year of the Frog. They’re calling attention to the importance of amphibians (apparently about half the species in the world are threatened or endangered), and to an “Amphibian Conservation Action Plan,” which includes a planned “Amphibian Ark” “in which select species that would otherwise go extinct will be maintained in captivity until they can be secured in the wild.”

Oh, and this being “leap day,” some have apparently jumped at the chance to call it International Day of the Frog.


International Year of the Potato

The UN has given 2008 several designations, of which International Year of the Potato (IYP) is another one.* The reason for IYP is given as follows:

The celebration of the International Year of the Potato (IYP) will raise awareness of the importance of the potato – and of agriculture in general – in addressing issues of global concern, including hunger, poverty and threats to the environment.

The origin of IYP was apparently a proposal by Peru within the UN Food and Agriculture Organization (FAO), which eventually resulted in a UN General Assembly resolution.

The IYP website is nicely organized, with information in the six UN languages, including activities for children.

* I’ve previously commented on the International Year of Planet Earth, and for the International Year of Languages, I have commented briefly and devoted a section of this website to it.


Reflecting on “Computing’s Final Frontier”

In the March issue of PC Magazine, John Dvorak comments on four areas of computer technology in his column entitled “Computing’s Final Frontier“: voice recognition; machine translation (MT); optical character recognition (OCR); and spell-checkers. Basically he’s decrying how little progress has been made on these in recent years relative to the vast improvements in computer capacities.

I’d like to comment briefly on all four. Two of those – voice recognition, or actually speech recognition, and MT – are areas that I think have particular importance and potential for non-dominant languages (what I’ve referred to elsewhere as “MINELs,” for minority, indigenous, national, endangered or ethnic, and local languages) including African languages on which I’ve been focusing. OCR is key to such work as getting out-of-print books in MINELs online. And spell-checkers are fundamental.

Voice recognition. Dvorak seems to see the glass half empty. I can’t claim to know the technology as he does, and maybe my expectations are too low, but from what I’ve seen of Dragon NaturallySpeaking, the accuracy of speech recognition in that specific task environment is quite excellent. We may do well to separate out two kinds of expectations: one, the ability of software to act as an accurate and dutiful (though at times perhaps a bit dense) scribe, and the other as something that can really analyze the language. For some kinds of production, the former is already useful. I’ll come back to the topic of software and language analysis towards the end of this post.

Machine translation. I’ve had a lot of conversations with people about MT, and a fair amount of experience with some uses of it. I’m convinced of its utility even today with its imperfections. It’s all too easy, however, to point out the flaws and express skepticism. Of course anyone who has used MT even moderately has encountered some hilarious results (mine include English to Portuguese “discussion on fonts” becoming the equivalent of “quarrels in baptismal sinks,” and the only Dutch to English MT I ever did which yielded “butt zen” from what I think was a town name). But apart from such absurdities, MT can do a lot – I’ll enjoy the laughs MT occasionally provides and take advantage of the glass half full here too.

But some problems with MT results are not just inadequacies of the programs. From my experience using MT, I’ve come to appreciate the fact that the quality of writing actually makes a huge difference in MT output. Run-on sentences, awkward phrasing, poor punctuation and simple spelling errors can confuse people, so how can MT be expected to do better?

Dvorak also takes a cheap shot when he considers it a “good gag” to translate with MT through a bunch of languages back to the original. Well, you can get the same effect with the old grapevine game of whispering a message through a line of people and seeing what you get at the end – in the same language! At my son’s school they did a variant of this with a simple drawing seen and resketched one student at a time until it got through the class. If MT got closer to human accuracy, you’d still have such corruption of information.

A particularly critical role I see for MT is in streamlining the translation of various materials into MINELs and among related MINELs, using work systems that involve perhaps different kinds of MT software as well as people to refine the products and feed back into improvements. In my book, “smart money” would take this approach. MT may never replace the human translator, but it can do a lot that people can’t.

Optical character recognition. Dvorak finds fault with OCR, but I have to say that I’ve been quite impressed with what I’ve seen. The main problems I’ve had have been with extended Latin characters and limited dictionaries – and both of those are because I’m using scanners at commercial locations, not on machines where I can make modifications. In other words, I’d be doing better than 99% accuracy for a lot of material if I had my own scanners.

On the other hand, when there are extraneous marks – even minor ones – in the text, the OCR might come up with the kind of example Dvorak gives of symbols mixed up with letters. If you look at the amazing work that has been done with Google Patent Search, you’ll notice on older patents a fair amount of misrecognized character strings (words). So I’d agree that it seems like one ought to be able to program the software to be able to sort out characters and extraneous marks through some systematic analysis (a series of algorithms?) – picking form out of noise, referencing memory of texts in the language, etc.
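
To illustrate what I mean by picking form out of noise, here is a minimal sketch in Python – my own illustration, not how any actual OCR engine works, and the small word list is purely hypothetical – of stripping extraneous marks and then snapping an ambiguous string to the closest entry in a word list for the language:

    # A rough sketch of dictionary-based post-correction of OCR output.
    # Illustrative only; the tiny word list below is hypothetical.
    import difflib
    import re

    LEXICON = {"linguistic", "rights", "language", "declaration"}

    def clean_token(token):
        # Drop characters that are clearly not letters (extraneous marks).
        return re.sub(r"[^A-Za-z\u00C0-\u024F'-]", "", token)

    def correct_token(token, lexicon=LEXICON):
        # Keep the token if it is a known word; otherwise snap it to the
        # closest word-list entry, if one is reasonably close.
        word = clean_token(token).lower()
        if not word or word in lexicon:
            return word
        matches = difflib.get_close_matches(word, lexicon, n=1, cutoff=0.8)
        return matches[0] if matches else word

    # "l1ngu!stic r1ghts" comes out as "linguistic rights"
    print(" ".join(correct_token(t) for t in "l1ngu!stic r1ghts".split()))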

In any event, enhancing OCR would help considerably with more digitization, especially as we get to digitizing publications in extended Latin scripts on stenciled pages and poor quality print of various sorts too often used for materials in MINELs.

Spell-checkers. For someone like me concerned with less-resourced languages, the issues with spell-checkers are different and more basic – so let me get that out of the way first. For many languages it is necessary to get a dictionary together first, and that may have complications like issues of standard orthographies and spellings, variant forms, and even dictionary resources being copyrighted.
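
As a concrete illustration of that first step, here is a minimal sketch – the corpus directory and the tokenizing rule are assumptions for the sake of the example – of bootstrapping a word list from a small collection of texts, with frequency counts to help spot forms that may be typos or unresolved spelling variants:

    # A sketch of building a word list from a small corpus of texts.
    # The directory name and tokenizing rule are illustrative assumptions.
    from collections import Counter
    from pathlib import Path
    import re

    def build_wordlist(corpus_dir, min_count=2):
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8")
            # Keep letter sequences, including extended Latin characters
            # (e.g. ɛ, ɔ, ŋ) used in many African orthographies.
            counts.update(re.findall(r"[^\W\d_]+(?:'[^\W\d_]+)*", text.lower()))
        # Forms below min_count are worth reviewing by hand: they may be
        # typos, or spelling variants an orthographic decision must resolve.
        return Counter({w: c for w, c in counts.items() if c >= min_count})

    wordlist = build_wordlist("corpus/")  # hypothetical folder of .txt files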

In the context of a super-resourced language like English, Dvorak raises a very valid criticism here regarding how the wrong word correctly spelled is not caught by the checker. However, it seems to me that the problem would be appropriately addressed by a grammar-checker, which should spot words out of context.

This leads to the question of why we don’t have better grammar-checkers. I recall colleagues raving in the mid-90s about the then-new WordPerfect Grammatik, but it didn’t impress me then (nevertheless, one article in 2005 found it was further along than Word’s grammar checker). The difference is more than semantic – grammar checkers rely on analysis of language, which is a different matter than checking character strings against dictionary entries (i.e., spell-checkers).
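
A toy example may make the distinction clearer. The sketch below is purely illustrative – it is not how Word or Grammatik actually work – contrasting a spell-check as a simple word-list lookup with a crude context check, where a tiny hand-made bigram table stands in for real analysis of the language:

    # A toy contrast between a spell-check (word-list lookup) and a crude
    # context check; the small dictionary and bigram table are made up.
    DICTIONARY = {"i", "would", "like", "to", "right", "write", "a", "letter"}
    BIGRAMS = {("to", "write"): 50, ("a", "letter"): 40, ("to", "right"): 1}

    def spell_check(words):
        # Flags only strings that are not in the word list.
        return [w for w in words if w not in DICTIONARY]

    def context_check(words, threshold=5):
        # Flags word pairs that are rare or unseen in the bigram table.
        return [(prev, cur) for prev, cur in zip(words, words[1:])
                if BIGRAMS.get((prev, cur), 0) < threshold]

    sentence = "i would like to right a letter".split()
    print(spell_check(sentence))    # [] -- every word is spelled correctly
    print(context_check(sentence))  # includes ("to", "right"), among other
                                    # pairs this tiny table has never seen

Even this crude context model flags “to right,” which no amount of checking individual words against a dictionary would ever catch.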

Although this is not my area of expertise, it seems that the real issue beneath all of the shortcomings Dvorak discusses is the application of language analysis in computing (human language technology). Thus some of the solutions could be related – algorithms for grammar checking could spot properly-spelled words out of place and also be used in OCR to analyze a sentence with an ambiguous word/character string. These may in turn relate to the quality of speech recognition. The problems in MT are more daunting but in some ways related. So, a question is, are the experts in each area approaching these with reference to the others, or as discrete and separate problems?

A final thought is that this “final frontier” – what I have sometimes referred to as “cutting edge” technologies – is particularly important for speakers of less-resourced languages in multilingual societies. MT can save costs and make people laugh in the North, but it has the potential to help save languages and make various kinds of information available to people who wouldn’t have it otherwise. Speech recognition is useful in the North, but in theory could facilitate the production of a lot of material in diverse languages that might not happen otherwise (it’s a bit more complex than that, but I’ll come back to it another time). OCR adds increments to what is available in well-resourced languages, but can make a huge difference in available materials for some less-resourced languages, for which older publications are otherwise locked away in distant libraries.

So, improvement and application of these cutting edge technologies is vitally important for people / markets not even addressed by PC Magazine. I took issue with some of what Dvorak wrote in this column but ultimately his main point is spot on in ways he might not have been thinking of.
