27 December 2009

New issue of SLEID (online journal on HE-related topics)

http://sleid.cqu.edu.au/viewissue.php?id=21

 

Articles and links to them:

http://sleid.cqu.edu.au/include/getdoc.php?id=843&article=262&mode=pdf
Children's wonder-initiated research: a primary school case study
Shelley A Kinash and Michelle Hoffman

http://sleid.cqu.edu.au/include/getdoc.php?id=846&article=259&mode=pdf
Using Likert Scales with images: Addressing validity concerns
Laura Reynolds-Keefer, Robert Johnson, Tammiee Dickenson and Laura McFadden

http://sleid.cqu.edu.au/include/getdoc.php?id=848&article=254&mode=pdf
The Integration of Diverse Perspectives in Counseling Practice
Adele Baruch-Runyon

http://sleid.cqu.edu.au/include/getdoc.php?id=851&article=281&mode=pdf
Toward Equal Rights for Women in Turkey: Nonformal Education and the Law
Mary Ann Maslak

http://sleid.cqu.edu.au/include/getdoc.php?id=853&article=237&mode=pdf
A model for alternative assessment of students' problem solving processes: conceptual and experimental validation
Rama Klavir and Malka Gorodetsky

http://sleid.cqu.edu.au/include/getdoc.php?id=855&article=265&mode=pdf
Personality or Pedagogy: Which personal characteristics are necessary for ESL teachers to possess and what role do they play compared to formal pedagogical training in ESL teaching?
Lois N Spitzer

24 December 2009

TEFL FORUM: ELT in Japan in 2010

These feature articles and articles in brief are under development for publication in 2010 at the sister blog, ELT-J Online Magazine:

1. Vocabulary activities (semantic mapping) for the conversation class.
2. Vocabulary activities (semantic mapping) for the beginning-level writing class.
3. Teaching English /l/ vs. /r/ (applied phonology).
4. Introducing a different sort of audio-visual electronic dictionary for FL learning (follows up on the previous article about units of phonology, the 'visually salient articulatory gesture').
5. Variations of and considerations for the multiple-choice vocabulary question for FL practice and assessment.
6. A look at schema theory and its applications for ELT.
7. A look at 'phonemic awareness' and 'phonological awareness'--what they are and how they might apply to ELT.
8. Analysis of the issue 'phonics vs. whole language' from an ELT perspective.
9. An adaptation of 'semantic feature analysis' to the classroom study of EFL vocabulary.

23 December 2009

Waseda professor leads way for Android OS in Japan

http://techon.nikkeibp.co.jp/english/NEWS_EN/20080827/156975/

excerpt:

Japanese Community Formed to Open Shop for Android Apps


A community to promote the diffusion and development of Google Inc's "Android" platform for mobile phones will be launched in Japan on Sept 12, 2008.

The community is aimed at continuing and advancing the activities of the "Android Study Meetings," which have been conducted to give engineers the opportunity to share information about Android.

Professor Fujio Maruyama of Waseda University revealed a memorandum of intent to establish the community at the 9th Android Study Meeting on Aug 25. An opening ceremony will be conducted at Fujisoft Inc's Akihabara Building from 19:00 Sept 12.


See linked article for full information.

GOOGLE APPS FOR INSTITUTIONS

While popular in the US, apparently only one institution in Japan has moved to Google Apps to carry much of its IT services burden. That is Waseda University, a top private university in Tokyo and one that is at the center of Google's Android OS development for Japan.

To see why Google recommends Google Apps for institutions, see:

http://www.google.com/a/help/intl/en/edu/appsatschool.html#utm_campaign=gonegoogleuni&utm_medium=oa&utm_source=en-oa-na-us-gonegoogleuni-ihe-blogs&utm_term=ihe-blogs-appsleads

http://www.google.com/a/help/intl/en/edu/index.html


Top 10 reasons to use Google Apps

excerpt:

>>  1. Students will love you for it

Schools tell us that when they ask their students what email they'd prefer, they overwhelmingly say Gmail.

"Our students approached us about a year ago, saying that we needed to improve our email and collaboration services. We actually had our student government tell us, 'we want you to implement Google Apps.'" - Wendy Woodward, Director of Technology Support Services, Northwestern University

2. Free up your IT

Focus your IT on activities that add value instead of worrying about the uptime of your email services.

"Google Apps has allowed us to get out of providing these commodity type services - such as maintaining an email and calendaring system - and focus on the things that we are uniquely equipped to do, like providing more resources to be able to better support teaching, learning and research." - Todd Sutton, Assistant Vice Chancellor for Application Services, UNC Greensboro

3. Easy to deploy

No software to install, no hardware to buy, just validate your MX records and create your accounts to get started. To integrate with what you already have, we work with open standards, have created a multitude of APIs, can point you to open source solutions for common integrations, and have approved partners with experience deploying Apps in schools. <<

See link above for all 10 reasons.
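The "validate your MX records" step in reason 3 above can be sketched in code. This is a minimal illustration, not Google's actual setup procedure: the domain and mail-host names below are sample data of my own, and in practice you would obtain the real records with a lookup tool such as `dig MX yourdomain.edu` and compare them against the values Google's setup pages give you.

```python
# A minimal sketch of checking MX records, assuming dig-style output lines.
# The sample domain and mail hosts below are illustrative, not a live lookup.

def parse_mx_records(zone_lines):
    """Parse dig-style MX lines into (priority, mailhost) pairs,
    sorted so the preferred (lowest-priority) host comes first."""
    records = []
    for line in zone_lines:
        fields = line.split()
        # Expected shape: name TTL class 'MX' priority mailhost
        if len(fields) == 6 and fields[3] == "MX":
            records.append((int(fields[4]), fields[5].rstrip(".").lower()))
    return sorted(records)

sample = [
    "example-university.edu. 3600 IN MX 10 aspmx.l.google.com.",
    "example-university.edu. 3600 IN MX 20 alt1.aspmx.l.google.com.",
]

for priority, host in parse_mx_records(sample):
    print(priority, host)
# The first line printed is the preferred (lowest-priority) mail host.
```

Once the preferred host matches what the provider's documentation specifies, mail for the domain will route to the hosted service.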

18 December 2009

OECD to Japan's HE: Embrace the Change (for Change's Sake)

After being given yet another 'Cook's tour', the OECD's neoliberal ad hoc panel left Japan with its typical convictions further entrenched. Outside of such polite circles, talk is that the 'public' is now upset that, with the 'big bang' reforms of 2004, surprise surprise, tuition costs went up at the former national and former public universities, and that the reforms have been an unmitigated disaster even as the predicted demographic disaster has failed to materialize. But the neoliberals of the OECD will just say, "Ah, too little, too late." Meanwhile, with the LDP's fall from power, the Ministry of Education is in disarray (more than its usual disarray, that is), and the former national universities are headed into financial and fiscal decline.

The full OECD report is available here. JPN HEO will try to analyze the report and write a review in the near future. 

http://www.oecd.org/dataoecd/44/12/42280329.pdf

Excerpt one:

It is against this background that, on April 1st 2004, Japanese higher education underwent the kind of ‘big bang’ reform which was unprecedented. Though regarded with some hostility within the universities themselves, there was a widespread political and public sentiment that reform was overdue and that, in comparison with the higher education systems among Japan’s traditional peers in North America, Australasia and Europe, Japanese universities were falling behind. The reforms, at least in their intention, were fundamental and far-reaching. As a result, though a few years have elapsed since the reforms were introduced, their impact is still working its way through. Japanese tertiary education is still in transition. The desired benefits of the reforms are not yet secured and if they do not materialise, both political and public patience is likely to wear thin. There is a widespread demand that the tertiary education system become, via the modernisation agenda embedded in the reforms, more responsive, more agile, more globally competitive and accompanied by higher standards and higher quality all round.


Excerpt two:

In spite of the pace and scope of change in recent years, much remains to be done. The Ministry of Education, Culture, Sports, Science, and Technology has begun to change from an organisation accustomed to exercising detailed managerial and financial direction of higher education institutions into one that no longer does so. However, it has not yet fully worked out its new role within the tertiary system, nor has it fully equipped itself with the performance-based information and repertoire of incentives that it needs to monitor and shape the activities of newly autonomous institutions. And, for their part, some higher education institutions appear keen to operate as they long have done, holding fast to the Humboldtian vision of the university and to long-standing institutional practices - with respect to academic careers, to internal resource allocation, and institutional leadership.

As we have outlined in the report, we think university leaders and ministry officials have much to gain from embracing continued change; indeed, that it is the necessary condition for gaining wider public investment in the sector. And, we think that central government authorities and stakeholders outside of government have much to gain from enlisting their support. During the course of our visit we met with men and women - professors, administrators, and civil servants - who clearly grasped the new possibilities that deepened reform makes possible, and who are eager to press forward with change. Working together with patience, trust, and understanding they can ensure that Japan's system of tertiary education stands as a model to the entire OECD, and, indeed, the wider world.

Plans for Manga Library Run into Trouble with New Government in Japan

It looks like Meiji University will host the new manga library, though.


1. http://www.google.com/hostednews/afp/article/ALeqM5gJAzSuC3AOKkfxHPbRwOEBevmQKg

Japanese university plans huge 'manga' library

(AFP)

excerpt:

>>  TOKYO -- In a move to promote serious study of Japanese manga, a university in Tokyo plans to open a library with two million comic books, animation drawings, video games and other cartoon industry artifacts.

Tentatively named the Tokyo International Manga Library, it would open by early 2015 on the campus of the private Meiji University, and be available to researchers and fans from Japan and abroad. <<

excerpt:


>> The former conservative government of Taro Aso, which was ousted in August elections, had earmarked 11.7 billion yen (128 million dollars) for a museum on Japanese cartoon art and pop culture to be built in Tokyo.

But the plan, part of wider stimulus measures, was axed by the new centre-left government, which criticised the construction as a "state-run manga cafe" that has nothing to do with boosting the economy. <<


2. http://www.exfn.com/manga-library-planned-for-japan


>> A Tokyo university is planning to open a library to promote serious study of Japanese manga comics.

The proposed Tokyo International Manga Library will house two million comic books, animation drawings, video games and other cartoon industry artefacts.

It is hoped the new library will open by early 2015 at Meiji University. <<

Setting the record straight on the language policies of JALT


Well, I've waited over ten years to get a serious response from Oda or Braine.

See:


Message 2: Response to Masaki Oda's chapter in Braine's new book



Date: Tue, 29 Jun 1999 16:49:03 +0900

From: Charles Jannuzi <[e-mail deleted]>

Subject: Response to Masaki Oda's chapter in Braine's new book


[On Monday, June 28, LINGUIST posted a review of George Braine's new book Non-Native Educators in English Language Teaching (Review by Mae Wlazlinski, LINGUIST Review, 10.999.) We received several messages with comments concerning the chapter by Masaki Oda and its topic of English centrism in Japan, specifically in JALT. The messages are included here and we open up discussion on the matter.] I am interested in this title--specifically in the Oda chapter. I have not read his chapter or the rest of the book yet, but I was involved unwillingly in forming Oda's scholarship on the languages of JALT.

There was a debate in the pages of 'The Language Teacher', JALT's monthly magazine, that Oda instigated. The discussion actually started when Richard Marshall pointed out that should JALT change to a bilingual (English and Japanese) official language policy, there might be a concern about JALT's international status in its publications and conferences (to some extent dominated by anglophone scholars who do not live and work in Japan).

It was a coherent argument put forward by Marshall, at least in my opinion, calling for caution on a switch to an official two-language policy, especially in light of the fact that not that many people are going to volunteer to translate and interpret for free and JALT doesn't have the money to pay for it. JALT did enact a two-language policy. Oda responded to Marshall in the pages of TLT, accusing Marshall and the leadership of JALT of "linguistic imperialism" and "linguicism".

I responded in support of Marshall to this extent: I agreed that JALT should have a two-language policy, but that Marshall and the leadership of JALT were not linguicists or linguistic imperialists. I also pointed out that, since English-speakers are a clear minority in Japan and there are other language minorities here, a two-language policy of Japanese and English would not eliminate the language bias problem or other forms of prejudice (many such problems stem from English's minority status in Japan, in fact). In other words, Japanese was potentially as much a language of discrimination as Oda perceived English to be.

Oda then responded in this manner: he attempted to paraphrase both Marshall's views and my views as one conflated set of views and called me too a linguicist and linguistic imperialist. What's more, he seemed upset that I would call him a linguicist and linguistic imperialist because I had pointed out how Japanese had been a language of colonial, imperialist aggression and oppression. I never directly accused Oda of being a linguicist or linguistic imperialist (the terms are far too problematic for me to fling them around like that). I was myself upset that I had to continue this debate just to defend myself from such misrepresentation and abuse in print. I think had the editor of TLT read the entire exchange upon receiving Oda's second response, that response would never have been published.

My concern now is that Oda has created some sort of fiction in the pages of Braine's book because he was upset and embarrassed over the exchange in TLT. I will try to find time to buy and read the book, and if I find that Oda has done something that approaches a one-sided, skewed version of the debate in JALT about language just to serve himself and some sort of personal vendetta, I will seek recourse in print and possibly with legal measures.

17 December 2009

ELT in Japan online publication now launched

Just to sum up the actions taken: I've decided to use the blog format to publish a 'practitioner journal' for those who teach EFL in Japan (and E. Asia).

The publication is located here:

http://eltinjapan.blogspot.com/

Here is the first issue's top feature:

http://eltinjapan.blogspot.com/2009/12/elt-j-issue-1-feature-article-ten.html

Here is a summary of what is in issue 1 and a list that links to all the articles:

http://eltinjapan.blogspot.com/2009/12/elt-in-japan-issue-1-december-2009.html

Here is a preview of future articles in the issues that will appear in 2010:

http://eltinjapan.blogspot.com/2009/12/preview-of-future-issuess-of-elt-j.html

TEFL FORUM: Issue 1 (December 2009) of 'ELT in Japan' (ELT-J)

http://eltinjapan.blogspot.com/2009/12/elt-in-japan-issue-1-december-2009.html

17 December 2009
'ELT in Japan', Issue 1 (December 2009)

These articles were first published at Japan Higher Education Outlook, the related blog established prior to ELT-J. They have been compiled to form the first issue of ELT-J.

The entire collection can be navigated from this page:

http://japanheo.blogspot.com/2009/12/tefl-forum-so-far.html

>> ELT-J Issue 1 Table of Contents <<

1. Proposes a more useful model/basic unit of phonology for EFL.

http://japanheo.blogspot.com/2009/12/facially-salient-articulatory-gesture.html


2. Looks at a literacy and phonology 'crutch' often used by Japanese EFL learners and relates it to standard concepts in ELT and EFL literacy.

http://japanheo.blogspot.com/2009/12/do-japanese-efl-students-need-katakana.html


3. Sums up ten major reasons why TEFL and EFL are so problematic in Japan and at Japanese universities.

http://japanheo.blogspot.com/2009/12/tefl-forum-ten-reasons-why-english.html

4. This is an article that is conceptually related to the article in item #2 on this list but comes at the issues from a different angle--that is, positive transfer vs. negative interference from the native literacy backgrounds of EFL students.

http://japanheo.blogspot.com/2009/12/tefl-forum-native-writing-systems.html

5. Looks at why TEFL/ELT/TESOL need a new approach to 'theory' and 'practice', where real theory emerges from real practice.

http://japanheo.blogspot.com/2009/09/breaking-down-theory-vs-practice.html

6. Questions the value of most academic research on ELT and FLL (e.g., 'SLA' research).

http://japanheo.blogspot.com/2009/06/why-is-research-in-elttefltesolalsla-so.html

7. An earlier version of item #3 on this list. Gives a briefer overview of the ten reasons and links back to the individual articles in which they were discussed in more detail.

http://japanheo.blogspot.com/2009/04/ten-reaons-why-english-education-in.html


8. Gives an overview of the many issues foreign nationals (e.g., 'native speakers of English') face teaching at the level of higher education in Japan, including TEFL at this level.

http://japanheo.blogspot.com/2008/03/teaching-as-foreign-national-at.html



Labels: AL, EFL, ELT, SLA, teaching English in Japan, TEFL, TEFL Forum, TESOL

------------------
Future issues will include articles on teaching vocabulary, pronunciation, and writing multiple choice questions.
Posted by CEJ at 11:23
Labels: AL, EFL, ELT, ELT in Japan, SLA, teaching English in Japan, Teaching in Japan, TEFL, TEFL Forum, TEFL in Japan, TESOL

TEFL FORUM - Preview of future issues of ELT-J Online Magazine

These will also be announced in the TEFL Forum section of Japan Higher Education Outlook.

Preview of future issues of 'ELT-J Online Magazine':

1. Vocabulary activities for the conversation class.
2. Vocabulary activities for the writing class.
3. Teaching English /l/ vs. /r/ (applied phonology).
4. Introducing a different sort of audio-visual electronic dictionary for FL learning.
5. Variations of the multiple-choice vocabulary question for FL practice and assessment.

All these and more are under development for publication at ELT-J.

16 December 2009

TEFL Forum - ELT in Japan Groups - Google Group

Join today.

TEFL FORUM - ELT in Japan Groups - Yahoo Group

Join today.

Japan's Government Says "Institutional Bottlenecks" Hinder Science Plan

Perhaps this helps explain why there are no Japanese universities in the THES-QS top 20 in 2009? For example, the difficulties surrounding foreign personnel who teach and conduct research continue. Also, the report cites an innovation--fixed-term positions--as an improvement, but if anything such contractual posts have contributed to instability in research and to the unfair treatment of foreign personnel, women, and junior colleagues.

http://www.mext.go.jp/english/wp/1260270.htm

White Paper on Science and Technology 2008 (Provisional Translation)

http://www.mext.go.jp/component/english/__icsFiles/afieldfile/2009/04/23/1260307_1.pdf


excerpt:

Elimination of Institutional Bottleneck to Dissemination of S&T Outcome to Society

To create S&T-based innovation, it is necessary to ensure that research results achieved at universities and other research institutions be steadily disseminated to society. Active exchange of researchers, smooth implementation of research activities, industry-academia-government cooperation, etc. have great effects not only on activation of R&D but on the return of research results to society and are the keys for enhancing effects of human and material investment on S&T. In order to realize this, approaches for elimination of institutional bottleneck in various aspects, such as the research exchange system, fixed-term system for researchers, independent administrative institution system, national university corporation system, and the intellectual property system, were performed to obtain significant progress. However, it is often said that there still exist institutional bottlenecks: emigration and immigration management of foreign researchers; working environment of female researchers who are involved in childbirth and child rearing; treatment of retirement allowance associated with movement between the research institutions; and fund procurement environment of research institutions have been identified. Among those, it is very important that elimination of institutional bottlenecks relating to clinical research involving clinical trials is pointed out. In our country encountering an aging society with a declining birth rate, which is the fastest in the world, clinical research involving clinical trials is the R&D means for realizing innovation leading to health enhancement of our nation and activation of the research is considered to bring great national benefits. It is essential to ensure that Japanese nationals can have earlier access to the world's most advanced medical technologies; that the Japanese medical industry can aggressively pursue R&D activities and sharpen its international competitive edge; and that the health of the nation will further improve by eliminating institutional bottlenecks obstructing research activities and promoting clinical research involving clinical trials.

Profile of Japan's top university--U. of Tokyo (Toudai)

Recently Toudai dropped out of the THES-QS top 20 ranking of universities worldwide. I thought this would be a good time to run an alternative version of an earlier piece on the University of Tokyo.

---------------------------

Is University of Tokyo Japan's only world-class university? 
Charles Jannuzi, University of Fukui, Japan

It is unique and elite

When the university system of Japan is compared internationally, one institution is most often cited as Japan's best example of a 'world-class' university. This is, of course, the University of Tokyo ('Toukyou Daigaku', or 'Toudai' for short). Toudai is perhaps most famous for graduating and networking elite bureaucrats and politicians, including prime ministers; however, its supposed lock on leadership in top government has waned over the past two decades. For example, this century's most popular prime minister of Japan, Junichiro Koizumi, and many of his advisors were graduates of the private elite Keio University.

Historically speaking, Toudai has the unique distinction of belonging to two important groups of universities. First, it was established in 1877, soon after the Meiji Restoration, as the 'national university' of Japan. A decade later it became the top institution of the imperial universities ('teikoku daigaku'), a group run by the national government in Japan but extended to Korea and Taiwan as well. These eventually formed a system of nine institutions, each the top institution of its respective country or Japanese region.

Second, University of Tokyo is the foremost (and only national-public) member of Japan's 'Ivy League' of six elite Tokyo institutions (the other five being the private universities of Waseda, Keio, Housei, Meiji, and the Christian Rikkyou). Like its five private elite counterparts, Toudai's traditions and its reputation go back to the Meiji Restoration, a tumultuous period of forced westernization and development. Interestingly, one tradition did not survive the transition to modernity: foreign academics used to comprise the majority of the teaching and research faculty at the national universities during the Meiji and Showa eras, whereas now, in the era of internationalization and globalization, they are but a small percentage.

Toudai joins a third group of institutions in the mass era of HE in Japan

In the post-second world war era, stemming from the Occupation's reform of Japanese university education in 1949 but also in response to the needs of the development state, Toudai has flourished at the pinnacle of an expanded, much less elite national university system, which had grown to a total of nearly 100 institutions. However, after a period of top-down administrative reforms and forced mergers starting in the 1990s, the 87 remaining national universities were given new corporate charters and a considerable degree of administrative independence in April 2004.

Big, diverse and diversified by Japanese university standards

The newly corporatised University of Tokyo consists of three campuses, all in the Kanto region (with other facilities scattered about Tokyo and other parts of Japan). It has a total enrolment of around 28,000 students (quite large for Japan) coming from top senior high schools (and cram schools) from all over the country. Toudai also plays host to 2,100 of Japan's population of 120,000 international students (the largest total in Japan, with Chinese nationals forming the single largest group). Toudai has a faculty of 2,800 professors, associate professors, and lecturers. Annually some 2,200 foreign researchers visit for short and long periods of exchange. Toudai is a co-educational, multi-disciplinary university with a comprehensive range of taught programs, post-graduate research, and professional schools (such as its legendary law school).

Perhaps this enormous diversity in faculties, disciplines, and programmes reveals the university as a 'jack of all trades, but master of none' when compared to more focused, dynamic institutions. Like its cross-town rival, Waseda, Toudai's most famous and revered alumni are perhaps its literary figures, such as Soseki Natsume, Yukio Mishima, Kobo Abe, and Nobel winners, Yasunari Kawabata and Kenzaburo Oe. And, although the university can claim some top prizes in science and technology, the Toudai academic who last won a Nobel, Masatoshi Koshiba in 2002, did so for work on cosmic neutrino detection done in the 1980s. Kyoto University has more top prizes in science, including Nobels, and Tohoku University and University of Tsukuba are both widely considered to be more on the cutting edge in many important areas of scientific research.

Waves of reform in university education during the 1990s brought about sweeping changes in curriculum, graduate education, doctoral level research, and national university administration. The better managed and financed of Toudai's rivals took aim at beating the institution in certain niches, such as in the establishment of western style professional schools in law, business and accounting. Moreover, with their ties to industry and manufacturing, institutions such as the Tokyo Institute of Technology and the Nagoya Institute of Technology have emerged as more effectively focused on science and technology with actual commercial applications.

The University of Tokyo has tried to meet such challenges through the establishment of new graduate schools of an interdisciplinary or innovative nature: Frontier Sciences, Interdisciplinary Information Studies, Information Science and Technology, and Public Policy. Because of the extent of direct subsidy from the national government (either in bloc grants or through competition), the university also hosts some top research institutes, such as its Institute of Medical Science, Earthquake Research Institute, Institute for Cosmic Ray Research, and the Institute for Solid State Physics.

Where Toudai outdoes the West, it is largely unknown in the West

Perhaps where Toudai is most on the cutting edge of science and technology, its people and their accomplishments are also the most unheralded. One very good example of this is Prof. Ken Sakamura's TRON OS project launched in 1984. The successful development of TRON was initially hampered by trade concerns under the US's Super 301 Trade Law (which largely kept it off personal computers because of a fear of reprisals) but was held back also by a lack of cooperation amongst competing companies unused to working together on an 'open source' project.

In its most successful manifestations, TRON is an embedded operating system running such ubiquitous devices as mobile phones, fax machines, kitchen appliances, car navigation systems, etc. It is estimated that some form of TRON now runs over 3 billion such electronic appliances worldwide, making it the world's most popular (but still largely unknown) OS. Because of the speed advantages in TRON computing, even proponents of Linux and Windows computing are now working with the TRON project to produce portable, personal computing and communications devices and hybrid operating systems--the software that will run the hardware that will be required in the coming age of ubiquitous, fully networked computing. TRON, either by itself or in combination with Linux, could also play a major role in a recently announced pan-Asian effort to create an open-source OS for personal computing and the internet.

It would seem that it is with computers that Toudai continues to excel. About the time the TRON OS was really taking off as the embedded OS for Japanese electronic appliances, another research group at the University of Tokyo started the MD Grape project in order to design computer chips for supercomputers doing calculations in astrophysics. Researchers at Riken, a super group of national research institutes and centers in Japan, have adapted Toudai's MD Grape chip for applications in life sciences and molecular dynamics. Meanwhile, research at the University of Tokyo continues in order to develop a more general purpose chip capable of an incredible one trillion plus calculations per second.



-------------------------

See also:

http://www.topuniversities.com/

http://www.topuniversities.com/university-rankings/world-university-rankings/2009/results

TEFL FORUM SO FAR

Here is a list (with links) of some of the TEFL-related articles to appear at JHEO.  
They are listed from most recent to past. The TEFL Forum here at JHEO will then move on to articles on teaching pronunciation and vocabulary.

TEFL FORUM SUMMARY

1. Proposes a more useful model/basic unit of phonology for EFL.

http://japanheo.blogspot.com/2009/12/facially-salient-articulatory-gesture.html

2. Looks at a literacy and phonology 'crutch' often used by Japanese EFL learners and relates it to standard concepts in ELT and EFL literacy.


http://japanheo.blogspot.com/2009/12/do-japanese-efl-students-need-katakana.html

3. Sums up ten major reasons why TEFL and EFL are so problematic in Japan and at Japanese universities.

http://japanheo.blogspot.com/2009/12/tefl-forum-ten-reasons-why-english.html

4. This is an article that is conceptually related to the article in item #2 on this list but comes at the issues from a different angle--that is, positive transfer vs. negative interference from the native literacy backgrounds of EFL students. 

http://japanheo.blogspot.com/2009/12/tefl-forum-native-writing-systems.html

5. Looks at why TEFL/ELT/TESOL need a new approach to 'theory' and 'practice', where real theory emerges from real practice. 

http://japanheo.blogspot.com/2009/09/breaking-down-theory-vs-practice.html

6. Questions the value of most academic research on ELT and FLL (e.g., 'SLA' research).

http://japanheo.blogspot.com/2009/06/why-is-research-in-elttefltesolalsla-so.html


7. An earlier version of item #3 on this list. Gives a briefer overview of the ten reasons and links back to the individual articles in which they were discussed in more detail.


http://japanheo.blogspot.com/2009/04/ten-reaons-why-english-education-in.html

8. Gives an overview of the many issues foreign nationals (e.g., 'native speakers of English') face teaching at the level of higher education in Japan, including TEFL at this level.

http://japanheo.blogspot.com/2008/03/teaching-as-foreign-national-at.html

TEFL FORUM: The Facially Salient Articulatory Gesture as a Basic Unit for Applied Phonology in ELT

The Facially Salient Articulatory Gesture as a Basic Unit for Applied Phonology in ELT
Charles Jannuzi, University of Fukui, Japan


Introduction

This paper summarizes the analysis and interpretation of the results of two electromyographic procedures in experimental phonology, interpreted using concepts and theory from linguistics, applied linguistics, and phonology, specifically articulatory phonology. The first procedure, carried out on one native speaker of English, obtained data on the consonant sounds of English. The second explored the large vowel system of English.

Based on the results of these experiments, we propose a new theory about the basic sub-lexical unit of speech production and perception. This paper posits a new, discrete, invariant, psychological unit of phonology that functions below the level of word meaning to organize language. This model is a variation of the articulatory gesture of articulatory phonology and phonetics, and it has implications and applications relevant in many areas of applied linguistics and language education, including native language arts, second and foreign language learning, and literacy. In order to contrast the new concept with the previously established concepts of the 'phoneme' and 'feature', we will call the new phonological prime the 'visual articulatory gesture' or, alternatively, the 'facially salient articulatory gesture'. The advantage of this new basic sub-lexical unit in phonology--and as a model for applied phonology in support of TEFL--is not merely that it meets the need in linguistics, applied linguistics and educational linguistics for an abstract model that makes better phonetic and psychological sense. Rather, we feel strongly that any model more true to linguistic and psychological reality will yield better concepts, principles and practices for the classroom and materials.

The theory that emerges from our research helps to solve the problem of the lack of phonetic realism that  plagues structuralist, behaviorist and formalist accounts of the phonology of a language in actual acquisition  and then communicative use (production and perception). In part, this model of phonology is based on  a view of language as a learning system that builds up to a learned, stable state of functional complexity  (that is, the flow from language acquisition and learning to fluent use of a language to learn and communicate).

The 'learning to learn' stage involves necessary and sufficient inputs and feedback from visual,  acoustic-phonetic and kinesthetic signals. We call the most basic, sub-lexical, phonological unit of this  model (and indeed all language use) the 'articulatory gesture'. However, unlike previously established conceptualizations of the term 'articulatory gesture', which never really address what is meant by the term 'gesture', our basic sub-lexical unit involves 'faciality' or 'facial salience' in the visual and physiological components.

In this way we clarify why articulatory gestures are gestural in a linguistic sense and can help account for rapid, reduced, connected, co-articulated speech. Unlike the descriptively simplistic but non-explanatory abstractions of the phoneme or feature, articulatory gestures ARE NOT merely formalizations of repetitious, sequenced movements of articulators tracked at prominent points of articulation. Rather, the articulatory gesture as a unit of phonology helps model psychological control of both language production and perception. For a schematic overview of the articulatory gesture alongside its previously established analogues, see Figure One (link to graphic below).

Hyperlink to Figure One Graphic.  

Legacy concept: the structuralist phoneme

This term is perhaps most often defined and thought of in linguistics and language teaching as the smallest sound unit to create lexical contrasts in a language. For example, we might posit the existence of the /b/ phoneme in 'bat' or 'bin' if we contrast them with the rhyming words 'at' or 'in' and see that these words differ from the former by the absence or presence of one consonant phoneme. Or we might isolate a vowel contrast by placing 'bet' alongside 'bit', thereby helping us to distinguish between the vowels /e/ and /i/. There is something troubling, however, about the need to use words, or lexical-level meaning, to help us define or determine what sub-lexical and even sub-syllabic sound segments are. Moreover, we have to think of phonemes as idealized or psychological categories of sounds, not actual instantiations of sound categories. This is done to the point where phonemes subsist as mentalist or social super-structuralist objects in some non-material realm, shorn entirely of their phonetic identities. Another aspect to consider is just where in words phonemes can occur--that is, be instantiated. We might think of the nasal-velar sound at the end of the word 'ring' as an example of a phoneme of English, but the distribution rules for that phoneme in English determine that such a sound is not possible at the beginning of a syllable or word.
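The minimal-pair logic just described can be made concrete with a toy sketch. The word list, broad transcriptions, and helper names here are illustrative inventions, not a standard phonological resource:

```python
# Toy illustration of the minimal-pair logic described above.
# Words are represented as tuples of broad phonemic symbols.

from itertools import combinations

LEXICON = {
    "bat": ("b", "ae", "t"),
    "at":  ("ae", "t"),
    "bet": ("b", "e", "t"),
    "bit": ("b", "i", "t"),
}

def is_minimal_pair(a, b):
    """True if two equal-length transcriptions differ in exactly one segment."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

pairs = [(w1, w2)
         for (w1, a), (w2, b) in combinations(LEXICON.items(), 2)
         if is_minimal_pair(a, b)]
print(pairs)  # -> [('bat', 'bet'), ('bat', 'bit'), ('bet', 'bit')]
```

Note that 'bat' vs. 'at' is excluded: adding or dropping a segment changes the word length, so the sketch only captures same-length substitution contrasts, which is exactly the kind of idealization the text is questioning.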

Unfortunately, overall, the 'phoneme' does not hold up to any close linguistic scrutiny of how languages are actually spoken, conveyed through the air as sound, received as audible material, and then perceived or integrated into linguistic understanding and memory. Real speech doesn't naturally segment--we don't speak in discrete blips of Morse code. We can artificially segment spoken English, but the 'sounds' encountered will far exceed the inventories of 44-48 phonemes that phonemic accounts typically give. The phoneme cannot be found in the mouth; it cannot be found in the air coming out of the mouth; it cannot be found in the air going to someone's ear; and it cannot be found in someone's ear. So then we are supposed to believe it is a socio-structuralist or psychologically real object, in which case we hardly need phonetic criteria to delimit it. And this is why phonetic analysis of phonemes always flounders on phonological nonsense or at least phonetic oversimplifications, if not all-out contradictions.

Take, for example, two of the most common types of sounds in English--indistinct, neutralized vowels of very low productive intensity (such as unstressed /i/ and schwa) and glottal consonants, which are articulated at the extreme ends of the vocal tract (the glottis and the front of the mouth). How should we phonemically interpret the schwa? Is it, phonemically speaking, the most common vowel sound of English and a category in its own right, or is it so common because it is an unstressed allophone of so many other vowels? Why should so many distinct vowels converge on the same sound for an unstressed allophone? Or what about geminate glottal consonants (such as the glottal /t/ we might find in the middle of the word 'cattail')? Phonemic accounts of the schwa or the glottal geminate might say that they are phonemes in their own right and that when they alternate with other sounds, these are processes of morphophonemic alternation. However, what if we said the schwa is just a phonetic variant of most of the vowels of English, and the glottal geminate a variation of the English consonant stops? After all, there is enough phonetic similarity to make the case.

Other difficulties of interpretation and explanation abound. How should we phonemically interpret vowels in languages with diphthongs and triphthongs? How should we actually phonemically interpret the 'ng' sound(s) at the end of 'ring' or 'sing'? Native intuition is that they are two sounds or sub-syllabic elements, such as two concurrent, distinct but overlapping features (which is why no one has a real problem with the digraph of the orthography). But you could put these words into minimal pairs and come up with all sorts of contrasts: ring vs. rig, sing vs. sin, etc. One phonemic account might even place the 'h' sound in the same category as 'ng', since 'h' is distributionally the opposite of 'ng' and comes only at the beginning of a word or syllable--except that the failure to meet any criterion of phonetic similarity might be invoked. Or how should we treat the in-/im- prefix? Is the prefix of 'inert' and 'immobile' different in its forms because of morphophonemic variation, or could one argue that either the /n/ or the phonetically similar /m/ is actually an allophonic variation of one phoneme? The phonemic model for teaching and learning a foreign language's phonology predominates and is largely a formal model inherited from structuralism. Even if we supplement or supplant the idea of phonemic segments (segmentals) with suprasegmentals (e.g., intonation), the basic idea still centers learning pronunciation on the perception of arbitrary, social-systemic contrasts enabling an individual as language user or learner to understand spoken language.

However, it is impossible to locate the phoneme or contrastive segment in articulation, acoustics in physical  space, or in reception and immediate analysis of the speech signal. This delimits it, if it exists at all, as  a largely inaccessible and overly abstract, logical category taken away from actual speech and the psychological control of the vocal tract. Such opaque, black box concepts do not transfer well to the classroom, where effective simplification is necessary for teaching and presentation to have an impact on language learners.

One might ask of the phoneme: if we do not say it and cannot naturally find it in the speech signal, why do we think it actually exists in language? It could be conjectured that the concept of the phoneme is actually a metalinguistic artifact of psychological perception--a super category imposed on sounds and the vocal tract--stemming from linguistic insights about meaningful language use or even literacy in an alphabetic language. We could also argue that some sort of concept of the phoneme is a convenient fiction which allows us to refer more consistently to key points and manners of articulation in written language than does standard English spelling, which is more geared toward preserving etymological relationships across inflected and derived forms.

Legacy Concept: the Contrastive Feature

Early on in structuralist approaches to phonology (and later in transformational ones as well), another idea was posited that supplemented in more detail the earlier concept of the contrastive phoneme--that of the contrastive feature. Phonemes, it was theorized, could be broken down further into distinctive features. For example, whether or not they are phonemically contrastive, phonetically speaking, what typically separates a language's /t/ from its /d/ is the feature of voicing. Or, another example is how voicing and a lack of aspiration might distinguish an initial /b/ from an initial /p/. One difficulty of breaking phonemes down into features, however, is that the aspects of speech that have been called features are a confusing mix of psychological, articulatory-gestural, phonetic and acoustic phenomena.

Typical notions of features move back and forth between articulatory criteria (a point or manner of articulation in the vocal or respiratory tract) and acoustic criteria (an effect found on an oscilloscope). Is something a feature only if a listener hears it, or could a feature be something that is physiologically experienced and subsequently anticipated by the speaker? A second problem is that, as described in much discourse, features are not truly sub-syllabic; at least phonetically and acoustically speaking, features demonstrably spread over whole syllables, words and even word boundaries. Features, then, if we actually break up speech in order to demonstrate their existence, are supposed to work more like the various notes of a chord, either struck almost simultaneously or plucked out in quick succession but sustained and stretched over an entire bar (in this case, syllables and syllable sequences) to create harmonies and dissonances.

Evolution of language as gestural in nature

The human ability socially to convey thoughts, intentions, emotions, beliefs, and culturally bound ways  of living largely depends on the structured use of language. This cognitively controlled, structured system  for communication, we contend, evolved first as a visual-gestural system of body language quite analogous  to the sign language of the deaf in use today. That is, we are talking about a gestural language that involves  not only the hands and arms, but also movements of the muscles of the face to produce a form of controlled  speech that is more reliant on the visual conveyance of information than the acoustic mode. The full development of human language as we now know it, however, overlapped with the emergence of considerable  auditory and phonetic abilities crucial to the survival of the human species. These beneficial auditory and  phonetic talents also took on communicative functions contributing to the survival and adaptation of the  species.

Over time the visual-gestural system of language converged with the auditory and phonetic powers to  produce what we know today as the human language facility. It might make more sense to view the auditory  and phonetic aspects of human language as dominant over the visual and gestural ones. Also, not all  visual-gestural aspects of communication are linguistic in nature, though many can be specific to particular  groups and cultures. However, it might well be the case that visual and gestural abilities are still more integral to the psychology of language control and acquisition. For example, the use of gesture is two-part. It  provides a visual signal for someone at the receiving end, but the person producing the gesture also experiences it physiologically.

The ability to communicate with a human language depends essentially on a psychologically controlled, coordinated speech and auditory system for the planned production and meaningful perception of language. It should be pointed out that speech production itself depends on a convergence of more basic systems, such as the ones that hear, breathe, eat, and make non-linguistic noise. And hearing has a more basic non-linguistic role enabling humans to distinguish and make a phonetically diverse set of noises for communication, such as signaling and sound camouflage. Human language, however, has more essential aspects to it than speech production and auditory perception. It also involves visual and kinesthetic aspects and structural complexity that ranges from the phonological to the lexical to the syntactical. The visual and kinesthetic aspects of phonology, however, quite likely play an even larger role in language acquisition than they do in mature, fluent, native language use for everyday communication.

A new, more pedagogically useful phonological prime proposed

In lieu of the previously established concepts of the phoneme and feature, we call the most basic, sub-lexical, phonological unit of speech production and processing the 'articulatory gesture'. However, unlike  previous conceptualizations of this term (for example, Browman & Goldstein, 1992), our basic sub-lexical  unit involves 'faciality' or 'facial salience' in order to explain how a unit of speech can function as a linguistic 'gesture'. It must be noted here, though, that which level of language should be used to interpret speech production and processing remains a theoretically undecided issue. Does the articulatory gesture map onto language at a sub-lexical level (such as the syllable or mora)? Or does the articulatory gesture actually correspond in an explanatory manner to the spoken and psychological level of word meaning--that is, the allomorph and morpheme? If the reality of the latter case holds, then morpho-phonology would assume primary  importance in any research program. Clearly more conceptual, theoretical and experimental undertakings  are required for this issue to begin to be resolved.

Using linguistic analysis and interpretation of the results of two experiments in electromyography, we  propose a new theory of phonology concerning the basic unit of sub-lexical language. Modern phonology  has long sought a basic, psychological, discrete, invariable unit of language subsisting beneath a word level  in order to closely model, describe and explain language acquisition, processing, perception and expression.  That is, in order to solve the age-old problem of 'how infinite use is made of finite means', phonological  inquiry needs a basic, stable, sub-lexical unit that works across all aspects of a language and across all  speakers of a language. Such a unit is not only deduced to exist because speech can be segmented into consonants and vowels that form syllables. This could simply reflect a phonetic reality of speech that has been  analyzed by linguists. Rather, a phonological prime subsisting at a sub-lexical level of language must also  function as a unit in the mental language planning stage that controls meaningful language use.

Some Implications for Applied Linguistics and Educational Linguistics

Such a psychologically, physiologically and phonetically realistic basic unit for phonology should yield better teaching and learning materials (including software) in applied, practical and clinical areas such as the following: (1) foreign language teaching and learning, (2) speech therapy, and (3) learning disabilities, such as reading and text processing disorders. This approach should also have important implications for the development of speech recognition for automated word processing, language translation and artificial intelligence.

There are available in applied linguistics various approaches to studying, describing, analyzing and explaining the production, transmission and perception of a language's phonology. However, one crucial problem is turning this knowledge into useful information for second or foreign language pedagogy. Phonological concepts and terminology for teaching often seem overly complex and abstract--if not outright contradictory--to both teachers and students. One possible reason for this perceived difficulty is that, in fact, many of the terms and models used to teach phonology simply are not useful for adolescents and adults learning a phonology. On the one hand, the meaning of terms in phonological discourse comes to seem opaque to students and even to the teachers attempting to explain and demonstrate them. On the other hand, the concepts are too simplistic and static to do justice to the phonological, phonetic and physiological complexity that a learner must deal with in mastering a second or foreign language's phonology.

The articulatory gesture and its implications for language learning

A third approach to FL pronunciation and phonology would be to appeal broadly but coherently to those aspects of speech that apply to the phonetics and physiology as well as the psychology of speech. Rather than being determined through analysis of static, binary contrasts, the sub-syllabic units of speech are deduced to exist and represented through dynamic descriptions of a complex of movements occurring in the vocal tract, mouth and facial muscles. This is called an articulatory-gestural approach or articulatory phonology. According to this approach, the basic units of phonological contrast are gestures, which are also abstract characterizations of articulatory events, each with an intrinsic time or duration. Utterances are modeled as organized patterns (constellations) of gestures, in which gestural units may overlap in time. The phonological structures defined in this way provide a set of articulatorily based natural classes. Moreover, the patterns of overlapping organization can be used to specify important aspects of the phonological structure of particular languages, and to account, in a coherent and general way, for a variety of different types of phonological variation. Such variation includes allophonic variation and fluent speech alternations, as well as 'coarticulation' and speech errors. Finally, it is suggested that the gestural approach clarifies our understanding of phonological development, by positing that prelinguistic units of action are harnessed into (gestural) phonological structures through differentiation and coordination. (Browman & Goldstein, 1992, p. 155)
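The idea of an utterance as a 'constellation' of gestures, each with an intrinsic duration and the possibility of temporal overlap, can be sketched as a simple data structure. The articulator names, goals, and timings below are invented for illustration and are not taken from any published gestural score:

```python
# A minimal sketch of the "gestural score" idea: each gesture is an
# articulatory event with an intrinsic time span, and the gestures of
# one utterance may overlap in time.

from dataclasses import dataclass

@dataclass
class Gesture:
    articulator: str   # e.g. "lips", "tongue body", "velum"
    goal: str          # e.g. "closure", "wide", "low vowel"
    start_ms: int
    end_ms: int

    def overlaps(self, other):
        """True if the two gestures' time spans intersect."""
        return self.start_ms < other.end_ms and other.start_ms < self.end_ms

# A toy score for a syllable like "ma": the velum-opening gesture for
# the nasal overlaps both the lip closure and the following vowel gesture.
score = [
    Gesture("lips", "closure", 0, 80),
    Gesture("velum", "wide", 0, 120),
    Gesture("tongue body", "low vowel", 60, 250),
]

overlapping = [(a.articulator, b.articulator)
               for i, a in enumerate(score)
               for b in score[i + 1:] if a.overlaps(b)]
print(overlapping)
# -> [('lips', 'velum'), ('lips', 'tongue body'), ('velum', 'tongue body')]
```

The point of the sketch is that, unlike a string of phonemes, nothing here forces the units into a strict sequence: overlap is the normal case, which is how the gestural account accommodates co-articulated, connected speech.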

Although not well known or understood in FLT and FLL, an articulatory-gestural approach to phonology (or articulatory phonology) may well hold out the most promise for reuniting pronunciation practice with communicative language teaching and learning.

One problem with any theory that seeks to explain how language is spoken because of what the tongue wants (ease of articulation) is that it might not take into account what the ears easily hear. A language user's vocal tract that has to repeat itself with emphasis actually ultimately does more work. Spoken language as a system built on give-and-take communication is pushed and pulled between the needs of the speaker and the listener (just as writing systems have to fit the needs of those who write and those who read). Going toward language that is rather indistinct, lacks intensity and overlaps sounds (co-articulated segments, super-syllabic features, reduced vowels, glottal consonants) might make output faster for the speaker, but speed only has to be optimized to the level of how fast a listener can take the signal in (which has physical limits). A rate of output beyond the point of what a listener can perceive does not contribute to the efficiency of production or reception, since it would cause a breakdown in communication.

Different languages, dialects, and accents arrive at different sets/constellations of articulatory gestures  (or articulatory routines) to get the job done. If there is considerable overlap of grammar and lexicon, then  mutually intelligible forms of languages can exist, despite quite a bit of variation in how things are pronounced.

Any number of articulatory routines could arrive at basically the same speed of output for optimum reception--well below the maximum speed of output we could achieve if our lazy tongues ruled our heads. But what would be the point of being able to speak so fast and indistinctly that no one could understand you?

A facially prominent (visually salient) account of articulatory gestures

Following the example established with the electromyographic analysis and pedagogical recommendations  of Koyama, Okamoto, Yoshizawa, and Kumamoto (1976), we propose to take prior accounts of the  articulatory gesture and modify and simplify their focus for the purposes of L2 pedagogy and learning. The  rationale for this is, in part, based on our understanding of both language evolution and natural language  development in individuals. One possible way to account for the human ability to make meaning systematically  is to see the human vocal ability with language as a fortuitous adaptation of our respiratory, upper  digestive and auditory tracts that extends our ability to gesture semiotically. The face, however, is a transitional area that serves a role both in the vocal apparatus and in the purely visual-gestural system. Indeed, with the face's and mouth's exterior as an interface or transitional zone, it could be said that the vocal apparatus and the visual gestural tools of the upper body form a seamless semiotic continuum.

One clear advantage of an articulatory gestural account of phonology is that it gives a dynamic, physiological basis to our ability to use a language to communicate. Moreover, such an approach might also help us to account more holistically for the ability to handle fast (i.e., normal), reduced, co-articulated connected speech in everyday spoken communication. Not only do we hear such speech, but our prior physiological experience of language use helps us to anticipate and fill in information missed from the audible portion of the stream of speech.

An articulatory gestural account that focuses on the face and the mouth most vitally allows us to reconcile the natural, untutored, pre-literate language development of a native speaker with the course an L2 learner would be better off following. Consider that native language acquisition depends crucially on both auditory and visual inputs and feedback from caregivers. Even if fluent, stable language ability in humans has shifted heavily toward the auditory part of the semiotic continuum, it seems most likely that visual input (in coordination with the stream of speech) from the faces of immediate caregivers provides necessary types of both input and feedback to infants acquiring a language. Note just how an infant must experience language and its development: the infant experiences making movements in its own vocal tract and face; s/he feels and hears the sounds thus produced directly through the medium of the head; s/he hears the sounds going through the air and back into the head by way of the ear; s/he hears the caregiver respond (often in exaggerated and simplified adult speech); and s/he most crucially sees in three dimensions the facial and upper body movements of the caregiver. No idealized schematics of the interior of the vocal tract of either the infant or the caregiver are required, nor is a visual perspective on the inside of any human mouth necessary. For a schematic overview that relates possible phonological units with type of interaction and/or mode of reception, see Figure Two (link below):

Hyperlink to Figure Two Graphic.

What is electromyography?

Electromyography is a means to measure and graphically record in controlled settings the electrical  activity of muscles, including, of course, the muscles used in producing speech. Muscles generate electric  current when contracting or when the controlling nerves have been stimulated. Electrodes usually attached  to an abraded area of the skin over the muscle pick up the impulses. The output of the muscles can then be  displayed as wavelike forms on an oscilloscope and recorded as an electromyogram (EMG). The audible  signals which stem from the activity of the vocal tract can also be recorded simultaneously, though it must  be remembered that this audible stream of speech is an acoustic realization that results from the underlying  psychological cognition, including sub-lexical, sub-syllabic manipulation of phonological units into larger  structures of language (even if speech control is experienced at a point of subconscious control).
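The measurement chain just described (muscle impulses picked up by electrodes and displayed as wavelike forms) is conventionally processed by rectifying the raw signal and smoothing it into an amplitude envelope. A minimal sketch follows; the signal here is synthetic noise standing in for real electrode data, and the window length is an arbitrary choice, not a value from the studies discussed:

```python
# Sketch of basic surface-EMG processing: rectify the raw signal
# (absolute value) and smooth it into a moving-RMS envelope that
# tracks overall muscle activity over time.

import math
import random

def rms_envelope(signal, window=50):
    """Moving root-mean-square amplitude over a sliding window of samples."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        out.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return out

random.seed(0)
# Synthetic trace: a quiet baseline followed by a burst of simulated
# muscle activity, as if a speaker began articulating mid-recording.
raw = [random.gauss(0, 0.05) for _ in range(200)] + \
      [random.gauss(0, 1.0) for _ in range(200)]

env = rms_envelope(raw)
print(f"baseline RMS ~ {env[150]:.3f}, burst RMS ~ {env[350]:.3f}")
```

The envelope, not the raw oscillation, is what makes bursts of activity visible and comparable across utterances, which matters for the word-onset comparisons reported later in this article.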

Language conceptualization and language planning, as cognitive processes, causally precede, but also overlap with, speech production. However, the relationship in actual speech performance is a complex one: self-monitoring of speech (both acoustic and articulatory) as well as visual and acoustic feedback from an interlocutor can alter or reinforce planned speech, which then affects the subsequent articulatory performance of the speaker.

Electromyographic techniques could be used to measure activity all through the internal parts of the vocal tract; however, such applications would prove impractical without very invasive--even surgical--placement of the electrodes. Moreover, once in place, the setup would interfere with normal speech. After Koyama et al. (1976), we instead propose the application of completely external, facial electromyographic techniques. This is because we are looking for the sort of common, salient visual and kinesthetic experiences, inputs and feedback that might naturally guide young language learners in their phonological development as an integral part of the greater category called language acquisition. In other words, we propose the use of electromyography as a means to better grasp, analyze and present what is most invariant and perhaps even holistically essential about phonological development, in such a way that these insights can be applied to L2 teaching and learning.

Data collection efforts and what the results reveal

Our data collection efforts are still somewhat preliminary and have involved only one subject (an  American native speaker of English, one of the authors, Jannuzi). We have collected extensive data sets on  both the vowels and the consonants of English, and would next like to generalize this to a larger group of  native speakers of English. Moreover, in the future we plan to correlate visual and audio materials systematically with electromyographic data so as to triangulate the physiological-kinesthetic elements of controlled language production and perception with the concurrent visual and acoustic phenomena. However,  in presenting our conceptual and theoretical work here, we also have the experimental and pedagogical  insights of Koyama et al. to draw on. They have already shown, using electromyographic data and photographs  of the face, that there are specific but regular ways of using facial muscles in pronouncing the  English consonants. Moreover, they demonstrated how electromyographic data generated during speech  can be used to isolate points of instruction so that teachers can better train Japanese EFL learners in pronunciation and phonological development.

We have already been able to come to some tentative but interesting conclusions about the possible physiological and articulatory gestural aspects of phonology. For example, a phonemic account of English might place /l/ alongside /r/ as an important contrast that an English speaker has to make. However, teachers must ask whether saying there is a single contrast is actually very useful when teaching students how to make the sounds during actual communication. Acoustically speaking, English /l/s and /r/s produced in some environments can sound quite similar and hard to distinguish (perhaps because of their three-formant, voiced aspects, which make both /l/ and /r/ much like vowels). Phonetic or featural differentiation, when it hits upon points of articulation, starts to be somewhat more useful. It is usually taught that an English /l/ is an alveolar lateral whereas the /r/ is post-alveolar or retroflex. But both the /l/ and /r/ in actual language display an enormous, confusing range of variation.

Can an articulatory gestural account focused on the facial muscles involved in speech (namely, M. temporalis, M. masseter, M. levator labii superioris alaeque nasi, M. orbicularis oris, M. depressor labii inferioris, M. digastricus venter anterior) help to differentiate syllables or words with /r/ sounds from those with /l/? Our initial conclusion is: yes, it could. A very preliminary exploration of these two sounds focusing on the muscles of the face indicates that there are clear, visible differentiation patterns to be found across /r/s and /l/s. Most significantly, a syllable or word that begins with an /r/ sound is articulatorily pre-positioned to a mouth shape somewhat like an English /w/ or the vowel /u/, no matter what the following vowel that forms the nucleus of the syllable is. On the other hand, in terms of facial movement and anticipatory shaping of the mouth, the /l/ is far different. Visual investigation reveals that the /l/ typically coarticulates with the following vowel; in articulatory-gestural terms, we could say that the vowel that is supposed to follow the /l/ is articulatorily anticipated before the /l/ is actually made. We plan to explore this sort of physiological patterning in much greater detail, focusing on the English /l/ and /r/ and on English's rather large, difficult sets of affricates and fricatives, which are problematic for learners of various native language backgrounds. Also, careful analysis of actual electromyographic data has revealed other patterns that, while not necessarily contradicting phonemic or featural accounts, can be used to supplement and clarify them.

In brief, here are some of the more startling aspects of language that electromyography has revealed:

1. Despite what traditional theory says, no phonemic or featural distinctions are singular differences. For example, a phonemic account of the English /l/ and /r/ sounds would say that the contrast rests on the difference of a single phoneme. That is, the /r/ in the word 'ray' is acoustically very similar to the /l/ in 'lay'; however, /r/ has the added 'feature' of retroflexion. Electromyographic analysis reveals far more detail. English /r/ is more like English /w/, but can be differentiated from /w/ in terms of muscular activity, timing, and a slight difference in the shape of the mouth. English /r/ also forms relationships with preceding vowels when it closes a syllable, while English /w/ only acts as the onset of a phonological syllable, never its coda. English /l/, meanwhile, is in terms of muscular activity much more like the English /d/, except in the visually perceivable aspect of timing--that is, an /l/ sound lasts longer than a /d/, and this difference can be found in the electromyographic data as well as perceived visually on the face of the speaker. At the end of a word or syllable, English /l/ might also alter its gestural form through a relation with a preceding vowel sound.

2. Electromyographic data give tantalizing, psychologically significant hints as to the language planning stage of speech production, which falls in between conceptualization and actual speech production. In fact, electromyographic data reveal direct evidence of the physiological control that both precedes and accompanies actual speech production. One counterintuitive aspect thus revealed concerns the intuitive notion of speech being a sequence of sounds. In terms of the muscular activity that precedes speech production, we cannot say that speech is, phonologically speaking, a simple sequence of sound segments. For example, a one-syllable word ending in a /p/ consonant might display more muscular activity before even the first segment of the word has been produced as sound. If we contrast the words 'mop' and 'mob', we see that the word-final /p/ sound is signaled in terms of muscular activity even before the initial /m/ has been produced. In other words, in terms of muscular energy used even before the word is uttered, the initial /m/ of 'mop' displays a higher energy level than the initial /m/ of 'mob', a difference that can only be causally accounted for by the effect of having to plan for the pronunciation of a final /p/ instead of a final /b/.

3. As the example in number 2 above shows, the electromyographic data offer indirect evidence of a physiological interface between language planning and actual speech production. However, it is not clear at what level of language we can say that the articulatory gesture subsists. On the one hand, it would seem to be a logical and useful sub-lexical unit of phonology that can subsume more static and incomplete models, such as the phoneme or feature. On the other hand, it might more closely match up with the unit of language known as a syllable. Or, more startlingly, it might be that, in connected speech, the articulatory gesture as a unit actually coincides with words and lexical phrases. Certainly the manifold differences that the electromyographic data reveal could be used to support the argument that one articulatory gesture equals one spoken syllable type or even one word.

Phonological coding ability

From theoretical and experimental standpoints, we argue that a facially salient articulatory gesture is the best model of psycholinguistically controlled speech at a sub-lexical, sub-syllabic level. However, within a more comprehensive view of language and literacy, there still might be a place for the concepts of phonemes and features. Certainly there are still more conceptual areas we must look at before we can begin to account for phonology in language acquisition, language learning, listening outside of face-to-face interaction, and literacy development. First, there are cognitive, linguistic skills called phonological coding (or processing) ability (PCA), centered on:

- Phonological perception and interpretation of phonetic or phonetically graphic data,
- Analysis/decoding of acoustic (and/or visual, graphic) signals in oral communication (and/or written discourse),
- Re-coding/encoding of linguistic input for lexical access and word recognition,
- Re-coding/encoding of linguistic input for comprehension and meaning making,
- Retention of language representations in short-term working memory (more specifically, phonological memory), and the linking of ALL these preceding points with long-term memory input, storage and retrieval--which is what makes phonological coding skills central to language learning, since phonology must be manipulated and stored as units such as features, phonemes, morae, syllable types, articulatory gestures/gestural routines and words (lexical units).


Metacognition: Awareness Skills in Language and Literacy Development

It is now a fairly well disseminated idea that language awareness and short-term memory skills at a phonological level play an integral role in literacy development in languages with alphabetic orthographies (Elkonin, 1963; Liberman, 1973; Liberman, Shankweiler, Fischer & Carter, 1974; Williams, 1995; Nation & Hulme, 1997; Stahl, Duffey-Hester, & Stahl, 1998; from a cross-linguistic perspective using pure research techniques, see Koda, 1987, 1998; for an ESL perspective using applied research case studies, see Birch, 1998). These skills are thought to comprise a metacognitive type of analytic ability which overlays verbal language processing but remains separable from what have traditionally been called phonics skills, the latter of which emerge as part of learning to read an alphabetic language. Thus, it is thought that phonological awareness skills follow from the phonological processing, production and perception skills that develop as a result of native language acquisition. However, they precede the development of phonics skills and beginning literacy and may play some sort of causal role in reading development. The related concepts of phonological and/or phonemic awareness are not well established within foreign language education, coming as they do mostly from theorizing about and research on native literacy in languages that are written alphabetically.

Epistemologically speaking, phonological and phonemic awareness abilities would seem to subsist  somewhere between what Skehan (1998) calls 'phonological coding ability' and what have been traditionally  termed phonics skills in native language arts. Phonics skills only come into play when alphabetic or syllabic  writing conventions are associated with and/or analyzed into some sort of phonological equivalent  during the reading and writing of text. In the case of reading written English, single letters and letter combinations functioning as graphemes (units of writing corresponding to units of sound) would be made to  stand for single sounds and sound combinations in some sort of psycholinguistic process during reading;  these representations might then be related to the phonology of spoken English to facilitate lexical access,  which would then lead to the integration of lexical meaning into syntax and discourse. Phonological awareness  skills may serve as some sort of metacognitive bridge between oral and written language processing.

It is often asserted that phonological awareness/metaphonological skills emerge before and may even causally underlie beginning literacy (hence the need to distinguish them from what has traditionally been called phonics). It might also be argued that this ability to manipulate an internalized language phono-analytically leads to the acquisition of phonics skills at decoding and manipulating alphabetic writing--especially if phonics skills are a key part of beginning literacy development and subsequent functional literacy. Phonological awareness skills are thought to be activated as a sub-component of the reading process because they help a reader (as language user) to decode and reconstruct information sampled from an alphabetically written text and relate it at one specific level to the reader's internalized phonology of the language being read. Such a step may be especially important in developmental literacy.

There is a different view, however, in which phonological awareness skills are seen as a fairly spontaneous development bridging native language acquisition of phonology with literacy development. This might undercut the hypothetical predictive, explanatory and instructional value of phonological awareness in reading instruction, since this view would make such skills appear to be more a result of success at beginning literacy than a causative factor underlying it. Another vexed issue is the orthography of English itself; although written alphabetically, English violates the alphabetic principle (one symbol = one sound) so severely and in so many ways that the reality of phonological and phonic reading of it has to be drastically circumscribed, if not placed entirely in doubt. The levels at which the code of written English can be said to be stable and to determine the language read are the word level and above. For a brief overview of the stages of phonological development in a literate society, see Figure Three (hyperlink to Figure Three below):

Hyperlink to Figure Three Graphic. 

Conclusion

There are various approaches to accounting for the production, transmission and perception of a language's phonology. However linguistically interesting such accounts and approaches are, how much useful information do they provide to teachers and students? One might conclude that many of the terms and models used to teach second or foreign language phonology are simply not useful for adolescents and adults learning an L2's phonology and, what is worse, might be confusing to the extent that they hold back learning. The problem is not just one of technical complexity. The meaning of terms in phonological discourse may be too opaque and unnatural for students and even teachers. Yet however technical, the concepts may also be too simplistic and static to do justice to the phonological, phonetic and physiological complexity that an L2 learner must deal with in mastering second or foreign language phonology. We have described and explained facial electromyography in support of a simplified gestural model of phonology. The electromyographic techniques we propose not only give direct evidence in support of a gestural model; we argue they also hold considerable potential for the pedagogy of FL phonology in terms of teacher training and materials development (such as learning feedback software). What remains to be done follows two major courses. First, we plan to expand our electromyographic data gathering to include a larger group of English native speakers, covering a variety of dialects and accents. Electromyography will also be used to explore how to give useful and specific feedback to Japanese speakers learning English pronunciation. Second, we will pursue the development of improved teaching techniques and learning materials that take advantage of the improved models and concepts of phonology that we have explained here.

References

Birch, B. (1998). Nurturing bottom-up reading strategies, too. TESOL Journal, 7(6), 18-23.

Browman, C. P., & Goldstein, L. (1992). Articulatory phonology: An overview. Phonetica, 49, 155-180.

Elkonin, D. B. (1963). The psychology of mastering the elements of reading. In B. Simon & J. Simon (Eds.), Educational psychology in the U.S.S.R. (pp. 165-179). New York: Routledge.

Harris, T. L., & Hodges, R. E. (Eds.) (1995). The literacy dictionary: The vocabulary of reading and writing. Newark, DE: International Reading Association.

Koda, K. (1987). Cognitive strategy transfer in second language reading. In J. Devine, P. L. Carrell, & D. E. Eskey (Eds.), Research in English as a second language (pp. 127-144). Washington, D.C.: TESOL.

Koda, K. (1998). The role of phonemic awareness in second language reading. Second Language Research, 14(2), 194-215.

Koyama, S., Okamoto, T., Yoshizawa, M., & Kumamoto, M. (1976). An electromyographic study on training to pronounce English consonants unfamiliar to the Japanese. Journal of Human Ergology, 5, 51-60.

Liberman, I. Y. (1973). Segmentation of the spoken word and reading acquisition. Bulletin of the Orton Society, 23, 65-77.

Liberman, I. Y., Shankweiler, D., Fischer, W. M., & Carter, B. (1974). Explicit syllable and phoneme segmentation in the young child. Journal of Experimental Child Psychology, 18, 201-212.

Nation, K., & Hulme, C. (1997). Phonemic segmentation, not onset-rime segmentation, predicts early reading and spelling skills. Reading Research Quarterly, 32, 154-167.

Skehan, P. (1998). A cognitive approach to language learning. Oxford: Oxford University Press.

Williams, J. (1995). Phonemic awareness. In T. L. Harris & R. E. Hodges (Eds.), The literacy dictionary (pp. 185-186). Newark, DE: International Reading Association.





Note: the graphics for this article are found at the following location:

http://picasaweb.google.com/jannuzi/PhonologicalModels#

Also accessible by clicking on the graphic:

Phonological Models

10 December 2009

University of Kyoto starts halal food for Muslim students

University of Kyoto (Kyodai), Japan's number two institution, has started to accommodate Muslim students. Perhaps some Japanese students will try eating halal as well?

 
http://search.japantimes.co.jp/cgi-bin/nn20091008f2.html

Thursday, Oct. 8, 2009

Kyoto University to serve halal food
Kyodo News

Kyoto University will start providing food permissible under Islamic law at the school's cafeteria to meet the needs of the increasing number of Muslim students on campus.

The cafeteria will introduce a halal food corner from Tuesday, avoiding pork and seasonings of pork origin, which Muslims are banned from eating. The new menus include chicken and croquettes made of broad beans, it said.

More than 1,000 Muslims live in the city of Kyoto, and many are Kyoto University students and their families.

The rare introduction is aimed at supporting such Muslim students, whose population is expected to rise under the university's plans to accept more foreign students.

While the co-op said it had problems in arranging a cooking environment to avoid mixing pork and related seasonings with halal food, it solved the issue by preparing the food at different hours.

Getting basic stats on Japan's HE system

These sources are easily accessed and downloaded in standard formats. However, they are only up to 2006. To get up to 2008, you have to go looking for the information in various online publications. Here is a sample of what you can get. 

http://www.mext.go.jp/english/statist/index.htm
http://www.mext.go.jp/english/statist/index11.htm

University and Junior College

Universities          
Junior Colleges    
Students by Course (University)    
Students by Course (Junior College)    
Students by Field of Study (University)    
Students by Field of Study (Junior College)    
Students by Field of Study (Master's Courses)    
Students by Field of Study (Doctor's Courses)    
Students by Field of Study (Professional Degree Courses)    
Students of Non-Japanese Nationality    
Students from Abroad by Region of Origin    
Students from Abroad by Field of Study    
Full-time Teachers by Type of Position (University)    
Full-time Teachers by Type of Position (Junior College)    
Full-time Non-teaching Staff by Type of Position    
Correspondence Courses    
New Entrants (University)    
New Entrants (Junior College)    
New Entrants (Master's Courses)    
New Entrants (Doctor's Courses)    
New Entrants (Professional Degree Courses)    
First Destination of New Graduates (University)    
First Destination of New Graduates (Junior College)    
First Destination of New Graduates (Master's Courses)    
First Destination of New Graduates (Doctor's Courses)/(Professional Degree Courses)    
New Graduates Entering Employment (University)    
New Graduates Entering Employment (Junior College)    
New Graduates Entering Employment (Master's Courses)    
New Graduates Entering Employment (Doctor's Courses)/(Professional Degree Courses)    

FACTOID: Japan has over 1,200 universities and 4-year and 2-year colleges



Source: Japan Ministry of Education (MEXT),  
http://www.mext.go.jp/english/koutou/__icsFiles/afieldfile/2009/10/09/1284979_1.pdf

FACTOID: Japan boasts an HE continuance rate over 70%

It depends, of course, on how you define higher education. Continuance to two- and four-year institutions is now over 50%, and the institutions hope it will continue to grow to at least 60% because the size of senior high cohorts has been shrinking for the last 15 years. Another factor to consider is that, once students are admitted to universities and colleges, the vast majority graduate. There is very little academic pressure put on them to prove their worth in their first two years, unlike at non-elite institutions in the US (and the vast majority of institutions in the US are NON-ELITE general education boot camps).

See:

http://www.mext.go.jp/english/koutou/001/001.htm

http://www.mext.go.jp/english/koutou/001/001/001.gif

Japan slips elite private university into Times-QS top 200 global rankings

In the first post on this year's rankings, I wrote:
 
Perhaps one aspect worth a closer look is how Japan's elite private universities fared. None are in the top 100, but are any rising or entering the top 200? That is worth a future post here at the Japan HEO Blog.

THES's own article on the topic answers the question. Keio University makes its debut in the top 200--and even makes it into the top 150.


excerpt:
Japan counts 11 institutions in the top 200, among them two new entrants: the University of Tsukuba sharing 174th place and Keio University making an impressive debut at 142nd. Japan's representatives in the top 100 rose in number from four to six, led by the University of Tokyo at 22nd place (down from 19th).

See the entire article on THES online:

http://www.timeshighereducation.co.uk/story.asp?storycode=408560

09 December 2009

TEFL FORUM: Do Japanese EFL students need 'katakana eigo' to learn and to read English?

This is a re-post of an article published in September this year in order to put it with the recent 'TEFL Forum' and give it a bit more prominence. Please note that this article includes three suggested classroom activities at the end of the discussion.

Do Japanese EFL students need 'katakana eigo' to learn and to read English?

by Charles Jannuzi, University of Fukui, Japan


Introduction

'Katakana' is one of two syllabaries used in modern written Japanese; it is largely used to represent non-Chinese loan words, such as the numerous English loan words in Japanese called 'gairaigo'. It is also used in some contexts to stand for native onomatopoeia and other mimetic language, to show emphasis in a written text, to transcribe the readings of Chinese characters in legal documents, to serve as a quickly input script for telegraphy, and to represent the popular names of animals and plants in native taxonomy, among other uses. However, katakana also finds widespread use in EFL classrooms and materials in Japan as 'katakana eigo', a syllabic transliteration of English into a form that is more easily decodable for learners.

For the sake of this article's discussion, teacher attitudes toward katakana eigo can be summarized as the following three:

1. Katakana eigo is bad, and we should ban it.
2. Katakana eigo is not particularly useful, but it is part of the cross-lingual (L2 to L1) reality, so let us not encourage it.
3. Katakana eigo is a useful crutch, helping students as a cognitive bridge to literacy in EFL, so let us adapt it appropriately.

In this article I will explain why learners feel that katakana eigo is necessary in order to deal with the complexity and inconsistency of written English, and I will explain how teachers can plan and use content, materials and activities that will alleviate the need for such L1 crutches.

Katakana eigo: Is it natural?

It is natural for beginners to make substitutions and simplifications with the FL's sound system and sound tactics. Nonnative/JSL/JFL speakers of Japanese (many of them English teachers in Japan) are no different on this point. It is also a matter of course that students might take a very familiar, consistent, phonologically transparent, syllabic script like katakana and use it to transcribe a language written in one that is not so easy to decode for pronunciation (like the complex, alphabetic writing conventions of English). It does seem possible, though, that a persistent reliance on katakana eigo during beginning levels of instruction reinforces the idea that English does not have its own sound system and sound tactics. The impression that beginners might get is that the sounds and sound tactics of English are easily fitted into those of Japanese; they are not, not if intelligibility is to survive.

In standard phonological accounts, spoken Japanese has far fewer sound segments than English, and simpler tactics are used for putting these sounds together into syllables and words. A typical Japanese syllable is V or CV type; few consonant sounds can close a syllable, and there are not many consonant clusters. A writing system such as katakana that is based on an analysis of the syllable types of spoken Japanese, therefore, proves an ill fit for spoken English. What is at issue is the mental, phonological representations of the FL in the minds of the learners which enable them to learn and use it.

Here are two examples of how katakana eigo renders English into a Japanese form. Take the word 'banana'. In Japanese, this word would be written as three syllabic characters, バナナ, which we can romanize as ba-na-na. In this case the written Japanese corresponds perfectly with the English (though note that the Japanese form of this word would be given fairly even stress across all three syllables, while the English word typically receives the strongest stress on the second syllable, with fairly neutral vowels in the first and final syllables). But look what happens with a second example, 'McDonald's'. In Japanese, this would be written as マクドナルド, which as romanized is ma-ku-do-na-ru-do. Now, both 'banana' and 'McDonald's' are well-established loan words in modern spoken Japanese, and, as such, the nativized pronunciations of these words are perfectly legitimate in spoken Japanese. But it is easy to see from these two examples what might happen to English words in an EFL setting if students used katakana to make target vocabulary more easily 'decodable'. If a word has a syllable structure similar to Japanese (V or CV), then the effects are not so profound. But in the case of a word like 'McDonald's', the three-syllable English word becomes a six-syllable word with all open syllables and extra, intruded vowel sounds.
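
The syllable-count mismatch just described can be illustrated with a toy script. Note that the mora breakdowns and English syllable counts below are hand-coded for these two example words; nothing here derives katakana forms automatically:

```python
# Toy comparison of English syllable counts with the mora counts of the
# corresponding katakana loanword forms (all values hand-coded above).
loanwords = {
    "banana": ["ba", "na", "na"],                        # バナナ
    "McDonald's": ["ma", "ku", "do", "na", "ru", "do"],  # マクドナルド
}
english_syllables = {"banana": 3, "McDonald's": 3}

for word, morae in loanwords.items():
    extra = len(morae) - english_syllables[word]
    print(f"{word}: {english_syllables[word]} English syllables -> "
          f"{len(morae)} Japanese morae ({'-'.join(morae)}), "
          f"{extra} epenthetic vowel(s)")
```

For 'banana' the counts match (zero extra vowels), while for 'McDonald's' three epenthetic vowels intrude to break up the consonant clusters, doubling the count from three syllables to six morae.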

Is it possible that, once such word forms are learned for EFL, they make a lot of the vocabulary of English largely incomprehensible? First, students, having learned the Japanized version of a word, may not recognize it while listening (or even reading, if they find the katakana form easier to memorize than the English spelling). Second, if students produce such forms, are most English speakers outside of an EFL classroom in Japan going to understand them?

Next, let us turn to possible solutions that we might consider for teaching methods and materials. Whether katakana eigo is banned in class is a decision for a school's department or the individual teacher. However, we must also remain aware of two separate parts of linguistic reality in Japan, where English is both an important source of loan words and a much-studied FL. First, students are still going to make sound substitutions from Japanese and their own developing interlanguage when speaking and reading English out loud.

It is a natural linguistic phenomenon for beginners to struggle with the phonology of English when they start to learn the language. Construction and internalization of a FL's phonology goes along step-by-step with development in things like vocabulary and grammar (though sometimes the steps are backwards and not always forward). Second, English loan words become visible and usable in Japanese because they have been transcribed into katakana eigo form. Teachers working in an EFL environment have to recognize and affirm that there are quite legitimate processes going on when their students' L1 acquires a loan word from English. Moreover, it is expected for someone to use the L1's pronunciation of English loan words when speaking the L1 (including native English speakers when they speak Japanese).

Is Phonics a Possible Solution?

Phonics often refers to a set of methods for teaching beginning literacy to native English speakers, bilinguals and ESL learners in countries where English is the dominant language. In such methods teachers typically emphasize the rule-like nature of spelling-to-sound correspondences through direct instruction and practice. To many critics, the problems with phonics include the following: (a) too much emphasis on explicit rules and teacher-centered instruction of them, (b) a simplistic view of the nature of written English's complex and irregular spelling conventions, and (c) behaviorist drill and practice separated from real language use and meaning.

Given such problems, it might seem difficult to reconcile phonics methods with constructivist, student-centered, communicative EFL instruction. However, let us consider a different view of what phonics might be, since it will help us to integrate phonics into both our philosophies and our real-world teaching. Goodman (1993) writes:

Phonics is always both personal and social, because we must build relationships between our own personal speech . . . the speech of our community and the social conventions of writing. It is always contextual because the values of both sound and letter patterns change in the phonological, grammatical and meaning contexts they occur in. And it's never more than part of the process of reading and writing. For all these reasons, phonics is learned best in the course of learning to read and write, not as a prerequisite. In fact, our phonics is determined by our speaking, listening, reading and writing experiences. (p. 51)

If we can agree with Goodman here, then we can see that phonics is not a set of simple rules for letter-to-sound correspondences "reverse engineered" from written English that teachers can then present and drill into students. Rather, phonics is a complex system of relationships that the learner as reader and writer builds up and internalizes mentally; much like the other parts of a learner's FL language system, it could be said to exist only when language is being used in some way to make meaning.

A Few Notes on the Spelling of English

One of the reasons why doubts about phonics as something teachable arise has to do with the nature of English orthography and the ways it might be processed and read in real written text. The first fact that confronts us is inescapable: a simple alphabet relates one symbol with one categorical sound (sound segment, phoneme or phone). But the version of the Roman alphabet used to write English has only 26 letters, far short of the number necessary to represent spoken English's list of 44 to 48 sounds in simple one-sound-to-one-symbol conventions. This means that, while English is written alphabetically, these conventions are not limited to simple one-letter-to-one-sound correspondences. The second fact only makes matters seem worse: not only are the conventions complex, but there is a great deal of irregularity and inconsistency (more so than written French even, another literary language known to deviate from simple phonetic principles).

One reason for the complexity is that, at least in part, the spelling patterns do capture phonological aspects of the spoken language, but since there is a shortage of Roman letters for English sounds, the conventions are by necessity complex. However, how do we account for the inconsistencies and irregularities? Historical and linguistic reasons can be given: at one time the conventions for writing Anglo-Saxon and British Danish were fairly phonemic, but these traditions died out and so are not really continuous with written English as we know it today. Then Norman French, after 1066, brought with it French spelling conventions and massive amounts of Latinate vocabulary.

Next, the subsequent age of mass literacy and printing accompanied the true emergence of modern English as a world language. During this period, English's strange mix of spelling conventions -- after infusions of even more Latinate vocabulary from writers such as Milton and exotic spelling conventions from Dutch printers and typesetters -- became more or less frozen in place. Written English curiously upholds both phonemic/phonological and etymological principles (the latter being a striking parallel with modern French). Most words have not lost their sound shapes in their written forms, but spellings are often stable across word roots even though internal vowels change. For example, compare the stable spellings and unstable pronunciations of the related lexical roots of these words: phone, phonic, phonological, telephony, etc.

The tendency is for the complex processes of lexical derivation and grammatical morphology in English to produce many changes in the pronunciation of syllable-internal vowel sounds, while the spelling conventions more often refer consistently to word roots. It is this mix of conventions that leads some to theorize that English could be read at a word level in mature, fluent reading processes.

Ways to Cope in the Classroom

It may well be the case that written English as it is actually read, written and spelled forces the literate language user to juggle phonological and word-level principles. However, there is also the possibility that beginning literacy--especially in a SL or FL, where so much vocabulary is encountered for the first time in print, not speech--has to be more dependent on phonological processes in reading. The good news is that the spelling conventions for English consonant sounds, while complex, are fairly consistent. The true source of difficulty lies more in how the vowels of English are written.

Here are three activities that teachers can run with beginning to lower intermediate level learners of all ages to practice and reinforce phonics, pronunciation and phonological skills related to beginning EFL learning and literacy.

Activity One: Pronunciation and Phonics Crambo (an adaptation of a traditional spelling game)

1. Preparation: Go through student word lists (e.g., the lexical part of the syllabus of a course book) and select words that fit major and minor spelling patterns. Also, choose key sight words (which are also a major part of a beginner's vocabulary). Think of other rhyming words that students may not know but that fit the patterns the course vocabulary illustrates.

2. Preteaching: Explain/show what an English rhyme is, as Japanese students may have difficulty with the concept. Young learners especially may be quite open to language play, but their linguistic sense of it will be geared to the characteristics of Japanese, not English. Rhyme is one characteristic on which English differs greatly from Japanese (and also from Romance languages like Spanish and Italian). Show them how words can rhyme and have the same spelling pattern: e.g., time, lime, dime. Also show them how words can rhyme but have totally different spellings: e.g., time, rhyme, climb. You can also show them how common sight words complicate matters still further: two, you, who.

3. Divide the class into teams. I have used this activity mostly with classes divided into two teams, but more teams are possible. Two players from each team come to the board: one writes for the team, while the other relays information from the rest of the team members. The activity can be run with students relying solely on memory, or they can be encouraged to use textbooks, glossaries and dictionaries for the words they will need. Begin play by announcing a key word and writing it at the top center of the board. Repeat the word several times. The first team to write a correct rhyme wins a point. Continue play, rotating different team members in for each round. Emphasize that this is a team effort, so the members who are at their seats should give assistance to the two at the board.

4. Variations: Practice words that have the same vowel sound but do not rhyme, or words that begin or end with the same target sound, such as problem sounds like /r/ or /l/ (in this case, only say the key word several times and do not write anything on the board).

Activity Two: Spelling Concentration (an EFL adaptation of Concentration)

1. Construct a set of word cards from large pieces of cardboard (I have used A4 and B4 sizes). On one side of each card print a key word. The words on the cards should be organized into matching pairs of rhyming words or words that share the same internal vowel sound (e.g., same sound/same spelling, same sound/different spelling, selected sight words). For example, in one set of cards I paired non-rhyming words: five pairs of short vowels (bad-cat, bed-pet, sit-tip, not-top, cut-cup), five pairs of 'long' vowels (ate-day, feet-heat, kite-sight, note-boat, room-tune), and three pairs with other vowels (out-town, loop-soon, boy-oil), for a total of 26 cards. After you have written all the key words on the cards, shuffle the deck thoroughly, then number the cards at random on their reverse sides, from 1 to 26. Tape or magnetically fix the word cards to the blackboard with the numbered sides showing.
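For teachers who like to prepare materials programmatically, the deck-building step above can be sketched in a few lines of Python. The pairing data is taken from the article; the function and variable names are my own, and this is only one way to randomize the numbering.

```python
import random

# Matching pairs from the article: non-rhyming words that share an
# internal vowel sound (short vowels, 'long' vowels, other vowels).
PAIRS = [
    ("bad", "cat"), ("bed", "pet"), ("sit", "tip"), ("not", "top"), ("cut", "cup"),
    ("ate", "day"), ("feet", "heat"), ("kite", "sight"), ("note", "boat"), ("room", "tune"),
    ("out", "town"), ("loop", "soon"), ("boy", "oil"),
]

def build_deck(pairs):
    """Flatten the pairs into single cards, shuffle, and number them 1..N.
    The result maps each card number (the side facing the class) to its
    hidden key word."""
    words = [w for pair in pairs for w in pair]
    random.shuffle(words)
    return {n: word for n, word in enumerate(words, start=1)}

deck = build_deck(PAIRS)
print(len(deck))  # 26 cards, numbered at random
```

Shuffling before numbering reproduces the "number the cards at random" instruction: the card bearing number 1 could hide any of the 26 words.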

2. This game works best when played between two teams, but each team should be kept small enough for all members to participate. If you team teach, you might want to split a large class and run two separate games. Little preteaching is required if the previous activity has already been done (teaching which words rhyme, how they might share an internal vowel, how they might begin or end with the same sound, etc.). You might want to run a demonstration round to show how the Concentration game will go.

3. One of the two teams must begin play; this can be decided at random since going first does not increase the odds of winning. The side that starts picks any two cards by calling out their numbers (this also gives beginners a chance to say the numerals in English out loud in real communication). The teacher (or appointed M.C.) turns the cards over so that they display their key words. The teacher says the words out loud several times so that the whole class can hear. If the two words on the cards match according to the teaching point of the game (e.g., rhymes, internal vowel sounds, initial sounds, final sounds, etc.), the two cards are taken down and given to the side that chose them. If cards are won, play continues with the same side getting the chance to call out two more numbers. The turn changes if two cards are turned over but the words do not match. Keep playing until all the cards have been matched and given to a side.
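The match-and-continue rule in step 3 can be sketched as a tiny simulation. Everything here is my own illustration: a small four-card deck stands in for the full one, and `partner` records which words match according to the game's teaching point.

```python
# Hypothetical mini-deck: card number -> hidden key word.
deck = {1: "ate", 2: "feet", 3: "day", 4: "heat"}
# Which word matches which (here, shared internal vowel sounds).
partner = {"ate": "day", "day": "ate", "feet": "heat", "heat": "feet"}

def take_turn(deck, first, second):
    """Turn over the two called card numbers. On a match, remove both
    cards and return True (the same side picks again); otherwise return
    False (the turn passes to the other side)."""
    a, b = deck[first], deck[second]
    if partner.get(a) == b:
        del deck[first], deck[second]
        return True
    return False

print(take_turn(deck, 1, 3))  # ate/day match → True, same side continues
print(take_turn(deck, 2, 4))  # feet/heat match → True, deck is now empty
```

The `partner` table is what changes between variants of the game: fill it with rhymes, internal vowels, or initial/final sounds depending on the teaching point.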

4. Hint on making this game work: point out to the teams that they need to split up memorization duties among their members; however, do not let them keep any written notes.

Activity Three: Phonics Snap (an EFL adaptation of the card game Snap!)

1. Prepare a list of words from student vocabulary, selected for the spelling patterns they illustrate (for example, the most basic patterns of the five short vowels and the five long vowels). Think of words that both rhyme and illustrate the same spelling patterns and add them to the list (they may come from previously studied vocabulary, or they can be new words that should be decodable with phonics skills). Using the words you have collected, construct a set of 72 cards, one word on each card. The object of this game depends on randomly matching rhyming words, so be sure to include many words from only a few rhyme families (for example, a deck limited to the major patterns for the five long vowels). In short, this game does not work if there are not enough examples of each rhyme. Because of the complexity of English spelling, it is possible to construct games that emphasize many different points. Some possibilities include: rhymes with the same spelling, rhymes with different spellings, or rhymes with various spellings along with an occasional sight word, which should always come from known vocabulary (for example, eye might be matched with pie, my and buy).
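The deck-building advice above (72 cards drawn from only a few rhyme families) can be sketched as follows. The five families and their word choices are my own illustrative examples, not a prescribed list; words repeat across the deck so that rhyming matches come up often enough.

```python
import random

# Hypothetical rhyme families for the five 'long' vowels; substitute
# words from the students' own course vocabulary.
FAMILIES = {
    "a_e": ["late", "gate", "make", "cake", "name", "game"],
    "ee":  ["seen", "bean", "feet", "heat", "week", "team"],
    "i_e": ["time", "lime", "kite", "site", "ride", "side"],
    "o_e": ["note", "boat", "home", "rope", "nose", "rose"],
    "oo":  ["room", "tune", "moon", "soon", "loop", "June"],
}

def build_snap_deck(families, size=72):
    """Cycle through the family words until the deck reaches `size`, so
    every rhyme family appears often enough for Snaps to occur, then
    shuffle."""
    words = [w for fam in families.values() for w in fam]
    deck = [words[i % len(words)] for i in range(size)]
    random.shuffle(deck)
    return deck

deck = build_snap_deck(FAMILIES)
print(len(deck))  # 72
```

With 30 distinct words cycled into 72 cards, each word appears two or three times, which keeps the odds of a rhyme landing on the pile reasonably high.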

2. This game is best played in pairs. Decks for an entire class could be used while the teacher checks how students are doing. Also, the teacher could play this game with a student who needs extra practice with English spelling and pronunciation. Team teaching would allow for this game to be used with a larger class. The two teachers could demonstrate it better, and they could cover more of the classroom when helping students learn to play it.

3. Have students form pairs. Distribute one deck of cards to each pair. After shuffling and dealing the cards (face down), one player begins by placing their top card face up on the desk and pronouncing the word (e.g., 'light'). The other player then lays a card on top of the previous one and pronounces it (e.g., 'late'). Play continues in turn until a rhyming card has been laid on top of the previous one (e.g., 'seen' then 'bean'). At that instant, the first player to recognize the rhyme and say 'Snap!' wins all the cards that have been laid. Players should not cheat by looking at their cards before laying them, a point that should be stressed when the game is demonstrated and monitored. Play continues until one player has won all the cards.
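The Snap condition in step 3 (the top two cards of the pile rhyme) can be sketched like this. Representing each word by a hand-coded rime is a simplification of my own; in real play the check happens by ear, and English spelling would make an automatic rule far messier.

```python
# Hypothetical word -> rime table; two words "rhyme" here if they share
# a rime. Note that 'light' and 'kite' rhyme despite different spellings.
RIMES = {
    "light": "ite", "kite": "ite", "sight": "ite",
    "late": "ate", "seen": "een", "bean": "een",
}

def snap(pile):
    """Return True when the top two cards of the pile rhyme."""
    if len(pile) < 2:
        return False
    a, b = RIMES.get(pile[-1]), RIMES.get(pile[-2])
    return a is not None and a == b

pile = ["light", "late"]
print(snap(pile))   # light/late do not rhyme → False
pile.append("seen")
pile.append("bean")
print(snap(pile))   # seen/bean rhyme → True: "Snap!"
```

For the variation in step 4, the same function works unchanged if the table maps words to their internal vowel sound instead of their full rime.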

4. Other principles could be practiced with this game; for example, the same internal vowel sound in nonrhyming words ('feet' and 'bean').

Conclusion

It is understandable that students would want to resort to katakana transcriptions of English to make the language they are studying easier to decode into pronunciations. That process is also perfectly legitimate when it is used to bring English loan words into Japanese. However, katakana eigo is of limited use for beginning literacy in real written English, and it may well hinder language development, since it distorts perceptions of English pronunciation. Phonics can lessen the need for crutches like katakana eigo, but it must be remembered that phonics is not simply a neat set of rules that teachers hand to students. Rather, just as with the acquisition of any generative, patterned, rule-like aspect of a language, students must be given opportunities to build up skills and abilities that they can actually apply to understanding and making meaning in the FL. Activities such as the three outlined in this article should help teachers do just that.

